I tried out the DELF model, which finds local (semantically meaningful) correspondences between images. Using the source code from GitHub, finding image similarity takes two steps:
1. Extract features
2. Match images
To adapt it to my own task, I modified the code: the command-line arguments that took a txt list file were changed to take image file paths directly.
```python
parser.add_argument(
    '--full_images', type=str, default='list_images.txt',
    help=""" """)
parser.add_argument(
    '--part_images', type=str, default='list_images.txt',
    help="""
    Path to list of images whose DELF features will be extracted.
    """)
```

The code that consumes the parsed arguments was changed accordingly:
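The modified flags can be exercised with a minimal standalone sketch (the parser setup mirrors the snippet above; the example paths are made up for illustration):

```python
import argparse

# Reproduce the modified flags: each now takes a single image path
# instead of a txt file listing many images.
parser = argparse.ArgumentParser()
parser.add_argument('--full_images', type=str, default='list_images.txt')
parser.add_argument('--part_images', type=str, default='list_images.txt')

args = parser.parse_args(['--full_images', 'imgs/full/full_001.jpg',
                          '--part_images', 'imgs/part/part_001.jpg'])
print(args.full_images)  # imgs/full/full_001.jpg
print(args.part_images)  # imgs/part/part_001.jpg
```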
```python
def _ReadImageList(full, part):
  """Helper function to read image paths.

  Args:
    full: Path to the full image.
    part: Path to the partial image.

  Returns:
    image_paths: List of image paths.
  """
  image_paths = [full, part]
  return image_paths
```

A script to batch-process the images:
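Since the helper now just wraps the two paths in a list, a quick sanity check (paths here are illustrative):

```python
def _ReadImageList(full, part):
    """Return the two image paths as a list, matching the original
    helper's return type (a list of image paths)."""
    return [full, part]

paths = _ReadImageList('imgs/full/full_001.jpg', 'imgs/part/part_001.jpg')
print(paths)  # ['imgs/full/full_001.jpg', 'imgs/part/part_001.jpg']
```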
```python
import os

full_images = []
part_images = []
for path in os.listdir('full_path'):
    full_images.append(path)
for path in os.listdir('part_path'):
    part_images.append(path)

full_images = sorted(full_images)
part_images = sorted(part_images)
assert len(full_images) == len(part_images)
assert full_images[0].split('_')[1] == part_images[0].split('_')[1]

for i in range(133, len(full_images)):
    first_command = "python3.5 extract_features.py \
        --config_path delf_config_example.pbtxt \
        --full_images imgs/full/" + full_images[i] + " \
        --part_images imgs/part/" + part_images[i] + " \
        --output_dir data/"
    print('start ' + full_images[i])
    print('start ' + part_images[i])
    os.system(first_command)

    second_command = "python3.5 match_images.py \
        --image_1_path imgs/full/" + full_images[i] + " \
        --image_2_path imgs/part/" + part_images[i] + " \
        --features_1_path data/" + full_images[i].split('.')[0] + ".delf \
        --features_2_path data/" + part_images[i].split('.')[0] + ".delf \
        --output_image output/" + full_images[i].split('_')[1].split('.')[0] + ".png"
    os.system(second_command)
```

A few issues came up along the way.

The output images were blurry because matplotlib's default resolution is low; raising the DPI fixes it:

```python
plt.rcParams['savefig.dpi'] = 300
plt.rcParams['figure.dpi'] = 300
```

Also, GPU memory usage was too high, which made opening a tf.Session fail. Find and kill the processes holding the GPU:
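Two fragile spots in the batch script are worth noting: the filename assert only compares the first pair, and os.system builds shell strings by concatenation. A hedged sketch (the helper names `check_pairs` and `build_extract_cmd` are mine, not from the DELF repo) that validates every pair and builds an argument list suitable for subprocess.run:

```python
import subprocess

def check_pairs(full_images, part_images):
    """Verify every full/part pair shares the same '_'-delimited id,
    not just the first pair as in the original script."""
    assert len(full_images) == len(part_images)
    for f, p in zip(full_images, part_images):
        assert f.split('_')[1] == p.split('_')[1], (f, p)

def build_extract_cmd(full_name, part_name):
    """Build the extract_features.py invocation as an argument list,
    which avoids shell-quoting pitfalls of string concatenation."""
    return ['python3.5', 'extract_features.py',
            '--config_path', 'delf_config_example.pbtxt',
            '--full_images', 'imgs/full/' + full_name,
            '--part_images', 'imgs/part/' + part_name,
            '--output_dir', 'data/']

check_pairs(['full_001.jpg'], ['part_001.jpg'])
cmd = build_extract_cmd('full_001.jpg', 'part_001.jpg')
# subprocess.run(cmd, check=True)  # would invoke the extractor
```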
```shell
sudo fuser -v /dev/nvidia*   # list the PIDs holding GPU resources
kill -9 pid                  # replace pid with an offending process id
```
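Alternatively, the session itself can be told not to grab all GPU memory up front. This is a configuration sketch using the TensorFlow 1.x API (the version the DELF code targets); `allow_growth` makes TF allocate GPU memory on demand:

```python
import tensorflow as tf

# Allocate GPU memory incrementally instead of reserving it all,
# which can avoid session-creation failures on a busy GPU.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
```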