Getting Started with Deep Learning in Caffe (5): Fine-tuning CaffeNet, Training on Your Own Data, and Testing the Trained Model


For fine-tuning, we usually start from an existing model parameter file. This is different from training from scratch, where we train a brand-new network and all parameters are randomly initialized during training. With fine-tuning, we start from parameters already trained on the 1000-class ImageNet classification task and adapt them to our own recognition task.

Here I use car recognition as the example. Suppose we have one kind of car to recognize, so the target object is the car. We have the ImageNet model parameter file, and the network used here is CaffeNet, a small network; other networks such as GoogLeNet work on exactly the same principle. The change in the task can be expressed as:

Task: classification. Number of classes: 1000 (the 1000-class ImageNet classification task) -> 1 (the car classification task on our own dataset)

The fine-tuning workflow then consists of the following steps:

1. Prepare the training and test data, as usual.
2. Compute the mean file of the dataset, because the image mean of a domain-specific dataset differs from the mean of the more general ImageNet data.
3. Change the number of output classes in the last layer of the network, and increase the learning rate of that last layer.
4. Adjust the solver settings; the learning rate, step size, and number of iterations usually all need to be reduced.
5. Start training, loading the pretrained model's parameters.

1. Prepare the dataset. There is not much to say here: prepare two txt files in list form (one for training, one for testing); you can refer to the examples that ship with Caffe. Each line is an image path, a space, and the class ID. Note that the IDs must start from 0 and be consecutive, otherwise training will go wrong and the loss will not decrease; follow this format and you will be fine. This is the training label list; the test list is written the same way. A minimal hypothetical example is sketched below.
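A minimal sketch of what carlist.txt could look like (the file names and the second class are hypothetical; the IDs start at 0 and are consecutive, and the paths are relative to the root folder passed to convert_imageset below):

JPEGImages/car_000001.jpg 0
JPEGImages/car_000002.jpg 0
JPEGImages/other_000001.jpg 1
JPEGImages/other_000002.jpg 1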

2. Create the LMDB database with Caffe's convert_imageset tool. The command is:

./build/tools/convert_imageset /media/***/801328a5-39c6-4e08-b070-19fc662a5236/resnet/caffe/data/cartest/ data/cartest/carlist.txt data/cartest/train_car_lmdb -resize_width=227 -resize_height=227 -check_size -shuffle

The first argument is the root folder that is prepended to each image path in the list, the second is the label list file, the third is the database to generate (both LevelDB and LMDB are supported), then come the resize width and height, and finally whether to shuffle the image order. The test list is converted the same way, as sketched below.
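A sketch of that command, assuming a hypothetical carlist_val.txt list and a val_car_lmdb output database:

./build/tools/convert_imageset /media/***/801328a5-39c6-4e08-b070-19fc662a5236/resnet/caffe/data/cartest/ data/cartest/carlist_val.txt data/cartest/val_car_lmdb -resize_width=227 -resize_height=227 -check_size -shuffle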

Compute the image mean with Caffe's compute_image_mean tool. The command is:

./build/tools/compute_image_mean /media/***/801328a5-39c6-4e08-b070-19fc662a5236/resnet/caffe/data/cartest/train_car_lmdb/ data/carmean.binaryproto

The first argument is the input LMDB database and the second is the mean file to generate, carmean.binaryproto.
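The .binaryproto mean is what the Data layer reads during training. The Python interface used for testing later works more naturally with an .npy array, so the following sketch converts one into the other (it assumes the paths used above; the output name carmean.npy is my own choice):

# Convert the Caffe mean file (.binaryproto) into a NumPy array (.npy)
import numpy as np
import caffe
from caffe.proto import caffe_pb2

blob = caffe_pb2.BlobProto()
with open('data/carmean.binaryproto', 'rb') as f:
    blob.ParseFromString(f.read())

mean = caffe.io.blobproto_to_array(blob)[0]   # shape: (channels, height, width)
np.save('data/carmean.npy', mean)
print(mean.shape, mean.mean(axis=(1, 2)))     # sanity check: per-channel (BGR) mean values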

3. Adjust the network layer parameters. Following the examples that ship with Caffe, I use CaffeNet. First, in the data (input) layer, change source and mean_file to point at the LMDB and the carmean.binaryproto generated above.

The final output layer is fc8. Two changes are needed there:

1. Rename the layer. When the pretrained weights are loaded, this layer's name no longer matches, so it is reinitialized and learned from scratch, which is exactly what we need to adapt to the new task. At the same time, set num_output to the number of classes in your own dataset.
2. Increase its learning rate. Because the last layer is relearned from scratch, it should learn faster than the other layers, so we raise its weight and bias learning-rate multipliers (lr_mult) by a factor of 10.

Modify the train and test paths in ./models/bvlc_reference_caffenet/train_val_resnet_lily.prototxt:

mean_file: "/media/***/801328a5-39c6-4e08-b070-19fc662a5236/resnet/caffe/data/cartest/carmean.binaryproto"
source: "/media/***/801328a5-39c6-4e08-b070-19fc662a5236/resnet/caffe/data/cartest/train_car_lmdb"

Modify ./models/bvlc_reference_caffenet/solver_resnet_lily.prototxt:

net: "models/bvlc_reference_caffenet/train_val_resnet_lily.prototxt" test_iter: 100 test_interval: 1000 base_lr: 0.001 lr_policy: "step" gamma: 0.1 stepsize: 20000 display: 20 max_iter: 50000 momentum: 0.9 weight_decay: 0.0005 snapshot: 10000 snapshot_prefix: "models/bvlc_reference_caffenet/caffenet_resnet_model_lily" solver_mode: GPU

The layer was originally fc8; remember to also rename every reference to fc8 (the bottoms of the accuracy and loss layers). After modifying ./models/bvlc_reference_caffenet/train_val_resnet_lily.prototxt, the affected layers look like this:

layer {
  name: "fc8_comp_model"                  # renamed from fc8 so the pretrained weights are not copied into it
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8_comp_model"
  param { lr_mult: 10 decay_mult: 1 }     # weight learning rate, 10x the other layers
  param { lr_mult: 20 decay_mult: 0 }     # bias learning rate, 10x the other layers
  inner_product_param {
    num_output: 1000                      # set this to the number of classes in your own dataset
    weight_filler { type: "gaussian" std: 0.01 }
    bias_filler { type: "constant" value: 0 }
  }
}
layer { name: "accuracy" type: "Accuracy" bottom: "fc8_comp_model" bottom: "label" top: "accuracy" include { phase: TEST } }
layer { name: "loss" type: "SoftmaxWithLoss" bottom: "fc8_comp_model" bottom: "label" top: "loss" }

The main changes to the solver: test_iter drops from 1000 to 100 because there is far less data (test_iter × TEST batch_size should roughly cover the test set, here 100 × 50 = 5000 images); base_lr goes from 0.01 to 0.001, which matters because the base learning rate must not be large when fine-tuning; the learning-rate policy is unchanged; stepsize drops from the original 100000 to 20000; max_iter drops from 450000 to 50000; momentum and weight decay are unchanged; the solver still runs in GPU mode. Adjust the net and snapshot_prefix paths to your own setup.

The complete train_val_resnet_lily.prototxt is:

name: "CaffeNet" layer { name: "data" type: "Data" top: "data" top: "label" include { phase: TRAIN } transform_param { mirror: true crop_size: 227 mean_file: "/media/***/801328a5-39c6-4e08-b070-19fc662a5236/resnet/caffe/data/cartest/carmean.binaryproto" } # mean pixel / channel-wise mean instead of mean image # transform_param { # crop_size: 227 # mean_value: 104 # mean_value: 117 # mean_value: 123 # mirror: true # } data_param { source: "/media/***/801328a5-39c6-4e08-b070-19fc662a5236/resnet/caffe/data/cartest/train_car_lmdb" batch_size: 256 backend: LMDB } } layer { name: "data" type: "Data" top: "data" top: "label" include { phase: TEST } transform_param { mirror: false crop_size: 227 mean_file: "/media/***/801328a5-39c6-4e08-b070-19fc662a5236/resnet/caffe/data/cartest/carmean.binaryproto" } # mean pixel / channel-wise mean instead of mean image # transform_param { # crop_size: 227 # mean_value: 104 # mean_value: 117 # mean_value: 123 # mirror: false # } data_param { source: "/media/***/801328a5-39c6-4e08-b070-19fc662a5236/resnet/caffe/data/cartest/train_car_lmdb" batch_size: 50 backend: LMDB } } layer { name: "conv1" type: "Convolution" bottom: "data" top: "conv1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 96 kernel_size: 11 stride: 4 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layer { name: "relu1" type: "ReLU" bottom: "conv1" top: "conv1" } layer { name: "pool1" type: "Pooling" bottom: "conv1" top: "pool1" pooling_param { pool: MAX kernel_size: 3 stride: 2 } } layer { name: "norm1" type: "LRN" bottom: "pool1" top: "norm1" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } } layer { name: "conv2" type: "Convolution" bottom: "norm1" top: "conv2" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 pad: 2 kernel_size: 5 group: 2 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 1 } } } layer { name: "relu2" type: "ReLU" bottom: "conv2" top: "conv2" } layer { name: "pool2" type: "Pooling" bottom: "conv2" top: "pool2" pooling_param { pool: MAX kernel_size: 3 stride: 2 } } layer { name: "norm2" type: "LRN" bottom: "pool2" top: "norm2" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } } layer { name: "conv3" type: "Convolution" bottom: "norm2" top: "conv3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 384 pad: 1 kernel_size: 3 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layer { name: "relu3" type: "ReLU" bottom: "conv3" top: "conv3" } layer { name: "conv4" type: "Convolution" bottom: "conv3" top: "conv4" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 384 pad: 1 kernel_size: 3 group: 2 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 1 } } } layer { name: "relu4" type: "ReLU" bottom: "conv4" top: "conv4" } layer { name: "conv5" type: "Convolution" bottom: "conv4" top: "conv5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 pad: 1 kernel_size: 3 group: 2 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 1 } } } layer { name: "relu5" type: "ReLU" bottom: "conv5" top: "conv5" } layer { name: "pool5" type: "Pooling" bottom: "conv5" top: "pool5" pooling_param { pool: MAX kernel_size: 3 stride: 2 } } layer { name: "fc6" type: 
"InnerProduct" bottom: "pool5" top: "fc6" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 4096 weight_filler { type: "gaussian" std: 0.005 } bias_filler { type: "constant" value: 1 } } } layer { name: "relu6" type: "ReLU" bottom: "fc6" top: "fc6" } layer { name: "drop6" type: "Dropout" bottom: "fc6" top: "fc6" dropout_param { dropout_ratio: 0.5 } } layer { name: "fc7" type: "InnerProduct" bottom: "fc6" top: "fc7" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 4096 weight_filler { type: "gaussian" std: 0.005 } bias_filler { type: "constant" value: 1 } } } layer { name: "relu7" type: "ReLU" bottom: "fc7" top: "fc7" } layer { name: "drop7" type: "Dropout" bottom: "fc7" top: "fc7" dropout_param { dropout_ratio: 0.5 } } layer { name: "fc8_comp_model" type: "InnerProduct" bottom: "fc7" top: "fc8_comp_model" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 1000 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layer { name: "accuracy" type: "Accuracy" bottom: "fc8_comp_model" bottom: "label" top: "accuracy" include { phase: TEST } } layer { name: "loss" type: "SoftmaxWithLoss" bottom: "fc8_comp_model" bottom: "label" top: "loss" } 训练的指令如下: ./build/tools/caffe train --solver ./models/bvlc_reference_caffenet/solver_resnet_lily.prototxt --weights ./models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel --gpu 0

The test command:

python ./python/classify02.py --model_def ./models/bvlc_reference_caffenet/train_val_test_resnet_lily.prototxt --pretrained_model ./models/bvlc_reference_caffenet/caffenet_resnet_model_lily_iter_50000.caffemodel --labels_file ./data/cartest/cartest.txt --center_only ./data/cartest/JPEGImages/crk201706301341.jpg foo

Note that the train_val_test_resnet_lily.prototxt used here is not the same file as the train_val_resnet_lily.prototxt used for training.

The train_val_test_resnet_lily.prototxt file is:

name: "train_resnet_lily" layer { name: "data" type: "Input" top: "data" input_param { shape: { dim: 1 dim: 3 dim: 227 dim: 227 } } } layer { name: "conv1" type: "Convolution" bottom: "data" top: "conv1" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 96 kernel_size: 11 stride: 4 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layer { name: "relu1" type: "ReLU" bottom: "conv1" top: "conv1" } layer { name: "pool1" type: "Pooling" bottom: "conv1" top: "pool1" pooling_param { pool: MAX kernel_size: 3 stride: 2 } } layer { name: "norm1" type: "LRN" bottom: "pool1" top: "norm1" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } } layer { name: "conv2" type: "Convolution" bottom: "norm1" top: "conv2" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 pad: 2 kernel_size: 5 group: 2 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 1 } } } layer { name: "relu2" type: "ReLU" bottom: "conv2" top: "conv2" } layer { name: "pool2" type: "Pooling" bottom: "conv2" top: "pool2" pooling_param { pool: MAX kernel_size: 3 stride: 2 } } layer { name: "norm2" type: "LRN" bottom: "pool2" top: "norm2" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } } layer { name: "conv3" type: "Convolution" bottom: "norm2" top: "conv3" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 384 pad: 1 kernel_size: 3 weight_filler { type: "gaussian" std: 0.01 } bias_filler { type: "constant" value: 0 } } } layer { name: "relu3" type: "ReLU" bottom: "conv3" top: "conv3" } layer { name: "conv4" type: "Convolution" bottom: "conv3" top: "conv4" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 384 pad: 1 kernel_size: 3 group: 2 } } layer { name: "relu4" type: "ReLU" bottom: "conv4" top: "conv4" } layer { name: "conv5" type: "Convolution" bottom: "conv4" top: "conv5" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } convolution_param { num_output: 256 pad: 1 kernel_size: 3 group: 2 } } layer { name: "relu5" type: "ReLU" bottom: "conv5" top: "conv5" } layer { name: "pool5" type: "Pooling" bottom: "conv5" top: "pool5" pooling_param { pool: MAX kernel_size: 3 stride: 2 } } layer { name: "fc6" type: "InnerProduct" bottom: "pool5" top: "fc6" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 4096 weight_filler { type: "gaussian" std: 0.005 } bias_filler { type: "constant" value: 1 } } } layer { name: "relu6" type: "ReLU" bottom: "fc6" top: "fc6" } layer { name: "drop6" type: "Dropout" bottom: "fc6" top: "fc6" dropout_param { dropout_ratio: 0.5 } } layer { name: "fc7" type: "InnerProduct" bottom: "fc6" top: "fc7" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 4096 } } layer { name: "relu7" type: "ReLU" bottom: "fc7" top: "fc7" } layer { name: "drop7" type: "Dropout" bottom: "fc7" top: "fc7" dropout_param { dropout_ratio: 0.5 } } layer { name: "fc8_comp_model" type: "InnerProduct" bottom: "fc7" top: "fc8_comp_model" param { lr_mult: 1 decay_mult: 1 } param { lr_mult: 2 decay_mult: 0 } inner_product_param { num_output: 1000 } } layer { name: "prob" type: "Softmax" bottom: "fc8_comp_model" top: "prob" }

Errors and solutions

Running the final test step produced an error.

The error:

File "python/classify.py", line 138, in <module> main(sys.argv) File "python/classify.py", line 110, in main channel_swap=channel_swap) File "/media/futurus/801328a5-39c6-4e08-b070-19fc662a5236/resnet/caffe/python/caffe/classifier.py", line 29, in __init__ in_ = self.inputs[0] IndexError: list index out of range

A solution I found suggested adding the following (the root cause is that the prototxt handed to the Python classifier declared no input blob, so net.inputs was empty):

net: "train_resnet_lily" input: "data" input_shape { dim: 10 dim: 3 dim: 224 dim: 224 }

After adding this, a different error appeared:

[libprotobuf ERROR google/protobuf/text_format.cc:274] Error parsing text-format caffe.NetParameter: 1:4: Message type "caffe.NetParameter" has no field named "net".
F0125 11:48:14.708683 42586 upgrade_proto.cpp:88] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: ./models/bvlc_reference_caffenet/train_val_test_resnet_lily.prototxt
*** Check failure stack trace: ***
Aborted (core dumped)

Following another blogger's post, I changed the added lines to:

net: "train_resnet_lily" input: "data" input_dim: 10 input_dim: 3 input_dim: 224 input_dim: 224

The test still failed with the same parsing error, because NetParameter has no "net" field; the field for the network name is "name".

The final solution, which runs correctly, is to declare the input as an Input layer instead:

name: "train_resnet_lily" layer { name: "data" type: "Input" top: "data" input_param { shape: { dim: 1 dim: 3 dim: 227 dim: 227 } } }

Reference:

https://www.cnblogs.com/louyihang-loves-baiyan/p/5038758.html
http://blog.csdn.net/sunshine_in_moon/article/details/49472901

