
Building a SqueezeNet Network in Caffe

I previously built SqueezeNet in PyTorch, which I personally find the most convenient framework, but some projects specifically require a Caffe model, so this post builds the same SqueezeNet network in Caffe.

Data preparation

First the data has to be prepared. Unlike PyTorch, which only needs the dataset's root directory and reads the data from it directly, Caffe reads from a txt file that lists each image's path together with its class label. Here is a script that generates that txt file:

import os
import random

folder = 'cotta'  # dataset directory (relative path)
names = os.listdir(folder)

f1 = open('/train_txt/train_cotta.txt', 'a')  # path of the generated txt
f2 = open('/train_txt/test_water_workcloth.txt', 'a')

for name in names:
    imgnames = os.listdir(folder + '/' + name)
    random.shuffle(imgnames)
    numimg = len(imgnames)
    for i in range(numimg):
        # the class label is the first character of the class directory name
        f1.write('%s %s\n' % (folder + '/' + name + '/' + imgnames[i], name[0]))
        # if i < int(0.9 * numimg):
        #     f1.write('%s %s\n' % (folder + '/' + name + '/' + imgnames[i], name[0]))
        # else:
        #     f2.write('%s %s\n' % (folder + '/' + name + '/' + imgnames[i], name[0]))

# f2.close()
f1.close()

The dataset layout is the same as for PyTorch: one directory per class, with each directory named after its class, and the script sits at the same level as the dataset directory. Running the script produces a txt file whose contents look like this:

/cotta/0_other/0_1_391_572_68_68.jpg 0
/cotta/1_longSleeves/9605_1_5_565_357_82_70.jpg 1
/cotta/2_cotta/713_0.99796_1_316_162_96_87.jpg 2
......
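Before pointing Caffe at the list file, it can be worth a quick sanity check that every line parses as "path label" and that each class actually received samples. This small sketch is not part of the original workflow; the helper name and the sample lines (taken from the output above) are illustrative:

```python
from collections import Counter

def label_counts(lines):
    """Count samples per class in ImageData-style 'path label' lines."""
    counts = Counter()
    for line in lines:
        line = line.strip()
        if not line:
            continue
        # the label is the last whitespace-separated field
        path, label = line.rsplit(' ', 1)
        counts[label] += 1
    return counts

# Example with the three lines shown above:
sample = [
    '/cotta/0_other/0_1_391_572_68_68.jpg 0',
    '/cotta/1_longSleeves/9605_1_5_565_357_82_70.jpg 1',
    '/cotta/2_cotta/713_0.99796_1_316_162_96_87.jpg 2',
]
print(label_counts(sample))  # one sample per class in this snippet
```

In practice you would read the lines with `open('/train_txt/train_cotta.txt')` and check that no class count is zero or wildly imbalanced.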
Each line is an image path followed by its class label.

Network definition: trainval.prototxt

layer { name: "data" type: "ImageData" top: "data" top: "label"
  transform_param { mirror: true crop_size: 96 }
  image_data_param {
    source: "/train_txt/train_cotta.txt"  # path of the generated txt
    root_folder: "/data/"                 # directory that contains the dataset
    batch_size: 64 shuffle: true new_height: 96 new_width: 96 } }

layer { name: "conv1" type: "Convolution" bottom: "data" top: "conv1"
  convolution_param { num_output: 96 kernel_size: 3 stride: 1 pad: 1 weight_filler { type: "xavier" } } }
layer { name: "BatchNorm1" type: "BatchNorm" bottom: "conv1" top: "BatchNorm1" }
layer { name: "relu_conv1" type: "ReLU" bottom: "BatchNorm1" top: "BatchNorm1" }
layer { name: "pool1" type: "Pooling" bottom: "BatchNorm1" top: "pool1"
  pooling_param { pool: MAX kernel_size: 2 stride: 2 } }

layer { name: "fire2/squeeze1x1" type: "Convolution" bottom: "pool1" top: "fire2/squeeze1x1"
  convolution_param { num_output: 16 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire2/bn_squeeze1x1" type: "BatchNorm" bottom: "fire2/squeeze1x1" top: "fire2/bn_squeeze1x1" }
layer { name: "fire2/relu_squeeze1x1" type: "ReLU" bottom: "fire2/bn_squeeze1x1" top: "fire2/bn_squeeze1x1" }
layer { name: "fire2/expand1x1" type: "Convolution" bottom: "fire2/bn_squeeze1x1" top: "fire2/expand1x1"
  convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire2/bn_expand1x1" type: "BatchNorm" bottom: "fire2/expand1x1" top: "fire2/bn_expand1x1" }
layer { name: "fire2/relu_expand1x1" type: "ReLU" bottom: "fire2/bn_expand1x1" top: "fire2/bn_expand1x1" }
layer { name: "fire2/expand3x3" type: "Convolution" bottom: "fire2/bn_expand1x1" top: "fire2/expand3x3"
  convolution_param { num_output: 64 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } } }
layer { name: "fire2/bn_expand3x3" type: "BatchNorm" bottom: "fire2/expand3x3" top: "fire2/bn_expand3x3" }
layer { name: "fire2/relu_expand3x3" type: "ReLU" bottom: "fire2/bn_expand3x3" top: "fire2/bn_expand3x3" }
layer { name: "fire2/concat" type: "Concat" bottom: "fire2/bn_expand1x1" bottom: "fire2/bn_expand3x3" top: "fire2/concat" }
#fire2 ends: 128 channels

layer { name: "fire3/squeeze1x1" type: "Convolution" bottom: "fire2/concat" top: "fire3/squeeze1x1"
  convolution_param { num_output: 16 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire3/bn_squeeze1x1" type: "BatchNorm" bottom: "fire3/squeeze1x1" top: "fire3/bn_squeeze1x1" }
layer { name: "fire3/relu_squeeze1x1" type: "ReLU" bottom: "fire3/bn_squeeze1x1" top: "fire3/bn_squeeze1x1" }
layer { name: "fire3/expand1x1" type: "Convolution" bottom: "fire3/bn_squeeze1x1" top: "fire3/expand1x1"
  convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire3/bn_expand1x1" type: "BatchNorm" bottom: "fire3/expand1x1" top: "fire3/bn_expand1x1" }
layer { name: "fire3/relu_expand1x1" type: "ReLU" bottom: "fire3/bn_expand1x1" top: "fire3/bn_expand1x1" }
layer { name: "fire3/expand3x3" type: "Convolution" bottom: "fire3/bn_expand1x1" top: "fire3/expand3x3"
  convolution_param { num_output: 64 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } } }
layer { name: "fire3/bn_expand3x3" type: "BatchNorm" bottom: "fire3/expand3x3" top: "fire3/bn_expand3x3" }
layer { name: "fire3/relu_expand3x3" type: "ReLU" bottom: "fire3/bn_expand3x3" top: "fire3/bn_expand3x3" }
layer { name: "fire3/concat" type: "Concat" bottom: "fire3/bn_expand1x1" bottom: "fire3/bn_expand3x3" top: "fire3/concat" }
#fire3 ends: 128 channels

layer { name: "bypass_23" type: "Eltwise" bottom: "fire2/concat" bottom: "fire3/concat" top: "fire3_EltAdd" }

layer { name: "fire4/squeeze1x1" type: "Convolution" bottom: "fire3_EltAdd" top: "fire4/squeeze1x1"
  convolution_param { num_output: 32 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire4/bn_squeeze1x1" type: "BatchNorm" bottom: "fire4/squeeze1x1" top: "fire4/bn_squeeze1x1" }
layer { name: "fire4/relu_squeeze1x1" type: "ReLU" bottom: "fire4/bn_squeeze1x1" top: "fire4/bn_squeeze1x1" }
layer { name: "fire4/expand1x1" type: "Convolution" bottom: "fire4/bn_squeeze1x1" top: "fire4/expand1x1"
  convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire4/bn_expand1x1" type: "BatchNorm" bottom: "fire4/expand1x1" top: "fire4/bn_expand1x1" }
layer { name: "fire4/relu_expand1x1" type: "ReLU" bottom: "fire4/bn_expand1x1" top: "fire4/bn_expand1x1" }
layer { name: "fire4/expand3x3" type: "Convolution" bottom: "fire4/bn_expand1x1" top: "fire4/expand3x3"
  convolution_param { num_output: 128 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } } }
layer { name: "fire4/bn_expand3x3" type: "BatchNorm" bottom: "fire4/expand3x3" top: "fire4/bn_expand3x3" }
layer { name: "fire4/relu_expand3x3" type: "ReLU" bottom: "fire4/bn_expand3x3" top: "fire4/bn_expand3x3" }
layer { name: "fire4/concat" type: "Concat" bottom: "fire4/bn_expand1x1" bottom: "fire4/bn_expand3x3" top: "fire4/concat" }
#fire4 ends: 256 channels

layer { name: "pool4" type: "Pooling" bottom: "fire4/concat" top: "pool4"
  pooling_param { pool: MAX kernel_size: 2 stride: 2 } }
#fire4 ends: 256 channels / pooled

layer { name: "fire5/squeeze1x1" type: "Convolution" bottom: "pool4" top: "fire5/squeeze1x1"
  convolution_param { num_output: 32 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire5/bn_squeeze1x1" type: "BatchNorm" bottom: "fire5/squeeze1x1" top: "fire5/bn_squeeze1x1" }
layer { name: "fire5/relu_squeeze1x1" type: "ReLU" bottom: "fire5/bn_squeeze1x1" top: "fire5/bn_squeeze1x1" }
layer { name: "fire5/expand1x1" type: "Convolution" bottom: "fire5/bn_squeeze1x1" top: "fire5/expand1x1"
  convolution_param { num_output: 128 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire5/bn_expand1x1" type: "BatchNorm" bottom: "fire5/expand1x1" top: "fire5/bn_expand1x1" }
layer { name: "fire5/relu_expand1x1" type: "ReLU" bottom: "fire5/bn_expand1x1" top: "fire5/bn_expand1x1" }
layer { name: "fire5/expand3x3" type: "Convolution" bottom: "fire5/bn_expand1x1" top: "fire5/expand3x3"
  convolution_param { num_output: 128 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } } }
layer { name: "fire5/bn_expand3x3" type: "BatchNorm" bottom: "fire5/expand3x3" top: "fire5/bn_expand3x3" }
layer { name: "fire5/relu_expand3x3" type: "ReLU" bottom: "fire5/bn_expand3x3" top: "fire5/bn_expand3x3" }
layer { name: "fire5/concat" type: "Concat" bottom: "fire5/bn_expand1x1" bottom: "fire5/bn_expand3x3" top: "fire5/concat" }
#fire5 ends: 256 channels

layer { name: "bypass_45" type: "Eltwise" bottom: "pool4" bottom: "fire5/concat" top: "fire5_EltAdd" }

layer { name: "fire6/squeeze1x1" type: "Convolution" bottom: "fire5_EltAdd" top: "fire6/squeeze1x1"
  convolution_param { num_output: 48 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire6/bn_squeeze1x1" type: "BatchNorm" bottom: "fire6/squeeze1x1" top: "fire6/bn_squeeze1x1" }
layer { name: "fire6/relu_squeeze1x1" type: "ReLU" bottom: "fire6/bn_squeeze1x1" top: "fire6/bn_squeeze1x1" }
layer { name: "fire6/expand1x1" type: "Convolution" bottom: "fire6/bn_squeeze1x1" top: "fire6/expand1x1"
  convolution_param { num_output: 192 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire6/bn_expand1x1" type: "BatchNorm" bottom: "fire6/expand1x1" top: "fire6/bn_expand1x1" }
layer { name: "fire6/relu_expand1x1" type: "ReLU" bottom: "fire6/bn_expand1x1" top: "fire6/bn_expand1x1" }
layer { name: "fire6/expand3x3" type: "Convolution" bottom: "fire6/bn_expand1x1" top: "fire6/expand3x3"
  convolution_param { num_output: 192 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } } }
layer { name: "fire6/bn_expand3x3" type: "BatchNorm" bottom: "fire6/expand3x3" top: "fire6/bn_expand3x3" }
layer { name: "fire6/relu_expand3x3" type: "ReLU" bottom: "fire6/bn_expand3x3" top: "fire6/bn_expand3x3" }
layer { name: "fire6/concat" type: "Concat" bottom: "fire6/bn_expand1x1" bottom: "fire6/bn_expand3x3" top: "fire6/concat" }
#fire6 ends: 384 channels

layer { name: "fire7/squeeze1x1" type: "Convolution" bottom: "fire6/concat" top: "fire7/squeeze1x1"
  convolution_param { num_output: 48 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire7/bn_squeeze1x1" type: "BatchNorm" bottom: "fire7/squeeze1x1" top: "fire7/bn_squeeze1x1" }
layer { name: "fire7/relu_squeeze1x1" type: "ReLU" bottom: "fire7/bn_squeeze1x1" top: "fire7/bn_squeeze1x1" }
layer { name: "fire7/expand1x1" type: "Convolution" bottom: "fire7/bn_squeeze1x1" top: "fire7/expand1x1"
  convolution_param { num_output: 192 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire7/bn_expand1x1" type: "BatchNorm" bottom: "fire7/expand1x1" top: "fire7/bn_expand1x1" }
layer { name: "fire7/relu_expand1x1" type: "ReLU" bottom: "fire7/bn_expand1x1" top: "fire7/bn_expand1x1" }
layer { name: "fire7/expand3x3" type: "Convolution" bottom: "fire7/bn_expand1x1" top: "fire7/expand3x3"
  convolution_param { num_output: 192 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } } }
layer { name: "fire7/bn_expand3x3" type: "BatchNorm" bottom: "fire7/expand3x3" top: "fire7/bn_expand3x3" }
layer { name: "fire7/relu_expand3x3" type: "ReLU" bottom: "fire7/bn_expand3x3" top: "fire7/bn_expand3x3" }
layer { name: "fire7/concat" type: "Concat" bottom: "fire7/bn_expand1x1" bottom: "fire7/bn_expand3x3" top: "fire7/concat" }
#fire7 ends: 384 channels

layer { name: "bypass_67" type: "Eltwise" bottom: "fire6/concat" bottom: "fire7/concat" top: "fire7_EltAdd" }

layer { name: "fire8/squeeze1x1" type: "Convolution" bottom: "fire7_EltAdd" top: "fire8/squeeze1x1"
  convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire8/bn_squeeze1x1" type: "BatchNorm" bottom: "fire8/squeeze1x1" top: "fire8/bn_squeeze1x1" }
layer { name: "fire8/relu_squeeze1x1" type: "ReLU" bottom: "fire8/bn_squeeze1x1" top: "fire8/bn_squeeze1x1" }
layer { name: "fire8/expand1x1" type: "Convolution" bottom: "fire8/bn_squeeze1x1" top: "fire8/expand1x1"
  convolution_param { num_output: 256 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire8/bn_expand1x1" type: "BatchNorm" bottom: "fire8/expand1x1" top: "fire8/bn_expand1x1" }
layer { name: "fire8/relu_expand1x1" type: "ReLU" bottom: "fire8/bn_expand1x1" top: "fire8/bn_expand1x1" }
layer { name: "fire8/expand3x3" type: "Convolution" bottom: "fire8/bn_expand1x1" top: "fire8/expand3x3"
  convolution_param { num_output: 256 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } } }
layer { name: "fire8/bn_expand3x3" type: "BatchNorm" bottom: "fire8/expand3x3" top: "fire8/bn_expand3x3" }
layer { name: "fire8/relu_expand3x3" type: "ReLU" bottom: "fire8/bn_expand3x3" top: "fire8/bn_expand3x3" }
layer { name: "fire8/concat" type: "Concat" bottom: "fire8/bn_expand1x1" bottom: "fire8/bn_expand3x3" top: "fire8/concat" }
#fire8 ends: 512 channels

layer { name: "pool8" type: "Pooling" bottom: "fire8/concat" top: "pool8"
  pooling_param { pool: MAX kernel_size: 2 stride: 2 } }
#fire8 ends: 512 channels / pooled

layer { name: "fire9/squeeze1x1" type: "Convolution" bottom: "pool8" top: "fire9/squeeze1x1"
  convolution_param { num_output: 64 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire9/bn_squeeze1x1" type: "BatchNorm" bottom: "fire9/squeeze1x1" top: "fire9/bn_squeeze1x1" }
layer { name: "fire9/relu_squeeze1x1" type: "ReLU" bottom: "fire9/bn_squeeze1x1" top: "fire9/bn_squeeze1x1" }
layer { name: "fire9/expand1x1" type: "Convolution" bottom: "fire9/bn_squeeze1x1" top: "fire9/expand1x1"
  convolution_param { num_output: 256 kernel_size: 1 weight_filler { type: "xavier" } } }
layer { name: "fire9/bn_expand1x1" type: "BatchNorm" bottom: "fire9/expand1x1" top: "fire9/bn_expand1x1" }
layer { name: "fire9/relu_expand1x1" type: "ReLU" bottom: "fire9/bn_expand1x1" top: "fire9/bn_expand1x1" }
layer { name: "fire9/expand3x3" type: "Convolution" bottom: "fire9/bn_expand1x1" top: "fire9/expand3x3"
  convolution_param { num_output: 256 pad: 1 kernel_size: 3 weight_filler { type: "xavier" } } }
layer { name: "fire9/bn_expand3x3" type: "BatchNorm" bottom: "fire9/expand3x3" top: "fire9/bn_expand3x3" }
layer { name: "fire9/relu_expand3x3" type: "ReLU" bottom: "fire9/bn_expand3x3" top: "fire9/bn_expand3x3" }
layer { name: "fire9/concat" type: "Concat" bottom: "fire9/bn_expand1x1" bottom: "fire9/bn_expand3x3" top: "fire9/concat" }
#fire9 ends: 512 channels

layer { name: "conv10_new" type: "Convolution" bottom: "fire9/concat" top: "conv10"
  convolution_param { num_output: 3  # number of classes
    kernel_size: 1 weight_filler { type: "gaussian" mean: 0.0 std: 0.01 } } }
layer { name: "pool10" type: "Pooling" bottom: "conv10" top: "pool10"
  pooling_param { pool: AVE global_pooling: true } }

# loss, top1, top5
layer { name: "loss" type: "SoftmaxWithLoss" bottom: "pool10" bottom: "label" top: "loss"
  include {
    # phase: TRAIN
  } }
layer { name: "accuracy" type: "Accuracy" bottom: "pool10" bottom: "label" top: "accuracy"
  # include { phase: TEST }
}

Set num_output in the final convolution layer, conv10, to your number of classes.

Solver configuration: solver.prototxt

test_iter: 2000        # not subject to iter_size
test_interval: 1000000
# base_lr: 0.0001
base_lr: 0.005         # learning rate
display: 40
# max_iter: 600000
max_iter: 200000       # number of training iterations
iter_size: 2           # global batch size = batch_size * iter_size
lr_policy: "poly"
power: 1.0             # linearly decrease LR
momentum: 0.9
weight_decay: 0.0002
snapshot: 10000        # save a snapshot every this many iterations
snapshot_prefix: "/data/zxc/classfication/model/model_cotta/cotta_"  # model save path and prefix
solver_mode: GPU
random_seed: 42
net: "./trainNets_drive/trainval.prototxt"  # path to the network definition file
test_initialization: false
average_loss: 40

Note that max_iter counts iterations, not PyTorch-style epochs. In PyTorch, one epoch is a full pass over the training set; in Caffe, one iteration consumes one batch_size of data. One epoch is therefore equivalent to len(train_data) / batch_size iterations. If this does not divide evenly, round the same way as the PyTorch DataLoader you are matching: round down if it drops the last incomplete batch, round up if it keeps it. The last path component of snapshot_prefix becomes the filename prefix of every saved model.

Training command

Put the run command in a bash file, train.sh:

/home/seg/anaconda3/envs/zxc/bin/caffe train -gpu 1 -solver ./solvers/solver_3.prototxt -weights /data/classfication/model/model_cotta/cotta__iter_200000.caffemodel 2>&1 | tee log_3_4_class.txt

-gpu selects which GPU to use (0 if you only have one card). -solver is followed by the path to the solver configuration file. -weights is followed by a pretrained model; you can use the official Caffe release of the SqueezeNet pretrained model. In my case training was interrupted, so I resume from a checkpoint.

Once the file is written, run source activate <env-name> to enter the environment, then source train.sh to start training.
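The epoch-to-iteration conversion described above can be written out explicitly. This is a small sketch, not part of the original post; the dataset size and epoch count are made-up illustrative numbers, while batch_size 64 matches the trainval.prototxt above:

```python
import math

def epochs_to_iters(num_train_images, batch_size, epochs, drop_last=False):
    """Convert PyTorch-style epochs into a Caffe max_iter value.

    drop_last mirrors the PyTorch DataLoader flag: True discards the
    incomplete final batch (round down), False keeps it (round up).
    """
    if drop_last:
        iters_per_epoch = num_train_images // batch_size
    else:
        iters_per_epoch = math.ceil(num_train_images / batch_size)
    return iters_per_epoch * epochs

# Hypothetical example: 10000 training images, batch_size 64, 100 epochs
print(epochs_to_iters(10000, 64, 100))                  # ceil(10000/64) = 157 -> 15700
print(epochs_to_iters(10000, 64, 100, drop_last=True))  # floor = 156 -> 15600
```

Note that with iter_size: 2 in the solver, each iteration effectively processes batch_size * iter_size images, so you may want to divide the result accordingly when matching a PyTorch schedule.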