1. Image Recognition 2. Video Recognition
[YOLOv7] A people-flow counting system based on YOLO & Deepsort (source code & deployment tutorial), on bilibili
3. Deepsort Object Tracking
(1) Grab the raw video frames. (2) Run an object detector on each frame. (3) Extract features from each detected box: appearance features (for re-identification, to avoid ID switches) and motion features (so the Kalman filter can predict the target's motion). (4) Compute how well targets in consecutive frames match, and assign an ID to every tracked target using the Hungarian algorithm together with cascade matching.
Deepsort's predecessor is the SORT algorithm, whose core is the Kalman filter plus the Hungarian algorithm. Kalman filter: it uses the current motion variables to predict those of the next time step; the first detection result is used to initialize the filter's motion state. Hungarian algorithm: in short, it solves the assignment problem, pairing a set of detection boxes with the Kalman-predicted boxes so that each predicted box finds its best-matching detection, which is what produces tracking. The SORT workflow is shown in the figure below, where Detections are the boxes produced by the object detector and Tracks are the trajectories.
The overall workflow of the algorithm is:
(1) Create a Track for every detection in the first frame, initialize the Kalman filter's motion variables, and predict the corresponding box with the Kalman filter.
(2) IoU-match the detection boxes of the current frame against the boxes predicted from the previous frame's Tracks, then build the cost matrix from the matching result (cost = 1 - IoU).
(3) Feed the cost matrix from step (2) into the Hungarian algorithm to obtain a linear assignment. Three outcomes are possible: Unmatched Tracks, which are deleted outright; Unmatched Detections, which are initialized as new Tracks; and matched pairs, meaning the target was tracked from the previous frame into the current one, in which case the matched Detection is used to update its Track's state via the Kalman filter.
(4) Repeat steps (2)-(3) until the video ends.
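The association step above can be sketched in a few lines. This is a minimal illustration, not the project's actual code: SciPy's `linear_sum_assignment` stands in for the Hungarian algorithm, boxes are assumed to be in `[x1, y1, x2, y2]` format, and the IoU gate of 0.3 is an assumed default.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def iou(box_a, box_b):
    """IoU of two boxes in [x1, y1, x2, y2] format."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def match(tracks, detections, iou_threshold=0.3):
    """Assign Kalman-predicted track boxes to detections (SORT step 3).

    Returns (matches, unmatched_track_idx, unmatched_detection_idx)."""
    if not tracks or not detections:
        return [], list(range(len(tracks))), list(range(len(detections)))
    cost = np.zeros((len(tracks), len(detections)))
    for t, trk in enumerate(tracks):
        for d, det in enumerate(detections):
            cost[t, d] = 1.0 - iou(trk, det)        # cost matrix = 1 - IoU
    rows, cols = linear_sum_assignment(cost)        # Hungarian / linear assignment
    matches, unmatched_t, unmatched_d = [], [], []
    for t in range(len(tracks)):
        if t not in rows:
            unmatched_t.append(t)
    for d in range(len(detections)):
        if d not in cols:
            unmatched_d.append(d)
    for t, d in zip(rows, cols):
        if cost[t, d] > 1.0 - iou_threshold:        # reject assignments with weak overlap
            unmatched_t.append(t); unmatched_d.append(d)
        else:
            matches.append((t, d))
    return matches, unmatched_t, unmatched_d
```

The three returned lists correspond exactly to the three outcomes in step (3): matched pairs, Tracks to delete, and Detections that become new Tracks.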
Deepsort Algorithm Flow
Because SORT is a fairly crude tracker, a target very easily loses its ID when it is occluded. Deepsort therefore adds cascade matching (Matching Cascade) and new-track confirmation on top of SORT. Tracks come in two states, confirmed and unconfirmed. A newly created Track is unconfirmed; an unconfirmed Track must match Detections for a number of consecutive frames (3 by default) before it becomes confirmed, while a confirmed Track is deleted only after it fails to match Detections for a number of consecutive frames (30 by default). The Deepsort workflow is shown in the figure below; the overall flow is:
(1) Create Tracks for the detections in the first frame, initialize the Kalman filter's motion variables, and predict the corresponding boxes. At this point every Track is unconfirmed.
(2) IoU-match the current frame's detection boxes against the boxes predicted from the previous frame's Tracks, then build the cost matrix (cost = 1 - IoU).
(3) Feed the cost matrix from step (2) into the Hungarian algorithm to obtain a linear assignment. Three outcomes are possible: Unmatched Tracks are deleted directly (only because these Tracks are still unconfirmed; a confirmed Track would be deleted only after failing to match for a number of consecutive frames, 30 by default); Unmatched Detections are initialized as new Tracks; and matched pairs mean the target was tracked across the two frames, so the matched Detection updates its Track's state via the Kalman filter.
(4) Repeat steps (2)-(3) until a confirmed Track appears or the video ends.
(5) Use the Kalman filter to predict the boxes of both confirmed and unconfirmed Tracks. Cascade-match the confirmed Tracks' boxes against the Detections (earlier, every time a Track was matched, the appearance features and motion information of its Detection were saved, the most recent 100 frames by default; cascade matching uses these appearance and motion cues). This is done because confirmed Tracks are the most likely to match Detections.
(6) Cascade matching has three possible outcomes. First, matched Tracks, whose state is updated via the Kalman filter. Second and third, unmatched Detections and unmatched Tracks; in that case the previously unconfirmed Tracks and the failed Tracks are IoU-matched together against the Unmatched Detections, and the cost matrix is built again (cost = 1 - IoU).
(7) Feed the cost matrix from step (6) into the Hungarian algorithm to obtain a linear assignment. Three outcomes again: Unmatched Tracks are deleted directly (if unconfirmed; a confirmed Track is deleted only after 30 consecutive failed matches by default); Unmatched Detections are initialized as new Tracks; matched pairs update their Track's state via the Kalman filter.
(8) Repeat steps (5)-(7) until the video ends.
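The confirmed/unconfirmed lifecycle described above can be sketched as a tiny state machine. This is an illustrative sketch of the rules (3 consecutive hits to confirm, 30 consecutive misses to delete), not the project's tracker class; the names `TrackState`, `mark_matched`, and `mark_missed` are invented for the example.

```python
TENTATIVE, CONFIRMED, DELETED = 0, 1, 2


class TrackState:
    """Minimal sketch of Deepsort's track lifecycle (n_init=3, max_age=30)."""

    def __init__(self, n_init=3, max_age=30):
        self.state = TENTATIVE          # new tracks start unconfirmed
        self.hits = 1                   # matched once at creation
        self.time_since_update = 0
        self.n_init = n_init
        self.max_age = max_age

    def mark_matched(self):
        self.hits += 1
        self.time_since_update = 0
        if self.state == TENTATIVE and self.hits >= self.n_init:
            self.state = CONFIRMED      # promoted after 3 consecutive matches

    def mark_missed(self):
        self.time_since_update += 1
        if self.state == TENTATIVE:
            self.state = DELETED        # unconfirmed tracks die on their first miss
        elif self.time_since_update > self.max_age:
            self.state = DELETED        # confirmed tracks survive up to 30 misses
```

Only `CONFIRMED` tracks take part in cascade matching in step (5); `TENTATIVE` ones fall through to the IoU matching of step (6).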
4. Preparing a YOLOv7-Format Dataset
If you are unsure what a YOLO-format dataset looks like, it is worth reading up on that first. Like most CV practitioners I recommend labelImg for annotation; I won't walk through how to use it here, since plenty of tutorials exist online. Note that annotation needs a graphical interface, which is inconvenient on a remote server, so it is better to label the data on your local machine and upload the results to the server afterwards.
Assume we already have an annotated YOLO-format dataset; it will be stored in the layout below. Of these files, train_list.txt and val_list.txt are ones we generate ourselves later, while everything else is produced by labelImg.
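The original figure showing the layout is lost; a typical layout for such a dataset (directory names assumed, here using the helmet dataset from the config section) looks roughly like this:

```
Helmet/
├── images/
│   ├── train/       # training images (.jpg)
│   └── val/         # validation images
├── labels/
│   ├── train/       # one .txt per image: "class x_center y_center w h" per box
│   └── val/
├── train_list.txt   # generated later, one image path per line
└── val_list.txt     # generated later, one image path per line
```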
Next, generate train_list.txt and val_list.txt. train_list.txt holds the path of every training image and val_list.txt the path of every validation image, one path per line, as shown below. Producing these two files is just a matter of writing a loop; it is not hard.
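The "write a loop" can be sketched as follows. This is a minimal example, not the author's script; the `Helmet/images/...` paths are assumptions, so point `img_dir` at your own image folders (the demo below writes into a throwaway temp directory just to stay self-contained).

```python
import os
import tempfile


def write_list(img_dir, out_file, exts=('.jpg', '.jpeg', '.png')):
    """Write the absolute path of every image under img_dir to out_file, one per line."""
    with open(out_file, 'w') as f:
        for name in sorted(os.listdir(img_dir)):
            if name.lower().endswith(exts):
                f.write(os.path.abspath(os.path.join(img_dir, name)) + '\n')


# Demo on a throwaway directory; in practice call e.g.
#   write_list('Helmet/images/train', 'train_list.txt')
#   write_list('Helmet/images/val', 'val_list.txt')
root = tempfile.mkdtemp()
img_dir = os.path.join(root, 'images', 'train')
os.makedirs(img_dir)
for n in ('a.jpg', 'b.png', 'notes.txt'):        # notes.txt should be filtered out
    open(os.path.join(img_dir, n), 'w').close()
write_list(img_dir, os.path.join(root, 'train_list.txt'))
paths = open(os.path.join(root, 'train_list.txt')).read().splitlines()
```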
5. Editing the Configuration Files
Two files need configuring: /yolov7/cfg/training/yolov7.yaml, the model configuration, and /yolov7/data/coco.yaml, the dataset configuration.
Step 1: copy yolov7.yaml within the same directory and rename the copy; here we rename it yolov7-Helmet.yaml.
Step 2: open yolov7-Helmet.yaml and make the single change shown below: set nc to the total number of classes in our dataset. Save. Step 3: copy coco.yaml within the same directory and rename the copy; here we name it Helmet.yaml.
Step 4: open Helmet.yaml and make the five changes shown below.
First, comment out the command that automatically downloads the COCO dataset, so the code doesn't download it and eat up disk space; second, set train to the path of train_list.txt; third, set val to the path of val_list.txt; fourth, set nc to the total number of classes; fifth, set names to the list of all class names. Save.
6. Training Code
```python
import argparse
import logging
import math
import os
import random
import time
from copy import deepcopy
from pathlib import Path
from threading import Thread

import numpy as np
import torch.distributed as dist
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.optim.lr_scheduler as lr_scheduler
import torch.utils.data
import yaml
from torch.cuda import amp
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.tensorboard import SummaryWriter
from tqdm import tqdm

import test  # import test.py to get mAP after each epoch
from models.experimental import attempt_load
from models.yolo import Model
from utils.autoanchor import check_anchors
from utils.datasets import create_dataloader
from utils.general import labels_to_class_weights, increment_path, labels_to_image_weights, init_seeds, \
    fitness, strip_optimizer, get_latest_run, check_dataset, check_file, check_git_status, check_img_size, \
    check_requirements, print_mutation, set_logging, one_cycle, colorstr
from utils.google_utils import attempt_download
from utils.loss import ComputeLoss, ComputeLossOTA
from utils.plots import plot_images, plot_labels, plot_results, plot_evolution
from utils.torch_utils import ModelEMA, select_device, intersect_dicts, torch_distributed_zero_first, is_parallel
```
```python
from utils.wandb_logging.wandb_utils import WandbLogger, check_wandb_resume

logger = logging.getLogger(__name__)


def train(hyp, opt, device, tb_writer=None):
    logger.info(colorstr('hyperparameters: ') + ', '.join(f'{k}={v}' for k, v in hyp.items()))
    save_dir, epochs, batch_size, total_batch_size, weights, rank, freeze = \
        Path(opt.save_dir), opt.epochs, opt.batch_size, opt.total_batch_size, opt.weights, opt.global_rank, opt.freeze

    # Directories
    wdir = save_dir / 'weights'
    wdir.mkdir(parents=True, exist_ok=True)  # make dir
    last = wdir / 'last.pt'
    best = wdir / 'best.pt'
    results_file = save_dir / 'results.txt'

    # Save run settings
    with open(save_dir / 'hyp.yaml', 'w') as f:
        yaml.dump(hyp, f, sort_keys=False)
    with open(save_dir / 'opt.yaml', 'w') as f:
        yaml.dump(vars(opt), f, sort_keys=False)

    # Configure
    plots = not opt.evolve  # create plots
    cuda = device.type != 'cpu'
    init_seeds(2 + rank)
    with open(opt.data) as f:
        data_dict = yaml.load(f, Loader=yaml.SafeLoader)  # data dict
    is_coco = opt.data.endswith('coco.yaml')

    # Logging- Doing this before checking the dataset. Might update data_dict
    loggers = {'wandb': None}  # loggers dict
    if rank in [-1, 0]:
        opt.hyp = hyp  # add hyperparameters
        run_id = torch.load(weights, map_location=device).get('wandb_id') \
            if weights.endswith('.pt') and os.path.isfile(weights) else None
        wandb_logger = WandbLogger(opt, Path(opt.save_dir).stem, run_id, data_dict)
        loggers['wandb'] = wandb_logger.wandb
        data_dict = wandb_logger.data_dict
        if wandb_logger.wandb:
            weights, epochs, hyp = opt.weights, opt.epochs, opt.hyp  # WandbLogger might update weights, epochs if resuming

    nc = 1 if opt.single_cls else int(data_dict['nc'])  # number of classes
    names = ['item'] if opt.single_cls and len(data_dict['names']) != 1 else data_dict['names']  # class names
    assert len(names) == nc, '%g names found for nc=%g dataset in %s' % (len(names), nc, opt.data)  # check

    # Model
    pretrained = weights.endswith('.pt')
    if pretrained:
        with torch_distributed_zero_first(rank):
            attempt_download(weights)  # download if not found locally
        ckpt = torch.load(weights, map_location=device)  # load checkpoint
        model = Model(opt.cfg or ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device)  # create
        exclude = ['anchor'] if (opt.cfg or hyp.get('anchors')) and not opt.resume else []  # exclude keys
        state_dict = ckpt['model'].float().state_dict()  # to FP32
        state_dict = intersect_dicts(state_dict, model.state_dict(), exclude=exclude)  # intersect
        model.load_state_dict(state_dict, strict=False)  # load
        logger.info('Transferred %g/%g items from %s' % (len(state_dict), len(model.state_dict()), weights))  # report
    else:
        model = Model(opt.cfg, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device)  # create
    with torch_distributed_zero_first(rank):
        check_dataset(data_dict)  # check
    train_path = data_dict['train']
    test_path = data_dict['val']

    # Freeze
    freeze = [f'model.{x}.' for x in
              (freeze if len(freeze) > 1 else range(freeze[0]))]  # parameter names to freeze (full or partial)
    for k, v in model.named_parameters():
        v.requires_grad = True  # train all layers
        if any(x in k for x in freeze):
            print('freezing %s' % k)
            v.requires_grad = False

    # Optimizer
    nbs = 64  # nominal batch size
    accumulate = max(round(nbs / total_batch_size), 1)  # accumulate loss before optimizing
    hyp['weight_decay'] *= total_batch_size * accumulate / nbs  # scale weight_decay
    logger.info(f"Scaled weight_decay = {hyp['weight_decay']}")

    pg0, pg1, pg2 = [], [], []  # optimizer parameter groups
    for k, v in model.named_modules():
        if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter):
            pg2.append(v.bias)  # biases
        if isinstance(v, nn.BatchNorm2d):
            pg0.append(v.weight)  # no decay
        elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter):
            pg1.append(v.weight)  # apply decay
        if hasattr(v, 'im'):
            if hasattr(v.im, 'implicit'):
                pg0.append(v.im.implicit)
            else:
                for iv in v.im:
                    pg0.append(iv.implicit)
        if hasattr(v, 'imc'):
            if hasattr(v.imc, 'implicit'):
                pg0.append(v.imc.implicit)
            else:
                for iv in v.imc:
                    pg0.append(iv.implicit)
        if hasattr(v, 'imb'):
            if hasattr(v.imb, 'implicit'):
                pg0.append(v.imb.implicit)
            else:
                for iv in v.imb:
                    pg0.append(iv.implicit)
        if hasattr(v, 'imo'):
            if hasattr(v.imo, 'implicit'):
                pg0.append(v.imo.implicit)
            else:
                for iv in v.imo:
                    pg0.append(iv.implicit)
        if hasattr(v, 'ia'):
            if hasattr(v.ia, 'implicit'):
                pg0.append(v.ia.implicit)
            else:
                for iv in v.ia:
                    pg0.append(iv.implicit)
        if hasattr(v, 'attn'):
            if hasattr(v.attn, 'logit_scale'):
                pg0.append(v.attn.logit_scale)
            if hasattr(v.attn, 'q_bias'):
                pg0.append(v.attn.q_bias)
            if hasattr(v.attn, 'v_bias'):
                pg0.append(v.attn.v_bias)
            if hasattr(v.attn, 'relative_position_bias_table'):
                pg0.append(v.attn.relative_position_bias_table)
        if hasattr(v, 'rbr_dense'):
            if hasattr(v.rbr_dense, 'weight_rbr_origin'):
                pg0.append(v.rbr_dense.weight_rbr_origin)
            if hasattr(v.rbr_dense, 'weight_rbr_avg_conv'):
                pg0.append(v.rbr_dense.weight_rbr_avg_conv)
            if hasattr(v.rbr_dense, 'weight_rbr_pfir_conv'):
                pg0.append(v.rbr_dense.weight_rbr_pfir_conv)
            if hasattr(v.rbr_dense, 'weight_rbr_1x1_kxk_idconv1'):
                pg0.append(v.rbr_dense.weight_rbr_1x1_kxk_idconv1)
            if hasattr(v.rbr_dense, 'weight_rbr_1x1_kxk_conv2'):
                pg0.append(v.rbr_dense.weight_rbr_1x1_kxk_conv2)
            if hasattr(v.rbr_dense, 'weight_rbr_gconv_dw'):
                pg0.append(v.rbr_dense.weight_rbr_gconv_dw)
            if hasattr(v.rbr_dense, 'weight_rbr_gconv_pw'):
                pg0.append(v.rbr_dense.weight_rbr_gconv_pw)
            if hasattr(v.rbr_dense, 'vector'):
                pg0.append(v.rbr_dense.vector)

    if opt.adam:
        optimizer = optim.Adam(pg0, lr=hyp['lr0'], betas=(hyp['momentum'], 0.999))  # adjust beta1 to momentum
    else:
        optimizer = optim.SGD(pg0, lr=hyp['lr0'], momentum=hyp['momentum'], nesterov=True)

    optimizer.add_param_group({'params': pg1, 'weight_decay': hyp['weight_decay']})  # add pg1 with weight_decay
    optimizer.add_param_group({'params': pg2})  # add pg2 (biases)
    logger.info('Optimizer groups: %g .bias, %g conv.weight, %g other' % (len(pg2), len(pg1), len(pg0)))
    del pg0, pg1, pg2

    # Scheduler https://arxiv.org/pdf/1812.01187.pdf
    # https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html#OneCycleLR
    if opt.linear_lr:
        lf = lambda x: (1 - x / (epochs - 1)) * (1.0 - hyp['lrf']) + hyp['lrf']  # linear
    else:
        lf = one_cycle(1, hyp['lrf'], epochs)  # cosine 1->hyp['lrf']
    scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)
    # plot_lr_scheduler(optimizer, scheduler, epochs)

    # EMA
    ema = ModelEMA(model) if rank in [-1, 0] else None

    # Resume
    start_epoch, best_fitness = 0, 0.0
    if pretrained:
        # Optimizer
        if ckpt['optimizer'] is not None:
            optimizer.load_state_dict(ckpt['optimizer'])
            best_fitness = ckpt['best_fitness']

        # EMA
        if ema and ckpt.get('ema'):
            ema.ema.load_state_dict(ckpt['ema'].float().state_dict())
            ema.updates = ckpt['updates']

        # Results
        if ckpt.get('training_results') is not None:
            results_file.write_text(ckpt['training_results'])  # write results.txt

        # Epochs
        start_epoch = ckpt['epoch'] + 1
        if opt.resume:
            assert start_epoch > 0, '%s training to %g epochs is finished, nothing to resume.' % (weights, epochs)
        if epochs < start_epoch:
            logger.info('%s has been trained for %g epochs. Fine-tuning for %g additional epochs.' %
                        (weights, ckpt['epoch'], epochs))
            epochs += ckpt['epoch']  # finetune additional epochs

        del ckpt, state_dict

    # Image sizes
    gs = max(int(model.stride.max()), 32)  # grid size (max stride)
    nl = model.model[-1].nl  # number of detection layers (used for scaling hyp['obj'])
    imgsz, imgsz_test = [check_img_size(x, gs) for x in opt.img_size]  # verify imgsz are gs-multiples

    # DP mode
    if cuda and rank == -1 and torch.cuda.device_count() > 1:
        model = torch.nn.DataParallel(model)

    # SyncBatchNorm
    if opt.sync_bn and cuda and rank != -1:
        model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model).to(device)
        logger.info('Using SyncBatchNorm()')

    # Trainloader
    dataloader, dataset = create_dataloader(train_path, imgsz, batch_size, gs, opt,
                                            hyp=hyp, augment=True, cache=opt.cache_images, rect=opt.rect, rank=rank,
                                            world_size=opt.world_size, workers=opt.workers,
                                            image_weights=opt.image_weights, quad=opt.quad, prefix=colorstr('train: '))
    mlc = np.concatenate(dataset.labels, 0)[:, 0].max()  # max label class
    nb = len(dataloader)  # number of batches
    assert mlc < nc, 'Label class %g exceeds nc=%g in %s. Possible class labels are 0-%g' % (mlc, nc, opt.data, nc - 1)

    # Process 0
    if rank in [-1, 0]:
        testloader = create_dataloader(test_path, imgsz_test, batch_size * 2, gs, opt,  # testloader
                                       hyp=hyp, cache=opt.cache_images and not opt.notest, rect=True, rank=-1,
                                       world_size=opt.world_size, workers=opt.workers,
                                       pad=0.5, prefix=colorstr('val: '))[0]

        if not opt.resume:
            labels = np.concatenate(dataset.labels, 0)
            c = torch.tensor(labels[:, 0])  # classes
            # cf = torch.bincount(c.long(), minlength=nc) + 1.  # frequency
            # model._initialize_biases(cf.to(device))
            if plots:
                # plot_labels(labels, names, save_dir, loggers)
                if tb_writer:
                    tb_writer.add_histogram('classes', c, 0)

            # Anchors
            if not opt.noautoanchor:
                check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz)
            model.half().float()  # pre-reduce anchor precision

    # DDP mode
    if cuda and rank != -1:
        model = DDP(model, device_ids=[opt.local_rank], output_device=opt.local_rank,
                    # nn.MultiheadAttention incompatibility with DDP https://github.com/pytorch/pytorch/issues/26698
                    find_unused_parameters=any(isinstance(layer, nn.MultiheadAttention) for layer in model.modules()))

    # Model parameters
    hyp['box'] *= 3. / nl  # scale to layers
    hyp['cls'] *= nc / 80. * 3. / nl  # scale to classes and layers
    hyp['obj'] *= (imgsz / 640) ** 2 * 3. / nl  # scale to image size and layers
    hyp['label_smoothing'] = opt.label_smoothing
    model.nc = nc  # attach number of classes to model
    model.hyp = hyp  # attach hyperparameters to model
    model.gr = 1.0  # iou loss ratio (obj_loss = 1.0 or iou)
    model.class_weights = labels_to_class_weights(dataset.labels, nc).to(device) * nc  # attach class weights
    model.names = names

    # Start training
    t0 = time.time()
    nw = max(round(hyp['warmup_epochs'] * nb), 1000)  # number of warmup iterations, max(3 epochs, 1k iterations)
    # nw = min(nw, (epochs - start_epoch) / 2 * nb)  # limit warmup to < 1/2 of training
    maps = np.zeros(nc)  # mAP per class
    results = (0, 0, 0, 0, 0, 0, 0)  # P, R, mAP@.5, mAP@.5-.95, val_loss(box, obj, cls)
    scheduler.last_epoch = start_epoch - 1  # do not move
    scaler = amp.GradScaler(enabled=cuda)
    compute_loss_ota = ComputeLossOTA(model)  # init loss class
    compute_loss = ComputeLoss(model)  # init loss class
    logger.info(f'Image sizes {imgsz} train, {imgsz_test} test\n'
                f'Using {dataloader.num_workers} dataloader workers\n'
                f'Logging results to {save_dir}\n'
                f'Starting training for {epochs} epochs...')
    torch.save(model, wdir / 'init.pt')
    for epoch in range(start_epoch, epochs):  # epoch ------------------------------------------------------------------
        model.train()

        # Update image weights (optional)
        if opt.image_weights:
            # Generate indices
            if rank in [-1, 0]:
                cw = model.class_weights.cpu().numpy() * (1 - maps) ** 2 / nc  # class weights
                iw = labels_to_image_weights(dataset.labels, nc=nc, class_weights=cw)  # image weights
                dataset.indices = random.choices(range(dataset.n), weights=iw, k=dataset.n)  # rand weighted idx
            # Broadcast if DDP
            if rank != -1:
                indices = (torch.tensor(dataset.indices) if rank == 0 else torch.zeros(dataset.n)).int()
                dist.broadcast(indices, 0)
                if rank != 0:
                    dataset.indices = indices.cpu().numpy()

        # Update mosaic border
        # b = int(random.uniform(0.25 * imgsz, 0.75 * imgsz + gs) // gs * gs)
        # dataset.mosaic_border = [b - imgsz, -b]  # height, width borders

        mloss = torch.zeros(4, device=device)  # mean losses
        if rank != -1:
            dataloader.sampler.set_epoch(epoch)
        pbar = enumerate(dataloader)
        logger.info(('\n' + '%10s' * 8) % ('Epoch', 'gpu_mem', 'box', 'obj', 'cls', 'total', 'labels', 'img_size'))
        if rank in [-1, 0]:
            pbar = tqdm(pbar, total=nb)  # progress bar
        optimizer.zero_grad()
        for i, (imgs, targets, paths, _) in pbar:  # batch -------------------------------------------------------------
            ni = i + nb * epoch  # number integrated batches (since train start)
            imgs = imgs.to(device, non_blocking=True).float() / 255.0  # uint8 to float32, 0-255 to 0.0-1.0

            # Warmup
            if ni <= nw:
                xi = [0, nw]  # x interp
                # model.gr = np.interp(ni, xi, [0.0, 1.0])  # iou loss ratio (obj_loss = 1.0 or iou)
                accumulate = max(1, np.interp(ni, xi, [1, nbs / total_batch_size]).round())
                for j, x in enumerate(optimizer.param_groups):
                    # bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0
                    x['lr'] = np.interp(ni, xi, [hyp['warmup_bias_lr'] if j == 2 else 0.0, x['initial_lr'] * lf(epoch)])
                    if 'momentum' in x:
                        x['momentum'] = np.interp(ni, xi, [hyp['warmup_momentum'], hyp['momentum']])

            # Multi-scale
            if opt.multi_scale:
                sz = random.randrange(imgsz * 0.5, imgsz * 1.5 + gs) // gs * gs  # size
                sf = sz / max(imgs.shape[2:])  # scale factor
                if sf != 1:
                    ns = [math.ceil(x * sf / gs) * gs for x in imgs.shape[2:]]  # new shape (stretched to gs-multiple)
                    imgs = F.interpolate(imgs, size=ns, mode='bilinear', align_corners=False)

            # Forward
            with amp.autocast(enabled=cuda):
                pred = model(imgs)  # forward
                if 'loss_ota' not in hyp or hyp['loss_ota'] == 1:
                    loss, loss_items = compute_loss_ota(pred, targets.to(device), imgs)  # loss scaled by batch_size
                else:
                    loss, loss_items = compute_loss(pred, targets.to(device))  # loss scaled by batch_size
                if rank != -1:
                    loss *= opt.world_size  # gradient averaged between devices in DDP mode
                if opt.quad:
                    loss *= 4.

            # Backward
            scaler.scale(loss).backward()

            # Optimize
            if ni % accumulate == 0:
                scaler.step(optimizer)  # optimizer.step
                scaler.update()
                optimizer.zero_grad()
                if ema:
                    ema.update(model)

            # Print
            if rank in [-1, 0]:
                mloss = (mloss * i + loss_items) / (i + 1)  # update mean losses
                mem = '%.3gG' % (torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0)  # (GB)
                s = ('%10s' * 2 + '%10.4g' * 6) % (
                    '%g/%g' % (epoch, epochs - 1), mem, *mloss, targets.shape[0], imgs.shape[-1])
                pbar.set_description(s)

                # Plot
                if plots and ni < 10:
                    f = save_dir / f'train_batch{ni}.jpg'  # filename
                    Thread(target=plot_images, args=(imgs, targets, paths, f), daemon=True).start()
                    # if tb_writer:
                    #     tb_writer.add_image(f, result, dataformats='HWC', global_step=epoch)
                    #     tb_writer.add_graph(torch.jit.trace(model, imgs, strict=False), [])  # add model graph
                elif plots and ni == 10 and wandb_logger.wandb:
                    wandb_logger.log({'Mosaics': [wandb_logger.wandb.Image(str(x), caption=x.name) for x in
                                                  save_dir.glob('train*.jpg') if x.exists()]})

            # end batch ------------------------------------------------------------------------------------------------
        # end epoch ----------------------------------------------------------------------------------------------------

        # Scheduler
        lr = [x['lr'] for x in optimizer.param_groups]  # for tensorboard
        scheduler.step()

        # DDP process 0 or single-GPU
        if rank in [-1, 0]:
            # mAP
            ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'gr', 'names', 'stride', 'class_weights'])
            final_epoch = epoch + 1 == epochs
            if not opt.notest or final_epoch:  # Calculate mAP
                wandb_logger.current_epoch = epoch + 1
                results, maps, times = test.test(data_dict,
                                                 batch_size=batch_size * 2,
                                                 imgsz=imgsz_test,
                                                 model=ema.ema,
                                                 single_cls=opt.single_cls,
                                                 dataloader=testloader,
                                                 save_dir=save_dir,
                                                 verbose=nc < 50 and final_epoch,
                                                 plots=plots and final_epoch,
                                                 wandb_logger=wandb_logger,
                                                 compute_loss=compute_loss,
                                                 is_coco=is_coco)

            # Write
            with open(results_file, 'a') as f:
                f.write(s + '%10.4g' * 7 % results + '\n')  # append metrics, val_loss
            if len(opt.name) and opt.bucket:
                os.system('gsutil cp %s gs://%s/results/results%s.txt' % (results_file, opt.bucket, opt.name))

            # Log
            tags = ['train/box_loss', 'train/obj_loss', 'train/cls_loss',  # train loss
                    'metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95',
                    'val/box_loss', 'val/obj_loss', 'val/cls_loss',  # val loss
                    'x/lr0', 'x/lr1', 'x/lr2']  # params
            for x, tag in zip(list(mloss[:-1]) + list(results) + lr, tags):
                if tb_writer:
                    tb_writer.add_scalar(tag, x, epoch)  # tensorboard
                if wandb_logger.wandb:
                    wandb_logger.log({tag: x})  # W&B

            # Update best mAP
            fi = fitness(np.array(results).reshape(1, -1))  # weighted combination of [P, R, mAP@.5, mAP@.5-.95]
            if fi > best_fitness:
                best_fitness = fi
            wandb_logger.end_epoch(best_result=best_fitness == fi)

            # Save model
            if (not opt.nosave) or (final_epoch and not opt.evolve):  # if save
                ckpt = {'epoch': epoch,
                        'best_fitness': best_fitness,
                        'training_results': results_file.read_text(),
                        'model': deepcopy(model.module if is_parallel(model) else model).half(),
                        'ema': deepcopy(ema.ema).half(),
                        'updates': ema.updates,
                        'optimizer': optimizer.state_dict(),
                        'wandb_id': wandb_logger.wandb_run.id if wandb_logger.wandb else None}

                # Save last, best and delete
                torch.save(ckpt, last)
                if best_fitness == fi:
                    torch.save(ckpt, best)
                if (best_fitness == fi) and (epoch >= 200):
                    torch.save(ckpt, wdir / 'best_{:03d}.pt'.format(epoch))
                if epoch == 0:
                    torch.save(ckpt, wdir / 'epoch_{:03d}.pt'.format(epoch))
                elif ((epoch + 1) % 25) == 0:
                    torch.save(ckpt, wdir / 'epoch_{:03d}.pt'.format(epoch))
                elif epoch >= (epochs - 5):
                    torch.save(ckpt, wdir / 'epoch_{:03d}.pt'.format(epoch))
                if wandb_logger.wandb:
                    if ((epoch + 1) % opt.save_period == 0 and not final_epoch) and opt.save_period != -1:
                        wandb_logger.log_model(last.parent, opt, epoch, fi, best_model=best_fitness == fi)
                del ckpt

        # end epoch ----------------------------------------------------------------------------------------------------
    # end training
    if rank in [-1, 0]:
        # Plots
        if plots:
            plot_results(save_dir=save_dir)  # save as results.png
            if wandb_logger.wandb:
                files = ['results.png', 'confusion_matrix.png', *[f'{x}_curve.png' for x in ('F1', 'PR', 'P', 'R')]]
                wandb_logger.log({"Results": [wandb_logger.wandb.Image(str(save_dir / f), caption=f) for f in files
                                              if (save_dir / f).exists()]})
        # Test best.pt
        logger.info('%g epochs completed in %.3f hours.\n' % (epoch - start_epoch + 1, (time.time() - t0) / 3600))
        if opt.data.endswith('coco.yaml') and nc == 80:  # if COCO
            for m in (last, best) if best.exists() else (last):  # speed, mAP tests
                results, _, _ = test.test(opt.data,
                                          batch_size=batch_size * 2,
                                          imgsz=imgsz_test,
                                          conf_thres=0.001,
                                          iou_thres=0.7,
                                          model=attempt_load(m, device).half(),
                                          single_cls=opt.single_cls,
                                          dataloader=testloader,
                                          save_dir=save_dir,
                                          save_json=True,
                                          plots=False,
                                          is_coco=is_coco)

        # Strip optimizers
        final = best if best.exists() else last  # final model
        for f in last, best:
            if f.exists():
                strip_optimizer(f)  # strip optimizers
        if opt.bucket:
            os.system(f'gsutil cp {final} gs://{opt.bucket}/weights')  # upload
        if wandb_logger.wandb and not opt.evolve:  # Log the stripped model
            wandb_logger.wandb.log_artifact(str(final), type='model',
                                            name='run_' + wandb_logger.wandb_run.id + '_model',
                                            aliases=['last', 'best', 'stripped'])
        wandb_logger.finish_run()
    else:
        dist.destroy_process_group()
    torch.cuda.empty_cache()
    return results


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', type=str, default='yolov7.pt', help='initial weights path')
    parser.add_argument('--cfg', type=str, default='cfg/training/yolov7.yaml', help='model.yaml path')
    parser.add_argument('--data', type=str, default='data/coco.yaml', help='data.yaml path')
    parser.add_argument('--hyp', type=str, default='data/hyp.scratch.p5.yaml', help='hyperparameters path')
    parser.add_argument('--epochs', type=int, default=300)
    parser.add_argument('--batch-size', type=int, default=4, help='total batch size for all GPUs')
    parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='[train, test] image sizes')
    parser.add_argument('--rect', action='store_true', help='rectangular training')
    parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
    parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
    parser.add_argument('--notest', action='store_true', help='only test final epoch')
    parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check')
    parser.add_argument('--evolve', action='store_true', help='evolve hyperparameters')
    parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
    parser.add_argument('--cache-images', action='store_true', help='cache images for faster training')
    parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')
    parser.add_argument('--device', default='0', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
    parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class')
    parser.add_argument('--adam', action='store_true', help='use torch.optim.Adam() optimizer')
    parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
    parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify')
    parser.add_argument('--workers', type=int, default=0, help='maximum number of dataloader workers')
    parser.add_argument('--project', default='runs/train', help='save to project/name')
    parser.add_argument('--entity', default=None, help='W&B entity')
    parser.add_argument('--name', default='exp', help='save to project/name')
    parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
    parser.add_argument('--quad', action='store_true', help='quad dataloader')
    parser.add_argument('--linear-lr', action='store_true', help='linear LR')
    parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon')
    parser.add_argument('--upload_dataset', action='store_true', help='Upload dataset as W&B artifact table')
    parser.add_argument('--bbox_interval', type=int, default=-1, help='Set bounding-box image logging interval for W&B')
    parser.add_argument('--save_period', type=int, default=-1, help='Log model after every "save_period" epoch')
    parser.add_argument('--artifact_alias', type=str, default="latest", help='version of dataset artifact to be used')
    parser.add_argument('--freeze', nargs='+', type=int, default=[0],
                        help='Freeze layers: backbone of yolov7=50, first3=0 1 2')
    opt = parser.parse_args()

    # Set DDP variables
    opt.world_size = int(os.environ['WORLD_SIZE']) if 'WORLD_SIZE' in os.environ else 1
    opt.global_rank = int(os.environ['RANK']) if 'RANK' in os.environ else -1
    set_logging(opt.global_rank)
    # if opt.global_rank in [-1, 0]:
    #     check_git_status()
    #     check_requirements()

    # Resume
    wandb_run = check_wandb_resume(opt)
    if opt.resume and not wandb_run:  # resume an interrupted run
        ckpt = opt.resume if isinstance(opt.resume, str) else get_latest_run()  # specified or most recent path
        assert os.path.isfile(ckpt), 'ERROR: --resume checkpoint does not exist'
        apriori = opt.global_rank, opt.local_rank
        with open(Path(ckpt).parent.parent / 'opt.yaml') as f:
            opt = argparse.Namespace(**yaml.load(f, Loader=yaml.SafeLoader))  # replace
        opt.cfg, opt.weights, opt.resume, opt.batch_size, opt.global_rank, opt.local_rank = \
            '', ckpt, True, opt.total_batch_size, *apriori  # reinstate
        logger.info('Resuming training from %s' % ckpt)
    else:
        # opt.hyp = opt.hyp or ('hyp.finetune.yaml' if opt.weights else 'hyp.scratch.yaml')
        opt.data, opt.cfg, opt.hyp = check_file(opt.data), check_file(opt.cfg), check_file(opt.hyp)  # check files
        assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified'
        opt.img_size.extend([opt.img_size[-1]] * (2 - len(opt.img_size)))  # extend to 2 sizes (train, test)
        opt.name = 'evolve' if opt.evolve else opt.name
        opt.save_dir = increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok | opt.evolve)  # increment run

    # DDP mode
    opt.total_batch_size = opt.batch_size
    device = select_device(opt.device, batch_size=opt.batch_size)
    if opt.local_rank != -1:
        assert torch.cuda.device_count() > opt.local_rank
        torch.cuda.set_device(opt.local_rank)
        device = torch.device('cuda', opt.local_rank)
        dist.init_process_group(backend='nccl', init_method='env://')  # distributed backend
        assert opt.batch_size % opt.world_size == 0, '--batch-size must be multiple of CUDA device count'
        opt.batch_size = opt.total_batch_size // opt.world_size

    # Hyperparameters
    with open(opt.hyp) as f:
        hyp = yaml.load(f, Loader=yaml.SafeLoader)  # load hyps

    # Train
    logger.info(opt)
    if not opt.evolve:
        tb_writer = None  # init loggers
        if opt.global_rank in [-1, 0]:
            prefix = colorstr('tensorboard: ')
            logger.info(f"{prefix}Start with 'tensorboard --logdir {opt.project}', view at http://localhost:6006/")
            tb_writer = SummaryWriter(opt.save_dir)  # Tensorboard
        train(hyp, opt, device, tb_writer)

    # Evolve hyperparameters (optional)
    else:
        # Hyperparameter evolution metadata (mutation scale 0-1, lower_limit, upper_limit)
        meta = {'lr0': (1, 1e-5, 1e-1),  # initial learning rate (SGD=1E-2, Adam=1E-3)
                'lrf': (1, 0.01, 1.0),  # final OneCycleLR learning rate (lr0 * lrf)
                'momentum': (0.3, 0.6, 0.98),  # SGD momentum/Adam beta1
                'weight_decay': (1, 0.0, 0.001),  # optimizer weight decay
                'warmup_epochs': (1, 0.0, 5.0),  # warmup epochs (fractions ok)
                'warmup_momentum': (1, 0.0, 0.95),  # warmup initial momentum
                'warmup_bias_lr': (1, 0.0, 0.2),  # warmup initial bias lr
                'box': (1, 0.02, 0.2),  # box loss gain
                'cls': (1, 0.2, 4.0),  # cls loss gain
                'cls_pw': (1, 0.5, 2.0),  # cls BCELoss positive_weight
                'obj': (1, 0.2, 4.0),  # obj loss gain (scale with pixels)
                'obj_pw': (1, 0.5, 2.0),  # obj BCELoss positive_weight
                'iou_t': (0, 0.1, 0.7),  # IoU training threshold
                'anchor_t': (1, 2.0, 8.0),  # anchor-multiple threshold
                'anchors': (2, 2.0, 10.0),  # anchors per output grid (0 to ignore)
                'fl_gamma': (0, 0.0, 2.0),  # focal loss gamma (efficientDet default gamma=1.5)
                'hsv_h': (1, 0.0, 0.1),  # image HSV-Hue augmentation (fraction)
                'hsv_s': (1, 0.0, 0.9),  # image HSV-Saturation augmentation (fraction)
                'hsv_v': (1, 0.0, 0.9),  # image HSV-Value augmentation (fraction)
                'degrees': (1, 0.0, 45.0),  # image rotation (+/- deg)
                'translate': (1, 0.0, 0.9),  # image translation (+/- fraction)
                'scale': (1, 0.0, 0.9),  # image scale (+/- gain)
                'shear': (1, 0.0, 10.0),  # image shear (+/- deg)
                'perspective': (0, 0.0, 0.001),  # image perspective (+/- fraction), range 0-0.001
                'flipud': (1, 0.0, 1.0),  # image flip up-down (probability)
                'fliplr': (0, 0.0, 1.0),  # image flip left-right (probability)
                'mosaic': (1, 0.0, 1.0),  # image mixup (probability)
                'mixup': (1, 0.0, 1.0),  # image mixup (probability)
                'copy_paste': (1, 0.0, 1.0),  # segment copy-paste (probability)
                'paste_in': (1, 0.0, 1.0)}  # segment copy-paste (probability)

        with open(opt.hyp, errors='ignore') as f:
            hyp = yaml.safe_load(f)  # load hyps dict
            if 'anchors' not in hyp:  # anchors commented in hyp.yaml
                hyp['anchors'] = 3

        assert opt.local_rank == -1, 'DDP mode not implemented for --evolve'
        opt.notest, opt.nosave = True, True  # only test/save final epoch
        # ei = [isinstance(x, (int, float)) for x in hyp.values()]  # evolvable indices
        yaml_file = Path(opt.save_dir) / 'hyp_evolved.yaml'  # save best result here
        if opt.bucket:
            os.system('gsutil cp gs://%s/evolve.txt .' % opt.bucket)  # download evolve.txt if exists

        for _ in range(300):  # generations to evolve
            if Path('evolve.txt').exists():  # if evolve.txt exists: select best hyps and mutate
                # Select parent(s)
                parent = 'single'  # parent selection method: 'single' or 'weighted'
                x = np.loadtxt('evolve.txt', ndmin=2)
                n = min(5, len(x))  # number of previous results to consider
                x = x[np.argsort(-fitness(x))][:n]  # top n mutations
                w = fitness(x) - fitness(x).min()  # weights
                if parent == 'single' or len(x) == 1:
                    # x = x[random.randint(0, n - 1)]  # random selection
                    x = x[random.choices(range(n), weights=w)[0]]  # weighted selection
                elif parent == 'weighted':
                    x = (x * w.reshape(n, 1)).sum(0) / w.sum()  # weighted combination

                # Mutate
                mp, s = 0.8, 0.2  # mutation probability, sigma
                npr = np.random
                npr.seed(int(time.time()))
                g = np.array([x[0] for x in meta.values()])  # gains 0-1
                ng = len(meta)
                v = np.ones(ng)
                while all(v == 1):  # mutate until a change occurs (prevent duplicates)
                    v = (g * (npr.random(ng) < mp) * npr.randn(ng) * npr.random() * s + 1).clip(0.3, 3.0)
                for i, k in enumerate(hyp.keys()):  # plt.hist(v.ravel(), 300)
                    hyp[k] = float(x[i + 7] * v[i])  # mutate

            # Constrain to limits
            for k, v in meta.items():
                hyp[k] = max(hyp[k], v[1])  # lower limit
                hyp[k] = min(hyp[k], v[2])  # upper limit
                hyp[k] = round(hyp[k], 5)  # significant digits

            # Train mutation
            results = train(hyp.copy(), opt, device)

            # Write mutation results
            print_mutation(hyp.copy(), results, yaml_file, opt.bucket)

        # Plot results
        plot_evolution(yaml_file)
        print(f'Hyperparameter evolution complete. Best results saved as: {yaml_file}\n'
              f'Command to train a new model with these hyperparameters: $ python train.py --hyp {yaml_file}')
```

7. Building the UI and Integrating the System
```python
# Imports added for completeness; det_yolov7 is the project's own detection entry
# point and is assumed to be importable from the detection script.
import os
import sys

import cv2
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtCore import Qt, QThread
from PyQt5.QtGui import QPixmap
from PyQt5.QtWidgets import QFileDialog


class Thread_1(QThread):  # worker thread 1
    def __init__(self, info1):
        super().__init__()
        self.info1 = info1
        self.run2(self.info1)

    def run2(self, info1):
        result = []
        result = det_yolov7(info1)


class Ui_MainWindow(object):
    def setupUi(self, MainWindow):
        MainWindow.setObjectName("MainWindow")
        MainWindow.resize(1280, 960)
        MainWindow.setStyleSheet("background-image: url(\"./template/carui.png\")")
        self.centralwidget = QtWidgets.QWidget(MainWindow)
        self.centralwidget.setObjectName("centralwidget")
        self.label = QtWidgets.QLabel(self.centralwidget)
        self.label.setGeometry(QtCore.QRect(168, 60, 551, 71))
        self.label.setAutoFillBackground(False)
        self.label.setStyleSheet("")
        self.label.setFrameShadow(QtWidgets.QFrame.Plain)
        self.label.setAlignment(QtCore.Qt.AlignCenter)
        self.label.setObjectName("label")
        self.label.setStyleSheet("font-size:42px;font-weight:bold;font-family:SimHei;background:rgba(255,255,255,0);")
        self.label_2 = QtWidgets.QLabel(self.centralwidget)
        self.label_2.setGeometry(QtCore.QRect(40, 188, 751, 501))
        self.label_2.setStyleSheet("background:rgba(255,255,255,1);")
        self.label_2.setAlignment(QtCore.Qt.AlignCenter)
        self.label_2.setObjectName("label_2")
        self.textBrowser = QtWidgets.QTextBrowser(self.centralwidget)
        self.textBrowser.setGeometry(QtCore.QRect(73, 746, 851, 174))
        self.textBrowser.setStyleSheet("background:rgba(0,0,0,0);")
        self.textBrowser.setObjectName("textBrowser")
        self.pushButton = QtWidgets.QPushButton(self.centralwidget)
        self.pushButton.setGeometry(QtCore.QRect(1020, 750, 150, 40))
        self.pushButton.setStyleSheet("background:rgba(53,142,255,1);border-radius:10px;padding:2px 4px;")
        self.pushButton.setObjectName("pushButton")
        self.pushButton_2 = QtWidgets.QPushButton(self.centralwidget)
        self.pushButton_2.setGeometry(QtCore.QRect(1020, 810, 150, 40))
        self.pushButton_2.setStyleSheet("background:rgba(53,142,255,1);border-radius:10px;padding:2px 4px;")
        self.pushButton_2.setObjectName("pushButton_2")
        self.pushButton_3 = QtWidgets.QPushButton(self.centralwidget)
        self.pushButton_3.setGeometry(QtCore.QRect(1020, 870, 150, 40))
        self.pushButton_3.setStyleSheet("background:rgba(53,142,255,1);border-radius:10px;padding:2px 4px;")
        self.pushButton_3.setObjectName("pushButton_3")  # the original reused "pushButton_2" here; fixed
        MainWindow.setCentralWidget(self.centralwidget)

        self.retranslateUi(MainWindow)
        QtCore.QMetaObject.connectSlotsByName(MainWindow)

    def retranslateUi(self, MainWindow):
        _translate = QtCore.QCoreApplication.translate
        MainWindow.setWindowTitle(_translate("MainWindow", "Traffic-flow counting system based on YOLO & Deepsort"))
        self.label.setText(_translate("MainWindow", "Traffic-flow counting system based on YOLO & Deepsort"))
        self.label_2.setText(_translate("MainWindow", "Please add a file; make sure the path contains no Chinese characters"))
        self.pushButton.setText(_translate("MainWindow", "Select file"))
        self.pushButton_2.setText(_translate("MainWindow", "Start detection"))
        self.pushButton_3.setText(_translate("MainWindow", "Exit"))

        # Bind the buttons to their slots
        self.pushButton.clicked.connect(self.openfile)
        self.pushButton_2.clicked.connect(self.click_1)
        self.pushButton_3.clicked.connect(self.handleCalc3)

    def openfile(self):
        global sname, filepath
        fname = QFileDialog()
        fname.setAcceptMode(QFileDialog.AcceptOpen)
        fname, _ = fname.getOpenFileName()
        if fname == '':
            return
        filepath = os.path.normpath(fname)
        sname = filepath.split(os.sep)
        ui.printf("Currently selected file: %s" % filepath)
        try:
            show = cv2.imread(filepath)
            ui.showimg(show)
        except:
            ui.printf("Check whether the path contains Chinese characters; rename it and retry")

    def handleCalc3(self):
        os._exit(0)

    def printf(self, text):
        self.textBrowser.append(text)
        self.cursor = self.textBrowser.textCursor()
        self.textBrowser.moveCursor(self.cursor.End)
        QtWidgets.QApplication.processEvents()

    def showimg(self, img):
        global vid
        img2 = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        _image = QtGui.QImage(img2[:], img2.shape[1], img2.shape[0], img2.shape[1] * 3,
                              QtGui.QImage.Format_RGB888)
        n_width = _image.width()
        n_height = _image.height()
        if n_width / 500 >= n_height / 400:
            ratio = n_width / 700
        else:
            ratio = n_height / 700
        new_width = int(n_width / ratio)
        new_height = int(n_height / ratio)
        new_img = _image.scaled(new_width, new_height, Qt.KeepAspectRatio)
        self.label_2.setPixmap(QPixmap.fromImage(new_img))

    def click_1(self):
        global filepath
        try:
            self.thread_1.quit()
        except:
            pass
        self.thread_1 = Thread_1(filepath)  # create the worker thread
        self.thread_1.wait()
        self.thread_1.start()  # start the thread


if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    MainWindow = QtWidgets.QMainWindow()
    ui = Ui_MainWindow()
    ui.setupUi(MainWindow)
    MainWindow.show()
    sys.exit(app.exec_())
```

8. Project Showcase
Shown below: complete source code & environment/deployment video tutorial & custom UI interface.
Reference: the blog post "[YOLOv7] A people-flow counting system based on YOLO & Deepsort (source code & deployment tutorial)".