Links
GitHub repository — if the code helps your research, please give the repository a star.
Notes
PSNR, SSIM, MSE, and LPIPS are full-reference (supervised) metrics: each computation requires a pair of images, i.e. the image to be evaluated and its reference. NIQE is a no-reference (unsupervised) metric: it needs only the single image being evaluated.
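As a quick orientation, here is a minimal usage sketch of that difference, calling the functions defined in the Code section below. Loading the images with OpenCV, the BGR channel order, and the file names are assumptions for illustration, not part of the original code.

import cv2

# Both kinds of metrics below expect float ndarrays with range [0, 255].
restored = cv2.imread('restored.png').astype('float64')       # hypothetical file
reference = cv2.imread('ground_truth.png').astype('float64')  # hypothetical file

# Full-reference metric: needs the restored image AND its reference.
psnr_value = calculate_psnr(restored, reference, test_y_channel=True)

# No-reference metric: needs only the restored image itself.
# (calculate_niqe also expects niqe_pris_params.npz from the repository
#  to be present in the working directory.)
niqe_value = calculate_niqe(restored, crop_border=0)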
Code
Peak Signal-to-Noise Ratio (PSNR)
import numpy as np
from utils import to_y_channel


def calculate_psnr(img1, img2, test_y_channel=False):
    """Calculate PSNR (Peak Signal-to-Noise Ratio).

    Ref: https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio

    Args:
        img1 (ndarray): Images with range [0, 255].
        img2 (ndarray): Images with range [0, 255].
        test_y_channel (bool): Test on Y channel of YCbCr. Default: False.

    Returns:
        float: psnr result.
    """
    assert img1.shape == img2.shape, (
        f'Image shapes are different: {img1.shape}, {img2.shape}.')
    assert img1.shape[2] == 3

    img1 = img1.astype(np.float64)
    img2 = img2.astype(np.float64)

    if test_y_channel:
        img1 = to_y_channel(img1)
        img2 = to_y_channel(img2)

    # PSNR = 20 * log10(MAX / sqrt(MSE)), with MAX = 255 for 8-bit images.
    mse = np.mean((img1 - img2) ** 2)
    if mse == 0:
        return float('inf')
    return 20. * np.log10(255. / np.sqrt(mse))
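The metric functions above and below import to_y_channel from a local utils module that is not shown in this post. For reference, here is a minimal sketch of such a helper, assuming BGR input in [0, 255] and the ITU-R BT.601 / MATLAB rgb2ycbcr convention; the actual helper in the linked repository may differ.

import numpy as np

def to_y_channel(img):
    """Hypothetical sketch: return the Y channel (MATLAB YCbCr convention) of a
    BGR image with range [0, 255], as a float64 array of shape (h, w, 1)."""
    img = img.astype(np.float64) / 255.
    b, g, r = img[..., 0], img[..., 1], img[..., 2]
    y = 65.481 * r + 128.553 * g + 24.966 * b + 16.0  # Y in [16, 235]
    return y[..., None]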
Structural Similarity (SSIM)

import cv2
import numpy as np

from utils import to_y_channel


def _ssim(img1, img2):
    """Calculate SSIM (structural similarity) for one-channel images.

    It is called by :func:`calculate_ssim`.

    Args:
        img1 (ndarray): Images with range [0, 255] with order 'HWC'.
        img2 (ndarray): Images with range [0, 255] with order 'HWC'.

    Returns:
        float: ssim result.
    """
    C1 = (0.01 * 255) ** 2
    C2 = (0.03 * 255) ** 2

    img1 = img1.astype(np.float64)
    img2 = img2.astype(np.float64)
    kernel = cv2.getGaussianKernel(11, 1.5)
    window = np.outer(kernel, kernel.transpose())

    mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5]
    mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5]
    mu1_sq = mu1 ** 2
    mu2_sq = mu2 ** 2
    mu1_mu2 = mu1 * mu2
    sigma1_sq = cv2.filter2D(img1 ** 2, -1, window)[5:-5, 5:-5] - mu1_sq
    sigma2_sq = cv2.filter2D(img2 ** 2, -1, window)[5:-5, 5:-5] - mu2_sq
    sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2

    ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / (
        (mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2))
    return ssim_map.mean()


def calculate_ssim(img1, img2, test_y_channel=False):
    """Calculate SSIM (structural similarity).

    Ref:
    Image quality assessment: From error visibility to structural similarity

    The results are the same as that of the official released MATLAB code in
    https://ece.uwaterloo.ca/~z70wang/research/ssim/.

    For three-channel images, SSIM is calculated for each channel and then
    averaged.

    Args:
        img1 (ndarray): Images with range [0, 255].
        img2 (ndarray): Images with range [0, 255].
        test_y_channel (bool): Test on Y channel of YCbCr. Default: False.

    Returns:
        float: ssim result.
    """
    assert img1.shape == img2.shape, (
        f'Image shapes are different: {img1.shape}, {img2.shape}.')
    assert img1.shape[2] == 3

    img1 = img1.astype(np.float64)
    img2 = img2.astype(np.float64)

    if test_y_channel:
        img1 = to_y_channel(img1)
        img2 = to_y_channel(img2)

    ssims = []
    for i in range(img1.shape[2]):
        ssims.append(_ssim(img1[..., i], img2[..., i]))
    return np.array(ssims).mean()
Mean Squared Error (MSE)

import numpy as np

from utils import to_y_channel


def calculate_mse(img1, img2, test_y_channel=False):
    """Calculate MSE (Mean Squared Error) between two images in range [0, 255]."""
    assert img1.shape == img2.shape, (
        f'Image shapes are different: {img1.shape}, {img2.shape}.')
    assert img1.shape[2] == 3

    img1 = img1.astype(np.float64)
    img2 = img2.astype(np.float64)

    if test_y_channel:
        img1 = to_y_channel(img1)
        img2 = to_y_channel(img2)

    return np.mean((img1 - img2) ** 2)
Learned Perceptual Image Patch Similarity (LPIPS)

import lpips
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
# Build the LPIPS metric once (AlexNet backbone) and reuse it as a callable.
calculate_lpips = lpips.LPIPS(net='alex', verbose=False).to(device)
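The snippet above only constructs the metric. Here is a short usage sketch, reusing the restored and reference arrays from the earlier example; the helper that converts [0, 255] BGR ndarrays into RGB tensors in [-1, 1] (the input range the lpips package expects) is an assumption for illustration.

import numpy as np

def _to_lpips_tensor(img_bgr):
    # Hypothetical helper: HWC BGR ndarray in [0, 255] -> 1x3xHxW RGB tensor in [-1, 1].
    img_rgb = img_bgr[..., ::-1].astype(np.float32) / 255.
    tensor = torch.from_numpy(img_rgb).permute(2, 0, 1).unsqueeze(0)
    return (tensor * 2. - 1.).to(device)

with torch.no_grad():
    lpips_value = calculate_lpips(_to_lpips_tensor(restored),
                                  _to_lpips_tensor(reference)).item()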
Natural Image Quality Evaluator (NIQE)

import cv2
import math
import numpy as np
from scipy.ndimage import convolve
from scipy.special import gamma

from utils import to_y_channel


def reorder_image(img, input_order='HWC'):
    """Reorder images to 'HWC' order.

    If the input_order is (h, w), return (h, w, 1);
    If the input_order is (c, h, w), return (h, w, c);
    If the input_order is (h, w, c), return as it is.

    Args:
        img (ndarray): Input image.
        input_order (str): Whether the input order is 'HWC' or 'CHW'.
            If the input image shape is (h, w), input_order will not have
            effects. Default: 'HWC'.

    Returns:
        ndarray: reordered image.
    """
    if input_order not in ['HWC', 'CHW']:
        raise ValueError(
            f"Wrong input_order {input_order}. Supported input_orders are 'HWC' and 'CHW'")
    if len(img.shape) == 2:
        img = img[..., None]
    if input_order == 'CHW':
        img = img.transpose(1, 2, 0)
    return img


def estimate_aggd_param(block):
    """Estimate AGGD (Asymmetric Generalized Gaussian Distribution) parameters.

    Args:
        block (ndarray): 2D Image block.

    Returns:
        tuple: alpha (float), beta_l (float) and beta_r (float) for the AGGD
            distribution (estimating the params in Equation 7 in the paper).
    """
    block = block.flatten()
    gam = np.arange(0.2, 10.001, 0.001)  # len = 9801
    gam_reciprocal = np.reciprocal(gam)
    r_gam = np.square(gamma(gam_reciprocal * 2)) / (
        gamma(gam_reciprocal) * gamma(gam_reciprocal * 3))

    left_std = np.sqrt(np.mean(block[block < 0]**2))
    right_std = np.sqrt(np.mean(block[block > 0]**2))
    gammahat = left_std / right_std
    rhat = (np.mean(np.abs(block)))**2 / np.mean(block**2)
    rhatnorm = (rhat * (gammahat**3 + 1) *
                (gammahat + 1)) / ((gammahat**2 + 1)**2)
    array_position = np.argmin((r_gam - rhatnorm)**2)

    alpha = gam[array_position]
    beta_l = left_std * np.sqrt(gamma(1 / alpha) / gamma(3 / alpha))
    beta_r = right_std * np.sqrt(gamma(1 / alpha) / gamma(3 / alpha))
    return (alpha, beta_l, beta_r)


def compute_feature(block):
    """Compute features.

    Args:
        block (ndarray): 2D Image block.

    Returns:
        list: Features with length of 18.
    """
    feat = []
    alpha, beta_l, beta_r = estimate_aggd_param(block)
    feat.extend([alpha, (beta_l + beta_r) / 2])

    # Products of pairs of adjacent coefficients along four orientations:
    # horizontal, vertical and the two diagonals.
    shifts = [[0, 1], [1, 0], [1, 1], [1, -1]]
    for i in range(len(shifts)):
        shifted_block = np.roll(block, shifts[i], axis=(0, 1))
        alpha, beta_l, beta_r = estimate_aggd_param(block * shifted_block)
        # Eq. 8
        mean = (beta_r - beta_l) * (gamma(2 / alpha) / gamma(1 / alpha))
        feat.extend([alpha, mean, beta_l, beta_r])
    return feat


def niqe(img,
         mu_pris_param,
         cov_pris_param,
         gaussian_window,
         block_size_h=96,
         block_size_w=96):
    """Calculate NIQE (Natural Image Quality Evaluator) metric.

    Ref: Making a "Completely Blind" Image Quality Analyzer.
    This implementation could produce almost the same results as the official
    MATLAB codes: http://live.ece.utexas.edu/research/quality/niqe_release.zip

    Note that we do not include block overlap height and width, since they are
    always 0 in the official implementation.

    For good performance, the official implementation advises dividing the
    distorted image into patches of the same size as those used for the
    construction of the multivariate Gaussian model.

    Args:
        img (ndarray): Input image whose quality needs to be computed. The
            image must be a gray or Y (of YCbCr) image with shape (h, w).
            Range [0, 255] with float type.
        mu_pris_param (ndarray): Mean of a pre-defined multivariate Gaussian
            model calculated on the pristine dataset.
        cov_pris_param (ndarray): Covariance of a pre-defined multivariate
            Gaussian model calculated on the pristine dataset.
        gaussian_window (ndarray): A 7x7 Gaussian window used for smoothing
            the image.
        block_size_h (int): Height of the blocks into which the image is
            divided. Default: 96 (the official recommended value).
        block_size_w (int): Width of the blocks into which the image is
            divided. Default: 96 (the official recommended value).
    """
    assert img.ndim == 2, (
        'Input image must be a gray or Y (of YCbCr) image with shape (h, w).')
    # Crop the image so that it is a multiple of the block size.
    h, w = img.shape
    num_block_h = math.floor(h / block_size_h)
    num_block_w = math.floor(w / block_size_w)
    img = img[0:num_block_h * block_size_h, 0:num_block_w * block_size_w]

    distparam = []  # multiscale distortion parameters (features)
    for scale in (1, 2):
        # Local mean subtraction and contrast normalization (Eq. 1 in the paper).
        mu = convolve(img, gaussian_window, mode='nearest')
        sigma = np.sqrt(
            np.abs(
                convolve(np.square(img), gaussian_window, mode='nearest') -
                np.square(mu)))
        img_normalized = (img - mu) / (sigma + 1)

        feat = []
        for idx_w in range(num_block_w):
            for idx_h in range(num_block_h):
                # Extract each block and compute its 18-dim feature vector.
                block = img_normalized[idx_h * block_size_h //
                                       scale:(idx_h + 1) * block_size_h //
                                       scale, idx_w * block_size_w //
                                       scale:(idx_w + 1) * block_size_w //
                                       scale]
                feat.append(compute_feature(block))

        distparam.append(np.array(feat))

        if scale == 1:
            # Downscale by a factor of 2 for the second scale.
            h, w = img.shape
            img = cv2.resize(
                img / 255., (w // 2, h // 2), interpolation=cv2.INTER_LINEAR)
            img = img * 255.

    distparam = np.concatenate(distparam, axis=1)

    # Fit a multivariate Gaussian (MVG) model to the distorted patch features.
    mu_distparam = np.nanmean(distparam, axis=0)
    distparam_no_nan = distparam[~np.isnan(distparam).any(axis=1)]
    cov_distparam = np.cov(distparam_no_nan, rowvar=False)

    # Compute the NIQE quality score (Eq. 10 in the paper).
    invcov_param = np.linalg.pinv((cov_pris_param + cov_distparam) / 2)
    quality = np.matmul(
        np.matmul((mu_pris_param - mu_distparam), invcov_param),
        np.transpose((mu_pris_param - mu_distparam)))

    quality = np.sqrt(quality)
    return quality


def calculate_niqe(img, crop_border, input_order='HWC', convert_to='y'):
    """Calculate NIQE (Natural Image Quality Evaluator) metric.

    Ref: Making a "Completely Blind" Image Quality Analyzer.
    This implementation could produce almost the same results as the official
    MATLAB codes: http://live.ece.utexas.edu/research/quality/niqe_release.zip

    We use the official params estimated from the pristine dataset.
    We use the recommended block size (96, 96) without overlaps.

    Args:
        img (ndarray): Input image whose quality needs to be computed.
            The input image must be in range [0, 255] with float/int type.
            The input_order of the image can be 'HW', 'HWC' or 'CHW' (BGR
            order). If the input order is 'HWC' or 'CHW', it will be
            converted to gray or Y (of YCbCr) according to ``convert_to``.
        crop_border (int): Cropped pixels in each edge of an image. These
            pixels are not involved in the metric calculation.
        input_order (str): Whether the input order is 'HW', 'HWC' or 'CHW'.
            Default: 'HWC'.
        convert_to (str): Whether converted to 'y' (of MATLAB YCbCr) or
            'gray'. Default: 'y'.

    Returns:
        float: NIQE result.
    """
    # Official parameters estimated from the pristine dataset.
    niqe_pris_params = np.load('./niqe_pris_params.npz')
    mu_pris_param = niqe_pris_params['mu_pris_param']
    cov_pris_param = niqe_pris_params['cov_pris_param']
    gaussian_window = niqe_pris_params['gaussian_window']

    img = img.astype(np.float32)
    if input_order != 'HW':
        img = reorder_image(img, input_order=input_order)
        if convert_to == 'y':
            img = to_y_channel(img)
        elif convert_to == 'gray':
            img = cv2.cvtColor(img / 255., cv2.COLOR_BGR2GRAY) * 255.
        img = np.squeeze(img)

    if crop_border != 0:
        img = img[crop_border:-crop_border, crop_border:-crop_border]

    niqe_result = niqe(img, mu_pris_param, cov_pris_param, gaussian_window)

    return niqe_result
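A short usage note for calculate_niqe: it reads niqe_pris_params.npz (the pristine-model mean, covariance, and Gaussian window) from the working directory, so that file from the linked repository must be present. A minimal sketch follows, with a hypothetical image path.

import cv2

img = cv2.imread('restored.png')  # hypothetical path; BGR uint8, shape (h, w, 3)

# Evaluate on the Y channel of YCbCr (the default); pass convert_to='gray'
# to evaluate on the grayscale image instead. crop_border trims the edges.
score = calculate_niqe(img, crop_border=4, input_order='HWC', convert_to='y')
print(f'NIQE: {float(score):.4f}')  # lower means better perceptual quality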