
This post is excerpted from the programming assignments of Andrew Ng's Deep Learning Specialization; many thanks to the course team.

Course link: https://www.deeplearning.ai/deep-learning-specialization/

Contents

1 - Neural Network model
2 - Zero initialization
3 - Random initialization (master this)
4 - He initialization (understand this)

To get started, run the following cell to load the packages and the planar dataset you will try to classify.

import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec

%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()

1 - Neural Network model

You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with:

- Zeros initialization -- setting initialization = "zeros" in the input argument.
- Random initialization -- setting initialization = "random" in the input argument. This initializes the weights to large random values.
- He initialization -- setting initialization = "he" in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015.

Instructions: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this model() calls.

def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"):
    """
    Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.

    Arguments:
    X -- input data, of shape (2, number of examples)
    Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
    learning_rate -- learning rate for gradient descent
    num_iterations -- number of iterations to run gradient descent
    print_cost -- if True, print the cost every 1000 iterations
    initialization -- flag to choose which initialization to use ("zeros", "random" or "he")

    Returns:
    parameters -- parameters learnt by the model
    """

    grads = {}
    costs = []       # to keep track of the loss
    m = X.shape[1]   # number of examples
    layers_dims = [X.shape[0], 10, 5, 1]

    # Initialize parameters dictionary.
    if initialization == "zeros":
        parameters = initialize_parameters_zeros(layers_dims)
    elif initialization == "random":
        parameters = initialize_parameters_random(layers_dims)
    elif initialization == "he":
        parameters = initialize_parameters_he(layers_dims)

    # Loop (gradient descent)
    for i in range(0, num_iterations):

        # Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
        a3, cache = forward_propagation(X, parameters)

        # Loss
        cost = compute_loss(a3, Y)

        # Backward propagation.
        grads = backward_propagation(X, Y, cache)

        # Update parameters.
        parameters = update_parameters(parameters, grads, learning_rate)

        # Print the loss every 1000 iterations
        if print_cost and i % 1000 == 0:
            print("Cost after iteration {}: {}".format(i, cost))
            costs.append(cost)

    # plot the loss
    plt.plot(costs)
    plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
    plt.title("Learning rate = " + str(learning_rate))
    plt.show()

    return parameters
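For orientation, here is a minimal, self-contained sketch of the LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID chain that the provided forward_propagation helper computes. This is not the init_utils implementation (which also returns a cache used by backpropagation), and the function name forward_pass_sketch is hypothetical; it only shows how the three layers compose.

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def relu(z):
    return np.maximum(0, z)

def forward_pass_sketch(X, parameters):
    # LINEAR -> RELU for the two hidden layers
    a1 = relu(np.dot(parameters["W1"], X) + parameters["b1"])
    a2 = relu(np.dot(parameters["W2"], a1) + parameters["b2"])
    # LINEAR -> SIGMOID for the binary output layer
    a3 = sigmoid(np.dot(parameters["W3"], a2) + parameters["b3"])
    return a3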
2 - Zero initialization

There are two types of parameters to initialize in a neural network:

- the weight matrices $W^{[1]}, W^{[2]}, \dots, W^{[L]}$
- the bias vectors $b^{[1]}, b^{[2]}, \dots, b^{[L]}$

Exercise: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to break symmetry, but let's try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes.

def initialize_parameters_zeros(layers_dims):
    """
    Arguments:
    layers_dims -- python array (list) containing the size of each layer.

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                  W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                  b1 -- bias vector of shape (layers_dims[1], 1)
                  ...
                  WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                  bL -- bias vector of shape (layers_dims[L], 1)
    """
    parameters = {}
    L = len(layers_dims)  # number of layers in the network

    for l in range(1, L):
        parameters["W" + str(l)] = np.zeros((layers_dims[l], layers_dims[l-1]))
        parameters["b" + str(l)] = np.zeros((layers_dims[l], 1))

    return parameters

parameters = initialize_parameters_zeros([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))

W1 = [[0. 0. 0.]
 [0. 0. 0.]]
b1 = [[0.]
 [0.]]
W2 = [[0. 0.]]
b2 = [[0.]]

parameters = model(train_X, train_Y, initialization = "zeros")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)

plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, np.squeeze(train_Y))

The model is predicting 0 for every example. In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, so you might as well be training a neural network with $n^{[l]} = 1$ for every layer; the network is no more powerful than a linear classifier such as logistic regression.

**What you should remember**:
- The weights $W^{[l]}$ should be initialized randomly to break symmetry.
- It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly.
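To see the symmetry problem concretely, here is a toy check (not part of the assignment; the tiny two-unit network, data, and learning rate below are illustrative assumptions). When every weight starts at the same constant, both hidden units compute the same activation and receive the same gradient, so their weight rows stay identical after every update:

import numpy as np

X = np.array([[1.0, -2.0], [3.0, 0.5]])            # 2 features, 2 examples
Y = np.array([[1.0, 0.0]])
W1 = np.full((2, 2), 0.5); b1 = np.zeros((2, 1))   # constant init: no symmetry breaking
W2 = np.full((1, 2), 0.5); b2 = np.zeros((1, 1))

for _ in range(100):
    # forward: LINEAR -> RELU -> LINEAR -> SIGMOID
    z1 = W1 @ X + b1
    a1 = np.maximum(0, z1)
    a2 = 1 / (1 + np.exp(-(W2 @ a1 + b2)))
    # backward (binary cross-entropy); biases left fixed to keep the sketch short
    dz2 = a2 - Y
    dW2 = dz2 @ a1.T / X.shape[1]
    dz1 = (W2.T @ dz2) * (z1 > 0)
    dW1 = dz1 @ X.T / X.shape[1]
    W1 -= 0.1 * dW1
    W2 -= 0.1 * dW2

print(W1)  # both rows are still identical: the two hidden units never differentiate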
3 - Random initialization (master this)

To break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but to very large values.

Exercise: Implement the following function to initialize your weights to large random values (scaled by *10) and your biases to zeros. Use np.random.randn(..,..) * 10 for weights and np.zeros((..,..)) for biases. We are using a fixed np.random.seed(..) to make sure your "random" weights match ours, so don't worry if running your code several times always gives you the same initial values for the parameters.

def initialize_parameters_random(layers_dims):
    """
    Arguments:
    layers_dims -- python array (list) containing the size of each layer.

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                  W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                  b1 -- bias vector of shape (layers_dims[1], 1)
                  ...
                  WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                  bL -- bias vector of shape (layers_dims[L], 1)
    """
    np.random.seed(3)  # This seed makes sure your "random" numbers will be the same as ours
    parameters = {}
    L = len(layers_dims)  # integer representing the number of layers

    for l in range(1, L):
        parameters["W" + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * 10
        parameters["b" + str(l)] = np.zeros((layers_dims[l], 1))

    return parameters

parameters = model(train_X, train_Y, initialization = "random")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)

**In summary**:
- Initializing weights to very large random values does not work well.
- Hopefully initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part!

4 - He initialization (understand this)

Finally, try "He initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar, except Xavier initialization uses a scaling factor of sqrt(1./layers_dims[l-1]) for the weights $W^{[l]}$, where He initialization would use sqrt(2./layers_dims[l-1]).)

Exercise: Implement the following function to initialize your parameters with He initialization.

Hint: This function is similar to the previous initialize_parameters_random(...). The only difference is that instead of multiplying np.random.randn(..,..) by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.

# GRADED FUNCTION: initialize_parameters_he

def initialize_parameters_he(layers_dims):
    """
    Arguments:
    layers_dims -- python array (list) containing the size of each layer.

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                  W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                  b1 -- bias vector of shape (layers_dims[1], 1)
                  ...
                  WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                  bL -- bias vector of shape (layers_dims[L], 1)
    """
    np.random.seed(3)
    parameters = {}
    L = len(layers_dims) - 1  # integer representing the number of layers

    for l in range(1, L + 1):
        ### START CODE HERE ### (≈ 2 lines of code)
        parameters["W" + str(l)] = np.random.randn(layers_dims[l], layers_dims[l-1]) * np.sqrt(2 / layers_dims[l-1])
        parameters["b" + str(l)] = np.zeros((layers_dims[l], 1))
        ### END CODE HERE ###

    return parameters
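To complete the comparison, you can train and evaluate the model with He initialization, mirroring the "zeros" and "random" cells above. This run is not included in the excerpt; the sketch below simply follows the pattern of the earlier sections. If the implementation is correct, the He-initialized model should separate the blue and red dots far better than the previous two runs.

parameters = model(train_X, train_Y, initialization = "he")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)

plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5, 1.5])
axes.set_ylim([-1.5, 1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, np.squeeze(train_Y))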