Abstract
A well-chosen initialization can speed up the convergence of gradient descent and increase the odds of gradient descent converging to a lower training (and generalization) error. In this article, we will explore the impact of different parameter initialization strategies on training in deep learning.
In this experiment, we will try three different initialization strategies:
- Zeros initialization
- Random initialization
- He initialization
Load dataset
To get started, we need to load the dataset for our experiment:
import numpy as np
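# A sketch of the rest of the loading code; load_dataset() is assumed to be
# provided by the course's init_utils helper module.
from init_utils import load_dataset

# load the train/test splits (the scatter of blue and red dots)
train_X, train_Y, test_X, test_Y = load_dataset()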
Our goal is to train a model to classify the blue dots and the red dots.
Neural Network model
We will use a 3-layer neural network (already implemented in init_utils). Here are the initialization methods we will experiment with:
- Zeros initialization — setting initialization = "zeros" in the input argument.
- Random initialization — setting initialization = "random" in the input argument. This initializes the weights to large random values.
- He initialization — setting initialization = "he" in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015.
The model code is as follows:
def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"):
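    # A sketch of the body; forward_propagation, backward_propagation,
    # compute_loss and update_parameters are assumed to be provided
    # by init_utils.
    costs = []
    layers_dims = [X.shape[0], 10, 5, 1]  # a 3-layer network

    # choose the initialization strategy from the input argument
    if initialization == "zeros":
        parameters = initialize_parameters_zeros(layers_dims)
    elif initialization == "random":
        parameters = initialize_parameters_random(layers_dims)
    elif initialization == "he":
        parameters = initialize_parameters_he(layers_dims)

    # plain gradient descent
    for i in range(num_iterations):
        a3, cache = forward_propagation(X, parameters)
        cost = compute_loss(a3, Y)
        grads = backward_propagation(X, Y, cache)
        parameters = update_parameters(parameters, grads, learning_rate)
        if print_cost and i % 1000 == 0:
            print("Cost after iteration {}: {}".format(i, cost))
            costs.append(cost)

    return parameters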
Zero initialization
There are two types of parameters to initialize in a neural network:
- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, \dots, W^{[L-1]}, W^{[L]})$
- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, \dots, b^{[L-1]}, b^{[L]})$
Exercise: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but let's try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes.
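One possible implementation is sketched below; layers_dims is assumed to be a Python list holding the size of each layer (for example [2, 10, 5, 1] for this network):

def initialize_parameters_zeros(layers_dims):
    """Initialize every weight matrix and bias vector to zeros."""
    parameters = {}
    L = len(layers_dims)  # number of layers, counting the input layer

    for l in range(1, L):
        # np.zeros((..,..)) with the correct shapes, as suggested above
        parameters["W" + str(l)] = np.zeros((layers_dims[l], layers_dims[l - 1]))
        parameters["b" + str(l)] = np.zeros((layers_dims[l], 1))

    return parameters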
Run the following code to train our model for 15,000 iterations using zeros initialization:
def train_with_zeros_init():
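    # A sketch of the body; model() is defined above, and predict() is
    # assumed to be provided by init_utils.
    parameters = model(train_X, train_Y, initialization = "zeros")
    print("On the train set:")
    predictions_train = predict(train_X, train_Y, parameters)
    print("On the test set:")
    predictions_test = predict(test_X, test_Y, parameters)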
The outputs are as follows:
The performance is really bad: the cost does not decrease, and the algorithm performs no better than random guessing. Why? From the predictions and the decision boundary, we can see that the model is predicting 0 for every example.
In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing; you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression.
What we should remember
- The weights $W^{[l]}$ should be initialized randomly to break symmetry.
- It is, however, okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly.
Random initialization
To break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but to very large values.
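One possible implementation follows (a sketch, not necessarily the assignment's exact solution; the factor of 10 is what makes the random values large):

def initialize_parameters_random(layers_dims):
    """Initialize weights to large random values (note the *10) and biases to zeros."""
    np.random.seed(3)  # seeded only so the results are reproducible
    parameters = {}
    L = len(layers_dims)

    for l in range(1, L):
        # standard-normal draws scaled up by 10: deliberately large weights
        parameters["W" + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) * 10
        parameters["b" + str(l)] = np.zeros((layers_dims[l], 1))

    return parameters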
Run the following code to train our model for 15,000 iterations using random initialization:
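A sketch of what that code presumably looked like, with predict again assumed to come from init_utils:

parameters = model(train_X, train_Y, initialization = "random")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)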
And the outputs are as follows:
We can see "inf" as the cost after iteration 0; this is because of numerical roundoff, and a more numerically sophisticated implementation would fix it. But this isn't worth worrying about for our purposes.
Anyway, it looks like we have broken symmetry, and this gives better results than before. The model is no longer outputting all 0s.
Observations:
- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets an example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity (see the short demo after this list).
- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm.
- If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.
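To make the first observation concrete, here is a tiny standalone NumPy demo (my own illustration, not part of the assignment code): a large-magnitude pre-activation saturates the sigmoid, and the cross-entropy loss for a mislabeled example blows up to infinity.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Large weights easily produce a pre-activation like z = -50 for an example
a = sigmoid(-50.0)
print(a)               # ~2e-22: the sigmoid has saturated near 0

y = 1                  # but suppose the true label is 1
loss = -(y * np.log(a) + (1 - y) * np.log(1 - a))
print(loss)            # ~50: a huge loss from this single example

# With an even more extreme pre-activation, sigmoid underflows to exactly 0.0
print(-np.log(sigmoid(-800.0)))   # inf: the "inf" cost seen above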
In summary
- Initializing weights to very large random values does not work well.
- Hopefully initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part!
He initialization
Finally, try "He initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar, except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of sqrt(1./layers_dims[l-1]), where He initialization would use sqrt(2./layers_dims[l-1]).)
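A sketch of one possible implementation, differing from the random version only in the sqrt(2./layers_dims[l-1]) scaling (and in dropping the *10 factor):

def initialize_parameters_he(layers_dims):
    """Initialize weights with He scaling, sqrt(2 / fan_in), and biases to zeros."""
    np.random.seed(3)
    parameters = {}
    L = len(layers_dims)

    for l in range(1, L):
        fan_in = layers_dims[l - 1]
        parameters["W" + str(l)] = np.random.randn(layers_dims[l], fan_in) * np.sqrt(2.0 / fan_in)
        parameters["b" + str(l)] = np.zeros((layers_dims[l], 1))

    return parameters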
And run the following code to train with He initialization:
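Again a sketch, mirroring the earlier training calls:

parameters = model(train_X, train_Y, initialization = "he")
print("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)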
The outputs are as follows:
Observations:
- The model with He initialization separates the blue and the red dots very well in a small number of iterations.
Conclusions
We have seen three different types of initializations. For the same number of iterations and the same hyperparameters, the comparison is:
| Model | Train accuracy | Problem/Comment |
| --- | --- | --- |
| 3-layer NN with zeros initialization | 50% | fails to break symmetry |
| 3-layer NN with large random initialization | 83% | too large weights |
| 3-layer NN with He initialization | 99% | recommended method |
What we should remember from this blog
- Different initializations lead to different results.
- Random initialization is used to break symmetry and make sure different hidden units can learn different things.
- Don't initialize to values that are too large.
- He initialization works well for networks with ReLU activations.