Until now, I’ve always used Gradient Descent to update the parameters and minimize the cost. In this article, I will show more advanced optimization methods that can speed up learning and perhaps even reach a better final value for the cost function. Having a good optimization algorithm can be the difference between waiting days and just a few hours to get a good result.
Gradient descent goes “downhill” on a cost function $J$. Think of it as trying to do this:
At each step of the training, you update your parameters following a certain direction to try to get to the lowest possible point.
Gradient Descent
A simple optimization method in machine learning is gradient descent (GD). When you take gradient steps with respect to all $m$ examples on each step, it is also called Batch Gradient Descent.
This section will implement the gradient descent update rule, which is, for $l = 1, …, L$:

$$W^{[l]} = W^{[l]} - \alpha \, dW^{[l]}$$

$$b^{[l]} = b^{[l]} - \alpha \, db^{[l]}$$

where $L$ is the number of layers and $\alpha$ is the learning rate. All parameters are stored in the `parameters` dictionary. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$, so we need to shift `l` to `l+1` when coding:
```python
def update_parameters_with_gd(parameters, grads, learning_rate):
    ...
```
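Filled out, a straightforward implementation might look like the following sketch, assuming `parameters` maps keys "W1", "b1", …, "WL", "bL" to numpy arrays and `grads` uses the matching "dW"/"db" keys:

```python
import numpy as np

def update_parameters_with_gd(parameters, grads, learning_rate):
    """One gradient descent step on every layer's W and b."""
    L = len(parameters) // 2  # number of layers (each layer has one W and one b)
    for l in range(L):
        # Shift l to l+1: keys are "W1", "b1", ..., "WL", "bL"
        parameters["W" + str(l + 1)] -= learning_rate * grads["dW" + str(l + 1)]
        parameters["b" + str(l + 1)] -= learning_rate * grads["db" + str(l + 1)]
    return parameters
```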
A variant of this is Stochastic Gradient Descent (SGD), which is equivalent to minibatch gradient descent where each minibatch has just 1 example. The update rule that I have just implemented does not change. What changes is that I would be computing gradients on just one training example at a time, rather than on the whole training set. The code examples below illustrate the difference between stochastic gradient descent and (batch) gradient descent.
 (Batch) Gradient Descent:
```python
X = data_input
```
 Stochastic Gradient Descent:
```python
X = data_input
```
In Stochastic Gradient Descent, I use only 1 training example before updating the gradients. When the training set is large, SGD can be faster. But the parameters will “oscillate” toward the minimum rather than converge smoothly. Here is an illustration of this:
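As a concrete toy illustration of the difference, here is a sketch fitting a one-parameter linear model both ways; the data, learning rate, and step counts are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1, 100))                  # 100 examples, one feature each (columns)
Y = 3.0 * X + 0.1 * rng.normal(size=(1, 100))  # targets around a true slope of 3.0

# (Batch) gradient descent: one parameter update per pass over all m examples.
w_batch = 0.0
for _ in range(200):
    grad = np.mean((w_batch * X - Y) * X)  # gradient averaged over the whole set
    w_batch -= 0.1 * grad

# Stochastic gradient descent: one parameter update per single example.
w_sgd = 0.0
for _ in range(10):                        # epochs
    for j in range(X.shape[1]):            # loop over individual examples
        x, y = X[0, j], Y[0, j]
        grad = (w_sgd * x - y) * x         # gradient on one example only
        w_sgd -= 0.1 * grad
```

Both runs approach the true slope, but SGD gets there with many cheap, noisy updates while batch GD takes fewer, smoother steps.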
Note also that implementing SGD requires three `for` loops in total:
- Over the number of iterations
- Over the $m$ training examples
- Over the layers (to update all parameters, from $(W^{[1]},b^{[1]})$ to $(W^{[L]},b^{[L]})$)
In practice, we’ll often get faster results if we use neither the whole training set nor only a single training example to perform each update. Minibatch gradient descent uses an intermediate number of examples for each step. With minibatch gradient descent, we loop over the minibatches instead of looping over individual training examples.
What we should remember:
- The difference between gradient descent, minibatch gradient descent and stochastic gradient descent is the number of examples we use to perform one update step.
- We have to tune a learning rate hyperparameter $\alpha$.
- With a well-tuned minibatch size, minibatch gradient descent usually outperforms either gradient descent or stochastic gradient descent (particularly when the training set is large).
Minibatch Gradient Descent
Let’s learn how to build minibatches from the training set (X, Y).
There are two steps:
- Shuffle: Create a shuffled version of the training set (X, Y) as shown below. Each column of X and Y represents a training example. Note that the random shuffling is done synchronously between X and Y, so that after the shuffling the $i^{th}$ column of X is the example corresponding to the $i^{th}$ label in Y. The shuffling step ensures that examples will be split randomly into different minibatches.
- Partition: Partition the shuffled (X, Y) into minibatches of size `mini_batch_size` (here 64). Note that the number of training examples is not always divisible by `mini_batch_size`. The last minibatch might be smaller, but you don’t need to worry about this. When the final minibatch is smaller than the full `mini_batch_size`, it will look like this:
This section will implement `random_mini_batches`. We have already coded the shuffling part. For the partitioning step, we will use the following code to select the indices for the $1^{st}$ and $2^{nd}$ minibatches:
```python
first_mini_batch_X = shuffled_X[:, 0 : mini_batch_size]
second_mini_batch_X = shuffled_X[:, mini_batch_size : 2 * mini_batch_size]
```
We should note that the last minibatch might end up smaller than `mini_batch_size=64`. Let $\lfloor s \rfloor$ represent $s$ rounded down to the nearest integer (this is `math.floor(s)` in Python). If the total number of examples is not a multiple of `mini_batch_size=64`, then there will be $\lfloor \frac{m}{mini\_batch\_size} \rfloor$ minibatches with a full 64 examples, and the number of examples in the final minibatch will be $m - mini\_batch\_size \times \lfloor \frac{m}{mini\_batch\_size} \rfloor$:
```python
def random_mini_batches(X, Y, mini_batch_size=64, seed=0):
    ...
```
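A sketch of the full function, combining the shuffle and partition steps described above (columns are examples; the `seed` argument is assumed to control NumPy’s global random state):

```python
import math
import numpy as np

def random_mini_batches(X, Y, mini_batch_size=64, seed=0):
    """Split (X, Y) into a list of random minibatches. Each column is one example."""
    np.random.seed(seed)
    m = X.shape[1]
    mini_batches = []

    # Step 1: Shuffle X and Y synchronously with a single permutation.
    permutation = list(np.random.permutation(m))
    shuffled_X = X[:, permutation]
    shuffled_Y = Y[:, permutation]

    # Step 2: Partition. There are floor(m / mini_batch_size) full-size batches.
    num_complete = math.floor(m / mini_batch_size)
    for k in range(num_complete):
        mini_batch_X = shuffled_X[:, k * mini_batch_size : (k + 1) * mini_batch_size]
        mini_batch_Y = shuffled_Y[:, k * mini_batch_size : (k + 1) * mini_batch_size]
        mini_batches.append((mini_batch_X, mini_batch_Y))

    # Handle the final, smaller batch of m - mini_batch_size * num_complete examples.
    if m % mini_batch_size != 0:
        mini_batch_X = shuffled_X[:, num_complete * mini_batch_size :]
        mini_batch_Y = shuffled_Y[:, num_complete * mini_batch_size :]
        mini_batches.append((mini_batch_X, mini_batch_Y))

    return mini_batches
```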
What we should remember:
- Shuffling and Partitioning are the two steps required to build minibatches.
- Powers of two are often chosen for the minibatch size, e.g., 16, 32, 64, 128.
Momentum
Because minibatch gradient descent makes a parameter update after seeing just a subset of examples, the direction of the update has some variance, and so the path taken by minibatch gradient descent will “oscillate” toward convergence. Using momentum can reduce these oscillations.
Momentum takes into account the past gradients to smooth out the update. We will store the ‘direction’ of the previous gradients in the variable $v$. Formally, this will be the exponentially weighted average of the gradient on previous steps. You can also think of $v$ as the “velocity” of a ball rolling downhill, building up speed (and momentum) according to the direction of the gradient/slope of the hill.
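A quick numerical sketch of this smoothing effect; the “gradient” values and noise level here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Noisy gradient readings scattered around a true value of 1.0
grads = 1.0 + rng.normal(scale=2.0, size=1000)

beta = 0.9
v = 0.0
velocities = []
for g in grads:
    v = beta * v + (1 - beta) * g   # the momentum "velocity" update
    velocities.append(v)

velocities = np.array(velocities)
# After warm-up, the velocity hugs the true gradient far more tightly than the raw signal.
print(np.std(grads), np.std(velocities[100:]))
```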
So let’s initialize the velocity. The velocity, $v$, is a Python dictionary that needs to be initialized with arrays of zeros. Its keys are the same as those in the `grads` dictionary, that is, for $l = 1, …, L$:
```python
v["dW" + str(l+1)] = ...  # numpy array of zeros with the same shape as parameters["W" + str(l+1)]
v["db" + str(l+1)] = ...  # numpy array of zeros with the same shape as parameters["b" + str(l+1)]
```
Note that the iterator `l` starts at 0 in the `for` loop while the first entries are `v["dW1"]` and `v["db1"]` (that’s a “one” in the key). This is why we shift `l` to `l+1` in the `for` loop.
```python
def initialize_velocity(parameters):
    ...
```
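A minimal implementation sketch, assuming the `parameters` key layout used throughout ("W1", "b1", …):

```python
import numpy as np

def initialize_velocity(parameters):
    """Create a zero velocity for every dW and db, matching the parameter shapes."""
    L = len(parameters) // 2  # number of layers
    v = {}
    for l in range(L):
        v["dW" + str(l + 1)] = np.zeros_like(parameters["W" + str(l + 1)])
        v["db" + str(l + 1)] = np.zeros_like(parameters["b" + str(l + 1)])
    return v
```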
Now we can implement the parameter update with momentum. The momentum update rule is, for $l = 1, …, L$:

$$v_{dW^{[l]}} = \beta \, v_{dW^{[l]}} + (1 - \beta) \, dW^{[l]}$$

$$W^{[l]} = W^{[l]} - \alpha \, v_{dW^{[l]}}$$

$$v_{db^{[l]}} = \beta \, v_{db^{[l]}} + (1 - \beta) \, db^{[l]}$$

$$b^{[l]} = b^{[l]} - \alpha \, v_{db^{[l]}}$$

where $L$ is the number of layers, $\beta$ is the momentum and $\alpha$ is the learning rate. All parameters are stored in the `parameters` dictionary. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$ (that’s a “one” on the superscript). So you will need to shift `l` to `l+1` when coding.
```python
def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):
    ...
```
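Filled out, the momentum update might look like this sketch (same dictionary key conventions as before):

```python
import numpy as np

def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):
    """One momentum step: update each velocity, then move the parameters along it."""
    L = len(parameters) // 2
    for l in range(L):
        # Exponentially weighted average of the gradients
        v["dW" + str(l + 1)] = beta * v["dW" + str(l + 1)] + (1 - beta) * grads["dW" + str(l + 1)]
        v["db" + str(l + 1)] = beta * v["db" + str(l + 1)] + (1 - beta) * grads["db" + str(l + 1)]
        # Move in the direction of the velocity
        parameters["W" + str(l + 1)] -= learning_rate * v["dW" + str(l + 1)]
        parameters["b" + str(l + 1)] -= learning_rate * v["db" + str(l + 1)]
    return parameters, v
```

With `beta = 0` this reduces exactly to the plain gradient descent update from earlier.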
Note that:
- The velocity is initialized with zeros, so the algorithm will take a few iterations to “build up” velocity and start taking bigger steps.
- If $\beta = 0$, this just becomes standard gradient descent without momentum.
How do you choose $\beta$?
- The larger the momentum $\beta$ is, the smoother the update, because it takes the past gradients into account more. But if $\beta$ is too big, it can also smooth out the updates too much.
- Common values for $\beta$ range from 0.8 to 0.999. If you don’t feel inclined to tune this, $\beta = 0.9$ is often a reasonable default.
- Tuning the optimal $\beta$ for your model might require trying several values to see what works best in terms of reducing the value of the cost function $J$.
What you should remember:
- Momentum takes past gradients into account to smooth out the steps of gradient descent. It can be applied with batch gradient descent, minibatch gradient descent or stochastic gradient descent.
- You have to tune a momentum hyperparameter $\beta$ and a learning rate $\alpha$.
Adam
Adam is one of the most effective optimization algorithms for training neural networks. It combines ideas from RMSProp and Momentum.
How does Adam work?
- It calculates an exponentially weighted average of past gradients, and stores it in variables $v$ (before bias correction) and $v^{corrected}$ (with bias correction).
- It calculates an exponentially weighted average of the squares of the past gradients, and stores it in variables $s$ (before bias correction) and $s^{corrected}$ (with bias correction).
- It updates parameters in a direction that combines the information from the first two steps.
The update rule is, for $l = 1, …, L$:

$$v_{dW^{[l]}} = \beta_1 \, v_{dW^{[l]}} + (1 - \beta_1) \, dW^{[l]}$$

$$v^{corrected}_{dW^{[l]}} = \frac{v_{dW^{[l]}}}{1 - (\beta_1)^t}$$

$$s_{dW^{[l]}} = \beta_2 \, s_{dW^{[l]}} + (1 - \beta_2) \, (dW^{[l]})^2$$

$$s^{corrected}_{dW^{[l]}} = \frac{s_{dW^{[l]}}}{1 - (\beta_2)^t}$$

$$W^{[l]} = W^{[l]} - \alpha \, \frac{v^{corrected}_{dW^{[l]}}}{\sqrt{s^{corrected}_{dW^{[l]}}} + \varepsilon}$$

(and similarly for $b^{[l]}$, using $db^{[l]}$)
where:
- $t$ counts the number of Adam steps taken
- $L$ is the number of layers
- $\beta_1$ and $\beta_2$ are hyperparameters that control the two exponentially weighted averages
- $\alpha$ is the learning rate
- $\varepsilon$ is a very small number used to avoid division by zero
As usual, we will store all parameters in the `parameters` dictionary.
First, we need to initialize the Adam variables $v, s$, which keep track of the past information. The variables $v, s$ are Python dictionaries that need to be initialized with arrays of zeros. Their keys are the same as for `grads`, that is, for $l = 1, …, L$:
```python
v["dW" + str(l+1)] = ...  # numpy array of zeros with the same shape as parameters["W" + str(l+1)]
```
`initialize_adam`:

```python
def initialize_adam(parameters):
    ...
```
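A sketch of the full function, mirroring `initialize_velocity` but returning both accumulators:

```python
import numpy as np

def initialize_adam(parameters):
    """Zero first-moment (v) and second-moment (s) accumulators for every layer."""
    L = len(parameters) // 2
    v, s = {}, {}
    for l in range(L):
        v["dW" + str(l + 1)] = np.zeros_like(parameters["W" + str(l + 1)])
        v["db" + str(l + 1)] = np.zeros_like(parameters["b" + str(l + 1)])
        s["dW" + str(l + 1)] = np.zeros_like(parameters["W" + str(l + 1)])
        s["db" + str(l + 1)] = np.zeros_like(parameters["b" + str(l + 1)])
    return v, s
```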
Now, we can implement the parameter update with Adam, following the update rule given above for $l = 1, …, L$. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift `l` to `l+1` when coding.
```python
def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate=0.01,
                                beta1=0.9, beta2=0.999, epsilon=1e-8):
    ...
```
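A sketch of the full Adam step; the default values for `beta1`, `beta2`, and `epsilon` here are the common ones from the Adam paper, not necessarily those used in the original code:

```python
import numpy as np

def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate=0.01,
                                beta1=0.9, beta2=0.999, epsilon=1e-8):
    """One Adam step over all layers; t is the 1-based count of steps taken so far."""
    L = len(parameters) // 2
    v_corrected, s_corrected = {}, {}
    for l in range(L):
        for key, pkey in (("dW" + str(l + 1), "W" + str(l + 1)),
                          ("db" + str(l + 1), "b" + str(l + 1))):
            # Moving average of the gradients, then bias correction
            v[key] = beta1 * v[key] + (1 - beta1) * grads[key]
            v_corrected[key] = v[key] / (1 - beta1 ** t)
            # Moving average of the squared gradients, then bias correction
            s[key] = beta2 * s[key] + (1 - beta2) * grads[key] ** 2
            s_corrected[key] = s[key] / (1 - beta2 ** t)
            # Parameter update
            parameters[pkey] = parameters[pkey] - learning_rate * v_corrected[key] / (
                np.sqrt(s_corrected[key]) + epsilon)
    return parameters, v, s
```

Note that at $t = 1$ the bias correction exactly cancels the $(1 - \beta)$ factors, so the very first step already has the right scale.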
You now have three working optimization algorithms (minibatch gradient descent, Momentum, Adam). Let’s implement a model with each of these optimizers and observe the difference.
Model with different optimization algorithms
Let’s use the following “moons” dataset to test the different optimization methods. (The dataset is named “moons” because the data from each of the two classes looks a bit like a crescent-shaped moon.)
```python
train_X, train_Y = load_dataset()
```
We have already implemented a 3-layer neural network. You will train it with:

- Minibatch Gradient Descent: it will call your function `update_parameters_with_gd()`
- Minibatch Momentum: it will call your functions `initialize_velocity()` and `update_parameters_with_momentum()`
- Minibatch Adam: it will call your functions `initialize_adam()` and `update_parameters_with_adam()`
```python
def model(X, Y, layers_dims, optimizer, learning_rate=0.0007, mini_batch_size=64, beta=0.9,
          beta1=0.9, beta2=0.999, epsilon=1e-8, num_epochs=10000, print_cost=True):
    ...
```
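The real `model()` wraps the 3-layer network. As a self-contained stand-in, here is the same training-loop shape (shuffle, partition, forward, backward, optimizer dispatch) with a one-layer logistic regression inlined; the forward/backward code here replaces, and does not reproduce, the real 3-layer helpers:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def model(X, Y, optimizer, learning_rate=0.0007, mini_batch_size=64,
          beta=0.9, beta1=0.9, beta2=0.999, epsilon=1e-8, num_epochs=100, seed=0):
    """Training loop with the optimizer chosen by a string flag, as in the text."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W, b = np.zeros((1, n)), 0.0
    vW, vb = np.zeros_like(W), 0.0   # momentum velocity / Adam first moment
    sW, sb = np.zeros_like(W), 0.0   # Adam second moment
    t = 0
    for _ in range(num_epochs):
        perm = rng.permutation(m)                   # shuffle ...
        for k in range(0, m, mini_batch_size):      # ... then partition
            idx = perm[k : k + mini_batch_size]
            Xb, Yb = X[:, idx], Y[:, idx]
            A = sigmoid(W @ Xb + b)                 # forward pass
            dZ = A - Yb                             # backward pass (cross-entropy loss)
            dW, db = dZ @ Xb.T / Xb.shape[1], float(np.mean(dZ))
            if optimizer == "gd":
                W, b = W - learning_rate * dW, b - learning_rate * db
            elif optimizer == "momentum":
                vW = beta * vW + (1 - beta) * dW
                vb = beta * vb + (1 - beta) * db
                W, b = W - learning_rate * vW, b - learning_rate * vb
            elif optimizer == "adam":
                t += 1
                vW = beta1 * vW + (1 - beta1) * dW
                vb = beta1 * vb + (1 - beta1) * db
                sW = beta2 * sW + (1 - beta2) * dW ** 2
                sb = beta2 * sb + (1 - beta2) * db ** 2
                W = W - learning_rate * (vW / (1 - beta1 ** t)) / (np.sqrt(sW / (1 - beta2 ** t)) + epsilon)
                b = b - learning_rate * (vb / (1 - beta1 ** t)) / (np.sqrt(sb / (1 - beta2 ** t)) + epsilon)
    return W, b
```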
I will now run this 3-layer neural network with each of the three optimization methods: minibatch gradient descent, minibatch gradient descent with momentum, and minibatch gradient descent with Adam. (The cost plots for the three runs are omitted; the table below summarizes the results.)
Summary
| optimization method | accuracy | cost shape |
| --- | --- | --- |
| Gradient descent | 79.7% | oscillations |
| Momentum | 79.7% | oscillations |
| Adam | 94% | smoother |
Momentum usually helps, but given the small learning rate and the simplistic dataset, its impact here is almost negligible. Also, the huge oscillations in the cost come from the fact that some minibatches are more difficult than others for the optimization algorithm.

Adam, on the other hand, clearly outperforms minibatch gradient descent and Momentum. If you run the model for more epochs on this simple dataset, all three methods will lead to very good results. However, you’ve seen that Adam converges a lot faster.
Some advantages of Adam include:
- Relatively low memory requirements (though higher than gradient descent and gradient descent with momentum)
- Usually works well even with little tuning of hyperparameters (except $\alpha$)
References:
- Adam paper: https://arxiv.org/pdf/1412.6980.pdf