# A Comparison of Gradient Descent Methods

Batch gradient descent computes the gradient over the entire training set before each parameter update:

```python
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):  # num_iterations -- number of iterations
    # Forward propagation
    a, caches = forward_propagation(X, parameters)
    # Compute cost.
    cost = compute_cost(a, Y)
    # Backward propagation.
    grads = backward_propagation(a, caches, parameters)
    # Update parameters.
    parameters = update_parameters(parameters, grads, learning_rate)
```

Stochastic gradient descent (SGD), by contrast, updates the parameters after every single training example:

```python
X = data_input
Y = labels
permutation = list(np.random.permutation(m))
shuffled_X = X[:, permutation]
shuffled_Y = Y[:, permutation].reshape((1, m))
for i in range(0, num_iterations):
    for j in range(0, m):  # train on one example at a time
        # Forward propagation
        AL, caches = forward_propagation(shuffled_X[:, j].reshape(-1, 1), parameters)
        # Compute cost
        cost = compute_cost(AL, shuffled_Y[:, j].reshape(1, 1))
        # Backward propagation
        grads = backward_propagation(AL, shuffled_Y[:, j].reshape(1, 1), caches)
        # Update parameters.
        parameters = update_parameters(parameters, grads, learning_rate)
```

Mini-batch gradient descent sits between the two: the helper below first shuffles the training set, then partitions it into batches of `mini_batch_size` examples, keeping any leftover examples as a smaller final batch:

```python
import numpy as np

# GRADED FUNCTION: random_mini_batches
def random_mini_batches(X, Y, mini_batch_size=64, seed=0):
    """
    Creates a list of random minibatches from (X, Y)

    Arguments:
    X -- input data, of shape (input size, number of examples)
    Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
    mini_batch_size -- size of the mini-batches, integer

    Returns:
    mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
    """
    np.random.seed(seed)            # To make your "random" minibatches the same as ours
    m = X.shape[1]                  # number of training examples
    mini_batches = []

    # Step 1: Shuffle (X, Y)
    permutation = list(np.random.permutation(m))
    shuffled_X = X[:, permutation]
    shuffled_Y = Y[:, permutation].reshape((1, m))

    # Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
    num_complete_minibatches = m // mini_batch_size  # number of complete mini-batches
    for k in range(0, num_complete_minibatches):
        mini_batch_X = shuffled_X[:, k * mini_batch_size:(k + 1) * mini_batch_size]
        mini_batch_Y = shuffled_Y[:, k * mini_batch_size:(k + 1) * mini_batch_size]
        mini_batch = (mini_batch_X, mini_batch_Y)
        mini_batches.append(mini_batch)

    # Handling the end case (last mini-batch < mini_batch_size)
    if m % mini_batch_size != 0:
        mini_batch_X = shuffled_X[:, num_complete_minibatches * mini_batch_size:m]
        mini_batch_Y = shuffled_Y[:, num_complete_minibatches * mini_batch_size:m]
        mini_batch = (mini_batch_X, mini_batch_Y)
        mini_batches.append(mini_batch)

    return mini_batches
```
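As a quick sanity check of the partitioning logic (a minimal, self-contained sketch with synthetic data; the compact `make_minibatches` helper here mirrors `random_mini_batches` above so the snippet runs on its own): with 148 examples and a batch size of 64, we should get batches of 64, 64, and 20.

```python
import numpy as np

def make_minibatches(X, Y, mini_batch_size=64, seed=0):
    """Compact version of random_mini_batches above: shuffle columns, then slice."""
    np.random.seed(seed)
    m = X.shape[1]
    permutation = list(np.random.permutation(m))
    shuffled_X = X[:, permutation]
    shuffled_Y = Y[:, permutation].reshape((1, m))
    batches = []
    num_complete = m // mini_batch_size
    for k in range(num_complete):
        batches.append((shuffled_X[:, k * mini_batch_size:(k + 1) * mini_batch_size],
                        shuffled_Y[:, k * mini_batch_size:(k + 1) * mini_batch_size]))
    if m % mini_batch_size != 0:  # the smaller end-case batch
        batches.append((shuffled_X[:, num_complete * mini_batch_size:],
                        shuffled_Y[:, num_complete * mini_batch_size:]))
    return batches

# 148 examples is not a multiple of 64, so we expect batch sizes 64, 64, 20
X = np.random.randn(5, 148)
Y = np.random.randint(0, 2, (1, 148))
batches = make_minibatches(X, Y, mini_batch_size=64)
print([b[0].shape[1] for b in batches])  # [64, 64, 20]
```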

The training loop then iterates over these minibatches, incrementing the seed so the dataset is reshuffled differently after each epoch:

```python
seed = 0
for i in range(0, num_iterations):
    # Define the random minibatches. We increment the seed to reshuffle the dataset differently after each epoch
    seed = seed + 1
    minibatches = random_mini_batches(X, Y, mini_batch_size, seed)
    for minibatch in minibatches:
        # Select a minibatch
        (minibatch_X, minibatch_Y) = minibatch
        # Forward propagation
        AL, caches = forward_propagation(minibatch_X, parameters)
        # Compute cost
        cost = compute_cost(AL, minibatch_Y)
        # Backward propagation
        grads = backward_propagation(AL, minibatch_Y, caches)
        # Update parameters
        parameters = update_parameters(parameters, grads, learning_rate)
```
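To see the whole mini-batch loop end to end, here is a minimal self-contained sketch. It is an illustration, not the assignment's code: logistic regression stands in for the deep network, so forward/backward propagation and the parameter update are written out inline rather than via the `forward_propagation`/`backward_propagation`/`update_parameters` helpers. The full-dataset cost should fall as training proceeds.

```python
import numpy as np

np.random.seed(1)

# Tiny synthetic, linearly separable binary classification problem
m = 200
X = np.random.randn(2, m)
Y = (X[0, :] + X[1, :] > 0).astype(float).reshape(1, m)

# Logistic regression parameters: W is (1, 2), b is a scalar
W = np.zeros((1, 2))
b = 0.0
learning_rate = 0.5
mini_batch_size = 32

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

costs = []
for epoch in range(50):
    # Reshuffle each epoch, as in the training loop above
    perm = np.random.permutation(m)
    Xs, Ys = X[:, perm], Y[:, perm]
    for k in range(0, m, mini_batch_size):
        Xb = Xs[:, k:k + mini_batch_size]
        Yb = Ys[:, k:k + mini_batch_size]
        mb = Xb.shape[1]
        # Forward propagation
        A = sigmoid(W @ Xb + b)
        # Backward propagation: gradients of the cross-entropy cost
        dZ = A - Yb
        dW = (dZ @ Xb.T) / mb
        db = np.sum(dZ) / mb
        # Update parameters
        W -= learning_rate * dW
        b -= learning_rate * db
    # Cross-entropy cost on the full set, once per epoch
    A_full = sigmoid(W @ X + b)
    cost = -np.mean(Y * np.log(A_full + 1e-8) + (1 - Y) * np.log(1 - A_full + 1e-8))
    costs.append(cost)

print(costs[0] > costs[-1])  # True: the cost decreases over epochs
```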
