
Checking neural network gradients with the finite-difference method isn't working

Asked by a Stack Overflow user on 2021-02-23 01:11:19
2 answers · 276 views · 0 followers · score 2


Bounty (expires tomorrow): answers to this question are eligible for a +300 reputation bounty. Ryan wants to draw more attention to this question:

A Colab notebook can be shared if more detail is needed, although I believe everything is here.

After a full week of print statements, dimensional analysis, refactoring, and talking through the code out loud, I can say I am completely stuck.

The gradients my cost function produces are too far from the gradients produced by finite differences.

I have already confirmed that my cost function produces correct costs for both regularized and unregularized input. Here is the cost function:

Code language: Python
# NOTE: sigmoid and sigmoidGradient are assumed to be defined elsewhere (e.g., in the Colab notebook)
import numpy as np
import pandas as pd

def nnCost(nn_params, X, y, lambda_, input_layer_size, hidden_layer_size, num_labels):
  # reshape parameter/weight vectors to suit network size
  Theta1 = np.reshape(nn_params[:hidden_layer_size * (input_layer_size + 1)], (hidden_layer_size, (input_layer_size + 1)))
  Theta2 = np.reshape(nn_params[(hidden_layer_size * (input_layer_size+1)):], (num_labels, (hidden_layer_size + 1)))

  if lambda_ is None:
    lambda_ = 0

  # grab number of observations
  m = X.shape[0]
  
  # init variables we must return
  cost = 0
  Theta1_grad = np.zeros(Theta1.shape)
  Theta2_grad = np.zeros(Theta2.shape)

  # one-hot encode the vector y
  y_mtx = pd.get_dummies(y.ravel()).to_numpy() 

  ones = np.ones((m, 1))
  X = np.hstack((ones, X))
  
  # layer 1
  a1 = X
  z2 = Theta1@a1.T
  # layer 2
  ones_l2 = np.ones((y.shape[0], 1))
  a2 = np.hstack((ones_l2, sigmoid(z2.T)))
  z3 = Theta2@a2.T
  # layer 3
  a3 = sigmoid(z3)

  reg_term = (lambda_/(2*m)) * (np.sum(np.sum(np.multiply(Theta1, Theta1))) + np.sum(np.sum(np.multiply(Theta2,Theta2))) - np.subtract((Theta1[:,0].T@Theta1[:,0]),(Theta2[:,0].T@Theta2[:,0])))
  cost = (1/m) * np.sum((-np.log(a3).T * (y_mtx) - np.log(1-a3).T * (1-y_mtx))) + reg_term
  
  # BACKPROPAGATION
  # δ3 equals the difference between a3 and the y_matrix
  d3 = a3 - y_mtx.T
  # δ2 equals the product of δ3 and Θ2 (ignoring the Θ2 bias units) multiplied element-wise by the g′() of z2 (computed back in Step 2).
  d2 = Theta2[:,1:].T@d3 * sigmoidGradient(z2)
  # Δ1 equals the product of δ2 and a1.
  Delta1 = d2@a1
  Delta1 /= m
  # Δ2 equals the product of δ3 and a2.
  Delta2 = d3@a2
  Delta2 /= m
  
  reg_term1 = (lambda_/m) * np.append(np.zeros((Theta1.shape[0],1)), Theta1[:,1:], axis=1)
  reg_term2 = (lambda_/m) * np.append(np.zeros((Theta2.shape[0],1)), Theta2[:,1:], axis=1)
  
  Theta1_grad = Delta1 + reg_term1
  Theta2_grad = Delta2 + reg_term2
  
  grad = np.append(Theta1_grad.ravel(), Theta2_grad.ravel())
  
  return cost, grad

Here is the code that checks the gradients. I have gone over every line of it and cannot think of anything that needs changing; it appears to be in working order.

Code language: Python
def checkNNGradients(lambda_):
  """
  Creates a small neural network to check the backpropagation gradients. 
  Credit: Based on the MATLAB code provided by Dr. Andrew Ng, Stanford Univ.
  
  Input: Regularization parameter, lambda, as int or float.
  
  Output: Analytical gradients produced by backprop code and the numerical gradients (computed
  using computeNumericalGradient). These two gradient computations should result in 
  very similar values. 
  """

  input_layer_size = 3
  hidden_layer_size = 5
  num_labels = 3
  m = 5

  # generate 'random' test data
  Theta1 = debugInitializeWeights(hidden_layer_size, input_layer_size)
  Theta2 = debugInitializeWeights(num_labels, hidden_layer_size)

  # reusing debugInitializeWeights to generate X
  X  = debugInitializeWeights(m, input_layer_size - 1)
  y  = np.ones(m) + np.remainder(np.arange(m), num_labels)


  # unroll parameters
  nn_params = np.append(Theta1.ravel(), Theta2.ravel())
  costFunc = lambda p: nnCost(p, X, y, lambda_, input_layer_size, hidden_layer_size, num_labels)
    
  cost, grad = costFunc(nn_params)
    
  numgrad = computeNumericalGradient(costFunc, nn_params)

  # examine the two gradient computations; two columns should be very similar. 
  print('The columns below should be very similar.\n')
   
  # Credit: http://stackoverflow.com/a/27663954/583834
  print('{:<25}{}'.format('Numerical Gradient', 'Analytical Gradient'))
  for numerical, analytical in zip(numgrad, grad):
    print('{:<25}{}'.format(numerical, analytical))


  # If you have a correct implementation, and assuming you used EPSILON = 0.0001 
  # in computeNumericalGradient.m, then diff below should be less than 1e-9
  diff = np.linalg.norm(numgrad-grad)/np.linalg.norm(numgrad+grad)
  print(diff)
  print("\n")
  print('If your backpropagation implementation is correct, then \n' \
          'the relative difference will be small (less than 1e-9). \n' \
          '\nRelative Difference: {:.10f}'.format(diff))

The check function uses the debugInitializeWeights function (so this is a reproducible example; just run it and it will call the other functions), and then calls the function that computes the gradient using finite differences. Both are below.

Code language: Python
def debugInitializeWeights(fan_out, fan_in):
  """
  Initializes the weights of a layer with fan_in
  incoming connections and fan_out outgoing connections using a fixed
  strategy.

  Input: fan_out, number of outgoing connections for a layer as int; fan_in, number
  of incoming connections for the same layer as int. 
  
  Output: Weight matrix, W, of size(1 + fan_in, fan_out), as the first row of W handles the "bias" terms
  """
  W = np.zeros((fan_out, 1 + fan_in))
  # Initialize W using "sin", this ensures that the values in W are of similar scale;
  # this will be useful for debugging
  W = np.sin(range(1, np.size(W)+1)) / 10 
  return W.reshape(fan_out, fan_in+1)

def computeNumericalGradient(J, nn_params):
  """
  Computes the gradient using "finite differences"
  and provides a numerical estimate of the gradient (i.e.,
  gradient of the function J around theta).
  Credit: Based on the MATLAB code provided by Dr. Andrew Ng, Stanford Univ. 

  Inputs: Cost, J, as computed by nnCost function; Parameter vector, theta.

  Output: Gradient vector using finite differences. Per Dr. Ng, 
  'Sets numgrad(i) to (a numerical approximation of) the partial derivative of 
  J with respect to the i-th input argument, evaluated at theta. (i.e., numgrad(i) should 
  be the (approximately) the partial derivative of J with respect
  to theta(i).)'          
  """
  numgrad = np.zeros(nn_params.shape)
  perturb = np.zeros(nn_params.shape)
  e = .0001
  for i in range(np.size(nn_params)):
      # Set perturbation (i.e., noise) vector
      perturb[i] = e
      # run cost fxn w/ noise added to and subtracted from parameters theta in nn_params
      cost1, grad1 = J((nn_params - perturb))
      cost2, grad2 = J((nn_params + perturb))
      # record the difference in cost function outputs; this is the numerical gradient
      numgrad[i] = (cost2 - cost1) / (2*e)
      perturb[i] = 0

  return numgrad
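
For reference, a minimal usage sketch (my own addition, not part of the original post), assuming all of the functions above plus sigmoid and sigmoidGradient are defined in the same session:

Code language: Python
# Hypothetical usage: run the gradient check with and without regularization.
# checkNNGradients prints the two gradient columns and their relative difference,
# which indicates how closely backprop and finite differences agree.
checkNNGradients(0.0)   # unregularized
checkNNGradients(3.0)   # regularized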

This code is not for a class; that MOOC was in MATLAB and has ended. This is for me. Other solutions exist on the web, but examining them has proven fruitless, since everyone takes a different (and hard to follow) approach. So I really need help, or a miracle.

Edit/Update: Fortran ordering when raveling the vectors affects the results, but I have not been able to get the two gradients to move into agreement by changing that option.
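
For illustration (my own sketch, not part of the original post), the reason ordering matters is that ravel and reshape must use the same memory order; mixing NumPy's default C order with MATLAB-style Fortran order silently scrambles the unrolled parameter vector:

Code language: Python
import numpy as np

Theta = np.arange(6).reshape(2, 3)            # small 2x3 weight matrix

flat_c = Theta.ravel(order='C')               # row-major unroll (NumPy default)
flat_f = Theta.ravel(order='F')               # column-major unroll (MATLAB-style)

# Reshaping with the *same* order recovers the original matrix...
assert np.array_equal(np.reshape(flat_c, (2, 3), order='C'), Theta)
assert np.array_equal(np.reshape(flat_f, (2, 3), order='F'), Theta)

# ...but mixing orders scrambles the elements, which would misalign the
# analytical and numerical gradients element by element.
print(np.reshape(flat_f, (2, 3), order='C'))  # not equal to Theta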


Answered by a Stack Overflow user (accepted answer), posted 2021-02-28 22:19:51

One thought: I think your perturbation is a bit large (1e-4). For double-precision floats it should be more like the square root of machine precision, roughly 1e-8 (or are you working in single precision?!).

That said, finite differences can be a very bad approximation to the true derivative. Specifically, floating-point computations in NumPy are not deterministic, as you seem to have discovered. In some cases the noise in the evaluations can cancel out many significant digits. What values are you seeing, and what were you expecting?
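
As a hedged illustration of the step-size point (my own sketch, not part of the original answer; central_diff is a hypothetical helper for a scalar-valued function), NumPy exposes machine epsilon via np.finfo, so the suggested step can be computed directly:

Code language: Python
import numpy as np

eps = np.finfo(np.float64).eps      # ~2.22e-16 for double precision
step = np.sqrt(eps)                 # ~1.49e-8, versus the 1e-4 used in computeNumericalGradient

def central_diff(J, theta, i, e=step):
    """Central-difference estimate of dJ/dtheta_i for a scalar-valued J."""
    perturb = np.zeros_like(theta)
    perturb[i] = e
    return (J(theta + perturb) - J(theta - perturb)) / (2 * e)

# Toy check against a known derivative: d/dx of sum(x**2) is 2*x
theta = np.array([0.3, -1.2, 2.5])
J = lambda t: np.sum(t**2)
print(central_diff(J, theta, 1), 2 * theta[1])   # both approximately -2.4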

Score: 2
Page content originally provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/66320255