【Deep Learning AI】Gradient Checking


Sometimes we are not sure whether our backward propagation is implemented correctly. In that case, we can use gradient checking to test it.

The mathematical basis is the definition of the derivative: we approximate the gradient with the two-sided difference

gradapprox = (J(θ + ε) − J(θ − ε)) / (2ε)

As epsilon approaches 0, this expression becomes the derivative of J with respect to θ.
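To make the formula concrete, here is a minimal standalone sketch; the toy cost function J(theta) = theta ** 2 and the point theta = 3 are illustrative choices, not part of the assignment.

def J(theta):
    # Toy cost function; its analytic derivative is dJ/dtheta = 2 * theta
    return theta ** 2

theta = 3.0
epsilon = 1e-7

# Two-sided (centered) difference approximation of the derivative
gradapprox = (J(theta + epsilon) - J(theta - epsilon)) / (2 * epsilon)
print(gradapprox)  # approximately 6.0, matching the analytic derivative 2 * theta = 6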

N-dimensional gradient checking

Our parameter matrices are stored in a Python dict called parameters. To run the check, we need to convert them into a single vector: every entry of every matrix is placed into one column vector. The gradients dict is flattened into a vector in the same way.
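The assignment provides helper functions called dictionary_to_vector, gradients_to_vector, and vector_to_dictionary for this; they are used by the code below but their implementations are not shown in this post. A rough sketch of the idea, with an assumed key order and assumed shape bookkeeping, might look like this:

import numpy as np

def dictionary_to_vector_sketch(parameters):
    # Flatten every parameter matrix into a column and stack them into one long vector.
    # The fixed key order "W1", "b1", ... is an assumption for illustration.
    keys = ["W1", "b1", "W2", "b2", "W3", "b3"]
    columns = [parameters[key].reshape(-1, 1) for key in keys]
    return np.concatenate(columns, axis=0)

def vector_to_dictionary_sketch(theta, shapes):
    # Inverse operation: cut the flat vector back into matrices, given each matrix's shape.
    parameters = {}
    start = 0
    for key, shape in shapes.items():
        size = shape[0] * shape[1]
        parameters[key] = theta[start:start + size].reshape(shape)
        start += size
    return parameters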

Steps to compute J_plus[i]:

1. Set θ⁺ = np.copy(parameters_values), i.e. deep-copy the whole parameter vector.

2. Add epsilon to the i-th component only: θ⁺[i] = θ⁺[i] + epsilon.

3. Compute J_plus[i] with forward_propagation_n(X, Y, vector_to_dictionary(θ⁺)).

4. Apply the same steps to J_minus[i], subtracting epsilon instead.

5. Estimate the gradient of each parameter: gradapprox[i] = (J_plus[i] − J_minus[i]) / (2 * epsilon).

6. Compute the difference between gradapprox and the gradient obtained from backward propagation.

The norms are computed with np.linalg.norm(x, ord):

difference = ||grad − gradapprox||_2 / (||grad||_2 + ||gradapprox||_2)

When the difference is smaller than 10^-7, the gradient computed by backward propagation can be considered correct. A toy example of this difference computation and the full implementation follow below.
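As an illustration of the difference formula on its own, with made-up vectors (not values from the assignment):

import numpy as np

grad = np.array([[0.3], [1.2], [-0.5]])               # gradients from backprop (made-up values)
gradapprox = np.array([[0.3], [1.2 + 1e-8], [-0.5]])  # numerically approximated gradients (made-up values)

numerator = np.linalg.norm(grad - gradapprox, ord=2)                           # ||grad - gradapprox||_2
denominator = np.linalg.norm(grad, ord=2) + np.linalg.norm(gradapprox, ord=2)  # ||grad||_2 + ||gradapprox||_2
difference = numerator / denominator
print(difference)  # on the order of 1e-9 here, well below the 1e-7 threshold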

import numpy as np

def gradient_check_n(parameters, gradients, X, Y, epsilon=1e-7):
    """
    Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n

    Arguments:
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
    gradients -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters
    X -- input datapoint, of shape (input size, 1)
    Y -- true "label"
    epsilon -- tiny shift to the input to compute the approximated gradient with formula (1)

    Returns:
    difference -- difference (2) between the approximated gradient and the backward propagation gradient
    """

    # Set-up variables
    parameters_values, _ = dictionary_to_vector(parameters)
    grad = gradients_to_vector(gradients)
    num_parameters = parameters_values.shape[0]
    J_plus = np.zeros((num_parameters, 1))
    J_minus = np.zeros((num_parameters, 1))
    gradapprox = np.zeros((num_parameters, 1))

    # Compute gradapprox
    for i in range(num_parameters):
        # Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]".
        # "_" is used because forward_propagation_n returns two values but we only need the cost.
        thetaplus = np.copy(parameters_values)                                       # Step 1
        thetaplus[i][0] = thetaplus[i][0] + epsilon                                  # Step 2
        J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus))  # Step 3

        # Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]".
        thetaminus = np.copy(parameters_values)                                        # Step 1
        thetaminus[i][0] = thetaminus[i][0] - epsilon                                  # Step 2
        J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus))  # Step 3

        # Compute gradapprox[i]
        gradapprox[i] = (J_plus[i] - J_minus[i]) / (2 * epsilon)

    # Compare gradapprox to backward propagation gradients by computing difference.
    numerator = np.linalg.norm(grad - gradapprox, ord=2)                           # Step 1'
    denominator = np.linalg.norm(grad, ord=2) + np.linalg.norm(gradapprox, ord=2)  # Step 2'
    difference = numerator / denominator                                           # Step 3'

    if difference > 1e-7:
        print("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
    else:
        print("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")

    return difference
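For context, a typical call looks roughly like the following; the exact signatures of forward_propagation_n and backward_propagation_n are assumed from the course notebook, and X, Y, parameters come from its setup code:

# Run one forward/backward pass, then check the gradients numerically.
cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y, epsilon=1e-7)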

Gradient checking is very slow, so we do not run it while training the model; we only check at a few iterations.

Gradient checking also does not work together with dropout.

We first compute the gradients with backpropagation and verify them with gradient checking, and only afterwards apply dropout to the layers.

