To derive the backpropagation algorithm, and to see how each layer's parameter gradients are computed and which values they depend on, we use a simple network architecture:

  • An input layer with 2 nodes
  • One hidden layer with 2 nodes and a ReLU activation
  • One output node with a linear activation (i.e., no activation function)

Assume the weight matrices and biases are as follows:

  • Input-to-hidden weight matrix $W_1$: $2 \times 2$
  • Hidden-layer bias vector $b_1$: $2 \times 1$
  • Hidden-to-output weight matrix $W_2$: $2 \times 1$
  • Output-layer bias $b_2$: a scalar

The input is $x = [x_1, x_2]$, the target output is $y$, and the loss function is the mean squared error (MSE).

Forward propagation (a code sketch of these steps follows the list):

  1. Compute the hidden-layer pre-activation:
    $z_1 = W_1 \cdot x + b_1$
  2. Compute the hidden-layer activation:
    $a_1 = \text{ReLU}(z_1)$
  3. Compute the output-layer pre-activation:
    $z_2 = W_2^T \cdot a_1 + b_2$
  4. Output value:
    $\hat{y} = z_2$
  5. Compute the loss:
    $L = \frac{1}{2} (\hat{y} - y)^2$
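
As a minimal NumPy sketch of these steps (the helper names `relu`, `forward`, and `mse_loss` are our own choices; $W_2$ is stored as a flat array of shape `(2,)` so that `W2 @ a1` computes the dot product $W_2^T \cdot a_1$ from step 3):

```python
import numpy as np

def relu(z):
    """Element-wise ReLU: max(0, z)."""
    return np.maximum(0.0, z)

def forward(x, W1, b1, W2, b2):
    """Forward pass of the 2-2-1 network described above."""
    z1 = W1 @ x + b1   # hidden pre-activation, shape (2,)
    a1 = relu(z1)      # hidden activation, shape (2,)
    z2 = W2 @ a1 + b2  # output pre-activation (scalar)
    y_hat = z2         # linear output layer: no activation
    return z1, a1, z2, y_hat

def mse_loss(y_hat, y):
    """Squared-error loss with the 1/2 factor used in the derivation."""
    return 0.5 * (y_hat - y) ** 2
```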

Backpropagation (a matching code sketch follows the list):

  1. Gradient at the output layer:

    • Gradient of the loss with respect to the output pre-activation (the factor $\frac{1}{2}$ cancels the 2 from differentiating the square):
      $\frac{\partial L}{\partial z_2} = \hat{y} - y$
  2. Gradients from the output layer back to the hidden layer:

    • Gradient of the loss with respect to the output weights:
      $\frac{\partial L}{\partial W_2} = \frac{\partial L}{\partial z_2} \cdot a_1$
    • Gradient of the loss with respect to the output bias:
      $\frac{\partial L}{\partial b_2} = \frac{\partial L}{\partial z_2}$
  3. Gradients at the hidden layer:

    • Gradient of the loss with respect to the hidden activation:
      $\frac{\partial L}{\partial a_1} = W_2 \cdot \frac{\partial L}{\partial z_2}$
    • Gradient of the loss with respect to the hidden pre-activation (element-wise through the ReLU):
      $\frac{\partial L}{\partial z_1} = \frac{\partial L}{\partial a_1} \cdot \text{ReLU}'(z_1)$
      • The ReLU derivative $\text{ReLU}'(z_1)$ is 1 where $z_1 > 0$ and 0 otherwise
  4. Gradients from the hidden layer back to the input layer:

    • Gradient of the loss with respect to the first-layer weights:
      $\frac{\partial L}{\partial W_1} = \frac{\partial L}{\partial z_1} \cdot x^T$
    • Gradient of the loss with respect to the first-layer biases:
      $\frac{\partial L}{\partial b_1} = \frac{\partial L}{\partial z_1}$
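
The backward pass can be transcribed directly from these formulas; a minimal sketch continuing the NumPy code above (the `backward` name and its argument list are our own conventions, matching the `forward` helper):

```python
def backward(x, y, z1, a1, y_hat, W2):
    """Gradients of the loss w.r.t. every parameter, applying the
    chain-rule formulas listed above."""
    dz2 = y_hat - y                        # dL/dz2 (scalar)
    dW2 = dz2 * a1                         # dL/dW2, shape (2,)
    db2 = dz2                              # dL/db2 (scalar)
    da1 = W2 * dz2                         # dL/da1, shape (2,)
    dz1 = da1 * (z1 > 0).astype(z1.dtype)  # ReLU'(z1): 1 where z1 > 0, else 0
    dW1 = np.outer(dz1, x)                 # dL/dW1 = dL/dz1 · x^T, shape (2, 2)
    db1 = dz1                              # dL/db1, shape (2,)
    return dW1, db1, dW2, db2
```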

Detailed worked example:

Assume:

  • $x = [1, 2]$
  • $y = 3$
  • $W_1 = \begin{bmatrix} 0.5 & 0.2 \\ 0.3 & 0.7 \end{bmatrix}$
  • $b_1 = \begin{bmatrix} 0.1 \\ 0.2 \end{bmatrix}$
  • $W_2 = \begin{bmatrix} 0.6 \\ 0.9 \end{bmatrix}$
  • $b_2 = 0.3$

Forward propagation:
1.
$z_1 = W_1 \cdot x + b_1 = \begin{bmatrix} 0.5 & 0.2 \\ 0.3 & 0.7 \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 2 \end{bmatrix} + \begin{bmatrix} 0.1 \\ 0.2 \end{bmatrix} = \begin{bmatrix} 1.0 \\ 1.9 \end{bmatrix}$
2.
$a_1 = \text{ReLU}(z_1) = \text{ReLU}\!\left(\begin{bmatrix} 1.0 \\ 1.9 \end{bmatrix}\right) = \begin{bmatrix} 1.0 \\ 1.9 \end{bmatrix}$
3.
$z_2 = W_2^T \cdot a_1 + b_2 = \begin{bmatrix} 0.6 \\ 0.9 \end{bmatrix}^T \cdot \begin{bmatrix} 1.0 \\ 1.9 \end{bmatrix} + 0.3 = 0.6 + 1.71 + 0.3 = 2.61$
4.
$\hat{y} = z_2 = 2.61$
5.
$L = \frac{1}{2} (2.61 - 3)^2 = 0.07605$
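
Plugging the assumed values into the `forward` and `mse_loss` helpers sketched earlier reproduces each number:

```python
x  = np.array([1.0, 2.0])
y  = 3.0
W1 = np.array([[0.5, 0.2],
               [0.3, 0.7]])
b1 = np.array([0.1, 0.2])
W2 = np.array([0.6, 0.9])
b2 = 0.3

z1, a1, z2, y_hat = forward(x, W1, b1, W2, b2)
print(z1)                  # ≈ [1.0, 1.9]
print(y_hat)               # ≈ 2.61
print(mse_loss(y_hat, y))  # ≈ 0.07605
```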

Backpropagation:
1.
$\frac{\partial L}{\partial z_2} = 2.61 - 3 = -0.39$

2.
$\frac{\partial L}{\partial W_2} = -0.39 \cdot \begin{bmatrix} 1.0 \\ 1.9 \end{bmatrix} = \begin{bmatrix} -0.39 \cdot 1.0 \\ -0.39 \cdot 1.9 \end{bmatrix} = \begin{bmatrix} -0.39 \\ -0.741 \end{bmatrix}$
$\frac{\partial L}{\partial b_2} = -0.39$

3.
$\frac{\partial L}{\partial a_1} = \begin{bmatrix} 0.6 \\ 0.9 \end{bmatrix} \cdot (-0.39) = \begin{bmatrix} -0.234 \\ -0.351 \end{bmatrix}$
$\frac{\partial L}{\partial z_1} = \frac{\partial L}{\partial a_1} \cdot \text{ReLU}'(z_1) = \begin{bmatrix} -0.234 \\ -0.351 \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} -0.234 \\ -0.351 \end{bmatrix}$

4.
$\frac{\partial L}{\partial W_1} = \frac{\partial L}{\partial z_1} \cdot x^T = \begin{bmatrix} -0.234 \\ -0.351 \end{bmatrix} \cdot \begin{bmatrix} 1 & 2 \end{bmatrix} = \begin{bmatrix} -0.234 & -0.468 \\ -0.351 & -0.702 \end{bmatrix}$
$\frac{\partial L}{\partial b_1} = \begin{bmatrix} -0.234 \\ -0.351 \end{bmatrix}$
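
The same gradients fall out of the `backward` sketch, and a central finite difference on a single weight provides an independent check (continuing the code above; `eps` and the checked index `W1[0, 0]` are arbitrary illustrative choices):

```python
dW1, db1, dW2, db2 = backward(x, y, z1, a1, y_hat, W2)
print(dW2)  # ≈ [-0.39, -0.741]
print(dW1)  # ≈ [[-0.234, -0.468], [-0.351, -0.702]]

# Independent check: central finite difference on W1[0, 0]
eps = 1e-6
Wp, Wm = W1.copy(), W1.copy()
Wp[0, 0] += eps
Wm[0, 0] -= eps
L_plus  = mse_loss(forward(x, Wp, b1, W2, b2)[-1], y)
L_minus = mse_loss(forward(x, Wm, b1, W2, b2)[-1], y)
print((L_plus - L_minus) / (2 * eps))  # ≈ -0.234, matching dW1[0, 0]
```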

As the example shows, each layer's gradients depend on the previous layer's activations and on the loss gradient flowing into the current layer. The chain rule propagates these gradients backward step by step, starting from the loss and ending at the weights and biases of the first layer.
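
These gradients are exactly what a gradient-descent update consumes; with a purely illustrative learning rate of 0.1, one step would look like the following, after which rerunning the forward pass yields a smaller loss:

```python
lr = 0.1  # learning rate chosen purely for illustration
W1 -= lr * dW1
b1 -= lr * db1
W2 -= lr * dW2
b2 -= lr * db2
# Rerunning forward() with the updated parameters reduces the loss.
```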
