PyTorch Backpropagation Computation

This post walks through the forward-pass and backward-pass computations of a neural network with a simple derivation, then verifies the results in PyTorch.

1. Forward Pass

As shown below, we define a simple neural network with one hidden layer and carry out the computation.

Inputs: $x_{1}=0.5, x_{2}=1.0$
Weights: $w_{1}=1.0, w_{2}=0.5, w_{3}=0.5, w_{4}=0.7, w_{5}=1.0, w_{6}=2.0$
Network output: $y'$

Assume the ground-truth target is $y=0.8$.

We first run a forward pass through the network; the computation proceeds as follows:

$$
\begin{aligned}
& h_{1}^{(1)} = x_{1}w_{1} + x_{2}w_{2} = 0.5\times 1.0 + 1.0\times 0.5 = 1.0 \\
& h_{2}^{(1)} = x_{1}w_{3} + x_{2}w_{4} = 0.5\times 0.5 + 1.0\times 0.7 = 0.95 \\
& y' = h_{1}^{(1)}w_{5} + h_{2}^{(1)}w_{6} = 1.0\times 1.0 + 0.95\times 2.0 = 2.9
\end{aligned}
$$
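
As a quick sanity check, the same arithmetic can be reproduced with plain tensors (a minimal sketch; the variable names here are ad hoc, not part of the network defined later):

import torch

# inputs and weights taken from the derivation above
x1, x2 = torch.tensor(0.5), torch.tensor(1.0)
w1, w2, w3, w4, w5, w6 = 1.0, 0.5, 0.5, 0.7, 1.0, 2.0

h1 = x1 * w1 + x2 * w2      # expect 1.0
h2 = x1 * w3 + x2 * w4      # expect 0.95
y_pred = h1 * w5 + h2 * w6  # expect 2.9
print(h1, h2, y_pred)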

2. Backward Pass

The forward pass produced the final output $y'=2.9$; we now plug it into the loss function to obtain the network's loss:

$$
\delta = \frac{1}{2}(y-y')^{2} = 0.5\times(0.8-2.9)^{2} = 2.205
$$

2.1 Gradients of $w_{5},w_{6}$

Next we backpropagate from this loss. We start with the gradients of $w_{5}$ and $w_{6}$; by the chain rule,

$$
\begin{aligned}
& \frac{\partial \delta}{\partial w_{5}} = \frac{\partial \delta}{\partial y'} \times \frac{\partial y'}{\partial w_{5}} \\
& \frac{\partial \delta}{\partial w_{6}} = \frac{\partial \delta}{\partial y'} \times \frac{\partial y'}{\partial w_{6}}
\end{aligned}
$$

From $\delta = \frac{1}{2}(y-y')^{2}$, we get

$$
\frac{\partial \delta}{\partial y'} = 2\times \frac{1}{2}\times (y-y')\times (-1) = y'-y = 2.9-0.8 = 2.1
$$
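
This derivative is easy to confirm with autograd; below is a minimal sketch where `y_pred` is a stand-alone scalar standing in for $y'$:

import torch

y_pred = torch.tensor(2.9, requires_grad=True)
delta = 0.5 * (0.8 - y_pred) ** 2  # loss from section 2
delta.backward()
print(y_pred.grad)                 # expect tensor(2.1000)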

From $y' = h_{1}^{(1)}w_{5} + h_{2}^{(1)}w_{6}$, we get

$$
\begin{aligned}
& \frac{\partial y'}{\partial w_{5}} = h_{1}^{(1)} + 0 = 1.0 \\
& \frac{\partial y'}{\partial w_{6}} = 0 + h_{2}^{(1)} = 0.95
\end{aligned}
$$

So the gradients of $w_{5}$ and $w_{6}$ are

$$
\begin{aligned}
& \frac{\partial \delta}{\partial w_{5}} = \frac{\partial \delta}{\partial y'} \times \frac{\partial y'}{\partial w_{5}} = 2.1\times 1.0 = 2.1 \\
& \frac{\partial \delta}{\partial w_{6}} = \frac{\partial \delta}{\partial y'} \times \frac{\partial y'}{\partial w_{6}} = 2.1\times 0.95 = 1.995
\end{aligned}
$$
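
The same two numbers fall out of `torch.autograd.grad` applied to the output layer alone; in this minimal sketch, `h` and `w` bundle $h_{1}^{(1)}, h_{2}^{(1)}$ and $w_{5}, w_{6}$ into vectors for brevity:

import torch

h = torch.tensor([1.0, 0.95])                     # h1^(1), h2^(1) from the forward pass
w = torch.tensor([1.0, 2.0], requires_grad=True)  # w5, w6
y_pred = (h * w).sum()                            # y' = 2.9
delta = 0.5 * (0.8 - y_pred) ** 2
grad, = torch.autograd.grad(delta, w)
print(grad)                                       # expect tensor([2.1000, 1.9950])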

2.2 Gradients of $w_{1},w_{2},w_{3},w_{4}$

These gradients pass through the hidden layer, so the chain rule picks up one extra factor; the numeric values are evaluated below.

$$
\begin{aligned}
& \frac{\partial \delta}{\partial w_{1}} = \frac{\partial \delta}{\partial y'} \times \frac{\partial y'}{\partial h_{1}^{(1)}} \times \frac{\partial h_{1}^{(1)}}{\partial w_{1}} \\
& \frac{\partial \delta}{\partial w_{2}} = \frac{\partial \delta}{\partial y'} \times \frac{\partial y'}{\partial h_{1}^{(1)}} \times \frac{\partial h_{1}^{(1)}}{\partial w_{2}} \\
& \frac{\partial \delta}{\partial w_{3}} = \frac{\partial \delta}{\partial y'} \times \frac{\partial y'}{\partial h_{2}^{(1)}} \times \frac{\partial h_{2}^{(1)}}{\partial w_{3}} \\
& \frac{\partial \delta}{\partial w_{4}} = \frac{\partial \delta}{\partial y'} \times \frac{\partial y'}{\partial h_{2}^{(1)}} \times \frac{\partial h_{2}^{(1)}}{\partial w_{4}}
\end{aligned}
$$
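
Plugging in $\frac{\partial y'}{\partial h_{1}^{(1)}} = w_{5} = 1.0$, $\frac{\partial y'}{\partial h_{2}^{(1)}} = w_{6} = 2.0$, $\frac{\partial h_{1}^{(1)}}{\partial w_{1}} = \frac{\partial h_{2}^{(1)}}{\partial w_{3}} = x_{1} = 0.5$, and $\frac{\partial h_{1}^{(1)}}{\partial w_{2}} = \frac{\partial h_{2}^{(1)}}{\partial w_{4}} = x_{2} = 1.0$, the four gradients evaluate to

$$
\begin{aligned}
& \frac{\partial \delta}{\partial w_{1}} = 2.1\times 1.0\times 0.5 = 1.05 \\
& \frac{\partial \delta}{\partial w_{2}} = 2.1\times 1.0\times 1.0 = 2.1 \\
& \frac{\partial \delta}{\partial w_{3}} = 2.1\times 2.0\times 0.5 = 2.1 \\
& \frac{\partial \delta}{\partial w_{4}} = 2.1\times 2.0\times 1.0 = 4.2
\end{aligned}
$$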

3. PyTorch in Practice

Now let's verify this with a PyTorch program. The code below defines exactly the same network as above:

import torch
import torch.nn as nn
import torch.optim as optim

# Build a simple network: six scalar weights, no biases
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.w1 = nn.Linear(1, 1, bias=False)
        self.w2 = nn.Linear(1, 1, bias=False)
        self.w3 = nn.Linear(1, 1, bias=False)
        self.w4 = nn.Linear(1, 1, bias=False)
        self.w5 = nn.Linear(1, 1, bias=False)
        self.w6 = nn.Linear(1, 1, bias=False)

    def forward(self, x1, x2):
        h_1_1 = self.w1(x1) + self.w2(x2)
        h_1_2 = self.w3(x1) + self.w4(x2)
        y = self.w5(h_1_1) + self.w6(h_1_2)
        return y

# Define the inputs
x1 = torch.tensor([0.5], requires_grad=True)
x2 = torch.tensor([1.0], requires_grad=True)
# Define the target
y = torch.tensor([0.8])
# Define the model and set its weights to the values used above
model = Net()
model.w1.weight.data.fill_(1.0)
model.w2.weight.data.fill_(0.5)
model.w3.weight.data.fill_(0.5)
model.w4.weight.data.fill_(0.7)
model.w5.weight.data.fill_(1.0)
model.w6.weight.data.fill_(2.0)

optimizer = optim.SGD(model.parameters(), lr=0.01)
optimizer.zero_grad()
output = model(x1, x2)
# MSELoss computes (y' - y)^2, so scaling by 0.5 matches the loss δ above
loss = 0.5 * nn.MSELoss()(output, y)
loss.backward()

print(loss)  # δ = 2.205

Let's inspect the gradients that backpropagation computed:

for name, param in model.named_parameters():
    print(f"Parameter:{name},  Value:{param.data},  Gradient:{param.grad}")
Parameter:w1.weight,  Value:tensor([[1.]]),  Gradient:tensor([[1.0500]])
Parameter:w2.weight,  Value:tensor([[0.5000]]),  Gradient:tensor([[2.1000]])
Parameter:w3.weight,  Value:tensor([[0.5000]]),  Gradient:tensor([[2.1000]])
Parameter:w4.weight,  Value:tensor([[0.7000]]),  Gradient:tensor([[4.2000]])
Parameter:w5.weight,  Value:tensor([[1.]]),  Gradient:tensor([[2.1000]])
Parameter:w6.weight,  Value:tensor([[2.]]),  Gradient:tensor([[1.9950]])
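
As a cross-check, all six gradients can be recomputed directly from the hand-derived chain-rule factors (a plain-Python sketch; the dictionary layout is mine, not part of any API):

# upstream gradient: dδ/dy' = y' - y
dy = 2.9 - 0.8
manual_grads = {
    "w1": dy * 1.0 * 0.5,  # x w5 x x1 -> 1.05
    "w2": dy * 1.0 * 1.0,  # x w5 x x2 -> 2.1
    "w3": dy * 2.0 * 0.5,  # x w6 x x1 -> 2.1
    "w4": dy * 2.0 * 1.0,  # x w6 x x2 -> 4.2
    "w5": dy * 1.0,        # x h1 -> 2.1
    "w6": dy * 0.95,       # x h2 -> 1.995
}
print(manual_grads)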

All six gradients match the hand-derived values. Now we apply one SGD step to update the parameters:

optimizer.step()
for name, param in model.named_parameters():
    print(f"Parameter:{name},  Value:{param.data},  Gradient:{param.grad}")
Parameter:w1.weight,  Value:tensor([[0.9895]]),  Gradient:tensor([[1.0500]])
Parameter:w2.weight,  Value:tensor([[0.4790]]),  Gradient:tensor([[2.1000]])
Parameter:w3.weight,  Value:tensor([[0.4790]]),  Gradient:tensor([[2.1000]])
Parameter:w4.weight,  Value:tensor([[0.6580]]),  Gradient:tensor([[4.2000]])
Parameter:w5.weight,  Value:tensor([[0.9790]]),  Gradient:tensor([[2.1000]])
Parameter:w6.weight,  Value:tensor([[1.9800]]),  Gradient:tensor([[1.9950]])
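
Each updated value is plain SGD, $w \leftarrow w - \eta\,\frac{\partial \delta}{\partial w}$ with learning rate $\eta=0.01$; for example,

$$
w_{1} \leftarrow 1.0 - 0.01\times 1.05 = 0.9895, \qquad
w_{6} \leftarrow 2.0 - 0.01\times 1.995 = 1.98005 \approx 1.9800
$$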