11.11. Learning Rate Scheduling


So far we have primarily focused on optimization algorithms for how to update the weight vectors rather than on the rate at which they are being updated. Nonetheless, adjusting the learning rate is often just as important as the actual algorithm. There are a number of aspects to consider:

  • Most obviously the magnitude of the learning rate matters. If it is too large, optimization diverges; if it is too small, it takes too long to train or we end up with a suboptimal result. We saw previously that the condition number of the problem matters (see Section 11.6 for details). Intuitively it is the ratio of the amount of change in the least sensitive direction vs. the most sensitive one.

  • Secondly, the rate of decay is just as important. If the learning rate remains large we may simply end up bouncing around the minimum and thus not reach optimality. Section 11.5 discussed this in some detail and we analyzed performance guarantees in Section 11.4. In short, we want the rate to decay, but probably more slowly than \(\mathcal{O}(t^{-\frac{1}{2}})\), which would be a good choice for convex problems.

  • Another aspect that is equally important is initialization. This pertains both to how the parameters are set initially (review Section 4.8 for details) and also to how they evolve initially. This goes under the moniker of warmup, i.e., how rapidly we start moving towards the solution initially. Large steps in the beginning might not be beneficial, in particular since the initial set of parameters is random. The initial update directions might be quite meaningless, too.

  • Lastly, there are a number of optimization variants that perform cyclical learning rate adjustment. This is beyond the scope of the current chapter. We recommend the reader to review details in [Izmailov et al., 2018], e.g., how to obtain better solutions by averaging over an entire path of parameters.

Given the fact that there is a lot of detail needed to manage learning rates, most deep learning frameworks have tools to deal with this automatically. In the current chapter we will review the effects that different schedules have on accuracy and also show how this can be managed efficiently via a learning rate scheduler.

11.11.1. Toy Problem

We begin with a toy problem that is cheap enough to compute easily, yet sufficiently nontrivial to illustrate some of the key aspects. For that we pick a slightly modernized version of LeNet (relu instead of sigmoid activations, max-pooling rather than average-pooling), as applied to Fashion-MNIST. Moreover, we hybridize the network for performance. Since most of the code is standard we just introduce the basics without further detailed discussion. See Chapter 6 for a refresher as needed.

%matplotlib inline
from mxnet import autograd, gluon, init, lr_scheduler, np, npx
from mxnet.gluon import nn
from d2l import mxnet as d2l

npx.set_np()

net = nn.HybridSequential()
net.add(nn.Conv2D(channels=6, kernel_size=5, padding=2, activation='relu'),
        nn.MaxPool2D(pool_size=2, strides=2),
        nn.Conv2D(channels=16, kernel_size=5, activation='relu'),
        nn.MaxPool2D(pool_size=2, strides=2),
        nn.Dense(120, activation='relu'),
        nn.Dense(84, activation='relu'),
        nn.Dense(10))
net.hybridize()
loss = gluon.loss.SoftmaxCrossEntropyLoss()
device = d2l.try_gpu()

batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size=batch_size)

# The code is almost identical to `d2l.train_ch6` defined in
# the LeNet section of the chapter on convolutional neural networks
def train(net, train_iter, test_iter, num_epochs, loss, trainer, device):
    net.initialize(force_reinit=True, ctx=device, init=init.Xavier())
    animator = d2l.Animator(xlabel='epoch', xlim=[0, num_epochs],
                            legend=['train loss', 'train acc', 'test acc'])
    for epoch in range(num_epochs):
        metric = d2l.Accumulator(3)  # train_loss, train_acc, num_examples
        for i, (X, y) in enumerate(train_iter):
            X, y = X.as_in_ctx(device), y.as_in_ctx(device)
            with autograd.record():
                y_hat = net(X)
                l = loss(y_hat, y)
            l.backward()
            trainer.step(X.shape[0])
            metric.add(l.sum(), d2l.accuracy(y_hat, y), X.shape[0])
            train_loss = metric[0] / metric[2]
            train_acc = metric[1] / metric[2]
            if (i + 1) % 50 == 0:
                animator.add(epoch + i / len(train_iter),
                             (train_loss, train_acc, None))
        test_acc = d2l.evaluate_accuracy_gpu(net, test_iter)
        animator.add(epoch + 1, (None, None, test_acc))
    print(f'train loss {train_loss:.3f}, train acc {train_acc:.3f}, '
          f'test acc {test_acc:.3f}')
%matplotlib inline
import math
import torch
from torch import nn
from torch.optim import lr_scheduler
from d2l import torch as d2l


def net_fn():
    model = nn.Sequential(
        nn.Conv2d(1, 6, kernel_size=5, padding=2), nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2),
        nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2),
        nn.Flatten(),
        nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
        nn.Linear(120, 84), nn.ReLU(),
        nn.Linear(84, 10))

    return model

loss = nn.CrossEntropyLoss()
device = d2l.try_gpu()

batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size=batch_size)

# The code is almost identical to `d2l.train_ch6` defined in
# the LeNet section of the chapter on convolutional neural networks
def train(net, train_iter, test_iter, num_epochs, loss, trainer, device,
          scheduler=None):
    net.to(device)
    animator = d2l.Animator(xlabel='epoch', xlim=[0, num_epochs],
                            legend=['train loss', 'train acc', 'test acc'])

    for epoch in range(num_epochs):
        metric = d2l.Accumulator(3)  # train_loss, train_acc, num_examples
        for i, (X, y) in enumerate(train_iter):
            net.train()
            trainer.zero_grad()
            X, y = X.to(device), y.to(device)
            y_hat = net(X)
            l = loss(y_hat, y)
            l.backward()
            trainer.step()
            with torch.no_grad():
                metric.add(l * X.shape[0], d2l.accuracy(y_hat, y), X.shape[0])
            train_loss = metric[0] / metric[2]
            train_acc = metric[1] / metric[2]
            if (i + 1) % 50 == 0:
                animator.add(epoch + i / len(train_iter),
                             (train_loss, train_acc, None))

        test_acc = d2l.evaluate_accuracy_gpu(net, test_iter)
        animator.add(epoch+1, (None, None, test_acc))

        if scheduler:
            if scheduler.__module__ == lr_scheduler.__name__:
                # Using PyTorch In-Built scheduler
                scheduler.step()
            else:
                # Using custom defined scheduler
                for param_group in trainer.param_groups:
                    param_group['lr'] = scheduler(epoch)

    print(f'train loss {train_loss:.3f}, train acc {train_acc:.3f}, '
          f'test acc {test_acc:.3f}')
%matplotlib inline
import math
import tensorflow as tf
from tensorflow.keras.callbacks import LearningRateScheduler
from d2l import tensorflow as d2l


def net():
    return tf.keras.models.Sequential([
        tf.keras.layers.Conv2D(filters=6, kernel_size=5, activation='relu',
                               padding='same'),
        tf.keras.layers.AvgPool2D(pool_size=2, strides=2),
        tf.keras.layers.Conv2D(filters=16, kernel_size=5,
                               activation='relu'),
        tf.keras.layers.AvgPool2D(pool_size=2, strides=2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(120, activation='relu'),
        tf.keras.layers.Dense(84, activation='sigmoid'),
        tf.keras.layers.Dense(10)])


batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size=batch_size)

# The code is almost identical to `d2l.train_ch6` defined in
# the LeNet section of the chapter on convolutional neural networks
def train(net_fn, train_iter, test_iter, num_epochs, lr,
          device=d2l.try_gpu(), custom_callback=False):
    device_name = device._device_name
    strategy = tf.distribute.OneDeviceStrategy(device_name)
    with strategy.scope():
        optimizer = tf.keras.optimizers.SGD(learning_rate=lr)
        loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
        net = net_fn()
        net.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
    callback = d2l.TrainCallback(net, train_iter, test_iter, num_epochs,
                                 device_name)
    if custom_callback is False:
        net.fit(train_iter, epochs=num_epochs, verbose=0,
                callbacks=[callback])
    else:
        net.fit(train_iter, epochs=num_epochs, verbose=0,
                callbacks=[callback, custom_callback])
    return net

Let us have a look at what happens if we invoke this algorithm with default settings, such as a learning rate of \(0.3\) and training for \(30\) iterations. Note how the training accuracy keeps on increasing while progress in terms of test accuracy stalls beyond a point. The gap between both curves indicates overfitting.

lr, num_epochs = 0.3, 30
net.initialize(force_reinit=True, ctx=device, init=init.Xavier())
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': lr})
train(net, train_iter, test_iter, num_epochs, loss, trainer, device)
train loss 0.160, train acc 0.940, test acc 0.882
lr, num_epochs = 0.3, 30
net = net_fn()
trainer = torch.optim.SGD(net.parameters(), lr=lr)
train(net, train_iter, test_iter, num_epochs, loss, trainer, device)
train loss 0.181, train acc 0.930, test acc 0.868
lr, num_epochs = 0.3, 30
train(net, train_iter, test_iter, num_epochs, lr)
loss 0.219, train acc 0.919, test acc 0.887
70975.9 examples/sec on /GPU:0
<tensorflow.python.keras.engine.sequential.Sequential at 0x7f74ee7de400>

11.11.2. Schedulers

One way of adjusting the learning rate is to set it explicitly at each step. This is conveniently achieved by the set_learning_rate method. We could adjust it downward after every epoch (or even after every minibatch), e.g., in a dynamic manner in response to how optimization is progressing.

trainer.set_learning_rate(0.1)
print(f'learning rate is now {trainer.learning_rate:.2f}')
learning rate is now 0.10
lr = 0.1
trainer.param_groups[0]["lr"] = lr
print(f'learning rate is now {trainer.param_groups[0]["lr"]:.2f}')
learning rate is now 0.10
lr = 0.1
dummy_model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])
dummy_model.compile(tf.keras.optimizers.SGD(learning_rate=lr), loss='mse')
print(f'learning rate is now {dummy_model.optimizer.lr.numpy():.2f}')
learning rate is now 0.10
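
Built-in tools can also automate such dynamic, progress-driven adjustment. As a minimal sketch (not part of this notebook's training loop), PyTorch's torch.optim.lr_scheduler.ReduceLROnPlateau reduces the rate once a monitored quantity stops improving; the stand-in model and the validation losses below are fabricated purely for illustration:

import torch
from torch import nn

model = nn.Linear(10, 1)  # stand-in model for illustration
trainer = torch.optim.SGD(model.parameters(), lr=0.1)
# Halve the rate once the metric fails to improve for more than
# `patience` consecutive calls to `step`
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    trainer, mode='min', factor=0.5, patience=2)
for val_loss in [1.0, 0.9, 0.9, 0.9, 0.9]:  # fabricated validation losses
    scheduler.step(val_loss)
print(f'learning rate is now {trainer.param_groups[0]["lr"]:.3f}')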

More generally we want to define a scheduler. When invoked with the number of updates it returns the appropriate value of the learning rate. Let us define a simple one that sets the learning rate to \(\eta = \eta_0 (t + 1)^{-\frac{1}{2}}\).

class SquareRootScheduler:
    def __init__(self, lr=0.1):
        self.lr = lr

    def __call__(self, num_update):
        return self.lr * pow(num_update + 1.0, -0.5)

Let us plot its behavior over a range of values.

scheduler = SquareRootScheduler(lr=0.1)
d2l.plot(np.arange(num_epochs), [scheduler(t) for t in range(num_epochs)])
scheduler = SquareRootScheduler(lr=0.1)
d2l.plot(torch.arange(num_epochs), [scheduler(t) for t in range(num_epochs)])
scheduler = SquareRootScheduler(lr=0.1)
d2l.plot(tf.range(num_epochs), [scheduler(t) for t in range(num_epochs)])

Now let us see how this plays out for training on Fashion-MNIST. We simply provide the scheduler as an additional argument to the training algorithm.

trainer = gluon.Trainer(net.collect_params(), 'sgd',
                        {'lr_scheduler': scheduler})
train(net, train_iter, test_iter, num_epochs, loss, trainer, device)
train loss 0.523, train acc 0.809, test acc 0.810
net = net_fn()
trainer = torch.optim.SGD(net.parameters(), lr)
train(net, train_iter, test_iter, num_epochs, loss, trainer, device,
      scheduler)
train loss 0.276, train acc 0.899, test acc 0.868
train(net, train_iter, test_iter, num_epochs, lr,
      custom_callback=LearningRateScheduler(scheduler))
loss 0.383, train acc 0.859, test acc 0.848
78236.4 examples/sec on /GPU:0
<tensorflow.python.keras.engine.sequential.Sequential at 0x7f7577f19100>

This worked quite a bit better than previously. Two things stand out: the curve was rather more smooth than before, and there was less overfitting. Unfortunately it is not a well-resolved question as to why certain strategies lead to less overfitting in theory. There is some argument that a smaller stepsize will lead to parameters that are closer to zero and thus simpler. However, this does not explain the phenomenon entirely since we do not really stop early but simply reduce the learning rate gently.

11.11.3. Policies

While we cannot possibly cover the entire variety of learning rate schedulers, we attempt to give a brief overview of popular policies below. Common choices are polynomial decay and piecewise constant schedules. Beyond that, cosine learning rate schedules have been found to work well empirically on some problems. Lastly, on some problems it is beneficial to warm up the optimizer prior to using large learning rates.

11.11.3.1. Factor Scheduler

One alternative to a polynomial decay would be a multiplicative one, that is \(\eta_{t+1} \leftarrow \eta_t \cdot \alpha\) for \(\alpha \in (0, 1)\). To prevent the learning rate from decaying beyond a reasonable lower bound, the update equation is often modified to \(\eta_{t+1} \leftarrow \mathop{\mathrm{max}}(\eta_{\mathrm{min}}, \eta_t \cdot \alpha)\).

class FactorScheduler:
    def __init__(self, factor=1, stop_factor_lr=1e-7, base_lr=0.1):
        self.factor = factor
        self.stop_factor_lr = stop_factor_lr
        self.base_lr = base_lr

    def __call__(self, num_update):
        self.base_lr = max(self.stop_factor_lr, self.base_lr * self.factor)
        return self.base_lr

scheduler = FactorScheduler(factor=0.9, stop_factor_lr=1e-2, base_lr=2.0)
d2l.plot(np.arange(50), [scheduler(t) for t in range(50)])
class FactorScheduler:
    def __init__(self, factor=1, stop_factor_lr=1e-7, base_lr=0.1):
        self.factor = factor
        self.stop_factor_lr = stop_factor_lr
        self.base_lr = base_lr

    def __call__(self, num_update):
        self.base_lr = max(self.stop_factor_lr, self.base_lr * self.factor)
        return self.base_lr

scheduler = FactorScheduler(factor=0.9, stop_factor_lr=1e-2, base_lr=2.0)
d2l.plot(torch.arange(50), [scheduler(t) for t in range(50)])
class FactorScheduler:
    def __init__(self, factor=1, stop_factor_lr=1e-7, base_lr=0.1):
        self.factor = factor
        self.stop_factor_lr = stop_factor_lr
        self.base_lr = base_lr

    def __call__(self, num_update):
        self.base_lr = max(self.stop_factor_lr, self.base_lr * self.factor)
        return self.base_lr

scheduler = FactorScheduler(factor=0.9, stop_factor_lr=1e-2, base_lr=2.0)
d2l.plot(tf.range(50), [scheduler(t) for t in range(50)])

This can also be accomplished by a built-in scheduler in MXNet via the lr_scheduler.FactorScheduler object. It takes a few more parameters, such as warmup period, warmup mode (linear or constant), the maximum number of desired updates, etc. Going forward we will use the built-in schedulers as appropriate and only explain their functionality here. As illustrated, it is fairly straightforward to build your own scheduler if needed.
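
For instance, a minimal sketch of the built-in counterpart of the FactorScheduler we just wrote (we believe these are valid arguments for lr_scheduler.FactorScheduler, with step=1 decaying the rate after every update, but treat the exact signature as indicative rather than definitive):

# Built-in MXNet scheduler mirroring the custom FactorScheduler above
scheduler = lr_scheduler.FactorScheduler(step=1, factor=0.9,
                                         stop_factor_lr=1e-2, base_lr=2.0)
d2l.plot(np.arange(50), [scheduler(t) for t in range(50)])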

11.11.3.2. Multi Factor Scheduler

A common strategy for training deep networks is to keep the learning rate piecewise constant and to decrease it by a given amount every so often. That is, given a set of times when to decrease the rate, such as \(s = \{5, 10, 20\}\), decrease \(\eta_{t+1} \leftarrow \eta_t \cdot \alpha\) whenever \(t \in s\). Assuming that the values are halved at each step, we can implement this as follows.

scheduler = lr_scheduler.MultiFactorScheduler(step=[15, 30], factor=0.5,
                                              base_lr=0.5)
d2l.plot(np.arange(num_epochs), [scheduler(t) for t in range(num_epochs)])
net = net_fn()
trainer = torch.optim.SGD(net.parameters(), lr=0.5)
scheduler = lr_scheduler.MultiStepLR(trainer, milestones=[15, 30], gamma=0.5)

def get_lr(trainer, scheduler):
    # Read off the current rate, then advance the optimizer and the
    # scheduler by one step so the next call sees the updated rate
    lr = scheduler.get_last_lr()[0]
    trainer.step()
    scheduler.step()
    return lr

d2l.plot(torch.arange(num_epochs), [get_lr(trainer, scheduler)
                                  for t in range(num_epochs)])
class MultiFactorScheduler:
    def __init__(self, step, factor, base_lr):
        self.step = step
        self.factor = factor
        self.base_lr = base_lr

    def __call__(self, epoch):
        if epoch in self.step:
            self.base_lr = self.base_lr * self.factor
        return self.base_lr

scheduler = MultiFactorScheduler(step=[15, 30], factor=0.5, base_lr=0.5)
d2l.plot(tf.range(num_epochs), [scheduler(t) for t in range(num_epochs)])

The intuition behind this piecewise constant learning rate schedule is that one lets optimization proceed until a stationary point has been reached in terms of the distribution of weight vectors. Then (and only then) do we decrease the rate so as to obtain a higher-quality proxy to a good local minimum. The example below shows how this can produce ever so slightly better solutions.

trainer = gluon.Trainer(net.collect_params(), 'sgd',
                        {'lr_scheduler': scheduler})
train(net, train_iter, test_iter, num_epochs, loss, trainer, device)
train loss 0.185, train acc 0.932, test acc 0.895
net = net_fn()
trainer = torch.optim.SGD(net.parameters(), lr=0.5)
scheduler = lr_scheduler.MultiStepLR(trainer, milestones=[15, 30], gamma=0.5)
train(net, train_iter, test_iter, num_epochs, loss, trainer, device,
      scheduler)
train loss 0.207, train acc 0.923, test acc 0.881
train(net, train_iter, test_iter, num_epochs, lr,
      custom_callback=LearningRateScheduler(scheduler))
loss 0.234, train acc 0.913, test acc 0.885
80066.4 examples/sec on /GPU:0
<tensorflow.python.keras.engine.sequential.Sequential at 0x7f7577dfe1c0>

11.11.3.3. Cosine Scheduler

A rather perplexing heuristic was proposed by [Loshchilov & Hutter, 2016]. It relies on the observation that we might not want to decrease the learning rate too drastically in the beginning and, moreover, that we might want to "refine" the solution in the end using a very small learning rate. This results in a cosine-like schedule with the following functional form for learning rates in the range \(t \in [0, T]\).

(11.11.1)\[\eta_t = \eta_T + \frac{\eta_0 - \eta_T}{2} \left(1 + \cos(\pi t/T)\right)\]

Here \(\eta_0\) is the initial learning rate and \(\eta_T\) is the target rate at time \(T\). Furthermore, for \(t > T\) we simply pin the value to \(\eta_T\) without increasing it again. In the following example, we set the maximum update step \(T = 20\).
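
As a quick sanity check of (11.11.1), the throwaway snippet below (not part of the notebook code) confirms that the schedule starts at \(\eta_0\) and ends at \(\eta_T\):

import math

eta_0, eta_T, T = 0.3, 0.01, 20

def eta(t):
    # Direct transcription of (11.11.1)
    return eta_T + (eta_0 - eta_T) / 2 * (1 + math.cos(math.pi * t / T))

print(eta(0), eta(T))  # 0.3 and 0.01 (up to floating point error)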

scheduler = lr_scheduler.CosineScheduler(max_update=20, base_lr=0.3,
                                         final_lr=0.01)
d2l.plot(np.arange(num_epochs), [scheduler(t) for t in range(num_epochs)])
class CosineScheduler:
    def __init__(self, max_update, base_lr=0.01, final_lr=0,
                 warmup_steps=0, warmup_begin_lr=0):
        self.base_lr_orig = base_lr
        self.max_update = max_update
        self.final_lr = final_lr
        self.warmup_steps = warmup_steps
        self.warmup_begin_lr = warmup_begin_lr
        self.max_steps = self.max_update - self.warmup_steps

    def get_warmup_lr(self, epoch):
        increase = (self.base_lr_orig - self.warmup_begin_lr) \
                       * float(epoch) / float(self.warmup_steps)
        return self.warmup_begin_lr + increase

    def __call__(self, epoch):
        if epoch < self.warmup_steps:
            return self.get_warmup_lr(epoch)
        if epoch <= self.max_update:
            self.base_lr = self.final_lr + (
                self.base_lr_orig - self.final_lr) * (1 + math.cos(
                math.pi * (epoch - self.warmup_steps) / self.max_steps)) / 2
        return self.base_lr

scheduler = CosineScheduler(max_update=20, base_lr=0.3, final_lr=0.01)
d2l.plot(torch.arange(num_epochs), [scheduler(t) for t in range(num_epochs)])
class CosineScheduler:
    def __init__(self, max_update, base_lr=0.01, final_lr=0,
                 warmup_steps=0, warmup_begin_lr=0):
        self.base_lr_orig = base_lr
        self.max_update = max_update
        self.final_lr = final_lr
        self.warmup_steps = warmup_steps
        self.warmup_begin_lr = warmup_begin_lr
        self.max_steps = self.max_update - self.warmup_steps

    def get_warmup_lr(self, epoch):
        increase = (self.base_lr_orig - self.warmup_begin_lr) \
                       * float(epoch) / float(self.warmup_steps)
        return self.warmup_begin_lr + increase

    def __call__(self, epoch):
        if epoch < self.warmup_steps:
            return self.get_warmup_lr(epoch)
        if epoch <= self.max_update:
            self.base_lr = self.final_lr + (
                self.base_lr_orig - self.final_lr) * (1 + math.cos(
                math.pi * (epoch - self.warmup_steps) / self.max_steps)) / 2
        return self.base_lr

scheduler = CosineScheduler(max_update=20, base_lr=0.3, final_lr=0.01)
d2l.plot(tf.range(num_epochs), [scheduler(t) for t in range(num_epochs)])

In the context of computer vision this schedule can lead to improved results. Note, though, that such improvements are not guaranteed (as can be seen below).

trainer = gluon.Trainer(net.collect_params(), 'sgd',
                        {'lr_scheduler': scheduler})
train(net, train_iter, test_iter, num_epochs, loss, trainer, device)
train loss 0.344, train acc 0.877, test acc 0.880
net = net_fn()
trainer = torch.optim.SGD(net.parameters(), lr=0.3)
train(net, train_iter, test_iter, num_epochs, loss, trainer, device,
      scheduler)
train loss 0.191, train acc 0.930, test acc 0.899
train(net, train_iter, test_iter, num_epochs, lr,
      custom_callback=LearningRateScheduler(scheduler))
loss 0.249, train acc 0.909, test acc 0.894
79704.0 examples/sec on /GPU:0
<tensorflow.python.keras.engine.sequential.Sequential at 0x7f7577fe3f40>

11.11.3.4. Warmup

In some cases initializing the parameters is not sufficient to guarantee a good solution. This is particularly a problem for some advanced network designs that may lead to unstable optimization problems. We could address this by choosing a sufficiently small learning rate to prevent divergence in the beginning. Unfortunately this means that progress is slow. Conversely, a large learning rate initially leads to divergence.

A rather simple fix for this dilemma is to use a warmup period during which the learning rate increases to its initial maximum, and to cool down the rate until the end of the optimization process. For simplicity one typically uses a linear increase for this purpose. This leads to a schedule of the form indicated below.

scheduler = lr_scheduler.CosineScheduler(20, warmup_steps=5, base_lr=0.3,
                                         final_lr=0.01)
d2l.plot(np.arange(num_epochs), [scheduler(t) for t in range(num_epochs)])
scheduler = CosineScheduler(20, warmup_steps=5, base_lr=0.3, final_lr=0.01)
d2l.plot(torch.arange(num_epochs), [scheduler(t) for t in range(num_epochs)])
scheduler = CosineScheduler(20, warmup_steps=5, base_lr=0.3, final_lr=0.01)
d2l.plot(tf.range(num_epochs), [scheduler(t) for t in range(num_epochs)])

Note that the network converges better initially (in particular observe the performance during the first 5 epochs).

trainer = gluon.Trainer(net.collect_params(), 'sgd',
                        {'lr_scheduler': scheduler})
train(net, train_iter, test_iter, num_epochs, loss, trainer, device)
train loss 0.349, train acc 0.874, test acc 0.854
net = net_fn()
trainer = torch.optim.SGD(net.parameters(), lr=0.3)
train(net, train_iter, test_iter, num_epochs, loss, trainer, device,
      scheduler)
train loss 0.200, train acc 0.925, test acc 0.895
train(net, train_iter, test_iter, num_epochs, lr,
      custom_callback=LearningRateScheduler(scheduler))
loss 0.268, train acc 0.901, test acc 0.878
80260.7 examples/sec on /GPU:0
<tensorflow.python.keras.engine.sequential.Sequential at 0x7f74ebee30d0>

Warmup can be applied to any scheduler (not just cosine), as the sketch below illustrates. For a more detailed discussion of learning rate schedules and many more experiments see also [Gotmare et al., 2018]. In particular they find that a warmup phase limits the amount of divergence of parameters in very deep networks. This makes intuitive sense, since we would expect significant divergence due to random initialization in those parts of the network that take the most time to make progress in the beginning.
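
To make the first point concrete, here is a minimal sketch of a hypothetical wrapper (not part of d2l) that prepends a linear warmup phase to any stateless callable scheduler from this section, such as the SquareRootScheduler:

class WarmupScheduler:
    def __init__(self, scheduler, warmup_steps=5, warmup_begin_lr=0.0):
        # `scheduler` must be stateless (e.g., SquareRootScheduler);
        # stateful ones such as FactorScheduler mutate on every call
        self.scheduler = scheduler
        self.warmup_steps = warmup_steps
        self.warmup_begin_lr = warmup_begin_lr
        self.peak_lr = scheduler(0)  # rate at which the base schedule starts

    def __call__(self, epoch):
        if epoch < self.warmup_steps:
            increase = (self.peak_lr - self.warmup_begin_lr) \
                * epoch / self.warmup_steps
            return self.warmup_begin_lr + increase
        # Delegate, shifting time by the length of the warmup phase
        return self.scheduler(epoch - self.warmup_steps)

scheduler = WarmupScheduler(SquareRootScheduler(lr=0.3), warmup_steps=5)
d2l.plot(list(range(num_epochs)), [scheduler(t) for t in range(num_epochs)])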

11.11.4. Summary

  • Decreasing the learning rate during training can lead to improved accuracy and (most perplexingly) reduced overfitting of the model.

  • A piecewise decrease of the learning rate whenever progress has plateaued is effective in practice. Essentially this ensures that we converge efficiently to a suitable solution and only then reduce the inherent variance of the parameters by reducing the learning rate.

  • Cosine schedulers are popular for some computer vision problems. See, e.g., GluonCV for details of such a scheduler.

  • A warmup period before optimization can prevent divergence.

  • Optimization serves multiple purposes in deep learning. Besides minimizing the training objective, different choices of optimization algorithms and learning rate scheduling can lead to rather different amounts of generalization and overfitting on the test set (for the same amount of training error).

11.11.5. Exercises

  1. Experiment with the optimization behavior for a given fixed learning rate. What is the best model you can obtain this way?

  2. How does convergence change if you change the exponent of the decrease in the learning rate? Use PolyScheduler for your convenience in the experiments; a framework-agnostic sketch is given after this list.

  3. Apply the cosine scheduler to large computer vision problems, e.g., training ImageNet. How does it affect performance relative to other schedulers?

  4. How long should warmup last?

  5. Can you connect optimization and sampling? Start by using results from [Welling & Teh, 2011] on Stochastic Gradient Langevin Dynamics.
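
For Exercise 2, here is a framework-agnostic sketch of a polynomial-decay scheduler in the style of the classes above (a hypothetical helper; MXNet users can instead use the built-in lr_scheduler.PolyScheduler, whose exact arguments differ):

class PolyScheduler:
    def __init__(self, base_lr=0.1, power=0.5):
        self.base_lr = base_lr
        self.power = power

    def __call__(self, num_update):
        # eta_t = eta_0 * (t + 1)^(-power); power=0.5 recovers the
        # SquareRootScheduler defined earlier
        return self.base_lr * pow(num_update + 1.0, -self.power)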