.. _sec_natural-language-inference-bert:

Natural Language Inference: Fine-Tuning BERT
==============================================

In earlier sections of this chapter, we designed an attention-based
architecture (:numref:`sec_natural-language-inference-attention`) for the
natural language inference task on the SNLI dataset
(:numref:`sec_natural-language-inference-and-dataset`). Now we revisit this
task by fine-tuning BERT. As discussed in :numref:`sec_finetuning-bert`,
natural language inference is a sequence-level text-pair classification
problem, and fine-tuning BERT only requires an additional MLP-based
architecture, as illustrated in :numref:`fig_nlp-map-nli-bert`.

.. _fig_nlp-map-nli-bert:

.. figure:: ../img/nlp-map-nli-bert.svg

   Feeding pretrained BERT to an MLP-based architecture for natural language
   inference

This section downloads a pretrained small version of BERT and then fine-tunes
it for natural language inference on the SNLI dataset.
.. code:: python

    # MXNet implementation
    import json
    import multiprocessing
    import os
    from mxnet import gluon, np, npx
    from mxnet.gluon import nn
    from d2l import mxnet as d2l

    npx.set_np()
.. code:: python

    # PyTorch implementation
    import json
    import multiprocessing
    import os
    import torch
    from torch import nn
    from d2l import torch as d2l
.. code:: python

    # PaddlePaddle implementation
    import warnings
    from d2l import paddle as d2l

    warnings.filterwarnings("ignore")
    import json
    import multiprocessing
    import os
    import paddle
    from paddle import nn
Loading Pretrained BERT
------------------------

We have explained how to pretrain BERT on the WikiText-2 dataset in
:numref:`sec_bert-dataset` and :numref:`sec_bert-pretraining` (note that the
original BERT model was pretrained on much larger corpora). As discussed in
:numref:`sec_bert-pretraining`, the original BERT model has hundreds of
millions of parameters. Below we provide two versions of pretrained BERT:
"bert.base" is about as big as the original BERT base model and requires a lot
of computational resources to fine-tune, while "bert.small" is a small version
that makes demonstration easier.
.. code:: python

    # MXNet implementation
    d2l.DATA_HUB['bert.base'] = (d2l.DATA_URL + 'bert.base.zip',
                                 '7b3820b35da691042e5d34c0971ac3edbd80d3f4')
    d2l.DATA_HUB['bert.small'] = (d2l.DATA_URL + 'bert.small.zip',
                                  'a4e718a47137ccd1809c9107ab4f5edd317bae2c')
.. code:: python

    # PyTorch implementation
    d2l.DATA_HUB['bert.base'] = (d2l.DATA_URL + 'bert.base.torch.zip',
                                 '225d66f04cae318b841a13d32af3acc165f253ac')
    d2l.DATA_HUB['bert.small'] = (d2l.DATA_URL + 'bert.small.torch.zip',
                                  'c72329e68a732bef0452e4b96a1c341c8910f81f')
.. code:: python

    # PaddlePaddle implementation
    d2l.DATA_HUB['bert_small'] = (
        'https://paddlenlp.bj.bcebos.com/models/bert.small.paddle.zip',
        '9fcde07509c7e87ec61c640c1b277509c7e87ec6153d9041758e4')
    d2l.DATA_HUB['bert_base'] = (
        'https://paddlenlp.bj.bcebos.com/models/bert.base.paddle.zip',
        '9fcde07509c7e87ec61c640c1b27509c7e87ec61753d9041758e4')
Either pretrained BERT model contains a "vocab.json" file that defines the
vocabulary and a "pretrained.params" file with the pretrained parameters. We
implement the following ``load_pretrained_model`` function to load the
pretrained BERT parameters.
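Before defining that function, the following minimal sketch peeks at the
downloaded vocabulary file. It is only illustrative: it assumes the
"bert.small" entry registered above (MXNet or PyTorch; the Paddle entry is
"bert_small"), and the printed values are indicative rather than captured
output.

.. code:: python

    # A minimal, illustrative sketch; not captured output.
    data_dir = d2l.download_extract('bert.small')
    with open(os.path.join(data_dir, 'vocab.json')) as f:
        idx_to_token = json.load(f)  # index-to-token mapping, as used below
    print(len(idx_to_token))         # vocabulary size of the small BERT model
    print(list(idx_to_token)[:3])    # the first few entries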
.. code:: python

    # MXNet implementation
    def load_pretrained_model(pretrained_model, num_hiddens, ffn_num_hiddens,
                              num_heads, num_layers, dropout, max_len, devices):
        data_dir = d2l.download_extract(pretrained_model)
        # Define an empty vocabulary to load the predefined vocabulary
        vocab = d2l.Vocab()
        vocab.idx_to_token = json.load(open(os.path.join(data_dir,
            'vocab.json')))
        vocab.token_to_idx = {token: idx for idx, token in enumerate(
            vocab.idx_to_token)}
        bert = d2l.BERTModel(len(vocab), num_hiddens, ffn_num_hiddens,
                             num_heads, num_layers, dropout, max_len)
        # Load pretrained BERT parameters
        bert.load_parameters(os.path.join(data_dir, 'pretrained.params'),
                             ctx=devices)
        return bert, vocab
.. code:: python

    # PyTorch implementation
    def load_pretrained_model(pretrained_model, num_hiddens, ffn_num_hiddens,
                              num_heads, num_layers, dropout, max_len, devices):
        data_dir = d2l.download_extract(pretrained_model)
        # Define an empty vocabulary to load the predefined vocabulary
        vocab = d2l.Vocab()
        vocab.idx_to_token = json.load(open(os.path.join(data_dir,
            'vocab.json')))
        vocab.token_to_idx = {token: idx for idx, token in enumerate(
            vocab.idx_to_token)}
        bert = d2l.BERTModel(len(vocab), num_hiddens, norm_shape=[256],
                             ffn_num_input=256, ffn_num_hiddens=ffn_num_hiddens,
                             num_heads=4, num_layers=2, dropout=0.2,
                             max_len=max_len, key_size=256, query_size=256,
                             value_size=256, hid_in_features=256,
                             mlm_in_features=256, nsp_in_features=256)
        # Load pretrained BERT parameters
        bert.load_state_dict(torch.load(os.path.join(data_dir,
                                                     'pretrained.params')))
        return bert, vocab
.. code:: python

    # PaddlePaddle implementation
    def load_pretrained_model(pretrained_model, num_hiddens, ffn_num_hiddens,
                              num_heads, num_layers, dropout, max_len, devices):
        data_dir = d2l.download_extract(pretrained_model)
        # Define an empty vocabulary to load the predefined vocabulary
        vocab = d2l.Vocab()
        vocab.idx_to_token = json.load(open(os.path.join(data_dir,
            'vocab.json')))
        vocab.token_to_idx = {token: idx for idx, token in enumerate(
            vocab.idx_to_token)}
        bert = d2l.BERTModel(len(vocab), num_hiddens, norm_shape=[256],
                             ffn_num_input=256, ffn_num_hiddens=ffn_num_hiddens,
                             num_heads=4, num_layers=2, dropout=0.2,
                             max_len=max_len, key_size=256, query_size=256,
                             value_size=256, hid_in_features=256,
                             mlm_in_features=256, nsp_in_features=256)
        # Load pretrained BERT parameters
        bert.set_state_dict(paddle.load(os.path.join(data_dir,
                                                     'pretrained.pdparams')))
        return bert, vocab
To facilitate demonstration on most machines, we will load and fine-tune the
small version ("bert.small") of pretrained BERT in this section. In the
exercises, we will show how to fine-tune the much larger "bert.base" to
significantly improve the testing accuracy.
.. code:: python

    # MXNet implementation
    devices = d2l.try_all_gpus()
    bert, vocab = load_pretrained_model(
        'bert.small', num_hiddens=256, ffn_num_hiddens=512, num_heads=4,
        num_layers=2, dropout=0.1, max_len=512, devices=devices)

.. parsed-literal::
    :class: output

    Downloading ../data/bert.small.zip from http://d2l-data.s3-accelerate.amazonaws.com/bert.small.zip...
    [07:02:11] ../src/storage/storage.cc:196: Using Pooled (Naive) StorageManager for CPU
    [07:02:12] ../src/storage/storage.cc:196: Using Pooled (Naive) StorageManager for GPU
    [07:02:12] ../src/storage/storage.cc:196: Using Pooled (Naive) StorageManager for GPU
.. code:: python

    # PyTorch implementation
    devices = d2l.try_all_gpus()
    bert, vocab = load_pretrained_model(
        'bert.small', num_hiddens=256, ffn_num_hiddens=512, num_heads=4,
        num_layers=2, dropout=0.1, max_len=512, devices=devices)

.. parsed-literal::
    :class: output

    Downloading ../data/bert.small.torch.zip from http://d2l-data.s3-accelerate.amazonaws.com/bert.small.torch.zip...
.. code:: python

    # PaddlePaddle implementation
    devices = d2l.try_all_gpus()
    bert, vocab = load_pretrained_model(
        'bert_small', num_hiddens=256, ffn_num_hiddens=512, num_heads=4,
        num_layers=2, dropout=0.1, max_len=512, devices=devices)

.. parsed-literal::
    :class: output

    Downloading ../data/bert.small.paddle.zip from https://paddlenlp.bj.bcebos.com/models/bert.small.paddle.zip...
    W0818 09:06:42.244998  3053 gpu_resources.cc:61] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.8, Runtime API Version: 11.8
    W0818 09:06:42.275718  3053 gpu_resources.cc:91] device: 0, cuDNN Version: 8.7.
The Dataset for Fine-Tuning BERT
---------------------------------

For the downstream task of natural language inference on the SNLI dataset, we
define a customized dataset class ``SNLIBERTDataset``. In each example, the
premise and hypothesis form a pair of text sequences and are packed into one
BERT input sequence as depicted in :numref:`fig_bert-two-seqs`. Recall from
:numref:`subsec_bert_input_rep` that segment IDs are used to distinguish the
premise and the hypothesis in a BERT input sequence. With the predefined
maximum length of a BERT input sequence (``max_len``), the last token of the
longer of the input text pair keeps getting removed until ``max_len`` is met.
To accelerate generation of the SNLI dataset for fine-tuning BERT, we use 4
worker processes to generate training or testing examples in parallel.
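To make this packing concrete, here is a minimal, hedged sketch that packs a
toy premise-hypothesis pair with ``d2l.get_tokens_and_segments`` (available in
any of the three d2l backends imported above); the two token lists are made up
for illustration.

.. code:: python

    # A minimal, illustrative sketch; the toy token lists are hypothetical.
    p_tokens = ['a', 'dog', 'runs']        # premise tokens
    h_tokens = ['an', 'animal', 'moves']   # hypothesis tokens
    tokens, segments = d2l.get_tokens_and_segments(p_tokens, h_tokens)
    print(tokens)
    # ['<cls>', 'a', 'dog', 'runs', '<sep>', 'an', 'animal', 'moves', '<sep>']
    print(segments)
    # [0, 0, 0, 0, 0, 1, 1, 1, 1]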
.. code:: python

    # MXNet implementation
    class SNLIBERTDataset(gluon.data.Dataset):
        def __init__(self, dataset, max_len, vocab=None):
            all_premise_hypothesis_tokens = [[
                p_tokens, h_tokens] for p_tokens, h_tokens in zip(
                *[d2l.tokenize([s.lower() for s in sentences])
                  for sentences in dataset[:2]])]

            self.labels = np.array(dataset[2])
            self.vocab = vocab
            self.max_len = max_len
            (self.all_token_ids, self.all_segments,
             self.valid_lens) = self._preprocess(all_premise_hypothesis_tokens)
            print('read ' + str(len(self.all_token_ids)) + ' examples')

        def _preprocess(self, all_premise_hypothesis_tokens):
            pool = multiprocessing.Pool(4)  # Use 4 worker processes
            out = pool.map(self._mp_worker, all_premise_hypothesis_tokens)
            all_token_ids = [
                token_ids for token_ids, segments, valid_len in out]
            all_segments = [segments for token_ids, segments, valid_len in out]
            valid_lens = [valid_len for token_ids, segments, valid_len in out]
            return (np.array(all_token_ids, dtype='int32'),
                    np.array(all_segments, dtype='int32'),
                    np.array(valid_lens))

        def _mp_worker(self, premise_hypothesis_tokens):
            p_tokens, h_tokens = premise_hypothesis_tokens
            self._truncate_pair_of_tokens(p_tokens, h_tokens)
            tokens, segments = d2l.get_tokens_and_segments(p_tokens, h_tokens)
            token_ids = self.vocab[tokens] + [self.vocab['<pad>']] \
                                 * (self.max_len - len(tokens))
            segments = segments + [0] * (self.max_len - len(segments))
            valid_len = len(tokens)
            return token_ids, segments, valid_len

        def _truncate_pair_of_tokens(self, p_tokens, h_tokens):
            # Reserve slots for the '<CLS>', '<SEP>', and '<SEP>' tokens
            # in the BERT input
            while len(p_tokens) + len(h_tokens) > self.max_len - 3:
                if len(p_tokens) > len(h_tokens):
                    p_tokens.pop()
                else:
                    h_tokens.pop()

        def __getitem__(self, idx):
            return (self.all_token_ids[idx], self.all_segments[idx],
                    self.valid_lens[idx]), self.labels[idx]

        def __len__(self):
            return len(self.all_token_ids)
.. code:: python

    # PyTorch implementation
    class SNLIBERTDataset(torch.utils.data.Dataset):
        def __init__(self, dataset, max_len, vocab=None):
            all_premise_hypothesis_tokens = [[
                p_tokens, h_tokens] for p_tokens, h_tokens in zip(
                *[d2l.tokenize([s.lower() for s in sentences])
                  for sentences in dataset[:2]])]

            self.labels = torch.tensor(dataset[2])
            self.vocab = vocab
            self.max_len = max_len
            (self.all_token_ids, self.all_segments,
             self.valid_lens) = self._preprocess(all_premise_hypothesis_tokens)
            print('read ' + str(len(self.all_token_ids)) + ' examples')

        def _preprocess(self, all_premise_hypothesis_tokens):
            pool = multiprocessing.Pool(4)  # Use 4 worker processes
            out = pool.map(self._mp_worker, all_premise_hypothesis_tokens)
            all_token_ids = [
                token_ids for token_ids, segments, valid_len in out]
            all_segments = [segments for token_ids, segments, valid_len in out]
            valid_lens = [valid_len for token_ids, segments, valid_len in out]
            return (torch.tensor(all_token_ids, dtype=torch.long),
                    torch.tensor(all_segments, dtype=torch.long),
                    torch.tensor(valid_lens))

        def _mp_worker(self, premise_hypothesis_tokens):
            p_tokens, h_tokens = premise_hypothesis_tokens
            self._truncate_pair_of_tokens(p_tokens, h_tokens)
            tokens, segments = d2l.get_tokens_and_segments(p_tokens, h_tokens)
            token_ids = self.vocab[tokens] + [self.vocab['<pad>']] \
                                 * (self.max_len - len(tokens))
            segments = segments + [0] * (self.max_len - len(segments))
            valid_len = len(tokens)
            return token_ids, segments, valid_len

        def _truncate_pair_of_tokens(self, p_tokens, h_tokens):
            # Reserve slots for the '<CLS>', '<SEP>', and '<SEP>' tokens
            # in the BERT input
            while len(p_tokens) + len(h_tokens) > self.max_len - 3:
                if len(p_tokens) > len(h_tokens):
                    p_tokens.pop()
                else:
                    h_tokens.pop()

        def __getitem__(self, idx):
            return (self.all_token_ids[idx], self.all_segments[idx],
                    self.valid_lens[idx]), self.labels[idx]

        def __len__(self):
            return len(self.all_token_ids)
.. code:: python

    # PaddlePaddle implementation
    class SNLIBERTDataset(paddle.io.Dataset):
        def __init__(self, dataset, max_len, vocab=None):
            all_premise_hypothesis_tokens = [[
                p_tokens, h_tokens] for p_tokens, h_tokens in zip(
                *[d2l.tokenize([s.lower() for s in sentences])
                  for sentences in dataset[:2]])]

            self.labels = paddle.to_tensor(dataset[2])
            self.vocab = vocab
            self.max_len = max_len
            (self.all_token_ids, self.all_segments,
             self.valid_lens) = self._preprocess(all_premise_hypothesis_tokens)
            print('read ' + str(len(self.all_token_ids)) + ' examples')

        def _preprocess(self, all_premise_hypothesis_tokens):
            # Examples are generated serially here (the multiprocessing pool
            # used in the other implementations is omitted in this version)
            out = []
            for i in all_premise_hypothesis_tokens:
                tempOut = self._mp_worker(i)
                out.append(tempOut)
            all_token_ids = [
                token_ids for token_ids, segments, valid_len in out]
            all_segments = [segments for token_ids, segments, valid_len in out]
            valid_lens = [valid_len for token_ids, segments, valid_len in out]
            return (paddle.to_tensor(all_token_ids, dtype='int64'),
                    paddle.to_tensor(all_segments, dtype='int64'),
                    paddle.to_tensor(valid_lens))

        def _mp_worker(self, premise_hypothesis_tokens):
            p_tokens, h_tokens = premise_hypothesis_tokens
            self._truncate_pair_of_tokens(p_tokens, h_tokens)
            tokens, segments = d2l.get_tokens_and_segments(p_tokens, h_tokens)
            token_ids = self.vocab[tokens] + [self.vocab['<pad>']] \
                                 * (self.max_len - len(tokens))
            segments = segments + [0] * (self.max_len - len(segments))
            valid_len = len(tokens)
            return token_ids, segments, valid_len

        def _truncate_pair_of_tokens(self, p_tokens, h_tokens):
            # Reserve slots for the '<CLS>', '<SEP>', and '<SEP>' tokens
            # in the BERT input
            while len(p_tokens) + len(h_tokens) > self.max_len - 3:
                if len(p_tokens) > len(h_tokens):
                    p_tokens.pop()
                else:
                    h_tokens.pop()

        def __getitem__(self, idx):
            return (self.all_token_ids[idx], self.all_segments[idx],
                    self.valid_lens[idx]), self.labels[idx]

        def __len__(self):
            return len(self.all_token_ids)
After downloading the SNLI dataset, we generate training and testing examples
by instantiating the ``SNLIBERTDataset`` class. Such examples will be read in
minibatches during training and testing of natural language inference.
.. code:: python

    # MXNet implementation
    # Reduce `batch_size` if there is an out-of-memory error. In the original
    # BERT model, `max_len` = 512
    batch_size, max_len, num_workers = 512, 128, d2l.get_dataloader_workers()
    data_dir = d2l.download_extract('SNLI')
    train_set = SNLIBERTDataset(d2l.read_snli(data_dir, True), max_len, vocab)
    test_set = SNLIBERTDataset(d2l.read_snli(data_dir, False), max_len, vocab)
    train_iter = gluon.data.DataLoader(train_set, batch_size, shuffle=True,
                                       num_workers=num_workers)
    test_iter = gluon.data.DataLoader(test_set, batch_size,
                                      num_workers=num_workers)

.. parsed-literal::
    :class: output

    Downloading ../data/snli_1.0.zip from https://nlp.stanford.edu/projects/snli/snli_1.0.zip...
    read 549367 examples
    read 9824 examples
.. code:: python

    # PyTorch implementation
    # Reduce `batch_size` if there is an out-of-memory error. In the original
    # BERT model, `max_len` = 512
    batch_size, max_len, num_workers = 512, 128, d2l.get_dataloader_workers()
    data_dir = d2l.download_extract('SNLI')
    train_set = SNLIBERTDataset(d2l.read_snli(data_dir, True), max_len, vocab)
    test_set = SNLIBERTDataset(d2l.read_snli(data_dir, False), max_len, vocab)
    train_iter = torch.utils.data.DataLoader(train_set, batch_size, shuffle=True,
                                             num_workers=num_workers)
    test_iter = torch.utils.data.DataLoader(test_set, batch_size,
                                            num_workers=num_workers)

.. parsed-literal::
    :class: output

    read 549367 examples
    read 9824 examples
.. code:: python

    # PaddlePaddle implementation
    # Reduce `batch_size` if there is an out-of-memory error. In the original
    # BERT model, `max_len` = 512
    batch_size, max_len, num_workers = 512, 128, d2l.get_dataloader_workers()
    data_dir = d2l.download_extract('SNLI')
    train_set = SNLIBERTDataset(d2l.read_snli(data_dir, True), max_len, vocab)
    test_set = SNLIBERTDataset(d2l.read_snli(data_dir, False), max_len, vocab)
    train_iter = paddle.io.DataLoader(train_set, batch_size=batch_size,
                                      shuffle=True, return_list=True)
    test_iter = paddle.io.DataLoader(test_set, batch_size=batch_size,
                                     return_list=True)

.. parsed-literal::
    :class: output

    Downloading ../data/snli_1.0.zip from https://nlp.stanford.edu/projects/snli/snli_1.0.zip...
    read 549367 examples
    read 9824 examples
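As a quick sanity check, the following hedged sketch (PyTorch version) peeks
at one minibatch to confirm the shapes produced by this pipeline; the sizes in
the comments are the expected ones for ``batch_size=512`` and ``max_len=128``,
not captured output.

.. code:: python

    # A minimal, illustrative sketch (PyTorch version); not captured output.
    (tokens_X, segments_X, valid_lens_x), y = next(iter(train_iter))
    print(tokens_X.shape, segments_X.shape, valid_lens_x.shape, y.shape)
    # Expected: torch.Size([512, 128]) torch.Size([512, 128])
    #           torch.Size([512]) torch.Size([512])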
Fine-Tuning BERT
-----------------

As :numref:`fig_bert-two-seqs` indicates, fine-tuning BERT for natural
language inference requires only an extra MLP consisting of two fully
connected layers (see ``self.hidden`` and ``self.output`` in the following
``BERTClassifier`` class). This MLP transforms the BERT representation of the
special "<cls>" token, which encodes the information of both the premise and
the hypothesis, into three outputs of natural language inference: entailment,
contradiction, and neutrality.
.. code:: python

    # MXNet implementation
    class BERTClassifier(nn.Block):
        def __init__(self, bert):
            super(BERTClassifier, self).__init__()
            self.encoder = bert.encoder
            self.hidden = bert.hidden
            self.output = nn.Dense(3)

        def forward(self, inputs):
            tokens_X, segments_X, valid_lens_x = inputs
            encoded_X = self.encoder(tokens_X, segments_X, valid_lens_x)
            return self.output(self.hidden(encoded_X[:, 0, :]))
.. code:: python

    # PyTorch implementation
    class BERTClassifier(nn.Module):
        def __init__(self, bert):
            super(BERTClassifier, self).__init__()
            self.encoder = bert.encoder
            self.hidden = bert.hidden
            self.output = nn.Linear(256, 3)

        def forward(self, inputs):
            tokens_X, segments_X, valid_lens_x = inputs
            encoded_X = self.encoder(tokens_X, segments_X, valid_lens_x)
            return self.output(self.hidden(encoded_X[:, 0, :]))
.. code:: python

    # PaddlePaddle implementation
    class BERTClassifier(nn.Layer):
        def __init__(self, bert):
            super(BERTClassifier, self).__init__()
            self.encoder = bert.encoder
            self.hidden = bert.hidden
            self.output = nn.Linear(256, 3)

        def forward(self, inputs):
            tokens_X, segments_X, valid_lens_x = inputs
            encoded_X = self.encoder(tokens_X, segments_X,
                                     valid_lens_x.squeeze(1))
            return self.output(self.hidden(encoded_X[:, 0, :]))
In the following, the pretrained BERT model ``bert`` is fed into the
``BERTClassifier`` instance ``net`` for the downstream application. In common
implementations of BERT fine-tuning, only the parameters of the output layer
of the additional MLP (``net.output``) will be learned from scratch. All the
parameters of the pretrained BERT encoder (``net.encoder``) and the hidden
layer of the additional MLP (``net.hidden``) will be fine-tuned.
.. code:: python

    # MXNet implementation
    net = BERTClassifier(bert)
    net.output.initialize(ctx=devices)
.. code:: python

    # PyTorch implementation
    net = BERTClassifier(bert)
.. code:: python

    # PaddlePaddle implementation
    net = BERTClassifier(bert)
Recall that in :numref:`sec_bert` both the ``MaskLM`` class and the
``NextSentencePred`` class have parameters in their employed MLPs. These
parameters are part of those in the pretrained BERT model ``bert``, and thus
part of the parameters in ``net``. However, such parameters are only used for
computing the masked language modeling loss and the next sentence prediction
loss during pretraining. These two loss functions are irrelevant to
fine-tuning downstream applications, so the parameters of the employed MLPs in
``MaskLM`` and ``NextSentencePred`` are not updated (stale) when BERT is
fine-tuned.

To allow parameters with stale gradients, the flag ``ignore_stale_grad=True``
is set in the ``step`` function of ``d2l.train_batch_ch13``. We use this
function to train and evaluate the model ``net`` using the training set
(``train_iter``) and the testing set (``test_iter``) of SNLI. Due to limited
computational resources, the training and testing accuracy can be further
improved: we leave its discussion in the exercises.
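The following hedged sketch (PyTorch version) makes these points concrete: the
encoder and the hidden layer of ``net`` are the very same objects as in
``bert`` (and hence fine-tuned), the output layer is freshly created, and the
parameters of the pretraining heads are not passed to the optimizer
constructed below, so they receive no updates. The attribute names
``bert.mlm`` and ``bert.nsp`` are assumptions about d2l's ``BERTModel``;
adjust them if your version differs.

.. code:: python

    # A hedged sketch (PyTorch version). `bert.mlm` and `bert.nsp` are assumed
    # to be the attribute names of the pretraining heads in d2l's BERTModel.
    print(net.encoder is bert.encoder, net.hidden is bert.hidden)  # True True
    print(net.output.weight.shape)  # torch.Size([3, 256]): learned from scratch
    net_params = {id(p) for p in net.parameters()}
    print(any(id(p) in net_params for p in bert.mlm.parameters()))  # False
    print(any(id(p) in net_params for p in bert.nsp.parameters()))  # False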
.. code:: python

    # MXNet implementation
    lr, num_epochs = 1e-4, 5
    trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate': lr})
    loss = gluon.loss.SoftmaxCrossEntropyLoss()
    d2l.train_ch13(net, train_iter, test_iter, loss, trainer, num_epochs,
                   devices, d2l.split_batch_multi_inputs)

.. parsed-literal::
    :class: output

    loss 0.479, train acc 0.810, test acc 0.788
    4671.6 examples/sec on [gpu(0), gpu(1)]

.. figure:: output_natural-language-inference-bert_1857e6_99_1.svg
.. code:: python

    # PyTorch implementation
    lr, num_epochs = 1e-4, 5
    trainer = torch.optim.Adam(net.parameters(), lr=lr)
    loss = nn.CrossEntropyLoss(reduction='none')
    d2l.train_ch13(net, train_iter, test_iter, loss, trainer, num_epochs,
                   devices)

.. parsed-literal::
    :class: output

    loss 0.520, train acc 0.790, test acc 0.779
    10442.5 examples/sec on [device(type='cuda', index=0), device(type='cuda', index=1)]

.. figure:: output_natural-language-inference-bert_1857e6_102_1.svg
.. code:: python

    # PaddlePaddle implementation
    lr, num_epochs = 1e-4, 5
    trainer = paddle.optimizer.Adam(learning_rate=lr,
                                    parameters=net.parameters())
    loss = nn.CrossEntropyLoss(reduction='none')
    d2l.train_ch13(net, train_iter, test_iter, loss, trainer, num_epochs,
                   devices)

.. parsed-literal::
    :class: output

    loss 0.672, train acc 0.712, test acc 0.715
    4759.4 examples/sec on [Place(gpu:0), Place(gpu:1)]

.. figure:: output_natural-language-inference-bert_1857e6_105_1.svg
Summary
-------

-  We can fine-tune the pretrained BERT model for downstream applications,
   such as natural language inference on the SNLI dataset.
-  During fine-tuning, the BERT model becomes part of the model for the
   downstream application. Parameters that are only related to the
   pretraining losses will not be updated during fine-tuning.

Exercises
---------

1. Fine-tune a much larger pretrained BERT model that is about as big as the
   original BERT base model if your computational resources allow. In the
   ``load_pretrained_model`` function, replace "bert.small" with "bert.base",
   and increase the values of ``num_hiddens=256``, ``ffn_num_hiddens=512``,
   ``num_heads=4``, and ``num_layers=2`` to 768, 3072, 12, and 12,
   respectively. By increasing the number of fine-tuning epochs (and possibly
   tuning other hyperparameters), can you get a testing accuracy higher than
   0.86?
2. How would you truncate a pair of sequences according to their ratio of
   lengths? Compare this pair-truncation method with the one used in the
   ``SNLIBERTDataset`` class. What are their pros and cons?
`Discussions `__ (MXNet)

`Discussions `__ (PyTorch)

`Discussions `__ (PaddlePaddle)