论文翻译——Attention Is All You Need

    Attention Is All You Need

    Abstract

    The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder.

主流的序列转换模型基于复杂的递归或卷积神经网络,包括一个编码器和一个解码器。

    The best performing models also connect the encoder and decoder through an attention mechanism.

    性能最佳的模型还通过注意机制连接编码器和解码器。

    We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely.

我们提出了一种新的简单网络架构——Transformer,它完全基于注意力机制,彻底摒弃了递归和卷积。

    Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train.

    在两个机器翻译任务上的实验表明,这些模型在质量上更优,同时更可并行化,所需的训练时间明显更少。

    Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU.

我们的模型在WMT 2014英德翻译任务中达到28.4 BLEU,比现有的最佳结果(包括集成模型)提高了2个BLEU以上。

    On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature.

在WMT 2014英法翻译任务中,我们的模型在8个GPU上训练3.5天后,创造了新的单模型最佳(state-of-the-art)BLEU分数41.8,而训练成本只是文献中最佳模型的一小部分。

    We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.

我们通过将Transformer成功应用于英语成分句法分析(无论训练数据量大还是有限),证明了它可以很好地推广到其他任务。

    1 Introduction

Recurrent neural networks, long short-term memory [13] and gated recurrent [7] neural networks in particular, have been firmly established as state of the art approaches in sequence modeling and transduction problems such as language modeling and machine translation [35, 2, 5].

递归神经网络,特别是长短时记忆[13]和门控递归[7]神经网络,已被确立为序列建模和转换问题(如语言建模和机器翻译[35, 2, 5])中最先进的方法。

    Numerous efforts have since continued to push the boundaries of recurrent language models and encoder-decoder architectures [38, 24, 15].

    从那以后,无数的努力继续推动循环语言模型和编解码器架构的边界[38,24,15]。

    Equal contribution. Listing order is random.

同等贡献。排名顺序随机。

    Jakob proposed replacing RNNs with self-attention and started the effort to evaluate this idea.

    Jakob建议用自我关注代替RNNs,并开始努力评估这个想法。

    Ashish, with Illia, designed and implemented the first Transformer models and has been crucially involved in every aspect of this work.

    Ashish与Illia一起设计并实现了第一个Transformer模型,并参与了这项工作的各个方面。

    Noam proposed scaled dot-product attention, multi-head attention and the parameter-free position representation and became the other person involved in nearly every detail.

Noam提出了缩放点积注意力、多头注意力和无参数的位置表示,并成为几乎参与每个细节的另一个人。

    Niki designed, implemented, tuned and evaluated countless model variants in our original codebase and tensor2tensor.

    Niki在我们最初的代码库和tensor2tensor中设计、实现、调整和评估了无数的模型变体。

    Llion also experimented with novel model variants, was responsible for our initial codebase, and efficient inference and visualizations.

    Llion还试验了新的模型变体,负责我们的初始代码库,以及有效的推理和可视化。

    Lukasz and Aidan spent countless long days designing various parts of and implementing tensor2tensor, replacing our earlier codebase, greatly improving results and massively accelerating our research.

    Lukasz和Aidan花费了无数漫长的日子来设计和实现tensor2tensor的各个部分,替换我们早期的代码库,极大地改进了结果并极大地加速了我们的研究。

    Work performed while at Google Brain.

    Work performed while at Google Research.

    31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

    Recurrent models typically factor computation along the symbol positions of the input and output sequences.

    递归模型通常沿着输入和输出序列的符号位置进行因子计算。

    Aligning the positions to steps in computation time, they generate a sequence of hidden states h_t, as a function of the previous hidden state h_t-1 and the input for position t.

将位置与计算时间步对齐,它们生成一个隐藏状态序列h_t,h_t是前一个隐藏状态h_{t-1}和位置t的输入的函数。

    This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples.

这种固有的顺序性妨碍了训练样本内部的并行化;当序列较长时这一点变得尤为关键,因为内存限制约束了跨样本的批处理。
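As a concrete illustration of this sequential constraint, the following minimal NumPy sketch (not from the paper) unrolls a generic recurrent layer: each hidden state depends on the previous one, so the positions of one sequence cannot be computed in parallel. The tanh cell and the dimensions are arbitrary choices for illustration.

```python
import numpy as np

def rnn_forward(inputs, W_h, W_x, h0):
    # h_t = tanh(W_h @ h_{t-1} + W_x @ x_t): every step waits for the previous one,
    # so the positions of a single sequence cannot be processed in parallel.
    h, states = h0, []
    for x_t in inputs:               # strictly sequential loop over positions
        h = np.tanh(W_h @ h + W_x @ x_t)
        states.append(h)
    return states

d = 4
rng = np.random.default_rng(0)
states = rnn_forward([rng.standard_normal(d) for _ in range(6)],
                     W_h=rng.standard_normal((d, d)),
                     W_x=rng.standard_normal((d, d)),
                     h0=np.zeros(d))
print(len(states), states[-1].shape)  # 6 (4,)
```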

    Recent work has achieved significant improvements in computational efficiency through factorization tricks [21] and conditional computation [32], while also improving model performance in case of the latter.

    最近的工作已经通过因数分解技巧[21]和条件计算[32]在计算效率方面取得了显著的改进,同时也提高了针对后者的模型性能。

    The fundamental constraint of sequential computation, however, remains.

    然而,顺序计算的基本约束仍然存在。

    Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences [2, 19].

    在各种任务中,注意力机制已经成为引人注目的序列建模和转换模型的组成部分,允许对依赖项进行建模,而不考虑它们在输入或输出序列中的距离[2,19]。

    In all but a few cases [27], however, such attention mechanisms are used in conjunction with a recurrent network.

然而,除少数情况[27]外,这种注意力机制都与递归网络结合使用。

    In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output.

在这项工作中,我们提出了Transformer,一种避免递归的模型架构,它完全依赖注意力机制来建立输入和输出之间的全局依赖关系。

The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs.

Transformer允许显著更多的并行化,在8个P100 GPU上只需训练12小时,就可以在翻译质量上达到新的最高水平。

    2 Background

    The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU [16], ByteNet [18] and ConvS2S [9], all of which use convolutional neural networks as basic building block, computing hidden representations in parallel for all input and output positions.

    减少序列计算的目标也构成了扩展神经GPU[16]、ByteNet[18]和ConvS2S[9]的基础,它们都使用卷积神经网络作为基本构件,对所有输入和输出位置并行计算隐藏表示。

    In these models, the number of operations required to relate signals from two arbitrary input or output positions grows in the distance between positions, linearly for ConvS2S and logarithmically for ByteNet.

    在这些模型中,将两个任意输入或输出位置的信号关联起来所需的操作数量在位置之间的距离中增长,对于ConvS2S是线性增长,对于ByteNet是对数增长。

    This makes it more difficult to learn dependencies between distant positions [12].

    这使得学习远处位置[12]之间的依赖关系变得更加困难。

    In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in section 3.2.

    在Transformer 中,这被简化为一个恒定的操作数,尽管代价是由于平均注意加权位置而降低了有效分辨率,我们用3.2节中描述的多头注意来抵消这种影响。

    Self-attention, sometimes called intra-attention is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence.

    Self-attention,有时也称为intra-attention,是一种将单个序列的不同位置联系起来以计算序列表示的注意机制。

    Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations [4, 27, 28, 22].

    Self-attention已经成功地应用于各种任务中,包括阅读理解、抽象总结、文本蕴涵和学习任务无关的句子表征[4,27,28,22]。

    End-to-end memory networks are based on a recurrent attention mechanism instead of sequence aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks [34].

    端到端记忆网络基于递归注意机制,而不是序列对齐递归,在简单语言问题回答和语言建模任务[34]中表现良好。

    To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence aligned RNNs or convolution.

    然而,就我们所知,Transformer是第一个完全依赖于self-attention来计算其输入和输出表示的转换模型,而不使用序列对齐的RNNs或卷积。

    In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as [17, 18] and [9].

    在接下来的部分中,我们将描述Transformer,激发self-attention,并讨论它相对于[17,18]和[9]等模型的优点。

    3 Model Architecture

    Most competitive neural sequence transduction models have an encoder-decoder structure [5, 2, 35].

    大多数竞争性的神经序列转导模型都具有编解码器结构[5,2,35]。

    Here, the encoder maps an input sequence of symbol representations (x_1, ... , x_n ) to a sequence of continuous representations z = ( z_1, ... , z_n).

这里,编码器将符号表示的输入序列(x_1, ..., x_n)映射为一个连续表示序列z = (z_1, ..., z_n)。

    Given z, the decoder then generates an output sequence ( y_1, ... , y_m) of symbols one element at a time.

    给定z,然后解码器生成符号的输出序列(y_1 , ... , y_m),每次一个元素。

    At each step the model is auto-regressive[10], consuming the previously generated symbols as additional input when generating the next.

在每个步骤中,模型都是自回归的[10],在生成下一个符号时,将前面已生成的符号作为额外的输入。

    The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure 1, respectively.

Transformer遵循这种整体架构,编码器和解码器均使用堆叠的自注意力层和逐位置(point-wise)全连接层,分别如图1的左半边和右半边所示。

    3.1 Encoder and Decoder Stacks

    Encoder: The encoder is composed of a stack of N = 6 identical layers.

    Encoder: 编码器由一组N = 6个相同的层组成。

    Each layer has two sub-layers.

    每个层有两个子层。

    The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network.

    第一个是一个多头的自我注意机制,第二个是一个简单的,位置的完全连接的前馈网络。

    We employ a residual connection [11] around each of the two sub-layers, followed by layer normalization [1].

我们在这两个子层的每一个周围使用残差连接[11],然后进行层归一化[1]。

    That is, the output of each sub-layer is LayerNorm(x + Sublayer(x)), where Sublayer(x) is the function implemented by the sub-layer itself.

即每个子层的输出为LayerNorm(x + Sublayer(x)),其中Sublayer(x)是由子层本身实现的函数。

    To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension d_model = 512.

为了便于这些残差连接,模型中的所有子层以及嵌入层都产生维度为d_model = 512的输出。
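A minimal NumPy sketch of the sub-layer wrapper described above, i.e. LayerNorm(x + Sublayer(x)) with d_model = 512. This is an illustration rather than the authors' implementation: the learned gain and bias of layer normalization are omitted, and an identity function stands in for a real sub-layer.

```python
import numpy as np

d_model = 512

def layer_norm(x, eps=1e-6):
    # Layer normalization [1] over the feature dimension (learned gain/bias omitted here).
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return (x - mean) / (std + eps)

def sublayer_connection(x, sublayer):
    # Residual connection followed by layer normalization: LayerNorm(x + Sublayer(x)).
    return layer_norm(x + sublayer(x))

x = np.random.randn(10, d_model)           # 10 positions of dimension d_model = 512
out = sublayer_connection(x, lambda h: h)  # identity stands in for self-attention or the FFN
print(out.shape)                           # (10, 512)
```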

    Decoder: The decoder is also composed of a stack of N = 6 identical layers.

    Decoder: 解码器也由一组N = 6个相同的层组成。

    In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack.

    除了每个编码器层中的两个子层外,解码器还插入第三个子层,该子层对编码器堆栈的输出进行多头注意。

    Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization.

与编码器类似,我们在每个子层周围使用残差连接,然后进行层归一化。

    We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions.

    我们还修改了解码器堆栈中的self-attention子层,以防止位置对后续位置的注意。

This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position i can depend only on the known outputs at positions less than i.

    这种掩蔽,加上输出嵌入被一个位置偏移的事实,确保了对位置 i 的预测只能依赖于小于 i 位置的已知输出。

    3.2 Attention

    An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors.

    可以将注意力函数描述为将查询和一组键-值对映射到输出,其中查询、键、值和输出都是向量。

The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.

输出计算为值的加权和,其中分配给每个值的权重由查询与相应键的兼容性函数计算得出。

    Figure 2: (left) Scaled Dot-Product Attention.

    图2:(左) Scaled Dot-Product Attention。

    (right) Multi-Head Attention consists of several attention layers running in parallel.

    (右) Multi-Head Attention由多个平行运行的注意层组成。


    3.2.1 Scaled Dot-Product Attention

    We call our particular attention "Scaled Dot-Product Attention" (Figure 2). The input consists of queries and keys of dimension d_k, and values of dimension d_v.

我们将我们使用的这种特定注意力称为"Scaled Dot-Product Attention"(图2)。输入由维度为d_k的查询和键以及维度为d_v的值组成。

    We compute the dot products of the query with all keys, divide each by √d_k, and apply a softmax function to obtain the weights on the values.

    我们用所有键计算查询的点积,每个点积除以√d_k,然后应用一个softmax函数来获得这些值的权重。

    In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix Q.

    在实践中,我们同时计算一组查询的注意函数,并将其打包成一个矩阵Q

The keys and values are also packed together into matrices K and V. We compute the matrix of outputs as:

键和值也被打包成矩阵K和V。我们计算输出矩阵为:

Attention(Q, K, V) = softmax(QKᵀ / √d_k) V

The two most commonly used attention functions are additive attention and dot-product (multiplicative) attention.

两个最常用的注意力函数是additive attention和dot-product(乘性)attention。

    Dot-product attention is identical to our algorithm, except for the scaling factor of 1/√d_k .

除了1/√d_k的缩放因子之外,dot-product attention与我们的算法完全相同。

    Additive attention computes the compatibility function using a feed-forward network with a single hidden layer.

    Additive attention使用带有单隐层的前馈网络计算兼容性函数。

    While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code.

    虽然这两者在理论上的复杂性相似,但在实践中,由于可以使用高度优化的矩阵乘法代码来实现,因此dot-product attention要快得多,也更节省空间。

    While for small values of d_k the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of d_k.

当d_k的值较小时,这两种机制的表现相似;当d_k的值较大时,additive attention优于未经缩放的dot-product attention。

    We suspect that for large values of d_k, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients.

    我们怀疑对于较大的d_k值,点积的大小会变大,将softmax函数推到其梯度非常小的区域。

    To counteract this effect, we scale the dot products by 1/√d_k.

    为了抵消这种影响,我们将点积乘以1/√d_k
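The following NumPy sketch assembles the pieces described in this subsection: dot products of the queries with all keys, scaling by 1/√d_k, an optional mask, a softmax over the keys, and a weighted sum of the values. It is a hedged illustration of the formula above, not the reference implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract the max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, mask=None):
    # Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # query-key compatibility, scaled by 1/sqrt(d_k)
    if mask is not None:
        scores = np.where(mask, scores, -1e9)  # illegal connections get ~ -inf before the softmax
    weights = softmax(scores, axis=-1)         # attention weights over the keys
    return weights @ V                         # weighted sum of the values

Q = np.random.randn(5, 64)   # 5 queries with d_k = 64
K = np.random.randn(7, 64)   # 7 keys
V = np.random.randn(7, 64)   # 7 values with d_v = 64
print(scaled_dot_product_attention(Q, K, V).shape)  # (5, 64)
```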

    3.2.2 Multi-Head Attention

Instead of performing a single attention function with d_model-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values h times with different, learned linear projections to d_k, d_k and d_v dimensions, respectively.

与使用d_model维的键、值和查询执行单个注意力函数不同,我们发现将查询、键和值分别用不同的、学习到的线性投影h次投影到d_k、d_k和d_v维是有益的。

On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding d_v-dimensional output values.

然后,在这些查询、键和值的投影版本上,我们并行执行注意力函数,生成d_v维的输出值。

    These are concatenated and once again projected, resulting in the final values, as depicted in Figure 2.

    将这些值连接起来并再次投影,得到最终的值,如图2所示。

To illustrate why the dot products get large, assume that the components of q and k are independent random variables with mean 0 and variance 1. Then their dot product, q · k = Σ_{i=1}^{d_k} q_i k_i, has mean 0 and variance d_k.

    Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this.

MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O, where head_i = Attention(QW_i^Q, KW_i^K, VW_i^V)

Where the projections are parameter matrices W_i^Q ∈ ℝ^{d_model×d_k}, W_i^K ∈ ℝ^{d_model×d_k}, W_i^V ∈ ℝ^{d_model×d_v} and W^O ∈ ℝ^{h·d_v×d_model}.

其中投影是参数矩阵W_i^Q ∈ ℝ^{d_model×d_k}、W_i^K ∈ ℝ^{d_model×d_k}、W_i^V ∈ ℝ^{d_model×d_v}和W^O ∈ ℝ^{h·d_v×d_model}。

    In this work we employ h = 8 parallel attention layers, or heads.

在这项工作中,我们使用h = 8个并行的注意力层,即注意力头。

For each of these we use d_k = d_v = d_model/h = 64.

对于每个头,我们使用d_k = d_v = d_model/h = 64。

    Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality

    由于每个头的维数减少,总的计算成本与全维的单头注意相似
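Continuing the previous sketch (it reuses scaled_dot_product_attention from Section 3.2.1), this hypothetical NumPy example projects the queries, keys and values h = 8 times to d_k = d_v = 64, runs the heads in parallel, concatenates them, and applies the output projection. The random matrices stand in for learned parameters.

```python
import numpy as np

d_model, h = 512, 8
d_k = d_v = d_model // h   # 64 dimensions per head

rng = np.random.default_rng(0)
# Stand-ins for the learned projections W_Q, W_K, W_V (one per head) and the output projection W_O.
W_Q = rng.standard_normal((h, d_model, d_k)) * 0.02
W_K = rng.standard_normal((h, d_model, d_k)) * 0.02
W_V = rng.standard_normal((h, d_model, d_v)) * 0.02
W_O = rng.standard_normal((h * d_v, d_model)) * 0.02

def multi_head_attention(Q, K, V):
    # Project queries, keys and values for each head, attend in parallel,
    # concatenate the per-head outputs, and project back to d_model.
    heads = [scaled_dot_product_attention(Q @ W_Q[i], K @ W_K[i], V @ W_V[i])
             for i in range(h)]
    return np.concatenate(heads, axis=-1) @ W_O

x = rng.standard_normal((10, d_model))      # 10 positions, e.g. encoder self-attention
print(multi_head_attention(x, x, x).shape)  # (10, 512)
```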

    3.2.3 Applications of Attention in our Model

    The Transformer uses multi-head attention in three different ways:

Transformer以三种不同的方式使用多头注意力:

    In "encoder-decoder attention" layers, the queries come from the previous decoder layer, and the memory keys and values come from the output of the encoder.

    在“编码器-解码器注意”层中,查询来自前一解码器层,存储键和值来自编码器的输出。

    This allows every position in the decoder to attend over all positions in the input sequence.

    这使得解码器中的每个位置都可以参与输入序列中的所有位置。

    This mimics the typical encoder-decoder attention mechanisms in sequence-to-sequence models such as [38, 2, 9].

这模仿了序列到序列模型(如[38, 2, 9])中典型的编码器-解码器注意力机制。

    The encoder contains self-attention layers.

    编码器包含自我注意层。

    In a self-attention layer all of the keys, values and queries come from the same place, in this case, the output of the previous layer in the encoder.

    在一个self-attention层中,所有的键、值和查询都来自同一个地方,在本例中,是编码器中前一层的输出。

    Each position in the encoder can attend to all positions in the previous layer of the encoder.

    编码器中的每个位置都可以参与到编码器上一层的所有位置。

    Similarly, self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position.

    类似地,解码器中的自我注意层允许解码器中的每个位置关注解码器中的所有位置,直到并包括该位置。

    We need to prevent leftward information flow in the decoder to preserve the auto-regressive property.

    为了保持解码器的自回归特性,需要防止解码器中的左向信息流。

We implement this inside of scaled dot-product attention by masking out (setting to −∞) all values in the input of the softmax which correspond to illegal connections. See Figure 2.

我们在缩放点积注意力内部实现这一点,方法是屏蔽(设置为−∞)softmax输入中对应于非法连接的所有值。参见图2。
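A small sketch of the masking described above: a lower-triangular boolean matrix marks the legal connections, and the attention sketch from Section 3.2.1 replaces the masked scores with a large negative value before the softmax (a practical stand-in for −∞).

```python
import numpy as np

def causal_mask(n):
    # True where attention is allowed: position i may only attend to positions j <= i.
    return np.tril(np.ones((n, n), dtype=bool))

print(causal_mask(4).astype(int))
# [[1 0 0 0]
#  [1 1 0 0]
#  [1 1 1 0]
#  [1 1 1 1]]
# Passed as `mask` to the attention sketch in 3.2.1, the False (future) entries have their
# scores replaced by a large negative number, so their softmax weights become ~0.
```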

    3.3 Position-wise Feed-Forward Networks

    In addition to attention sub-layers, each of the layers in our encoder and decoder contains a fully connected feed-forward network, which is applied to each position separately and identically.

    除了注意子层之外,我们的编码器和解码器的每一层都包含一个完全连接的前馈网络,它分别应用于每个位置,并且是相同的。

    This consists of two linear transformations with a ReLU activation in between.

    这包括两个线性变换,中间有一个ReLU激活。

    While the linear transformations are the same across different positions, they use different parameters from layer to layer.

    虽然在不同位置上的线性变换是相同的,但它们在不同的层之间使用不同的参数。

    Another way of describing this is as two convolutions with kernel size 1.

    另一种描述它的方法是用内核大小为1的两个卷积。

The dimensionality of input and output is d_model = 512, and the inner-layer has dimensionality d_ff = 2048.

输入和输出的维度为d_model = 512,内层的维度为d_ff = 2048。
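A minimal NumPy sketch of the position-wise feed-forward network described above: two linear transformations with a ReLU in between, applied to each position independently, with d_model = 512 and d_ff = 2048. The random initialization is an illustrative assumption.

```python
import numpy as np

d_model, d_ff = 512, 2048
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((d_model, d_ff)) * 0.02, np.zeros(d_ff)
W2, b2 = rng.standard_normal((d_ff, d_model)) * 0.02, np.zeros(d_model)

def position_wise_ffn(x):
    # FFN(x) = max(0, x W1 + b1) W2 + b2, applied to each position independently.
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2

x = rng.standard_normal((10, d_model))   # 10 positions
print(position_wise_ffn(x).shape)        # (10, 512)
```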

    3.4 Embeddings and Softmax

Similarly to other sequence transduction models, we use learned embeddings to convert the input tokens and output tokens to vectors of dimension d_model.

与其他序列转换模型类似,我们使用学习到的嵌入(learned embeddings)将输入token和输出token转换为d_model维的向量。

    We also use the usual learned linear transformation and softmax function to convert the decoder output to predicted next-token probabilities.

我们还使用常用的学习到的线性变换和softmax函数,将解码器输出转换为预测的下一个token的概率。

    In our model, we share the same weight matrix between the two embedding layers and the pre-softmax linear transformation, similar to [30].

在我们的模型中,类似于[30],我们在两个嵌入层和pre-softmax线性变换之间共享同一个权重矩阵。

In the embedding layers, we multiply those weights by √d_model.

在嵌入层中,我们将这些权重乘以√d_model。
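A short sketch, under the assumptions that the shared weight matrix is a plain lookup table and that the pre-softmax projection is its transpose, of the embedding scaling described above (multiplying by √d_model). The toy vocabulary size is an assumption; Section 5.1 uses a shared vocabulary of about 37000 tokens.

```python
import numpy as np

d_model, vocab_size = 512, 1000   # toy vocabulary; Section 5.1 uses ~37000 shared tokens
rng = np.random.default_rng(0)
# One weight matrix shared by both embedding layers and the pre-softmax projection.
embedding = rng.standard_normal((vocab_size, d_model)) * 0.02

def embed(token_ids):
    # Look up the embeddings and scale them by sqrt(d_model).
    return embedding[token_ids] * np.sqrt(d_model)

def output_logits(decoder_output):
    # The pre-softmax linear transformation reuses the shared matrix (transposed).
    return decoder_output @ embedding.T

print(embed(np.array([3, 17, 42])).shape)                      # (3, 512)
print(output_logits(rng.standard_normal((3, d_model))).shape)  # (3, 1000)
```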

    3.5 Positional Encoding

    Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence.

    由于我们的模型不包含递归和卷积,为了使模型能够利用序列的顺序,我们必须注入一些关于序列中标记的相对或绝对位置的信息。

    To this end, we add "positional encodings" to the input embeddings at the bottoms of the encoder and decoder stacks.

    为此,我们将“位置编码”添加到编码器和解码器堆栈底部的输入嵌入中。

The positional encodings have the same dimension d_model as the embeddings, so that the two can be summed.

位置编码与嵌入具有相同的维度d_model,因此二者可以相加。

    There are many choices of positional encodings, learned and fixed [9].

位置编码有多种选择,包括可学习的和固定的[9]。

    Table 1: Maximum path lengths, per-layer complexity and minimum number of sequential operations for different layer types.

    表1:不同层类型的最大路径长度、每层复杂度和最小顺序操作数。

    n is the sequence length, d is the representation dimension, k is the kernel size of convolutions and r the size of the neighborhood in restricted self-attention.

    n为序列长度,d为表示维度,k为卷积的核大小,r为受限自注意的邻域大小。

In this work, we use sine and cosine functions of different frequencies:

在这项工作中,我们使用不同频率的正弦和余弦函数:

PE(pos, 2i) = sin(pos / 10000^(2i/d_model))
PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model))

    where pos is the position and i is the dimension.

    其中pos为位置,i为维度。

    That is, each dimension of the positional encoding corresponds to a sinusoid.

    也就是说,位置编码的每个维度对应一个正弦信号。

    The wavelengths form a geometric progression from 2π to 10000 · 2π.

这些波长构成一个从2π到10000·2π的几何级数。

We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset k, PE_{pos+k} can be represented as a linear function of PE_pos.

我们选择这个函数是因为我们假设它能让模型很容易地学会按相对位置进行关注,因为对于任何固定的偏移量k,PE_{pos+k}都可以表示为PE_pos的线性函数。

    We also experimented with using learned positional embeddings [9] instead, and found that the two versions produced nearly identical results (see Table 3 row (E)).

    我们还尝试使用学习位置嵌入[9],发现这两个版本产生的结果几乎相同(见表3行(E))。

    We chose the sinusoidal version because it may allow the model to extrapolate to sequence lengths longer than the ones encountered during training.

    我们选择正弦版本,因为它可能允许模型外推序列长度比训练中遇到的更长。
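A NumPy sketch of the sinusoidal positional encoding defined by the formulas above: even dimensions use sine, odd dimensions use cosine, and the frequencies form a geometric progression. The max_len value is an arbitrary choice for illustration.

```python
import numpy as np

def positional_encoding(max_len, d_model):
    # PE(pos, 2i) = sin(pos / 10000^(2i/d_model)); PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model))
    pos = np.arange(max_len)[:, None]
    i = np.arange(0, d_model, 2)[None, :]
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even dimensions
    pe[:, 1::2] = np.cos(angles)   # odd dimensions
    return pe

pe = positional_encoding(max_len=100, d_model=512)
print(pe.shape)  # (100, 512); added to the input embeddings at the bottom of each stack
```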

    4 Why Self-Attention

In this section we compare various aspects of self-attention layers to the recurrent and convolutional layers commonly used for mapping one variable-length sequence of symbol representations (x_1, ..., x_n) to another sequence of equal length (z_1, ..., z_n), with x_i, z_i ∈ ℝ^d, such as a hidden layer in a typical sequence transduction encoder or decoder.

在本节中,我们将self-attention层的各个方面与通常用于把一个变长符号表示序列(x_1, ..., x_n)映射为另一个等长序列(z_1, ..., z_n)(其中x_i, z_i ∈ ℝ^d)的递归层和卷积层进行比较,例如典型序列转换编码器或解码器中的隐藏层。

    Motivating our use of self-attention we consider three desiderata.

在阐述使用self-attention的动机时,我们考虑三个需求。

    One is the total computational complexity per layer.

    一个是每层的计算复杂度。

    Another is the amount of computation that can be parallelized, as measured by the minimum number of sequential operations required.

    另一个是可以并行化的计算量,由所需的最小顺序操作数来衡量。

    The third is the path length between long-range dependencies in the network.

    第三个是网络中远程依赖之间的路径长度。

    Learning long-range dependencies is a key challenge in many sequence transduction tasks.

    在许多序列转换任务中,学习长期依赖关系是一个关键的挑战。

    One key factor affecting the ability to learn such dependencies is the length of the paths forward and backward signals have to traverse in the network.

    影响学习这种依赖关系能力的一个关键因素是在网络中前进和后退信号必须经过的路径的长度。

    The shorter these paths between any combination of positions in the input and output sequences, the easier it is to learn long-range dependencies [12].

    输入和输出序列中任意位置组合之间的这些路径越短,就越容易学习长期依赖关系[12]。

    Hence we also compare the maximum path length between any two input and output positions in networks composed of the different layer types.

    因此,我们还比较了由不同层类型组成的网络中任意两个输入和输出位置之间的最大路径长度。

    As noted in Table 1, a self-attention layer connects all positions with a constant number of sequentially executed operations, whereas a recurrent layer requires O(n) sequential operations.

    如表1所示,self-attention层将所有位置连接到一个常数数量的连续执行操作,而循环层需要O(n)个连续操作。

    In terms of computational complexity, self-attention layers are faster than recurrent layers when the sequence length n is smaller than the representation dimensionality d, which is most often the case with sentence representations used by state-of-the-art models in machine translations, such as word-piece [38] and byte-pair [31] representations.

在计算复杂度方面,当序列长度n小于表示维度d时,self-attention层比递归层更快;而机器翻译中最先进模型所使用的句子表示(例如word-piece[38]和byte-pair[31]表示)通常正是这种情况。

    To improve computational performance for tasks involving very long sequences, self-attention could be restricted to considering only a neighborhood of size r in the input sequence centered around the respective output position.

    为了提高涉及非常长的序列的任务的计算性能,可以将self-attention限制为只考虑输入序列中以各自的输出位置为中心的大小为r的邻域。

    This would increase the maximum path length to O(n/r).

    这将把最大路径长度增加到O(n/r)。

    We plan to investigate this approach further in future work.

    我们计划在未来的工作中进一步研究这种方法。

A single convolutional layer with kernel width k < n does not connect all pairs of input and output positions. Doing so requires a stack of O(n/k) convolutional layers in the case of contiguous kernels, or O(log_k(n)) in the case of dilated convolutions [18], increasing the length of the longest paths between any two positions in the network.

一个核宽度为k < n的卷积层并不能连接所有的输入和输出位置对。要做到这一点,在连续核的情况下需要堆叠O(n/k)个卷积层,在扩张卷积[18]的情况下需要O(log_k(n))个,这会增加网络中任意两个位置之间最长路径的长度。

    Convolutional layers are generally more expensive than recurrent layers, by a factor of k.

    卷积层的复杂度通常比递归层的复杂度高k倍。

Separable convolutions [6], however, decrease the complexity considerably, to O(k·n·d + n·d²).

然而,可分离卷积[6]将复杂度大大降低到O(k·n·d + n·d²)。

    Even with k = n, however, the complexity of a separable convolution is equal to the combination of a self-attention layer and a point-wise feed-forward layer, the approach we take in our model.

然而,即使k = n,可分离卷积的复杂度也等于一个self-attention层和一个逐点前馈层的组合,即我们在模型中采用的方法。

    As side benefit, self-attention could yield more interpretable models.

    作为附带的好处,self-attention可以产生更多可解释的模型。

    We inspect attention distributions from our models and present and discuss examples in the appendix.

我们检查了模型中的注意力分布,并在附录中给出并讨论了示例。

    Not only do individual attention heads clearly learn to perform different tasks, many appear to exhibit behavior related to the syntactic and semantic structure of the sentences.

各个注意力头不仅清楚地学会了执行不同的任务,许多注意力头还表现出与句子的句法和语义结构相关的行为。

    5 Training

    This section describes the training regime for our models.

    本节描述我们的模型的训练机制。

    5.1 Training Data and Batching

    We trained on the standard WMT 2014 English-German dataset consisting of about 4.5 million sentence pairs.

我们在标准的WMT 2014英德数据集上进行训练,该数据集包含大约450万个句子对。

    Sentences were encoded using byte-pair encoding [3], which has a shared source- target vocabulary of about 37000 tokens.

句子使用字节对编码(byte-pair encoding)[3]进行编码,源语言和目标语言共享一个约37000个token的词汇表。

    For English-French, we used the significantly larger WMT 2014 English-French dataset consisting of 36M sentences and split tokens into a 32000 word-piece vocabulary [38].

对于英法翻译,我们使用了大得多的WMT 2014英法数据集(包含3600万个句子),并使用32000个word-piece的词汇表来切分token[38]。

    Sentence pairs were batched together by approximate sequence length.

句子对按近似的序列长度分批在一起。

    Each training batch contained a set of sentence pairs containing approximately 25000 source tokens and 25000 target tokens.

    每个训练批包含一组句子对,其中包含大约25000个源标记和25000个目标标记。

    5.2 Hardware and Schedule

    We trained our models on one machine with 8 NVIDIA P100 GPUs.

    我们在一台有8台NVIDIA P100 GPU的机器上训练我们的模型。

    For our base models using the hyperparameters described throughout the paper, each training step took about 0.4 seconds.

    对于使用本文中描述的超参数的基本模型,每个训练步骤大约花费0.4秒。

    We trained the base models for a total of 100,000 steps or 12 hours.

    我们总共训练了10万步或12小时的基本模型。

For our big models (described on the bottom line of Table 3), step time was 1.0 seconds.

对于我们的大模型(见表3的最后一行),每步时间为1.0秒。

    The big models were trained for 300,000 steps (3.5 days).

大模型训练了300,000步(3.5天)。

    5.3 Optimizer

We used the Adam optimizer [20] with β1 = 0.9, β2 = 0.98 and ε = 10^−9.

We varied the learning rate over the course of training, according to the formula:

我们使用Adam优化器[20],其中β1 = 0.9,β2 = 0.98,ε = 10^−9。我们根据以下公式在训练过程中调整学习率:

lrate = d_model^(−0.5) · min(step_num^(−0.5), step_num · warmup_steps^(−1.5))

    This corresponds to increasing the learning rate linearly for the first warmup_steps training steps, and decreasing it thereafter proportionally to the inverse square root of the step number.

    这对应于对第一个warmup_steps训练步骤线性增加学习率,然后按步骤数的平方根的倒数比例减少学习率。

    We used warmup_steps = 4000.

    我们使用warmup_steps = 4000。
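A small Python sketch of the schedule in the formula above, with d_model = 512 and warmup_steps = 4000: the rate grows linearly during warmup and then decays with the inverse square root of the step number. The function name is hypothetical.

```python
def transformer_lrate(step_num, d_model=512, warmup_steps=4000):
    # lrate = d_model^(-0.5) * min(step_num^(-0.5), step_num * warmup_steps^(-1.5))
    step_num = max(step_num, 1)  # avoid division by zero at step 0
    return d_model ** -0.5 * min(step_num ** -0.5, step_num * warmup_steps ** -1.5)

for s in (1, 1000, 4000, 100000):
    print(s, f"{transformer_lrate(s):.2e}")
# The rate increases linearly until step 4000, then decays as 1/sqrt(step_num).
```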

    5.4 Regularization

    We employ three types of regularization during training:

我们在训练中采用了三种类型的正则化:

    Residual Dropout

    We apply dropout [33] to the output of each sub-layer, before it is added to the sub-layer input and normalized.

    我们将dropout[33]应用于每个子层的输出,然后将其添加到子层的输入并进行规范化。

    In addition, we apply dropout to the sums of the embeddings and the positional encodings in both the encoder and decoder stacks.

    另外,我们将dropout应用于编码器和解码器堆栈中的嵌入和位置编码。

    For the base model, we use a rate of P_drop = 0.1.

对于基础模型,我们使用P_drop = 0.1的dropout率。

    Table 2: The Transformer achieves better BLEU scores than previous state-of-the-art models on the English-to-German and English-to-French newstest2014 tests at a fraction of the training cost.

表2:在newstest2014英译德和英译法测试中,Transformer以之前最先进模型训练成本的一小部分,获得了比它们更高的BLEU分数。

    Label Smoothing

During training, we employed label smoothing of value ε_ls = 0.1 [36].

在训练中,我们使用了值为ε_ls = 0.1的标签平滑[36]。

    This hurts perplexity, as the model learns to be more unsure, but improves accuracy and BLEU score.

这会损害困惑度(perplexity),因为模型学会了更加不确定,但提高了准确率和BLEU分数。
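A hedged NumPy sketch of one common form of label smoothing with ε_ls = 0.1: the true token keeps probability 1 − ε_ls and the remaining mass is spread uniformly over the other vocabulary entries. The excerpt does not spell out the exact smoothing distribution used, so treat this as an illustration.

```python
import numpy as np

def smooth_labels(target_ids, vocab_size, eps_ls=0.1):
    # The true token keeps probability 1 - eps_ls; the remaining eps_ls is spread
    # uniformly over the other vocabulary entries.
    dist = np.full((len(target_ids), vocab_size), eps_ls / (vocab_size - 1))
    dist[np.arange(len(target_ids)), target_ids] = 1.0 - eps_ls
    return dist

print(smooth_labels(np.array([2, 0]), vocab_size=5).round(3))
# [[0.025 0.025 0.9   0.025 0.025]
#  [0.9   0.025 0.025 0.025 0.025]]
```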

    6 Results

    6.1 Machine Translation

    On the WMT 2014 English-to-German translation task, the big transformer model (Transformer (big) in Table 2) outperforms the best previously reported models (including ensembles) by more than 2.0 BLEU, establishing a new state-of-the-art BLEU score of 28.4.

在WMT 2014英德翻译任务中,大的Transformer模型(表2中的Transformer (big))比之前报道的最佳模型(包括集成模型)高出2.0 BLEU以上,创造了28.4的新的最高BLEU分数。

    The configuration of this model is listed in the bottom line of Table 3.

    这个模型的配置列在表3的底部。

    Training took 3.5 days on 8 P100 GPUs.

训练在8个P100 GPU上耗时3.5天。

    Even our base model surpasses all previously published models and ensembles, at a fraction of the training cost of any of the competitive models.

甚至我们的基础模型也超过了所有先前发布的模型和集成模型,而训练成本只是任何竞争模型的一小部分。

    On the WMT 2014 English-to-French translation task, our big model achieves a BLEU score of 41.0, outperforming all of the previously published single models, at less than 1/4 the training cost of the previous state-of-the-art model.

在WMT 2014英法翻译任务中,我们的大模型获得了41.0的BLEU分数,超过了之前发布的所有单模型,而训练成本不到之前最先进模型的1/4。

The Transformer (big) model trained for English-to-French used dropout rate P_drop = 0.1, instead of 0.3.

用于英译法训练的Transformer (big)模型使用的dropout率为P_drop = 0.1,而不是0.3。

    For the base models, we used a single model obtained by averaging the last 5 checkpoints, which were written at 10-minute intervals.

    对于基本模型,我们使用了一个单独的模型,该模型通过平均最后5个检查点得到,每隔10分钟写一次。

    For the big models, we averaged the last 20 checkpoints.

    对于大型模型,我们平均使用最后20个检查点。
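A minimal sketch, assuming checkpoints are stored as name-to-array dictionaries, of the checkpoint averaging described above: the final model takes the element-wise mean of each parameter over the last 5 (base) or 20 (big) checkpoints.

```python
import numpy as np

def average_checkpoints(checkpoints):
    # Each checkpoint is a dict mapping parameter name -> NumPy array;
    # the averaged model takes the element-wise mean of every parameter.
    names = checkpoints[0].keys()
    return {name: np.mean([ckpt[name] for ckpt in checkpoints], axis=0) for name in names}

# Toy example: average the last 5 "checkpoints" of a single 2x2 weight matrix.
ckpts = [{"w": np.full((2, 2), float(i))} for i in range(5)]
print(average_checkpoints(ckpts)["w"])  # [[2. 2.] [2. 2.]]
```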

    We used beam search with a beam size of 4 and length penalty α = 0.6 [38].

我们使用束搜索(beam search),束大小为4,长度惩罚α = 0.6 [38]。

    These hyperparameters were chosen after experimentation on the development set. We set the maximum output length during inference to input length + 50, but terminate early when possible [38].

    这些超参数是在开发集上进行实验后选择的。我们将推理时的最大输出长度设置为输入长度+ 50,但在可能的情况下提前终止[38]。

    Table 2 summarizes our results and compares our translation quality and training costs to other model architectures from the literature.

表2总结了我们的结果,并将我们的翻译质量和训练成本与文献中的其他模型架构进行了比较。

We estimate the number of floating point operations used to train a model by multiplying the training time, the number of GPUs used, and an estimate of the sustained single-precision floating-point capacity of each GPU.

我们通过将训练时间、使用的GPU数量和每个GPU持续单精度浮点运算能力的估计值相乘,来估计训练一个模型所用的浮点运算次数。

    6.2 Model Variations

    To evaluate the importance of different components of the Transformer, we varied our base model in different ways, measuring the change in performance on English-to-German translation on the development set, newstest2013.

    为了评估Transformer不同组件的重要性,我们以不同的方式改变了我们的基本模型,在开发集newstest2013上测量英语到德语翻译的性能变化。

    We used beam search as described in the previous section, but no checkpoint averaging.

    我们使用前一节描述的波束搜索,但没有检查点平均。

    We present these results in Table 3.

    我们在表3中给出了这些结果。

    In Table 3 rows (A), we vary the number of attention heads and the attention key and value dimensions, keeping the amount of computation constant, as described in Section 3.2.2.

    在表3行(A)中,我们改变了注意头的数量、注意键和值维度,保持计算量不变,如3.2.2节所述。

    While single-head attention is 0.9 BLEU worse than the best setting, quality also drops off with too many heads.

单头注意力比最佳设置差0.9 BLEU,但头数过多时质量也会下降。

    6.3 English Constituency Parsing

    To evaluate if the Transformer can generalize to other tasks we performed experiments on English constituency parsing.

为了评估Transformer是否可以推广到其他任务,我们进行了英语成分句法分析的实验。

    This task presents specific challenges: the output is subject to strong structural constraints and is significantly longer than the input.

    这个任务提出了特定的挑战:输出受制于强大的结构约束,并且比输入长得多。

    Furthermore, RNN sequence-to-sequence models have not been able to attain state-of-the-art results in small-data regimes [37]

    此外,RNN序列到序列模型还不能在小数据环境中获得最新的结果[37]。

We trained a 4-layer transformer with d_model = 1024 on the Wall Street Journal (WSJ) portion of the Penn Treebank [25], about 40K training sentences.

我们在Penn Treebank[25]的华尔街日报(WSJ)部分(约40K个训练句子)上训练了一个d_model = 1024的4层Transformer。

We also trained it in a semi-supervised setting, using the larger high-confidence and BerkleyParser corpora with approximately 17M sentences [37].

我们还在半监督设置下训练了它,使用了更大的高置信度语料和BerkleyParser语料,共约1700万个句子[37]。

    We used a vocabulary of 16K tokens for the WSJ only setting and a vocabulary of 32K tokens for the semi-supervised setting.

    我们在华尔街日报的设置中使用了16K的标记词汇,在半监督设置中使用了32K的标记词汇。

    We performed only a small number of experiments to select the dropout, both attention and residual (section 5.4), learning rates and beam size on the Section 22 development set, all other parameters remained unchanged from the English-to-German base translation model.

我们只在Section 22开发集上进行了少量实验来选择dropout(包括注意力dropout和残差dropout,见5.4节)、学习率和束大小,所有其他参数与英译德基础翻译模型保持不变。

    During inference, we increased the maximum output length to input length + 300.

    在推理期间,我们将最大输出长度增加到输入长度+ 300。

    We used a beam size of 21 and α = 0.3 for both WSJ only and the semi-supervised setting.

对于仅WSJ和半监督两种设置,我们都使用21的束大小和α = 0.3。

Our results in Table 4 show that despite the lack of task-specific tuning our model performs surprisingly well, yielding better results than all previously reported models with the exception of the Recurrent Neural Network Grammar [8].

我们在表4中的结果表明,尽管缺乏针对该任务的调优,我们的模型仍表现得出奇地好,除了递归神经网络语法[8]之外,其结果优于所有先前报告的模型。

In contrast to RNN sequence-to-sequence models [37], the Transformer outperforms the Berkeley-Parser [29] even when training only on the WSJ training set of 40K sentences.

与RNN序列到序列模型[37]相比,即使只在WSJ的40K句子训练集上训练,Transformer的性能也优于Berkeley-Parser[29]。

7 Conclusion

    In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention.

在这项工作中,我们提出了Transformer,第一个完全基于注意力的序列转换模型,用多头自注意力取代了编码器-解码器架构中最常用的递归层。

    For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers.

    对于翻译任务,Transformer的训练速度比基于递归或卷积层的架构要快得多。

    On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art.

在WMT 2014英德和WMT 2014英法翻译任务中,我们都达到了新的最高水平。

    In the former task our best model outperforms even all previously reported ensembles.

    在前一个任务中,我们的最佳模型甚至优于所有先前报告的集成。

    We are excited about the future of attention-based models and plan to apply them to other tasks.

    我们对基于注意力的模型的未来感到兴奋,并计划将其应用于其他任务。

    We plan to extend the Transformer to problems involving input and output modalities other than text and to investigate local, restricted attention mechanisms to efficiently handle large inputs and outputs such as images, audio and video.

我们计划将Transformer扩展到涉及文本以外的输入和输出模态的问题,并研究局部的、受限的注意力机制,以有效地处理图像、音频和视频等大型输入和输出。

Making generation less sequential is another research goal of ours.

让生成过程更少地依赖顺序是我们的另一个研究目标。

The code we used to train and evaluate our models is available at https://github.com/tensorflow/tensor2tensor.

我们用来训练和评估模型的代码可以在 https://github.com/tensorflow/tensor2tensor 找到。

    Acknowledgements We are grateful to Nal Kalchbrenner and Stephan Gouws for their fruitful comments, corrections and inspiration.

    感谢Nal Kalchbrenner和Stephan Gouws非常有用的评论、更正和灵感。

    References

    [1] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint

    层的归一化

    [2] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014.

    神经机器翻译是通过联合学习来对齐和翻译的

    [3] Denny Britz, Anna Goldie, Minh-Thang Luong, and Quoc V. Le. Massive exploration of neural machine translation architectures. CoRR, abs/1703.03906, 2017.

    对神经机器翻译结构的大量探索

    [4] Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine reading. arXiv preprint arXiv:1601.06733, 2016.

    机器阅读的长期短期记忆网络

    [5] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. CoRR, abs/1406.1078, 2014.

    使用rnn编解码器学习短语表示,用于统计机器翻译

    [6] Francois Chollet. Xception: Deep learning with depthwise separable convolutions. arXiv preprint arXiv:1610.02357, 2016.

    深度分离卷积学习

    [7] Junyoung Chung, Çaglar Gülçehre, Kyunghyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. CoRR, abs/1412.3555, 2014.

    门控递归神经网络在序列建模中的经验评价

    [8] Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. Recurrent neural network grammars. In Proc. of NAACL, 2016.

    递归神经网络语法

    [9] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122v2, 2017.

    卷积序列到序列学习

    [10] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.

    利用递归神经网络生成序列

    [11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.

    用于图像识别的深度残差学习

[12] Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and Jürgen Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies, 2001.

    递归网络中的梯度流: 学习长期依赖关系的困难

    [13] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.

    长时间的短期记忆

[14] Zhongqiang Huang and Mary Harper. Self-training PCFG grammars with latent annotations across languages. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 832–841. ACL, August 2009.

    具有跨语言隐藏注释的自训练PCFG语法

    [15] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.

    探索语言建模的局限性

    [16] Łukasz Kaiser and Samy Bengio. Can active memory replace attention? In Advances in Neural Information Processing Systems, (NIPS), 2016.

主动记忆能取代注意力吗?

[17] Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. In International Conference on Learning Representations (ICLR), 2016.

    神经GPUs学习算法

    [18] Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099v2, 2017.

    线性时间的神经机器翻译

    [19] Yoon Kim, Carl Denton, Luong Hoang, and Alexander M. Rush. Structured attention networks. In International Conference on Learning Representations, 2017.

    结构化的注意力网络

    [20] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.

    Adam: 一种随机最优化的方法

    [21] Oleksii Kuchaiev and Boris Ginsburg. Factorization tricks for LSTM networks. arXiv preprint arXiv:1703.10722, 2017.

    LSTM网络的因数分解技巧

    [22] Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130, 2017.

一个结构化的self-attentive句子嵌入

    [23] Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114, 2015.

    多任务序列到序列学习

    [24] Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention based neural machine translation. arXiv preprint arXiv:1508.04025, 2015.

    注意神经机器翻译的有效方法

    [25] Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313–330, 1993.

    建立一个大型的英语注释语料库:the penn treebank

    [26] David McClosky, Eugene Charniak, and Mark Johnson. Effective self-training for parsing. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 152–159. ACL, June 2006.

    有效的解析自我训练

    [27] Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention model. In Empirical Methods in Natural Language Processing, 2016.

    一个可分解的注意力模型

    [28] Romain Paulus, Caiming Xiong, and Richard Socher. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304, 2017.

    一种用于抽象摘要的深度增强模型

    [29] Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 433–440. ACL, July 2006.

    学习准确、紧凑、可解释的树状注释

[30] Ofir Press and Lior Wolf. Using the output embedding to improve language models. arXiv preprint arXiv:1608.05859, 2016.

    使用输出嵌入来改进语言模型

    [31] Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909, 2015.

    具有亚词单位的罕见词的神经机器翻译

    [32] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017.

    大得惊人的神经网络:稀疏门控专家混合层。

    [33] Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.

    Dropout:一种防止神经网络过拟合的简单方法

    [34] Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2440–2448. Curran Associates, Inc., 2015.

    端到端记忆网络

    [35] Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112, 2014.

    利用神经网络进行序列到序列的学习

    [36] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. CoRR, abs/1512.00567, 2015.

    重新思考计算机视觉的初始架构

    [37] Vinyals & Kaiser, Koo, Petrov, Sutskever, and Hinton. Grammar as a foreign language. In Advances in Neural Information Processing Systems, 2015.

    语法作为一门外语

[38] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.

    谷歌的神经机器翻译系统:弥合人类和机器翻译之间的鸿沟

    [39] Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. Deep recurrent models with fast-forward connections for neural machine translation. CoRR, abs/1606.04199, 2016.

    具有快速前向连接的神经机器翻译深度递归模型

    [40] Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. Fast and accurate shift-reduce constituent parsing. In Proceedings of the 51st Annual Meeting of the ACL (Volume 1: Long Papers), pages 434–443. ACL, August 2013.

    快速而准确的shift-reduce组成解析。

    Attention Visualizations

    Figure 3: An example of the attention mechanism following long-distance dependencies in the encoder self-attention in layer 5 of 6.

图3:编码器自注意力(6层中的第5层)中注意力机制跟随长距离依赖的一个例子。

Many of the attention heads attend to a distant dependency of the verb 'making', completing the phrase 'making...more difficult'.

许多注意力头都关注动词'making'的一个远距离依赖,补全短语'making...more difficult'。

    Attentions here shown only for the word ‘making’.

这里只展示了'making'一词的注意力。

    Different colors represent different heads.

不同的颜色代表不同的注意力头。

    Best viewed in color.

    彩色效果最佳。

    Figure 4: Two attention heads, also in layer 5 of 6, apparently involved in anaphora resolution.

图4:两个注意力头(同样位于6层中的第5层),显然参与了指代消解(anaphora resolution)。

    Top: Full attentions for head 5.

上:头5的完整注意力。

    Bottom: Isolated attentions from just the word ‘its’ for attention heads 5 and 6.

下:仅从'its'一词出发的头5和头6的注意力。

    Note that the attentions are very sharp for this word.

注意,对于这个词,注意力非常集中。

    Figure 5: Many of the attention heads exhibit behaviour that seems related to the structure of the sentence.

    图5:许多“注意头”表现出的行为似乎与句子结构有关。

    We give two such examples above, from two different heads from the encoder self-attention at layer 5 of 6.

上面给出了两个这样的例子,来自编码器自注意力(6层中的第5层)中两个不同的头。

    The heads clearly learned to perform different tasks.

这些注意力头显然学会了执行不同的任务。
