torch.nn.LSTM:
Parameters when constructing an LSTM:
input_size: the number of expected features in the input `x`.
hidden_size: the number of features in the hidden state `h`.
num_layers: number of recurrent layers. To get better results, two or more LSTMs can be stacked, increasing the depth of the network. For example, setting ``num_layers=2`` means stacking two LSTMs together to form a stacked LSTM, with the second LSTM taking in outputs of the first LSTM and computing the final results. Default: 1.
Note: if num_layers is set to more than one layer, the num_layers used in the shape (D*num_layers, N, Hout) of the initial h_0 and c_0 tensors must match it, otherwise an error is raised.
bias: whether to use bias weights such as `b_ih` and `b_hh`. Default: True.
batch_first: whether the first dimension of the input and output tensors is the batch size. Default: False. If True, the input and output tensors are laid out as (batch, seq, feature), e.g. torch.Size([64, 30, 512]).
dropout: dropout probability. If non-zero, introduces a Dropout layer on the outputs of each LSTM layer except the last layer, with dropout probability equal to `dropout`. Default: 0.
bidirectional: if True, the network becomes a bidirectional LSTM. Default: ``False``. A constructor sketch using these parameters is shown below.
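To make these parameters concrete, here is a minimal constructor sketch. The sizes below (512 features, 2 layers, dropout 0.1) are illustrative assumptions only, not values taken from the example later in this post.

import torch
from torch import nn

lstm = nn.LSTM(
    input_size=512,       # features per time step (e.g. embedding_dim)
    hidden_size=512,      # features in the hidden state h
    num_layers=2,         # stacked LSTM: layer 2 consumes layer 1's outputs
    bias=True,            # learn b_ih / b_hh
    batch_first=True,     # tensors laid out as (batch, seq, feature)
    dropout=0.1,          # Dropout after every layer except the last
    bidirectional=False,  # D = 1
)
# With num_layers=2, the initial states must use the same num_layers,
# i.e. h_0 and c_0 must have shape (1*2, batch, 512).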
Input: input, (h_0, c_0)
input: when batch_first=False, a tensor of shape (L, N, Hin), i.e. (seq, batch, feature); when batch_first=True, a tensor of shape (N, L, Hin), i.e. (batch, seq, feature). L/seq: sequence length; N/batch: batch size; Hin/feature: the dimension of each word or character after the Embedding layer (embedding_dim).
(h_0, c_0): h_0 is the initial value of the hidden state, c_0 is the initial value of the cell (memory) state.
h_0: tensor of shape (D*num_layers, N, Hout) containing the initial hidden state for each element in the batch. Defaults to zeros if (h_0, c_0) is not provided. Here D is num_directions, the number of directions: if bidirectional is set to True, num_directions=2, so it indicates whether the LSTM is unidirectional or bidirectional; by default num_directions=1. The source docstring quoted above does not explain what num_directions is, but it is asked and answered on the PyTorch forum, see: https://discuss.pytorch.org/t/what-is-num-directions-of-nn-lstm/11663
num_layers is the number of LSTM layers, N is the batch size, and Hout is the size of the initial hidden state (hidden_size).
c_0: tensor of shape (D*num_layers, N, Hcell) containing the initial cell state for each element in the batch. Defaults to zeros if (h_0, c_0) is not provided. D*num_layers and N have the same meaning as for h_0; Hcell is the size of the initial cell (memory) state.
Suppose the sequence length is 10, the batch size is 32, and the embedding dimension is 512. As the figure illustrates, in the shapes of h_0 and c_0 the batch size must match the input, and the last dimension is the hidden state size (chosen equal to the embedding dimension in this example). Because this is a sequence model, processing starts from the first time step, so the initial states carry no sequence dimension. For a multi-layer LSTM one such state block is needed per layer, which is why the first dimension of h_0 and c_0 is multiplied by num_layers.
import torch

num_layers = 1
batch_size = 32
embedding_dim = 512
# h_0 and c_0: zero tensors of shape (D*num_layers, N, Hout); D = 1 here
first_hidden = (torch.zeros(1 * num_layers, batch_size, embedding_dim),
                torch.zeros(1 * num_layers, batch_size, embedding_dim))
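If the LSTM were stacked deeper, only the leading dimension of these state tensors would change. A short sketch with assumed values: with num_layers=2 (still unidirectional, so D=1) the initial states become

h_0 = torch.zeros(1 * 2, batch_size, embedding_dim)   # torch.Size([2, 32, 512])
c_0 = torch.zeros(1 * 2, batch_size, embedding_dim)   # torch.Size([2, 32, 512])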
Output: output, (h_n, c_n)
output: when batch_first=False, a tensor of shape (L, N, D*Hout), i.e. (seq, batch, D*feature); when batch_first=True, a tensor of shape (N, L, D*Hout), i.e. (batch, seq, D*feature). L/seq: sequence length; N/batch: batch size; D*Hout/D*feature: the size of the output features, where D is num_directions, the number of directions. (Tensor containing the output features (h_t) from the last layer of the LSTM, for each t. If a torch.nn.utils.rnn.PackedSequence has been given as the input, the output will also be a packed sequence.)
For a given sequence (the blue region in the figure), its final output feature is the red block: the positions above the red block are intermediate states, because at those steps the sentence has not been fully processed; only the last time step has seen the whole sentence and therefore carries the semantics of the entire sentence.
h_n: tensor of shape (D*num_layers, N, Hout) containing the final hidden state for each element in the batch. D is num_directions, the number of directions; num_layers is the number of stacked layers; N is the batch size; Hout is the hidden state feature size.
c_n: tensor of shape (D*num_layers, N, Hcell) containing the final cell state, i.e. the memory cell, for each element in the batch. D is num_directions, the number of directions; num_layers is the number of stacked layers; N is the batch size; Hcell is the cell (memory) feature size.
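A quick shape check (a sketch with assumed sizes) makes the D factor visible: with bidirectional=True, D = 2, so the last dimension of output doubles, while h_n and c_n keep Hout/Hcell per direction.

import torch
from torch import nn

seq_len, batch_size, embedding_dim, hidden_size = 10, 32, 512, 256
lstm = nn.LSTM(input_size=embedding_dim, hidden_size=hidden_size,
               num_layers=1, bidirectional=True)      # D = 2

x = torch.randn(seq_len, batch_size, embedding_dim)   # (L, N, Hin), batch_first=False
output, (h_n, c_n) = lstm(x)                          # h_0, c_0 default to zeros

print(output.shape)   # torch.Size([10, 32, 512])  -> (L, N, D*Hout) = (10, 32, 2*256)
print(h_n.shape)      # torch.Size([2, 32, 256])   -> (D*num_layers, N, Hout)
print(c_n.shape)      # torch.Size([2, 32, 256])   -> (D*num_layers, N, Hcell)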
Example:
1) Define the parameters:
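The original code for this step is not reproduced in the text; the following is a hedged reconstruction using the sizes from the discussion above (seq_len=10, batch_size=32, embedding_dim=hidden_size=512, num_layers=1); the vocabulary size is an assumption.

import torch
from torch import nn

seq_len = 10          # L: sequence length
batch_size = 32       # N: batch size
embedding_dim = 512   # Hin: feature size after embedding
hidden_size = 512     # Hout: hidden state size (equal to embedding_dim here)
num_layers = 1
vocab_size = 10000    # assumed vocabulary size for the Embedding layer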
2) Initialize the hidden state h_0 and the memory cell c_0
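Continuing the sketch above: zero-initialize h_0 and c_0 with shape (D*num_layers, N, Hout), where D = 1 because the LSTM is unidirectional.

first_hidden = (torch.zeros(1 * num_layers, batch_size, hidden_size),
                torch.zeros(1 * num_layers, batch_size, hidden_size))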
3) Call the LSTM network (see the sketch after the description of etexts below)
etexts.shape and etexts:
etexts is the tensor produced by the embedding layer.
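Continuing the sketch: build etexts by embedding a batch of token ids and define the LSTM; the token ids (texts) are random placeholders assumed for illustration.

embedding = nn.Embedding(vocab_size, embedding_dim)
lstm = nn.LSTM(input_size=embedding_dim, hidden_size=hidden_size,
               num_layers=num_layers)                # batch_first=False by default

texts = torch.randint(0, vocab_size, (seq_len, batch_size))   # (L, N) token ids
etexts = embedding(texts)                                     # (L, N, Hin)
print(etexts.shape)   # torch.Size([10, 32, 512])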
4) Run it and inspect the results
lstm_output.shape and lstm_output:
last_hidden's type and last_hidden:
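Continuing the sketch: run the forward pass with the initial states and print the shapes, which follow the (L, N, D*Hout) and (D*num_layers, N, Hout/Hcell) layouts described above.

lstm_output, (last_hidden, last_cell) = lstm(etexts, first_hidden)

print(lstm_output.shape)   # torch.Size([10, 32, 512]) -> (L, N, D*Hout)
print(type(last_hidden))   # <class 'torch.Tensor'>
print(last_hidden.shape)   # torch.Size([1, 32, 512])  -> (D*num_layers, N, Hout)
print(last_cell.shape)     # torch.Size([1, 32, 512])  -> (D*num_layers, N, Hcell)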
LSTM: Long Short-Term Memory network
Forget gate: pushes values toward 0. Formula:
F_t = \sigma(X_t W_{xf} + H_{t-1} W_{hf} + b_f)
Input gate: decides whether to ignore the input data. Formula:
I_t = \sigma(X_t W_{xi} + H_{t-1} W_{hi} + b_i)
Output gate: decides whether to use the hidden state. Formula:
O_t = \sigma(X_t W_{xo} + H_{t-1} W_{ho} + b_o)
\sigma denotes a fully connected (FC) layer followed by the sigmoid activation function.
Candidate memory cell. Formula:
\tilde{C}_t = \tanh(X_t W_{xc} + H_{t-1} W_{hc} + b_c)
Memory cell. Formula:
C_t = F_t \odot C_{t-1} + I_t \odot \tilde{C}_t
As can be seen, when F_t equals 0, the memory from time t-1 is dropped, which amounts to forgetting the previous memory; when I_t equals 0, the memory at the current time step (i.e. the candidate memory cell \tilde{C}_t) is ignored.
Hidden state. Formula:
H_t = O_t \odot \tanh(C_t)
As can be seen, when O_t equals 0, the hidden state information becomes 0, which amounts to resetting it.
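The update rules above can be written out directly as code. The following is a minimal single-step sketch for illustration only; the weight names and shapes are assumptions, not the internals of torch.nn.LSTM.

import torch

def lstm_step(x_t, h_prev, c_prev, params):
    # x_t: (N, Hin); h_prev, c_prev: (N, Hout)
    (W_xf, W_hf, b_f, W_xi, W_hi, b_i,
     W_xo, W_ho, b_o, W_xc, W_hc, b_c) = params
    f_t = torch.sigmoid(x_t @ W_xf + h_prev @ W_hf + b_f)    # forget gate F_t
    i_t = torch.sigmoid(x_t @ W_xi + h_prev @ W_hi + b_i)    # input gate I_t
    o_t = torch.sigmoid(x_t @ W_xo + h_prev @ W_ho + b_o)    # output gate O_t
    c_tilde = torch.tanh(x_t @ W_xc + h_prev @ W_hc + b_c)   # candidate memory
    c_t = f_t * c_prev + i_t * c_tilde                       # memory cell C_t
    h_t = o_t * torch.tanh(c_t)                              # hidden state H_t
    return h_t, c_t

# Toy usage with assumed sizes: Hin=4, Hout=3, batch N=2
Hin, Hout, N = 4, 3, 2
def gate_params():
    return [torch.randn(Hin, Hout) * 0.1, torch.randn(Hout, Hout) * 0.1, torch.zeros(Hout)]
params = gate_params() + gate_params() + gate_params() + gate_params()
h_t, c_t = lstm_step(torch.randn(N, Hin), torch.zeros(N, Hout), torch.zeros(N, Hout), params)
print(h_t.shape, c_t.shape)   # torch.Size([2, 3]) torch.Size([2, 3])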