By Andreas Olesen


2018-12-07 13:15:07

I am trying to understand the Keras layers better. I am working on a sequence-to-sequence model where I embed a sentence and pass it to an LSTM that returns sequences. After that, I want to apply a Dense layer to each time step (word) in the sentence, and it seems like TimeDistributed does the job for three-dimensional tensors like this one.

In my understanding, Dense layers only work on two-dimensional tensors, and TimeDistributed just applies the same Dense layer to every time step in three dimensions. Could one then not simply flatten the time steps, apply a Dense layer, and reshape to obtain the same result, or are these not equivalent in some way that I am missing?

3 comments

@yuvaraj8blr 2019-09-09 14:16:52

Adding to the above answers, here are a few pictures comparing the output shapes of the two layers, so using one of these layers after an LSTM (for example) would behave differently. [image: comparison of the two layers' output shapes]

@Andrey Kite Gorin 2019-02-02 13:35:30

A Dense layer can act on a tensor of any rank, not just rank 2. And I think the TimeDistributed wrapper does not change anything about how the Dense layer acts: applying a Dense layer directly to a rank-3 tensor does exactly the same thing as applying a TimeDistributed wrapper around the Dense layer, since in both cases the kernel is applied along the last axis. Here is an illustration:

from tensorflow.keras.layers import Dense, TimeDistributed
from tensorflow.keras.models import Sequential

model = Sequential()
model.add(Dense(5, input_shape=(50, 10)))
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_5 (Dense)              (None, 50, 5)             55        
=================================================================
Total params: 55
Trainable params: 55
Non-trainable params: 0
_________________________________________________________________
model1 = Sequential()
model1.add(TimeDistributed(Dense(5), input_shape=(50, 10)))
model1.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
time_distributed_3 (TimeDist (None, 50, 5)             55        
=================================================================
Total params: 55
Trainable params: 55
Non-trainable params: 0
_________________________________________________________________
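The equivalence can also be checked numerically. Here is a minimal NumPy sketch (the 50×10 shape is taken from the model summaries above; the kernel and bias values are arbitrary stand-ins for a trained Dense layer's weights) showing that applying one shared kernel to the whole rank-3 input is the same as applying it time step by time step:

```python
import numpy as np

rng = np.random.default_rng(0)

# One sample: 50 time steps with 10 features each, as in the models above.
x = rng.standard_normal((50, 10))
kernel = rng.standard_normal((10, 5))   # shared Dense kernel
bias = rng.standard_normal(5)

# What Dense does on a higher-rank input: matmul over the last axis.
y_whole = x @ kernel + bias

# What TimeDistributed(Dense) does: the same kernel at every time step.
y_per_step = np.stack([x[t] @ kernel + bias for t in range(50)])

print(y_whole.shape)                     # (50, 5)
print(np.allclose(y_whole, y_per_step))  # True
```

Either way there are only 10·5 + 5 = 55 parameters, matching both summaries.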

@jdehesa 2018-12-09 22:59:42

Imagine you have a batch of 4 time steps, each containing a 3-element vector. Let's represent that with this:

[figure: Input batch]

Now you want to transform this batch using a dense layer, so you get 5 features per time step. The output of the layer can be represented as something like this:

[figure: Output batch]

You are considering two options: a TimeDistributed dense layer, or flattening the input, applying a dense layer, and reshaping back into time steps.

In the first option, you would apply a dense layer with 3 inputs and 5 outputs to every single time step. That could look like this:

[figure: TimeDistributed layer]

Each blue circle here is a unit in the dense layer. By doing this with every input time step you get the total output. Importantly, these five units are the same for all the time steps, so you only have the parameters of a single dense layer with 3 inputs and 5 outputs.
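As a rough sketch of this option (plain NumPy stands in for the Dense layer; the weight values are arbitrary), the one shared 3-input/5-output kernel is applied to all 4 time steps at once, and the parameter count stays at 3·5 + 5 = 20:

```python
import numpy as np

rng = np.random.default_rng(1)

x = rng.standard_normal((4, 3))   # 4 time steps, 3 features each
W = rng.standard_normal((3, 5))   # one kernel shared by every time step
b = np.zeros(5)                   # one shared bias

y = x @ W + b                     # applied to every time step at once
print(y.shape)                    # (4, 5)
print(W.size + b.size)            # 20 parameters in total
```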

The second option would involve flattening the input into a 12-element vector, applying a dense layer with 12 inputs and 20 outputs, and then reshaping that back. This is how it would look:

[figure: Flat dense layer]

Here the input connections of only one unit are drawn for clarity, but every unit would be connected to every input. Obviously, you have many more parameters here (those of a dense layer with 12 inputs and 20 outputs), and also note that each output value is influenced by every input value, so values in one time step would affect outputs in other time steps. Whether that is good or bad depends on your problem and model, but it is an important difference from the previous option, where each time step's input and output were independent. In addition, this configuration requires a fixed number of time steps in each batch, whereas the previous option works regardless of the number of time steps.
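The same toy shapes can sketch the second option (again plain NumPy with arbitrary weights): flatten the 4×3 input into 12 values, apply a 12-input/20-output dense transform, and reshape back, which costs 12·20 + 20 = 260 parameters instead of 20:

```python
import numpy as np

rng = np.random.default_rng(2)

x = rng.standard_normal((4, 3))    # 4 time steps, 3 features each
W = rng.standard_normal((12, 20))  # dense kernel over the flattened input
b = np.zeros(20)

# Flatten, apply the dense transform, reshape back into time steps.
y = (x.reshape(12) @ W + b).reshape(4, 5)
print(y.shape)                     # (4, 5)
print(W.size + b.size)             # 260 parameters
```

Because every output element depends on the whole flattened vector, changing one time step's input changes the outputs at all time steps, unlike in the shared-kernel sketch above.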

You could also consider the option of having four dense layers, each applied independently to its own time step (I didn't draw it, but hopefully you get the idea). That would be similar to the previous option, except each unit would receive input connections only from its respective time step's inputs. I don't think there is a straightforward way to do that in Keras; you would have to split the input into four parts, apply dense layers to each part, and merge the outputs. Again, in this case the number of time steps would be fixed.
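For completeness, here is a rough NumPy sketch of that third option (arbitrary weights, using an einsum in place of an actual split-and-merge in Keras): each time step gets its own 3×5 kernel, so nothing is shared between steps and nothing mixes across steps:

```python
import numpy as np

rng = np.random.default_rng(3)

x = rng.standard_normal((4, 3))      # 4 time steps, 3 features each
Ws = rng.standard_normal((4, 3, 5))  # a separate 3x5 kernel per time step
bs = np.zeros((4, 5))                # a separate bias per time step

# Each time step t uses only its own kernel Ws[t]: no sharing, no mixing.
y = np.einsum("ti,tio->to", x, Ws) + bs
print(y.shape)                       # (4, 5)
print(Ws.size + bs.size)             # 80 parameters: 20 per time step
```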
