I am trying to understand the use of the TimeDistributed layer in Keras/TensorFlow. I have read some threads and articles, but I still haven't properly understood it.
The threads that gave me some understanding of what the TimeDistributed layer does are -
But I still don't know why the layer is actually used!
For example, both of the code snippets below produce the same output (and output shape):
```python
from keras.models import Sequential
from keras.layers import LSTM, TimeDistributed

model = Sequential()
model.add(TimeDistributed(LSTM(5, input_shape=(10, 20), return_sequences=True)))
print(model.output_shape)

model = Sequential()
model.add(LSTM(5, input_shape=(10, 20), return_sequences=True))
print(model.output_shape)
```
And the output shape in both cases will be (as far as I know):

```
(None, 10, 5)
```
So, if both models produce the same output, what is the actual use of the TimeDistributed layer?
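To make my current understanding concrete: as I understand it, TimeDistributed applies one layer, with one shared set of weights, to every time step independently. Here is a plain NumPy sketch of that idea (the names, shapes, and the Dense-style example are my own, not Keras internals):

```python
import numpy as np

rng = np.random.default_rng(0)

timesteps, features, units = 10, 20, 5
x = rng.normal(size=(timesteps, features))  # one sample: 10 steps of 20 features

# One shared weight matrix, like TimeDistributed(Dense(5)) would hold
W = rng.normal(size=(features, units))
b = np.zeros(units)

# "Time-distributed" application: the SAME W and b at every step
stepwise = np.stack([x[t] @ W + b for t in range(timesteps)])

# ...which collapses to a single batched matrix multiply
batched = x @ W + b

print(stepwise.shape)                  # (10, 5)
print(np.allclose(stepwise, batched))  # True
```

If that sketch is right, each time step is processed completely independently, with no information flowing between steps.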
I also had one other question. The TimeDistributed layer applies the same layer (with the same shared weights) to each time step. So how is that different from unrolling the LSTM layer, which the Keras API provides as:
> **unroll**: Boolean (default False). If True, the network will be unrolled, else a symbolic loop will be used. Unrolling can speed-up a RNN, although it tends to be more memory-intensive. Unrolling is only suitable for short sequences.
What is the difference between these two?
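To show where my confusion comes from, here is how I currently picture unrolling, in plain NumPy (a toy simple-RNN cell of my own, not Keras internals). Even when the loop is written out explicitly, the recurrence still passes a hidden state from step to step, which TimeDistributed does not:

```python
import numpy as np

rng = np.random.default_rng(1)

timesteps, features, units = 10, 20, 5
x = rng.normal(size=(timesteps, features))

# Toy simple-RNN cell weights, shared across all time steps
Wx = rng.normal(size=(features, units))  # input-to-hidden
Wh = rng.normal(size=(units, units))     # hidden-to-hidden (the recurrence)
b = np.zeros(units)

# "Unrolling" = executing the recurrence as an explicit loop:
# each step depends on the previous hidden state h
h = np.zeros(units)
outputs = []
for t in range(timesteps):
    h = np.tanh(x[t] @ Wx + h @ Wh + b)
    outputs.append(h)
outputs = np.stack(outputs)

print(outputs.shape)  # (10, 5)
```

So my reading is that unrolling only changes *how* the time loop is executed, while TimeDistributed removes the step-to-step dependency entirely. Is that the right way to think about it?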
Thank you. I am still a newbie, so I have many questions.