A Keras model can be used as a TensorFlow function on a Tensor, through the functional API, as described here.
So we can do:
```python
import tensorflow as tf
from keras.models import Sequential
from keras.layers import InputLayer, Dense

a = tf.placeholder(dtype=tf.float32, shape=(None, 784))

model = Sequential()
model.add(InputLayer(input_tensor=a, input_shape=(None, 784)))
model.add(Dense(32, activation='relu'))
model.add(Dense(10, activation='softmax'))

output = model.output
```
Which is a tensor:
```
<tf.Tensor 'dense_24/Softmax:0' shape=(?, 10) dtype=float32>
```
But this also works without any `InputLayer`:
```python
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense

a = tf.placeholder(dtype=tf.float32, shape=(None, 784))

model = Sequential()
model.add(Dense(32, activation='relu', input_shape=(784,)))
model.add(Dense(10, activation='softmax'))

output = model(a)
```
output has the same shape as before:
```
<tf.Tensor 'sequential_9/dense_22/Softmax:0' shape=(?, 10) dtype=float32>
```
I assume the first form permits:

- explicitly attaching the inputs and outputs as attributes of the model (with the same names), so we can reuse them elsewhere, for example with other TF ops;
- transforming the tensors given as inputs into Keras inputs, with additional metadata (such as `_keras_history`, as stated in the source code).
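The first bullet can be checked directly. A minimal sketch (note that `_keras_history` is an internal attribute and may change between Keras versions; this sketch uses `tf.keras` rather than standalone `keras`):

```python
from tensorflow.keras import Input
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

model = Sequential([Input(shape=(784,)),
                    Dense(32, activation='relu'),
                    Dense(10, activation='softmax')])

# The symbolic output is attached as an attribute of the model,
# and carries the Keras metadata mentioned above:
print(model.output.shape)                       # (None, 10)
print(hasattr(model.output, '_keras_history'))  # True
```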
But none of this is something we cannot do with the second form, so is there a special usage of `InputLayer` (and, a fortiori, `Input`), except for multiple inputs?
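For the multiple-inputs case, the functional `Input` is indeed the natural tool; a minimal sketch (the layer sizes here are arbitrary, chosen for illustration):

```python
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense, concatenate

# Two distinct symbolic inputs, merged before the classifier head:
x1 = Input(shape=(784,))
x2 = Input(shape=(10,))
h = Dense(32, activation='relu')(concatenate([x1, x2]))
out = Dense(10, activation='softmax')(h)

model = Model(inputs=[x1, x2], outputs=out)
print(model.output_shape)  # (None, 10)
```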
Also, `InputLayer` is tricky because it uses `input_shape` differently from the other Keras layers: here we specify the batch size (`None`) as part of the shape, which is not usually the case...
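For comparison, with the usual convention the batch axis is left implicit and prepended automatically; a minimal sketch using the functional `Input` (again via `tf.keras`):

```python
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense

# `shape` is the per-sample shape; the batch axis (None) is added implicitly.
x = Input(shape=(784,))
y = Dense(10, activation='softmax')(x)
model = Model(x, y)

print(model.input_shape)   # (None, 784)
print(model.output_shape)  # (None, 10)
```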