I'm building a TensorFlow model in Google Colab, and I'm confused by the behaviour of the embedding layer: it keeps cutting the size of the input in half.
def build_model(vocab_size, embedding_dim, rnn_units, batch_size):
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, embedding_dim,
                                  batch_input_shape=[batch_size, 100]),
        tf.keras.layers.GRU(rnn_units,
                            return_sequences=True,
                            stateful=True,
                            recurrent_initializer='glorot_uniform'),
        tf.keras.layers.Dense(vocab_size)
    ])
    return model
model = build_model(
    vocab_size=len(vocab),
    embedding_dim=embedding_dim,
    rnn_units=rnn_units,
    batch_size=BATCH_SIZE)
model.compile(optimizer='adam', loss=loss)
This is the input:

dataset = helperDf(df, 64, 100)

dataset is a batching helper class. Each call to it returns an array of two tensors of shapes [(64, 100), (64, 100)], the training inputs and the labels.
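For concreteness, a minimal sketch of what such a helper might look like. The class name helperDf and its batch() interface come from the snippet above; the internals here are assumed, with dummy integer data standing in for the real dataframe contents:

```python
import numpy as np

class helperDf:
    """Assumed stand-in for the batching helper described above."""

    def __init__(self, df, batch_size, seq_len):
        self.df = df
        self.batch_size = batch_size
        self.seq_len = seq_len

    def batch(self):
        # Return (inputs, labels), each of shape (batch_size, seq_len),
        # as a single array of two tensors.
        x = np.zeros((self.batch_size, self.seq_len), dtype=np.int32)
        y = np.zeros((self.batch_size, self.seq_len), dtype=np.int32)
        return x, y

dataset = helperDf(None, 64, 100)
x, y = dataset.batch()
print(x.shape, y.shape)  # (64, 100) (64, 100)
```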
The call

example_batch_predictions = model(dataset.batch())
print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)")

works as expected, printing:

(64, 100, 48)  # (batch_size, sequence_length, vocab_size)
But when I call:

history = model.fit(dataset.batch(), epochs=EPOCHS, callbacks=[checkpoint_callback])
it returns:
WARNING:tensorflow:Model was constructed with shape (64,100) for input Tensor("embedding_4_input:0",shape=(64,100),dtype=float32),but it was called on an input with incompatible shape (32,100).
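For context on where a 32 could come from (an assumption worth checking, not a confirmed diagnosis): when model.fit receives plain NumPy arrays rather than a tf.data.Dataset, Keras applies its own default of batch_size=32, so a single 64-row array would be re-split into two batches of 32. A quick arithmetic sketch of that splitting, no TensorFlow needed:

```python
# Keras' documented default batch size for plain-array inputs to fit().
KERAS_DEFAULT_BATCH_SIZE = 32

n_samples = 64  # rows in the array returned by dataset.batch()

# Sizes of the sub-batches fit() would carve out of the 64-row array.
sub_batches = [min(KERAS_DEFAULT_BATCH_SIZE, n_samples - i)
               for i in range(0, n_samples, KERAS_DEFAULT_BATCH_SIZE)]
print(sub_batches)  # [32, 32]
```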
Why is the input detected as 32 instead of 64?

Thanks.