I pass the same input (the same data and the same ground-truth labels) to Keras `train_on_batch()` and `test_on_batch()`. I want to know why the loss values returned by the two functions are different.
Code:
model_del_fin.compile(optimizer=SGD(lr=0.001,decay=0.001/15),loss='categorical_crossentropy',metrics=['accuracy'])
iters_per_epoch = 1285 // 50
print(iters_per_epoch)
num_epochs = 15
outs_store_freq = 20 # in iters
print_loss_freq = 20 # in iters
iter_num = 0
epoch_num = 0
model_outputs = []
loss_history = []
while epoch_num < num_epochs:
    print("ok")
    while iter_num < iters_per_epoch:
        x_train, y_train = next(train_it2)
        loss_history += [model_del_fin.train_on_batch([x_train, x_train], y_train)]
        print("Iter {} loss: {}".format(iter_num, loss_history[-1]))
        print(model_del_fin.test_on_batch([x_train, x_train], y_train))
        iter_num += 1
    print("EPOCH {} FINISHED".format(epoch_num + 1))
    epoch_num += 1
    iter_num = 0  # reset counter
**Results:**
Iter 0 loss: [5.860205, 0.24]
[2.5426426, 0.68]
Iter 1 loss: [3.5718067, 0.48]
[1.7102847, 0.68]
Iter 2 loss: [2.0221999, 0.68]
[1.310905, 0.94]
Iter 3 loss: [1.6114614, 0.74]
[1.2987132, 0.92]
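For reference, the effect can be reproduced without the original model or data generator. The sketch below (a hypothetical toy model, not the `model_del_fin` above) shows the two usual reasons the numbers differ even on an identical batch: `train_on_batch` reports the loss from the forward pass *before* the gradient update, while `test_on_batch` runs afterwards on the already-updated weights; and layers such as `Dropout` (or `BatchNormalization`) behave differently in training vs. inference mode.

```python
# Minimal sketch: same batch, different losses from train_on_batch vs test_on_batch.
# The model, shapes, and hyperparameters here are illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(4,)),
    layers.Dense(8, activation='relu'),
    layers.Dropout(0.5),                     # active only in training mode
    layers.Dense(3, activation='softmax'),
])
model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.1),
              loss='categorical_crossentropy')

x = np.random.rand(16, 4).astype('float32')
y = keras.utils.to_categorical(np.random.randint(3, size=16), 3)

train_loss = model.train_on_batch(x, y)  # loss before the weight update, dropout on
test_loss = model.test_on_batch(x, y)    # loss after the update, dropout off
print(train_loss, test_loss)             # generally not equal
```

So even with identical `x` and `y`, the two calls evaluate different functions of different weights, and the printed pairs above are expected to disagree.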