Negative loss values with adaptive loss in TensorFlow

I am using an adaptive loss implementation with my neural network, but after training the model for long enough I get negative loss values. Any help/suggestions would be appreciated! Let me know if you need any other information.

Model definition -

import tensorflow as tf
from tensorflow.keras.layers import Flatten, RepeatVector, Concatenate, GRU, Dropout, Dense

hyperparameter_space = {"gru_up": 64,"up_dropout": 0.2,"learning_rate": 0.004}

def many_to_one_model(params):
    input_1 = tf.keras.Input(shape = (1,53),name = 'input_1')
    input_2 = tf.keras.Input(shape = (1,19),name = 'input_2')
    input_3 = tf.keras.Input(shape = (1,130),name = 'input_3')

    # Flatten (None,1,130) -> (None,130), then repeat back to (None,1,130)
    # so it can be concatenated with the other two inputs on the feature axis.
    input_3_flatten = Flatten()(input_3)
    input_3_flatten = RepeatVector(1)(input_3_flatten)

    concat_outputs = Concatenate()([input_1,input_2,input_3_flatten])

    output_1 = GRU(units = int(params['gru_up']),kernel_initializer = tf.keras.initializers.he_uniform(),activation = 'relu')(concat_outputs)
    output_1 = Dropout(rate = float(params['up_dropout']))(output_1)
    output_1 = Dense(units = 1,activation = 'linear',name = 'output_1')(output_1)

    model = tf.keras.models.Model(inputs = [input_1,input_2,input_3],outputs = [output_1],name = 'many_to_one_model')

    return model

many_to_one_model(hyperparameter_space)
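
As a quick sanity check (my own sketch, not part of the original post), the model can be exercised with random NumPy inputs to confirm the three declared input shapes line up and the output is a single value per sample:

import numpy as np

m = many_to_one_model(hyperparameter_space)

# Dummy batch of 4 samples matching the three declared input shapes.
x1 = np.random.rand(4, 1, 53).astype(np.float32)
x2 = np.random.rand(4, 1, 19).astype(np.float32)
x3 = np.random.rand(4, 1, 130).astype(np.float32)

print(m([x1, x2, x3]).shape)  # expected: (4, 1)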

Model summary -

'''
Model: "many_to_one_model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_3 (InputLayer)            [(None, 1, 130)]     0
__________________________________________________________________________________________________
flatten_5 (Flatten)             (None, 130)          0           input_3[0][0]
__________________________________________________________________________________________________
input_1 (InputLayer)            [(None, 1, 53)]      0
__________________________________________________________________________________________________
input_2 (InputLayer)            [(None, 1, 19)]      0
__________________________________________________________________________________________________
repeat_vector_5 (RepeatVector)  (None, 1, 130)       0           flatten_5[0][0]
__________________________________________________________________________________________________
concatenate_5 (Concatenate)     (None, 1, 202)       0           input_1[0][0]
                                                                 input_2[0][0]
                                                                 repeat_vector_5[0][0]
__________________________________________________________________________________________________
gru_5 (GRU)                     (None, 64)           51456       concatenate_5[0][0]
__________________________________________________________________________________________________
dropout_5 (Dropout)             (None, 64)           0           gru_5[0][0]
__________________________________________________________________________________________________
output_1 (Dense)                (None, 1)            65          dropout_5[0][0]
==================================================================================================
Total params: 51,521
Trainable params: 51,521
Non-trainable params: 0
__________________________________________________________________________________________________
'''

Adaptive loss implementation -

import numpy as np
import tensorflow as tf
import mlflow
from tensorflow.keras.callbacks import LambdaCallback

import robust_loss.general
import robust_loss.adaptive

model = many_to_one_model(hyperparameter_space)

adaptive_lossfun = robust_loss.adaptive.AdaptiveLossFunction(num_channels = 1,float_dtype = np.float32)
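
If I read the repo's interface correctly, the loss object is callable on a rank-2 (n, num_channels) matrix of residuals and returns one loss per entry, while alpha() and scale() expose the current (trainable) shape and scale parameters. A minimal check along those lines (my sketch, not from the post):

# Residuals for 8 samples, 1 channel, matching num_channels above.
residuals = np.random.randn(8, 1).astype(np.float32)
per_elem = adaptive_lossfun(residuals)  # expected shape: (8, 1)
print(per_elem.shape, adaptive_lossfun.alpha(), adaptive_lossfun.scale())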


variables = (list(model.trainable_variables) + list(adaptive_lossfun.trainable_variables))

optimizer_call = getattr(tf.keras.optimizers,"Adam")
optimizer = optimizer_call(learning_rate = hyperparameter_space["learning_rate"],amsgrad = True)

mlflow_callback = LambdaCallback()

for epoch in range(750):
    def lossfun():
        # Stealthily unsqueeze to an (n,1) matrix, and then compute the loss.
        # A matrix with this shape corresponds to a loss where there's one shape
        # and scale parameter per dimension (and there's only one dimension for
        # this data).
        aa = y_train_up - model([train_cat_ip,train_num_ip,ex_train_num_ip])
        mean_calc = tf.reduce_mean(adaptive_lossfun(aa))
        return mean_calc

    optimizer.minimize(lossfun,variables)

    loss = lossfun()
    alpha = adaptive_lossfun.alpha()[0,0]
    scale = adaptive_lossfun.scale()[0,0]
    print('{:<4}: loss={:+0.5f} alpha={:0.5f} scale={:0.5f}'.format(epoch,loss,alpha,scale))
    mlflow_callback.on_batch_end(epoch,mlflow.log_metrics({"loss":loss.numpy(),"alpha":alpha.numpy(),"scale":scale.numpy()},epoch))

Plots of loss, alpha, and scale vs. epoch -

[plot: loss against epochs]
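
For intuition (my own reasoning, not something stated in the repo): the adaptive loss is a negative log-likelihood, and the NLL of a continuous density is not bounded below by zero, so once the learned scale shrinks enough, the density at small residuals exceeds 1 and the loss legitimately goes negative. A plain Gaussian NLL shows the same effect:

import numpy as np

def gaussian_nll(residual, sigma):
    # -log N(residual | 0, sigma^2); goes negative once the density exceeds 1.
    return 0.5 * np.log(2.0 * np.pi * sigma**2) + residual**2 / (2.0 * sigma**2)

print(gaussian_nll(0.0, 1.0))  # ~ +0.919 (positive)
print(gaussian_nll(0.0, 0.1))  # ~ -1.384 (negative: sigma < 1/sqrt(2*pi))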

Here is the GitHub repository for the robust adaptive loss: https://github.com/google-research/google-research/tree/5b4f2d4637b6adbddc5e3261647414e9bdc8010c/robust_loss
