Many people without much experience are unsure how to define a custom loss function in TensorFlow 2. This article summarizes where the problem comes from and how to solve it, so that after reading it you should be able to handle this yourself.
A core principle of Keras is the progressive disclosure of complexity: you can keep the high-level convenience while taking more control over the details of what happens during training. When we want to customize the training algorithm used by fit, we can override the model's train_step method and then call fit as usual to train the model.
We will use the example from the official TensorFlow 2 documentation to illustrate:
import numpy as np
import tensorflow as tf
from tensorflow import keras

x = np.random.random((1000, 32))
y = np.random.random((1000, 1))

class CustomModel(keras.Model):
    tf.random.set_seed(100)

    def train_step(self, data):
        # Unpack the data. Its structure depends on your model and
        # on what you pass to `fit()`.
        x, y = data

        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            # Compute the loss value
            # (the loss function is configured in `compile()`)
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)

        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        # Update metrics (includes the metric that tracks the loss)
        self.compiled_metrics.update_state(y, y_pred)
        # Return a dict mapping metric names to current value
        return {m.name: m.result() for m in self.metrics}

# Construct and compile an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(optimizer="adam", loss=tf.losses.MSE, metrics=["mae"])

# Just use `fit` as usual
model.fit(x, y, epochs=1, shuffle=False)

32/32 [==============================] - 0s 1ms/step - loss: 0.2783 - mae: 0.4257
<tensorflow.python.keras.callbacks.History at 0x7ff7edf6dfd0>
Here the loss is a loss function already implemented in the TensorFlow library. If we define our own loss function and pass it to model.compile, will it work the way we expect?
Surprisingly, the answer is no, and there is no error message; the loss simply is not computed the way we expect.
def custom_mse(y_true, y_pred):
    return tf.reduce_mean((y_true - y_pred)**2, axis=-1)

a_true = tf.constant([1., 1.5, 1.2])
a_pred = tf.constant([1., 2, 1.5])

custom_mse(a_true, a_pred)
<tf.Tensor: shape=(), dtype=float32, numpy=0.11333332>

tf.losses.MSE(a_true, a_pred)
<tf.Tensor: shape=(), dtype=float32, numpy=0.11333332>
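As a side check (not part of the original example): because of the axis=-1 reduction, custom_mse behaves like tf.losses.MSE on batched 2-D inputs as well, returning one loss value per sample rather than a single scalar. A minimal sketch, assuming the definitions above are in scope:

# Quick sanity check on a 2-D batch: both functions should return a
# per-sample loss vector of shape (batch,), with matching values.
b_true = tf.random.uniform((4, 3))
b_pred = tf.random.uniform((4, 3))

print(custom_mse(b_true, b_pred).shape)     # (4,)
print(tf.losses.MSE(b_true, b_pred).shape)  # (4,)

# Raises an error if the two per-sample loss vectors disagree.
tf.debugging.assert_near(custom_mse(b_true, b_pred), tf.losses.MSE(b_true, b_pred))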
The results above confirm that our custom loss is implemented correctly. Next, let's pass the custom loss directly into the loss argument of compile and see what happens.
my_model = CustomModel(inputs, outputs)
my_model.compile(optimizer="adam", loss=custom_mse, metrics=["mae"])
my_model.fit(x, y, epochs=1, shuffle=False)

32/32 [==============================] - 0s 820us/step - loss: 0.1628 - mae: 0.3257
<tensorflow.python.keras.callbacks.History at 0x7ff7edeb7810>
We can see that the loss here is clearly different from the one obtained with the standard tf.losses.MSE. This shows that passing our custom loss straight into model.compile in this way does not do what we intended.
So what is the correct way to use a custom loss? Here is the answer.
loss_tracker = keras.metrics.Mean(name="loss")
mae_metric = keras.metrics.MeanAbsoluteError(name="mae")

class MyCustomModel(keras.Model):
    tf.random.set_seed(100)

    def train_step(self, data):
        # Unpack the data. Its structure depends on your model and
        # on what you pass to `fit()`.
        x, y = data

        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            # Compute our own loss directly, instead of relying on `compile()`
            loss = custom_mse(y, y_pred)
            # loss += self.losses

        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))

        # Compute our own metrics
        loss_tracker.update_state(loss)
        mae_metric.update_state(y, y_pred)
        return {"loss": loss_tracker.result(), "mae": mae_metric.result()}

    @property
    def metrics(self):
        # We list our `Metric` objects here so that `reset_states()` can be
        # called automatically at the start of each epoch
        # or at the start of `evaluate()`.
        # If you don't implement this property, you have to call
        # `reset_states()` yourself at the time of your choosing.
        return [loss_tracker, mae_metric]

# Construct and compile an instance of MyCustomModel (note: no loss is passed to `compile()`)
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
my_model_beta = MyCustomModel(inputs, outputs)
my_model_beta.compile(optimizer="adam")

# Just use `fit` as usual
my_model_beta.fit(x, y, epochs=1, shuffle=False)

32/32 [==============================] - 0s 960us/step - loss: 0.2783 - mae: 0.4257
<tensorflow.python.keras.callbacks.History at 0x7ff7eda3d810>
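The reported loss of 0.2783 now matches the first example that used tf.losses.MSE, which is exactly what we wanted. If you also want evaluate() to report the same custom loss, the same pattern can be extended to test_step. The subclass below is not from the original example; it is a minimal sketch that mirrors the train_step above, just without the gradient update:

class MyCustomModelWithEval(MyCustomModel):
    def test_step(self, data):
        # Same unpacking and loss computation as `train_step`,
        # but no gradient computation or weight update.
        x, y = data
        y_pred = self(x, training=False)
        loss = custom_mse(y, y_pred)

        loss_tracker.update_state(loss)
        mae_metric.update_state(y, y_pred)
        return {"loss": loss_tracker.result(), "mae": mae_metric.result()}

eval_model = MyCustomModelWithEval(inputs, outputs)
eval_model.compile(optimizer="adam")
eval_model.fit(x, y, epochs=1, shuffle=False)
eval_model.evaluate(x, y)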
Having read the above, have you now mastered how to define a custom loss function in TensorFlow 2? If you would like to learn more skills or read more related content, feel free to follow the 亿速云 industry news channel. Thank you for reading!