Transfer learning in Keras typically means using a pretrained model as a base and fine-tuning it on a new dataset. Here is a simple example of how to implement transfer learning in Keras:
from keras.applications import VGG16
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D
from keras.optimizers import SGD
from keras.preprocessing.image import ImageDataGenerator

# Load VGG16 pretrained on ImageNet, dropping its fully connected classifier
base_model = VGG16(weights='imagenet', include_top=False)

# Attach a new classification head for the target task
# (num_classes is the number of classes in the new dataset)
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(num_classes, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)

# Freeze the pretrained layers so only the new head is trained at first
for layer in base_model.layers:
    layer.trainable = False

# SGD takes learning_rate; the old lr argument is deprecated
model.compile(optimizer=SGD(learning_rate=0.0001, momentum=0.9),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# fit_generator is deprecated; model.fit accepts generators directly
# (train_generator / validation_generator would come from
#  ImageDataGenerator.flow_from_directory on your dataset)
model.fit(train_generator,
          steps_per_epoch=num_train_samples // batch_size,
          epochs=num_epochs,
          validation_data=validation_generator,
          validation_steps=num_val_samples // batch_size)
During training, you can unfreeze some layers of the base model as needed and fine-tune the model further. Finally, the trained model can be used for prediction.
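The unfreezing step described above can be sketched as follows. This is a minimal, self-contained illustration: it uses weights=None and a stand-in class count of 5 so it runs without downloading the ImageNet weights; in practice you would keep weights='imagenet', use your real num_classes, and retrain after recompiling.

```python
from keras.applications import VGG16
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D
from keras.optimizers import SGD
import numpy as np

# weights=None keeps this sketch light; use weights='imagenet' in practice
base_model = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
x = GlobalAveragePooling2D()(base_model.output)
x = Dense(1024, activation='relu')(x)
predictions = Dense(5, activation='softmax')(x)  # 5 is a stand-in for num_classes
model = Model(inputs=base_model.input, outputs=predictions)

# Unfreeze only the last convolutional block ('block5');
# earlier, more generic feature layers stay frozen
for layer in base_model.layers:
    layer.trainable = layer.name.startswith('block5')

# Recompile with a smaller learning rate so the unfrozen
# pretrained weights are adjusted gently during fine-tuning
model.compile(optimizer=SGD(learning_rate=1e-5, momentum=0.9),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# After fine-tuning, use the model for prediction
# (here a random dummy image stands in for real input)
dummy = np.random.rand(1, 224, 224, 3).astype('float32')
probs = model.predict(dummy)
print(probs.shape)  # (1, 5): one probability per class
```

Recompiling after changing trainable flags is required, because Keras only picks up the new set of trainable weights at compile time.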