
How to use Keras ReduceLROnPlateau

I am training a Keras sequential model. I want the learning rate to be reduced when training stops making progress.

I use the ReduceLROnPlateau callback.

After the first 2 epochs without progress, the learning rate is reduced as expected. But then it keeps being reduced every 2 epochs, until training effectively grinds to a halt.

Is this a Keras bug, or am I using the function the wrong way?

The code:

from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau

# Stop training after 8 consecutive epochs without improvement
earlystopper = EarlyStopping(patience=8, verbose=1)

# Save the best weights after each epoch (note: lowercase {epoch} in the template)
checkpointer = ModelCheckpoint(filepath='model_zero7.{epoch:02d}-{val_loss:.6f}.hdf5',
                               verbose=1,
                               save_best_only=True, save_weights_only=True)

# Multiply the learning rate by 0.2 after 2 epochs without improvement
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,
                              patience=2, min_lr=0.000001, verbose=1)

history_zero7 = model_zero.fit_generator(bach_gen_only1,
                                         validation_data=(v_im, v_lb),
                                         steps_per_epoch=25, epochs=100,
                                         callbacks=[earlystopper, checkpointer, reduce_lr])

The result:

Epoch 00006: val_loss did not improve from 0.68605
Epoch 7/100
25/25 [==============================] - 213s 9s/step - loss: 0.6873 - binary_crossentropy: 0.0797 - dice_coef_loss: -0.8224 - jaccard_distance_loss_flat: 0.2998 - val_loss: 0.6865 - val_binary_crossentropy: 0.0668 - val_dice_coef_loss: -0.8513 - val_jaccard_distance_loss_flat: 0.2578

Epoch 00007: val_loss did not improve from 0.68605

Epoch 00007: ReduceLROnPlateau reducing learning rate to 0.000200000009499.
Epoch 8/100
25/25 [==============================] - 214s 9s/step - loss: 0.6865 - binary_crossentropy: 0.0648 - dice_coef_loss: -0.8547 - jaccard_distance_loss_flat: 0.2528 - val_loss: 0.6860 - val_binary_crossentropy: 0.0694 - val_dice_coef_loss: -0.8575 - val_jaccard_distance_loss_flat: 0.2485

Epoch 00008: val_loss improved from 0.68605 to 0.68598, saving model to model_zero7.08-0.685983.hdf5
Epoch 9/100
25/25 [==============================] - 208s 8s/step - loss: 0.6868 - binary_crossentropy: 0.0624 - dice_coef_loss: -0.8554 - jaccard_distance_loss_flat: 0.2518 - val_loss: 0.6860 - val_binary_crossentropy: 0.0746 - val_dice_coef_loss: -0.8527 - val_jaccard_distance_loss_flat: 0.2557

Epoch 00009: val_loss improved from 0.68598 to 0.68598, saving model to model_zero7.09-0.685982.hdf5

Epoch 00009: ReduceLROnPlateau reducing learning rate to 4.00000018999e-05.
Epoch 10/100
25/25 [==============================] - 211s 8s/step - loss: 0.6865 - binary_crossentropy: 0.0640 - dice_coef_loss: -0.8570 - jaccard_distance_loss_flat: 0.2493 - val_loss: 0.6859 - val_binary_crossentropy: 0.0630 - val_dice_coef_loss: -0.8688 - val_jaccard_distance_loss_flat: 0.2311

Epoch 00010: val_loss improved from 0.68598 to 0.68589, saving model to model_zero7.10-0.685890.hdf5
Epoch 11/100
25/25 [==============================] - 211s 8s/step - loss: 0.6869 - binary_crossentropy: 0.0610 - dice_coef_loss: -0.8580 - jaccard_distance_loss_flat: 0.2480 - val_loss: 0.6859 - val_binary_crossentropy: 0.0681 - val_dice_coef_loss: -0.8616 - val_jaccard_distance_loss_flat: 0.2422

Epoch 00011: val_loss improved from 0.68589 to 0.68589, saving model to model_zero7.11-0.685885.hdf5
Epoch 12/100
25/25 [==============================] - 210s 8s/step - loss: 0.6866 - binary_crossentropy: 0.0575 - dice_coef_loss: -0.8612 - jaccard_distance_loss_flat: 0.2426 - val_loss: 0.6858 - val_binary_crossentropy: 0.0636 - val_dice_coef_loss: -0.8679 - val_jaccard_distance_loss_flat: 0.2325

Epoch 00012: val_loss improved from 0.68589 to 0.68585, saving model to model_zero7.12-0.685847.hdf5

Epoch 00012: ReduceLROnPlateau reducing learning rate to 8.0000005255e-06.

The first 6 epochs:

Epoch 1/100
25/25 [==============================] - 254s 10s/step - loss: 0.6886 - binary_crossentropy: 0.1356 - dice_coef_loss: -0.7302 - jaccard_distance_loss_flat: 0.4151 - val_loss: 0.6867 - val_binary_crossentropy: 0.1013 - val_dice_coef_loss: -0.8161 - val_jaccard_distance_loss_flat: 0.3096

Epoch 00001: val_loss improved from inf to 0.68673, saving model to model_zero7.01-0.686732.hdf5
Epoch 2/100
25/25 [==============================] - 211s 8s/step - loss: 0.6871 - binary_crossentropy: 0.0805 - dice_coef_loss: -0.8274 - jaccard_distance_loss_flat: 0.2932 - val_loss: 0.6865 - val_binary_crossentropy: 0.1005 - val_dice_coef_loss: -0.8100 - val_jaccard_distance_loss_flat: 0.3183

Epoch 00002: val_loss improved from 0.68673 to 0.68653, saving model to model_zero7.02-0.686533.hdf5
Epoch 3/100
25/25 [==============================] - 214s 9s/step - loss: 0.6871 - binary_crossentropy: 0.0778 - dice_coef_loss: -0.8268 - jaccard_distance_loss_flat: 0.2934 - val_loss: 0.6863 - val_binary_crossentropy: 0.0811 - val_dice_coef_loss: -0.8402 - val_jaccard_distance_loss_flat: 0.2743

Epoch 00003: val_loss improved from 0.68653 to 0.68635, saving model to model_zero7.03-0.686345.hdf5
Epoch 4/100
25/25 [==============================] - 210s 8s/step - loss: 0.6869 - binary_crossentropy: 0.0692 - dice_coef_loss: -0.8397 - jaccard_distance_loss_flat: 0.2749 - val_loss: 0.6862 - val_binary_crossentropy: 0.0820 - val_dice_coef_loss: -0.8445 - val_jaccard_distance_loss_flat: 0.2682

Epoch 00004: val_loss improved from 0.68635 to 0.68621, saving model to model_zero7.04-0.686206.hdf5
Epoch 5/100
25/25 [==============================] - 208s 8s/step - loss: 0.6868 - binary_crossentropy: 0.0693 - dice_coef_loss: -0.8446 - jaccard_distance_loss_flat: 0.2676 - val_loss: 0.6861 - val_binary_crossentropy: 0.0761 - val_dice_coef_loss: -0.8495 - val_jaccard_distance_loss_flat: 0.2606

Epoch 00005: val_loss improved from 0.68621 to 0.68605, saving model to model_zero7.05-0.686055.hdf5
Epoch 6/100
25/25 [==============================] - 203s 8s/step - loss: 0.6874 - binary_crossentropy: 0.0792 - dice_coef_loss: -0.8200 - jaccard_distance_loss_flat: 0.3024 - val_loss: 0.6865 - val_binary_crossentropy: 0.0559 - val_dice_coef_loss: -0.8716 - val_jaccard_distance_loss_flat: 0.2269

Epoch 00006: val_loss did not improve from 0.68605
— Naomi Fridman

Well, it is a bug in Keras: https://github.com/keras-team/keras/issues/3991

To work around it, use cooldown=1, as in the sketch below.
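
A minimal sketch of the workaround, reusing the settings from the question (only cooldown=1 is new): after each reduction, the callback waits one epoch before it resumes counting non-improving epochs, so the patience counter is effectively reset.

from keras.callbacks import ReduceLROnPlateau

reduce_lr = ReduceLROnPlateau(monitor='val_loss',
                              factor=0.2,       # new_lr = old_lr * 0.2
                              patience=2,       # epochs without improvement before reducing
                              cooldown=1,       # wait 1 epoch after each reduction
                              min_lr=0.000001,
                              verbose=1)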

— Naomi Fridman

I don't think this should be attributed to that bug, since it appears to have been fixed back in 2016. Note that there is a relevant argument in this function:

min_delta: threshold for measuring the new optimum, to only focus on significant changes.

It defaults to 0.0001. Hence, even if val_loss improved compared to the previous epoch, the epoch still counts as non-improving whenever the improvement is smaller than min_delta, and the learning rate gets reduced anyway. See the sketch below.
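
For illustration, a sketch of the same callback with an explicit min_delta; the value 1e-5 is only an example threshold, not a recommendation. In the logs above, val_loss went from 0.68605 to 0.68598, an improvement of roughly 7e-5, which is below the default min_delta of 1e-4, so those epochs were still counted as non-improving.

from keras.callbacks import ReduceLROnPlateau

# min_delta is the smallest change in the monitored quantity that counts
# as an improvement. With the default 1e-4, the ~7e-5 improvements seen
# in the logs above are ignored and the learning rate keeps being cut.
reduce_lr = ReduceLROnPlateau(monitor='val_loss',
                              factor=0.2,
                              patience=2,
                              min_delta=1e-5,   # example: accept smaller improvements
                              min_lr=0.000001,
                              verbose=1)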

— chongkai Lu