
Deep Residual Network + Adaptively Parametric ReLU Activation Function (Tuning Record 13)

Author: 用户6915903 | Modified 2020-05-06 17:54:15 | Column: Deep Neural Networks (深度神经网络)

Judging from previous tuning results, overfitting has been the main problem. Building on Tuning Record 12, this article reduces the depth to 9 residual blocks (3 blocks at 16 channels, 1 + 2 at 32, and 1 + 2 at 64, as laid out in the code below) and tries again.

The principle of the adaptively parametric ReLU (APReLU) activation function is as follows:

[Figure: the adaptively parametric ReLU (APReLU) activation function]
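In brief, APReLU keeps the positive part of the input unchanged and scales the negative part with a learned channel-wise coefficient:

    y_c = \max(x_c, 0) + \alpha_c \cdot \min(x_c, 0), \qquad \alpha_c \in (0, 1)

where each \alpha_c is produced by a small embedded subnetwork: the positive and negative parts of the feature map are global-average-pooled, concatenated, passed through two Dense + BatchNormalization layers, and squashed by a sigmoid. This is exactly the structure of the aprelu function in the code below.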

The Keras code is as follows:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Tue Apr 14 04:17:45 2020
Implemented using TensorFlow 1.0.1 and Keras 2.2.1

Minghang Zhao, Shisheng Zhong, Xuyun Fu, Baoping Tang, Shaojiang Dong, Michael Pecht,
Deep Residual Networks with Adaptively Parametric Rectifier Linear Units for Fault Diagnosis, 
IEEE Transactions on Industrial Electronics, 2020,  DOI: 10.1109/TIE.2020.2972458 

@author: Minghang Zhao
"""

from __future__ import print_function
import keras
import numpy as np
from keras.datasets import cifar10
from keras.layers import Dense, Conv2D, BatchNormalization, Activation, Minimum
from keras.layers import AveragePooling2D, Input, GlobalAveragePooling2D, Concatenate, Reshape
from keras.regularizers import l2
from keras import backend as K
from keras.models import Model
from keras import optimizers
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import LearningRateScheduler
# Set the global learning phase to training (affects BatchNormalization)
K.set_learning_phase(1)

# The data, split between train and test sets
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

# Normalize the data: scale to [0, 1] and subtract the training-set mean
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_test = x_test-np.mean(x_train)
x_train = x_train-np.mean(x_train)
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)

# Schedule the learning rate: multiply it by 0.1 every 1500 epochs
def scheduler(epoch):
    if epoch % 1500 == 0 and epoch != 0:
        lr = K.get_value(model.optimizer.lr)
        K.set_value(model.optimizer.lr, lr * 0.1)
        print("lr changed to {}".format(lr * 0.1))
    return K.get_value(model.optimizer.lr)
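
# Note: with the initial lr of 0.1 set below, this schedule drops the rate to
# 0.01, 0.001 and 0.0001 at epochs 1500, 3000 and 4500. Keras passes a
# zero-based epoch index, so the second drop appears at "Epoch 3001" in the
# training log below.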

# An adaptively parametric rectifier linear unit (APReLU)
def aprelu(inputs):
    # get the number of channels
    channels = inputs.get_shape().as_list()[-1]
    # get a zero feature map
    zeros_input = keras.layers.subtract([inputs, inputs])
    # get a feature map with only positive features
    pos_input = Activation('relu')(inputs)
    # get a feature map with only negative features
    neg_input = Minimum()([inputs, zeros_input])
    # define a network to obtain the scaling coefficients
    scales_p = GlobalAveragePooling2D()(pos_input)
    scales_n = GlobalAveragePooling2D()(neg_input)
    scales = Concatenate()([scales_n, scales_p])
    scales = Dense(channels, activation='linear', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(scales)
    scales = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(scales)
    scales = Activation('relu')(scales)
    scales = Dense(channels, activation='linear', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(scales)
    scales = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(scales)
    scales = Activation('sigmoid')(scales)
    scales = Reshape((1,1,channels))(scales)
    # apply the parametric ReLU: the negative part is scaled channel-wise
    neg_part = keras.layers.multiply([scales, neg_input])
    return keras.layers.add([pos_input, neg_part])

# Residual Block
def residual_block(incoming, nb_blocks, out_channels, downsample=False,
                   downsample_strides=2):
    
    residual = incoming
    in_channels = incoming.get_shape().as_list()[-1]
    
    for i in range(nb_blocks):
        
        identity = residual
        
        if not downsample:
            downsample_strides = 1
        
        residual = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(residual)
        residual = aprelu(residual)
        residual = Conv2D(out_channels, 3, strides=(downsample_strides, downsample_strides), 
                          padding='same', kernel_initializer='he_normal', 
                          kernel_regularizer=l2(1e-4))(residual)
        
        residual = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(residual)
        residual = aprelu(residual)
        residual = Conv2D(out_channels, 3, padding='same', kernel_initializer='he_normal', 
                          kernel_regularizer=l2(1e-4))(residual)
        
        # Downsampling
        if downsample_strides > 1:
            identity = AveragePooling2D(pool_size=(1,1), strides=(2,2))(identity)
            
        # Zero-padding to match the number of channels
        if in_channels != out_channels:
            zeros_identity = keras.layers.subtract([identity, identity])
            identity = keras.layers.concatenate([identity, zeros_identity])
            in_channels = out_channels
        
        residual = keras.layers.add([residual, identity])
    
    return residual
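
# Shape trace of the shortcut path in the first downsampling stage below
# (16 -> 32 channels, stride 2), given the 32x32 CIFAR-10 inputs:
#   identity:       (32, 32, 16) -> AveragePooling2D(strides=(2,2)) -> (16, 16, 16)
#   zeros_identity: (16, 16, 16), built as identity - identity
#   concatenate:    (16, 16, 32), matching the residual branch for the final add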


# define and train a model
inputs = Input(shape=(32, 32, 3))
net = Conv2D(16, 3, padding='same', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(inputs)
net = residual_block(net, 3, 16, downsample=False)
net = residual_block(net, 1, 32, downsample=True)
net = residual_block(net, 2, 32, downsample=False)
net = residual_block(net, 1, 64, downsample=True)
net = residual_block(net, 2, 64, downsample=False)
net = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(net)
net = Activation('relu')(net)
net = GlobalAveragePooling2D()(net)
outputs = Dense(10, activation='softmax', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(net)
model = Model(inputs=inputs, outputs=outputs)
sgd = optimizers.SGD(lr=0.1, decay=0., momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

# data augmentation
datagen = ImageDataGenerator(
    # randomly rotate images by up to 30 degrees
    rotation_range=30,
    # Range for random zoom
    zoom_range = 0.2,
    # shear angle in counter-clockwise direction in degrees
    shear_range = 30,
    # randomly flip images
    horizontal_flip=True,
    # randomly shift images horizontally
    width_shift_range=0.125,
    # randomly shift images vertically
    height_shift_range=0.125)

reduce_lr = LearningRateScheduler(scheduler)
# fit the model on the batches generated by datagen.flow().
model.fit_generator(datagen.flow(x_train, y_train, batch_size=625),
                    validation_data=(x_test, y_test), epochs=5000, 
                    verbose=1, callbacks=[reduce_lr], workers=10)
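
# With batch_size=625 and 50000 CIFAR-10 training images, each epoch runs
# 50000 / 625 = 80 steps, matching the "80/80" progress bars in the log below.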

# get results
K.set_learning_phase(0)
DRSN_train_score = model.evaluate(x_train, y_train, batch_size=625, verbose=0)
print('Train loss:', DRSN_train_score[0])
print('Train accuracy:', DRSN_train_score[1])
DRSN_test_score = model.evaluate(x_test, y_test, batch_size=625, verbose=0)
print('Test loss:', DRSN_test_score[0])
print('Test accuracy:', DRSN_test_score[1])

The experimental results are as follows:

Epoch 2996/5000
80/80 [==============================] - 12s 151ms/step - loss: 0.1232 - acc: 0.9866 - val_loss: 0.4663 - val_acc: 0.9024
Epoch 2997/5000
80/80 [==============================] - 12s 151ms/step - loss: 0.1210 - acc: 0.9881 - val_loss: 0.4663 - val_acc: 0.9046
Epoch 2998/5000
80/80 [==============================] - 12s 152ms/step - loss: 0.1201 - acc: 0.9876 - val_loss: 0.4492 - val_acc: 0.9065
Epoch 2999/5000
80/80 [==============================] - 12s 152ms/step - loss: 0.1260 - acc: 0.9861 - val_loss: 0.4677 - val_acc: 0.9031
Epoch 3000/5000
80/80 [==============================] - 12s 151ms/step - loss: 0.1256 - acc: 0.9861 - val_loss: 0.4517 - val_acc: 0.9044
Epoch 3001/5000
lr changed to 0.0009999999776482583
80/80 [==============================] - 12s 151ms/step - loss: 0.1226 - acc: 0.9877 - val_loss: 0.4332 - val_acc: 0.9071
Epoch 3002/5000
80/80 [==============================] - 12s 151ms/step - loss: 0.1123 - acc: 0.9911 - val_loss: 0.4282 - val_acc: 0.9088
Epoch 3003/5000
80/80 [==============================] - 12s 152ms/step - loss: 0.1072 - acc: 0.9926 - val_loss: 0.4277 - val_acc: 0.9110
Epoch 3004/5000
80/80 [==============================] - 12s 152ms/step - loss: 0.1051 - acc: 0.9938 - val_loss: 0.4253 - val_acc: 0.9108
Epoch 3005/5000
80/80 [==============================] - 12s 152ms/step - loss: 0.1041 - acc: 0.9941 - val_loss: 0.4242 - val_acc: 0.9101
Epoch 3006/5000
80/80 [==============================] - 12s 151ms/step - loss: 0.1021 - acc: 0.9945 - val_loss: 0.4259 - val_acc: 0.9098
Epoch 3007/5000
80/80 [==============================] - 12s 151ms/step - loss: 0.1034 - acc: 0.9940 - val_loss: 0.4255 - val_acc: 0.9100
Epoch 3008/5000
80/80 [==============================] - 12s 152ms/step - loss: 0.1018 - acc: 0.9949 - val_loss: 0.4252 - val_acc: 0.9100
Epoch 3009/5000
80/80 [==============================] - 12s 152ms/step - loss: 0.1029 - acc: 0.9945 - val_loss: 0.4276 - val_acc: 0.9103
Epoch 3010/5000
80/80 [==============================] - 12s 151ms/step - loss: 0.1018 - acc: 0.9947 - val_loss: 0.4275 - val_acc: 0.9102
Epoch 3011/5000
80/80 [==============================] - 12s 152ms/step - loss: 0.1004 - acc: 0.9951 - val_loss: 0.4237 - val_acc: 0.9106
Epoch 3012/5000
80/80 [==============================] - 12s 152ms/step - loss: 0.0996 - acc: 0.9954 - val_loss: 0.4213 - val_acc: 0.9120
Epoch 3013/5000
80/80 [==============================] - 12s 151ms/step - loss: 0.0997 - acc: 0.9953 - val_loss: 0.4247 - val_acc: 0.9112
Epoch 3014/5000
80/80 [==============================] - 12s 151ms/step - loss: 0.0998 - acc: 0.9956 - val_loss: 0.4249 - val_acc: 0.9111
Epoch 3015/5000
80/80 [==============================] - 12s 152ms/step - loss: 0.0999 - acc: 0.9953 - val_loss: 0.4261 - val_acc: 0.9103
Epoch 3016/5000
80/80 [==============================] - 12s 151ms/step - loss: 0.0984 - acc: 0.9958 - val_loss: 0.4285 - val_acc: 0.9102
Epoch 3017/5000
80/80 [==============================] - 12s 151ms/step - loss: 0.0999 - acc: 0.9954 - val_loss: 0.4284 - val_acc: 0.9098
Epoch 3018/5000
80/80 [==============================] - 12s 154ms/step - loss: 0.0997 - acc: 0.9952 - val_loss: 0.4290 - val_acc: 0.9105
Epoch 3019/5000
80/80 [==============================] - 12s 152ms/step - loss: 0.0992 - acc: 0.9955 - val_loss: 0.4273 - val_acc: 0.9118
Epoch 3020/5000
80/80 [==============================] - 12s 151ms/step - loss: 0.0988 - acc: 0.9953 - val_loss: 0.4270 - val_acc: 0.9110
Epoch 3021/5000
80/80 [==============================] - 12s 152ms/step - loss: 0.0988 - acc: 0.9957 - val_loss: 0.4298 - val_acc: 0.9104
Epoch 3022/5000
80/80 [==============================] - 12s 151ms/step - loss: 0.0984 - acc: 0.9957 - val_loss: 0.4317 - val_acc: 0.9103
Epoch 3023/5000
80/80 [==============================] - 12s 151ms/step - loss: 0.0976 - acc: 0.9960 - val_loss: 0.4282 - val_acc: 0.9107
Epoch 3024/5000
80/80 [==============================] - 12s 152ms/step - loss: 0.0984 - acc: 0.9959 - val_loss: 0.4283 - val_acc: 0.9111
Epoch 3025/5000
80/80 [==============================] - 12s 152ms/step - loss: 0.0969 - acc: 0.9960 - val_loss: 0.4288 - val_acc: 0.9090

Overfitting is still severe: training accuracy reaches about 99.6% while test accuracy stays around 91%, so the network needs to be shrunk further.

Minghang Zhao, Shisheng Zhong, Xuyun Fu, Baoping Tang, Shaojiang Dong, Michael Pecht, Deep Residual Networks with Adaptively Parametric Rectifier Linear Units for Fault Diagnosis, IEEE Transactions on Industrial Electronics, 2020, DOI: 10.1109/TIE.2020.2972458

https://ieeexplore.ieee.org/document/8998530

Copyright notice: This is an original article by CSDN blogger "dangqing1988", licensed under CC 4.0 BY-SA. Reposts must include a link to the original source and this notice.

Original link: https://blog.csdn.net/dangqing1988/article/details/105831087

This article is a repost. If there is any infringement, please contact cloudcommunity@tencent.com for removal.
