
Deep Residual Network + Adaptively Parametric ReLU Activation Function (Tuning Record 12)


Building on Tuning Record 10, this article adds zoom_range = 0.2 to the data augmentation, increases the number of training epochs to 5000, and changes the batch size to 625, in order to test the Adaptively Parametric ReLU (APReLU) activation function on the CIFAR-10 image dataset.
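Since the CIFAR-10 training set contains 50,000 images, a batch size of 625 corresponds to 50,000 / 625 = 80 steps per epoch, which matches the "80/80" shown in the training log below.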

The principle of the Adaptively Parametric ReLU (APReLU) activation function is as follows:

[Figure: structure of the Adaptively Parametric ReLU (APReLU) activation function]
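Summarizing the structure in the figure and the Keras implementation below (the notation here is mine, not taken from the original paper), APReLU keeps the positive part of the input unchanged and multiplies the negative part by a per-channel coefficient learned from the feature map itself:

$$
y = \max(x, 0) + \alpha \odot \min(x, 0),
\qquad
\alpha = \sigma\Big(\mathrm{BN}\big(W_2\,\mathrm{ReLU}\big(\mathrm{BN}(W_1\,[\mathrm{GAP}(x^-);\ \mathrm{GAP}(x^+)])\big)\big)\Big)
$$

where GAP denotes global average pooling, $x^+$ and $x^-$ are the positive and negative parts of the input, and the final sigmoid keeps each per-channel coefficient $\alpha_c$ within (0, 1).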

Keras code:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Tue Apr 14 04:17:45 2020
Implemented using TensorFlow 1.0.1 and Keras 2.2.1

Minghang Zhao, Shisheng Zhong, Xuyun Fu, Baoping Tang, Shaojiang Dong, Michael Pecht,
Deep Residual Networks with Adaptively Parametric Rectifier Linear Units for Fault Diagnosis, 
IEEE Transactions on Industrial Electronics, 2020, DOI: 10.1109/TIE.2020.2972458 

@author: Minghang Zhao
"""

from __future__ import print_function
import keras
import numpy as np
from keras.datasets import cifar10
from keras.layers import Dense, Conv2D, BatchNormalization, Activation, Minimum
from keras.layers import AveragePooling2D, Input, GlobalAveragePooling2D, Concatenate, Reshape
from keras.regularizers import l2
from keras import backend as K
from keras.models import Model
from keras import optimizers
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import LearningRateScheduler
K.set_learning_phase(1)

# The data, split between train and test sets
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

# Scale to [0, 1] and subtract the training-set mean
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_test = x_test-np.mean(x_train)
x_train = x_train-np.mean(x_train)
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)

# Schedule the learning rate, multiply by 0.1 every 1500 epochs
def scheduler(epoch):
  if epoch % 1500 == 0 and epoch != 0:
    lr = K.get_value(model.optimizer.lr)
    K.set_value(model.optimizer.lr, lr * 0.1)
    print("lr changed to {}".format(lr * 0.1))
  return K.get_value(model.optimizer.lr)

# An adaptively parametric rectifier linear unit (APReLU)
def aprelu(inputs):
  # get the number of channels
  channels = inputs.get_shape().as_list()[-1]
  # get a zero feature map
  zeros_input = keras.layers.subtract([inputs, inputs])
  # get a feature map with only positive features
  pos_input = Activation('relu')(inputs)
  # get a feature map with only negative features
  neg_input = Minimum()([inputs,zeros_input])
  # define a network to obtain the scaling coefficients
  scales_p = GlobalAveragePooling2D()(pos_input)
  scales_n = GlobalAveragePooling2D()(neg_input)
  scales = Concatenate()([scales_n, scales_p])
  scales = Dense(channels, activation='linear', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(scales)
  scales = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(scales)
  scales = Activation('relu')(scales)
  scales = Dense(channels, activation='linear', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(scales)
  scales = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(scales)
  scales = Activation('sigmoid')(scales)
  scales = Reshape((1,1,channels))(scales)
  # apply the parametric relu
  neg_part = keras.layers.multiply([scales, neg_input])
  return keras.layers.add([pos_input, neg_part])

# Residual Block
def residual_block(incoming, nb_blocks, out_channels, downsample=False,
          downsample_strides=2):
  
  residual = incoming
  in_channels = incoming.get_shape().as_list()[-1]
  
  for i in range(nb_blocks):
    
    identity = residual
    
    if not downsample:
      downsample_strides = 1
    
    residual = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(residual)
    residual = aprelu(residual)
    residual = Conv2D(out_channels, 3, strides=(downsample_strides, downsample_strides), 
             padding='same', kernel_initializer='he_normal', 
             kernel_regularizer=l2(1e-4))(residual)
    
    residual = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(residual)
    residual = aprelu(residual)
    residual = Conv2D(out_channels, 3, padding='same', kernel_initializer='he_normal', 
             kernel_regularizer=l2(1e-4))(residual)
    
    # Downsampling
    if downsample_strides > 1:
      identity = AveragePooling2D(pool_size=(1,1), strides=(2,2))(identity)
      
    # Zero_padding to match channels
    if in_channels != out_channels:
      zeros_identity = keras.layers.subtract([identity, identity])
      identity = keras.layers.concatenate([identity, zeros_identity])
      in_channels = out_channels
    
    residual = keras.layers.add([residual, identity])
  
  return residual


# define and train a model
inputs = Input(shape=(32, 32, 3))
net = Conv2D(16, 3, padding='same', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(inputs)
net = residual_block(net, 9, 16, downsample=False)
net = residual_block(net, 1, 32, downsample=True)
net = residual_block(net, 8, 32, downsample=False)
net = residual_block(net, 1, 64, downsample=True)
net = residual_block(net, 8, 64, downsample=False)
net = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(net)
net = Activation('relu')(net)
net = GlobalAveragePooling2D()(net)
outputs = Dense(10, activation='softmax', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(net)
model = Model(inputs=inputs, outputs=outputs)
sgd = optimizers.SGD(lr=0.1, decay=0., momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

# data augmentation
datagen = ImageDataGenerator(
  # randomly rotate images in the range (degrees, 0 to 30)
  rotation_range=30,
  # Range for random zoom
  zoom_range = 0.2,
  # shear angle in counter-clockwise direction in degrees
  shear_range = 30,
  # randomly flip images
  horizontal_flip=True,
  # randomly shift images horizontally
  width_shift_range=0.125,
  # randomly shift images vertically
  height_shift_range=0.125)

reduce_lr = LearningRateScheduler(scheduler)
# fit the model on the batches generated by datagen.flow().
model.fit_generator(datagen.flow(x_train, y_train, batch_size=625),
          validation_data=(x_test, y_test), epochs=5000, 
          verbose=1, callbacks=[reduce_lr], workers=10)

# get results
K.set_learning_phase(0)
DRSN_train_score = model.evaluate(x_train, y_train, batch_size=625, verbose=0)
print('Train loss:', DRSN_train_score[0])
print('Train accuracy:', DRSN_train_score[1])
DRSN_test_score = model.evaluate(x_test, y_test, batch_size=625, verbose=0)
print('Test loss:', DRSN_test_score[0])
print('Test accuracy:', DRSN_test_score[1])

Experimental results:

Epoch 3500/5000
80/80 - 32s 400ms/step - loss: 0.0685 - acc: 0.9995 - val_loss: 0.3981 - val_acc: 0.9313
Epoch 3501/5000
80/80 - 32s 400ms/step - loss: 0.0688 - acc: 0.9995 - val_loss: 0.4005 - val_acc: 0.9320
Epoch 3502/5000
80/80 - 32s 400ms/step - loss: 0.0682 - acc: 0.9998 - val_loss: 0.3992 - val_acc: 0.9312
Epoch 3503/5000
80/80 - 32s 400ms/step - loss: 0.0684 - acc: 0.9996 - val_loss: 0.3993 - val_acc: 0.9306
Epoch 3504/5000
80/80 - 32s 400ms/step - loss: 0.0687 - acc: 0.9995 - val_loss: 0.4003 - val_acc: 0.9306
Epoch 3505/5000
80/80 - 32s 401ms/step - loss: 0.0685 - acc: 0.9997 - val_loss: 0.4017 - val_acc: 0.9301
Epoch 3506/5000
80/80 - 32s 401ms/step - loss: 0.0682 - acc: 0.9997 - val_loss: 0.4053 - val_acc: 0.9299
Epoch 3507/5000
80/80 - 32s 401ms/step - loss: 0.0682 - acc: 0.9997 - val_loss: 0.4047 - val_acc: 0.9301
Epoch 3508/5000
80/80 - 32s 400ms/step - loss: 0.0689 - acc: 0.9994 - val_loss: 0.4059 - val_acc: 0.9289
Epoch 3509/5000
80/80 - 32s 402ms/step - loss: 0.0687 - acc: 0.9995 - val_loss: 0.4074 - val_acc: 0.9289
Epoch 3510/5000
80/80 - 32s 402ms/step - loss: 0.0686 - acc: 0.9995 - val_loss: 0.4042 - val_acc: 0.9301
Epoch 3511/5000
80/80 - 32s 400ms/step - loss: 0.0685 - acc: 0.9995 - val_loss: 0.4029 - val_acc: 0.9308
Epoch 3512/5000
80/80 - 32s 401ms/step - loss: 0.0687 - acc: 0.9996 - val_loss: 0.4020 - val_acc: 0.9301
Epoch 3513/5000
80/80 - 32s 398ms/step - loss: 0.0682 - acc: 0.9997 - val_loss: 0.3987 - val_acc: 0.9305
Epoch 3514/5000
80/80 - 32s 400ms/step - loss: 0.0683 - acc: 0.9996 - val_loss: 0.3991 - val_acc: 0.9302
Epoch 3515/5000
80/80 - 32s 399ms/step - loss: 0.0688 - acc: 0.9996 - val_loss: 0.3989 - val_acc: 0.9307
Epoch 3516/5000
80/80 - 32s 398ms/step - loss: 0.0687 - acc: 0.9995 - val_loss: 0.4004 - val_acc: 0.9313
Epoch 3517/5000
80/80 - 32s 400ms/step - loss: 0.0681 - acc: 0.9996 - val_loss: 0.3996 - val_acc: 0.9313
Epoch 3518/5000
80/80 - 32s 399ms/step - loss: 0.0683 - acc: 0.9997 - val_loss: 0.4004 - val_acc: 0.9308
Epoch 3519/5000
80/80 - 32s 399ms/step - loss: 0.0686 - acc: 0.9995 - val_loss: 0.4002 - val_acc: 0.9306
Epoch 3520/5000
80/80 - 32s 400ms/step - loss: 0.0687 - acc: 0.9995 - val_loss: 0.4007 - val_acc: 0.9310
Epoch 3521/5000
80/80 - 32s 400ms/step - loss: 0.0684 - acc: 0.9996 - val_loss: 0.4021 - val_acc: 0.9318
Epoch 3522/5000
80/80 - 32s 399ms/step - loss: 0.0683 - acc: 0.9996 - val_loss: 0.4033 - val_acc: 0.9320
Epoch 3523/5000
80/80 - 32s 400ms/step - loss: 0.0681 - acc: 0.9997 - val_loss: 0.4064 - val_acc: 0.9316
Epoch 3524/5000
80/80 - 34s 424ms/step - loss: 0.0681 - acc: 0.9996 - val_loss: 0.4061 - val_acc: 0.9309
Epoch 3525/5000
80/80 - 32s 398ms/step - loss: 0.0680 - acc: 0.9997 - val_loss: 0.4032 - val_acc: 0.9310
Epoch 3526/5000
80/80 - 32s 397ms/step - loss: 0.0681 - acc: 0.9995 - val_loss: 0.4033 - val_acc: 0.9317
Epoch 3527/5000
80/80 - 32s 398ms/step - loss: 0.0681 - acc: 0.9995 - val_loss: 0.4034 - val_acc: 0.9317
Epoch 3528/5000
80/80 - 32s 398ms/step - loss: 0.0681 - acc: 0.9996 - val_loss: 0.4021 - val_acc: 0.9317
Epoch 3529/5000
80/80 - 32s 398ms/step - loss: 0.0682 - acc: 0.9996 - val_loss: 0.4032 - val_acc: 0.9317
Epoch 3530/5000
80/80 - 32s 398ms/step - loss: 0.0678 - acc: 0.9997 - val_loss: 0.4026 - val_acc: 0.9316
Epoch 3531/5000
80/80 - 32s 398ms/step - loss: 0.0679 - acc: 0.9996 - val_loss: 0.4000 - val_acc: 0.9312
Epoch 3532/5000
80/80 - 32s 398ms/step - loss: 0.0682 - acc: 0.9995 - val_loss: 0.4004 - val_acc: 0.9325
Epoch 3533/5000
80/80 - 32s 398ms/step - loss: 0.0683 - acc: 0.9996 - val_loss: 0.4012 - val_acc: 0.9320
Epoch 3534/5000
80/80 - 32s 398ms/step - loss: 0.0681 - acc: 0.9996 - val_loss: 0.4006 - val_acc: 0.9320
Epoch 3535/5000
80/80 - 32s 398ms/step - loss: 0.0678 - acc: 0.9996 - val_loss: 0.4026 - val_acc: 0.9308
Epoch 3536/5000
80/80 - 32s 398ms/step - loss: 0.0675 - acc: 0.9997 - val_loss: 0.4037 - val_acc: 0.9306
Epoch 3537/5000
80/80 - 32s 398ms/step - loss: 0.0683 - acc: 0.9995 - val_loss: 0.4056 - val_acc: 0.9296
Epoch 3538/5000
80/80 - 32s 398ms/step - loss: 0.0684 - acc: 0.9995 - val_loss: 0.4053 - val_acc: 0.9313
Epoch 3539/5000
80/80 - 32s 400ms/step - loss: 0.0682 - acc: 0.9995 - val_loss: 0.4037 - val_acc: 0.9305

The program was interrupted before training finished. The training accuracy is already close to 100%, while the test accuracy keeps fluctuating around 93%; the model appears to have entered the overfitting stage, so further training would not help much.

How can overfitting be suppressed? The data augmentation already uses quite a few transforms, so perhaps the next step is to reduce the network size?
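As one sketch of the "smaller network" idea (purely an assumption of mine, not something run in this record), the number of residual blocks per stage could simply be reduced when building the model, reusing the residual_block and aprelu functions and the imports from the listing above:

# A hypothetical smaller variant: fewer residual blocks per stage
# (3/1/2/1/2 instead of 9/1/8/1/8); everything else is identical to the model above.
inputs = Input(shape=(32, 32, 3))
net = Conv2D(16, 3, padding='same', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(inputs)
net = residual_block(net, 3, 16, downsample=False)
net = residual_block(net, 1, 32, downsample=True)
net = residual_block(net, 2, 32, downsample=False)
net = residual_block(net, 1, 64, downsample=True)
net = residual_block(net, 2, 64, downsample=False)
net = BatchNormalization(momentum=0.9, gamma_regularizer=l2(1e-4))(net)
net = Activation('relu')(net)
net = GlobalAveragePooling2D()(net)
outputs = Dense(10, activation='softmax', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(net)
small_model = Model(inputs=inputs, outputs=outputs)
small_model.compile(loss='categorical_crossentropy',
                    optimizer=optimizers.SGD(lr=0.1, decay=0., momentum=0.9, nesterov=True),
                    metrics=['accuracy'])

Whether this actually suppresses overfitting would have to be checked in a later tuning record.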

Minghang Zhao, Shisheng Zhong, Xuyun Fu, Baoping Tang, Shaojiang Dong, Michael Pecht, Deep Residual Networks with Adaptively Parametric Rectifier Linear Units for Fault Diagnosis, IEEE Transactions on Industrial Electronics, 2020, DOI: 10.1109/TIE.2020.2972458

https://ieeexplore.ieee.org/document/8998530

————————————————

Copyright notice: this article is an original post by the CSDN blogger "dangqing1988" and is distributed under the CC 4.0 BY-SA license. Please include the original source link and this notice when reposting.

Original link: https://blog.csdn.net/dangqing1988/article/details/105823443

This article is a repost. If there is any infringement, please contact cloudcommunity@tencent.com for removal.
