[HIT Version] Dynamic ReLU: Adaptively Parametric ReLU with Keras Code (Tuning Record 8)


This article introduces a Dynamic ReLU activation function proposed by a team from Harbin Institute of Technology: the Adaptively Parametric ReLU (APReLU). Originally applied to fault diagnosis based on one-dimensional vibration signals, it lets every sample learn its own ReLU parameters. The paper was submitted to IEEE Transactions on Industrial Electronics on May 3, 2019, accepted on January 24, 2020, and published on the IEEE website on February 13, 2020.

Continuing from the previous post:

[HIT Version] Dynamic ReLU: Adaptively Parametric ReLU with Keras Code (Tuning Record 7)

This post keeps the network very shallow, with only two residual blocks, and tests the Adaptively Parametric ReLU (APReLU) activation function on the CIFAR-10 image dataset.

Adaptively parametric ReLU: a Dynamic ReLU activation function
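
For reference, the APReLU that the aprelu function below implements can be written per channel as follows (a sketch that matches the code; $x_c$ is channel $c$ of the input feature map and $\alpha_c$ is its learned coefficient):

$$ y_c = \max(x_c, 0) + \alpha_c \, \min(x_c, 0), \qquad \alpha_c \in (0, 1) $$

The coefficients $\alpha_c$ are computed separately for each sample: the positive and negative parts of $x$ are globally average-pooled, concatenated, and passed through two fully connected layers with batch normalization, followed by a sigmoid.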

The Keras code is as follows:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Tue Apr 14 04:17:45 2020
Implemented using TensorFlow 1.0.1 and Keras 2.2.1

Minghang Zhao, Shisheng Zhong, Xuyun Fu, Baoping Tang, Shaojiang Dong, Michael Pecht,
Deep Residual Networks with Adaptively Parametric Rectifier Linear Units for Fault Diagnosis, 
IEEE Transactions on Industrial Electronics, DOI: 10.1109/TIE.2020.2972458,
Date of Publication: 13 February 2020

@author: Minghang Zhao
"""

from __future__ import print_function
import keras
import numpy as np
from keras.datasets import cifar10
from keras.layers import Dense, Conv2D, BatchNormalization, Activation, Minimum
from keras.layers import AveragePooling2D, Input, GlobalAveragePooling2D, Concatenate, Reshape
from keras.regularizers import l2
from keras import backend as K
from keras.models import Model
from keras import optimizers
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import LearningRateScheduler
K.set_learning_phase(1)

# The data, split between train and test sets
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_test = x_test-np.mean(x_train)
x_train = x_train-np.mean(x_train)
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)

# Schedule the learning rate: multiply by 0.1 every 300 epochs
def scheduler(epoch):
    if epoch % 300 == 0 and epoch != 0:
        lr = K.get_value(model.optimizer.lr)
        K.set_value(model.optimizer.lr, lr * 0.1)
        print("lr changed to {}".format(lr * 0.1))
    return K.get_value(model.optimizer.lr)
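
# Note: with the 1000-epoch run below and an initial lr of 0.1, the learning
# rate drops to 0.01, 0.001, and 1e-4 at epochs 300, 600, and 900, respectively.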

# An adaptively parametric rectifier linear unit (APReLU)
def aprelu(inputs):
    # get the number of channels
    channels = inputs.get_shape().as_list()[-1]
    # get a zero feature map
    zeros_input = keras.layers.subtract([inputs, inputs])
    # get a feature map with only positive features
    pos_input = Activation('relu')(inputs)
    # get a feature map with only negative features
    neg_input = Minimum()([inputs,zeros_input])
    # define a network to obtain the scaling coefficients
    scales_p = GlobalAveragePooling2D()(pos_input)
    scales_n = GlobalAveragePooling2D()(neg_input)
    scales = Concatenate()([scales_n, scales_p])
    scales = Dense(channels, activation='linear', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(scales)
    scales = BatchNormalization()(scales)
    scales = Activation('relu')(scales)
    scales = Dense(channels, activation='linear', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(scales)
    scales = BatchNormalization()(scales)
    scales = Activation('sigmoid')(scales)
    scales = Reshape((1,1,channels))(scales)
    # apply the parametric ReLU: scale the negative part and add it to the positive part
    neg_part = keras.layers.multiply([scales, neg_input])
    return keras.layers.add([pos_input, neg_part])
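
# Note: the sigmoid keeps every scaling coefficient in (0, 1), so each sample
# learns its own per-channel slope for the negative part of the input.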

# Residual Block
def residual_block(incoming, nb_blocks, out_channels, downsample=False,
                   downsample_strides=2):
    
    residual = incoming
    in_channels = incoming.get_shape().as_list()[-1]
    
    for i in range(nb_blocks):
        
        identity = residual
        
        if not downsample:
            downsample_strides = 1
        
        residual = BatchNormalization()(residual)
        residual = aprelu(residual)
        residual = Conv2D(out_channels, 3, strides=(downsample_strides, downsample_strides), 
                          padding='same', kernel_initializer='he_normal', 
                          kernel_regularizer=l2(1e-4))(residual)
        
        residual = BatchNormalization()(residual)
        residual = aprelu(residual)
        residual = Conv2D(out_channels, 3, padding='same', kernel_initializer='he_normal', 
                          kernel_regularizer=l2(1e-4))(residual)
        
        # Downsampling
        if downsample_strides > 1:
            identity = AveragePooling2D(pool_size=(1,1), strides=(2,2))(identity)
            
        # Zero-padding to match channels
        if in_channels != out_channels:
            zeros_identity = keras.layers.subtract([identity, identity])
            identity = keras.layers.concatenate([identity, zeros_identity])
            in_channels = out_channels
        
        residual = keras.layers.add([residual, identity])
    
    return residual
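
# Note: the identity path is downsampled by stride-2 pooling and padded with
# zero channels, similar to the parameter-free "option A" shortcut of the
# original ResNet paper, so the shortcuts add no trainable weights.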


# define and train a model
inputs = Input(shape=(32, 32, 3))
net = Conv2D(16, 3, padding='same', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(inputs)
# net = residual_block(net, 9, 16, downsample=False)
net = residual_block(net, 1, 32, downsample=True)
# net = residual_block(net, 8, 32, downsample=False)
net = residual_block(net, 1, 64, downsample=True)
# net = residual_block(net, 8, 64, downsample=False)
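# Note: the commented-out calls above are the extra residual blocks of the
# deeper network; this tuning record deliberately keeps only two blocks.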
net = BatchNormalization()(net)
net = aprelu(net)
net = GlobalAveragePooling2D()(net)
outputs = Dense(10, activation='softmax', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(net)
model = Model(inputs=inputs, outputs=outputs)
sgd = optimizers.SGD(lr=0.1, decay=0., momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

# data augmentation
datagen = ImageDataGenerator(
    # randomly rotate images in the range (degrees, 0 to 30)
    rotation_range=30,
    # randomly flip images
    horizontal_flip=True,
    # randomly shift images horizontally
    width_shift_range=0.125,
    # randomly shift images vertically
    height_shift_range=0.125)

reduce_lr = LearningRateScheduler(scheduler)
# fit the model on the batches generated by datagen.flow().
model.fit_generator(datagen.flow(x_train, y_train, batch_size=100),
                    validation_data=(x_test, y_test), epochs=1000, 
                    verbose=1, callbacks=[reduce_lr], workers=4)

# get results
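# Switch to inference mode so that BatchNormalization uses its moving statistics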
K.set_learning_phase(0)
DRSN_train_score = model.evaluate(x_train, y_train, batch_size=100, verbose=0)
print('Train loss:', DRSN_train_score[0])
print('Train accuracy:', DRSN_train_score[1])
DRSN_test_score = model.evaluate(x_test, y_test, batch_size=100, verbose=0)
print('Test loss:', DRSN_test_score[0])
print('Test accuracy:', DRSN_test_score[1])
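
As a quick sanity check (not part of the original script), one can verify that aprelu preserves tensor shape by wrapping it in a tiny model. A minimal sketch, assuming the definitions above are in scope; the probe_* names are illustrative:

# Hypothetical shape check, to be run after the definitions above
probe_in = Input(shape=(8, 8, 16))
probe_out = aprelu(probe_in)
probe = Model(inputs=probe_in, outputs=probe_out)
print(probe.output_shape)  # (None, 8, 8, 16): APReLU keeps the input shape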

The experimental results are as follows:

Epoch 755/1000
16s 33ms/step - loss: 0.2417 - acc: 0.9453 - val_loss: 0.5321 - val_acc: 0.8719
Epoch 756/1000
17s 33ms/step - loss: 0.2436 - acc: 0.9450 - val_loss: 0.5294 - val_acc: 0.8714
Epoch 757/1000
17s 33ms/step - loss: 0.2463 - acc: 0.9435 - val_loss: 0.5351 - val_acc: 0.8710
Epoch 758/1000
17s 33ms/step - loss: 0.2461 - acc: 0.9444 - val_loss: 0.5356 - val_acc: 0.8708
Epoch 759/1000
17s 33ms/step - loss: 0.2440 - acc: 0.9449 - val_loss: 0.5355 - val_acc: 0.8674
Epoch 760/1000
16s 32ms/step - loss: 0.2431 - acc: 0.9447 - val_loss: 0.5329 - val_acc: 0.8711
Epoch 761/1000
16s 32ms/step - loss: 0.2469 - acc: 0.9440 - val_loss: 0.5294 - val_acc: 0.8712
Epoch 762/1000
16s 32ms/step - loss: 0.2420 - acc: 0.9445 - val_loss: 0.5296 - val_acc: 0.8725
Epoch 763/1000
16s 33ms/step - loss: 0.2411 - acc: 0.9469 - val_loss: 0.5353 - val_acc: 0.8712
Epoch 764/1000
17s 33ms/step - loss: 0.2426 - acc: 0.9453 - val_loss: 0.5363 - val_acc: 0.8715
Epoch 765/1000
17s 33ms/step - loss: 0.2415 - acc: 0.9449 - val_loss: 0.5322 - val_acc: 0.8718
Epoch 766/1000
17s 33ms/step - loss: 0.2392 - acc: 0.9450 - val_loss: 0.5321 - val_acc: 0.8696
Epoch 767/1000
17s 34ms/step - loss: 0.2396 - acc: 0.9463 - val_loss: 0.5353 - val_acc: 0.8699
Epoch 768/1000
16s 33ms/step - loss: 0.2457 - acc: 0.9436 - val_loss: 0.5314 - val_acc: 0.8713
Epoch 769/1000
16s 32ms/step - loss: 0.2442 - acc: 0.9440 - val_loss: 0.5327 - val_acc: 0.8740
Epoch 770/1000
16s 32ms/step - loss: 0.2449 - acc: 0.9445 - val_loss: 0.5336 - val_acc: 0.8706
Epoch 771/1000
16s 33ms/step - loss: 0.2408 - acc: 0.9458 - val_loss: 0.5359 - val_acc: 0.8706
Epoch 772/1000
17s 33ms/step - loss: 0.2400 - acc: 0.9454 - val_loss: 0.5362 - val_acc: 0.8690
Epoch 773/1000
17s 33ms/step - loss: 0.2409 - acc: 0.9455 - val_loss: 0.5343 - val_acc: 0.8688
Epoch 774/1000
17s 33ms/step - loss: 0.2390 - acc: 0.9464 - val_loss: 0.5321 - val_acc: 0.8690
Epoch 775/1000
17s 33ms/step - loss: 0.2438 - acc: 0.9439 - val_loss: 0.5363 - val_acc: 0.8700
Epoch 776/1000
16s 33ms/step - loss: 0.2424 - acc: 0.9441 - val_loss: 0.5359 - val_acc: 0.8691
Epoch 777/1000
16s 32ms/step - loss: 0.2398 - acc: 0.9448 - val_loss: 0.5354 - val_acc: 0.8689
Epoch 778/1000
16s 32ms/step - loss: 0.2420 - acc: 0.9450 - val_loss: 0.5385 - val_acc: 0.8681
Epoch 779/1000
16s 33ms/step - loss: 0.2391 - acc: 0.9459 - val_loss: 0.5320 - val_acc: 0.8698
Epoch 780/1000
17s 33ms/step - loss: 0.2425 - acc: 0.9443 - val_loss: 0.5363 - val_acc: 0.8683
Epoch 781/1000
17s 33ms/step - loss: 0.2381 - acc: 0.9457 - val_loss: 0.5345 - val_acc: 0.8680
Epoch 782/1000
17s 33ms/step - loss: 0.2374 - acc: 0.9470 - val_loss: 0.5301 - val_acc: 0.8710
Epoch 783/1000
17s 33ms/step - loss: 0.2389 - acc: 0.9460 - val_loss: 0.5334 - val_acc: 0.8696
Epoch 784/1000
16s 33ms/step - loss: 0.2386 - acc: 0.9473 - val_loss: 0.5286 - val_acc: 0.8691
Epoch 785/1000
16s 32ms/step - loss: 0.2387 - acc: 0.9447 - val_loss: 0.5362 - val_acc: 0.8690
Epoch 786/1000
16s 32ms/step - loss: 0.2386 - acc: 0.9461 - val_loss: 0.5345 - val_acc: 0.8690
Epoch 787/1000
16s 33ms/step - loss: 0.2358 - acc: 0.9464 - val_loss: 0.5344 - val_acc: 0.8709
Epoch 788/1000
17s 33ms/step - loss: 0.2374 - acc: 0.9463 - val_loss: 0.5322 - val_acc: 0.8716
Epoch 789/1000
17s 33ms/step - loss: 0.2388 - acc: 0.9449 - val_loss: 0.5267 - val_acc: 0.8744
Epoch 790/1000
17s 33ms/step - loss: 0.2349 - acc: 0.9471 - val_loss: 0.5347 - val_acc: 0.8706
Epoch 791/1000
17s 33ms/step - loss: 0.2391 - acc: 0.9458 - val_loss: 0.5336 - val_acc: 0.8693
Epoch 792/1000
16s 32ms/step - loss: 0.2385 - acc: 0.9447 - val_loss: 0.5387 - val_acc: 0.8687
Epoch 793/1000
16s 32ms/step - loss: 0.2356 - acc: 0.9471 - val_loss: 0.5374 - val_acc: 0.8686
Epoch 794/1000
16s 33ms/step - loss: 0.2398 - acc: 0.9455 - val_loss: 0.5375 - val_acc: 0.8678
Epoch 795/1000
17s 33ms/step - loss: 0.2365 - acc: 0.9452 - val_loss: 0.5302 - val_acc: 0.8710
Epoch 796/1000
16s 33ms/step - loss: 0.2358 - acc: 0.9469 - val_loss: 0.5374 - val_acc: 0.8711
Epoch 797/1000
17s 33ms/step - loss: 0.2379 - acc: 0.9450 - val_loss: 0.5335 - val_acc: 0.8686
Epoch 798/1000
17s 34ms/step - loss: 0.2369 - acc: 0.9454 - val_loss: 0.5340 - val_acc: 0.8687
Epoch 799/1000
17s 33ms/step - loss: 0.2337 - acc: 0.9470 - val_loss: 0.5360 - val_acc: 0.8698
Epoch 800/1000
16s 33ms/step - loss: 0.2412 - acc: 0.9432 - val_loss: 0.5353 - val_acc: 0.8697
Epoch 801/1000
16s 32ms/step - loss: 0.2357 - acc: 0.9456 - val_loss: 0.5362 - val_acc: 0.8689
Epoch 802/1000
16s 33ms/step - loss: 0.2347 - acc: 0.9464 - val_loss: 0.5371 - val_acc: 0.8698
Epoch 803/1000
16s 33ms/step - loss: 0.2311 - acc: 0.9474 - val_loss: 0.5328 - val_acc: 0.8683
Epoch 804/1000
17s 33ms/step - loss: 0.2396 - acc: 0.9449 - val_loss: 0.5358 - val_acc: 0.8699
Epoch 805/1000
17s 33ms/step - loss: 0.2380 - acc: 0.9459 - val_loss: 0.5351 - val_acc: 0.8689
Epoch 806/1000
17s 33ms/step - loss: 0.2351 - acc: 0.9452 - val_loss: 0.5362 - val_acc: 0.8693
Epoch 807/1000
17s 33ms/step - loss: 0.2350 - acc: 0.9463 - val_loss: 0.5281 - val_acc: 0.8711
Epoch 808/1000
16s 33ms/step - loss: 0.2333 - acc: 0.9467 - val_loss: 0.5347 - val_acc: 0.8703
Epoch 809/1000
16s 33ms/step - loss: 0.2333 - acc: 0.9463 - val_loss: 0.5338 - val_acc: 0.8709
Epoch 810/1000
16s 32ms/step - loss: 0.2330 - acc: 0.9478 - val_loss: 0.5351 - val_acc: 0.8704
Epoch 811/1000
17s 33ms/step - loss: 0.2374 - acc: 0.9439 - val_loss: 0.5400 - val_acc: 0.8696
Epoch 812/1000
16s 33ms/step - loss: 0.2321 - acc: 0.9467 - val_loss: 0.5361 - val_acc: 0.8709
Epoch 813/1000
17s 33ms/step - loss: 0.2349 - acc: 0.9457 - val_loss: 0.5307 - val_acc: 0.8706
Epoch 814/1000
17s 33ms/step - loss: 0.2361 - acc: 0.9457 - val_loss: 0.5368 - val_acc: 0.8686
Epoch 815/1000
17s 33ms/step - loss: 0.2351 - acc: 0.9459 - val_loss: 0.5344 - val_acc: 0.8692
Epoch 816/1000
16s 33ms/step - loss: 0.2362 - acc: 0.9464 - val_loss: 0.5297 - val_acc: 0.8693
Epoch 817/1000
16s 32ms/step - loss: 0.2326 - acc: 0.9471 - val_loss: 0.5344 - val_acc: 0.8688
Epoch 818/1000
16s 33ms/step - loss: 0.2353 - acc: 0.9448 - val_loss: 0.5418 - val_acc: 0.8698
Epoch 819/1000
16s 33ms/step - loss: 0.2361 - acc: 0.9439 - val_loss: 0.5353 - val_acc: 0.8705
Epoch 820/1000
17s 33ms/step - loss: 0.2320 - acc: 0.9468 - val_loss: 0.5411 - val_acc: 0.8701
Epoch 821/1000
17s 33ms/step - loss: 0.2311 - acc: 0.9466 - val_loss: 0.5360 - val_acc: 0.8683
Epoch 822/1000
17s 34ms/step - loss: 0.2298 - acc: 0.9464 - val_loss: 0.5369 - val_acc: 0.8722
Epoch 823/1000
17s 33ms/step - loss: 0.2360 - acc: 0.9450 - val_loss: 0.5409 - val_acc: 0.8657
Epoch 824/1000
16s 32ms/step - loss: 0.2319 - acc: 0.9471 - val_loss: 0.5340 - val_acc: 0.8689
Epoch 825/1000
16s 32ms/step - loss: 0.2307 - acc: 0.9483 - val_loss: 0.5338 - val_acc: 0.8695
Epoch 826/1000
16s 32ms/step - loss: 0.2337 - acc: 0.9465 - val_loss: 0.5364 - val_acc: 0.8676
Epoch 827/1000
16s 33ms/step - loss: 0.2348 - acc: 0.9454 - val_loss: 0.5367 - val_acc: 0.8676
Epoch 828/1000
17s 33ms/step - loss: 0.2334 - acc: 0.9453 - val_loss: 0.5284 - val_acc: 0.8699
Epoch 829/1000
17s 34ms/step - loss: 0.2334 - acc: 0.9459 - val_loss: 0.5325 - val_acc: 0.8689
Epoch 830/1000
17s 33ms/step - loss: 0.2332 - acc: 0.9462 - val_loss: 0.5346 - val_acc: 0.8701
Epoch 831/1000
17s 33ms/step - loss: 0.2355 - acc: 0.9438 - val_loss: 0.5329 - val_acc: 0.8687
Epoch 832/1000
16s 33ms/step - loss: 0.2325 - acc: 0.9459 - val_loss: 0.5325 - val_acc: 0.8694
Epoch 833/1000
16s 33ms/step - loss: 0.2299 - acc: 0.9480 - val_loss: 0.5328 - val_acc: 0.8683
Epoch 834/1000
16s 33ms/step - loss: 0.2328 - acc: 0.9462 - val_loss: 0.5345 - val_acc: 0.8686
Epoch 835/1000
16s 33ms/step - loss: 0.2310 - acc: 0.9468 - val_loss: 0.5396 - val_acc: 0.8684
Epoch 836/1000
17s 33ms/step - loss: 0.2300 - acc: 0.9477 - val_loss: 0.5312 - val_acc: 0.8680
Epoch 837/1000
17s 33ms/step - loss: 0.2316 - acc: 0.9470 - val_loss: 0.5363 - val_acc: 0.8696
Epoch 838/1000
17s 33ms/step - loss: 0.2287 - acc: 0.9466 - val_loss: 0.5400 - val_acc: 0.8691
Epoch 839/1000
17s 33ms/step - loss: 0.2324 - acc: 0.9456 - val_loss: 0.5354 - val_acc: 0.8691
Epoch 840/1000
16s 33ms/step - loss: 0.2321 - acc: 0.9456 - val_loss: 0.5269 - val_acc: 0.8701
Epoch 841/1000
16s 32ms/step - loss: 0.2328 - acc: 0.9440 - val_loss: 0.5319 - val_acc: 0.8713
Epoch 842/1000
16s 32ms/step - loss: 0.2304 - acc: 0.9454 - val_loss: 0.5295 - val_acc: 0.8697
Epoch 843/1000
16s 33ms/step - loss: 0.2287 - acc: 0.9459 - val_loss: 0.5329 - val_acc: 0.8720
Epoch 844/1000
17s 33ms/step - loss: 0.2315 - acc: 0.9452 - val_loss: 0.5334 - val_acc: 0.8709
Epoch 845/1000
17s 34ms/step - loss: 0.2313 - acc: 0.9461 - val_loss: 0.5357 - val_acc: 0.8691
Epoch 846/1000
17s 33ms/step - loss: 0.2283 - acc: 0.9470 - val_loss: 0.5292 - val_acc: 0.8743
Epoch 847/1000
17s 33ms/step - loss: 0.2308 - acc: 0.9447 - val_loss: 0.5303 - val_acc: 0.8713
Epoch 848/1000
16s 33ms/step - loss: 0.2320 - acc: 0.9458 - val_loss: 0.5297 - val_acc: 0.8675
Epoch 849/1000
16s 32ms/step - loss: 0.2261 - acc: 0.9473 - val_loss: 0.5278 - val_acc: 0.8712
Epoch 850/1000
16s 33ms/step - loss: 0.2289 - acc: 0.9466 - val_loss: 0.5329 - val_acc: 0.8710
Epoch 851/1000
16s 33ms/step - loss: 0.2291 - acc: 0.9474 - val_loss: 0.5331 - val_acc: 0.8715
Epoch 852/1000
17s 33ms/step - loss: 0.2312 - acc: 0.9463 - val_loss: 0.5269 - val_acc: 0.8727
Epoch 853/1000
17s 33ms/step - loss: 0.2319 - acc: 0.9458 - val_loss: 0.5287 - val_acc: 0.8701
Epoch 854/1000
17s 34ms/step - loss: 0.2291 - acc: 0.9461 - val_loss: 0.5300 - val_acc: 0.8731
Epoch 855/1000
17s 33ms/step - loss: 0.2294 - acc: 0.9469 - val_loss: 0.5342 - val_acc: 0.8703
Epoch 856/1000
16s 32ms/step - loss: 0.2305 - acc: 0.9456 - val_loss: 0.5324 - val_acc: 0.8703
Epoch 857/1000
16s 33ms/step - loss: 0.2318 - acc: 0.9448 - val_loss: 0.5338 - val_acc: 0.8677
Epoch 858/1000
16s 33ms/step - loss: 0.2286 - acc: 0.9466 - val_loss: 0.5299 - val_acc: 0.8688
Epoch 859/1000
17s 33ms/step - loss: 0.2302 - acc: 0.9467 - val_loss: 0.5329 - val_acc: 0.8686
Epoch 860/1000
17s 33ms/step - loss: 0.2305 - acc: 0.9457 - val_loss: 0.5350 - val_acc: 0.8687
Epoch 861/1000
17s 33ms/step - loss: 0.2284 - acc: 0.9457 - val_loss: 0.5376 - val_acc: 0.8689
Epoch 862/1000
17s 33ms/step - loss: 0.2302 - acc: 0.9460 - val_loss: 0.5317 - val_acc: 0.8705
Epoch 863/1000
17s 33ms/step - loss: 0.2276 - acc: 0.9462 - val_loss: 0.5327 - val_acc: 0.8694
Epoch 864/1000
16s 33ms/step - loss: 0.2273 - acc: 0.9471 - val_loss: 0.5338 - val_acc: 0.8706
Epoch 865/1000
16s 32ms/step - loss: 0.2278 - acc: 0.9462 - val_loss: 0.5311 - val_acc: 0.8703
Epoch 866/1000
16s 32ms/step - loss: 0.2277 - acc: 0.9455 - val_loss: 0.5312 - val_acc: 0.8727
Epoch 867/1000
16s 33ms/step - loss: 0.2282 - acc: 0.9467 - val_loss: 0.5315 - val_acc: 0.8707
Epoch 868/1000
17s 33ms/step - loss: 0.2264 - acc: 0.9458 - val_loss: 0.5371 - val_acc: 0.8694
Epoch 869/1000
17s 33ms/step - loss: 0.2266 - acc: 0.9470 - val_loss: 0.5354 - val_acc: 0.8684
Epoch 870/1000
17s 33ms/step - loss: 0.2269 - acc: 0.9473 - val_loss: 0.5325 - val_acc: 0.8674
Epoch 871/1000
17s 33ms/step - loss: 0.2269 - acc: 0.9481 - val_loss: 0.5349 - val_acc: 0.8706
Epoch 872/1000
16s 32ms/step - loss: 0.2230 - acc: 0.9476 - val_loss: 0.5338 - val_acc: 0.8709
Epoch 873/1000
16s 32ms/step - loss: 0.2291 - acc: 0.9455 - val_loss: 0.5291 - val_acc: 0.8692
Epoch 874/1000
16s 32ms/step - loss: 0.2271 - acc: 0.9459 - val_loss: 0.5248 - val_acc: 0.8708
Epoch 875/1000
16s 33ms/step - loss: 0.2307 - acc: 0.9456 - val_loss: 0.5285 - val_acc: 0.8705
Epoch 876/1000
17s 33ms/step - loss: 0.2288 - acc: 0.9452 - val_loss: 0.5331 - val_acc: 0.8694
Epoch 877/1000
17s 33ms/step - loss: 0.2296 - acc: 0.9455 - val_loss: 0.5343 - val_acc: 0.8671
Epoch 878/1000
17s 33ms/step - loss: 0.2263 - acc: 0.9469 - val_loss: 0.5348 - val_acc: 0.8703
Epoch 879/1000
17s 33ms/step - loss: 0.2274 - acc: 0.9462 - val_loss: 0.5333 - val_acc: 0.8678
Epoch 880/1000
16s 33ms/step - loss: 0.2234 - acc: 0.9483 - val_loss: 0.5364 - val_acc: 0.8684
Epoch 881/1000
16s 32ms/step - loss: 0.2257 - acc: 0.9469 - val_loss: 0.5303 - val_acc: 0.8693
Epoch 882/1000
16s 32ms/step - loss: 0.2256 - acc: 0.9474 - val_loss: 0.5301 - val_acc: 0.8691
Epoch 883/1000
16s 33ms/step - loss: 0.2280 - acc: 0.9457 - val_loss: 0.5285 - val_acc: 0.8670
Epoch 884/1000
17s 33ms/step - loss: 0.2256 - acc: 0.9479 - val_loss: 0.5231 - val_acc: 0.8709
Epoch 885/1000
17s 33ms/step - loss: 0.2270 - acc: 0.9444 - val_loss: 0.5311 - val_acc: 0.8693
Epoch 886/1000
17s 34ms/step - loss: 0.2281 - acc: 0.9458 - val_loss: 0.5255 - val_acc: 0.8707
Epoch 887/1000
17s 33ms/step - loss: 0.2280 - acc: 0.9457 - val_loss: 0.5289 - val_acc: 0.8727
Epoch 888/1000
16s 33ms/step - loss: 0.2240 - acc: 0.9470 - val_loss: 0.5342 - val_acc: 0.8698
Epoch 889/1000
16s 32ms/step - loss: 0.2264 - acc: 0.9464 - val_loss: 0.5375 - val_acc: 0.8672
Epoch 890/1000
16s 32ms/step - loss: 0.2261 - acc: 0.9469 - val_loss: 0.5372 - val_acc: 0.8686
Epoch 891/1000
16s 33ms/step - loss: 0.2214 - acc: 0.9472 - val_loss: 0.5297 - val_acc: 0.8692
Epoch 892/1000
17s 33ms/step - loss: 0.2239 - acc: 0.9473 - val_loss: 0.5325 - val_acc: 0.8705
Epoch 893/1000
17s 34ms/step - loss: 0.2236 - acc: 0.9470 - val_loss: 0.5261 - val_acc: 0.8673
Epoch 894/1000
17s 33ms/step - loss: 0.2269 - acc: 0.9465 - val_loss: 0.5368 - val_acc: 0.8674
Epoch 895/1000
17s 33ms/step - loss: 0.2242 - acc: 0.9469 - val_loss: 0.5361 - val_acc: 0.8684
Epoch 896/1000
16s 33ms/step - loss: 0.2241 - acc: 0.9470 - val_loss: 0.5322 - val_acc: 0.8689
Epoch 897/1000
17s 34ms/step - loss: 0.2239 - acc: 0.9466 - val_loss: 0.5413 - val_acc: 0.8645
Epoch 898/1000
17s 34ms/step - loss: 0.2239 - acc: 0.9467 - val_loss: 0.5379 - val_acc: 0.8674
Epoch 899/1000
17s 34ms/step - loss: 0.2287 - acc: 0.9460 - val_loss: 0.5365 - val_acc: 0.8663
Epoch 900/1000
17s 34ms/step - loss: 0.2216 - acc: 0.9482 - val_loss: 0.5382 - val_acc: 0.8695
Epoch 901/1000
lr changed to 9.999999310821295e-05
17s 34ms/step - loss: 0.2176 - acc: 0.9493 - val_loss: 0.5316 - val_acc: 0.8716
Epoch 902/1000
17s 35ms/step - loss: 0.2117 - acc: 0.9507 - val_loss: 0.5298 - val_acc: 0.8711
Epoch 903/1000
17s 34ms/step - loss: 0.2125 - acc: 0.9514 - val_loss: 0.5297 - val_acc: 0.8705
Epoch 904/1000
17s 34ms/step - loss: 0.2120 - acc: 0.9520 - val_loss: 0.5282 - val_acc: 0.8713
Epoch 905/1000
17s 34ms/step - loss: 0.2125 - acc: 0.9510 - val_loss: 0.5284 - val_acc: 0.8710
Epoch 906/1000
17s 34ms/step - loss: 0.2103 - acc: 0.9520 - val_loss: 0.5281 - val_acc: 0.8719
Epoch 907/1000
17s 34ms/step - loss: 0.2075 - acc: 0.9528 - val_loss: 0.5279 - val_acc: 0.8719
Epoch 908/1000
17s 34ms/step - loss: 0.2074 - acc: 0.9526 - val_loss: 0.5284 - val_acc: 0.8713
Epoch 909/1000
17s 35ms/step - loss: 0.2070 - acc: 0.9530 - val_loss: 0.5271 - val_acc: 0.8705
Epoch 910/1000
17s 34ms/step - loss: 0.2093 - acc: 0.9532 - val_loss: 0.5271 - val_acc: 0.8712
Epoch 911/1000
17s 34ms/step - loss: 0.2071 - acc: 0.9528 - val_loss: 0.5277 - val_acc: 0.8705
Epoch 912/1000
17s 34ms/step - loss: 0.2052 - acc: 0.9541 - val_loss: 0.5279 - val_acc: 0.8701
Epoch 913/1000
17s 33ms/step - loss: 0.2076 - acc: 0.9533 - val_loss: 0.5279 - val_acc: 0.8696
Epoch 914/1000
17s 34ms/step - loss: 0.2070 - acc: 0.9525 - val_loss: 0.5284 - val_acc: 0.8690
Epoch 915/1000
17s 34ms/step - loss: 0.2040 - acc: 0.9542 - val_loss: 0.5286 - val_acc: 0.8694
Epoch 916/1000
17s 34ms/step - loss: 0.2035 - acc: 0.9550 - val_loss: 0.5276 - val_acc: 0.8702
Epoch 917/1000
17s 34ms/step - loss: 0.2032 - acc: 0.9549 - val_loss: 0.5278 - val_acc: 0.8701
Epoch 918/1000
17s 34ms/step - loss: 0.2041 - acc: 0.9540 - val_loss: 0.5274 - val_acc: 0.8707
Epoch 919/1000
17s 34ms/step - loss: 0.2039 - acc: 0.9541 - val_loss: 0.5278 - val_acc: 0.8706
Epoch 920/1000
17s 34ms/step - loss: 0.2024 - acc: 0.9552 - val_loss: 0.5284 - val_acc: 0.8716
Epoch 921/1000
17s 34ms/step - loss: 0.2046 - acc: 0.9534 - val_loss: 0.5271 - val_acc: 0.8714
Epoch 922/1000
17s 34ms/step - loss: 0.2062 - acc: 0.9539 - val_loss: 0.5275 - val_acc: 0.8709
Epoch 923/1000
17s 34ms/step - loss: 0.2046 - acc: 0.9545 - val_loss: 0.5276 - val_acc: 0.8708
Epoch 924/1000
17s 34ms/step - loss: 0.2058 - acc: 0.9526 - val_loss: 0.5259 - val_acc: 0.8702
Epoch 925/1000
17s 35ms/step - loss: 0.2047 - acc: 0.9537 - val_loss: 0.5269 - val_acc: 0.8700
Epoch 926/1000
17s 34ms/step - loss: 0.2053 - acc: 0.9533 - val_loss: 0.5270 - val_acc: 0.8712
Epoch 927/1000
17s 34ms/step - loss: 0.2059 - acc: 0.9532 - val_loss: 0.5264 - val_acc: 0.8712
Epoch 928/1000
17s 34ms/step - loss: 0.2047 - acc: 0.9544 - val_loss: 0.5287 - val_acc: 0.8694
Epoch 929/1000
17s 33ms/step - loss: 0.2071 - acc: 0.9531 - val_loss: 0.5276 - val_acc: 0.8695
Epoch 930/1000
17s 33ms/step - loss: 0.2050 - acc: 0.9544 - val_loss: 0.5281 - val_acc: 0.8697
Epoch 931/1000
16s 33ms/step - loss: 0.2042 - acc: 0.9538 - val_loss: 0.5275 - val_acc: 0.8688
Epoch 932/1000
17s 33ms/step - loss: 0.2022 - acc: 0.9551 - val_loss: 0.5274 - val_acc: 0.8700
Epoch 933/1000
17s 33ms/step - loss: 0.2035 - acc: 0.9546 - val_loss: 0.5293 - val_acc: 0.8702
Epoch 934/1000
17s 34ms/step - loss: 0.2042 - acc: 0.9547 - val_loss: 0.5289 - val_acc: 0.8695
Epoch 935/1000
17s 33ms/step - loss: 0.2026 - acc: 0.9550 - val_loss: 0.5290 - val_acc: 0.8707
Epoch 936/1000
16s 33ms/step - loss: 0.2031 - acc: 0.9545 - val_loss: 0.5307 - val_acc: 0.8696
Epoch 937/1000
16s 32ms/step - loss: 0.2034 - acc: 0.9549 - val_loss: 0.5291 - val_acc: 0.8699
Epoch 938/1000
16s 33ms/step - loss: 0.2042 - acc: 0.9534 - val_loss: 0.5273 - val_acc: 0.8713
Epoch 939/1000
17s 33ms/step - loss: 0.2067 - acc: 0.9536 - val_loss: 0.5276 - val_acc: 0.8707
Epoch 940/1000
17s 33ms/step - loss: 0.2035 - acc: 0.9544 - val_loss: 0.5285 - val_acc: 0.8706
Epoch 941/1000
17s 34ms/step - loss: 0.2011 - acc: 0.9553 - val_loss: 0.5285 - val_acc: 0.8704
Epoch 942/1000
17s 33ms/step - loss: 0.2026 - acc: 0.9549 - val_loss: 0.5278 - val_acc: 0.8715
Epoch 943/1000
17s 33ms/step - loss: 0.2005 - acc: 0.9557 - val_loss: 0.5292 - val_acc: 0.8713
Epoch 944/1000
16s 33ms/step - loss: 0.2029 - acc: 0.9543 - val_loss: 0.5300 - val_acc: 0.8696
Epoch 945/1000
16s 33ms/step - loss: 0.2020 - acc: 0.9548 - val_loss: 0.5295 - val_acc: 0.8702
Epoch 946/1000
16s 33ms/step - loss: 0.2020 - acc: 0.9553 - val_loss: 0.5305 - val_acc: 0.8683
Epoch 947/1000
16s 33ms/step - loss: 0.2012 - acc: 0.9549 - val_loss: 0.5295 - val_acc: 0.8685
Epoch 948/1000
17s 33ms/step - loss: 0.2003 - acc: 0.9557 - val_loss: 0.5292 - val_acc: 0.8694
Epoch 949/1000
17s 33ms/step - loss: 0.2058 - acc: 0.9534 - val_loss: 0.5290 - val_acc: 0.8700
Epoch 950/1000
17s 34ms/step - loss: 0.2018 - acc: 0.9551 - val_loss: 0.5295 - val_acc: 0.8711
Epoch 951/1000
17s 33ms/step - loss: 0.2009 - acc: 0.9560 - val_loss: 0.5285 - val_acc: 0.8704
Epoch 952/1000
16s 33ms/step - loss: 0.2016 - acc: 0.9559 - val_loss: 0.5291 - val_acc: 0.8695
Epoch 953/1000
16s 32ms/step - loss: 0.2033 - acc: 0.9546 - val_loss: 0.5310 - val_acc: 0.8703
Epoch 954/1000
16s 33ms/step - loss: 0.2022 - acc: 0.9540 - val_loss: 0.5318 - val_acc: 0.8704
Epoch 955/1000
17s 33ms/step - loss: 0.1997 - acc: 0.9558 - val_loss: 0.5312 - val_acc: 0.8700
Epoch 956/1000
17s 33ms/step - loss: 0.2007 - acc: 0.9555 - val_loss: 0.5297 - val_acc: 0.8701
Epoch 957/1000
17s 33ms/step - loss: 0.2035 - acc: 0.9531 - val_loss: 0.5308 - val_acc: 0.8697
Epoch 958/1000
17s 33ms/step - loss: 0.2018 - acc: 0.9547 - val_loss: 0.5324 - val_acc: 0.8699
Epoch 959/1000
17s 33ms/step - loss: 0.2006 - acc: 0.9557 - val_loss: 0.5318 - val_acc: 0.8695
Epoch 960/1000
16s 33ms/step - loss: 0.1996 - acc: 0.9558 - val_loss: 0.5311 - val_acc: 0.8690
Epoch 961/1000
16s 32ms/step - loss: 0.1991 - acc: 0.9564 - val_loss: 0.5318 - val_acc: 0.8686
Epoch 962/1000
16s 32ms/step - loss: 0.2013 - acc: 0.9544 - val_loss: 0.5323 - val_acc: 0.8681
Epoch 963/1000
17s 33ms/step - loss: 0.2013 - acc: 0.9548 - val_loss: 0.5310 - val_acc: 0.8704
Epoch 964/1000
17s 33ms/step - loss: 0.2026 - acc: 0.9549 - val_loss: 0.5317 - val_acc: 0.8702
Epoch 965/1000
17s 33ms/step - loss: 0.1990 - acc: 0.9565 - val_loss: 0.5312 - val_acc: 0.8708
Epoch 966/1000
17s 33ms/step - loss: 0.2021 - acc: 0.9535 - val_loss: 0.5303 - val_acc: 0.8706
Epoch 967/1000
17s 33ms/step - loss: 0.1996 - acc: 0.9568 - val_loss: 0.5307 - val_acc: 0.8700
Epoch 968/1000
16s 33ms/step - loss: 0.2007 - acc: 0.9556 - val_loss: 0.5313 - val_acc: 0.8699
Epoch 969/1000
16s 33ms/step - loss: 0.2004 - acc: 0.9553 - val_loss: 0.5307 - val_acc: 0.8693
Epoch 970/1000
16s 32ms/step - loss: 0.1988 - acc: 0.9550 - val_loss: 0.5330 - val_acc: 0.8709
Epoch 971/1000
16s 33ms/step - loss: 0.2025 - acc: 0.9548 - val_loss: 0.5326 - val_acc: 0.8710
Epoch 972/1000
17s 33ms/step - loss: 0.2042 - acc: 0.9541 - val_loss: 0.5333 - val_acc: 0.8709
Epoch 973/1000
17s 33ms/step - loss: 0.2011 - acc: 0.9555 - val_loss: 0.5328 - val_acc: 0.8697
Epoch 974/1000
17s 33ms/step - loss: 0.2018 - acc: 0.9550 - val_loss: 0.5328 - val_acc: 0.8705
Epoch 975/1000
17s 33ms/step - loss: 0.2047 - acc: 0.9535 - val_loss: 0.5337 - val_acc: 0.8700
Epoch 976/1000
16s 33ms/step - loss: 0.1987 - acc: 0.9568 - val_loss: 0.5329 - val_acc: 0.8706
Epoch 977/1000
16s 33ms/step - loss: 0.2009 - acc: 0.9551 - val_loss: 0.5332 - val_acc: 0.8698
Epoch 978/1000
16s 33ms/step - loss: 0.1967 - acc: 0.9574 - val_loss: 0.5329 - val_acc: 0.8700
Epoch 979/1000
16s 33ms/step - loss: 0.2011 - acc: 0.9550 - val_loss: 0.5333 - val_acc: 0.8692
Epoch 980/1000
17s 33ms/step - loss: 0.2017 - acc: 0.9546 - val_loss: 0.5333 - val_acc: 0.8690
Epoch 981/1000
17s 33ms/step - loss: 0.1982 - acc: 0.9564 - val_loss: 0.5333 - val_acc: 0.8705
Epoch 982/1000
17s 34ms/step - loss: 0.1995 - acc: 0.9567 - val_loss: 0.5330 - val_acc: 0.8702
Epoch 983/1000
17s 33ms/step - loss: 0.1985 - acc: 0.9559 - val_loss: 0.5334 - val_acc: 0.8704
Epoch 984/1000
16s 33ms/step - loss: 0.2007 - acc: 0.9552 - val_loss: 0.5334 - val_acc: 0.8700
Epoch 985/1000
16s 32ms/step - loss: 0.2023 - acc: 0.9538 - val_loss: 0.5323 - val_acc: 0.8704
Epoch 986/1000
16s 32ms/step - loss: 0.1995 - acc: 0.9563 - val_loss: 0.5331 - val_acc: 0.8694
Epoch 987/1000
17s 33ms/step - loss: 0.2004 - acc: 0.9560 - val_loss: 0.5336 - val_acc: 0.8694
Epoch 988/1000
16s 33ms/step - loss: 0.1985 - acc: 0.9562 - val_loss: 0.5339 - val_acc: 0.8684
Epoch 989/1000
17s 33ms/step - loss: 0.2021 - acc: 0.9540 - val_loss: 0.5339 - val_acc: 0.8688
Epoch 990/1000
17s 33ms/step - loss: 0.1997 - acc: 0.9562 - val_loss: 0.5347 - val_acc: 0.8686
Epoch 991/1000
17s 33ms/step - loss: 0.2011 - acc: 0.9547 - val_loss: 0.5339 - val_acc: 0.8699
Epoch 992/1000
16s 32ms/step - loss: 0.1998 - acc: 0.9556 - val_loss: 0.5326 - val_acc: 0.8692
Epoch 993/1000
16s 32ms/step - loss: 0.2015 - acc: 0.9556 - val_loss: 0.5336 - val_acc: 0.8706
Epoch 994/1000
16s 33ms/step - loss: 0.2020 - acc: 0.9546 - val_loss: 0.5334 - val_acc: 0.8702
Epoch 995/1000
16s 33ms/step - loss: 0.2054 - acc: 0.9538 - val_loss: 0.5338 - val_acc: 0.8690
Epoch 996/1000
17s 33ms/step - loss: 0.1983 - acc: 0.9563 - val_loss: 0.5346 - val_acc: 0.8709
Epoch 997/1000
17s 33ms/step - loss: 0.1989 - acc: 0.9559 - val_loss: 0.5352 - val_acc: 0.8685
Epoch 998/1000
17s 33ms/step - loss: 0.2007 - acc: 0.9557 - val_loss: 0.5339 - val_acc: 0.8681
Epoch 999/1000
17s 33ms/step - loss: 0.2015 - acc: 0.9537 - val_loss: 0.5345 - val_acc: 0.8694
Epoch 1000/1000
16s 33ms/step - loss: 0.2002 - acc: 0.9548 - val_loss: 0.5354 - val_acc: 0.8686
Train loss: 0.16600286397337913
Train accuracy: 0.968400007367134
Test loss: 0.5354112640023232
Test accuracy: 0.8685999995470047

The training accuracy is nearly 10 percentage points higher than the test accuracy (0.9684 vs. 0.8686), so it seems even a small network can overfit.

So far, the highest test accuracy in this series is still the 93.23% from Tuning Record 6.

Minghang Zhao, Shisheng Zhong, Xuyun Fu, Baoping Tang, Shaojiang Dong, Michael Pecht, Deep Residual Networks with Adaptively Parametric Rectifier Linear Units for Fault Diagnosis, IEEE Transactions on Industrial Electronics, DOI: 10.1109/TIE.2020.2972458, Date of Publication: 13 February 2020

https://ieeexplore.ieee.org/document/8998530
