
Convolutional Neural Networks (Cats vs. Dogs Classification)

By 火星娃统计. Originally published 2020-06-21; posted to the Tencent Cloud community on 2020-09-15.

Overview

  • Data source: the Kaggle Dogs vs. Cats dataset
  • Download: https://www.kaggle.com/c/dogs-vs-cats/data (registration required; roughly 800 MB)
  • Goal: classify images as cats or dogs
  • Method: a convolutional neural network

Preparing the dataset

# Build a smaller dataset
import os, shutil
# Directory with the original (full) Kaggle data
original_dataset_dir = '/home/sunqi/python_study/train'
# Directory for the smaller dataset, to keep the computation manageable
base_dir = '/home/sunqi/python_study/cats_and_dogs_small'
# The os.mkdir calls are commented out because the directories already
# exist here; uncomment them on the first run
# os.mkdir(base_dir)
# Training directory
train_dir = os.path.join(base_dir, 'train')
# os.mkdir(train_dir)
# Validation directory
validation_dir = os.path.join(base_dir, 'validation')
# os.mkdir(validation_dir)
# Test directory
test_dir = os.path.join(base_dir, 'test')
# os.mkdir(test_dir)
# Training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
# os.mkdir(train_cats_dir)
# Training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
# os.mkdir(train_dogs_dir)
# Validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
# os.mkdir(validation_cats_dir)
# Validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
# os.mkdir(validation_dogs_dir)
# Test cat pictures
test_cats_dir = os.path.join(test_dir, 'cats')
# os.mkdir(test_cats_dir)
# Test dog pictures
test_dogs_dir = os.path.join(test_dir, 'dogs')
# os.mkdir(test_dogs_dir)
# Copy the images into the directories created above
# The first 1000 cat images are named cat.0.jpg through cat.999.jpg, so a loop over the index works
fnames = ['cat.{}.jpg'.format(i) for i in range(1000)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(train_cats_dir, fname)
    shutil.copyfile(src, dst)
fnames = ['cat.{}.jpg'.format(i) for i in range(1000, 1500)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(validation_cats_dir, fname)
    shutil.copyfile(src, dst)
fnames = ['cat.{}.jpg'.format(i) for i in range(1500, 2000)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(test_cats_dir, fname)
    shutil.copyfile(src, dst)
fnames = ['dog.{}.jpg'.format(i) for i in range(1000)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(train_dogs_dir, fname)
    shutil.copyfile(src, dst)
fnames = ['dog.{}.jpg'.format(i) for i in range(1000, 1500)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(validation_dogs_dir, fname)
    shutil.copyfile(src, dst)
fnames = ['dog.{}.jpg'.format(i) for i in range(1500, 2000)]
for fname in fnames:
    src = os.path.join(original_dataset_dir, fname)
    dst = os.path.join(test_dogs_dir, fname)
    shutil.copyfile(src, dst)
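
As a quick sanity check, we can count the files in each directory; every split should contain exactly the number of images we copied (a minimal sketch using only the directory variables defined above):

# Sanity check: each split should hold the expected number of images
print('total training cat images:', len(os.listdir(train_cats_dir)))         # expect 1000
print('total training dog images:', len(os.listdir(train_dogs_dir)))         # expect 1000
print('total validation cat images:', len(os.listdir(validation_cats_dir)))  # expect 500
print('total validation dog images:', len(os.listdir(validation_dogs_dir)))  # expect 500
print('total test cat images:', len(os.listdir(test_cats_dir)))              # expect 500
print('total test dog images:', len(os.listdir(test_dogs_dir)))              # expect 500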
# Build the model
# The structure is the same as in the earlier examples
from keras import layers
from keras import models
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
Using TensorFlow backend.
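
To see how the feature maps shrink through the stack of Conv2D and MaxPooling2D layers, print the architecture with the standard Keras call:

# Inspect output shapes and parameter counts layer by layer;
# the last pooling layer outputs 7x7x128 feature maps before Flatten
model.summary()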
from keras import optimizers
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])

Data preprocessing

  1. Read the image files
  2. Decode the JPEG content into RGB grids of pixels
  3. Convert these pixel grids into floating-point tensors
  4. Rescale the pixel values (in the 0-255 range) to the [0, 1] interval
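
For intuition, here is a minimal sketch of those four steps done by hand for a single image, using one of the training files copied above and only the standard keras.preprocessing.image helpers; the ImageDataGenerator below automates exactly this, batch after batch:

# Manual version of the four preprocessing steps for one image
from keras.preprocessing import image
img = image.load_img(os.path.join(train_cats_dir, 'cat.0.jpg'),
                     target_size=(150, 150))   # steps 1-2: read and decode to RGB
x = image.img_to_array(img)                    # step 3: float32 tensor of shape (150, 150, 3)
x = x / 255.0                                  # step 4: rescale 0-255 to 0-1
print(x.shape, x.min(), x.max())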
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale=1./255)  # rescale pixel values
test_datagen = ImageDataGenerator(rescale=1./255)
# flow_from_directory returns a Python generator that yields
# batches of (images, labels) from the directory, indefinitely
train_generator = train_datagen.flow_from_directory(
                train_dir,  # target directory
                target_size=(150, 150),  # resize to 150x150 to match the model's input tensor
                batch_size=20,
                class_mode='binary')  # binary labels, to match binary_crossentropy
validation_generator = test_datagen.flow_from_directory(
                validation_dir,
                target_size=(150, 150),
                batch_size=20,
                class_mode='binary')
Found 2000 images belonging to 2 classes.
Found 1000 images belonging to 2 classes.
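
Because the generator loops over the directory forever, you must break out of any loop that reads from it. A quick look at one batch (using the train_generator defined above) confirms the shapes:

# Inspect the shape of a single batch yielded by the generator
for data_batch, labels_batch in train_generator:
    print('data batch shape:', data_batch.shape)      # (20, 150, 150, 3)
    print('labels batch shape:', labels_batch.shape)  # (20,)
    break  # the generator never stops on its own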
# Fit the model using the batch generator
history = model.fit_generator(
    train_generator,  # the Python generator defined above
    steps_per_epoch=100,  # 100 batches of 20 images = 2000 images per epoch
    epochs=30,  # 30 epochs
    validation_data=validation_generator,
    validation_steps=50)  # 50 batches of 20 images = the full validation set

# Save the model
model.save('cats_and_dogs_small_1.h5')
Epoch 1/30
100/100 [==============================] - 13s 130ms/step - loss: 0.6921 - acc: 0.5240 - val_loss: 0.6474 - val_acc: 0.5540
Epoch 2/30
100/100 [==============================] - 12s 120ms/step - loss: 0.6591 - acc: 0.6095 - val_loss: 0.6260 - val_acc: 0.5480
Epoch 3/30
100/100 [==============================] - 12s 119ms/step - loss: 0.6157 - acc: 0.6675 - val_loss: 0.6492 - val_acc: 0.6670
Epoch 4/30
100/100 [==============================] - 12s 120ms/step - loss: 0.5689 - acc: 0.7135 - val_loss: 0.5767 - val_acc: 0.6150
Epoch 5/30
100/100 [==============================] - 12s 119ms/step - loss: 0.5439 - acc: 0.7225 - val_loss: 0.7602 - val_acc: 0.6490
Epoch 6/30
100/100 [==============================] - 12s 119ms/step - loss: 0.5148 - acc: 0.7300 - val_loss: 0.7002 - val_acc: 0.6870
Epoch 7/30
100/100 [==============================] - 12s 120ms/step - loss: 0.4917 - acc: 0.7730 - val_loss: 0.6359 - val_acc: 0.6550
Epoch 8/30
100/100 [==============================] - 12s 120ms/step - loss: 0.4717 - acc: 0.7750 - val_loss: 0.8081 - val_acc: 0.7080
Epoch 9/30
100/100 [==============================] - 12s 120ms/step - loss: 0.4472 - acc: 0.7875 - val_loss: 0.5322 - val_acc: 0.7040
Epoch 10/30
100/100 [==============================] - 12s 120ms/step - loss: 0.4158 - acc: 0.8170 - val_loss: 0.3662 - val_acc: 0.7460
Epoch 11/30
100/100 [==============================] - 12s 119ms/step - loss: 0.4017 - acc: 0.8155 - val_loss: 0.4341 - val_acc: 0.7200
Epoch 12/30
100/100 [==============================] - 12s 120ms/step - loss: 0.3803 - acc: 0.8270 - val_loss: 0.4769 - val_acc: 0.7250
Epoch 13/30
100/100 [==============================] - 12s 121ms/step - loss: 0.3574 - acc: 0.8425 - val_loss: 0.6853 - val_acc: 0.7240
Epoch 14/30
100/100 [==============================] - 12s 120ms/step - loss: 0.3360 - acc: 0.8565 - val_loss: 0.2616 - val_acc: 0.7040
Epoch 15/30
100/100 [==============================] - 12s 119ms/step - loss: 0.3012 - acc: 0.8735 - val_loss: 0.5142 - val_acc: 0.7460
Epoch 16/30
100/100 [==============================] - 12s 119ms/step - loss: 0.2834 - acc: 0.8880 - val_loss: 0.6345 - val_acc: 0.7280
Epoch 17/30
100/100 [==============================] - 12s 119ms/step - loss: 0.2663 - acc: 0.8955 - val_loss: 0.4337 - val_acc: 0.7220
Epoch 18/30
100/100 [==============================] - 12s 119ms/step - loss: 0.2482 - acc: 0.9065 - val_loss: 0.3528 - val_acc: 0.7420
Epoch 19/30
100/100 [==============================] - 12s 120ms/step - loss: 0.2251 - acc: 0.9120 - val_loss: 0.4797 - val_acc: 0.7270
Epoch 20/30
100/100 [==============================] - 12s 118ms/step - loss: 0.2171 - acc: 0.9125 - val_loss: 0.3961 - val_acc: 0.7180
Epoch 21/30
100/100 [==============================] - 12s 119ms/step - loss: 0.1905 - acc: 0.9365 - val_loss: 1.1006 - val_acc: 0.7300
Epoch 22/30
100/100 [==============================] - 12s 118ms/step - loss: 0.1747 - acc: 0.9400 - val_loss: 0.5780 - val_acc: 0.7140
Epoch 23/30
100/100 [==============================] - 12s 120ms/step - loss: 0.1476 - acc: 0.9495 - val_loss: 0.6581 - val_acc: 0.7220
Epoch 24/30
100/100 [==============================] - 12s 120ms/step - loss: 0.1385 - acc: 0.9540 - val_loss: 0.3878 - val_acc: 0.7340
Epoch 25/30
100/100 [==============================] - 12s 119ms/step - loss: 0.1168 - acc: 0.9625 - val_loss: 0.4306 - val_acc: 0.7410
Epoch 26/30
100/100 [==============================] - 12s 119ms/step - loss: 0.1064 - acc: 0.9675 - val_loss: 0.5825 - val_acc: 0.7260
Epoch 27/30
100/100 [==============================] - 12s 119ms/step - loss: 0.1011 - acc: 0.9720 - val_loss: 1.1769 - val_acc: 0.7090
Epoch 28/30
100/100 [==============================] - 12s 119ms/step - loss: 0.0864 - acc: 0.9765 - val_loss: 0.9043 - val_acc: 0.7370
Epoch 29/30
100/100 [==============================] - 12s 118ms/step - loss: 0.0761 - acc: 0.9790 - val_loss: 0.6539 - val_acc: 0.7320
Epoch 30/30
100/100 [==============================] - 12s 118ms/step - loss: 0.0597 - acc: 0.9855 - val_loss: 0.4191 - val_acc: 0.7160
# Plot the training curves
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()

The curves show training and validation diverging after roughly the fifth epoch: training accuracy keeps climbing toward 1 while validation accuracy stalls in the 0.70-0.75 range, a clear sign of overfitting.

Data augmentation

Data augmentation generates more training data from the existing samples by applying random transformations that yield believable-looking images, such as rotations and shifts. Because the model never sees exactly the same picture twice, augmentation helps prevent overfitting.

# Data augmentation configuration for the images
datagen = ImageDataGenerator(
    rotation_range=40,  # random rotation, up to 40 degrees
    width_shift_range=0.2,  # random horizontal shift, up to 20% of the width
    height_shift_range=0.2,  # random vertical shift, up to 20% of the height
    shear_range=0.2,  # random shearing transformation
    zoom_range=0.2,  # random zoom
    horizontal_flip=True,  # randomly flip half of the images horizontally
    fill_mode='nearest')  # how to fill in newly created pixels
# Display a few augmented images
from keras.preprocessing import image  # image-processing module
fnames = [os.path.join(train_cats_dir, fname) for
    fname in os.listdir(train_cats_dir)]
img_path = fnames[3]  # pick one image to look at
img = image.load_img(img_path, target_size=(150, 150))  # read the image and resize it
# Convert it to an array and reshape it into a batch of one
x = image.img_to_array(img)
x = x.reshape((1,) + x.shape)
# Generate randomly transformed versions
i = 0
for batch in datagen.flow(x, batch_size=1):
    plt.figure(i)
    imgplot = plt.imshow(image.array_to_img(batch[0]))
    i += 1
    if i % 4 == 0:  # stop after four augmented images; the generator loops forever
        break
plt.show()

The four generated images all show the same content as the original, but the angle, position, and orientation differ, so to the network they effectively look like new samples.

Reducing overfitting with Dropout

Data augmentation reduces part of the overfitting, but not enough to eliminate it, so we also add a Dropout layer before the densely connected classifier. During training, a Dropout layer randomly sets a fraction of its input activations to zero (50% here), so the network cannot rely too heavily on any single feature.
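
For intuition, here is a tiny numpy sketch of what a dropout mask does at training time; this is illustrative only, not the Keras internals, and the activation values are made up:

import numpy as np
rng = np.random.RandomState(0)
layer_output = np.array([0.2, 1.5, 0.7, 0.9, 1.1, 0.3])  # hypothetical activations
rate = 0.5                                    # fraction of units to drop, as in Dropout(0.5)
mask = rng.binomial(1, 1 - rate, size=layer_output.shape)  # 1 = keep, 0 = drop
dropped = layer_output * mask / (1 - rate)    # inverted dropout: rescale the kept units
print(dropped)  # roughly half the entries are zeroed, the rest scaled up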

# Train a new model that includes a Dropout layer
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))  # drop 50% of the flattened features during training
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])
# Train the network using data augmentation
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,)
test_datagen = ImageDataGenerator(rescale=1./255)  # validation and test data must not be augmented
train_generator = train_datagen.flow_from_directory(
        train_dir,
        target_size=(150, 150),
        batch_size=32,
        class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
        validation_dir,
        target_size=(150, 150),
        batch_size=32,
        class_mode='binary')
history = model.fit_generator(
        train_generator,
        steps_per_epoch=100,
        epochs=100,
        validation_data=validation_generator,
        validation_steps=50)
Found 2000 images belonging to 2 classes.
Found 1000 images belonging to 2 classes.
Epoch 1/100
100/100 [==============================] - 27s 271ms/step - loss: 0.6917 - acc: 0.5234 - val_loss: 0.6728 - val_acc: 0.5019
Epoch 2/100
100/100 [==============================] - 25s 252ms/step - loss: 0.6838 - acc: 0.5726 - val_loss: 0.7077 - val_acc: 0.6198
Epoch 3/100
100/100 [==============================] - 25s 253ms/step - loss: 0.6066 - acc: 0.6809 - val_loss: 0.6399 - val_acc: 0.6961
Epoch 8/100
100/100 [==============================] - 26s 255ms/step - loss: 0.5883 - acc: 0.6888 - val_loss: 0.7738 - val_acc: 0.6991
Epoch 9/100
100/100 [==============================] - 25s 254ms/step - loss: 0.5854 - acc: 0.6793 - val_loss: 0.5877 - val_acc: 0.6720
Epoch 10/100
100/100 [==============================] - 25s 253ms/step - loss: 0.5666 - acc: 0.7105 - val_loss: 0.5192 - val_acc: 0.7030
Epoch 11/100
100/100 [==============================] - 26s 263ms/step - loss: 0.5717 - acc: 0.6932 - val_loss: 0.6657 - val_acc: 0.6334
Epoch 12/100
100/100 [==============================] - 25s 251ms/step - loss: 0.5644 - acc: 0.7143 - val_loss: 0.4830 - val_acc: 0.7367
Epoch 13/100
100/100 [==============================] - 26s 255ms/step - loss: 0.5572 - acc: 0.7051 - val_loss: 0.5695 - val_acc: 0.7210
Epoch 14/100
100/100 [==============================] - 25s 251ms/step - loss: 0.5526 - acc: 0.7128 - val_loss: 0.4823 - val_acc: 0.7506
Epoch 15/100
100/100 [==============================] - 25s 253ms/step - loss: 0.5372 - acc: 0.7289 - val_loss: 0.6230 - val_acc: 0.6914
Epoch 16/100
100/100 [==============================] - 26s 258ms/step - loss: 0.5427 - acc: 0.7162 - val_loss: 0.5232 - val_acc: 0.7494
Epoch 17/100
100/100 [==============================] - 25s 250ms/step - loss: 0.5378 - acc: 0.7384 - val_loss: 0.5064 - val_acc: 0.7494
Epoch 18/100
100/100 [==============================] - 27s 270ms/step - loss: 0.5448 - acc: 0.7159 - val_loss: 0.4783 - val_acc: 0.7468
Epoch 19/100
100/100 [==============================] - 25s 254ms/step - loss: 0.5202 - acc: 0.7427 - val_loss: 0.6163 - val_acc: 0.7360
Epoch 20/100
100/100 [==============================] - 26s 257ms/step - loss: 0.5259 - acc: 0.7306 - val_loss: 0.4490 - val_acc: 0.7571
Epoch 21/100
100/100 [==============================] - 25s 254ms/step - loss: 0.5214 - acc: 0.7380 - val_loss: 0.5362 - val_acc: 0.7627
Epoch 22/100
100/100 [==============================] - 26s 258ms/step - loss: 0.5174 - acc: 0.7481 - val_loss: 0.4221 - val_acc: 0.7345
Epoch 23/100
100/100 [==============================] - 27s 268ms/step - loss: 0.5056 - acc: 0.7443 - val_loss: 0.6451 - val_acc: 0.7538
Epoch 24/100
100/100 [==============================] - 26s 258ms/step - loss: 0.5038 - acc: 0.7541 - val_loss: 0.6212 - val_acc: 0.7352
Epoch 25/100
100/100 [==============================] - 25s 252ms/step - loss: 0.5005 - acc: 0.7491 - val_loss: 0.5299 - val_acc: 0.7442
Epoch 26/100
100/100 [==============================] - 25s 254ms/step - loss: 0.5027 - acc: 0.7591 - val_loss: 0.6129 - val_acc: 0.7278
Epoch 27/100
100/100 [==============================] - 25s 255ms/step - loss: 0.4866 - acc: 0.7626 - val_loss: 0.4164 - val_acc: 0.7861
Epoch 28/100
100/100 [==============================] - 26s 264ms/step - loss: 0.4867 - acc: 0.7563 - val_loss: 0.5636 - val_acc: 0.7456
Epoch 29/100
100/100 [==============================] - 26s 258ms/step - loss: 0.4985 - acc: 0.7535 - val_loss: 0.4185 - val_acc: 0.7693
Epoch 30/100
100/100 [==============================] - 25s 251ms/step - loss: 0.4884 - acc: 0.7582 - val_loss: 0.5540 - val_acc: 0.7253
Epoch 31/100
100/100 [==============================] - 26s 256ms/step - loss: 0.4805 - acc: 0.7708 - val_loss: 0.3387 - val_acc: 0.7700
Epoch 32/100
100/100 [==============================] - 25s 251ms/step - loss: 0.4758 - acc: 0.7638 - val_loss: 0.3540 - val_acc: 0.7751
Epoch 33/100
100/100 [==============================] - 26s 262ms/step - loss: 0.4763 - acc: 0.7751 - val_loss: 0.4014 - val_acc: 0.7538
Epoch 34/100
100/100 [==============================] - 25s 252ms/step - loss: 0.4736 - acc: 0.7758 - val_loss: 0.3543 - val_acc: 0.7622
Epoch 35/100
100/100 [==============================] - 27s 269ms/step - loss: 0.4729 - acc: 0.7758 - val_loss: 0.3109 - val_acc: 0.7963
Epoch 36/100
100/100 [==============================] - 25s 254ms/step - loss: 0.4740 - acc: 0.7713 - val_loss: 0.6062 - val_acc: 0.7790
Epoch 37/100
100/100 [==============================] - 25s 252ms/step - loss: 0.4802 - acc: 0.7666 - val_loss: 0.4107 - val_acc: 0.7976
Epoch 38/100
100/100 [==============================] - 26s 259ms/step - loss: 0.4459 - acc: 0.7847 - val_loss: 0.4499 - val_acc: 0.7938
Epoch 39/100
100/100 [==============================] - 25s 252ms/step - loss: 0.4638 - acc: 0.7795 - val_loss: 0.3357 - val_acc: 0.7881
Epoch 40/100
100/100 [==============================] - 27s 273ms/step - loss: 0.4673 - acc: 0.7732 - val_loss: 0.3718 - val_acc: 0.7764
Epoch 41/100
100/100 [==============================] - 25s 253ms/step - loss: 0.4564 - acc: 0.7759 - val_loss: 0.4136 - val_acc: 0.7932
Epoch 42/100
100/100 [==============================] - 25s 252ms/step - loss: 0.4441 - acc: 0.7921 - val_loss: 0.5463 - val_acc: 0.7944
Epoch 43/100
100/100 [==============================] - 25s 254ms/step - loss: 0.4525 - acc: 0.7828 - val_loss: 0.7509 - val_acc: 0.7358
Epoch 44/100
100/100 [==============================] - 25s 253ms/step - loss: 0.4503 - acc: 0.7787 - val_loss: 0.2707 - val_acc: 0.8027
Epoch 45/100
100/100 [==============================] - 27s 266ms/step - loss: 0.4425 - acc: 0.7990 - val_loss: 0.4280 - val_acc: 0.7816
Epoch 46/100
100/100 [==============================] - 25s 250ms/step - loss: 0.4389 - acc: 0.7958 - val_loss: 0.2661 - val_acc: 0.7931
Epoch 47/100
100/100 [==============================] - 26s 255ms/step - loss: 0.4325 - acc: 0.7965 - val_loss: 0.4029 - val_acc: 0.8241
Epoch 48/100
100/100 [==============================] - 26s 255ms/step - loss: 0.4368 - acc: 0.7910 - val_loss: 0.3177 - val_acc: 0.7906
Epoch 49/100
100/100 [==============================] - 25s 252ms/step - loss: 0.4303 - acc: 0.7999 - val_loss: 0.5532 - val_acc: 0.8014
Epoch 50/100
100/100 [==============================] - 26s 262ms/step - loss: 0.4190 - acc: 0.8056 - val_loss: 0.2984 - val_acc: 0.7970
Epoch 51/100
100/100 [==============================] - 25s 250ms/step - loss: 0.4372 - acc: 0.7958 - val_loss: 0.4816 - val_acc: 0.7792
Epoch 52/100
100/100 [==============================] - 27s 271ms/step - loss: 0.4146 - acc: 0.8084 - val_loss: 0.7304 - val_acc: 0.8112
Epoch 53/100
100/100 [==============================] - 25s 254ms/step - loss: 0.4406 - acc: 0.7932 - val_loss: 0.5950 - val_acc: 0.8141
Epoch 54/100
100/100 [==============================] - 26s 257ms/step - loss: 0.4172 - acc: 0.8071 - val_loss: 0.4298 - val_acc: 0.7790
Epoch 55/100
100/100 [==============================] - 25s 252ms/step - loss: 0.4205 - acc: 0.8097 - val_loss: 0.3395 - val_acc: 0.8173
Epoch 56/100
100/100 [==============================] - 26s 255ms/step - loss: 0.4145 - acc: 0.8090 - val_loss: 0.5249 - val_acc: 0.7680
Epoch 57/100
100/100 [==============================] - 26s 262ms/step - loss: 0.3981 - acc: 0.8185 - val_loss: 0.3922 - val_acc: 0.7938
Epoch 58/100
100/100 [==============================] - 25s 248ms/step - loss: 0.4123 - acc: 0.8084 - val_loss: 0.3180 - val_acc: 0.8033
Epoch 59/100
100/100 [==============================] - 25s 254ms/step - loss: 0.4123 - acc: 0.8049 - val_loss: 0.3938 - val_acc: 0.8144
Epoch 60/100
100/100 [==============================] - 25s 254ms/step - loss: 0.3896 - acc: 0.8200 - val_loss: 0.4451 - val_acc: 0.8115
Epoch 61/100
100/100 [==============================] - 26s 256ms/step - loss: 0.3918 - acc: 0.8247 - val_loss: 0.4170 - val_acc: 0.8196
Epoch 62/100
100/100 [==============================] - 26s 260ms/step - loss: 0.4121 - acc: 0.8188 - val_loss: 0.4389 - val_acc: 0.8103
Epoch 63/100
100/100 [==============================] - 25s 255ms/step - loss: 0.4033 - acc: 0.8163 - val_loss: 0.5156 - val_acc: 0.7977
Epoch 64/100
100/100 [==============================] - 25s 252ms/step - loss: 0.3946 - acc: 0.8163 - val_loss: 0.2393 - val_acc: 0.8099
Epoch 65/100
100/100 [==============================] - 25s 249ms/step - loss: 0.3801 - acc: 0.8283 - val_loss: 0.3290 - val_acc: 0.8223
Epoch 66/100
100/100 [==============================] - 26s 256ms/step - loss: 0.3953 - acc: 0.8150 - val_loss: 0.4010 - val_acc: 0.8235
Epoch 67/100
100/100 [==============================] - 25s 253ms/step - loss: 0.3879 - acc: 0.8274 - val_loss: 0.7320 - val_acc: 0.7392
Epoch 68/100
100/100 [==============================] - 25s 252ms/step - loss: 0.3910 - acc: 0.8116 - val_loss: 0.2840 - val_acc: 0.8035
Epoch 69/100
100/100 [==============================] - 26s 263ms/step - loss: 0.3904 - acc: 0.8241 - val_loss: 0.2904 - val_acc: 0.8147
Epoch 70/100
100/100 [==============================] - 26s 255ms/step - loss: 0.3844 - acc: 0.8239 - val_loss: 0.2826 - val_acc: 0.8009
Epoch 71/100
100/100 [==============================] - 25s 251ms/step - loss: 0.3807 - acc: 0.8307 - val_loss: 0.4430 - val_acc: 0.7735
Epoch 72/100
100/100 [==============================] - 26s 260ms/step - loss: 0.3808 - acc: 0.8198 - val_loss: 0.2435 - val_acc: 0.8164
Epoch 73/100
100/100 [==============================] - 25s 252ms/step - loss: 0.3695 - acc: 0.8355 - val_loss: 0.2928 - val_acc: 0.8396
Epoch 74/100
100/100 [==============================] - 26s 262ms/step - loss: 0.3845 - acc: 0.8266 - val_loss: 0.3940 - val_acc: 0.8287
Epoch 75/100
100/100 [==============================] - 25s 249ms/step - loss: 0.3816 - acc: 0.8204 - val_loss: 0.3728 - val_acc: 0.7345
Epoch 76/100
100/100 [==============================] - 25s 247ms/step - loss: 0.3707 - acc: 0.8365 - val_loss: 0.5501 - val_acc: 0.7900
Epoch 77/100
100/100 [==============================] - 26s 257ms/step - loss: 0.3659 - acc: 0.8370 - val_loss: 0.7177 - val_acc: 0.8041
Epoch 78/100
100/100 [==============================] - 25s 253ms/step - loss: 0.3660 - acc: 0.8364 - val_loss: 0.3636 - val_acc: 0.8211
Epoch 79/100
100/100 [==============================] - 26s 263ms/step - loss: 0.3589 - acc: 0.8441 - val_loss: 0.7369 - val_acc: 0.8196
Epoch 80/100
100/100 [==============================] - 25s 249ms/step - loss: 0.3549 - acc: 0.8403 - val_loss: 0.4898 - val_acc: 0.8376
Epoch 81/100
100/100 [==============================] - 27s 268ms/step - loss: 0.3559 - acc: 0.8362 - val_loss: 0.5360 - val_acc: 0.8223
Epoch 82/100
100/100 [==============================] - 25s 252ms/step - loss: 0.3660 - acc: 0.8342 - val_loss: 0.2845 - val_acc: 0.8003
Epoch 83/100
100/100 [==============================] - 25s 253ms/step - loss: 0.3560 - acc: 0.8381 - val_loss: 0.4417 - val_acc: 0.8319
Epoch 84/100
100/100 [==============================] - 26s 260ms/step - loss: 0.3612 - acc: 0.8492 - val_loss: 0.5339 - val_acc: 0.7642
Epoch 85/100
100/100 [==============================] - 25s 253ms/step - loss: 0.3589 - acc: 0.8295 - val_loss: 0.4682 - val_acc: 0.8096
Epoch 86/100
100/100 [==============================] - 27s 270ms/step - loss: 0.3407 - acc: 0.8458 - val_loss: 0.2344 - val_acc: 0.8318
Epoch 87/100
100/100 [==============================] - 25s 252ms/step - loss: 0.3416 - acc: 0.8520 - val_loss: 0.5362 - val_acc: 0.7868
Epoch 88/100
100/100 [==============================] - 26s 257ms/step - loss: 0.3525 - acc: 0.8445 - val_loss: 0.2842 - val_acc: 0.8305
Epoch 89/100
100/100 [==============================] - 25s 252ms/step - loss: 0.3437 - acc: 0.8523 - val_loss: 0.3767 - val_acc: 0.8454
Epoch 90/100
100/100 [==============================] - 25s 249ms/step - loss: 0.3410 - acc: 0.8551 - val_loss: 0.4359 - val_acc: 0.8376
Epoch 91/100
100/100 [==============================] - 27s 265ms/step - loss: 0.3308 - acc: 0.8614 - val_loss: 0.4693 - val_acc: 0.8222
Epoch 92/100
100/100 [==============================] - 26s 256ms/step - loss: 0.3391 - acc: 0.8499 - val_loss: 0.2185 - val_acc: 0.8528
Epoch 93/100
100/100 [==============================] - 25s 252ms/step - loss: 0.3470 - acc: 0.8463 - val_loss: 0.4285 - val_acc: 0.7957
Epoch 94/100
100/100 [==============================] - 25s 252ms/step - loss: 0.3271 - acc: 0.8565 - val_loss: 0.7049 - val_acc: 0.8357
Epoch 95/100
100/100 [==============================] - 26s 258ms/step - loss: 0.3227 - acc: 0.8590 - val_loss: 0.6066 - val_acc: 0.7951
Epoch 96/100
100/100 [==============================] - 26s 260ms/step - loss: 0.3323 - acc: 0.8561 - val_loss: 0.6961 - val_acc: 0.8048
Epoch 97/100
100/100 [==============================] - 25s 251ms/step - loss: 0.3219 - acc: 0.8621 - val_loss: 0.5641 - val_acc: 0.8376
Epoch 98/100
100/100 [==============================] - 26s 265ms/step - loss: 0.3180 - acc: 0.8653 - val_loss: 0.5996 - val_acc: 0.8228
Epoch 99/100
100/100 [==============================] - 25s 251ms/step - loss: 0.3349 - acc: 0.8580 - val_loss: 0.7057 - val_acc: 0.8115
Epoch 100/100
100/100 [==============================] - 25s 251ms/step - loss: 0.3390 - acc: 0.8474 - val_loss: 0.4669 - val_acc: 0.8286

Training for 100 epochs took quite a while, but the model's accuracy keeps improving over the whole run.

model.save('cats_and_dogs_small_2.h5')  # save the trained model
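
The saved HDF5 file can be reloaded later without redefining the architecture. A minimal sketch of loading it and classifying a single image (the file path and test image come from earlier in this post; everything else is standard Keras API, and note that flow_from_directory assigned labels alphabetically, cats=0 and dogs=1):

# Reload the saved model and classify one image
from keras.models import load_model
from keras.preprocessing import image
import numpy as np
restored = load_model('cats_and_dogs_small_2.h5')
img = image.load_img(os.path.join(test_cats_dir, 'cat.1500.jpg'),
                     target_size=(150, 150))
x = image.img_to_array(img) / 255.0  # same preprocessing as during training
x = np.expand_dims(x, axis=0)        # a batch of one
prob = restored.predict(x)[0][0]     # sigmoid output: probability of "dog"
print('dog' if prob > 0.5 else 'cat', prob)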
# Plot the training and validation curves again
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()

In the accuracy plot, training and validation accuracy now rise together, ending at acc: 0.8474 and val_acc: 0.8286. Compared with the unregularized model (acc: 0.9855, val_acc: 0.7160), training accuracy is lower, because augmentation and Dropout make the training task harder, but validation accuracy is markedly higher and the two curves stay close, so the overfitting is largely under control. Since a test split was set aside at the start, it is natural to finish with an evaluation on it, as sketched below.
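A short sketch of that final test-set evaluation (test_dir and test_datagen were defined earlier; evaluate_generator is the Keras 2 counterpart of the fit_generator call above, and batch_size=20 with steps=50 covers all 1000 test images exactly):

# Evaluate the final model on the held-out test set
test_generator = test_datagen.flow_from_directory(
        test_dir,
        target_size=(150, 150),
        batch_size=20,
        class_mode='binary')
test_loss, test_acc = model.evaluate_generator(test_generator, steps=50)
print('test acc:', test_acc)
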

Closing remarks

As the material gets more advanced, the amount of computation keeps growing. It sure would be nice to have a GPU.

love&peace
