
Why is the accuracy of my MNIST classifier so low after min-max normalization?

Asked on 2023-05-05 16:36:35
Answers 0 · Follows 0 · Views 67

I wrote a simple piece of code that trains a logistic regression classifier on the MNIST digits, using min-max normalization, but the model's accuracy comes out very low. The code is as follows:


# Imports
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
import os
import pickle
from sklearn.preprocessing import StandardScaler
# Hyperparameter settings
numClasses = 10
inputSize = 784
batch_size = 64
learning_rate = 0.05

# Load the MNIST dataset (downloads it on first run)
mnist = input_data.read_data_sets('original_data/', one_hot=True)

train_img = mnist.train.images
train_label = mnist.train.labels
test_img = mnist.test.images
test_label = mnist.test.labels
# Min-max normalization: divide pixel values by 255
train_img /= 255.0
test_img /= 255.0

# # Standardize train_img with z-score (the alternative that works)
# scaler = StandardScaler().fit(train_img)
# train_img = scaler.transform(train_img)
# # Standardize test_img with the same scaler
# test_img = scaler.transform(test_img)

X = tf.compat.v1.placeholder(tf.float32, shape=[None, inputSize])
y = tf.compat.v1.placeholder(tf.float32, shape=[None, numClasses])
W = tf.Variable(tf.random_normal([inputSize, numClasses], stddev=0.1))
B = tf.Variable(tf.constant(0.1, shape=[numClasses]))  # one bias per class
# y_pred = tf.nn.softmax(tf.matmul(X, W) + B)  # softmax not needed here:
y_pred = tf.matmul(X, W) + B  # the loss below applies softmax to these logits

# Softmax cross-entropy on the logits plus an L2 penalty on the weights
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=y_pred)) + 0.01 * tf.nn.l2_loss(W)
opt = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_pred, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

saver = tf.train.Saver()
multiclass_parameters = {}
# Run the training session
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Training: the outer loop controls epochs, the inner loop runs batches
    for epoch in range(20):
        total_batch = int(len(train_img) / batch_size)  # batches per epoch = sample count / batch size

        for batch in range(total_batch):  # train on one batch at a time
            # Slice one batch of images and labels from the training set
            batch_input = train_img[batch * batch_size: (batch + 1) * batch_size]
            batch_label = train_label[batch * batch_size: (batch + 1) * batch_size]

            _, trainingLoss = sess.run([opt, loss], feed_dict={X: batch_input, y: batch_label})

        train_acc = sess.run(accuracy, feed_dict={X: train_img, y: train_label})
        print("Epoch %d Training Accuracy %g" % (epoch + 1, train_acc))

    test_acc = sess.run(accuracy, feed_dict={X: test_img, y: test_label})
    print("Test Accuracy %g" % test_acc)

Run output (screenshot in the original post): the printed training accuracies stay very low across all epochs.
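To make the symptom easier to pin down, here is a small diagnostic one could run right after loading and before any training. It is only a sketch that assumes the same loader and path as above; depending on the loader's default `dtype`, the images may already come back as floats in a narrow range, which printing the ranges will confirm:

# Hypothetical diagnostic: print value ranges before and after the division
mnist = input_data.read_data_sets('original_data/', one_hot=True)
raw = mnist.train.images
print(raw.min(), raw.max())        # range as returned by the loader
scaled = raw / 255.0               # the min-max step used in the code above
print(scaled.min(), scaled.max())  # range after the extra division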

But if I swap the min-max normalization for z-score standardization (the commented-out StandardScaler lines above), the model's accuracy goes above 0.9. I don't understand why this happens. Could anyone help me analyze it? Thanks!
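For reference, the commented-out StandardScaler branch amounts to a per-pixel z-score. A minimal NumPy sketch of the same transform is below; the names mean, std, train_z, and test_z are illustrative, and the zero-variance guard mirrors how StandardScaler treats constant features:

import numpy as np

# Per-feature (per-pixel) statistics, computed on the training set only
mean = train_img.mean(axis=0)
std = train_img.std(axis=0)
std[std == 0] = 1.0  # constant background pixels would otherwise divide by zero

# Apply the training-set statistics to both splits
train_z = (train_img - mean) / std
test_z = (test_img - mean) / std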
