
val_accuracy, val_loss problem: loss: 0.0000e+00 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00

Stack Overflow user
Asked on 2022-11-26 06:00:03
1 answer · 41 views · 0 followers · 0 votes

First of all, I am using 100 classes with 150 videos per class, and I split them 80% for training and 20% for validation.
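For context, a minimal sketch of such an 80/20 split (placeholder data; the variable names match the question's code below, everything else is illustrative):

Code language: python

import numpy as np

# Placeholder paths and labels: 100 classes x 150 videos per class.
allFilePaths = np.array([f"video_{i:05d}.npy" for i in range(100 * 150)])
allLabels = np.array([i // 150 for i in range(100 * 150)])

indices = np.random.permutation(len(allFilePaths))
split = int(0.8 * len(allFilePaths))  # 12000 training / 3000 validation samples

TrainFilePath, TrainLabelList = allFilePaths[indices[:split]], allLabels[indices[:split]]
ValiFilePath, VailLabelPath = allFilePaths[indices[split:]], allLabels[indices[split:]]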

Below is my code.

Code language: python
import numpy as np
import tensorflow as tf

def generator(filePath, labelList):
  # Shuffle the (file path, label) pairs together so they stay aligned.
  tmp = [[x, y] for x, y in zip(filePath, labelList)]
  np.random.shuffle(tmp)

  Files = [n[0] for n in tmp]
  Labels = [n[1] for n in tmp]

  for File, Label in zip(Files, Labels):
    File = np.load(File)
    # Earlier attempts with tf.squeeze(File, 1) and AveragePooling1D removed.
    x = tf.squeeze(File)

    transformed_label = encoder.transform([Label])  # encoder is defined elsewhere
    yield x, transformed_label[0]

# Build the dataset at module level, outside the generator function.
train_dataset = tf.data.Dataset.from_generator(generator, args=(TrainFilePath, TrainLabelList), output_types=(tf.float64, tf.int16), output_shapes=((20, 2048), len(EncoderOnlyList)))

train_dataset = train_dataset.batch(8).prefetch(tf.data.experimental.AUTOTUNE)
#train_dataset = train_dataset.batch(16)

valid_dataset = tf.data.Dataset.from_generator( generator, args = (ValiFilePath, VailLabelPath), output_types=(tf.float64, tf.int16), output_shapes=((20, 2048),len(EncoderOnlyList)))

valid_dataset = valid_dataset.batch(8).prefetch(tf.data.experimental.AUTOTUNE)
#valid_dataset = valid_dataset.batch(16)

with tf.device(device_name):
  model = tf.keras.Sequential()
  model.add(tf.keras.layers.Input(shape=(20, 2048)))
  model.add(tf.keras.layers.Masking(mask_value=0.))
  model.add(tf.keras.layers.LSTM(256))
  model.add(tf.keras.layers.Dropout(0.5))
  model.add(tf.keras.layers.Dense(128, activation='relu'))
  model.add(tf.keras.layers.Dropout(0.5))
  model.add(tf.keras.layers.Dense(100, activation='softmax'))
  model.compile(optimizer=rmsprop,  # rmsprop optimizer instance defined elsewhere
              loss='categorical_crossentropy',
              metrics=['accuracy'])

  model.fit(train_dataset, epochs=20, validation_data=valid_dataset)

model.save_weights('/content/drive/MyDrive/Resnet50BaseWeight_3.h5', overwrite=True)
model.save("/content/drive/MyDrive/Resnet50Base_3.h5")

The result looks like this:

Epoch 1/20
1500/1500 [==============================] - 97s 61ms/step - loss: 0.0000e+00 - accuracy: 0.0012 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 2/20
1500/1500 [==============================] - 102s 68ms/step - loss: 0.0000e+00 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 3/20
1500/1500 [==============================] - 91s 60ms/step - loss: 0.0000e+00 - accuracy: 0.0103 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 4/20
1500/1500 [==============================] - 95s 63ms/step - loss: 0.0000e+00 - accuracy: 0.0113 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 5/20
1500/1500 [==============================] - 93s 62ms/step - loss: 0.0000e+00 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 6/20
1500/1500 [==============================] - 92s 61ms/step - loss: 0.0000e+00 - accuracy: 0.0098 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00

Even as the epochs increase, the accuracy no longer improves.

Most of the values come out as 0.0000e+00 like this.

I don't know what is going wrong.

Please help.
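A quick way to see why the loss prints exactly 0.0000e+00: categorical cross-entropy against an all-zero target vector is exactly zero, so if the label encoding or the declared output_shapes mangles the targets, both the loss and val_accuracy collapse to 0. A minimal check (a sketch, assuming the train_dataset built above):

Code language: python

import tensorflow as tf

# Inspect one batch: with categorical_crossentropy every label row
# must be one-hot, i.e. sum to exactly 1.
for x, y in train_dataset.take(1):
    print("features:", x.shape, x.dtype)  # expect (8, 20, 2048)
    print("labels:  ", y.shape, y.dtype)  # expect (8, num_classes)
    print("label row sums:", tf.reduce_sum(tf.cast(y, tf.float32), axis=-1).numpy())
    # Row sums of 0.0 mean all-zero targets, which force the loss to 0.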


1 Answer

Stack Overflow user

Answered on 2022-11-26 06:52:17

The logits shape and the class labels are mismatched: with 100 target classes, the model's output and the label encoding must have matching shapes, e.g. a 100-wide output against a (100,)-shaped one-hot label or an equivalent integer form.

Sample: the loss function and the metrics are computed by step-by-step estimation and evaluation, comparing the model's output against labels of the expected type, so the two must match. RMSprop works with anything that is otherwise correct, but the targets must be in the right range for the estimator.
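Concretely, the two consistent pairings look like this (a minimal sketch with random data, not the model from the question; shapes and layer sizes are illustrative):

Code language: python

import numpy as np
import tensorflow as tf

num_classes = 100
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20, 2048)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(num_classes),  # no softmax: raw logits
])

x = np.random.rand(8, 20, 2048).astype("float32")
int_labels = np.random.randint(0, num_classes, size=(8,))  # integer class ids
onehot_labels = tf.one_hot(int_labels, num_classes)        # one-hot vectors

# Pairing 1: integer labels with SparseCategoricalCrossentropy.
model.compile(optimizer="rmsprop",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(x, int_labels, epochs=1, verbose=0)

# Pairing 2: one-hot labels with CategoricalCrossentropy.
model.compile(optimizer="rmsprop",
              loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(x, onehot_labels, epochs=1, verbose=0)

The answerer's full CIFAR-100 example follows.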

Code language: python
import tensorflow as tf

import os
from os.path import exists

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
None
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
physical_devices = tf.config.experimental.list_physical_devices('GPU')
assert len(physical_devices) > 0, "Not enough GPU hardware devices available"
config = tf.config.experimental.set_memory_growth(physical_devices[0], True)
print(physical_devices)
print(config)

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Variables
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
class_100_names = ['apple', 'aquarium_fish', 'baby', 'bear', 'beaver', 
'bed', 'bee', 'beetle', 'bicycle', 'bottle', 
'bowl', 'boy', 'bridge', 'bus', 'butterfly', 
'camel', 'can', 'castle', 'caterpillar', 
'cattle', 'chair', 'chimpanzee', 'clock', 'cloud', 
'cockroach', 'couch', 'crab', 'crocodile', 'cup', 
'dinosaur', 'dolphin', 'elephant', 'flatfish', 'forest', 
'fox', 'girl', 'hamster', 'house', 'kangaroo', 
'keyboard', 'lamp', 'lawn_mower', 'leopard', 'lion', 
'lizard', 'lobster', 'man', 'maple_tree', 'motorcycle', 
'mountain', 'mouse', 'mushroom', 'oak_tree', 'orange', 
'orchid', 'otter', 'palm_tree', 'pear', 'pickup_truck', 
'pine_tree', 'plain', 'plate', 'poppy', 'porcupine', 
'possum', 'rabbit', 'raccoon', 'ray', 'road', 
'rocket', 'rose', 'sea', 'seal', 'shark', 
'shrew', 'skunk', 'skyscraper', 'snail', 'snake', 
'spider', 'squirrel', 'streetcar', 'sunflower', 'sweet_pepper', 
'table', 'tank', 'telephone', 'television', 'tiger', 
'tractor', 'train', 'trout', 'tulip', 'turtle', 
'wardrobe', 'whale', 'willow_tree', 'wolf', 'woman', 'worm'] 

checkpoint_path = "F:\\models\\checkpoint\\" + os.path.basename(__file__).split('.')[0] + "\\TF_DataSets_01.h5"
checkpoint_dir = os.path.dirname(checkpoint_path)

if not exists(checkpoint_dir) : 
    os.mkdir(checkpoint_dir)
    print("Create directory: " + checkpoint_dir)

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Dataset
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.cifar100.load_data(label_mode='fine')

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Model Initialize
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
model = tf.keras.models.Sequential([
    tf.keras.layers.InputLayer(input_shape=( 32, 32, 3 )),
    tf.keras.layers.Normalization(mean=3., variance=2.),
    tf.keras.layers.Normalization(mean=4., variance=6.),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Dense(128, activation='relu'),
    # (15, 15, 128) = 28800 elements, reshaped into a 128-step sequence
    tf.keras.layers.Reshape((128, 225)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(96, return_sequences=True, return_state=False)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(96)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(192, activation='relu'),
    tf.keras.layers.Dense(100),  # raw logits, no softmax
])

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Callback
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
class custom_callback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if( logs['accuracy'] >= 0.95 ):
            self.model.stop_training = True
    
custom_callback = custom_callback()

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Optimizer
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
optimizer = tf.keras.optimizers.RMSprop(
    learning_rate=0.001,
    rho=0.9,
    momentum=0.0,
    epsilon=1e-07,
    centered=False,
    # decay=None,           # {'lr', 'global_clipnorm', 'clipnorm', 'decay', 'clipvalue'}
    # clipnorm=None,
    # clipvalue=None,
    # global_clipnorm=None,
    # use_ema=False,
    # ema_momentum=0.99,
    # ema_overwrite_frequency=100,
    # jit_compile=True,
    name='RMSprop',
)

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Loss Fn
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""                               
lossfn = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True,  # the final Dense(100) layer has no softmax, so it outputs logits
    reduction=tf.keras.losses.Reduction.AUTO,
    name='sparse_categorical_crossentropy'
)

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Model Summary
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
# model.compile(optimizer=optimizer, loss=lossfn, metrics=['accuracy'])
model.compile(optimizer=optimizer,
    loss=lossfn,
    metrics=['accuracy'])
    
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: FileWriter
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
if exists(checkpoint_path) :
    model.load_weights(checkpoint_path)
    print("model load: " + checkpoint_path)
    input("Press Any Key!")

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Training
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
history = model.fit( train_images, train_labels, batch_size=100, epochs=10000, callbacks=[custom_callback] )
model.save_weights(checkpoint_path)

Output: classifying 100 classes is not the problem in itself; once the label encoding and the loss function types match, training runs and the loss decreases steadily:

2022-11-26 12:02:17.507553: I tensorflow/stream_executor/cuda/cuda_dnn.cc:368] Loaded cuDNN version 8100
500/500 [==============================] - 34s 54ms/step - loss: 10.1518 - accuracy: 0.0104
Epoch 2/10000
500/500 [==============================] - 27s 53ms/step - loss: 9.5093 - accuracy: 0.0122
Epoch 3/10000
500/500 [==============================] - 26s 53ms/step - loss: 9.2861 - accuracy: 0.0127
Epoch 4/10000
462/500 [==========================>...] - ETA: 2s - loss: 9.1570 - accuracy: 0.0126

Output: for image classification the number of classes does not matter much; what matters is that the inputs and labels are of the same, matching types, as the CIFAR-100 training above shows.
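As an illustration (a sketch of my own, not taken from the original answer): switching the same pipeline between CIFAR-10 and CIFAR-100 only changes the width of the output layer, because the sparse integer labels already match the loss:

Code language: python

import tensorflow as tf

def build_classifier(num_classes):
    # Same architecture for any class count; only the head width changes.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(32, 32, 3)),
        tf.keras.layers.Rescaling(1.0 / 255),
        tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(num_classes),  # raw logits
    ])

for loader, classes in [(tf.keras.datasets.cifar10, 10),
                        (tf.keras.datasets.cifar100, 100)]:
    (x, y), _ = loader.load_data()
    model = build_classifier(classes)
    model.compile(optimizer="rmsprop",
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=["accuracy"])
    model.fit(x[:1000], y[:1000], epochs=1, verbose=0)  # tiny smoke run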

0 votes
The original page content is provided by Stack Overflow; translation support was provided by Tencent Cloud's translation engine.
Original link:

https://stackoverflow.com/questions/74580072
