For learning purposes I am using TensorFlow.js, and I ran into an error when trying to use the fit method with a batched dataset (10 by 10) in order to learn how batch training works.
I have some 600x600x3 images that I want to classify into 2 outputs, either 1 or 0.
Here is my training loop:
const batches = await loadDataset()
for (let i = 0; i < batches.length; i++) {
  const batch = batches[i]
  const xs = batch.xs.reshape([batch.size, 600, 600, 3])
  const ys = tf.oneHot(batch.ys, 2)
  console.log({
    xs: xs.shape,
    ys: ys.shape,
  })
  // { xs: [ 10, 600, 600, 3 ], ys: [ 10, 2 ] }
  const history = await model.fit(
    xs, ys,
    {
      batchSize: batch.size,
      epochs: 1
    }) // <----- The code throws here
  const loss = history.history.loss[0]
  const accuracy = history.history.acc[0]
  console.log({ loss, accuracy })
}
Here is how my dataset is defined:
const chunks = chunk(examples, BATCH_SIZE)
const batches = chunks.map(
  batch => {
    const ys = tf.tensor1d(batch.map(e => e.y), 'int32')
    const xs = batch
      .map(e => imageToInput(e.x, 3))
      .reduce((p, c) => p ? p.concat(c) : c)
    return { size: batch.length, xs, ys }
  }
)
And here is the model:
const model = tf.sequential()
model.add(tf.layers.conv2d({
  inputShape: [600, 600, 3],
  kernelSize: 60,
  filters: 50,
  strides: 20,
  activation: 'relu',
  kernelInitializer: 'VarianceScaling'
}))
model.add(tf.layers.maxPooling2d({
  poolSize: [20, 20],
  strides: [20, 20]
}))
model.add(tf.layers.conv2d({
  kernelSize: 5,
  filters: 100,
  strides: 20,
  activation: 'relu',
  kernelInitializer: 'VarianceScaling'
}))
model.add(tf.layers.maxPooling2d({
  poolSize: [20, 20],
  strides: [20, 20]
}))
model.add(tf.layers.flatten())
model.add(tf.layers.dense({
  units: 2,
  kernelInitializer: 'VarianceScaling',
  activation: 'softmax'
}))
On the first iteration of the for-loop I get the following error from .fit:
Error: new shape and old shape must have the same number of elements.
at Object.assert (/Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/util.js:36:15)
at reshape_ (/Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/ops/array_ops.js:271:10)
at Object.reshape (/Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/ops/operation.js:23:29)
at Tensor.reshape (/Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/tensor.js:273:26)
at Object.derB [as $b] (/Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/ops/binary_ops.js:32:24)
at _loop_1 (/Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/tape.js:90:47)
at Object.backpropagateGradients (/Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/tape.js:108:9)
at /Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/engine.js:334:20
at /Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/engine.js:91:22
at Engine.scopedRun (/Users/person/nn/node_modules/@tensorflow/tfjs-core/dist/engine.js:101:23)
I don't know what to make of this, and I haven't found any documentation or help on this specific error. Any ideas?
Posted on 2018-10-10 22:16:53
The problem with the model is the way the convolutions and the maxPooling are applied together.
The first layer performs a convolution with kernelSize 60, strides of [20, 20] and 50 filters. The output of that layer will have the approximate shape [600 / 20, 600 / 20, 50] = [30, 30, 50].
The max pooling is then applied with strides of [20, 20]. The output of that layer will also have the approximate shape [30 / 20, 30 / 20, 50] = [1, 1, 50].
From that step on, the model can no longer perform a convolution with kernelSize 5, since the kernel shape [5, 5] is bigger than the input shape [1, 1], which is what causes the error that is thrown. The only convolution this model can still perform is one with a kernel of size 1; obviously, that convolution outputs its input without any transformation.
The same rule applies to the last maxPooling, whose poolSize cannot be anything other than 1, otherwise an error will be thrown.
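As a quick way to check those sizes (this helper is my own illustration, not part of the original answer), the spatial output size of a 'valid'-padded convolution or pooling layer is floor((inSize - kernelSize) / stride) + 1:
// Hypothetical helper, for illustration only: spatial output size of a
// 'valid'-padded conv2d or maxPooling2d layer.
const outSize = (inSize, kernel, stride) => Math.floor((inSize - kernel) / stride) + 1

console.log(outSize(600, 60, 20)) // 28 -> first conv2d, roughly the [30, 30, 50] above
console.log(outSize(28, 20, 20))  // 1  -> first maxPooling2d, i.e. [1, 1, 50], too small for a 5x5 kernel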
Here is a snippet:
const model = tf.sequential()
model.add(tf.layers.conv2d({
  inputShape: [600, 600, 3],
  kernelSize: 60,
  filters: 50,
  strides: 20,
  activation: 'relu',
  kernelInitializer: 'VarianceScaling'
}))
model.add(tf.layers.maxPooling2d({
  poolSize: [20, 20],
  strides: [20, 20]
}))
model.add(tf.layers.conv2d({
  kernelSize: 1,
  filters: 100,
  strides: 20,
  activation: 'relu',
  kernelInitializer: 'VarianceScaling'
}))
model.add(tf.layers.maxPooling2d({
  poolSize: 1,
  strides: [20, 20]
}))
model.add(tf.layers.flatten())
model.add(tf.layers.dense({
  units: 2,
  kernelInitializer: 'VarianceScaling',
  activation: 'softmax'
}))
model.compile({optimizer: 'sgd', loss: 'meanSquaredError'});
model.fit(tf.ones([10, 600, 600, 3]), tf.ones([10, 2]), {batchSize: 4});
model.predict(tf.ones([1, 600, 600, 3])).print()

<html>
<head>
<!-- Load TensorFlow.js -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.13.0"> </script>
</head>
<body>
</body>
</html>
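If the intent is to keep the 5x5 kernel in the second convolution, another option (this variant is my own sketch, not part of the original answer; the strides and pool sizes below are illustrative, not tuned) is to use smaller strides and pool sizes so the intermediate feature maps stay large enough:
const model = tf.sequential()
// (600 - 60) / 4 + 1 = 136 -> output shape [136, 136, 50]
model.add(tf.layers.conv2d({
  inputShape: [600, 600, 3],
  kernelSize: 60,
  filters: 50,
  strides: 4,
  activation: 'relu',
  kernelInitializer: 'VarianceScaling'
}))
// (136 - 2) / 2 + 1 = 68 -> output shape [68, 68, 50]
model.add(tf.layers.maxPooling2d({poolSize: [2, 2], strides: [2, 2]}))
// (68 - 5) / 2 + 1 = 32 -> output shape [32, 32, 100]; the 5x5 kernel now fits
model.add(tf.layers.conv2d({
  kernelSize: 5,
  filters: 100,
  strides: 2,
  activation: 'relu',
  kernelInitializer: 'VarianceScaling'
}))
// (32 - 2) / 2 + 1 = 16 -> output shape [16, 16, 100]
model.add(tf.layers.maxPooling2d({poolSize: [2, 2], strides: [2, 2]}))
model.add(tf.layers.flatten())
model.add(tf.layers.dense({
  units: 2,
  kernelInitializer: 'VarianceScaling',
  activation: 'softmax'
}))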
https://stackoverflow.com/questions/52729443