I converted a PyTorch tensor of size torch.Size([3, 28, 28]) to a NumPy array of shape (28, 28, 3), and that step seems to work fine. I then tried to convert it to a PIL image with img = Image.fromarray(img.astype('uint8'), mode='RGB'), but the size of the returned img is (28, 28), whereas I expected (28, 28, 3) (or (3, 28, 28)). I don't understand why. I made sure to cast to uint8 and to use RGB mode, as other posts online suggest, but neither of these (nor np.ascontiguousarray) helped.
PIL version 1.1.7
# This code implements the __getitem__ function for a child class of datasets.MNIST in pytorch
# https://pytorch.org/docs/stable/_modules/torchvision/datasets/mnist.html#MNIST
img, label = self.data[index], self.targets[index]
assert img.shape == (3, 28, 28), \
    (f'[Before PIL] Incorrect image shape: expecting (3, 28, 28), '
     f'received {img.shape}')
print('Before reshape:', img.shape) # torch.Size([3, 28, 28])
img = img.numpy().reshape(3, 28, 28)
img = np.stack([img[0,:,:], img[1,:,:], img[2,:,:]], axis=2)
print('After reshape:', img.shape) # (28, 28, 3)
# doing this so that it is consistent with all other datasets
# to return a PIL Image
img = Image.fromarray(img.astype('uint8'), mode='RGB') # Returns 28 x 28 image
assert img.size == (3, 28, 28), \
    (f'[Before Transform] Incorrect image shape: expecting (3, 28, 28), '
     f'received {img.size}')
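For reference, the reshape-plus-stack pair above is just a CHW-to-HWC axis reordering; a minimal equivalent sketch using a single transpose (assuming, as above, a torch.Size([3, 28, 28]) tensor; the zero tensor here is only a stand-in for self.data[index]):
import numpy as np
import torch
img = torch.zeros(3, 28, 28)          # stand-in for self.data[index]
hwc = img.numpy().transpose(1, 2, 0)  # move channels from axis 0 to axis 2
assert hwc.shape == (28, 28, 3)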
Edit: here is a minimal example. I'm leaving the code above for context, in case it helps.
from PIL import Image
import numpy as np
img = np.random.randn(28, 28, 3)
img = Image.fromarray(img.astype('uint8'), mode='RGB') # Returns 28 x 28 image
assert img.size == (28, 28, 3), \
    (f'[Before Transform] Incorrect image shape: expecting (28, 28, 3), '
     f'received {img.size}')
AssertionError: [Before Transform] Incorrect image shape: expecting (28, 28, 3), received (28, 28)
Posted on 2019-05-28 02:29:17
I think you need something like this, where the RGB values are integers in the range 0..255:
import numpy as np
from PIL import Image
# Make a random 28x28 RGB image of uint8 values in 0..255
img = np.random.randint(0, 256, (28, 28, 3), dtype=np.uint8)
# Convert to PIL Image
pImg = Image.fromarray(img, mode='RGB')
Now check what we have:
In [19]: pImg
Out[19]: <PIL.Image.Image image mode=RGB size=28x28 at 0x120CE9CF8>
And save it:
pImg.save('result.png')
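As a side note, this is why the assertions in the question fail: a PIL Image's size attribute is always a (width, height) 2-tuple, the channel count is carried by the mode attribute, and all three axes reappear when you convert back to a NumPy array. A quick check, continuing from pImg above:
print(pImg.size)             # (28, 28)     width x height only
print(pImg.mode)             # 'RGB'        channels are reported here
print(np.array(pImg).shape)  # (28, 28, 3)  round-trip recovers the channel axis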
https://stackoverflow.com/questions/56330561