I am using the following code to compress a video and extract its frames. Note that I don't want to save the resulting video.
import ffmpeg
import numpy as np

output_args = {
    "vcodec": "libx265",
    "crf": 24,
}

out, err = (
    ffmpeg
    .input(in_filename)
    .output('pipe:', format='rawvideo', pix_fmt='rgb24', **output_args)
    .run(capture_stdout=True)
)

frames = np.frombuffer(out, np.uint8).reshape(-1, width, height, 3)
When I try to reshape the output buffer to the dimensions of the original video, I get the following error: cannot reshape array of size 436567 into shape (1920,1080,3)
This is expected, since the resulting video is smaller. Is there a way to compute the number of frames, width, and height of the compressed video, so that I can reshape the frames in the buffer?
Also, if I save the compressed video instead of loading its frames, and then load the frames from that compressed video, those frames have the same dimensions as the original video. I suspect some kind of interpolation is happening under the hood. Is there a way to apply it without saving the video?
Posted on 2020-01-30 06:43:45
I found a solution using ffmpeg-python.

Assume out holds the entire H.265 encoded stream in a memory buffer. The solution applies the following stages:

Execute FFmpeg in a subprocess, with stdin as the input pipe and stdout as the output pipe. The input is going to be the video stream (the memory buffer), and the output is raw video frames in BGR pixel format.
Write the stream content to the pipe (to stdin).
Read the decoded video frame by frame, and display each frame (using cv2.imshow).

To test the solution, I created a sample video file and read it into a memory buffer (encoded as H.265). I used that memory buffer as the input to the code above (your out buffer).

Here is the complete code, including the testing code:
import ffmpeg
import numpy as np
import cv2
import io
in_filename = 'in.mp4'
# Build synthetic video for testing:
###############################################
# ffmpeg -y -r 10 -f lavfi -i testsrc=size=192x108:rate=1 -c:v libx265 -crf 24 -t 5 in.mp4
width, height = 192, 108
(
    ffmpeg
    .input('testsrc=size={}x{}:rate=1'.format(width, height), r=10, f='lavfi')
    .output(in_filename, vcodec='libx265', crf=24, t=5)
    .overwrite_output()
    .run()
)
###############################################
# Use ffprobe to get video frames resolution
###############################################
p = ffmpeg.probe(in_filename, select_streams='v')
width = p['streams'][0]['width']
height = p['streams'][0]['height']
n_frames = int(p['streams'][0]['nb_frames'])
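# A raw BGR frame occupies height * width * 3 bytes, so the fully decoded
# video takes n_frames * height * width * 3 bytes - the size the reshape
# in the question needs to match.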
###############################################
# Stream the entire video as one large array of bytes
###############################################
# https://github.com/kkroening/ffmpeg-python/blob/master/examples/README.md
in_bytes, _ = (
    ffmpeg
    .input(in_filename)
    .video  # Video only (no audio).
    .output('pipe:', format='hevc', crf=24)
    .run(capture_stdout=True)  # Run synchronously (blocks until done), capturing stdout
)
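# in_bytes now holds the complete HEVC elementary stream in memory -
# the same kind of buffer as out in the question.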
###############################################
# Open an in-memory binary stream
stream = io.BytesIO(in_bytes)
# Execute FFmpeg in a subprocess with stdin as input pipe and stdout as output pipe
# The input is going to be the video stream (memory buffer)
# The output format is raw video frames in BGR pixel format.
# https://github.com/kkroening/ffmpeg-python/blob/master/examples/README.md
# https://github.com/kkroening/ffmpeg-python/issues/156
# http://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/
process = (
    ffmpeg
    .input('pipe:', format='hevc')
    .video
    .output('pipe:', format='rawvideo', pix_fmt='bgr24')
    .run_async(pipe_stdin=True, pipe_stdout=True)
)
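# run_async returns a subprocess.Popen object, so process.stdin and
# process.stdout are regular pipe file objects.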
# https://stackoverflow.com/questions/20321116/can-i-pipe-a-io-bytesio-stream-to-subprocess-popen-in-python
# https://gist.github.com/waylan/2353749
process.stdin.write(stream.getvalue()) # Write stream content to the pipe
process.stdin.close() # close stdin (flush and send EOF)
# Read decoded video (frame by frame), and display each frame (using cv2.imshow)
while True:
    # Read raw video frame from stdout as a bytes object.
    in_bytes = process.stdout.read(width * height * 3)
    if not in_bytes:
        break

    # Transform the bytes read into a numpy array
    in_frame = (
        np
        .frombuffer(in_bytes, np.uint8)
        .reshape([height, width, 3])
    )

    # Display the frame
    cv2.imshow('in_frame', in_frame)
    if cv2.waitKey(100) & 0xFF == ord('q'):
        break
process.wait()
cv2.destroyAllWindows()
Note: I used stdin and stdout, rather than named pipes, because I wanted the code to work on both Windows and Linux.
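If you want the frames back in a single NumPy array (as in the question) rather than displayed one by one, the piping step can be collapsed with Popen.communicate, which writes stdin, reads all of stdout and waits for the process (and thereby also avoids pipe-buffer deadlocks on large streams). A minimal sketch, assuming in_bytes, width and height from the code above:

import ffmpeg
import numpy as np

process = (
    ffmpeg
    .input('pipe:', format='hevc')
    .output('pipe:', format='rawvideo', pix_fmt='bgr24')
    .run_async(pipe_stdin=True, pipe_stdout=True)
)

# communicate() feeds the encoded stream to stdin and collects decoded stdout.
out_bytes, _ = process.communicate(input=in_bytes)

# Reshape to (n_frames, height, width, 3) - note: height comes before width.
frames = np.frombuffer(out_bytes, np.uint8).reshape(-1, height, width, 3)
print(frames.shape)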
https://stackoverflow.com/questions/59965990