I read frames from a webcam and extract some motion data from them. I wrote a small class that measures how long each step takes (reading the frame, detecting motion, and so on) and prints the results.
When I open the camera without requesting a specific resolution, the captured frames are 640x480. I wanted a higher resolution, so I set it to 1920x1080:
WEBCAM_RAW_RES = (1920, 1080)
FRAMERATE = 20
vid = cv2.VideoCapture(0)
vid.set(cv2.CAP_PROP_FPS, FRAMERATE)
vid.set(cv2.CAP_PROP_FRAME_WIDTH, WEBCAM_RAW_RES[0])
vid.set(cv2.CAP_PROP_FRAME_HEIGHT, WEBCAM_RAW_RES[1])

It works, but the average time of the vid.read() call jumps from roughly 10 ms to over 200 ms. Dropping to 1280x800 instead brings vid.read() down to about 80 ms.
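For anyone reproducing the measurement, a minimal timing helper can isolate the cost of a single call such as vid.read(). This is independent of the CalcTimer class used in my full code below; the helper name is just illustrative:

```python
import time

def time_call(fn, *args, **kwargs):
    """Run fn once and return (result, elapsed milliseconds)."""
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - t0) * 1000.0
    return result, elapsed_ms

# Usage against a capture object would look like:
#   (ret, frame), ms = time_call(vid.read)
#   print(f"vid.read() took {ms:.1f} ms")
```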
For reference, these are the resolutions the webcam supports:
pi@rpi:~/ $ v4l2-ctl -d /dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
Type: Video Capture
[0]: 'MJPG' (Motion-JPEG, compressed)
Size: Discrete 1920x1080
Interval: Discrete 0.033s (30.000 fps)
Interval: Discrete 0.040s (25.000 fps)
Interval: Discrete 0.050s (20.000 fps)
Interval: Discrete 0.067s (15.000 fps)
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.200s (5.000 fps)
Size: Discrete 1280x800
Interval: Discrete 0.033s (30.000 fps)
Interval: Discrete 0.040s (25.000 fps)
Interval: Discrete 0.050s (20.000 fps)
Interval: Discrete 0.067s (15.000 fps)
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.200s (5.000 fps)
Size: Discrete 1280x720
Interval: Discrete 0.033s (30.000 fps)
Interval: Discrete 0.040s (25.000 fps)
Interval: Discrete 0.050s (20.000 fps)
Interval: Discrete 0.067s (15.000 fps)
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.200s (5.000 fps)
Size: Discrete 640x400
Interval: Discrete 0.033s (30.000 fps)
Interval: Discrete 0.040s (25.000 fps)
Interval: Discrete 0.050s (20.000 fps)
Interval: Discrete 0.067s (15.000 fps)
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.200s (5.000 fps)
Size: Discrete 320x240
Interval: Discrete 0.033s (30.000 fps)
Interval: Discrete 0.040s (25.000 fps)
Interval: Discrete 0.050s (20.000 fps)
Interval: Discrete 0.067s (15.000 fps)
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.200s (5.000 fps)
Size: Discrete 640x480
Interval: Discrete 0.033s (30.000 fps)
Interval: Discrete 0.040s (25.000 fps)
Interval: Discrete 0.050s (20.000 fps)
Interval: Discrete 0.067s (15.000 fps)
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.200s (5.000 fps)
[1]: 'YUYV' (YUYV 4:2:2)
Size: Discrete 1920x1080
Interval: Discrete 0.200s (5.000 fps)
Interval: Discrete 0.333s (3.000 fps)
Size: Discrete 1280x800
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.200s (5.000 fps)
Size: Discrete 1280x720
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.200s (5.000 fps)
Size: Discrete 640x400
Interval: Discrete 0.033s (30.000 fps)
Interval: Discrete 0.040s (25.000 fps)
Interval: Discrete 0.050s (20.000 fps)
Interval: Discrete 0.067s (15.000 fps)
Size: Discrete 320x240
Interval: Discrete 0.033s (30.000 fps)
Interval: Discrete 0.040s (25.000 fps)
Interval: Discrete 0.050s (20.000 fps)
Interval: Discrete 0.067s (15.000 fps)
Size: Discrete 640x480
Interval: Discrete 0.033s (30.000 fps)
Interval: Discrete 0.040s (25.000 fps)
Interval: Discrete 0.050s (20.000 fps)
Interval: Discrete 0.067s (15.000 fps)

How can I get a higher resolution without the ~200 ms delay? Do I have to switch to the MJPG stream format? I haven't found much about this online.
Here is my complete code:
import os
import time
import cv2
from MotionDetectorBlob import MotionDetectorBlob
from utils import CalcTimer, resize_and_crop
from config import *

WEBCAM_RAW_RES = (640, 480)
FRAMERATE = 20

def start_webcam():
    print("Setting up webcam and finding the correct brightness.")
    vid = cv2.VideoCapture(0)
    vid.set(cv2.CAP_PROP_FPS, FRAMERATE)
    vid.set(cv2.CAP_PROP_FRAME_WIDTH, WEBCAM_RAW_RES[0])
    vid.set(cv2.CAP_PROP_FRAME_HEIGHT, WEBCAM_RAW_RES[1])
    vid.set(cv2.CAP_PROP_AUTO_EXPOSURE, 3)  # auto mode
    time.sleep(5)  # let auto-exposure find the brightness
    #vid.set(cv2.CAP_PROP_AUTO_EXPOSURE, 1)  # manual exposure mode
    vid.set(cv2.CAP_PROP_AUTO_WB, 0)  # manual white balance
    vid.set(cv2.CAP_PROP_WB_TEMPERATURE, 4000)  # manual white balance
    print("Webcam setup done.")
    detector = MotionDetectorBlob()
    firstrun = True
    timer = CalcTimer()
    while True:
        # Capture the video frame by frame
        try:
            timer.start()
            ret, frame = vid.read()
            print(f"original webcam res: {frame.shape}")
            timer.measure("read")
            frame = cv2.rotate(frame, cv2.ROTATE_90_CLOCKWISE)
            timer.measure("rotate")
            # as we skip the analyser, process the frames directly
            data = detector.detect(frame)
            print(timer.results())
        except KeyboardInterrupt:
            break
    # After the loop, release the capture object
    vid.release()

start_webcam()

Posted on 2022-10-28 10:39:46
The main reason vid.read() is slow is that you are using the YUYV format, which is uncompressed and therefore needs far more bandwidth and processing than MJPG. MJPG is compressed and can handle higher resolutions much faster.
To switch to the MJPG format, add one more line to your code after initializing the vid object:
vid = cv2.VideoCapture(0)
vid.set(cv2.CAP_PROP_FPS, FRAMERATE)
vid.set(cv2.CAP_PROP_FRAME_WIDTH, WEBCAM_RAW_RES[0])
vid.set(cv2.CAP_PROP_FRAME_HEIGHT, WEBCAM_RAW_RES[1])
vid.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))  # add this line

This tells OpenCV to capture the video stream with the MJPG codec. You can also use the vid.get() method with the appropriate property codes (such as cv2.CAP_PROP_FOURCC and cv2.CAP_PROP_FORMAT) to check which format and codec the camera is currently delivering.
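As a sanity check after setting the property, vid.get(cv2.CAP_PROP_FOURCC) returns the active codec as an integer with the four characters packed in little-endian byte order. A small sketch (pure Python, no camera needed) of how that integer maps back to the four-character code:

```python
def fourcc_to_str(value):
    """Decode the integer returned by vid.get(cv2.CAP_PROP_FOURCC)
    into its four-character code (little-endian byte order)."""
    v = int(value)
    return "".join(chr((v >> (8 * i)) & 0xFF) for i in range(4))

def str_to_fourcc(code):
    """Equivalent of cv2.VideoWriter_fourcc(*code) for a 4-char string."""
    return sum(ord(c) << (8 * i) for i, c in enumerate(code))

# "MJPG" packs to 1196444237; decoding that integer yields "MJPG" again.
```

So if vid.get(cv2.CAP_PROP_FOURCC) still decodes to "YUYV" after the set() call, the driver rejected the request.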
Explanation: YUYV is a raw format that uses two bytes per pixel on average, one for luminance (Y) and a shared one for chrominance (U/V). At 1920x1080, each frame is 1920 x 1080 x 2 = 4,147,200 bytes, or about 4 MB. Streaming that at 20 FPS requires roughly 4 x 20 = 80 MB/s of bandwidth, which is quite high for a USB webcam.
MJPG is a compressed format that encodes each frame as a JPEG image. This shrinks each frame dramatically, depending on the quality and complexity of the image. For example, a typical 1920x1080 JPEG might be around 200 KB, or 0.2 MB; streaming that at 20 FPS needs only about 0.2 x 20 = 4 MB/s of bandwidth, far less than YUYV. By using the MJPG format you can therefore reduce the latency of vid.read() and save some CPU cycles when processing the frames.
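The bandwidth arithmetic above can be checked directly. Note the 200 KB JPEG size is the rough estimate from the answer, not a measured value:

```python
def yuyv_bytes_per_frame(width, height):
    # YUYV 4:2:2 stores 2 bytes per pixel on average
    return width * height * 2

def bandwidth_bytes_per_sec(frame_bytes, fps):
    return frame_bytes * fps

raw = yuyv_bytes_per_frame(1920, 1080)          # 4,147,200 bytes (~4 MB)
yuyv_bw = bandwidth_bytes_per_sec(raw, 20)      # ~80 MB/s uncompressed
mjpg_bw = bandwidth_bytes_per_sec(200_000, 20)  # ~4 MB/s at ~200 KB/frame
```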
https://stackoverflow.com/questions/74234070