idxs = cv2.dnn.NMSBoxes(boxes, confidence, MIN_CORP, NMS_THRESH) TypeError: Can't parse 'scores'. Input argument doesn't provide sequence protocol

Stack Overflow user
Asked on 2021-12-05 15:36:33
1 answer · 938 views · 0 followers · 0 votes

Help, I am getting an error in the code of my social distancing detection system, which uses a webcam. I have searched for this error, but the examples I found are no different from my code. I write the code in Notepad++ and run it from the command prompt. Here is my error:

C:\Users\User\Downloads\Social_Distancing_Detection_Real_Time>python Run.py
[INFO] loading YOLO from disk...
[INFO] setting preferable backend and target to CUDA...
[INFO] accessing video stream...
[ WARN:0] global D:\a\opencv-python\opencv-python\opencv\modules\dnn\src\dnn.cpp (1447) cv::dnn::dnn4_v20211004::Net::Impl::setUpNet DNN module was not built with CUDA backend; switching to CPU
Traceback (most recent call last):
  File "C:\Users\User\Downloads\Social_Distancing_Detection_Real_Time\Run.py", line 77, in <module>
    results = detect_people(frame, net, ln,
  File "C:\Users\User\Downloads\Social_Distancing_Detection_Real_Time\mylib\detection.py", line 58, in detect_people
    idxs = cv2.dnn.NMSBoxes(boxes, confidence, MIN_CORP, NMS_THRESH)
TypeError: Can't parse 'scores'. Input argument doesn't provide sequence protocol
[ WARN:1] global D:\a\opencv-python\opencv-python\opencv\modules\videoio\src\cap_msmf.cpp (438) `anonymous-namespace'::SourceReaderCB::~SourceReaderCB terminating async callback

My error
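An aside on the backend lines in the log: they typically come from a setup like the hypothetical sketch below (this is not the author's actual Run.py; the config and weights paths are placeholders). The standard opencv-python wheels from pip are built without CUDA support, so the DNN module prints the warning shown above and falls back to the CPU.

#hypothetical sketch of the backend setup the log refers to;
#config/weights paths are placeholders, not the author's files
import cv2

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")

#request the CUDA backend and target; without a CUDA-enabled build,
#OpenCV logs "DNN module was not built with CUDA backend; switching to CPU"
#and runs inference on the CPU instead
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)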

Below is the full code of my detection.py file:

#import the necessary packages
from .config import NMS_THRESH, MIN_CORP, People_Counter
import numpy as np
import cv2

def detect_people(frame, net, In, personIdx = 0):
    #grab the dimensions of the frame and initialize the list of results
    (H, W) = frame.shape[:2]
    results = []
    
    #construct a blob from the input frame and then perform a forward
    #pass of the YOLO object detector, giving us our bounding boxes
    #and associated probabilities
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
        swapRB=True, crop=False)
    net.setInput(blob)
    layerOutputs = net.forward(In)
        
    #initialize our lists of detected bounding boxes, centroids and
    #confidences, respectively
    boxes = []
    centroids = []
    confidences = []
    
    #loop over each of the layer outputs
    for output in layerOutputs:
        #loop over each of the detections
        for detection in output:
            #extract the class ID and confidence (i.e., probability)
            #of the current object detection
            scores = detection[5:]
            classID = np.argmax(scores)
            confidence = scores[classID]
            
            #filter detections by (1) ensuring that the object
            #detected was a person and (2) that the minimum
            #confidence is met
            if classID == personIdx and confidence > MIN_CORP:
                #scale the bounding box coordinates back relative to 
                #the size of the image, keeping in mind that YOLO
                #actually returns the center (x,y) -coordinates of 
                #the bounding box followed by the boxes' width and height
                box = detection[0:4] * np.array([W, H, W, H])
                (centerX, centerY, width, height) = box.astype("int")
                
                #use the center (x,y) -coordinates to derive the top
                #and left corner of the bounding box
                x = int(centerX - (width / 2))
                y = int(centerY - (height / 2))
                    
                #update our list of bounding box coordinates,
                #centroids and confidences
                boxes.append([x, y, int(width), int(height)])
                centroids.append((centerX, centerY))
                confidences.append(float(confidence))                  
 
    #apply non-maxima suppression to suppress weak, overlapping bounding boxes
    idxs = cv2.dnn.NMSBoxes(boxes, confidence, MIN_CORP, NMS_THRESH)
    #print('Total people count:', len(idxs))
    #compute the total people counter
    #if People_Counter:
        #human_count = "Human count: {}".format(len(idxs))
        #cv2.putText(frame, human_count, (470, frame.shape[0] - 75), cv2.FONT_HERSHEY_SIMPLEX, 0.70, (0, 0, 0), 2)
    
    #ensure at least one detection exists
    if len(idxs) > 0:
    #loop over the indexes we are keeping
        for i in idxs.flatten():
            #extract the bounding box coordinates
            (x, y) = (boxes[i][0], boxes[i][1])
            (w, h) = (boxes[i][2], boxes[i][3])
        
            #update our results list to consist of the person
            #prediction probability, bounding box coordinates,
            #and the centroids
            r = (confidences[i], (x, y, x + w, y + h), centroids[i])
            results.append(r)
        
    #return the list of the results
    return results
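For context on the traceback above, here is a minimal sketch that reproduces the TypeError when the second argument of cv2.dnn.NMSBoxes is a single float instead of a sequence; the boxes, score and thresholds are made up for illustration, and the behaviour matches the OpenCV-Python 4.5.x build shown in the log.

#minimal sketch reproducing the TypeError; all values are made up
import cv2

boxes = [[10, 20, 50, 100], [12, 22, 48, 96]]  #[x, y, w, h] per detection
MIN_CORP, NMS_THRESH = 0.3, 0.3

#a single float does not satisfy the "sequence protocol" the binding
#expects for the 'scores' argument, so the call raises:
#TypeError: Can't parse 'scores'. Input argument doesn't provide sequence protocol
try:
    cv2.dnn.NMSBoxes(boxes, 0.91, MIN_CORP, NMS_THRESH)
except TypeError as err:
    print(err)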

1 Answer

Stack Overflow user

Answered on 2022-12-01 11:24:44

The answer to your problem is, as usual, in the response you get from the interpreter:

TypeError: Can't parse 'scores'. Input argument doesn't provide sequence protocol

scores is the second argument of cv2.dnn.NMSBoxes, which in your case is confidence. confidence is a single number; you cannot iterate over it. You have made a typo and probably meant to pass confidences, which is a list.

Change the code to:

idxs = cv2.dnn.NMSBoxes(boxes, confidences, MIN_CORP, NMS_THRESH)
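For reference, a self-contained sketch of the corrected call with hypothetical boxes and scores (MIN_CORP and NMS_THRESH stand in for the thresholds imported from .config): when the confidences list matches the boxes one-to-one, cv2.dnn.NMSBoxes returns the indices of the boxes kept after non-maxima suppression, which is what the idxs.flatten() loop in detect_people then iterates over.

#sketch of the corrected call with hypothetical values
import cv2

boxes = [[10, 20, 50, 100], [12, 22, 48, 96], [300, 40, 60, 120]]
confidences = [0.91, 0.87, 0.55]  #one score per box, as built in the loop
MIN_CORP, NMS_THRESH = 0.3, 0.3

idxs = cv2.dnn.NMSBoxes(boxes, confidences, MIN_CORP, NMS_THRESH)
for i in idxs.flatten():  #indices of the boxes that survive NMS
    print(i, boxes[i], confidences[i])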

0 votes
The original content of this page is provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/70235839