Please forgive my terrible code, this is just a hobby. I am trying to get Python to play a sound when the face tracking on both the x and y axes reports the face at a defined centre point. For now I am just testing with a sound; once that works I will swap it for sending data to an Arduino that drives a brushless motor for a mini Orbeez gun. The rest of the code works, but I cannot get the sound to play. (If I invert the condition, the sound does play.)
Below is a snippet of the code.
I have also tried this many different ways; `if (xcenter + ycenter) == 2:` is just the latest attempt.
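One Python pitfall worth flagging for range checks like the ones in this snippet: `&` is the bitwise AND operator and binds more tightly than `<` and `>`, so it cannot be used to join comparisons. A quick demonstration:

```python
angle = 100

# '&' binds more tightly than comparisons, so this parses as
# `angle > (250 & angle) < 320` -- not a range test at all.
# 250 & 100 == 96, so this chained comparison is 100 > 96 < 320:
print(angle > 250 & angle < 320)   # True, even though 100 < 250!

# A chained comparison (or `and`) is what was intended:
print(250 < angle < 320)           # False
```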
# This will send data to the arduino according to the x coordinate
def angle_servox(angle):
    if angle > 320:
        prov = 1
        ser.write(b'2')
        print("Right")
        xcenter = 0
    elif angle < 250:
        prov = 2
        ser.write(b'1')
        print("Left")
        xcenter = 0
    elif 250 < angle < 320:
        ser.write(b'0')
        print("Stop")
        xcenter = 1
# This will send data to the arduino according to the y coordinate
def angle_servoy(angle):
    if angle > 250:
        prov = 3
        ser.write(b'4')
        print("Down")
        ycenter = 0
    elif angle < 75:
        prov = 4
        ser.write(b'3')
        print("Up")
        ycenter = 0
    elif 80 < angle < 240:
        ser.write(b'5')
        print("Stop")
        ycenter = 1
# import the haarcascade file
face_casc = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
# train the face for recognition
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("recognizers/face-trainer.yml.txt")
labels = {"person_name": 1}
with open("pickles/face-labels.pickle", 'rb') as f:
    og_labels = pickle.load(f)
    labels = {v: k for k, v in og_labels.items()}
# for the default camera use value 0, otherwise 1
videoWeb = cv2.VideoCapture(1)
n = 0
while (videoWeb.isOpened()):
    print(ser.read().decode().strip('\r\n'))
    ret, imag = videoWeb.read()
    gray = cv2.cvtColor(imag, cv2.COLOR_BGR2GRAY)
    #cv2.imshow('xyz',imag)
    faces = face_casc.detectMultiScale(
        gray,
        scaleFactor=1.4,
        minNeighbors=5,
        minSize=(30,30)
    )
    if (xcenter + ycenter) == 2:
        voice.play(active2)
Thanks in advance.
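A likely reason the sound never plays: in Python, assigning to `xcenter` inside `angle_servox()` creates a function-local variable, so the module-level `xcenter` read by `(xcenter + ycenter) == 2` never changes. A minimal sketch of one fix, returning the flag instead of assigning a global (the `ser.write()` and `print()` side effects from the original are omitted so the snippet runs standalone):

```python
# Sketch: the angle functions return the "centred" flag instead of
# assigning it, so the caller always sees the updated value.

def angle_servox(angle):
    if angle > 320:        # face is to the right
        return 0
    elif angle < 250:      # face is to the left
        return 0
    else:                  # roughly 250..320: centred on x
        return 1

def angle_servoy(angle):
    if angle > 250:        # face is below centre
        return 0
    elif angle < 75:       # face is above centre
        return 0
    else:                  # roughly 75..250: centred on y
        return 1

xcenter = angle_servox(300)   # centred
ycenter = angle_servoy(150)   # centred
if xcenter + ycenter == 2:
    print("centred: play the sound here")
```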
Posted on 2019-03-22 08:48:29
As far as I can tell, you are not correctly interrogating the information you get back from the cv2 face detection (and you can probably drop the two angle_servo() functions from this problem).
At a high level, you want your script to: continuously grab frames from the camera, detect any faces in each frame, and check whether a detected face is close to your predetermined centre point before acting.
I've never used OpenCV, but based on the example code I found here, a revised version of your script might be:
import cv2

# Setup our face detection
face_casc = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("recognizers/face-trainer.yml.txt")

# Setup our video input
videoWeb = cv2.VideoCapture(1)

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def is_close_to(self, other_pt, error=5):
        x_is_close = abs(self.x - other_pt.x) <= error
        y_is_close = abs(self.y - other_pt.y) <= error
        return x_is_close and y_is_close
# If an image from the video camera looks like this:
#
# x-->
# +-------------+
# y | A | A is at (10, 20)
# | | |
# v | |
# | B | B is at (300, 500)
# +-------------+
#
# Our predetermined references points
A = Point(10, 20)
B = Point(300, 500)
# Continuously get images from the camera
while (videoWeb.isOpened()):
    # and try to find faces in them
    ret, imag = videoWeb.read()
    gray = cv2.cvtColor(imag, cv2.COLOR_BGR2GRAY)
    faces = face_casc.detectMultiScale(
        gray,
        scaleFactor=1.4,
        minNeighbors=5,
        minSize=(30,30)
    )
    # imag is a NumPy array, so its dimensions come from .shape,
    # which is (height, width, channels)
    middle_of_imag = Point(imag.shape[1] / 2, imag.shape[0] / 2)
    # check the location of every face we found
    for (x, y, w, h) in faces:
        face_pt = Point(x, y)
        # NB: it might be better to use the "middle" of the face
        #face_pt = Point(x + w/2, y + h/2)
        if face_pt.is_close_to(A):
            print("face at point A")
        if face_pt.is_close_to(B):
            print("face at point B")
        if face_pt.is_close_to(middle_of_imag):
            print("face somewhat in the middle")
        # if x in faces is between 250 & 320 AND y in faces is between 80 & 240: send 'fire'
        if (250 <= x <= 320) and (80 <= y <= 240):
print("send fire")https://stackoverflow.com/questions/55295026