I'm currently using an XBOX KINECT model 1414 with Processing 2.2.1. I'd like to use my right hand as a mouse to guide a character across the screen.
I've managed to draw an ellipse that follows the right-hand joint of the Kinect skeleton. How can I find out the position of that joint so that I can substitute it for mouseX and mouseY when needed?
The code below tracks your right hand and draws a red ellipse on it:
import SimpleOpenNI.*;
SimpleOpenNI kinect;
void setup()
{
// instantiate a new context
kinect = new SimpleOpenNI(this);
kinect.setMirror(!kinect.mirror());
// enable depthMap generation
kinect.enableDepth();
// enable skeleton generation for all joints
kinect.enableUser();
smooth();
noStroke();
// create a window the size of the depth information
size(kinect.depthWidth(), kinect.depthHeight());
}
void draw()
{
// update the camera...must do
kinect.update();
// clear the frame first, so it doesn't overwrite the depth image
background(0);
// draw the depth image...optional
image(kinect.depthImage(), 0, 0);
// check if the skeleton is being tracked for user 1 (the first user detected)
if (kinect.isTrackingSkeleton(1))
{
int joint = SimpleOpenNI.SKEL_RIGHT_HAND;
// draw a dot on their joint, so they know what's being tracked
drawJoint(1, joint);
}
}
///////////////////////////////////////////////////////
void drawJoint(int userID, int jointId) {
// make a vector to store the joint position
PVector jointPosition = new PVector();
// put the position of the tracked joint into that vector
kinect.getJointPositionSkeleton(userID, jointId, jointPosition);
// convert the detected hand position to "projective" coordinates that will match the depth image
PVector convertedJointPosition = new PVector();
kinect.convertRealWorldToProjective(jointPosition, convertedJointPosition);
// and display it
fill(255, 0, 0);
// scale the dot with depth: a close hand (~700mm) draws large (50px), a far one (~2500mm) small (1px)
float ellipseSize = map(convertedJointPosition.z, 700, 2500, 50, 1);
ellipse(convertedJointPosition.x, convertedJointPosition.y, ellipseSize, ellipseSize);
}
//////////////////////////// Event-based Methods
void onNewUser(SimpleOpenNI curContext, int userId)
{
println("onNewUser - userId: " + userId);
println("\tstart tracking skeleton");
curContext.startTrackingSkeleton(userId);
}
void onLostUser(SimpleOpenNI curContext, int userId)
{
println("onLostUser - userId: " + userId);
}
Any links or help of any kind would be much appreciated, thanks!
Posted on 2015-11-23 23:22:54
In your case, I'd suggest using the coordinates of the right-hand joint. Here's how to get them with the Microsoft Kinect SDK (C#):
// inside the SkeletonFrameReady event handler, after copying the
// frame's skeleton data into the skeletons array
foreach (Skeleton skeleton in skeletons)
{
    if (skeleton.TrackingState != SkeletonTrackingState.Tracked) continue;
    Joint rightHand = skeleton.Joints[JointType.HandRight];
    double rightX = rightHand.Position.X;
    double rightY = rightHand.Position.Y;
    double rightZ = rightHand.Position.Z;
}
Note that we're working in three dimensions, so you'll have x, y and z coordinates.
FYI: you have to insert those lines of code in the SkeletonFrameReady event handler. If you still want the circle around the hand, take a look at the Skeleton-Basics WPF sample in the Kinect SDK.
Does this help?
Posted on 2015-11-23 23:42:17
It's not very clear what you're trying to achieve. If you simply need the position of the hand in 2D screen coordinates, the code you posted already covers that: kinect.getJointPositionSkeleton() retrieves the 3D coordinates, and kinect.convertRealWorldToProjective() converts them to 2D screen coordinates.
If you want to be able to swap between the Kinect-tracked hand coordinates and the mouse coordinates, you can store the PVector used in the 2D conversion as a variable visible to the whole sketch, and update it either from the Kinect skeleton (if it's being tracked) or from the mouse:
import SimpleOpenNI.*;
SimpleOpenNI kinect;
PVector user1RightHandPos = new PVector();
float ellipseSize;
void setup()
{
// instantiate a new context
kinect = new SimpleOpenNI(this);
kinect.setMirror(!kinect.mirror());
// enable depthMap generation
kinect.enableDepth();
// enable skeleton generation for all joints
kinect.enableUser();
smooth();
noStroke();
// create a window the size of the depth information
size(kinect.depthWidth(), kinect.depthHeight());
}
void draw()
{
// update the camera...must do
kinect.update();
// clear the frame first, so it doesn't overwrite the depth image
background(0);
// draw the depth image...optional
image(kinect.depthImage(), 0, 0);
// check if the skeleton is being tracked for user 1 (the first user detected)
if (kinect.isTrackingSkeleton(1))
{
updateRightHand2DCoords(1, SimpleOpenNI.SKEL_RIGHT_HAND);
ellipseSize = map(user1RightHandPos.z, 700, 2500, 50, 1);
} else { // if the skeleton isn't tracked, use the mouse
user1RightHandPos.set(mouseX,mouseY,0);
ellipseSize = 20;
}
//draw ellipse regardless of the skeleton tracking or mouse mode
fill(255, 0, 0);
ellipse(user1RightHandPos.x, user1RightHandPos.y, ellipseSize, ellipseSize);
}
///////////////////////////////////////////////////////
void updateRightHand2DCoords(int userID, int jointId) {
// make a vector to store the joint position
PVector jointPosition = new PVector();
// put the position of the tracked joint into that vector
kinect.getJointPositionSkeleton(userID, jointId, jointPosition);
// convert the detected hand position to "projective" coordinates that will match the depth image
user1RightHandPos.set(0,0,0);//reset the 2D hand position before OpenNI conversion from 3D
kinect.convertRealWorldToProjective(jointPosition, user1RightHandPos);
}
//////////////////////////// Event-based Methods
void onNewUser(SimpleOpenNI curContext, int userId)
{
println("onNewUser - userId: " + userId);
println("\tstart tracking skeleton");
curContext.startTrackingSkeleton(userId);
}
void onLostUser(SimpleOpenNI curContext, int userId)
{
println("onLostUser - userId: " + userId);
}
Alternatively, you could use a boolean to toggle between mouse and Kinect modes while testing.
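For example, a minimal sketch of such a toggle (the useMouse flag and the 'm' key binding are illustrative assumptions, not part of the code above):

boolean useMouse = false; // hypothetical flag: when true, the mouse drives the cursor

void keyPressed() {
  // press 'm' to flip between mouse control and Kinect control
  if (key == 'm') useMouse = !useMouse;
}

In draw() you would then check useMouse alongside kinect.isTrackingSkeleton(1) to decide whether to fall back to mouseX and mouseY.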
If you only need mouse coordinates for testing, without having to use the Kinect all the time, I suggest having a look at the RecorderPlay example (via Processing > File > Examples > Contributed Libraries > SimpleOpenNI > OpenNI > RecorderPlay). OpenNI can record a scene (including the depth data), which makes testing much simpler: just record an .oni file with your most common interactions, then reuse that recording while developing. To use the .oni file, simply pass a different constructor signature to SimpleOpenNI:
kinect = new SimpleOpenNI(this,"/path/to/yourRecordingHere.oni");
One thing to note: the depth is stored at half the resolution, so the coordinates need to be doubled to be on a par with the real-time version.
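For instance, a sketch of that adjustment (the usePlayback flag is an illustrative assumption; the doubling itself comes from the note above):

boolean usePlayback = true; // hypothetical flag: set when the sketch runs from an .oni recording

// after convertRealWorldToProjective(), double the projected coordinates
// so the half-resolution recording lines up with the live depth image size
if (usePlayback) {
  user1RightHandPos.x *= 2;
  user1RightHandPos.y *= 2;
}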
https://stackoverflow.com/questions/33850320