How to run mediapipe facemesh in an ES6 node.js environment

Stack Overflow user
Asked on 2021-05-24 15:05:58
1 answer · 2.4K views · 0 followers · 3 votes

I am trying to run this example https://codepen.io/mediapipe/details/KKgVaPJ from https://google.github.io/mediapipe/solutions/face_mesh#javascript-solution-api in a create-react-app. I have already:

  • npm installed all the facemesh mediapipe packages.
  • Replaced the jsdelivr tags with node imports, and I got the definitions and functions.
  • Replaced the video element with react-webcam.

I do not know how to replace this jsdelivr part, and maybe that is what is affecting things:

Code language: javascript
const faceMesh = new FaceMesh({
    locateFile: (file) => {
      return `https://cdn.jsdelivr.net/npm/@mediapipe/face_mesh/${file}`;
    }
  });
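
For reference, a minimal sketch of pointing locateFile at locally hosted files instead of the CDN, assuming the assets from node_modules/@mediapipe/face_mesh have been copied into create-react-app's public/mediapipe folder (a hypothetical location, not from the original post):

Code language: javascript
const faceMesh = new FaceMesh({
  locateFile: (file) => {
    // Assumption: the face_mesh asset files were copied from
    // node_modules/@mediapipe/face_mesh into public/mediapipe
    return `${process.env.PUBLIC_URL}/mediapipe/${file}`;
  }
});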

So the questions are:

  • Why is the face mesh not showing up? Is there any example of what I am trying to do?

Here is my App.js code (sorry for the debugging scaffolding):

Code language: javascript
import './App.css';
import React, { useState, useEffect } from "react";
import Webcam from "react-webcam";
import { Camera, CameraOptions } from '@mediapipe/camera_utils'
import {
  FaceMesh,
  FACEMESH_TESSELATION,
  FACEMESH_RIGHT_EYE,
  FACEMESH_LEFT_EYE,
  FACEMESH_RIGHT_EYEBROW,
  FACEMESH_LEFT_EYEBROW,
  FACEMESH_FACE_OVAL,
  FACEMESH_LIPS
} from '@mediapipe/face_mesh'
import { drawConnectors } from '@mediapipe/drawing_utils'

const videoConstraints = {
  width: 1280,
  height: 720,
  facingMode: "user"
};

function App() {
  const webcamRef = React.useRef(null);
  const canvasReference = React.useRef(null);
  const [cameraReady, setCameraReady] = useState(false);
  let canvasCtx
  let camera

  const videoElement = document.getElementsByClassName('input_video')[0];
  // const canvasElement = document.getElementsByClassName('output_canvas')[0];

  const canvasElement = document.createElement('canvas');

  console.log('canvasElement', canvasElement)
  console.log('canvasCtx', canvasCtx)

  useEffect(() => {
    camera = new Camera(webcamRef.current, {
      onFrame: async () => {
        console.log('{send}',await faceMesh.send({ image: webcamRef.current.video }));
      },
      width: 1280,
      height: 720
    });

    canvasCtx = canvasReference.current.getContext('2d');
    camera.start();
    console.log('canvasReference', canvasReference)

  }, [cameraReady]);

  function onResults(results) {
    console.log('results')
    canvasCtx.save();
    canvasCtx.clearRect(0, 0, canvasElement.width, canvasElement.height);
    canvasCtx.drawImage(
      results.image, 0, 0, canvasElement.width, canvasElement.height);
    if (results.multiFaceLandmarks) {
      for (const landmarks of results.multiFaceLandmarks) {
        drawConnectors(canvasCtx, landmarks, FACEMESH_TESSELATION, { color: '#C0C0C070', lineWidth: 1 });
        drawConnectors(canvasCtx, landmarks, FACEMESH_RIGHT_EYE, { color: '#FF3030' });
        drawConnectors(canvasCtx, landmarks, FACEMESH_RIGHT_EYEBROW, { color: '#FF3030' });
        drawConnectors(canvasCtx, landmarks, FACEMESH_LEFT_EYE, { color: '#30FF30' });
        drawConnectors(canvasCtx, landmarks, FACEMESH_LEFT_EYEBROW, { color: '#30FF30' });
        drawConnectors(canvasCtx, landmarks, FACEMESH_FACE_OVAL, { color: '#E0E0E0' });
        drawConnectors(canvasCtx, landmarks, FACEMESH_LIPS, { color: '#E0E0E0' });
      }
    }
    canvasCtx.restore();
  }

  const faceMesh = new FaceMesh({
    locateFile: (file) => {
      return `https://cdn.jsdelivr.net/npm/@mediapipe/face_mesh/${file}`;
    }
  });
  faceMesh.setOptions({
    selfieMode: true,
    maxNumFaces: 1,
    minDetectionConfidence: 0.5,
    minTrackingConfidence: 0.5
  });
  faceMesh.onResults(onResults);

  // const camera = new Camera(webcamRef.current, {
  //   onFrame: async () => {
  //     await faceMesh.send({ image: videoElement });
  //   },
  //   width: 1280,
  //   height: 720
  // });
  // camera.start();

  return (
    <div className="App">
      <Webcam
        audio={false}
        height={720}
        ref={webcamRef}
        screenshotFormat="image/jpeg"
        width={1280}
        videoConstraints={videoConstraints}
        onUserMedia={() => {
          console.log('webcamRef.current', webcamRef.current);
          // navigator.mediaDevices
          //   .getUserMedia({ video: true })
          //   .then(stream => webcamRef.current.srcObject = stream)
          //   .catch(console.log);

          setCameraReady(true)
        }}
      />
      <canvas
        ref={canvasReference}
        style={{
          position: "absolute",
          marginLeft: "auto",
          marginRight: "auto",
          left: 0,
          right: 0,
          textAlign: "center",
          zindex: 9,
          width: 1280,
          height: 720,
        }}
      />

    </div >
  );
}

export default App;

1 Answer

Stack Overflow user

Accepted answer

Answered on 2021-06-07 14:59:51

You do not have to replace the jsdelivr part; that piece of code is fine. I do think you need to reorder your code, though:

  • You should initialize faceMesh inside a useEffect with [] as the dependency array, so the algorithm starts working when the page is rendered for the first time.
  • You also do not need to fetch videoElement and canvasElement with document.*, because the refs (webcamRef and canvasReference) are already defined.

A code example:

Code language: javascript
useEffect(() => {
  const faceMesh = new FaceDetection({
    locateFile: (file) => {
      return `https://cdn.jsdelivr.net/npm/@mediapipe/face_detection/${file}`;
    },
  });

  faceMesh.setOptions({
    maxNumFaces: 1,
    minDetectionConfidence: 0.5,
    minTrackingConfidence: 0.5,
  });

  faceMesh.onResults(onResults);

  if (
    typeof webcamRef.current !== "undefined" &&
    webcamRef.current !== null
  ) {
    camera = new Camera(webcamRef.current.video, {
      onFrame: async () => {
        await faceMesh.send({ image: webcamRef.current.video });
      },
      width: 1280,
      height: 720,
    });
    camera.start();
  }
}, []);
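
Note that this example uses FaceDetection from @mediapipe/face_detection; the same pattern should also work with the FaceMesh class from @mediapipe/face_mesh that the question uses. A sketch of the face-mesh variant, reusing the options from the question:

Code language: javascript
const faceMesh = new FaceMesh({
  locateFile: (file) => {
    return `https://cdn.jsdelivr.net/npm/@mediapipe/face_mesh/${file}`;
  },
});
faceMesh.setOptions({
  maxNumFaces: 1,
  minDetectionConfidence: 0.5,
  minTrackingConfidence: 0.5,
});
faceMesh.onResults(onResults);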

Finally, in the onResults callback, I suggest printing the results first to check that the Mediapipe implementation is working. Also, don't forget to set the canvas dimensions before drawing anything on it.

Code language: javascript
function onResults(results) {
   console.log(results)
   const canvas = canvasReference.current;
   canvasCtx = canvas.getContext('2d')
   canvas.width = webcamRef.current.video.videoWidth;
   canvas.height = webcamRef.current.video.videoHeight;

   ...
}
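
Combined with the drawing code already present in the question's App.js, a fuller onResults for the face-mesh case could look like the sketch below, assuming canvasReference and webcamRef are the refs defined in the component and that drawConnectors and FACEMESH_TESSELATION are imported as in the question:

Code language: javascript
function onResults(results) {
  const canvasElement = canvasReference.current;
  const canvasCtx = canvasElement.getContext('2d');

  // Size the canvas to the incoming video frame before drawing
  canvasElement.width = webcamRef.current.video.videoWidth;
  canvasElement.height = webcamRef.current.video.videoHeight;

  canvasCtx.save();
  canvasCtx.clearRect(0, 0, canvasElement.width, canvasElement.height);
  canvasCtx.drawImage(results.image, 0, 0, canvasElement.width, canvasElement.height);

  if (results.multiFaceLandmarks) {
    for (const landmarks of results.multiFaceLandmarks) {
      drawConnectors(canvasCtx, landmarks, FACEMESH_TESSELATION,
        { color: '#C0C0C070', lineWidth: 1 });
    }
  }
  canvasCtx.restore();
}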

Good luck! :)

Votes: 3
Original page content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/67674453
