Emgu CV SURF get matched points coordinates

Stack Overflow user
Asked 2016-03-28 18:53:41
2 answers · 6.4K views · 4 votes

I am using Emgu CV's SURF features to recognize similar objects in images.

The image is drawn correctly, showing all the keypoints found in both images, the similar points (which is what I want), and a rectangle (usually a rectangle, sometimes just a line) covering the similar points.

The problem is that the similar points are visible in the image, but they are not saved in the format I want. They are actually stored in a VectorOfKeyPoint object, which only stores a pointer and other memory data for where the points are stored in memory (that is what I believe). Meaning, I can't get the similar points in pairs, like:

((img1X, img1Y), (img2X, img2Y))

This is what I'm looking for, so I can use the points afterwards. Right now, I can see the points in the resulting image, but I can't get them in pairs.
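For reference, the raw coordinates are in fact reachable through that object: VectorOfKeyPoint has an indexer, and each element is an MKeyPoint whose Point field holds the pixel position as a PointF. A minimal sketch, assuming modelKeyPoints has been filled by a detector:

// Minimal sketch: VectorOfKeyPoint is indexable; each element is an
// MKeyPoint and its Point field is the pixel coordinate (a PointF).
for (int i = 0; i < modelKeyPoints.Size; i++)
{
    PointF p = modelKeyPoints[i].Point;
    Console.WriteLine("Keypoint " + i + ": (" + p.X + "," + p.Y + ")");
}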

The code I am using is the example from Emgu CV.

//----------------------------------------------------------------------------
//  Copyright (C) 2004-2016 by EMGU Corporation. All rights reserved.       
//----------------------------------------------------------------------------
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Drawing;
using System.Runtime.InteropServices;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Features2D;
using Emgu.CV.Structure;
using Emgu.CV.Util;
#if !__IOS__
using Emgu.CV.Cuda;
#endif
using Emgu.CV.XFeatures2D;

namespace FirstEmgu
{

    public static class DrawMatches
    {
        // --------------------------------
        // ORIGINAL FUNCTION FROM EXAMPLE
        // --------------------------------
        private static void FindMatch(Mat modelImage, Mat observedImage, out long matchTime, out VectorOfKeyPoint modelKeyPoints, out VectorOfKeyPoint observedKeyPoints, VectorOfVectorOfDMatch matches, out Mat mask, out Mat homography)
        {
            int k = 2;
            double uniquenessThreshold = 0.8;
            double hessianThresh = 300;

            Stopwatch watch;
            homography = null;

            modelKeyPoints = new VectorOfKeyPoint();
            observedKeyPoints = new VectorOfKeyPoint();

#if !__IOS__
            if (CudaInvoke.HasCuda)
            {
                CudaSURF surfCuda = new CudaSURF((float)hessianThresh);
                using (GpuMat gpuModelImage = new GpuMat(modelImage))
                //extract features from the object image
                using (GpuMat gpuModelKeyPoints = surfCuda.DetectKeyPointsRaw(gpuModelImage, null))
                using (GpuMat gpuModelDescriptors = surfCuda.ComputeDescriptorsRaw(gpuModelImage, null, gpuModelKeyPoints))
                using (CudaBFMatcher matcher = new CudaBFMatcher(DistanceType.L2))
                {
                    surfCuda.DownloadKeypoints(gpuModelKeyPoints, modelKeyPoints);
                    watch = Stopwatch.StartNew();

                    // extract features from the observed image
                    using (GpuMat gpuObservedImage = new GpuMat(observedImage))
                    using (GpuMat gpuObservedKeyPoints = surfCuda.DetectKeyPointsRaw(gpuObservedImage, null))
                    using (GpuMat gpuObservedDescriptors = surfCuda.ComputeDescriptorsRaw(gpuObservedImage, null, gpuObservedKeyPoints))
                    //using (GpuMat tmp = new GpuMat())
                    //using (Stream stream = new Stream())
                    {
                        matcher.KnnMatch(gpuObservedDescriptors, gpuModelDescriptors, matches, k);

                        surfCuda.DownloadKeypoints(gpuObservedKeyPoints, observedKeyPoints);

                        mask = new Mat(matches.Size, 1, DepthType.Cv8U, 1);
                        mask.SetTo(new MCvScalar(255));
                        Features2DToolbox.VoteForUniqueness(matches, uniquenessThreshold, mask);

                        int nonZeroCount = CvInvoke.CountNonZero(mask);
                        if (nonZeroCount >= 4)
                        {
                            nonZeroCount = Features2DToolbox.VoteForSizeAndOrientation(modelKeyPoints, observedKeyPoints,
                               matches, mask, 1.5, 20);
                            if (nonZeroCount >= 4)
                                homography = Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(modelKeyPoints,
                                   observedKeyPoints, matches, mask, 2);
                        }
                    }
                    watch.Stop();
                }
            }
            else
#endif
            {
                using (UMat uModelImage = modelImage.ToUMat(AccessType.Read))
                using (UMat uObservedImage = observedImage.ToUMat(AccessType.Read))
                {
                    SURF surfCPU = new SURF(hessianThresh);
                    //extract features from the object image
                    UMat modelDescriptors = new UMat();
                    surfCPU.DetectAndCompute(uModelImage, null, modelKeyPoints, modelDescriptors, false);

                    watch = Stopwatch.StartNew();

                    // extract features from the observed image
                    UMat observedDescriptors = new UMat();
                    surfCPU.DetectAndCompute(uObservedImage, null, observedKeyPoints, observedDescriptors, false);
                    BFMatcher matcher = new BFMatcher(DistanceType.L2);
                    matcher.Add(modelDescriptors);

                    matcher.KnnMatch(observedDescriptors, matches, k, null);
                    mask = new Mat(matches.Size, 1, DepthType.Cv8U, 1);
                    mask.SetTo(new MCvScalar(255));
                    Features2DToolbox.VoteForUniqueness(matches, uniquenessThreshold, mask);

                    int nonZeroCount = CvInvoke.CountNonZero(mask);
                    if (nonZeroCount >= 4)
                    {
                        nonZeroCount = Features2DToolbox.VoteForSizeAndOrientation(modelKeyPoints, observedKeyPoints,
                           matches, mask, 1.5, 20);
                        if (nonZeroCount >= 4)
                            homography = Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(modelKeyPoints,
                               observedKeyPoints, matches, mask, 2);
                    }

                    watch.Stop();
                }
            }
            matchTime = watch.ElapsedMilliseconds;
        }
        // --------------------------------
        // ORIGINAL FUNCTION FROM EXAMPLE
        // --------------------------------
        /// <summary>
        /// Draw the model image and observed image, the matched features and homography projection.
        /// </summary>
        /// <param name="modelImage">The model image</param>
        /// <param name="observedImage">The observed image</param>
        /// <param name="matchTime">The output total time for computing the homography matrix.</param>
        /// <returns>The model image and observed image, the matched features and homography projection.</returns>
        public static Mat Draw(Mat modelImage, Mat observedImage, out long matchTime)
        {
            Mat homography;
            VectorOfKeyPoint modelKeyPoints;
            VectorOfKeyPoint observedKeyPoints;
            using (VectorOfVectorOfDMatch matches = new VectorOfVectorOfDMatch())
            {
                Mat mask;
                FindMatch(modelImage, observedImage, out matchTime, out modelKeyPoints, out observedKeyPoints, matches,
                   out mask, out homography);

                //Draw the matched keypoints
                Mat result = new Mat();
                Features2DToolbox.DrawMatches(modelImage, modelKeyPoints, observedImage, observedKeyPoints,
                   matches, result, new MCvScalar(255, 255, 255), new MCvScalar(255, 255, 255), mask);

                #region draw the projected region on the image

                if (homography != null)
                {
                    //draw a rectangle along the projected model
                    Rectangle rect = new Rectangle(Point.Empty, modelImage.Size);
                    PointF[] pts = new PointF[]
                    {
                        new PointF(rect.Left, rect.Bottom),
                        new PointF(rect.Right, rect.Bottom),
                        new PointF(rect.Right, rect.Top),
                        new PointF(rect.Left, rect.Top)
                    };
                    pts = CvInvoke.PerspectiveTransform(pts, homography);

                    Point[] points = Array.ConvertAll<PointF, Point>(pts, Point.Round);
                    using (VectorOfPoint vp = new VectorOfPoint(points))
                    {
                        CvInvoke.Polylines(result, vp, true, new MCvScalar(255, 0, 0, 255), 5);
                    }

                }

                #endregion

                return result;

            }
        }

        // ----------------------------------
        // WRITTEN BY MYSELF
        // ----------------------------------
        // Returns 4 points (usually rectangle) of similar points
        // but can't be used, since sometimes this is a line (negative 
        // points)
        public static Point[] FindPoints(Mat modelImage, Mat observedImage, out long matchTime)
        {
            Mat homography;
            VectorOfKeyPoint modelKeyPoints;
            VectorOfKeyPoint observedKeyPoints;
            using (VectorOfVectorOfDMatch matches = new VectorOfVectorOfDMatch())
            {
                Mat mask;
                FindMatch(modelImage, observedImage, out matchTime, out modelKeyPoints, out observedKeyPoints, matches,
                   out mask, out homography);

                //Draw the matched keypoints
                Mat result = new Mat();
                Features2DToolbox.DrawMatches(modelImage, modelKeyPoints, observedImage, observedKeyPoints,
                   matches, result, new MCvScalar(255, 255, 255), new MCvScalar(255, 255, 255), mask);

                Point[] points = null;
                if (homography != null)
                {
                    //draw a rectangle along the projected model
                    Rectangle rect = new Rectangle(Point.Empty, modelImage.Size);
                    PointF[] pts = new PointF[]
                    {
                        new PointF(rect.Left, rect.Bottom),
                        new PointF(rect.Right, rect.Bottom),
                        new PointF(rect.Right, rect.Top),
                        new PointF(rect.Left, rect.Top)
                    };
                    pts = CvInvoke.PerspectiveTransform(pts, homography);

                    points = Array.ConvertAll<PointF, Point>(pts, Point.Round);

                }

                return points;
            }
        }
    }
}
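
For context, here is a minimal sketch of how this class might be driven. The file names are hypothetical, and the ImreadModes enum may be named LoadImageType in older Emgu releases:

using System;
using Emgu.CV;
using Emgu.CV.CvEnum;

namespace FirstEmgu
{
    public static class Program
    {
        public static void Main()
        {
            long matchTime;
            // Hypothetical image paths; any two images sharing an object will do.
            using (Mat modelImage = CvInvoke.Imread("model.png", ImreadModes.Grayscale))
            using (Mat observedImage = CvInvoke.Imread("observed.png", ImreadModes.Grayscale))
            using (Mat result = DrawMatches.Draw(modelImage, observedImage, out matchTime))
            {
                CvInvoke.Imwrite("result.png", result);
                Console.WriteLine("Homography computed in " + matchTime + " ms");
            }
        }
    }
}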

EDIT

I managed to get points out of the matches object with the following:

Features2DToolbox.DrawMatches(modelImage, modelKeyPoints, observedImage, observedKeyPoints,
    matches, result, new MCvScalar(255, 255, 255), new MCvScalar(255, 255, 255), mask);

for (int i = 0; i < matches.Size; i++)
{
    var a = matches[i].ToArray();
    foreach (var e in a)
    {
        Point p = new Point(e.TrainIdx, e.QueryIdx);
        Console.WriteLine(string.Format("Point: {0}", p));
    }
    Console.WriteLine("-----------------------");
}

I figured this should get me the points. I managed to get it working in Python, and the code there is not much different. The problem is that too many points are returned. In fact, this returns all the points on Y.

Example:

(45,1), (67,1)

(656,2), (77,2)

...

This doesn't get me the points I want, although I might be close. Any suggestions are appreciated.

EDIT 2: This question, Find interest point in surf Detector Algorithm, is very similar to what I need. There is only one answer, but it doesn't explain how to get the matched points' coordinates. That is what I need: if there is an object present in both images, get the coordinates of that object's points from both images.

2 Answers

Stack Overflow user

Answered 2016-06-11 18:45:07

The coordinates are not made up of TrainIdx and QueryIdx; those are indices into the KeyPoints. The following gives the pixel coordinates of the matches between the model and the observed image.

for (int i = 0; i < matches.Size; i++)
{
    var arrayOfMatches = matches[i].ToArray();
    if (mask.GetData(i)[0] == 0) continue;
    foreach (var match in arrayOfMatches)
    {
        var matchingModelKeyPoint = modelKeyPoints[match.TrainIdx];
        var matchingObservedKeyPoint = observedKeyPoints[match.QueryIdx];
        Console.WriteLine("Model coordinate '" + matchingModelKeyPoint.Point + "' matches observed coordinate '" + matchingObservedKeyPoint.Point + "'.");
    }
}

The number of items in arrayOfMatches equals the value of K. As far as I know, the match with the lowest distance is the best one.
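Building on that, a minimal sketch (assuming the matches, mask, modelKeyPoints, and observedKeyPoints produced by FindMatch above) that keeps only the lowest-distance neighbour of each k-NN group and prints the coordinate pairs in the ((img1X, img1Y), (img2X, img2Y)) form the question asks for:

// Sketch only: keep the lowest-distance match of each k-NN group
// (matches, mask, modelKeyPoints, observedKeyPoints come from FindMatch).
for (int i = 0; i < matches.Size; i++)
{
    if (mask.GetData(i)[0] == 0)
        continue; // rejected by VoteForUniqueness

    MDMatch[] knn = matches[i].ToArray();
    if (knn.Length == 0)
        continue;

    // Pick the neighbour with the smallest descriptor distance.
    MDMatch best = knn[0];
    foreach (MDMatch m in knn)
        if (m.Distance < best.Distance)
            best = m;

    PointF modelPt = modelKeyPoints[best.TrainIdx].Point;
    PointF observedPt = observedKeyPoints[best.QueryIdx].Point;
    Console.WriteLine("((" + modelPt.X + "," + modelPt.Y + "),(" + observedPt.X + "," + observedPt.Y + "))");
}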

5 votes

Stack Overflow user

Answered 2016-04-07 06:39:55

In the FindMatch function, every point is validated by the VoteForUniqueness function. The result of this validation is stored in mask.

So all you have to do is check whether a match has been validated:

for (int i = 0; i < matches.Size; i++)
{
    var a = matches[i].ToArray();
    if (mask.GetData(i)[0] == 0)
        continue;
    foreach (var e in a)
    {
        Point p = new Point(e.TrainIdx, e.QueryIdx);
        Console.WriteLine(string.Format("Point: {0}", p));
    }
    Console.WriteLine("-----------------------");
}
2 votes
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/36269038