I'm doing some detection work with OpenCV, and I need to use the distance transform. The problem is that the image the distance transform function in OpenCV gives me is exactly the same as the image I used as the source. Does anyone know what I'm doing wrong? Here is the relevant part of my code:
cvSetData(depthImage, m_rgbWk, depthImage->widthStep);
// the OpenCV image from the Kinect is now in depthImage
IplImage *single_channel_depthImage = cvCreateImage(cvSize(320, 240), 8, 1);
cvSplit(depthImage, single_channel_depthImage, NULL, NULL, NULL);
//smoothing
IplImage *smoothed_image = cvCreateImage(cvSize(320, 240), 8, 1);
cvSmooth(single_channel_depthImage, smoothed_image, CV_MEDIAN, 9, 9, 0, 0);
//do canny edge detector
IplImage *edges_image = cvCreateImage(cvSize(320, 240), 8, 1);
cvCanny(smoothed_image, edges_image, 100, 200);
//invert values
IplImage *inverted_edges_image = cvCreateImage(cvSize(320, 240), 8, 1);
cvNot(edges_image, inverted_edges_image);
//calculate the distance transform
IplImage *distance_image = cvCreateImage(cvSize(320, 240), IPL_DEPTH_32F, 1);
cvZero(distance_image);
cvDistTransform(inverted_edges_image, distance_image, CV_DIST_L2, CV_DIST_MASK_PRECISE, NULL, NULL);
In short, I grab the image from the Kinect, turn it into a single-channel image, smooth it, run the Canny edge detector, invert the values, and then do the distance transform. But the transformed image looks exactly the same as the input image. What's wrong?
Thanks!
Posted on 2012-01-19 04:04:27
I believe the key here is that they only look the same. Here is a small program I wrote to show the difference:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
    Mat before = imread("qrcode.png", 0);
    Mat dist;
    distanceTransform(before, dist, CV_DIST_L2, 3);
    imshow("before", before);
    imshow("non-normalized", dist);
    normalize(dist, dist, 0.0, 1.0, NORM_MINMAX);
    imshow("normalized", dist);
    waitKey();
    return 0;
}
In the non-normalized image, you will see this:

[image: the non-normalized distance transform, which looks essentially unchanged from the input]
It doesn't look like anything changed, but the distance steps are very small compared to the overall range of values 0..255, so we can't see the difference. Let's normalize it...
Now we get this:

[image: the normalized distance transform, where the distance gradient is clearly visible]
The values themselves should be correct, but to see the difference when they are displayed, you need to normalize the image.
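If you are staying with the C API used in the question, a similar rescale-for-display step might look like the sketch below; display_image, min_val, max_val, and the window name are my own, and it assumes the distance_image from the question's code:
// Find the largest distance so we can stretch the values into 0..255.
double min_val = 0, max_val = 0;
cvMinMaxLoc(distance_image, &min_val, &max_val, NULL, NULL, NULL);
// Guard against an all-zero result before dividing.
if (max_val > 0)
{
    IplImage *display_image = cvCreateImage(cvGetSize(distance_image), 8, 1);
    cvConvertScale(distance_image, display_image, 255.0 / max_val, 0);
    cvShowImage("distance transform", display_image);
    cvWaitKey(0);
    cvReleaseImage(&display_image);
}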
EDIT: Here is a small 10x10 sample from the top-left corner of the dist matrix, which shows that the values actually do differ:
[10.954346, 10.540054, 10.125763, 9.7114716, 9.2971802, 8.8828888, 8.4685974, 8.054306, 7.6400146, 7.6400146;
10.540054, 9.5850525, 9.1707611, 8.7564697, 8.3421783, 7.927887, 7.5135956, 7.0993042, 6.6850128, 6.6850128;
10.125763, 9.1707611, 8.2157593, 7.8014679, 7.3871765, 6.9728851, 6.5585938, 6.1443024, 5.730011, 5.730011;
9.7114716, 8.7564697, 7.8014679, 6.8464661, 6.4321747, 6.0178833, 5.6035919, 5.1893005, 4.7750092, 4.7750092;
9.2971802, 8.3421783, 7.3871765, 6.4321747, 5.4771729, 5.0628815, 4.6485901, 4.2342987, 3.8200073, 3.8200073;
8.8828888, 7.927887, 6.9728851, 6.0178833, 5.0628815, 4.1078796, 3.6935883, 3.2792969, 2.8650055, 2.8650055;
8.4685974, 7.5135956, 6.5585938, 5.6035919, 4.6485901, 3.6935883, 2.7385864, 2.324295, 1.9100037, 1.9100037;
8.054306, 7.0993042, 6.1443024, 5.1893005, 4.2342987, 3.2792969, 2.324295, 1.3692932, 0.95500183, 0.95500183;
7.6400146, 6.6850128, 5.730011, 4.7750092, 3.8200073, 2.8650055, 1.9100037, 0.95500183, 0, 0;
7.6400146, 6.6850128, 5.730011, 4.7750092, 3.8200073, 2.8650055, 1.9100037, 0.95500183, 0, 0]
Posted on 2014-01-10 16:15:38
I just figured this one out. The OpenCV distanceTransform calculates the distance to the closest zero pixel for each pixel of the source image, so it expects your edge image to be negated: the edges themselves must be the zero pixels, otherwise every pixel sits right next to the zero background and the distances come out near zero everywhere. All you need to do is negate your edge image:
edges = 255 - edges;
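For reference, a minimal sketch of the pipeline with the negation applied, using the C++ API, could look like this; the input file name depth.png and the variable names are my assumptions, with the smoothing and Canny parameters taken from the question:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;

int main()
{
    // Load the depth frame as a single-channel 8-bit image.
    Mat depth = imread("depth.png", 0);
    Mat smoothed, edges, dist;
    medianBlur(depth, smoothed, 9);
    Canny(smoothed, edges, 100, 200);
    // Negate so the edges become the zero pixels the transform measures to.
    edges = 255 - edges;
    distanceTransform(edges, dist, CV_DIST_L2, CV_DIST_MASK_PRECISE);
    // Normalize for display, as described in the answer above.
    normalize(dist, dist, 0.0, 1.0, NORM_MINMAX);
    imshow("distance", dist);
    waitKey();
    return 0;
}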
Posted on 2012-05-17 00:43:24
You can print these values by placing the following code before the normalize call:
// Note: std::setw requires #include <iomanip>.
for (int x = 0; x < 10; x++)
{
    cout << endl;
    for (int y = 0; y < 10; y++)
        cout << std::setw(10) << dist.at<float>(x, y);
}
https://stackoverflow.com/questions/8915833