How can I capture and process every frame of an image using the CImg library?

Stack Overflow user
Asked on 2016-05-03 19:43:44
1 answer · 2.7K views · 2 votes

I am working on a real-time image-processing project on a Raspberry Pi, using the CImg library for the processing.

I need to capture images at a higher frame rate (at least 30 fps, say) when I use the built-in raspistill command.

sudo raspistill -o img_%d.jpg -tl 5 -t 1000 -a 512

/* -tl : time interval between shots in msec, -t : total duration (1000 msec = 1 sec), -a : annotate with the frame number */

With this command, 34 frames are shown per second, but I can only capture at most 4 frames/images per second (the remaining frames are skipped).

sudo raspistill -o img_%d.jpg -tl 5 -t 1000 -q 5 -md 7 -w 640 -h 480 -a 512

With the above command I can capture 7-8 images per second, but only by reducing the resolution and quality of the images.

However, I don't want to compromise on image quality, since I will capture an image, process it immediately, and then delete it to save memory.

Later I tried to use the V4L2 (Video for Linux) driver to get the best performance out of the camera, but tutorials covering V4L2 together with CImg are very scarce on the internet and I could not find one.

I have been using the following commands:

# Capture a JPEG image
v4l2-ctl --set-fmt-video=width=2592,height=1944,pixelformat=3
v4l2-ctl --stream-mmap=3 --stream-count=1 --stream-to=somefile.jpg

(Source: Module)

But I could not find enough information about what the options (stream-mmap & stream-count) actually are, and how these commands could help me capture 30 frames/images per second.

Conditions:

  1. Most importantly, I do not want to use OpenCV, MATLAB, or any other image-processing software, because my image-processing task is very simple (i.e. detecting the blinking of an LED light), and my goal is to have a lightweight tool that performs these operations with higher performance.
  2. Also, my code should be in C or C++, not Python or Java (because processing speed matters!)
  3. Please note that my aim is not to record a video, but to capture as many frames as possible and to process each individual image.

For CImg, I searched through several documents in the reference manual, but I could not clearly understand how to use it for my purpose.

The class cimg_library::CImgList represents a list of cimg_library::CImg images. It can be used, for instance, to store the different frames of an image sequence. (Source: overview.html)

  • I found the following example, but I am not quite sure whether it fits my task.

Load a list from a YUV image-sequence file.

CImg<T>& load_yuv(
    const char *const filename,
    const unsigned int size_x,
    const unsigned int size_y,
    const unsigned int first_frame = 0,
    const unsigned int last_frame = ~0U,
    const unsigned int step_frame = 1,
    const bool yuv2rgb = true
)

Parameters:
  • filename — the filename to read the data from.
  • size_x — the width of the images.
  • size_y — the height of the images.
  • first_frame — index of the first image frame to read.
  • last_frame — index of the last image frame to read.
  • step_frame — step applied between each frame.
  • yuv2rgb — apply YUV-to-RGB conversion while reading.

But here I need the RGB values from the image frames directly, without compression.

I now have the following OpenCV code that performs my task, but I would ask for your help to implement the same with the CImg library (in C++), with any other lightweight library, or with V4L2.

#include <iostream>
#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

int main (){
    VideoCapture capture (0); // Since you have your device at /dev/video0

    /* You can edit the capture properties with "capture.set (property, value);" or in the driver with "v4l2-ctl --set-ctrl=auto_exposure=1" */

    waitKey (200); // Wait 200 ms to ensure the device is open

    Mat frame; // create Matrix where the new frame will be stored
    if (capture.isOpened()){
        while (true){
            capture >> frame; // Put the new image in the Matrix

            imshow ("Image", frame); // function to show the image on the screen
            waitKey (1);             // needed so HighGUI actually renders the frame
        }
    }
    return 0;
}
  • I am a beginner at programming and with the Raspberry Pi, so please excuse any mistakes in the problem statement above.

Following some of your suggestions, I modified the raspicam C++ API code and combined it with the CImg image-processing functionality:

#include "CImg.h"
#include <iostream>
#include <cstdlib>
#include <fstream>
#include <sstream>
#include <sys/timeb.h>
#include "raspicam.h"
using namespace std;
using namespace cimg_library;

bool doTestSpeedOnly=false;
size_t nFramesCaptured=100;

//parse command line
//returns the index of a command line param in argv. If not found, return -1
int findParam ( string param,int argc,char **argv ) {
    int idx=-1;
    for ( int i=0; i<argc && idx==-1; i++ )
        if ( string ( argv[i] ) ==param ) idx=i;
    return idx;
}

//parse command line
//returns the value of a command line param. If not found, defvalue is returned
float getParamVal ( string param,int argc,char **argv,float defvalue=-1 ) {
    int idx=-1;
    for ( int i=0; i<argc && idx==-1; i++ )
        if ( string ( argv[i] ) ==param ) idx=i;

    if ( idx==-1 ) return defvalue;
    else return atof ( argv[idx+1] );
}




raspicam::RASPICAM_EXPOSURE getExposureFromString ( string str ) {
    if ( str=="OFF" ) return raspicam::RASPICAM_EXPOSURE_OFF;
    if ( str=="AUTO" ) return raspicam::RASPICAM_EXPOSURE_AUTO;
    if ( str=="NIGHT" ) return raspicam::RASPICAM_EXPOSURE_NIGHT;
    if ( str=="NIGHTPREVIEW" ) return raspicam::RASPICAM_EXPOSURE_NIGHTPREVIEW;
    if ( str=="BACKLIGHT" ) return raspicam::RASPICAM_EXPOSURE_BACKLIGHT;
    if ( str=="SPOTLIGHT" ) return raspicam::RASPICAM_EXPOSURE_SPOTLIGHT;
    if ( str=="SPORTS" ) return raspicam::RASPICAM_EXPOSURE_SPORTS;
    if ( str=="SNOW" ) return raspicam::RASPICAM_EXPOSURE_SNOW;
    if ( str=="BEACH" ) return raspicam::RASPICAM_EXPOSURE_BEACH;
    if ( str=="VERYLONG" ) return raspicam::RASPICAM_EXPOSURE_VERYLONG;
    if ( str=="FIXEDFPS" ) return raspicam::RASPICAM_EXPOSURE_FIXEDFPS;
    if ( str=="ANTISHAKE" ) return raspicam::RASPICAM_EXPOSURE_ANTISHAKE;
    if ( str=="FIREWORKS" ) return raspicam::RASPICAM_EXPOSURE_FIREWORKS;
    return raspicam::RASPICAM_EXPOSURE_AUTO;
}


raspicam::RASPICAM_AWB getAwbFromString ( string str ) {
    if ( str=="OFF" ) return raspicam::RASPICAM_AWB_OFF;
    if ( str=="AUTO" ) return raspicam::RASPICAM_AWB_AUTO;
    if ( str=="SUNLIGHT" ) return raspicam::RASPICAM_AWB_SUNLIGHT;
    if ( str=="CLOUDY" ) return raspicam::RASPICAM_AWB_CLOUDY;
    if ( str=="SHADE" ) return raspicam::RASPICAM_AWB_SHADE;
    if ( str=="TUNGSTEN" ) return raspicam::RASPICAM_AWB_TUNGSTEN;
    if ( str=="FLUORESCENT" ) return raspicam::RASPICAM_AWB_FLUORESCENT;
    if ( str=="INCANDESCENT" ) return raspicam::RASPICAM_AWB_INCANDESCENT;
    if ( str=="FLASH" ) return raspicam::RASPICAM_AWB_FLASH;
    if ( str=="HORIZON" ) return raspicam::RASPICAM_AWB_HORIZON;
    return raspicam::RASPICAM_AWB_AUTO;
}


void processCommandLine ( int argc,char **argv,raspicam::RaspiCam &Camera ) {
    Camera.setWidth ( getParamVal ( "-w",argc,argv,640 ) );
    Camera.setHeight ( getParamVal ( "-h",argc,argv,480 ) );
    Camera.setBrightness ( getParamVal ( "-br",argc,argv,50 ) );
    Camera.setSharpness ( getParamVal ( "-sh",argc,argv,0 ) );
    Camera.setContrast ( getParamVal ( "-co",argc,argv,0 ) );
    Camera.setSaturation ( getParamVal ( "-sa",argc,argv,0 ) );
    Camera.setShutterSpeed( getParamVal ( "-ss",argc,argv,0 ) );
    Camera.setISO ( getParamVal ( "-iso",argc,argv,400 ) );
    if ( findParam ( "-vs",argc,argv ) !=-1 )
        Camera.setVideoStabilization ( true );
    Camera.setExposureCompensation ( getParamVal ( "-ec",argc,argv,0 ) );

    if ( findParam ( "-gr",argc,argv ) !=-1 )
        Camera.setFormat(raspicam::RASPICAM_FORMAT_GRAY);
    if ( findParam ( "-yuv",argc,argv ) !=-1 )
        Camera.setFormat(raspicam::RASPICAM_FORMAT_YUV420);
    if ( findParam ( "-test_speed",argc,argv ) !=-1 )
        doTestSpeedOnly=true;
    int idx;
    if ( ( idx=findParam ( "-ex",argc,argv ) ) !=-1 )
        Camera.setExposure ( getExposureFromString ( argv[idx+1] ) );
    if ( ( idx=findParam ( "-awb",argc,argv ) ) !=-1 )
        Camera.setAWB( getAwbFromString ( argv[idx+1] ) );

    nFramesCaptured=getParamVal("-nframes",argc,argv,100);
    Camera.setAWB_RB(getParamVal("-awb_b",argc,argv,1), getParamVal("-awb_g",argc,argv,1));
}


//timer functions
#include <sys/time.h>
#include <unistd.h>
class Timer{
private:
    struct timeval _start, _end;

public:
    Timer(){}
    void start(){
        gettimeofday(&_start, NULL);
    }
    void end(){
        gettimeofday(&_end, NULL);
    }
    double getSecs(){
        return double(((_end.tv_sec - _start.tv_sec) * 1000 + (_end.tv_usec - _start.tv_usec)/1000.0) + 0.5)/1000.;
    }
};

void saveImage ( string filepath,unsigned char *data,raspicam::RaspiCam &Camera ) {
    std::ofstream outFile ( filepath.c_str(),std::ios::binary );
    if ( Camera.getFormat()==raspicam::RASPICAM_FORMAT_BGR || Camera.getFormat()==raspicam::RASPICAM_FORMAT_RGB ) {
        outFile<<"P6\n";
    } else if ( Camera.getFormat()==raspicam::RASPICAM_FORMAT_GRAY ) {
        outFile<<"P5\n";
    } else if ( Camera.getFormat()==raspicam::RASPICAM_FORMAT_YUV420 ) { //made up format
        outFile<<"P7\n";
    }
    outFile<<Camera.getWidth() <<" "<<Camera.getHeight() <<" 255\n";
    outFile.write ( ( char* ) data,Camera.getImageBufferSize() );
}


int main ( int argc,char **argv ) {

    int a=1,b=0,c;
    int x=444,y=129; //pixel coordinates
    raspicam::RaspiCam Camera;
    processCommandLine ( argc,argv,Camera );
    cout<<"Connecting to camera"<<endl;

    if ( !Camera.open() ) {
        cerr<<"Error opening camera"<<endl;
        return -1;
    }
    // cout<<"Connected to camera ="<<Camera.getId()<<" bufs="<<Camera.getImageBufferSize()<<endl;
    unsigned char *data=new unsigned char[Camera.getImageBufferSize()];
    Timer timer;

    // cout<<"Capturing...."<<endl;
    timer.start();

    for ( size_t i=0; i<nFramesCaptured; i++ ) {
        Camera.grab();
        Camera.retrieve ( data );
        std::stringstream fn;
        fn<<"/run/shm/image.jpg"; // save to the RAM disk so the read-back below finds it
        saveImage ( fn.str(),data,Camera );
        // cerr<<"Saving "<<fn.str()<<endl;
        CImg<float> Img("/run/shm/image.jpg");
        // Img.display("Window Title");

        // 9 PIXELS MATRIX GRAYSCALE VALUES
        float pixvalR1 = Img(x-1,y-1);
        float pixvalR2 = Img(x,y-1);
        float pixvalR3 = Img(x+1,y-1);
        float pixvalR4 = Img(x-1,y);
        float pixvalR5 = Img(x,y);
        float pixvalR6 = Img(x+1,y);
        float pixvalR7 = Img(x-1,y+1);
        float pixvalR8 = Img(x,y+1);
        float pixvalR9 = Img(x+1,y+1);
        // std::cout<<"coordinate value :"<<pixvalR5 << endl;

        // MEAN VALUE OF THE 9 PIXELS
        float light = (pixvalR1+pixvalR2+pixvalR3+pixvalR4+pixvalR5+pixvalR6+pixvalR7+pixvalR8+pixvalR9)/9 ;

        // DISPLAYING MEAN RGB VALUES OF 9 PIXELS
        // std::cout<<"Lightness value :"<<light << endl;

        // THRESHOLDING CONDITION
        c = (light > 130 ) ? a : b;
        // cout<<"Data is " << c <<endl;

        ofstream fout("c.txt", ios::app);
        fout<<c;
        fout.close();
    }

    timer.end();
    cerr<< timer.getSecs()<< " seconds for "<< nFramesCaptured << " frames : FPS " << ( ( float ) ( nFramesCaptured ) / timer.getSecs() ) <<endl;

    Camera.release();
    delete [] data;

    std::cin.ignore();
    return 0;
}
  • From this code I would like to know: how can I get at the data directly from Camera.retrieve(data) without having to store it as an image file, and how can I access the data in the image buffer so as to process it further and then discard the image?
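The 3x3 mean-and-threshold step in the loop above can be factored out into a small pure function, which also makes it easy to test off-device. This is an illustrative sketch, not part of the original code; the name ledState and the default threshold of 130 are taken from the values used above:

```cpp
// Mean of the 3x3 neighbourhood centred at (x,y) in a row-major 8-bit
// greyscale buffer with row stride w, thresholded at `thresh`.
// Returns 1 if the LED is judged "on", 0 otherwise.
// (Illustrative refactoring of the loop body above.)
int ledState(const unsigned char *img, int w, int x, int y,
             float thresh = 130.0f) {
    float sum = 0.0f;
    for (int dy = -1; dy <= 1; ++dy)        // the 9-pixel matrix
        for (int dx = -1; dx <= 1; ++dx)
            sum += img[(y + dy) * w + (x + dx)];
    float light = sum / 9.0f;               // mean value of the 9 pixels
    return (light > thresh) ? 1 : 0;        // thresholding condition
}
```

The per-frame work then reduces to one call, e.g. `c = ledState(data, width, x, y);`, operating directly on the capture buffer.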

Following Mark's suggestions, I made some small modifications to the code and got good results, but is there any way to improve the processing performance to reach a higher frame rate? With this code I can get at most 10 FPS.

#include <cstring>   // for std::memcpy
#include <ctime>
#include <fstream>
#include <iostream>
#include <thread>
#include <mutex>
#include <raspicam/raspicam.h>

// Don't want any X11 display by CImg
#define cimg_display 0

#include <CImg.h>

using namespace cimg_library;
using namespace std;

#define NFRAMES     1000
#define NTHREADS    2
#define WIDTH       640
#define HEIGHT      480

// Commands/status for the worker threads
#define WAIT    0
#define GO      1
#define GOING   2
#define EXIT    3
#define EXITED  4
volatile int command[NTHREADS];

// Serialize access to cout
std::mutex cout_mutex;

// CImg initialisation
// Create a WIDTH x HEIGHT greyscale (Y channel of YUV) image
// Create a globally-accessible CImg for main and workers to access
CImg<unsigned char> img(WIDTH,HEIGHT,1,1,128);

////////////////////////////////////////////////////////////////////////////////
// worker thread - There will 2 or more of these running in parallel with the
//                 main thread. Do any image processing in here.
////////////////////////////////////////////////////////////////////////////////
void worker (int id) {

   // If you need a "results" image of type CImg, create it here before entering
   // ... the main processing loop below - you don't want to do malloc()s in the
   // ... high-speed loop
   // CImg results...

   int wakeups=0;

   // Create a white for annotating
   unsigned char white[] = { 255,255,255 };

   while(true){
      // Busy wait with 500us sleep - at worst we only miss 50us of processing time per frame
      while((command[id]!=GO)&&(command[id]!=EXIT)){
         std::this_thread::sleep_for(std::chrono::microseconds(500));
      }
      if(command[id]==EXIT){command[id]=EXITED;break;}
      wakeups++;

      // Process frame of data - access CImg structure here
      command[id]=GOING;

      // You need to add your processing in HERE - everything from
      // ... 9 PIXELS MATRIX GRAYSCALE VALUES to
      // ... THRESHOLDING CONDITION
      int a=1,b=0,c;
      int x=330,y=84;

      // CImg<float> Img("/run/shm/result.png");
      float pixvalR1 = img(x-1,y-1);
      float pixvalR2 = img(x,y-1);
      float pixvalR3 = img(x+1,y-1);
      float pixvalR4 = img(x-1,y);
      float pixvalR5 = img(x,y);
      float pixvalR6 = img(x+1,y);
      float pixvalR7 = img(x-1,y+1);
      float pixvalR8 = img(x,y+1);
      float pixvalR9 = img(x+1,y+1);

      // MEAN VALUE OF THE 9 PIXELS
      float light = (pixvalR1+pixvalR2+pixvalR3+pixvalR4+pixvalR5+pixvalR6+pixvalR7+pixvalR8+pixvalR9)/9 ;

      // DISPLAYING MEAN RGB VALUES OF 9 PIXELS
      // std::cout<<"Lightness value :"<<light << endl;

      // THRESHOLDING CONDITION
      c = (light > 130 ) ? a : b;
      // cout<<"Data is " << c <<endl;

      // Note: both worker threads open and append to the same file every frame
      ofstream fout("c.txt", ios::app);
      fout<<c;
      fout.close();
      // Pretend to do some processing.
      // You need to delete the following "sleep_for" and "if(id==0...){...}"
     // std::this_thread::sleep_for(std::chrono::milliseconds(2));


    /*  if((id==0)&&(wakeups==NFRAMES)){
        //  Annotate final image and save as PNG
          img.draw_text(100,100,"Hello World",white);
         img.save_png("result.png");
      } */
   }

   cout_mutex.lock();
   std::cout << "Thread[" << id << "]: Received " << wakeups << " wakeups" << std::endl;
   cout_mutex.unlock();
}

//timer functions
#include <sys/time.h>
#include <unistd.h>
class Timer{
private:
    struct timeval _start, _end;

public:
    Timer(){}
    void start(){
        gettimeofday(&_start, NULL);
    }
    void end(){
        gettimeofday(&_end, NULL);
    }
    double getSecs(){
        return double(((_end.tv_sec - _start.tv_sec) * 1000 + (_end.tv_usec - _start.tv_usec)/1000.0) + 0.5)/1000.;
    }
};

int main ( int argc,char **argv ) {

Timer timer;
   raspicam::RaspiCam Camera;
   // Allowable values: RASPICAM_FORMAT_GRAY,RASPICAM_FORMAT_RGB,RASPICAM_FORMAT_BGR,RASPICAM_FORMAT_YUV420
   Camera.setFormat(raspicam::RASPICAM_FORMAT_YUV420);

   // Allowable widths: 320, 640, 1280
   // Allowable heights: 240, 480, 960
   // setCaptureSize(width,height)
   Camera.setCaptureSize(WIDTH,HEIGHT);

   std::cout << "Main: Starting"  << std::endl;
   std::cout << "Main: NTHREADS:" << NTHREADS << std::endl;
   std::cout << "Main: NFRAMES:"  << NFRAMES  << std::endl;
   std::cout << "Main: Width: "   << Camera.getWidth()  << std::endl;
   std::cout << "Main: Height: "  << Camera.getHeight() << std::endl;

   // Spawn worker threads - making sure they are initially in WAIT state
   std::thread threads[NTHREADS];
   for(int i=0; i<NTHREADS; ++i){
      command[i]=WAIT;
      threads[i] = std::thread(worker,i);
   }

   // Open camera
   cout<<"Opening Camera..."<<endl;
   if ( !Camera.open()) {cerr<<"Error opening camera"<<endl;return -1;}

   // Wait until camera stabilizes
   std::cout<<"Sleeping for 3 secs"<<endl;
   std::this_thread::sleep_for(std::chrono::seconds(3));
 timer.start();
   for(int frame=0;frame<NFRAMES;frame++){
      // Capture frame
      Camera.grab();

      // Copy just the Y component to our mono CImg
      std::memcpy(img._data,Camera.getImageBufferData(),WIDTH*HEIGHT);

      // Notify worker threads that data is ready for processing
      for(int i=0; i<NTHREADS; ++i){
         command[i]=GO;
      }
   }
timer.end();
cerr<< timer.getSecs()<< " seconds for "<< NFRAMES << "  frames : FPS " << ( ( float ) ( NFRAMES ) / timer.getSecs() ) << endl;
   // Let workers process final frame, then tell to exit
 //  std::this_thread::sleep_for(std::chrono::milliseconds(50));

   // Notify worker threads to exit
   for(int i=0; i<NTHREADS; ++i){
      command[i]=EXIT;
   }

   // Wait for all threads to finish
   for(auto& th : threads) th.join();
}

Compile command used to build the code:

g++ -std=c++11 /home/pi/raspicam/src/raspicimgthread.cpp -o threadraspicimg -I. -I/usr/local/include -L /opt/vc/lib -L /usr/local/lib -lraspicam -lmmal -lmmal_core -lmmal_util -O2 -L/usr/X11R6/lib -lm -lpthread -lX11

RESULTS:
Main: Starting
Main: NTHREADS:2
Main: NFRAMES:1000
Main: Width: 640
Main: Height: 480
Opening Camera...
Sleeping for 3 secs
99.9194 seconds for 1000  frames : FPS 10.0081
Thread[1]: Received 1000 wakeups
Thread[0]: Received 1000 wakeups

real    1m43.198s
user    0m2.060s
sys     0m5.850s

One more question: when I used the plain Raspicam C++ API code to perform the same task (the code I mentioned earlier), I got almost the same results, with only a small performance improvement (to be precise, my frame rate went from 9.4 FPS up to 10 FPS).

But in code 1:

I keep saving the images to a RAM disk for processing and then delete them. I do not use any threads for parallel processing.

In code 2:

We do not save any images to disk and we process them directly from the buffer. We also use threads to improve the processing speed.

Unfortunately, even though we made these changes from code 1 to code 2, I could not reach the desired result (which would be 30 FPS).

Awaiting your suggestions; any help is really appreciated.

Thanks in advance

Regards, BLV Lohith Kumar


1 Answer

Stack Overflow user

Accepted answer

Answered on 2016-05-25 15:18:45

Updated Answer

I have updated my original answer here to show how to copy the acquired data into a CImg structure, and to show two worker threads that could then process the image while the main thread continues to acquire frames at full speed. It achieves 60 frames per second.

I have not done any processing in the worker threads because I don't know what you want to do. All I have done is save the last frame to disk to show that the acquisition into a CImg works. You could have 3 worker threads. You could pass a frame to each thread in round-robin fashion, or you could have each of 2 threads process half of each frame at every iteration. Or each of 3 threads process a third of a frame. You could change the polled wakeups to use condition variables.
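That last suggestion can be sketched with std::condition_variable: the worker sleeps until the main loop signals that a new frame is ready, instead of polling every 500 us. The following is a minimal, self-contained sketch of the idea only; frame sequence numbers stand in for real image buffers, and the names runPipeline, latest, and done are illustrative, not part of the original code. As with the polled version, a slow worker simply skips ahead to the newest frame:

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// Feed `nframes` frames to one worker via a condition variable and
// return how many frames the worker actually handled.
int runPipeline(int nframes) {
    std::mutex m;
    std::condition_variable cv;
    int latest = -1;      // sequence number of the newest "grabbed" frame
    bool done = false;
    int processed = 0;    // frames the worker actually handled

    std::thread worker([&]{
        int seen = -1;
        std::unique_lock<std::mutex> lk(m);
        for (;;) {
            cv.wait(lk, [&]{ return latest != seen || done; });
            if (latest != seen) {
                seen = latest;
                ++processed;  // stand-in for the real 3x3 mean/threshold work
            } else if (done) {
                break;
            }
        }
    });

    for (int f = 0; f < nframes; ++f) {
        { std::lock_guard<std::mutex> lg(m); latest = f; }  // "grab" a frame
        cv.notify_one();                                    // wake the worker
    }
    { std::lock_guard<std::mutex> lg(m); done = true; }
    cv.notify_one();
    worker.join();
    return processed;
}
```

Compared with the 500 us polling loop, the worker wakes only when notified, so it burns no CPU between frames; the trade-off is that the producer briefly takes the mutex on every frame.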

#include <cstring>   // for std::memcpy
#include <ctime>
#include <fstream>
#include <iostream>
#include <thread>
#include <mutex>
#include <raspicam/raspicam.h>

// Don't want any X11 display by CImg
#define cimg_display 0

#include <CImg.h>

using namespace cimg_library;
using namespace std;

#define NFRAMES     1000
#define NTHREADS    2
#define WIDTH       1280
#define HEIGHT      960

// Commands/status for the worker threads
#define WAIT    0
#define GO      1
#define GOING   2
#define EXIT    3
#define EXITED  4
volatile int command[NTHREADS];

// Serialize access to cout
std::mutex cout_mutex;

// CImg initialisation
// Create a 1280x960 greyscale (Y channel of YUV) image
// Create a globally-accessible CImg for main and workers to access
CImg<unsigned char> img(WIDTH,HEIGHT,1,1,128);

////////////////////////////////////////////////////////////////////////////////
// worker thread - There will 2 or more of these running in parallel with the
//                 main thread. Do any image processing in here.
////////////////////////////////////////////////////////////////////////////////
void worker (int id) {

   // If you need a "results" image of type CImg, create it here before entering
   // ... the main processing loop below - you don't want to do malloc()s in the
   // ... high-speed loop
   // CImg results...

   int wakeups=0;

   // Create a white for annotating
   unsigned char white[] = { 255,255,255 };

   while(true){
      // Busy wait with 500us sleep - at worst we only miss 50us of processing time per frame
      while((command[id]!=GO)&&(command[id]!=EXIT)){
         std::this_thread::sleep_for(std::chrono::microseconds(500));
      }
      if(command[id]==EXIT){command[id]=EXITED;break;}
      wakeups++;

      // Process frame of data - access CImg structure here
      command[id]=GOING;

      // You need to add your processing in HERE - everything from
      // ... 9 PIXELS MATRIX GRAYSCALE VALUES to
      // ... THRESHOLDING CONDITION

      // Pretend to do some processing.
      // You need to delete the following "sleep_for" and "if(id==0...){...}"
      std::this_thread::sleep_for(std::chrono::milliseconds(2));

      if((id==0)&&(wakeups==NFRAMES)){
         // Annotate final image and save as PNG
         img.draw_text(100,100,"Hello World",white);
         img.save_png("result.png");
      }
   }

   cout_mutex.lock();
   std::cout << "Thread[" << id << "]: Received " << wakeups << " wakeups" << std::endl;
   cout_mutex.unlock();
}

int main ( int argc,char **argv ) {

   raspicam::RaspiCam Camera;
   // Allowable values: RASPICAM_FORMAT_GRAY,RASPICAM_FORMAT_RGB,RASPICAM_FORMAT_BGR,RASPICAM_FORMAT_YUV420
   Camera.setFormat(raspicam::RASPICAM_FORMAT_YUV420);

   // Allowable widths: 320, 640, 1280
   // Allowable heights: 240, 480, 960
   // setCaptureSize(width,height)
   Camera.setCaptureSize(WIDTH,HEIGHT);

   std::cout << "Main: Starting"  << std::endl;
   std::cout << "Main: NTHREADS:" << NTHREADS << std::endl;
   std::cout << "Main: NFRAMES:"  << NFRAMES  << std::endl;
   std::cout << "Main: Width: "   << Camera.getWidth()  << std::endl;
   std::cout << "Main: Height: "  << Camera.getHeight() << std::endl;

   // Spawn worker threads - making sure they are initially in WAIT state
   std::thread threads[NTHREADS];
   for(int i=0; i<NTHREADS; ++i){
      command[i]=WAIT;
      threads[i] = std::thread(worker,i);
   }

   // Open camera
   cout<<"Opening Camera..."<<endl;
   if ( !Camera.open()) {cerr<<"Error opening camera"<<endl;return -1;}

   // Wait until camera stabilizes
   std::cout<<"Sleeping for 3 secs"<<endl;
   std::this_thread::sleep_for(std::chrono::seconds(3));

   for(int frame=0;frame<NFRAMES;frame++){
      // Capture frame
      Camera.grab();

      // Copy just the Y component to our mono CImg
      std::memcpy(img._data,Camera.getImageBufferData(),WIDTH*HEIGHT);

      // Notify worker threads that data is ready for processing
      for(int i=0; i<NTHREADS; ++i){
         command[i]=GO;
      }
   }

   // Let workers process final frame, then tell to exit
   std::this_thread::sleep_for(std::chrono::milliseconds(50));

   // Notify worker threads to exit
   for(int i=0; i<NTHREADS; ++i){
      command[i]=EXIT;
   }

   // Wait for all threads to finish
   for(auto& th : threads) th.join();
}
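The memcpy in the capture loop above works because of the planar YUV420 layout: the buffer begins with a full-resolution 8-bit Y (luma) plane of WIDTH*HEIGHT bytes, followed by the quarter-size U and V planes, so copying the first WIDTH*HEIGHT bytes yields exactly the greyscale image the LED-detection task needs. A small sketch of that layout (the helper names are illustrative):

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// YUV420 planar layout: [Y: w*h bytes][U: w*h/4 bytes][V: w*h/4 bytes].

// Total size in bytes of one YUV420 frame.
std::size_t yuv420FrameSize(int w, int h) {
    return static_cast<std::size_t>(w) * h * 3 / 2;
}

// Copy just the Y (greyscale) plane out of a YUV420 buffer.
std::vector<unsigned char> extractY(const unsigned char *yuv, int w, int h) {
    std::vector<unsigned char> y(static_cast<std::size_t>(w) * h);
    std::memcpy(y.data(), yuv, y.size());   // Y plane is the first w*h bytes
    return y;
}
```

For 1280x960 this gives a 1,843,200-byte frame of which the first 1,228,800 bytes are the luma plane, matching the memcpy into the mono CImg.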

A note on timing

You can time the code like this:

#include <chrono>

typedef std::chrono::high_resolution_clock hrclock;

hrclock::time_point t1,t2;

t1 = hrclock::now();
// do something that needs timing
t2 = hrclock::now();

std::chrono::nanoseconds elapsed = t2-t1;
long long nanoseconds=elapsed.count();
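The same clock can also produce the FPS figure that the question's code prints; this is an illustrative helper (the name fps is not from the original), sketched under the assumption that you record a time point before and after the capture loop:

```cpp
#include <chrono>

typedef std::chrono::high_resolution_clock hrclock;

// Frames-per-second from a frame count and two time points.
double fps(long frames, hrclock::time_point t1, hrclock::time_point t2) {
    std::chrono::duration<double> secs = t2 - t1;   // elapsed wall time in seconds
    return frames / secs.count();
}
```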

Original Answer

I have been doing some experiments with Raspicam. I downloaded their code from SourceForge and modified it slightly to do some simple, capture-only tests. The code I ended up using looks like this:

#include <ctime>
#include <fstream>
#include <iostream>
#include <raspicam/raspicam.h>
#include <unistd.h> // for usleep()
using namespace std;

#define NFRAMES 1000

int main ( int argc,char **argv ) {

    raspicam::RaspiCam Camera;
    // Allowable values: RASPICAM_FORMAT_GRAY,RASPICAM_FORMAT_RGB,RASPICAM_FORMAT_BGR,RASPICAM_FORMAT_YUV420
    Camera.setFormat(raspicam::RASPICAM_FORMAT_YUV420);

    // Allowable widths: 320, 640, 1280
    // Allowable heights: 240, 480, 960
    // setCaptureSize(width,height)
    Camera.setCaptureSize(1280,960);

    // Open camera 
    cout<<"Opening Camera..."<<endl;
    if ( !Camera.open()) {cerr<<"Error opening camera"<<endl;return -1;}

    // Wait until camera stabilizes
    cout<<"Sleeping for 3 secs"<<endl;
    usleep(3000000);
    cout << "Grabbing " << NFRAMES << " frames" << endl;

    // Allocate memory
    unsigned long bytes=Camera.getImageBufferSize();
    cout << "Width: "  << Camera.getWidth() << endl;
    cout << "Height: " << Camera.getHeight() << endl;
    cout << "ImageBufferSize: " << bytes << endl;;
    unsigned char *data=new unsigned char[bytes];

    for(int frame=0;frame<NFRAMES;frame++){
       // Capture frame
       Camera.grab();

       // Extract the image
       Camera.retrieve ( data,raspicam::RASPICAM_FORMAT_IGNORE );

       // Wake up a thread here to process the frame with CImg
    }
    return 0;
}

I don't like cmake, so I just compiled like this:

g++ -std=c++11 simpletest.c -o simpletest -I. -I/usr/local/include -L /opt/vc/lib -L /usr/local/lib -lraspicam -lmmal -lmmal_core -lmmal_util

I found that it could achieve 30 fps (frames per second) regardless of the image dimensions and more or less regardless of the encoding (RGB, BGR, GRAY).

The only way I could do better was by making the following changes:

  • Use RASPICAM_FORMAT_YUV420 rather than anything else in the code above
  • Edit the file private_impl.cpp and change line 71 to set the framerate to 90.

If I did that, I could achieve 66 fps.

As the Raspberry Pi is only a fairly lowly 900 MHz CPU but has 4 cores, I guess you would want to start 1-3 extra threads outside the loop at the start, and then wake one or more of them up at the point I have noted in the code, to process the data. The first thing they would do is copy the data out of the acquisition buffer before the next frame starts - or have multiple buffers and use them in a round-robin fashion.
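The multiple-buffer idea at the end can be sketched as a small ring of frame buffers: the grab loop fills the next buffer in the ring, so a worker can still read the previous frame while the camera writes the next one. This is an illustrative sketch (FrameRing and kNumBufs are made-up names, and synchronisation is deliberately omitted; the GO/GOING flags or condition variables would provide it):

```cpp
#include <array>
#include <cstddef>
#include <vector>

constexpr int kNumBufs = 3;   // illustrative; any count >= 2 works

struct FrameRing {
    std::array<std::vector<unsigned char>, kNumBufs> bufs;
    int next = 0;   // index of the buffer the next frame should fill

    explicit FrameRing(std::size_t frameBytes) {
        for (std::vector<unsigned char> &b : bufs) b.resize(frameBytes);
    }

    // Hand out buffers in round-robin order: frame N goes into
    // bufs[N % kNumBufs], so the previous frame stays readable.
    std::vector<unsigned char>& acquireBuffer() {
        std::vector<unsigned char> &b = bufs[next];
        next = (next + 1) % kNumBufs;
        return b;
    }
};
```

In the capture loop you would then memcpy each grabbed frame into `ring.acquireBuffer()` instead of a single shared image.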

A note on threads

In the diagram below (not reproduced here), green represents Camera.grab() acquiring the image, and red represents the processing you do once the image has been acquired. At the moment, you acquire the data (green) and then process it (red) before you can acquire the next frame. Note that 3 of your 4 CPUs do nothing.

What I am suggesting is that you offload the processing (red) to the other CPUs/threads and keep acquiring new data (green) as fast as you possibly can. Like this:

Now you get more frames (green) per second.

1 vote
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/37013039
