
Sending canvas image data over WebSocket to a C++ OpenCV backend for real-time web image processing

Pulsar-V · Last modified 2019-04-17 19:26:53

I had been mulling over how to couple the frontend and backend for quite a while, thinking about it on the way to and from class, and finally worked something out. It came from an earlier computer-vision project: the boss wanted it hooked up to a web frontend, which produced a rather odd requirement. The frontend opens the camera, and the camera frames have to be streamed back to the backend for image processing (beautification filters, putting a decoration on someone's head, that sort of thing). That means the frontend and the server have to agree on how the image data is encoded. Since any image in memory is ultimately just a matrix of uchar values, I worked out the approach below.

In general, an image in memory is just a string of uchar values, in other words a byte stream. Because I often write cross-language bindings, I usually exchange data in memory as strings and byte streams, and I applied the same idea here: the OpenCV image is encoded as PNG, that byte stream is then encoded as base64, and the result is sent back to the frontend over the WebSocket.
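On the server, that conversion boils down to two calls. Here is a minimal sketch (assuming img holds the cv::Mat to send; base64Encode is the helper listed in full in the server code below):

// minimal sketch: Mat -> PNG byte stream -> base64 text
std::vector<uchar> buf;
cv::imencode(".png", img, buf);                               // Mat -> PNG bytes
std::string b64 = base64Encode(buf.data(), (int)buf.size());  // PNG bytes -> base64 string
// b64 is then pushed to the browser as a WebSocket text frame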

The overall flow is as follows: the frontend opens a WebSocket connection to the backend; once connected, it opens the camera and sends the camera frames to the backend (as raw JPEG bytes in this example); the backend runs its image-processing / machine-learning pipeline, re-encodes the result, and sends it back to the frontend.

Frontend code:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Title</title>
</head>
<body>
<video id="video" style="display: none" width="480" height="320" controls></video>

<canvas id="canvas" width="480" height="320"></canvas>
<img id="target" width="480" height="320"></img>
<script>
    var video = document.getElementById('video');
    var canvas = document.getElementById('canvas');
    var image = document.getElementById('target');
    var context = canvas.getContext('2d');
    var ws = new WebSocket("ws://127.0.0.1:9002");
    ws.binaryType = "arraybuffer";

    ws.onopen = function() {
        ws.send("I'm client");
    };

    ws.onmessage = function (evt) {
        console.log("resive");
        try{
            //显示后端回传回来的base64图像
            image.src="data:image/png;base64,"+evt.data;
            console.log(evt.data);
        }catch{

        }

    };

    ws.onclose = function() {
        alert("Closed");
    };

    ws.onerror = function(err) {
        alert("Error: " + err);
    };
    function getUserMedia(constraints, success, error) {
        if (navigator.mediaDevices.getUserMedia) {
            navigator.mediaDevices.getUserMedia(constraints).then(success).catch(error);
        }
    }
    // success callback
    function success(stream){
        video.srcObject=stream;
        video.play();
    }
    function error(error) {
        console.log('Failed to access user media:', error.name, error.message);
    }

    // convert the canvas's base64 data URL into a Blob of raw image bytes
    function dataURItoBlob(dataURI) {
        // convert base64/URLEncoded data component to raw binary data held in a string
        var byteString;
        if (dataURI.split(',')[0].indexOf('base64') >= 0)
            byteString = atob(dataURI.split(',')[1]);
        else
            byteString = unescape(dataURI.split(',')[1]);

        // separate out the mime component
        var mimeString = dataURI.split(',')[0].split(':')[1].split(';')[0];

        // write the bytes of the string to a typed array
        var ia = new Uint8Array(byteString.length);
        for (var i = 0; i < byteString.length; i++) {
            ia[i] = byteString.charCodeAt(i);
        }

        return new Blob([ia], {type:mimeString});
    }
    if ((navigator.mediaDevices && navigator.mediaDevices.getUserMedia) || navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia) {
        // access the user's media devices and open the camera
        getUserMedia({video: {width: 480, height: 320}}, success, error);
        var timer = setInterval(
            function () {
                context.drawImage(video, 0, 0, 480, 320);
                var data = canvas.toDataURL('image/jpeg', 1.0);
                var newblob = dataURItoBlob(data);
                // send the converted frame to the backend as a binary blob
                ws.send(newblob);
            }, 100); // throttle the frontend: if the backend is not fast enough for real time (mine runs a fairly heavy pipeline), the client has to wait between frames, so a frame is sent every 100 ms here

    } else {
        alert('Accessing user media is not supported');
    }

</script>
</body>
</html>

C++ server side (this uses websocket++, which readers will need to build themselves):

opencv_websocket_server.h

//
// Created by Pulsar on 2019/4/16.
//

#ifndef WEBSOCKETPP_OPENCV_WEBSOCKET_H
#define WEBSOCKETPP_OPENCV_WEBSOCKET_H

#include <opencv2/opencv.hpp>

#include <boost/thread/thread.hpp>
//#include <boost/bind.hpp>
#include <boost/thread/mutex.hpp>
#include <websocketpp/config/asio_no_tls.hpp>
#include <websocketpp/server.hpp>


typedef websocketpp::server<websocketpp::config::asio> WebsocketServer;
typedef WebsocketServer::message_ptr message_ptr;

class opencv_websocket {
public:
    opencv_websocket(std::string file_path);
    void Run(int port);
    ~opencv_websocket();
};


#endif //WEBSOCKETPP_OPENCV_WEBSOCKET_H

opencv_websocket_server.cpp

//
// Created by Pulsar on 2019/4/16.
//

#include "opencv_websocket_server.h"
//using websocketpp::lib::placeholders::_1;
//using websocketpp::lib::placeholders::_2;
//using websocketpp::lib::bind;
boost::shared_mutex  read_write_mutex;
boost::mutex lock;
cv::CascadeClassifier cascade;
// decode base64 data
static std::string base64Decode(const char *Data, int DataByte) {
    // decode table
    const char DecodeTable[] =
            {
                    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                    62, // '+'
                    0, 0, 0,
                    63, // '/'
                    52, 53, 54, 55, 56, 57, 58, 59, 60, 61, // '0'-'9'
                    0, 0, 0, 0, 0, 0, 0,
                    0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
                    13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, // 'A'-'Z'
                    0, 0, 0, 0, 0, 0,
                    26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38,
                    39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, // 'a'-'z'
            };
    std::string strDecode;
    int nValue;
    int i = 0;
    while (i < DataByte) {
        if (*Data != '\r' && *Data != '\n') {
            nValue = DecodeTable[*Data++] << 18;
            nValue += DecodeTable[*Data++] << 12;
            strDecode += (nValue & 0x00FF0000) >> 16;
            if (*Data != '=') {
                nValue += DecodeTable[*Data++] << 6;
                strDecode += (nValue & 0x0000FF00) >> 8;
                if (*Data != '=') {
                    nValue += DecodeTable[*Data++];
                    strDecode += nValue & 0x000000FF;
                }
            }
            i += 4;
        } else {
            Data++;
            i++;
        }
    }
    return strDecode;
}

// encode raw bytes to base64
static std::string base64Encode(const unsigned char *Data, int DataByte) {
    // encode table
    const char EncodeTable[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    // result string
    std::string strEncode;
    unsigned char Tmp[4] = {0};
    int LineLength = 0;
    for (int i = 0; i < (int) (DataByte / 3); i++) {
        Tmp[1] = *Data++;
        Tmp[2] = *Data++;
        Tmp[3] = *Data++;
        strEncode += EncodeTable[Tmp[1] >> 2];
        strEncode += EncodeTable[((Tmp[1] << 4) | (Tmp[2] >> 4)) & 0x3F];
        strEncode += EncodeTable[((Tmp[2] << 2) | (Tmp[3] >> 6)) & 0x3F];
        strEncode += EncodeTable[Tmp[3] & 0x3F];
        if (LineLength += 4, LineLength == 76) {
            strEncode += "\r\n";
            LineLength = 0;
        }
    }
    // encode the remaining bytes and pad with '='
    int Mod = DataByte % 3;
    if (Mod == 1) {
        Tmp[1] = *Data++;
        strEncode += EncodeTable[(Tmp[1] & 0xFC) >> 2];
        strEncode += EncodeTable[((Tmp[1] & 0x03) << 4)];
        strEncode += "==";
    } else if (Mod == 2) {
        Tmp[1] = *Data++;
        Tmp[2] = *Data++;
        strEncode += EncodeTable[(Tmp[1] & 0xFC) >> 2];
        strEncode += EncodeTable[((Tmp[1] & 0x03) << 4) | ((Tmp[2] & 0xF0) >> 4)];
        strEncode += EncodeTable[((Tmp[2] & 0x0F) << 2)];
        strEncode += "=";
    }


    return strEncode;
}

// imgType: any extension OpenCV can encode/decode, e.g. png, bmp, jpg, jpeg
static std::string Mat2Base64(const cv::Mat &img, std::string imgType) {
    // Mat -> base64
    std::string img_data;
    std::vector<uchar> vecImg;
    std::vector<int> vecCompression_params;
    vecCompression_params.push_back(cv::IMWRITE_JPEG_QUALITY);
    vecCompression_params.push_back(90);
    imgType = "." + imgType;
    // the key call: cv::imencode turns the OpenCV Mat into an encoded image byte stream
    cv::imencode(imgType, img, vecImg, vecCompression_params);
    img_data = base64Encode(vecImg.data(), vecImg.size());
    return img_data;
}

// base64 -> Mat
static cv::Mat Base2Mat(std::string &base64_data) {
    cv::Mat img;
    std::string s_mat;
    s_mat = base64Decode(base64_data.data(), base64_data.size());
    std::vector<char> base64_img(s_mat.begin(), s_mat.end());
    img = cv::imdecode(base64_img, cv::IMREAD_COLOR);
    return img;
}


void OnOpen(WebsocketServer *server, websocketpp::connection_hdl hdl) {
    std::cout << "have client connected" << std::endl;
}

void OnClose(WebsocketServer *server, websocketpp::connection_hdl hdl) {
    std::cout << "have client disconnected" << std::endl;
}

void OnMessage(WebsocketServer *server, websocketpp::connection_hdl hdl, message_ptr msg) {
    std::string image_str = msg->get_payload();
    std::vector<char> img_vec(image_str.begin(), image_str.end());
    try {
        // decode the image bytes sent by the frontend
        cv::Mat img = cv::imdecode(img_vec, cv::IMREAD_COLOR);
        if (!img.empty()) {
//            cv::imshow("", img);
            std::vector<cv::Rect> faces;
            lock.lock();
//            cascade.detectMultiScale(img, faces, 1.1, 3, 0, cv::Size(30, 30));
//            for (size_t t = 0; t < faces.size(); t++){
//                cv::rectangle(img, faces[t], cv::Scalar(0, 0, 255), 2, 8);
//            }
            lock.unlock();
            cv::Mat output = img;
            if (!output.empty()) {
                // encode the processed image back to base64 and return it to the frontend
                // (png here matches the "data:image/png;base64," prefix used on the frontend)
                std::string strRespon = Mat2Base64(output, "png");
                server->send(hdl, strRespon, websocketpp::frame::opcode::text);
            }
//            cv::waitKey(10);
        }
    }
    catch (const std::exception &) {
        std::cout << " 解码异常" << std::endl;
    }
}

opencv_websocket::opencv_websocket(std::string file_path) {
    // name of the trained cascade file, placed in the same directory as the executable
    if (!cascade.load(file_path)) perror("Load Model Error");
}

opencv_websocket::~opencv_websocket() {

}

void opencv_websocket::Run(int port) {
    WebsocketServer server;
    server.set_access_channels(websocketpp::log::alevel::all);
    server.clear_access_channels(websocketpp::log::alevel::frame_payload);

    // Initialize Asio
    server.init_asio();

    // Register our message handler
    server.set_open_handler(websocketpp::lib::bind(&OnOpen, &server, ::websocketpp::lib::placeholders::_1));
    server.set_close_handler(websocketpp::lib::bind(&OnClose, &server, websocketpp::lib::placeholders::_1));
    server.set_message_handler(websocketpp::lib::bind(OnMessage, &server, websocketpp::lib::placeholders::_1, websocketpp::lib::placeholders::_2));
    // Listen on port 9002
    server.listen(port);

    // Start the server accept loop
    server.start_accept();

    // Start the ASIO io_service run loop
    server.run();
}

int main(int argc, char **argv) {

    std::cout<<"[INFO] load model"<<std::endl;
    opencv_websocket opencv_websocket_server("haarcascade_frontalface_alt.xml");
    std::cout<<"[INFO] start server"<<std::endl;
    opencv_websocket_server.Run(9002);
    std::cout<<"[INFO] listen"<<std::endl;
    getchar();
    return 0;
}
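
Since the whole scheme hinges on the client- and server-side encoding staying consistent, a quick round-trip check is handy. The snippet below is only a hypothetical sketch, not part of the original project; it assumes it lives in the same translation unit as the Mat2Base64 / Base2Mat helpers above (for example, called from main() before Run()):

// (sketch) round-trip a synthetic frame through the helpers above;
// PNG is lossless, so the reported difference should be 0
void SelfTest() {
    cv::Mat src(320, 480, CV_8UC3, cv::Scalar(30, 60, 90)); // synthetic 480x320 BGR frame
    std::string b64 = Mat2Base64(src, "png");                // Mat -> PNG -> base64
    cv::Mat back = Base2Mat(b64);                            // base64 -> PNG -> Mat
    std::cout << "round-trip L1 diff: " << cv::norm(src, back, cv::NORM_L1) << std::endl;
}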

The project source is available at:

https://gitee.com/Luciferearth/websocketpp

under example\opencv_websocket_server.

Note that websocketpp's build dependencies need to be adjusted to compile on Windows.

Remove (simply comment everything out) the CMake configuration for the following example targets:

iostream_server

testee_server

testee_client

utility_client

In CMakeLists.txt, after the line

set (WEBSOCKETPP_LIB ${WEBSOCKETPP_BUILD_ROOT}/lib)

add the following build settings:

#########################################OpenSSL#######################################
set(OPENSSL_INCLUDE_DIR D:/pgsql/include)
set(OPENSSL_LIBRARIES D:/pgsql/lib/ssleay32MD.lib;D:/pgsql/lib/libeay32MD.lib)
#######################################################################################
########################## Boost references on Windows ###############################
set(BUILD_EXAMPLES ON)

set(Boost_FOUND TRUE)
set(Boost_INCLUDE_DIRS E:/local/boost_1_67_0)
set(Boost_INCLUDE_DIR E:/local/boost_1_67_0)
set(Boost_LIBRARY_DIRS E:/local/boost_1_67_0/lib64-msvc-14.0 )
set(Boost_LIBRARIES
        boost_filesystem-vc140-mt-x64-1_67.lib
        boost_filesystem-vc140-mt-gd-x64-1_67.lib

        libboost_zlib-vc140-mt-gd-x64-1_67.lib
        libboost_zlib-vc140-mt-x64-1_67.lib

        boost_system-vc140-mt-gd-x64-1_67.lib
        boost_system-vc140-mt-x64-1_67.lib

        libboost_chrono-vc140-mt-s-x64-1_67.lib
        libboost_chrono-vc140-mt-gd-x64-1_67.lib

        boost_thread-vc140-mt-gd-x64-1_67.lib
        boost_thread-vc140-mt-x64-1_67.lib
        )
###################################################

Then the CMake configuration for the opencv_websocket_server example itself:

file (GLOB SOURCE_FILES *.cpp)
file (GLOB HEADER_FILES *.hpp)

set(OPENCV_INCLUDE_DIR F:/Smart_Classroom/3rdparty/ALLPLATHFORM/opencv/include)
message(${OPENCV_INCLUDE_DIR})
set(OPENCV_LIB_DIR F:/Smart_Classroom/3rdparty/ALLPLATHFORM/opencv/x64/vc14/lib)
message(${OPENCV_LIB_DIR})
include_directories(${OPENCV_INCLUDE_DIR})
link_directories(${OPENCV_LIB_DIR})
init_target (opencv_websocket_server)

build_executable (${TARGET_NAME} ${SOURCE_FILES} ${HEADER_FILES})
file(COPY haarcascade_frontalface_alt.xml DESTINATION ${CMAKE_BINARY_DIR}/bin/)
#
link_boost ()
final_target ()
target_link_libraries(opencv_websocket_server
        opencv_world341.lib
        opencv_world341d.lib
        )
#
set_target_properties(${TARGET_NAME} PROPERTIES FOLDER "examples")

Typos are inevitable when transcribing code, so contact me if anything here doesn't work. The real difficulty on the server side is simply keeping the encoding and decoding schemes consistent between client and server; it took me several days of tinkering to work that out. Keep at it: I/O really is a remarkable thing, and once you understand it all the way down to memory, it behaves like an obedient child.
