
A Brief Analysis of the Video Data Capture and Sending Flow in WebRTC

BennuCTech · Published 2021-12-10

Preface

This article is based on the open-source project PineAppRtc: https://github.com/thfhongfeng/PineAppRtc

We had a requirement to send a video stream out through WebRTC, so I looked into how WebRTC captures video data, processes it, and sends it. That investigation became this article.

Capture and Sending

When WebRTC is used for a real-time call, once the two peers are connected a PeerConnection object is created from the session parameters. In PineAppRtc the relevant code lives in the PeerConnectionClient class, which you implement yourself. This connection is what actually pushes and pulls the media streams.
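As a rough sketch of how such a connection is created (names like mFactory, iceServers, pcConstraints and pcObserver follow the article's conventions but are assumptions here, not code quoted from the project):

// Build a configuration from the ICE servers obtained via the signaling layer.
PeerConnection.RTCConfiguration rtcConfig = new PeerConnection.RTCConfiguration(iceServers);
// pcObserver is the app's PeerConnection.Observer; it receives ICE candidates, remote streams, etc.
mPeerConnection = mFactory.createPeerConnection(rtcConfig, pcConstraints, pcObserver);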

Then a MediaStream object is created and added to the PeerConnection:

mPeerConnection.addStream(mMediaStream);
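The stream itself also comes from the factory. A minimal sketch (the "ARDAMS" label is just an example; any label unique within the connection works):

MediaStream mMediaStream = mFactory.createLocalMediaStream("ARDAMS");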

The MediaStream is what manages the media; you can add multiple tracks to it, for example an audio track and a video track:

mMediaStream.addTrack(createVideoTrack(mVideoCapturer));
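An audio track can be added in the same way; a minimal sketch (the "ARDAMSa0" track id is just an example, and the MediaConstraints here are left empty):

AudioSource audioSource = mFactory.createAudioSource(new MediaConstraints());
AudioTrack audioTrack = mFactory.createAudioTrack("ARDAMSa0", audioSource);
mMediaStream.addTrack(audioTrack);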

Here mVideoCapturer is a VideoCapturer object responsible for video capture; in essence it is a wrapper around the camera.

VideoCapturer is an interface with many implementations. This article uses CameraCapturer and its subclass Camera1Capturer as the example.
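A camera capturer is normally obtained through a CameraEnumerator rather than constructed directly. A sketch of the usual pattern (picking the front camera is just an example):

Camera1Enumerator enumerator = new Camera1Enumerator(false /* captureToTexture */);
VideoCapturer mVideoCapturer = null;
for (String deviceName : enumerator.getDeviceNames()) {
    if (enumerator.isFrontFacing(deviceName)) {
        // The second argument is an optional CameraEventsHandler.
        mVideoCapturer = enumerator.createCapturer(deviceName, null);
        break;
    }
}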

Let's continue with the createVideoTrack function:

private VideoTrack createVideoTrack(VideoCapturer capturer) {
    // Wrap the capturer in a VideoSource, then start capturing at the requested format.
    mVideoSource = mFactory.createVideoSource(capturer);
    capturer.startCapture(mVideoWidth, mVideoHeight, mVideoFps);

    // Create the track from the source and attach the local preview renderer.
    mLocalVideoTrack = mFactory.createVideoTrack(VIDEO_TRACK_ID, mVideoSource);
    mLocalVideoTrack.setEnabled(mRenderVideo);
    mLocalVideoTrack.addRenderer(new VideoRenderer(mLocalRender));
    return mLocalVideoTrack;
}

As you can see, createVideoSource wraps the VideoCapturer in a VideoSource object, and that VideoSource is then used to create the track (a VideoTrack).

Let's take a look at the createVideoSource function:

public VideoSource createVideoSource(VideoCapturer capturer) {
    org.webrtc.EglBase.Context eglContext = this.localEglbase == null ? null : this.localEglbase.getEglBaseContext();
    SurfaceTextureHelper surfaceTextureHelper = SurfaceTextureHelper.create("VideoCapturerThread", eglContext);
    // Create the native track source, then wrap it in an observer that forwards captured frames to it.
    long nativeAndroidVideoTrackSource = nativeCreateVideoSource(this.nativeFactory, surfaceTextureHelper, capturer.isScreencast());
    CapturerObserver capturerObserver = new AndroidVideoTrackSourceObserver(nativeAndroidVideoTrackSource);
    capturer.initialize(surfaceTextureHelper, ContextUtils.getApplicationContext(), capturerObserver);
    return new VideoSource(nativeAndroidVideoTrackSource);
}

As you can see, a new AndroidVideoTrackSourceObserver object is created here; it is an implementation of the CapturerObserver interface. It is then passed to the VideoCapturer's initialize function. In CameraCapturer's implementation of initialize, this AndroidVideoTrackSourceObserver is stored in the VideoCapturer's capturerObserver field.
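For reference, CameraCapturer's initialize boils down to storing those arguments in fields, roughly like this (paraphrased, not the verbatim source):

public void initialize(SurfaceTextureHelper surfaceTextureHelper, Context applicationContext, CapturerObserver capturerObserver) {
    this.applicationContext = applicationContext;
    this.capturerObserver = capturerObserver; // the AndroidVideoTrackSourceObserver from above
    this.surfaceHelper = surfaceTextureHelper;
    this.cameraThreadHandler = surfaceTextureHelper == null ? null : surfaceTextureHelper.getHandler();
}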

Back in the PeerConnectionClient class, the VideoCapturer's startCapture function is also called. Here is its implementation in CameraCapturer:

public void startCapture(int width, int height, int framerate) {
    Logging.d("CameraCapturer", "startCapture: " + width + "x" + height + "@" + framerate);
    if (this.applicationContext == null) {
        throw new RuntimeException("CameraCapturer must be initialized before calling startCapture.");
    } else {
        synchronized(this.stateLock) {
            if (!this.sessionOpening && this.currentSession == null) {
                this.width = width;
                this.height = height;
                this.framerate = framerate;
                this.sessionOpening = true;
                this.openAttemptsRemaining = 3;
                this.createSessionInternal(0, (MediaRecorder)null);
            } else {
                Logging.w("CameraCapturer", "Session already open");
            }
        }
    }
}

Finally it calls createSessionInternal:

private void createSessionInternal(int delayMs, final MediaRecorder mediaRecorder) {
    // Arm a 10-second "camera failed to open" timeout before creating the session.
    this.uiThreadHandler.postDelayed(this.openCameraTimeoutRunnable, (long)(delayMs + 10000));
    this.cameraThreadHandler.postDelayed(new Runnable() {
        public void run() {
            CameraCapturer.this.createCameraSession(CameraCapturer.this.createSessionCallback, CameraCapturer.this.cameraSessionEventsHandler, CameraCapturer.this.applicationContext, CameraCapturer.this.surfaceHelper, mediaRecorder, CameraCapturer.this.cameraName, CameraCapturer.this.width, CameraCapturer.this.height, CameraCapturer.this.framerate);
        }
    }, (long)delayMs);
}

which in turn calls createCameraSession. In Camera1Capturer that function looks like this:

protected void createCameraSession(CreateSessionCallback createSessionCallback, Events events, Context applicationContext, SurfaceTextureHelper surfaceTextureHelper, MediaRecorder mediaRecorder, String cameraName, int width, int height, int framerate) {
    Camera1Session.create(createSessionCallback, events, this.captureToTexture || mediaRecorder != null, applicationContext, surfaceTextureHelper, mediaRecorder, Camera1Enumerator.getCameraIndex(cameraName), width, height, framerate);
}

As you can see, a Camera1Session is created. This class is what actually operates the camera; inside it we finally meet the familiar Camera, in the listenForBytebufferFrames function:

private void listenForBytebufferFrames() {
    this.camera.setPreviewCallbackWithBuffer(new PreviewCallback() {
        public void onPreviewFrame(byte[] data, Camera callbackCamera) {
            Camera1Session.this.checkIsOnCameraThread();
            if (callbackCamera != Camera1Session.this.camera) {
                Logging.e("Camera1Session", "Callback from a different camera. This should never happen.");
            } else if (Camera1Session.this.state != Camera1Session.SessionState.RUNNING) {
                Logging.d("Camera1Session", "Bytebuffer frame captured but camera is no longer running.");
            } else {
                long captureTimeNs = TimeUnit.MILLISECONDS.toNanos(SystemClock.elapsedRealtime());
                if (!Camera1Session.this.firstFrameReported) {
                    int startTimeMs = (int)TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - Camera1Session.this.constructionTimeNs);
                    Camera1Session.camera1StartTimeMsHistogram.addSample(startTimeMs);
                    Camera1Session.this.firstFrameReported = true;
                }

                Camera1Session.this.events.onByteBufferFrameCaptured(Camera1Session.this, data, Camera1Session.this.captureFormat.width, Camera1Session.this.captureFormat.height, Camera1Session.this.getFrameOrientation(), captureTimeNs);
                Camera1Session.this.camera.addCallbackBuffer(data); // hand the buffer back so the camera can reuse it
            }
        }
    });
}
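Note the setPreviewCallbackWithBuffer/addCallbackBuffer pairing: the camera only delivers frames into buffers the app has queued in advance. Camera1Session queues a few NV21-sized buffers when capture starts, roughly like this (a sketch, not the verbatim source):

// NV21 uses 12 bits per pixel.
int frameSize = captureFormat.width * captureFormat.height * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
for (int i = 0; i < 3; ++i) {
    // Queue several buffers so capture and processing can overlap.
    camera.addCallbackBuffer(new byte[frameSize]);
}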

Once the preview callback onPreviewFrame delivers a frame, events.onByteBufferFrameCaptured is called. This events is the one passed in at create time; tracing back through the flow above, it is cameraSessionEventsHandler in CameraCapturer, whose onByteBufferFrameCaptured looks like this:

public void onByteBufferFrameCaptured(CameraSession session, byte[] data, int width, int height, int rotation, long timestamp) {
    CameraCapturer.this.checkIsOnCameraThread();
    synchronized(CameraCapturer.this.stateLock) {
        if (session != CameraCapturer.this.currentSession) {
            Logging.w("CameraCapturer", "onByteBufferFrameCaptured from another session.");
        } else {
            if (!CameraCapturer.this.firstFrameObserved) {
                CameraCapturer.this.eventsHandler.onFirstFrameAvailable();
                CameraCapturer.this.firstFrameObserved = true;
            }

            CameraCapturer.this.cameraStatistics.addFrame();
            CameraCapturer.this.capturerObserver.onByteBufferFrameCaptured(data, width, height, rotation, timestamp);
        }
    }
}

This calls capturerObserver.onByteBufferFrameCaptured, and the capturerObserver is exactly the AndroidVideoTrackSourceObserver passed in during initialize earlier. Its onByteBufferFrameCaptured function:

public void onByteBufferFrameCaptured(byte[] data, int width, int height, int rotation, long timeStamp) {
    this.nativeOnByteBufferFrameCaptured(this.nativeSource, data, data.length, width, height, rotation, timeStamp);
}

This calls into a native function, and the Java-side flow ends here; the data is presumably processed and sent on the native side.

The key piece here is really the VideoCapturer. Besides CameraCapturer and its subclasses there are other implementations such as FileVideoCapturer. If we need to send raw byte[] data directly, we can implement our own VideoCapturer, grab its capturerObserver, and call onByteBufferFrameCaptured on it ourselves, as sketched below.
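A minimal sketch of that idea, assuming the same pre-M68 VideoCapturer/CapturerObserver interfaces used throughout this article (RawDataCapturer and pushFrame are hypothetical names of our own):

import android.content.Context;
import android.os.SystemClock;
import java.util.concurrent.TimeUnit;
import org.webrtc.SurfaceTextureHelper;
import org.webrtc.VideoCapturer;

public class RawDataCapturer implements VideoCapturer {
    private CapturerObserver capturerObserver;

    @Override
    public void initialize(SurfaceTextureHelper helper, Context context, CapturerObserver observer) {
        // WebRTC hands us the observer wired to the native source (see createVideoSource above).
        this.capturerObserver = observer;
    }

    @Override
    public void startCapture(int width, int height, int framerate) {
        capturerObserver.onCapturerStarted(true /* success */);
    }

    @Override
    public void stopCapture() throws InterruptedException {
        capturerObserver.onCapturerStopped();
    }

    @Override
    public void changeCaptureFormat(int width, int height, int framerate) {}

    @Override
    public void dispose() {}

    @Override
    public boolean isScreencast() {
        return false;
    }

    // Our own entry point: push one NV21 frame into the WebRTC pipeline by hand.
    public void pushFrame(byte[] nv21, int width, int height, int rotation) {
        long timestampNs = TimeUnit.MILLISECONDS.toNanos(SystemClock.elapsedRealtime());
        capturerObserver.onByteBufferFrameCaptured(nv21, width, height, rotation, timestampNs);
    }
}

A track built on this capturer goes through exactly the same createVideoSource/createVideoTrack path shown earlier, and each pushFrame call ends up in nativeOnByteBufferFrameCaptured.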
