Tutorial: this installment covers capturing camera data and rendering it, in three parts: the rendering (OpenGL ES), camera image capture (AVFoundation), and creating textures from the image data (following GPUImage).
Commonly used AVFoundation classes:
AVCaptureDevice: an input device, such as a camera or microphone.
AVCaptureInput: an input data source.
AVCaptureOutput: an output destination for the captured data.
AVCaptureSession: the session that coordinates the flow of data from inputs to outputs.
AVCaptureVideoPreviewLayer: a layer for previewing the capture.
The capture workflow:
self.mCaptureSession = [[AVCaptureSession alloc] init];
self.mCaptureSession.sessionPreset = AVCaptureSessionPreset640x480;
// Note: the sample-buffer delegate requires a serial queue so frames arrive in order;
// a concurrent global queue does not guarantee this.
mProcessQueue = dispatch_queue_create("com.capture.process", DISPATCH_QUEUE_SERIAL);
AVCaptureDevice *inputCamera = nil;
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices)
{
if ([device position] == AVCaptureDevicePositionFront)
{
inputCamera = device;
}
}
self.mCaptureDeviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:inputCamera error:nil];
if ([self.mCaptureSession canAddInput:self.mCaptureDeviceInput]) {
[self.mCaptureSession addInput:self.mCaptureDeviceInput];
}
self.mCaptureDeviceOutput = [[AVCaptureVideoDataOutput alloc] init];
[self.mCaptureDeviceOutput setAlwaysDiscardsLateVideoFrames:NO];
self.mGLView.isFullYUVRange = NO;
[self.mCaptureDeviceOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
[self.mCaptureDeviceOutput setSampleBufferDelegate:self queue:mProcessQueue];
if ([self.mCaptureSession canAddOutput:self.mCaptureDeviceOutput]) {
[self.mCaptureSession addOutput:self.mCaptureDeviceOutput];
}
AVCaptureConnection *connection = [self.mCaptureDeviceOutput connectionWithMediaType:AVMediaTypeVideo];
[connection setVideoOrientation:AVCaptureVideoOrientationPortraitUpsideDown];
Question 1: what role does AVCaptureConnection play here?
[self.mCaptureSession startRunning];
Each frame is delivered through the AVCaptureVideoDataOutputSampleBufferDelegate callback, where the pixel buffer is extracted and handed to the OpenGL view:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    [self.mGLView displayPixelBuffer:pixelBuffer];
}
This part of the code is adapted from GPUImage's GPUImageVideoCamera class. A YUV video frame is split into two textures, luminance and chrominance, read with the GL_LUMINANCE and GL_LUMINANCE_ALPHA formats respectively.
CVReturn err;
size_t frameWidth = CVPixelBufferGetWidth(pixelBuffer);
size_t frameHeight = CVPixelBufferGetHeight(pixelBuffer);
glActiveTexture(GL_TEXTURE0);
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
_videoTextureCache,
pixelBuffer,
NULL,
GL_TEXTURE_2D,
GL_LUMINANCE,
frameWidth,
frameHeight,
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
0,
&_lumaTexture);
glBindTexture(CVOpenGLESTextureGetTarget(_lumaTexture), CVOpenGLESTextureGetName(_lumaTexture));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glActiveTexture(GL_TEXTURE1);
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
_videoTextureCache,
pixelBuffer,
NULL,
GL_TEXTURE_2D,
GL_LUMINANCE_ALPHA,
frameWidth / 2,
frameHeight / 2,
GL_LUMINANCE_ALPHA,
GL_UNSIGNED_BYTE,
1,
&_chromaTexture);
glBindTexture(CVOpenGLESTextureGetTarget(_chromaTexture), CVOpenGLESTextureGetName(_chromaTexture));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
Question 2: why is the output a YUV frame, and how do you configure the output video frame format?
The OpenGL ES rendering pipeline has been covered in earlier posts, so I won't repeat it here. Instead, let me talk about the problems I ran into.
CVOpenGLESTextureCacheCreateTextureFromImage failed (error: -6683)
The black screen came from texture creation failing; once the color format was configured correctly, that error was resolved.
But even with every reported error fixed, the screen was still often black:
checked the texture code: fine;
checked the color renderbuffer code: fine;
checked the vertex coordinate code: fine;
checked the texture coordinate code: fine.
As a last resort, I captured a GPU frame to inspect the GPU's state.
It showed that the color renderbuffer being presented was invalid. Surprised, I put a breakpoint on the following code:
if ([EAGLContext currentContext] == _context) {
    [_context presentRenderbuffer:GL_RENDERBUFFER];
}
It was never reached, and that revealed the problem.
Adding the following line fixed it:
[EAGLContext setCurrentContext:_context];
Puzzle: the context had already been set once earlier; why did it need to be set again? Nothing else in the code changes the current context.
Answer: because this code runs on a new thread!
The next problem was color: the image came out with a green tint.
I checked the texture creation code and found no errors;
changing the color space only made the colors worse;
I checked whether the vertex attribute offsets were wrong: no problem.
Finally I found the offending line in the fragment shader:
yuv.yz = (texture2D(SamplerUV, texCoordVarying).rg - vec2(0.5, 0.5));
The correct swizzle is .ra, but I had written .rg. In a GL_LUMINANCE_ALPHA texture, Cb is in .r and Cr is in .a, so .rg feeds Cb into the Cr slot and tints the image green.
Rotating the image data itself is a costly operation. If you are writing a QuickTime movie with AVAssetWriter, the better approach is to set the transform property on AVAssetWriterInput instead of reconfiguring AVCaptureVideoDataOutput to physically rotate the pixel data.

You cannot learn OpenGL ES just by reading tutorials: download the code, change it, and implement the ideas that interest you. When you hit a problem, keep experimenting and keep searching. If you are in despair, go to sleep; it may well be solved tomorrow. Do you also enjoy that feeling of always having a problem turning over in your head?
Answer 1: AVCaptureConnection lets you set the video orientation; without it the recorded image can come out upside down. From the comments in GPUImage, quoting the iOS 5.0 release notes: in previous iOS versions, the front-facing camera would always deliver buffers in AVCaptureVideoOrientationLandscapeLeft and the back-facing camera would always deliver buffers in AVCaptureVideoOrientationLandscapeRight.
Answer 2: the kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange value set earlier specifies that the output is YUV video frames with video-range values (luma in [16, 235], chroma in [16, 240]). iOS generally supports three formats:
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
kCVPixelFormatType_32BGRA
If you run into
Failed to create IOSurface image (texture)
CVOpenGLESTextureCacheCreateTextureFromImage failed (error: -6683)
these two errors, it usually means the configured color output format does not match the parameters passed to CVOpenGLESTextureCacheCreateTextureFromImage.
Code address: your stars and forks are my greatest motivation, and your feedback helps me go further.