This seems like a simple task, but it's driving me crazy. Is it possible to take a UIView that contains an AVCaptureVideoPreviewLayer as a sublayer and render it into an image for saving? I want to create an augmented-reality overlay and have a button save the picture to the camera roll. Holding the power button + home button captures the screenshot to the camera roll, which means all of my capture logic IS working and the task IS possible. But I can't seem to make it work programmatically.
I'm capturing a live preview of the camera's image using AVCaptureVideoPreviewLayer. All of my attempts to render the image have failed:
previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
//start the session, etc...

//this saves a white screen
- (IBAction)saveOverlay:(id)sender {
    NSLog(@"saveOverlay");

    UIGraphicsBeginImageContext(appDelegate.window.bounds.size);
//    UIGraphicsBeginImageContext(scrollView.frame.size);

    [previewLayer.presentationLayer renderInContext:UIGraphicsGetCurrentContext()];
//    [appDelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];

    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    UIImageWriteToSavedPhotosAlbum(screenshot, self,
                                   @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
//this renders everything except the preview layer, which comes out blank:
[appDelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];
I've read somewhere that this may be due to security concerns on the iPhone. Is this true?
Just to be clear: I don't want to save the image from the camera by itself. I want to save the transparent preview layer superimposed over another image, thus creating transparency. Yet for some reason I can't make it work.
Posted on 2013-06-09 21:59:08
I like @Roma's suggestion of using GPUImage - good idea... However, if you want a pure Cocoa Touch approach, here's what to do:
Implement the AVCaptureVideoDataOutputSampleBufferDelegate callback:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Create a UIImage (with the right orientation) from the sample buffer data
    if (_captureFrame)
    {
        [captureSession stopRunning];
        _captureFrame = NO;
        UIImage *image = [ImageTools imageFromSampleBuffer:sampleBuffer];
        image = [image rotate:UIImageOrientationRight];
        _frameCaptured = YES;
        if (delegate != nil)
        {
            [delegate cameraPictureTaken:image];
        }
    }
}
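(This callback only fires if a video data output is attached to the session; the answer doesn't show that wiring, so here's a minimal sketch of it. The variable names are mine, and the 32BGRA pixel format matches the byte layout that imageFromSampleBuffer below assumes. Requires AVFoundation.)
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// Deliver frames as 32BGRA, the layout imageFromSampleBuffer expects
videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                                   @(kCVPixelFormatType_32BGRA) };
// The delegate method above will be called on this queue
[videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
if ([captureSession canAddOutput:videoOutput]) {
    [captureSession addOutput:videoOutput];
}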
Convert the sample buffer to a UIImage as follows:
+ (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data (assumes 32BGRA frames)
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];
    // Release the Quartz image
    CGImageRelease(quartzImage);

    return image;
}
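(The rotate: call in the delegate method comes from a UIImage category the answer doesn't show; a minimal sketch of one, assuming it only needs to re-tag the orientation:)
@interface UIImage (Rotate)
- (UIImage *)rotate:(UIImageOrientation)orientation;
@end

@implementation UIImage (Rotate)
- (UIImage *)rotate:(UIImageOrientation)orientation
{
    // Re-wrap the same pixels with a new orientation flag; UIKit applies
    // the matching transform whenever the image is drawn
    return [UIImage imageWithCGImage:self.CGImage
                               scale:self.scale
                         orientation:orientation];
}
@end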
Blend the captured UIImage with your overlay.
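(The answer leaves the blend step as an exercise; a minimal sketch, assuming both images share the same size - the helper name is mine:)
+ (UIImage *)imageByBlending:(UIImage *)cameraImage withOverlay:(UIImage *)overlayImage
{
    CGRect frame = CGRectMake(0, 0, cameraImage.size.width, cameraImage.size.height);
    UIGraphicsBeginImageContextWithOptions(cameraImage.size, NO, cameraImage.scale);
    // Camera frame first, then the (partially transparent) overlay on top
    [cameraImage drawInRect:frame];
    [overlayImage drawInRect:frame];
    UIImage *blended = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return blended;
}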
Capture the overlay UIView like this:
+ (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, [UIScreen mainScreen].scale);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
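(Putting the pieces together, the delegate callback from the first snippet could then blend and save; a hypothetical sketch - overlayView and ImageTools as the home for both helpers are my assumptions:)
- (void)cameraPictureTaken:(UIImage *)cameraImage
{
    // Snapshot the AR overlay view, composite it over the camera frame, and save
    UIImage *overlay = [ImageTools imageWithView:self.overlayView];
    UIImage *composite = [ImageTools imageByBlending:cameraImage withOverlay:overlay];
    UIImageWriteToSavedPhotosAlbum(composite, nil, NULL, NULL);
}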
Posted on 2013-06-09 21:41:35
I can suggest that you try GPUImage.
https://github.com/BradLarson/GPUImage
It uses OpenGL, so it's pretty fast. It can process frames from the camera and apply filters to them (there are a lot), including edge detection, motion detection, and much more.
It's similar to OpenCV, but in my own experience GPUImage is easier to hook into your project, and the language is Objective-C.
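(For a feel of how little code that takes, a rough sketch following the pattern in GPUImage's README - the preset, filter choice, and the videoCamera property are illustrative:)
#import "GPUImage.h"

// e.g. in viewDidLoad; keep a strong reference (a property) so the camera isn't deallocated
self.videoCamera = [[GPUImageVideoCamera alloc]
    initWithSessionPreset:AVCaptureSessionPreset640x480
           cameraPosition:AVCaptureDevicePositionBack];
self.videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

GPUImageSketchFilter *filter = [[GPUImageSketchFilter alloc] init]; // edge-detection look
GPUImageView *filteredView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:filteredView];

// Camera -> filter -> on-screen view
[self.videoCamera addTarget:filter];
[filter addTarget:filteredView];
[self.videoCamera startCameraCapture];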
A problem could occur if you decide to use Box2D for physics - it also uses OpenGL, and you'll need to spend some time before those two frameworks stop fighting ))
https://stackoverflow.com/questions/9012397