In my application I need to capture video and add a watermark to it. The watermark should be text (a time and a note). I have seen code that uses the "QTKit" framework; however, I have read that this framework is not available on the iPhone.
Thanks in advance.
Posted on 2011-08-26 22:25:13
Use AVFoundation. I would suggest grabbing the frames with AVCaptureVideoDataOutput, then overlaying each captured frame with the watermark image, and finally writing the captured and processed frames to a file using AVAssetWriter.
Search around Stack Overflow; there are a ton of fantastic examples detailing how to do each of the things I mentioned. I haven't seen a single code example that gives exactly the effect you want, but you should be able to mix and match pretty easily.
EDIT:
Take a look at these links:
iPhone: AVCaptureSession capture output crashing (AVCaptureVideoDataOutput) - this post may be helpful since it contains relevant code.
AVCaptureVideoDataOutput will return the images as CMSampleBufferRefs. Convert them to CGImageRefs using the following code:
- (CGImageRef)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer // Create a CGImageRef from sample buffer data
{
    // Assumes the capture output is configured for kCVPixelFormatType_32BGRA
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0); // Lock the image buffer

    // Get information about the image
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    /* CVBufferRelease(imageBuffer); */ // do not call this!

    return newImage; // the caller is responsible for calling CGImageRelease() on this
}
From there you can convert it to a UIImage:
UIImage *img = [UIImage imageWithCGImage:yourCGImage];
and then use
[img drawInRect:CGRectMake(x, y, width, height)];
to draw the frame into a context, draw the watermark PNG over it, and then add the processed images to your output video using AVAssetWriter. I would suggest adding them in real time so you don't fill up memory with a pile of UIImages.
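That compositing step might look roughly like this (a sketch, not from the original answer: the method name, watermark filename, and watermark position/size are all assumptions; it expects a CGImageRef produced by the conversion method above):

```objectivec
// Sketch: overlay a watermark PNG on one captured frame.
// `frameImage` is assumed to come from imageFromSampleBuffer: above.
- (UIImage *)watermarkedImageFromFrame:(CGImageRef)frameImage
{
    CGFloat width  = CGImageGetWidth(frameImage);
    CGFloat height = CGImageGetHeight(frameImage);

    UIGraphicsBeginImageContext(CGSizeMake(width, height));

    // Draw the captured frame first...
    [[UIImage imageWithCGImage:frameImage] drawInRect:CGRectMake(0, 0, width, height)];

    // ...then the watermark on top (position and size are illustrative).
    UIImage *watermark = [UIImage imageNamed:@"watermark.png"];
    [watermark drawInRect:CGRectMake(10, 10, 100, 40)];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
```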
How do I export UIImage array as a movie? - this post shows how to add processed UIImages to a video for a given duration.
This should get you well on your way to watermarking your videos. Just remember to practice good memory management, because leaking images at 20-30 fps is a great way to crash the app.
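For the AVAssetWriter side mentioned above, a minimal sketch could look like the following (the output URL, dimensions, and frame-feeding loop are assumptions; error handling is omitted):

```objectivec
// Sketch: append processed frames to a QuickTime movie with AVAssetWriter.
NSError *error = nil;
AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL
                                                 fileType:AVFileTypeQuickTimeMovie
                                                    error:&error];

NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
                          AVVideoCodecH264, AVVideoCodecKey,
                          [NSNumber numberWithInt:640], AVVideoWidthKey,
                          [NSNumber numberWithInt:480], AVVideoHeightKey,
                          nil];
AVAssetWriterInput *input =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:settings];
input.expectsMediaDataInRealTime = YES; // frames are fed as they are captured

AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor
        assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input
                                   sourcePixelBufferAttributes:nil];
[writer addInput:input];
[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];

// For each watermarked frame (pixelBuffer, presentationTime):
//     if (input.readyForMoreMediaData)
//         [adaptor appendPixelBuffer:pixelBuffer
//               withPresentationTime:presentationTime];

// When capture ends:
// [input markAsFinished];
// [writer finishWriting];
```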
Posted on 2012-01-10 10:43:34
Adding the watermark is much simpler. You just need to use a CALayer and AVVideoCompositionCoreAnimationTool. The code can simply be copied and assembled in the same order; I have tried to insert some comments in between for better understanding.
Let's assume you have already recorded the video, so we will create the AVURLAsset first:
AVURLAsset *videoAsset = [[AVURLAsset alloc] initWithURL:outputFileURL options:nil];
AVMutableComposition *mixComposition = [AVMutableComposition composition];
AVMutableCompositionTrack *compositionVideoTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVAssetTrack *clipVideoTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
[compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
                               ofTrack:clipVideoTrack
                                atTime:kCMTimeZero error:nil];
[compositionVideoTrack setPreferredTransform:[clipVideoTrack preferredTransform]];
With just this code you would be able to export the video, but we want to add the layer with the watermark first. Please note that some of the code may look redundant, but it is necessary for everything to work.
First we create the layer containing the watermark image:
UIImage *myImage = [UIImage imageNamed:@"icon.png"];
CALayer *aLayer = [CALayer layer];
aLayer.contents = (id)myImage.CGImage;
aLayer.frame = CGRectMake(5, 25, 57, 57); //Needed for proper display. We are using the app icon (57x57). If you use 0,0 you will not see it
aLayer.opacity = 0.65; //Feel free to alter the alpha here
If we want text instead of an image:
// NOTE: videoSize is defined in the next snippet; if you assemble the code
// in order, move that CGSize declaration above this block.
CATextLayer *titleLayer = [CATextLayer layer];
titleLayer.string = @"Text goes here";
titleLayer.font = @"Helvetica";
titleLayer.fontSize = videoSize.height / 6;
//?? titleLayer.shadowOpacity = 0.5;
titleLayer.alignmentMode = kCAAlignmentCenter;
titleLayer.bounds = CGRectMake(0, 0, videoSize.width, videoSize.height / 6); //You may need to adjust this for proper display
The following code sorts the layers in the proper order:
CGSize videoSize = [videoAsset naturalSize]; // Note: -[AVAsset naturalSize] is deprecated since iOS 5; clipVideoTrack.naturalSize works on newer SDKs
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height);
videoLayer.frame = CGRectMake(0, 0, videoSize.width, videoSize.height);
[parentLayer addSublayer:videoLayer];
[parentLayer addSublayer:aLayer];
[parentLayer addSublayer:titleLayer]; //ONLY IF WE ADDED TEXT
Now we create the composition and add the instructions to insert the layers:
AVMutableVideoComposition* videoComp = [[AVMutableVideoComposition videoComposition] retain];
videoComp.renderSize = videoSize;
videoComp.frameDuration = CMTimeMake(1, 30);
videoComp.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
/// instruction
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, [mixComposition duration]);
AVAssetTrack *videoTrack = [[mixComposition tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVMutableVideoCompositionLayerInstruction* layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
instruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];
videoComp.instructions = [NSArray arrayWithObject: instruction];
And now we can start exporting:
_assetExport = [[AVAssetExportSession alloc] initWithAsset:mixComposition presetName:AVAssetExportPresetMediumQuality];//AVAssetExportPresetPassthrough
_assetExport.videoComposition = videoComp;
NSString* videoName = @"mynewwatermarkedvideo.mov";
NSString *exportPath = [NSTemporaryDirectory() stringByAppendingPathComponent:videoName];
NSURL *exportUrl = [NSURL fileURLWithPath:exportPath];
if ([[NSFileManager defaultManager] fileExistsAtPath:exportPath])
{
[[NSFileManager defaultManager] removeItemAtPath:exportPath error:nil];
}
_assetExport.outputFileType = AVFileTypeQuickTimeMovie;
_assetExport.outputURL = exportUrl;
_assetExport.shouldOptimizeForNetworkUse = YES;
[strRecordedFilename setString: exportPath];
[_assetExport exportAsynchronouslyWithCompletionHandler:^(void) {
    [_assetExport release];
    //YOUR FINALIZATION CODE HERE
}];
[audioAsset release]; // only if you created an audioAsset elsewhere (e.g. when mixing in audio)
[videoAsset release];
Posted on 2013-03-06 13:04:43
https://stackoverflow.com/questions/7205820