I'm trying to use AVAssetReader to process a video recorded on an iOS device and get each frame out of it. The problem: the CMSampleBuffer returned by AVAssetReaderTrackOutput.copyNextSampleBuffer() cannot be converted to a CVImageBuffer, which I need in order to process the frame. CMSampleBufferGetImageBuffer returns nil.
I suspect I'm initializing the objects incorrectly; please take a look at the code below. Alternatively, if you know another way to process a video from disk as a stream of frames, please share it.
// Variables out of this snippet
// videoURL = <some video on disk path>
// context = CIContext()
let asset = AVURLAsset(url: videoURL! as URL, options: nil)
guard let assetReader = try? AVAssetReader(asset: asset) else { return }
let videoTracks = asset.tracks(withMediaType: AVMediaType.video)
let videoOutput = AVAssetReaderTrackOutput(track: videoTracks[0], outputSettings: nil)
guard assetReader.canAdd(videoOutput) else { return }
assetReader.add(videoOutput)
assetReader.startReading()
guard let sampleBuffer = videoOutput.copyNextSampleBuffer() else {
self.dismiss(animated: true, completion: nil)
return
}
guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
    // We fall through to here:
    // CMSampleBufferGetImageBuffer(sampleBuffer) returns nil
return
}
let ciImage = CIImage(cvPixelBuffer: imageBuffer)
guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }

I tried several modifications to this snippet:
- AVAssetReaderVideoCompositionOutput instead of AVAssetReaderTrackOutput: this failed on .startReading().
- Passing AVOutputSettingsAssistant(preset: AVOutputSettingsPreset.preset1920x1080)?.videoSettings as outputSettings instead of nil: this produced a runtime error complaining about unsupported options.

Posted on 2022-04-01 05:54:03
As I suspected, the problem was in outputSettings.
I adapted the Objective-C parameters from WWDC Session 415, "Working with Media in AV Foundation", to Swift. This yields sample buffers that can be converted to image objects.
let videoOptions = [
kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
kCVPixelBufferWidthKey as String: 640,
kCVPixelBufferHeightKey as String: 480,
]
let videoOutput = AVAssetReaderTrackOutput(track: videoTracks[0], outputSettings: videoOptions)

https://stackoverflow.com/questions/71701725
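Putting the fix together with the original snippet, here is a minimal sketch of a full frame-extraction loop. The function name `extractFrames` and the 640x480 output size are illustrative, not part of the original code; the key point is that passing decompressed pixel-buffer settings (instead of nil) makes CMSampleBufferGetImageBuffer succeed.

```swift
import AVFoundation
import CoreImage

// Read every frame of a video from disk as a CGImage.
// Assumes `videoURL` points to a readable video file.
func extractFrames(from videoURL: URL) throws -> [CGImage] {
    let asset = AVURLAsset(url: videoURL)
    let reader = try AVAssetReader(asset: asset)

    guard let track = asset.tracks(withMediaType: .video).first else { return [] }

    // Decompressed pixel-buffer settings; with outputSettings: nil the reader
    // vends compressed samples, and CMSampleBufferGetImageBuffer returns nil.
    let videoOptions: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
        kCVPixelBufferWidthKey as String: 640,
        kCVPixelBufferHeightKey as String: 480,
    ]
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: videoOptions)
    guard reader.canAdd(output) else { return [] }
    reader.add(output)
    reader.startReading()

    let context = CIContext()
    var frames: [CGImage] = []
    // copyNextSampleBuffer() returns nil when reading finishes or fails.
    while let sampleBuffer = output.copyNextSampleBuffer() {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { continue }
        let ciImage = CIImage(cvPixelBuffer: imageBuffer)
        if let cgImage = context.createCGImage(ciImage, from: ciImage.extent) {
            frames.append(cgImage)
        }
    }
    return frames
}
```

After the loop, check reader.status to distinguish .completed from .failed if you need to report errors.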