I'm sorry for duplicating the question How to build AVDepthData manually, but it doesn't have the answer I'm looking for, and I don't have enough reputation to comment there. If you don't mind, I can delete my question later and ask someone to move any future answers over to that thread.
So, my goal is to create depth data and attach it to an arbitrary image. There is an article on how to do this, https://developer.apple.com/documentation/avfoundation/avdepthdata/creating_auxiliary_depth_data_manually, but I don't understand how to implement any of its steps. Rather than posting all of my questions at once, I'll start with the first one.
As a first step, the depth image has to be converted, pixel by pixel, from grayscale to depth or disparity values. I took this code from the thread mentioned above:
func buildDepth(image: UIImage) -> AVDepthData? {
    let width = Int(image.size.width)
    let height = Int(image.size.height)
    var maybeDepthMapPixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_DisparityFloat32, nil, &maybeDepthMapPixelBuffer)
    guard status == kCVReturnSuccess, let depthMapPixelBuffer = maybeDepthMapPixelBuffer else {
        return nil
    }
    CVPixelBufferLockBaseAddress(depthMapPixelBuffer, .init(rawValue: 0))
    guard let baseAddress = CVPixelBufferGetBaseAddress(depthMapPixelBuffer) else {
        return nil
    }
    let buffer = unsafeBitCast(baseAddress, to: UnsafeMutablePointer<Float32>.self)
    for i in 0..<width * height {
        buffer[i] = 0 // disparity must be calculated somehow, but set to 0 for testing purposes
    }
    CVPixelBufferUnlockBaseAddress(depthMapPixelBuffer, .init(rawValue: 0))
    let info: [AnyHashable: Any] = [kCGImagePropertyPixelFormat: kCVPixelFormatType_DisparityFloat32,
                                    kCGImagePropertyWidth: image.size.width,
                                    kCGImagePropertyHeight: image.size.height,
                                    kCGImagePropertyBytesPerRow: CVPixelBufferGetBytesPerRow(depthMapPixelBuffer)]
    let metadata = generateMetadata(image: image)
    let dic: [AnyHashable: Any] = [kCGImageAuxiliaryDataInfoDataDescription: info,
                                   // I get an error when converting baseAddress to CFData
                                   kCGImageAuxiliaryDataInfoData: baseAddress as! CFData,
                                   kCGImageAuxiliaryDataInfoMetadata: metadata]
    guard let depthData = try? AVDepthData(fromDictionaryRepresentation: dic) else {
        return nil
    }
    return depthData
}

Then the article says to load the pixel buffer's base address (which holds the disparity map) as CFData and pass it into a CFDictionary as the kCGImageAuxiliaryDataInfoData value. But I get an error when converting baseAddress to CFData. I also tried converting the pixel buffer itself, with no success. What do I have to pass as kCGImageAuxiliaryDataInfoData? And, to begin with, am I creating the disparity buffer correctly?
Apart from this particular question, it would be great if someone could point me to some sample code showing how to do the whole thing.
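For the grayscale-to-disparity step mentioned at the top, here is a minimal sketch under the assumption that the grayscale intensity simply maps linearly onto a disparity range; the real mapping depends on how the source depth image was encoded, and disparityValues(from:) is just a hypothetical helper name, not an existing API:

import UIKit
import CoreGraphics

// Hypothetical helper: renders the image into an 8-bit grayscale bitmap and
// maps each pixel linearly into [minDisparity, maxDisparity].
// Orientation and UIImage scale are ignored in this sketch.
func disparityValues(from image: UIImage,
                     minDisparity: Float32 = 0,
                     maxDisparity: Float32 = 1) -> [Float32]? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height

    // Draw into a plain 8-bit grayscale bitmap so every pixel is one byte.
    var gray = [UInt8](repeating: 0, count: width * height)
    let drawn = gray.withUnsafeMutableBytes { buffer -> Bool in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: width,
                                      space: CGColorSpaceCreateDeviceGray(),
                                      bitmapInfo: CGImageAlphaInfo.none.rawValue) else {
            return false
        }
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    guard drawn else { return nil }

    // Linear mapping: 0 (black) -> minDisparity, 255 (white) -> maxDisparity.
    return gray.map { minDisparity + (maxDisparity - minDisparity) * Float32($0) / 255 }
}

Whether black should mean near or far (i.e. whether the mapping needs to be inverted) depends entirely on how the grayscale depth image was generated.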
Posted on 2021-08-24 17:06:48
Your question really helped me get from a CVPixelBuffer to AVDepthData, so thank you. It got me about 95% of the way there.
To solve your (and my) problem, I added the following:
let bytesPerRow = CVPixelBufferGetBytesPerRow(depthMapPixelBuffer)
let size = bytesPerRow * height

... code code code ...

CVPixelBufferLockBaseAddress(depthMapPixelBuffer!, .init(rawValue: 0))
let baseAddress = CVPixelBufferGetBaseAddressOfPlane(depthMapPixelBuffer!, 0)
let data = NSData(bytes: baseAddress, length: size)

... code code code ...

let dic: [AnyHashable: Any] = [kCGImageAuxiliaryDataInfoDataDescription: info,
                               kCGImageAuxiliaryDataInfoData: data,
                               kCGImageAuxiliaryDataInfoMetadata: metadata]

https://stackoverflow.com/questions/56341262
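Putting the question's code and the fix above together, a complete sketch could look like the following. It is only a sketch: the disparities array is assumed to hold one Float32 per pixel (for example produced along the lines of the grayscale conversion sketched earlier), and the CGImageMetadata that the question's generateMetadata(image:) produced is taken here as a parameter rather than being built.

import UIKit
import AVFoundation
import ImageIO

// Sketch of the full pipeline: disparity values -> CVPixelBuffer -> AVDepthData.
// `disparities` is assumed to hold width * height Float32 values; `metadata`
// stands in for whatever the question's generateMetadata(image:) returned.
func buildDepth(image: UIImage, disparities: [Float32], metadata: CGImageMetadata) -> AVDepthData? {
    let width = Int(image.size.width)
    let height = Int(image.size.height)
    guard disparities.count == width * height else { return nil }

    // 1. Create a 32-bit disparity pixel buffer.
    var maybePixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                     kCVPixelFormatType_DisparityFloat32, nil,
                                     &maybePixelBuffer)
    guard status == kCVReturnSuccess, let pixelBuffer = maybePixelBuffer else { return nil }

    // 2. Copy the disparity values row by row, because bytesPerRow may be padded.
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }
    guard let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer) else { return nil }
    let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
    for row in 0..<height {
        let rowPointer = (baseAddress + row * bytesPerRow).assumingMemoryBound(to: Float32.self)
        for col in 0..<width {
            rowPointer[col] = disparities[row * width + col]
        }
    }

    // 3. Wrap the raw buffer contents in Data (the fix from the answer above).
    let data = Data(bytes: baseAddress, count: bytesPerRow * height)

    // 4. Describe the buffer layout and assemble the auxiliary data dictionary.
    let description: [AnyHashable: Any] = [
        kCGImagePropertyPixelFormat: kCVPixelFormatType_DisparityFloat32,
        kCGImagePropertyWidth: width,
        kCGImagePropertyHeight: height,
        kCGImagePropertyBytesPerRow: bytesPerRow
    ]
    let auxInfo: [AnyHashable: Any] = [
        kCGImageAuxiliaryDataInfoDataDescription: description,
        kCGImageAuxiliaryDataInfoData: data,
        kCGImageAuxiliaryDataInfoMetadata: metadata
    ]
    return try? AVDepthData(fromDictionaryRepresentation: auxInfo)
}

Copying row by row matters because CVPixelBufferGetBytesPerRow may report a padded stride rather than exactly width * 4 bytes, and the wrapped Data must match the BytesPerRow value declared in the description dictionary.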
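To then attach the depth data to an image file, which was the stated goal, one possible route is Image I/O's auxiliary-data support. Again, this is only a sketch: the output URL is supplied by the caller, HEIC is assumed as the container, and error handling is minimal.

import UIKit
import AVFoundation
import ImageIO

// Sketch: writes `image` together with `depthData` as a HEIC file at `url`.
// File type, URL and error handling are placeholders.
func write(image: UIImage, depthData: AVDepthData, to url: URL) -> Bool {
    guard let cgImage = image.cgImage,
          let destination = CGImageDestinationCreateWithURL(url as CFURL,
                                                            AVFileType.heic.rawValue as CFString,
                                                            1, nil) else {
        return false
    }
    CGImageDestinationAddImage(destination, cgImage, nil)

    // Ask AVDepthData for its auxiliary data dictionary and type
    // (disparity or depth), then attach it to the destination.
    var auxDataType: NSString?
    guard let auxData = depthData.dictionaryRepresentation(forAuxiliaryDataType: &auxDataType),
          let auxType = auxDataType else {
        return false
    }
    CGImageDestinationAddAuxiliaryDataInfo(destination, auxType as CFString, auxData as CFDictionary)

    return CGImageDestinationFinalize(destination)
}

Reading the file back with CGImageSourceCopyAuxiliaryDataInfoAtIndex is one way to verify that the depth data survived the round trip.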