When processing the raw camera image in ARKit via ARSessionDelegate:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard let currentFrame = session.currentFrame else { return }
    let capturedImage = currentFrame.capturedImage
    debugPrint("Display size", UIScreen.main.bounds.size)
    debugPrint("Camera frame resolution", CVPixelBufferGetWidth(capturedImage), CVPixelBufferGetHeight(capturedImage))
    // ...
}
As documented, the camera image data doesn't match the screen size; for example, on an iPhone X I get:

Display size: 375x812
Camera frame resolution: 1920x1440
There is now the displayTransform(for:viewportSize:) API for converting camera coordinates to view coordinates. When using the API like this:
let ciimage = CIImage(cvImageBuffer: capturedImage)
let transform = currentFrame.displayTransform(for: .portrait, viewportSize: UIScreen.main.bounds.size)
let transformedImage = ciimage.transformed(by: transform)
debugPrint("Transformed size", transformedImage.extent.size)

I get a size of 2340x1920, which seems incorrect: the result should have the 375:812 (~0.46) aspect ratio. What am I missing here / what is the correct way to use this API to transform the camera image into the one "displayed by ARSCNView"?
(Sample project: ARKitCameraImage)
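A quick sanity check with the numbers quoted above makes the mismatch concrete; nothing here is assumed beyond those values:

let expectedAspect = 375.0 / 812.0   // ~0.46, the portrait screen aspect on iPhone X
let resultAspect = 2340.0 / 1920.0   // ~1.22, the aspect of the transformed extent above
// The mismatch arises because displayTransform(for:viewportSize:) is defined
// over normalized image coordinates (0...1); applying it directly to a
// full-resolution image scales by the wrong factors.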
Posted on 2019-11-12 11:10:32
This is surprisingly complicated, because displayTransform(for:viewportSize:) expects normalized image coordinates, the coordinates apparently have to be flipped only in portrait mode, and the image needs not only to be transformed but also cropped. The code below works for me; suggestions for improvement would be appreciated.
guard let frame = session.currentFrame else { return }
let imageBuffer = frame.capturedImage
let imageSize = CGSize(width: CVPixelBufferGetWidth(imageBuffer), height: CVPixelBufferGetHeight(imageBuffer))
let viewPort = sceneView.bounds
let viewPortSize = sceneView.bounds.size

let interfaceOrientation: UIInterfaceOrientation
if #available(iOS 13.0, *) {
    interfaceOrientation = self.sceneView.window!.windowScene!.interfaceOrientation
} else {
    interfaceOrientation = UIApplication.shared.statusBarOrientation
}

let image = CIImage(cvImageBuffer: imageBuffer)

// The camera image doesn't match the view rotation and aspect ratio.
// Transform the image:

// 1) Convert to "normalized image coordinates"
let normalizeTransform = CGAffineTransform(scaleX: 1.0 / imageSize.width, y: 1.0 / imageSize.height)

// 2) Flip the Y axis (for some mysterious reason this is only necessary in portrait mode)
let flipTransform = interfaceOrientation.isPortrait
    ? CGAffineTransform(scaleX: -1, y: -1).translatedBy(x: -1, y: -1)
    : .identity

// 3) Apply the transformation provided by ARFrame. This transformation converts:
//    - from normalized image coordinates, which range from (0,0) in the upper-left
//      corner of the image to (1,1) in the lower-right corner,
//    - to view coordinates ("a coordinate space appropriate for rendering the
//      camera image onscreen").
//    See also: https://developer.apple.com/documentation/arkit/arframe/2923543-displaytransform
let displayTransform = frame.displayTransform(for: interfaceOrientation, viewportSize: viewPortSize)

// 4) Convert to view size
let toViewPortTransform = CGAffineTransform(scaleX: viewPortSize.width, y: viewPortSize.height)

// Transform the image and crop it to the viewport
let transformedImage = image
    .transformed(by: normalizeTransform
        .concatenating(flipTransform)
        .concatenating(displayTransform)
        .concatenating(toViewPortTransform))
    .cropped(to: viewPort)

https://stackoverflow.com/questions/58809070
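To actually display or save the result, the cropped CIImage still has to be rendered. A minimal sketch, assuming you want a UIImage (the imageView is hypothetical, and the CIContext should be created once and reused rather than per frame):

import CoreImage
import UIKit

let context = CIContext()  // expensive to construct; create once and reuse
if let cgImage = context.createCGImage(transformedImage, from: transformedImage.extent) {
    let uiImage = UIImage(cgImage: cgImage)
    // e.g. imageView.image = uiImage, for some UIImageView in your view controller
}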