
AVCapturePhoto semanticSegmentationMatte nil without audio input?

Stack Overflow user
Asked on 2020-07-21 05:54:54
1 answer · 124 views · 0 followers · score 1

When I add an audio input to the capture session, the `photoOutput(_:didFinishProcessingPhoto:error:)` callback correctly returns the semantic segmentation mattes. Without the audio input, the returned mattes are nil. Is it possible to avoid adding the audio input, and asking the user for microphone permission, just to get the mattes?

Code language: swift
// MARK: - Session

private func setupSession() {
    captureSession = AVCaptureSession()
    captureSession?.sessionPreset = .photo
    setupInputOutput()
    setupPreviewLayer(view)
    captureSession?.startRunning()
}
// MARK: - Settings

private func setupCamera() {
    
    let supportsHEVC = AVAssetExportSession.allExportPresets()
        .contains(AVAssetExportPresetHEVCHighestQuality)

    settings = supportsHEVC
        ? AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
        : AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
    
    settings!.flashMode = .auto
    settings!.isHighResolutionPhotoEnabled = true
    settings!.previewPhotoFormat = [kCVPixelBufferPixelFormatTypeKey as String: settings!.__availablePreviewPhotoPixelFormatTypes.first ?? NSNumber()]
    settings!.isDepthDataDeliveryEnabled = true
    settings!.isPortraitEffectsMatteDeliveryEnabled = true
    if self.photoOutput?.enabledSemanticSegmentationMatteTypes.isEmpty == false {
        settings!.enabledSemanticSegmentationMatteTypes = self.photoOutput?.enabledSemanticSegmentationMatteTypes ?? [AVSemanticSegmentationMatte.MatteType]()
    }

    settings!.photoQualityPrioritization = self.photoQualityPrioritizationMode
}

private func setupInputOutput() {
    photoOutput = AVCapturePhotoOutput()
    
    guard let captureSession = captureSession  else { return }
    guard let photoOutput = photoOutput else { return }
    
    do {
        captureSession.beginConfiguration()
        captureSession.sessionPreset = .photo
        let devices = self.videoDeviceDiscoverySession.devices
        currentDevice = devices.first(where: { $0.position == .front && $0.deviceType == .builtInTrueDepthCamera })

        guard let videoDevice = currentDevice else {
            captureSession.commitConfiguration()
            return
        }
        
        videoDeviceInput = try AVCaptureDeviceInput(device: videoDevice)

        if captureSession.canAddInput(videoDeviceInput) {
            captureSession.addInput(videoDeviceInput)
        } else {
            captureSession.commitConfiguration()
            return
        }
        
        // Hold the audio device in its own constant; reusing `currentDevice`
        // would overwrite the video device reference.
        guard let audioDevice = AVCaptureDevice.default(for: .audio) else {
            captureSession.commitConfiguration()
            return
        }
        captureDeviceInput = try AVCaptureDeviceInput(device: audioDevice)

        if captureSession.canAddInput(captureDeviceInput) {
            captureSession.addInput(captureDeviceInput)
        } else {
            captureSession.commitConfiguration()
            return
        }
    } catch {
        errorMessage = error.localizedDescription
        print(error.localizedDescription)
        captureSession.commitConfiguration()
        return
    }

    if captureSession.canAddOutput(photoOutput) {
        captureSession.addOutput(photoOutput)

        photoOutput.isHighResolutionCaptureEnabled = true
        photoOutput.isLivePhotoCaptureEnabled = photoOutput.isLivePhotoCaptureSupported
        photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
        photoOutput.isPortraitEffectsMatteDeliveryEnabled = photoOutput.isPortraitEffectsMatteDeliverySupported
        photoOutput.enabledSemanticSegmentationMatteTypes = photoOutput.availableSemanticSegmentationMatteTypes
      
        photoOutput.maxPhotoQualityPrioritization = .balanced
    }
    captureSession.commitConfiguration()
}

private func setupPreviewLayer(_ view: UIView) {
    self.cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession ?? AVCaptureSession())
    self.cameraPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
    self.cameraPreviewLayer?.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
    self.cameraPreviewLayer?.frame = view.frame
    view.layer.insertSublayer(self.cameraPreviewLayer ?? AVCaptureVideoPreviewLayer(), at: 0)
}
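For reference, this is roughly where the nil shows up: the mattes are read from the delivered photo inside the capture delegate. A minimal sketch, where `photoOutput(_:didFinishProcessingPhoto:error:)` and `semanticSegmentationMatte(for:)` are standard AVFoundation API but the class name, loop, and logging are illustrative:

```swift
import AVFoundation

// Illustrative delegate (assumes an existing capture pipeline like the one above).
// semanticSegmentationMatte(for:) returns nil when a matte was not delivered for
// that capture -- the behavior the question describes without an audio input.
final class PhotoDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        if let error = error {
            print("Capture failed: \(error.localizedDescription)")
            return
        }
        for type: AVSemanticSegmentationMatte.MatteType in [.skin, .teeth, .hair] {
            if let matte = photo.semanticSegmentationMatte(for: type) {
                print("Delivered \(type.rawValue) matte: \(matte.mattingImage)")
            } else {
                print("No \(type.rawValue) matte in this photo")
            }
        }
    }
}
```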

Stack Overflow user

Answered on 2021-06-11 07:46:37

I was unable to get any semantic segmentation mattes (SSMs) back at all, with or without an audio input. I am currently developing on an iPhone X. After struggling with this for a while, I asked Apple about it in a one-on-one lab session during WWDC 2021. I was told the API will only expose the portrait effects matte on my device. iPhone 11 and later can get skin, teeth, and hair. The new glasses SSM, which they recently slipped in without announcing it, requires an iPhone 12.
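Given those per-device limits, it is safer to intersect the matte types the app wants with what `AVCapturePhotoOutput.availableSemanticSegmentationMatteTypes` actually reports at runtime than to hard-code device models. A minimal sketch, where `supportedMatteTypes` is a hypothetical helper name:

```swift
import AVFoundation

// Hypothetical helper: keep only the requested matte types that the current
// photo output reports as available on this device.
func supportedMatteTypes(
    requested: [AVSemanticSegmentationMatte.MatteType],
    available: [AVSemanticSegmentationMatte.MatteType]
) -> [AVSemanticSegmentationMatte.MatteType] {
    requested.filter { available.contains($0) }
}

// Usage with a real output (queried at runtime, after session configuration):
// let output = AVCapturePhotoOutput()
// output.enabledSemanticSegmentationMatteTypes = supportedMatteTypes(
//     requested: [.skin, .teeth, .hair],
//     available: output.availableSemanticSegmentationMatteTypes)
```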

Score: 1
Original page content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/63004377
