Video gravity in a CALayer with AVAssetExportSession
Asked by a Stack Overflow user on 2017-06-10 16:05:26
1 answer · 477 views · 0 followers · 4 votes

My app first records a video, then adds some effects to it, and finally exports the result with AVAssetExportSession.

The first issue, the video gravity during recording, was solved by setting the videoGravity property of the AVCaptureVideoPreviewLayer to AVLayerVideoGravityResizeAspectFill.

The second issue, displaying the recorded video, was solved by setting the videoGravity property of the AVPlayerLayer to AVLayerVideoGravityResizeAspectFill.
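For reference, the two fixes described above amount to something like the following (a sketch; `captureSession` and `player` are assumptions standing in for the app's actual objects):

```swift
import AVFoundation

// Preview during recording: fill the layer, cropping any overflow
let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill

// Playback of the recorded clip: same gravity so preview and playback match
let playerLayer = AVPlayerLayer(player: player)
playerLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
```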

Now, however, when I export the video with AVAssetExportSession after adding some effects, the video-gravity problem appears again. Even changing the contentsGravity property of the CALayer has no effect on the output. I should mention that the problem is most obvious on iPad.

Here is an image of how the video is displayed before adding any effects:

As you can see, my fingertip is at the top of the screen (because I have already fixed the gravity issue in the layers inside the app).

But after exporting and saving to the gallery, what I see looks like this:

I know the problem is the video gravity, but I don't know how to fix it. I don't know whether I should change anything while recording, or change the following export code:

    let composition = AVMutableComposition()
    let asset = AVURLAsset(url: videoUrl, options: nil)

    let tracks = asset.tracks(withMediaType: AVMediaTypeVideo)
    let videoTrack = tracks[0]
    let timerange = CMTimeRangeMake(kCMTimeZero, asset.duration)

    let viewSize = parentView.bounds.size
    let trackSize = videoTrack.naturalSize

    // kCMPersistentTrackID_Invalid lets the composition choose an unused track ID
    let compositionVideoTrack = composition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)

    do {
        try compositionVideoTrack.insertTimeRange(timerange, of: videoTrack, at: kCMTimeZero)
    } catch {
        print(error)
    }

    let compositionAudioTrack = composition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: kCMPersistentTrackID_Invalid)

    for audioTrack in asset.tracks(withMediaType: AVMediaTypeAudio) {
        do {
            try compositionAudioTrack.insertTimeRange(audioTrack.timeRange, of: audioTrack, at: kCMTimeZero)
        } catch {
            print(error)
        }
    }

    let videolayer = CALayer()
    videolayer.frame.size = viewSize
    videolayer.contentsGravity = kCAGravityResizeAspectFill

    let parentlayer = CALayer()
    parentlayer.frame.size = viewSize
    parentlayer.contentsGravity = kCAGravityResizeAspectFill

    parentlayer.addSublayer(videolayer)

    let layercomposition = AVMutableVideoComposition()
    layercomposition.frameDuration = CMTimeMake(1, 30)
    layercomposition.renderSize = viewSize
    layercomposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videolayer, in: parentlayer)

    let instruction = AVMutableVideoCompositionInstruction()

    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration)

    let videotrack = composition.tracks(withMediaType: AVMediaTypeVideo)[0]
    let layerinstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videotrack)

    let trackTransform = videoTrack.preferredTransform
    let xScale = viewSize.height / trackSize.width
    let yScale = viewSize.width / trackSize.height

    var exportTransform : CGAffineTransform!
    if (getVideoOrientation(transform: videoTrack.preferredTransform).1 == .up) {
        exportTransform = videoTrack.preferredTransform.translatedBy(x: trackTransform.ty * -1 , y: 0).scaledBy(x: xScale , y: yScale)
    } else {
        exportTransform = CGAffineTransform.init(translationX: viewSize.width, y: 0).rotated(by: .pi/2).scaledBy(x: xScale, y: yScale)
    }

    layerinstruction.setTransform(exportTransform, at: kCMTimeZero)

    instruction.layerInstructions = [layerinstruction]
    layercomposition.instructions = [instruction]

    let filePath = FileHelper.getVideoTimeStampName()
    let exportedUrl = URL(fileURLWithPath: filePath)

    guard let assetExport = AVAssetExportSession(asset: composition, presetName:AVAssetExportPresetHighestQuality) else {delegate?.exportFinished(status: .failed, outputUrl: exportedUrl); return}

    assetExport.videoComposition = layercomposition
    assetExport.outputFileType = AVFileTypeMPEG4
    assetExport.outputURL = exportedUrl
    assetExport.exportAsynchronously(completionHandler: {
        switch assetExport.status {
        case .completed:
            print("video exported successfully")
            self.delegate?.exportFinished(status: .completed, outputUrl: exportedUrl)
            break
        case .failed:
            self.delegate?.exportFinished(status: .failed, outputUrl: exportedUrl)
            print("exporting video failed: \(String(describing: assetExport.error))")
            break
        default :
            print("the video export status is \(assetExport.status)")
            self.delegate?.exportFinished(status: assetExport.status, outputUrl: exportedUrl)
            break
        }
    })
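Note that the transform above scales x and y by different factors (`xScale`, `yScale`), which stretches the frame rather than aspect-filling it. A common way to mimic AVLayerVideoGravityResizeAspectFill at export time is a single uniform scale plus a centering translation. This is a sketch under that assumption, not the original poster's code:

```swift
// Uniform scale: the larger factor fills the render size, cropping the overflow
// (if preferredTransform rotates the track, swap trackSize width/height first)
let fillScale = max(viewSize.width / trackSize.width,
                    viewSize.height / trackSize.height)

// Center the scaled frame inside the render rectangle
let tx = (viewSize.width - trackSize.width * fillScale) / 2
let ty = (viewSize.height - trackSize.height * fillScale) / 2

let fillTransform = videoTrack.preferredTransform
    .concatenating(CGAffineTransform(scaleX: fillScale, y: fillScale))
    .concatenating(CGAffineTransform(translationX: tx, y: ty))

layerinstruction.setTransform(fillTransform, at: kCMTimeZero)
```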

I would appreciate any help.


1 Answer

Answered by a Stack Overflow user on 2017-06-14 17:26:01

If you are using AVLayerVideoGravityResizeAspectFill, the capture preview is scaled to fill the CALayer, so the camera is actually capturing what you see in the second image you posted. You can work around this as follows:

  1. Get the frame as a UIImage
  2. Crop the image to the same size as the CALayer you are using
  3. Upload the cropped image to a server, show it to the user, etc.
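Step 1 can be done with AVAssetImageGenerator. A minimal sketch, assuming `videoUrl` points at the recorded file and a frame at time zero is wanted:

```swift
import AVFoundation
import UIKit

let asset = AVURLAsset(url: videoUrl)
let generator = AVAssetImageGenerator(asset: asset)
generator.appliesPreferredTrackTransform = true // honor the recording orientation

do {
    // Copy a single frame at t = 0 and wrap it in a UIImage
    let cgImage = try generator.copyCGImage(at: kCMTimeZero, actualTime: nil)
    let frameImage = UIImage(cgImage: cgImage)
    // hand frameImage to the crop step below ...
} catch {
    print("frame extraction failed: \(error)")
}
```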

To crop the image, you can use the following extension:

extension UIImage {
    func crop(to: CGSize) -> UIImage {
        guard let cgimage = self.cgImage else { return self }

        let contextImage: UIImage = UIImage(cgImage: cgimage)
        let contextSize: CGSize = contextImage.size

        // Default crop origin and size; adjusted below per orientation
        var posX: CGFloat = 0.0
        var posY: CGFloat = 0.0
        let cropAspect: CGFloat = to.width / to.height

        var cropWidth: CGFloat = to.width
        var cropHeight: CGFloat = to.height

        if to.width > to.height { // Landscape
            cropWidth = contextSize.width
            cropHeight = contextSize.width / cropAspect
            posY = (contextSize.height - cropHeight) / 2
        } else if to.width < to.height { // Portrait
            cropHeight = contextSize.height
            cropWidth = contextSize.height * cropAspect
            posX = (contextSize.width - cropWidth) / 2
        } else { // Square
            if contextSize.width >= contextSize.height { // Square crop from a landscape (or square) image
                cropHeight = contextSize.height
                cropWidth = contextSize.height * cropAspect
                posX = (contextSize.width - cropWidth) / 2
            } else { // Square crop from a portrait image
                cropWidth = contextSize.width
                cropHeight = contextSize.width / cropAspect
                posY = (contextSize.height - cropHeight) / 2
            }
        }

        let rect = CGRect(x: posX, y: posY, width: cropWidth, height: cropHeight)

        // Create a bitmap image from the context using the rect
        let imageRef: CGImage = contextImage.cgImage!.cropping(to: rect)!

        // Create a new image from imageRef, keeping the original orientation
        let cropped = UIImage(cgImage: imageRef, scale: self.scale, orientation: self.imageOrientation)

        // Redraw at the target size
        UIGraphicsBeginImageContextWithOptions(to, true, self.scale)
        cropped.draw(in: CGRect(x: 0, y: 0, width: to.width, height: to.height))
        let resized = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        return resized!
    }
}
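A hypothetical call site for step 2, assuming `frameImage` is the UIImage from step 1 and `videolayer` is the CALayer used in the export code:

```swift
// Crop the frame to the layer's size so it matches what the user saw on screen
let cropped = frameImage.crop(to: videolayer.bounds.size)
```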
0 votes
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/44471040
