  • Month 6, Day 4: AVMutableComposition and AVMutableVideoComposition

    1.

    AVMutableComposition is a mutable subclass of AVComposition you use when you want to create a new composition from existing assets. You can add and remove tracks, and you can add, remove, and scale time ranges.

     

    In other words, AVMutableComposition lets you add and remove tracks.

     

            // Create a new, empty composition from the input asset
            self.mutableComposition = [AVMutableComposition composition];
            
            // Insert the video and audio tracks from the source AVAsset
            if (assetVideoTrack != nil) {
                AVMutableCompositionTrack *compositionVideoTrack = [self.mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
                [compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:assetVideoTrack atTime:insertionPoint error:&error];
                // Insert the same source track a second time, back to back, so the composition plays the video twice
                [compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:assetVideoTrack atTime:[asset duration] error:&error];
    //            [compositionVideoTrack removeTimeRange:CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(1.0, 600))];
            }
            if (assetAudioTrack != nil) {
                AVMutableCompositionTrack *compositionAudioTrack = [self.mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
                [compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:assetAudioTrack atTime:insertionPoint error:&error];
            }
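
    The Apple quote above also mentions scaling and removing time ranges. A minimal sketch of both, assuming the compositionVideoTrack built above (this is not code from the original sample):

            // Stretch the first second of the track to two seconds (slow motion)
            [compositionVideoTrack scaleTimeRange:CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(1.0, 600))
                                       toDuration:CMTimeMakeWithSeconds(2.0, 600)];
            
            // Cut the final second out of the whole composition
            CMTime end = self.mutableComposition.duration;
            [self.mutableComposition removeTimeRange:CMTimeRangeMake(CMTimeSubtract(end, CMTimeMakeWithSeconds(1.0, 600)), CMTimeMakeWithSeconds(1.0, 600))];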

    The AVMutableVideoComposition class is a mutable subclass of AVVideoComposition.

    A video composition describes, for any time in the aggregate time range of its instructions, the number and IDs of video tracks that are to be used in order to produce a composed video frame corresponding to that time. When AV Foundation’s built-in video compositor is used, the instructions an AVVideoComposition comprises can specify a spatial transformation, an opacity value, and a cropping rectangle for each video source, and these can vary over time via simple linear ramping functions.

    An AVMutableVideoCompositionInstruction object represents an operation to be performed by a compositor.

    An AVVideoComposition object maintains an array of instructions to perform its composition.

    An instruction's layerInstructions property is an array of AVVideoCompositionLayerInstruction instances that specify how video frames from source tracks should be layered and composed.

    Tracks are layered in the composition according to the top-to-bottom order of the layerInstructions array; the track with trackID of the first instruction in the array will be layered on top, with the track with the trackID of the second instruction immediately underneath, and so on.

    If the property value is nil, the output is a fill of the background color.
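
    For example, layering two composition tracks comes down purely to array order. A sketch, where trackA, trackB, and instruction are assumed variables rather than names from this post:

            // topLayer is drawn above bottomLayer only because it comes first in the array
            AVMutableVideoCompositionLayerInstruction *topLayer =
                [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:trackA];
            [topLayer setTransform:CGAffineTransformMakeScale(0.5, 0.5) atTime:kCMTimeZero]; // quarter-size inset, picture-in-picture style
            AVMutableVideoCompositionLayerInstruction *bottomLayer =
                [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:trackB];
            instruction.layerInstructions = @[topLayer, bottomLayer];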

     

    AVMutableVideoCompositionLayerInstruction is a mutable subclass of AVVideoCompositionLayerInstruction that is used to modify the transform, cropping, and opacity ramps to apply to a given track in a composition. 
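
    An opacity edit can be expressed as a ramp rather than discrete setOpacity: calls. A sketch of a one-second fade-out, where fadeLayer and clipDuration are assumed:

            // Ramp opacity from fully visible to invisible over the clip's final second
            CMTime fadeStart = CMTimeSubtract(clipDuration, CMTimeMakeWithSeconds(1.0, 600));
            [fadeLayer setOpacityRampFromStartOpacity:1.0
                                         toEndOpacity:0.0
                                            timeRange:CMTimeRangeMake(fadeStart, CMTimeMakeWithSeconds(1.0, 600))];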

     

     

    What actually modifies a track's transform and opacity is AVMutableVideoCompositionLayerInstruction. An AVMutableVideoCompositionInstruction holds an array of AVMutableVideoCompositionLayerInstruction objects, which layer the tracks from top to bottom. An AVVideoComposition, in turn, contains an array of AVMutableVideoCompositionInstruction objects.

     

     

            // Build a pass-through video composition
            self.mutableVideoComposition = [AVMutableVideoComposition videoComposition];
            self.mutableVideoComposition.frameDuration = CMTimeMake(1, 30); // 30 fps
            self.mutableVideoComposition.renderSize = assetVideoTrack.naturalSize;
            
            AVMutableVideoCompositionInstruction *passThroughInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
            // The instruction must cover both back-to-back copies of the source video
            passThroughInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeAdd([asset duration], [asset duration]));
            
            // The layer instruction should reference the composition's own video track,
            // not the original asset's track
            AVAssetTrack *videoTrack = [self.mutableComposition tracksWithMediaType:AVMediaTypeVideo][0];
            AVMutableVideoCompositionLayerInstruction *passThroughLayer = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
            // Step changes: invisible at 0.1 s, fully visible again at 2 s
            [passThroughLayer setOpacity:0.0 atTime:CMTimeMakeWithSeconds(0.1, 600)];
            [passThroughLayer setOpacity:1.0 atTime:CMTimeMakeWithSeconds(2, 600)];
            
//            [passThroughLayer setTransformRampFromStartTransform:CGAffineTransformIdentity toEndTransform:CGAffineTransformMakeTranslation(-[UIScreen mainScreen].bounds.size.width, 0) timeRange:CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(1.0, 600))];
            
            passThroughInstruction.layerInstructions = @[passThroughLayer];
            self.mutableVideoComposition.instructions = @[passThroughInstruction];
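
    To see the result, both objects are handed to playback or export together. A minimal sketch using the two properties built above (outputURL is an assumed variable):

            // Playback: wrap the composition in a player item and attach the video composition
            AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:self.mutableComposition];
            playerItem.videoComposition = self.mutableVideoComposition;
            
            // Export: the same pair drives AVAssetExportSession
            AVAssetExportSession *session = [[AVAssetExportSession alloc] initWithAsset:self.mutableComposition
                                                                             presetName:AVAssetExportPresetHighestQuality];
            session.videoComposition = self.mutableVideoComposition;
            session.outputURL = outputURL;
            session.outputFileType = AVFileTypeQuickTimeMovie;
            [session exportAsynchronouslyWithCompletionHandler:^{
                NSLog(@"export finished with status %ld", (long)session.status);
            }];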

     2.

    https://developer.apple.com/library/content/samplecode/RosyWriter/Introduction/Intro.html

    1) Video capture

    - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
    {
        // ... hand the sample buffer to the renderer and the recorder, as shown below

     

    2) Image data

    CVPixelBufferRef sourcePixelBuffer = CMSampleBufferGetImageBuffer( sampleBuffer );

    renderedPixelBuffer = [_renderer copyRenderedPixelBuffer:sourcePixelBuffer];
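
    The renderer's copyRenderedPixelBuffer: is where the per-frame effect happens. RosyWriter ships CPU, OpenGL ES, and Core Image renderers; the following is only a rough sketch of the CPU idea (zeroing the green channel of a BGRA buffer, operating in place for brevity), not the sample's exact code:

    - (CVPixelBufferRef)copyRenderedPixelBuffer:(CVPixelBufferRef)pixelBuffer
    {
        CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
        size_t width = CVPixelBufferGetWidth( pixelBuffer );
        size_t height = CVPixelBufferGetHeight( pixelBuffer );
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow( pixelBuffer );
        uint8_t *base = (uint8_t *)CVPixelBufferGetBaseAddress( pixelBuffer );
        for ( size_t row = 0; row < height; row++ ) {
            uint8_t *pixel = base + row * bytesPerRow;
            for ( size_t col = 0; col < width; col++ ) {
                pixel[1] = 0; // BGRA layout: byte 1 is green
                pixel += 4;
            }
        }
        CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
        return (CVPixelBufferRef)CFRetain( pixelBuffer ); // caller releases, per the "copy" contract
    }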

     

    3) Convert the data and write it out

    	CMSampleBufferRef sampleBuffer = NULL;
    	
    	CMSampleTimingInfo timingInfo = {0,};
    	timingInfo.duration = kCMTimeInvalid;
    	timingInfo.decodeTimeStamp = kCMTimeInvalid;
    	timingInfo.presentationTimeStamp = presentationTime;
    	
    	OSStatus err = CMSampleBufferCreateForImageBuffer( kCFAllocatorDefault, pixelBuffer, true, NULL, NULL, _videoTrackSourceFormatDescription, &timingInfo, &sampleBuffer );
    	if ( sampleBuffer ) {
    		[self appendSampleBuffer:sampleBuffer ofMediaType:AVMediaTypeVideo];
    		CFRelease( sampleBuffer );
    	}
    	else {
    		NSLog( @"CMSampleBufferCreateForImageBuffer failed (%i)", (int)err );
    	}
    
    ...
    			AVAssetWriterInput *input = ( mediaType == AVMediaTypeVideo ) ? _videoInput : _audioInput;
    			
    			if ( input.readyForMoreMediaData )
    			{
    				BOOL success = [input appendSampleBuffer:sampleBuffer];
    				if ( ! success ) {
    					NSError *error = _assetWriter.error;
    					@synchronized( self ) {
    						[self transitionToStatus:MovieRecorderStatusFailed error:error];
    					}
    				}
    			}
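
    For the appendSampleBuffer: path to work, the AVAssetWriter and its inputs must be configured and started before any buffers arrive. A condensed sketch of that setup; movieURL, the compression settings, and firstSampleTime are assumptions, not the sample's exact values:

    	NSError *error = nil;
    	_assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL fileType:AVFileTypeQuickTimeMovie error:&error];
    	
    	// Compression settings would normally be derived from the source format description
    	NSDictionary *videoSettings = @{ AVVideoCodecKey : AVVideoCodecH264,
    	                                 AVVideoWidthKey : @(1280),
    	                                 AVVideoHeightKey : @(720) };
    	_videoInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
    	_videoInput.expectsMediaDataInRealTime = YES; // required for live capture
    	[_assetWriter addInput:_videoInput];
    	
    	[_assetWriter startWriting];
    	// The session must start at the timestamp of the first buffer that will be appended
    	[_assetWriter startSessionAtSourceTime:firstSampleTime];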
    

      

  • Original post: https://www.cnblogs.com/javastart/p/6500789.html