  • FFmpeg structures (6): AVFrame and its related functions

    Contents

    1. Definition of the AVFrame structure

    2. Functions related to AVFrame

        2.1 av_frame_alloc()

        2.2 avpicture_fill()

        2.3 av_frame_free()

        2.4 av_frame_ref()

        2.5 av_frame_clone()

         An AVFrame holds the raw data produced by decoding. In decoding, the AVFrame is the decoder's output; in encoding, it is the encoder's input. (In the pipeline diagram that originally accompanied this article, the "decoded frames" stage carries data of type AVFrame.)

          The AVFrame structure describes decoded (raw) audio or video data.
          An AVFrame is typically allocated once and then reused multiple times to hold different data (for example, a single AVFrame holding the frames received from a decoder). In that case, av_frame_unref() frees any references held by the frame and resets it to its initial clean state before it is reused.
          The data described by an AVFrame is usually reference-counted through the AVBuffer API. The underlying buffer references are stored in AVFrame.buf or AVFrame.extended_buf. The frame is considered reference counted if at least one reference is set, i.e. if AVFrame.buf[0] != NULL. In that case, every data plane must be contained in one of the buffers in AVFrame.buf or AVFrame.extended_buf. There may be a single buffer for all the data, one separate buffer for each plane, or anything in between.
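          A minimal sketch of that reuse pattern (the helper name is hypothetical; it assumes dec_ctx is an opened decoder context that packets have already been sent to with avcodec_send_packet()):

    #include <libavcodec/avcodec.h>

    // Minimal sketch: one AVFrame reused across many decoded frames.
    static int drain_decoder(AVCodecContext *dec_ctx)
    {
        AVFrame *frame = av_frame_alloc();
        if (!frame)
            return AVERROR(ENOMEM);

        while (avcodec_receive_frame(dec_ctx, frame) >= 0) {
            /* ... use frame->data / frame->linesize here ... */
            av_frame_unref(frame);  /* drop the references so the frame can be reused */
        }

        av_frame_free(&frame);
        return 0;
    }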

    1. Definition of the AVFrame structure
          The AVFrame structure is defined in libavutil/frame.h, as shown below:

    /**
     * This structure describes decoded (raw) audio or video data.
     *
     * AVFrame must be allocated using av_frame_alloc(). Note that this only
     * allocates the AVFrame itself, the buffers for the data must be managed
     * through other means (see below).
     * AVFrame must be freed with av_frame_free().
     *
     * AVFrame is typically allocated once and then reused multiple times to hold
     * different data (e.g. a single AVFrame to hold frames received from a
     * decoder). In such a case, av_frame_unref() will free any references held by
     * the frame and reset it to its original clean state before it
     * is reused again.
     *
     * The data described by an AVFrame is usually reference counted through the
     * AVBuffer API. The underlying buffer references are stored in AVFrame.buf /
     * AVFrame.extended_buf. An AVFrame is considered to be reference counted if at
     * least one reference is set, i.e. if AVFrame.buf[0] != NULL. In such a case,
     * every single data plane must be contained in one of the buffers in
     * AVFrame.buf or AVFrame.extended_buf.
     * There may be a single buffer for all the data, or one separate buffer for
     * each plane, or anything in between.
     *
     * sizeof(AVFrame) is not a part of the public ABI, so new fields may be added
     * to the end with a minor bump.
     *
     * Fields can be accessed through AVOptions, the name string used, matches the
     * C structure field name for fields accessible through AVOptions. The AVClass
     * for AVFrame can be obtained from avcodec_get_frame_class()
     */
    typedef struct AVFrame {
    #define AV_NUM_DATA_POINTERS 8
        /**
         * pointer to the picture/channel planes.
         * This might be different from the first allocated byte
         *
         * Some decoders access areas outside 0,0 - width,height, please
         * see avcodec_align_dimensions2(). Some filters and swscale can read
         * up to 16 bytes beyond the planes, if these filters are to be used,
         * then 16 extra bytes must be allocated.
         *
         * NOTE: Except for hwaccel formats, pointers not needed by the format
         * MUST be set to NULL.
         */
        uint8_t *data[AV_NUM_DATA_POINTERS];
     
        /**
         * For video, size in bytes of each picture line.
         * For audio, size in bytes of each plane.
         *
         * For audio, only linesize[0] may be set. For planar audio, each channel
         * plane must be the same size.
         *
         * For video the linesizes should be multiples of the CPUs alignment
         * preference, this is 16 or 32 for modern desktop CPUs.
         * Some code requires such alignment other code can be slower without
         * correct alignment, for yet other it makes no difference.
         *
         * @note The linesize may be larger than the size of usable data -- there
         * may be extra padding present for performance reasons.
         */
        int linesize[AV_NUM_DATA_POINTERS];
     
        /**
         * pointers to the data planes/channels.
         *
         * For video, this should simply point to data[].
         *
         * For planar audio, each channel has a separate data pointer, and
         * linesize[0] contains the size of each channel buffer.
         * For packed audio, there is just one data pointer, and linesize[0]
         * contains the total size of the buffer for all channels.
         *
         * Note: Both data and extended_data should always be set in a valid frame,
         * but for planar audio with more channels that can fit in data,
         * extended_data must be used in order to access all channels.
         */
        uint8_t **extended_data;
     
        /**
         * @name Video dimensions
         * Video frames only. The coded dimensions (in pixels) of the video frame,
         * i.e. the size of the rectangle that contains some well-defined values.
         *
         * @note The part of the frame intended for display/presentation is further
         * restricted by the @ref cropping "Cropping rectangle".
         * @{
         */
        int width, height;
        /**
         * @}
         */
     
        /**
         * number of audio samples (per channel) described by this frame
         */
        int nb_samples;
     
        /**
         * format of the frame, -1 if unknown or unset
         * Values correspond to enum AVPixelFormat for video frames,
         * enum AVSampleFormat for audio)
         */
        int format;
     
        /**
         * 1 -> keyframe, 0-> not
         */
        int key_frame;
     
        /**
         * Picture type of the frame.
         */
        enum AVPictureType pict_type;
     
        /**
         * Sample aspect ratio for the video frame, 0/1 if unknown/unspecified.
         */
        AVRational sample_aspect_ratio;
     
        /**
         * Presentation timestamp in time_base units (time when frame should be shown to user).
         */
        int64_t pts;
     
    #if FF_API_PKT_PTS
        /**
         * PTS copied from the AVPacket that was decoded to produce this frame.
         * @deprecated use the pts field instead
         */
        attribute_deprecated
        int64_t pkt_pts;
    #endif
     
        /**
         * DTS copied from the AVPacket that triggered returning this frame. (if frame threading isn't used)
         * This is also the Presentation time of this AVFrame calculated from
         * only AVPacket.dts values without pts values.
         */
        int64_t pkt_dts;
     
        /**
         * picture number in bitstream order
         */
        int coded_picture_number;
        /**
         * picture number in display order
         */
        int display_picture_number;
     
        /**
         * quality (between 1 (good) and FF_LAMBDA_MAX (bad))
         */
        int quality;
     
        /**
         * for some private data of the user
         */
        void *opaque;
     
    #if FF_API_ERROR_FRAME
        /**
         * @deprecated unused
         */
        attribute_deprecated
        uint64_t error[AV_NUM_DATA_POINTERS];
    #endif
     
        /**
         * When decoding, this signals how much the picture must be delayed.
         * extra_delay = repeat_pict / (2*fps)
         */
        int repeat_pict;
     
        /**
         * The content of the picture is interlaced.
         */
        int interlaced_frame;
     
        /**
         * If the content is interlaced, is top field displayed first.
         */
        int top_field_first;
     
        /**
         * Tell user application that palette has changed from previous frame.
         */
        int palette_has_changed;
     
        /**
         * reordered opaque 64 bits (generally an integer or a double precision float
         * PTS but can be anything).
         * The user sets AVCodecContext.reordered_opaque to represent the input at
         * that time,
         * the decoder reorders values as needed and sets AVFrame.reordered_opaque
         * to exactly one of the values provided by the user through AVCodecContext.reordered_opaque
         */
        int64_t reordered_opaque;
     
        /**
         * Sample rate of the audio data.
         */
        int sample_rate;
     
        /**
         * Channel layout of the audio data.
         */
        uint64_t channel_layout;
     
        /**
         * AVBuffer references backing the data for this frame. If all elements of
         * this array are NULL, then this frame is not reference counted. This array
         * must be filled contiguously -- if buf[i] is non-NULL then buf[j] must
         * also be non-NULL for all j < i.
         *
         * There may be at most one AVBuffer per data plane, so for video this array
         * always contains all the references. For planar audio with more than
         * AV_NUM_DATA_POINTERS channels, there may be more buffers than can fit in
         * this array. Then the extra AVBufferRef pointers are stored in the
         * extended_buf array.
         */
        AVBufferRef *buf[AV_NUM_DATA_POINTERS];
     
        /**
         * For planar audio which requires more than AV_NUM_DATA_POINTERS
         * AVBufferRef pointers, this array will hold all the references which
         * cannot fit into AVFrame.buf.
         *
         * Note that this is different from AVFrame.extended_data, which always
         * contains all the pointers. This array only contains the extra pointers,
         * which cannot fit into AVFrame.buf.
         *
         * This array is always allocated using av_malloc() by whoever constructs
         * the frame. It is freed in av_frame_unref().
         */
        AVBufferRef **extended_buf;
        /**
         * Number of elements in extended_buf.
         */
        int        nb_extended_buf;
     
        AVFrameSideData **side_data;
        int            nb_side_data;
     
    /**
     * @defgroup lavu_frame_flags AV_FRAME_FLAGS
     * @ingroup lavu_frame
     * Flags describing additional frame properties.
     *
     * @{
     */
     
    /**
     * The frame data may be corrupted, e.g. due to decoding errors.
     */
    #define AV_FRAME_FLAG_CORRUPT       (1 << 0)
    /**
     * A flag to mark the frames which need to be decoded, but shouldn't be output.
     */
    #define AV_FRAME_FLAG_DISCARD   (1 << 2)
    /**
     * @}
     */
     
        /**
         * Frame flags, a combination of @ref lavu_frame_flags
         */
        int flags;
     
        /**
         * MPEG vs JPEG YUV range.
         * - encoding: Set by user
         * - decoding: Set by libavcodec
         */
        enum AVColorRange color_range;
     
        enum AVColorPrimaries color_primaries;
     
        enum AVColorTransferCharacteristic color_trc;
     
        /**
         * YUV colorspace type.
         * - encoding: Set by user
         * - decoding: Set by libavcodec
         */
        enum AVColorSpace colorspace;
     
        enum AVChromaLocation chroma_location;
     
        /**
         * frame timestamp estimated using various heuristics, in stream time base
         * - encoding: unused
         * - decoding: set by libavcodec, read by user.
         */
        int64_t best_effort_timestamp;
     
        /**
         * reordered pos from the last AVPacket that has been input into the decoder
         * - encoding: unused
         * - decoding: Read by user.
         */
        int64_t pkt_pos;
     
        /**
         * duration of the corresponding packet, expressed in
         * AVStream->time_base units, 0 if unknown.
         * - encoding: unused
         * - decoding: Read by user.
         */
        int64_t pkt_duration;
     
        /**
         * metadata.
         * - encoding: Set by user.
         * - decoding: Set by libavcodec.
         */
        AVDictionary *metadata;
     
        /**
         * decode error flags of the frame, set to a combination of
         * FF_DECODE_ERROR_xxx flags if the decoder produced a frame, but there
         * were errors during the decoding.
         * - encoding: unused
         * - decoding: set by libavcodec, read by user.
         */
        int decode_error_flags;
    #define FF_DECODE_ERROR_INVALID_BITSTREAM   1
    #define FF_DECODE_ERROR_MISSING_REFERENCE   2
    #define FF_DECODE_ERROR_CONCEALMENT_ACTIVE  4
    #define FF_DECODE_ERROR_DECODE_SLICES       8
     
        /**
         * number of audio channels, only used for audio.
         * - encoding: unused
         * - decoding: Read by user.
         */
        int channels;
     
        /**
         * size of the corresponding packet containing the compressed
         * frame.
         * It is set to a negative value if unknown.
         * - encoding: unused
         * - decoding: set by libavcodec, read by user.
         */
        int pkt_size;
     
    #if FF_API_FRAME_QP
        /**
         * QP table
         */
        attribute_deprecated
        int8_t *qscale_table;
        /**
         * QP store stride
         */
        attribute_deprecated
        int qstride;
     
        attribute_deprecated
        int qscale_type;
     
        attribute_deprecated
        AVBufferRef *qp_table_buf;
    #endif
        /**
         * For hwaccel-format frames, this should be a reference to the
         * AVHWFramesContext describing the frame.
         */
        AVBufferRef *hw_frames_ctx;
     
        /**
         * AVBufferRef for free use by the API user. FFmpeg will never check the
         * contents of the buffer ref. FFmpeg calls av_buffer_unref() on it when
         * the frame is unreferenced. av_frame_copy_props() calls create a new
         * reference with av_buffer_ref() for the target frame's opaque_ref field.
         *
         * This is unrelated to the opaque field, although it serves a similar
         * purpose.
         */
        AVBufferRef *opaque_ref;
     
        /**
         * @anchor cropping
         * @name Cropping
         * Video frames only. The number of pixels to discard from the the
         * top/bottom/left/right border of the frame to obtain the sub-rectangle of
         * the frame intended for presentation.
         * @{
         */
        size_t crop_top;
        size_t crop_bottom;
        size_t crop_left;
        size_t crop_right;
        /**
         * @}
         */
     
        /**
         * AVBufferRef for internal use by a single libav* library.
         * Must not be used to transfer data between libraries.
         * Has to be NULL when ownership of the frame leaves the respective library.
         *
         * Code outside the FFmpeg libs should never check or change the contents of the buffer ref.
         *
         * FFmpeg calls av_buffer_unref() on it when the frame is unreferenced.
         * av_frame_copy_props() calls create a new reference with av_buffer_ref()
         * for the target frame's private_ref field.
         */
        AVBufferRef *private_ref;
    } AVFrame;
    Whenever you decode with FFmpeg you will inevitably use the AVFrame structure. Its most important fields are:
          @ data[AV_NUM_DATA_POINTERS]: array of pointers to the decoded raw media data.
          For video, with planar formats (e.g. YUV420P) the Y, U and V planes are stored separately in data[0], data[1], data[2], ...; with packed formats everything is stored in data[0].
          For audio, the data array holds per-channel data, e.g. data[0] and data[1] correspond to channel 1, channel 2, and so on.
         @ linesize[AV_NUM_DATA_POINTERS]: array of line sizes (strides) for the video or audio data.
         For video: the size in bytes of each picture line. The linesizes should be multiples of the CPU's alignment preference, i.e. 16 or 32 for modern desktop CPUs.
         For audio: the size in bytes of each plane. Only linesize[0] is used. For planar audio, every channel plane must have the same size.
        @ extended_data:
        For video data: it simply points to data[].
        For audio data: with planar audio, each channel has a separate data pointer and linesize[0] contains the size of each channel buffer. With packed audio there is just one data pointer, and linesize[0] contains the total buffer size for all channels.
        @ key_frame: whether the current frame is a keyframe; 1 means yes, 0 means no.
        @ pts: presentation timestamp in time_base units (the time when the frame should be shown to the user).
        @ pkt_dts: the decoding timestamp of the packet this frame came from; it is copied from the DTS of the packet that was decoded to produce this frame. If that packet only carries a dts and no pts, this value also serves as the frame's pts.
        @ pict_type: video frame type (I, B, P, etc.).
        @ coded_picture_number: picture number in bitstream (coded) order.
        @ display_picture_number: picture number in display order.
        @ interlaced_frame: flag indicating whether the content is interlaced.
        @ buf[AV_NUM_DATA_POINTERS]:
        The frame's data can be managed through AVBufferRef, which provides the AVBuffer reference mechanism. This brings in the concept of reference-counted buffers:
        AVBuffer is a commonly used buffer type in FFmpeg; such buffers are reference counted.
        AVBufferRef is a thin wrapper around an AVBuffer whose main job is the reference counting, providing a safety mechanism. Users should not access an AVBuffer directly; they should go through an AVBufferRef instead.
    Many basic FFmpeg data structures contain AVBufferRef members and use AVBuffer buffers through them.
    For more on this, see the article "FFmpeg数据结构AVBuffer" (linked in the references).
         The frame's data buffers (AVBuffer) back the data member described above, and users should not use data directly but access it indirectly through buf. What, then, is extended_data for? (Its relationship to extended_buf is clarified below.)
         If all elements of buf[] are NULL, the frame is not reference counted. buf[] must be filled contiguously: if buf[i] is non-NULL, then buf[j] must also be non-NULL for every j < i.
         There is at most one AVBuffer per data plane; an AVBufferRef pointer points to one AVBuffer, and "an AVBuffer reference" simply means such an AVBufferRef pointer.
         For video, buf[] always contains all the AVBufferRef pointers. For planar audio with more than AV_NUM_DATA_POINTERS channels, buf[] may not be able to hold all the AVBufferRef pointers; the extra ones are stored in the extended_buf array.
         @ extended_buf & nb_extended_buf:
         For planar audio with more than AV_NUM_DATA_POINTERS channels, buf[] may not hold all the AVBufferRef pointers; the extra AVBufferRef pointers are stored in the extended_buf array.
         Note the difference from AVFrame.extended_data: extended_data contains all the plane pointers, whereas extended_buf only contains the pointers that do not fit into AVFrame.buf.
         extended_buf is allocated with av_malloc() by whoever constructs the frame (i.e. when the frame's buffers are set up), and it is freed by av_frame_unref().
         nb_extended_buf is the number of elements in extended_buf.
         @ pkt_pos: byte position, in the input file, of the last packet that was fed into the decoder.
         A short usage sketch of data[] and linesize[] follows this list.
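         A minimal sketch of how data[] and linesize[] are typically walked for a decoded AV_PIX_FMT_YUV420P frame (the helper name and the luma-sum computation are purely illustrative):

    #include <stdint.h>
    #include <libavutil/frame.h>

    // Minimal sketch: read the luma (Y) plane of a decoded YUV420P frame.
    static uint64_t sum_luma(const AVFrame *frame)
    {
        uint64_t sum = 0;
        if (frame->format != AV_PIX_FMT_YUV420P)
            return 0;
        for (int y = 0; y < frame->height; y++) {
            const uint8_t *row = frame->data[0] + y * frame->linesize[0];
            for (int x = 0; x < frame->width; x++)   /* only "width" bytes per row are valid */
                sum += row[x];                       /* linesize[0] may include extra padding */
        }
        /* Chroma: frame->data[1] (U) and frame->data[2] (V) are (width/2) x (height/2)
         * planes with strides frame->linesize[1] and frame->linesize[2]. */
        return sum;
    }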

    2. Functions related to AVFrame
          An AVFrame is initialized with av_frame_alloc() and destroyed with av_frame_free(). One thing to note: older versions of FFmpeg used avcodec_alloc_frame() to initialize an AVFrame, but at the time of writing avcodec_alloc_frame() has already been marked as deprecated, so this article analyses the newer API, av_frame_alloc().

    2.1 av_frame_alloc()
         av_frame_alloc() is declared in libavutil/frame.h, as shown below.

    /**
     * Allocate an AVFrame and set its fields to default values.  The resulting
     * struct must be freed using av_frame_free().
     *
     * @return An AVFrame filled with default values or NULL on failure.
     *
     * @note this only allocates the AVFrame itself, not the data buffers. Those
     * must be allocated through other means, e.g. with av_frame_get_buffer() or
     * manually.
     */
    AVFrame *av_frame_alloc(void);
    av_frame_alloc() is defined in libavutil/frame.c. The code is shown below.

    AVFrame *av_frame_alloc(void)
    {
        AVFrame *frame = av_mallocz(sizeof(*frame));
     
        if (!frame)
            return NULL;
     
        frame->extended_data = NULL;
        get_frame_defaults(frame);
     
        return frame;
    }
          As the code shows, av_frame_alloc() first calls av_mallocz() to allocate memory for the AVFrame structure, and then calls get_frame_defaults() to set some default values. get_frame_defaults() is defined as follows.

    static void get_frame_defaults(AVFrame *frame)
    {
        if (frame->extended_data != frame->data)
            av_freep(&frame->extended_data);
     
        memset(frame, 0, sizeof(*frame));
     
        frame->pts                   =
        frame->pkt_dts               = AV_NOPTS_VALUE;
    #if FF_API_PKT_PTS
    FF_DISABLE_DEPRECATION_WARNINGS
        frame->pkt_pts               = AV_NOPTS_VALUE;
    FF_ENABLE_DEPRECATION_WARNINGS
    #endif
        frame->best_effort_timestamp = AV_NOPTS_VALUE;
        frame->pkt_duration        = 0;
        frame->pkt_pos             = -1;
        frame->pkt_size            = -1;
        frame->key_frame           = 1;
        frame->sample_aspect_ratio = (AVRational){ 0, 1 };
        frame->format              = -1; /* unknown */
        frame->extended_data       = frame->data;
        frame->color_primaries     = AVCOL_PRI_UNSPECIFIED;
        frame->color_trc           = AVCOL_TRC_UNSPECIFIED;
        frame->colorspace          = AVCOL_SPC_UNSPECIFIED;
        frame->color_range         = AVCOL_RANGE_UNSPECIFIED;
        frame->chroma_location     = AVCHROMA_LOC_UNSPECIFIED;
        frame->flags               = 0;
    }
          As can be seen from the code of av_frame_alloc(), the function does not allocate space for the frame's pixel data. The space for an AVFrame's pixel data therefore has to be allocated separately, for example with avpicture_fill() or av_image_fill_arrays().
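          Besides those two, the header comment quoted earlier also points to av_frame_get_buffer(), which allocates reference-counted buffers owned by the frame. A minimal sketch (the helper name and the width/height/format values are placeholders):

    #include <libavutil/frame.h>

    // Minimal sketch: allocate an AVFrame plus reference-counted data buffers.
    static AVFrame *alloc_video_frame(void)
    {
        AVFrame *frame = av_frame_alloc();
        if (!frame)
            return NULL;

        frame->format = AV_PIX_FMT_YUV420P;  /* placeholder values */
        frame->width  = 1280;
        frame->height = 720;

        if (av_frame_get_buffer(frame, 0) < 0) {  /* 0: let FFmpeg choose the alignment */
            av_frame_free(&frame);
            return NULL;
        }
        /* frame->data[], frame->linesize[] and frame->buf[] are now filled in. */
        return frame;
    }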

    2.2 avpicture_fill()
    libavcodec/avcodec.h

    /**
     * @deprecated use av_image_fill_arrays() instead.
     */
    attribute_deprecated
    int avpicture_fill(AVPicture *picture, const uint8_t *ptr,
                       enum AVPixelFormat pix_fmt, int width, int height);
                       
     
    libavutil/imgutils.h
    /**
     * Setup the data pointers and linesizes based on the specified image
     * parameters and the provided array.
     *
     * The fields of the given image are filled in by using the src
     * address which points to the image data buffer. Depending on the
     * specified pixel format, one or multiple image data pointers and
     * line sizes will be set.  If a planar format is specified, several
     * pointers will be set pointing to the different picture planes and
     * the line sizes of the different planes will be stored in the
     * lines_sizes array. Call with src == NULL to get the required
     * size for the src buffer.
     *
     * To allocate the buffer and fill in the dst_data and dst_linesize in
     * one call, use av_image_alloc().
     *
     * @param dst_data      data pointers to be filled in
     * @param dst_linesize  linesizes for the image in dst_data to be filled in
     * @param src           buffer which will contain or contains the actual image data, can be NULL
     * @param pix_fmt       the pixel format of the image
     * @param width         the width of the image in pixels
     * @param height        the height of the image in pixels
     * @param align         the value used in src for linesize alignment
     * @return the size in bytes required for src, a negative error code
     * in case of failure
     */
    int av_image_fill_arrays(uint8_t *dst_data[4], int dst_linesize[4],
                             const uint8_t *src,
                             enum AVPixelFormat pix_fmt, int width, int height, int align);
     
    libavutil/imgutils.c
    int av_image_fill_arrays(uint8_t *dst_data[4], int dst_linesize[4],
                             const uint8_t *src, enum AVPixelFormat pix_fmt,
                             int width, int height, int align)
    {
        int ret, i;
     
        ret = av_image_check_size(width, height, 0, NULL);
        if (ret < 0)
            return ret;
     
        ret = av_image_fill_linesizes(dst_linesize, pix_fmt, width);
        if (ret < 0)
            return ret;
     
        for (i = 0; i < 4; i++)
            dst_linesize[i] = FFALIGN(dst_linesize[i], align);
     
        return av_image_fill_pointers(dst_data, pix_fmt, height, (uint8_t *)src, dst_linesize);
    }
          av_image_fill_arrays() calls three helper functions: av_image_check_size(), av_image_fill_linesizes() and av_image_fill_pointers(). av_image_check_size() checks that the supplied width and height are reasonable, i.e. neither too large nor negative. av_image_fill_linesizes() fills in dst_linesize, and av_image_fill_pointers() fills in dst_data. Their definitions are relatively simple and are not analysed in detail here.
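          A hedged usage sketch of av_image_fill_arrays() itself (the helper name and the width/height/format values are placeholders; it assumes "frame" was obtained with av_frame_alloc(), queries the required size with av_image_get_buffer_size(), and leaves ownership of the buffer with the caller):

    #include <libavutil/frame.h>
    #include <libavutil/imgutils.h>
    #include <libavutil/mem.h>

    // Minimal sketch: point an AVFrame's data/linesize at a manually allocated buffer.
    static uint8_t *attach_buffer(AVFrame *frame)
    {
        enum AVPixelFormat pix_fmt = AV_PIX_FMT_YUV420P;   /* placeholder values */
        int width = 1280, height = 720, align = 1;

        int size = av_image_get_buffer_size(pix_fmt, width, height, align);
        if (size < 0)
            return NULL;

        uint8_t *buffer = av_malloc(size);
        if (!buffer)
            return NULL;

        if (av_image_fill_arrays(frame->data, frame->linesize,
                                 buffer, pix_fmt, width, height, align) < 0) {
            av_free(buffer);
            return NULL;
        }
        /* The frame does NOT own "buffer" (frame->buf[] stays NULL), so the caller
         * must keep the returned pointer and av_free() it when the frame is done. */
        return buffer;
    }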
          av_image_check_size() is shown below.

    int av_image_check_size(unsigned int w, unsigned int h, int log_offset, void *log_ctx)
    {
        return av_image_check_size2(w, h, INT64_MAX, AV_PIX_FMT_NONE, log_offset, log_ctx);
    }
     
    int av_image_check_size2(unsigned int w, unsigned int h, int64_t max_pixels, enum AVPixelFormat pix_fmt, int log_offset, void *log_ctx)
    {
        ImgUtils imgutils = {
            .class      = &imgutils_class,
            .log_offset = log_offset,
            .log_ctx    = log_ctx,
        };
        int64_t stride = av_image_get_linesize(pix_fmt, w, 0);
        if (stride <= 0)
            stride = 8LL*w;
        stride += 128*8;
     
        if ((int)w<=0 || (int)h<=0 || stride >= INT_MAX || stride*(uint64_t)(h+128) >= INT_MAX) {
            av_log(&imgutils, AV_LOG_ERROR, "Picture size %ux%u is invalid\n", w, h);
            return AVERROR(EINVAL);
        }
     
        if (max_pixels < INT64_MAX) {
            if (w*(int64_t)h > max_pixels) {
                av_log(&imgutils, AV_LOG_ERROR,
                        "Picture size %ux%u exceeds specified max pixel count %"PRId64", see the documentation if you wish to increase it ",
                        w, h, max_pixels);
                return AVERROR(EINVAL);
            }
        }
     
        return 0;
    }
          av_image_fill_linesizes() is shown below.

    int av_image_fill_linesizes(int linesizes[4], enum AVPixelFormat pix_fmt, int width)
    {
        int i, ret;
        const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(pix_fmt);
        int max_step     [4];       /* max pixel step for each plane */
        int max_step_comp[4];       /* the component for each plane which has the max pixel step */
     
        memset(linesizes, 0, 4*sizeof(linesizes[0]));
     
        if (!desc || desc->flags & AV_PIX_FMT_FLAG_HWACCEL)
            return AVERROR(EINVAL);
     
        av_image_fill_max_pixsteps(max_step, max_step_comp, desc);
        for (i = 0; i < 4; i++) {
            if ((ret = image_get_linesize(width, i, max_step[i], max_step_comp[i], desc)) < 0)
                return ret;
            linesizes[i] = ret;
        }
     
        return 0;
    }
          av_image_fill_pointers() is shown below.

    int av_image_fill_pointers(uint8_t *data[4], enum AVPixelFormat pix_fmt, int height,
                               uint8_t *ptr, const int linesizes[4])
    {
        int i, total_size, size[4] = { 0 }, has_plane[4] = { 0 };
     
        const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(pix_fmt);
        memset(data     , 0, sizeof(data[0])*4);
     
        if (!desc || desc->flags & AV_PIX_FMT_FLAG_HWACCEL)
            return AVERROR(EINVAL);
     
        data[0] = ptr;
        if (linesizes[0] > (INT_MAX - 1024) / height)
            return AVERROR(EINVAL);
        size[0] = linesizes[0] * height;
     
        if (desc->flags & AV_PIX_FMT_FLAG_PAL ||
            desc->flags & FF_PSEUDOPAL) {
            data[1] = ptr + size[0]; /* palette is stored here as 256 32 bits words */
            return size[0] + 256 * 4;
        }
     
        for (i = 0; i < 4; i++)
            has_plane[desc->comp[i].plane] = 1;
     
        total_size = size[0];
        for (i = 1; i < 4 && has_plane[i]; i++) {
            int h, s = (i == 1 || i == 2) ? desc->log2_chroma_h : 0;
            data[i] = data[i-1] + size[i-1];
            h = (height + (1 << s) - 1) >> s;
            if (linesizes[i] > INT_MAX / h)
                return AVERROR(EINVAL);
            size[i] = h * linesizes[i];
            if (total_size > INT_MAX - size[i])
                return AVERROR(EINVAL);
            total_size += size[i];
        }
     
        return total_size;
    }
    2.3 av_frame_free()
    av_frame_free() is declared in libavutil/frame.h, as shown below.

    /**
     * Free the frame and any dynamically allocated objects in it,
     * e.g. extended_data. If the frame is reference counted, it will be
     * unreferenced first.
     *
     * @param frame frame to be freed. The pointer will be set to NULL.
     */
    void av_frame_free(AVFrame **frame);
     
    libavutil/frame.c
    void av_frame_free(AVFrame **frame)
    {
        if (!frame || !*frame)
            return;
     
        av_frame_unref(*frame);
        av_freep(frame);
    }
    2.4 av_frame_ref()
    /**
     * Set up a new reference to the data described by the source frame.
     *
     * Copy frame properties from src to dst and create a new reference for each
     * AVBufferRef from src.
     *
     * If src is not reference counted, new buffers are allocated and the data is
     * copied.
     *
     * @warning: dst MUST have been either unreferenced with av_frame_unref(dst),
     *           or newly allocated with av_frame_alloc() before calling this
     *           function, or undefined behavior will occur.
     *
     * @return 0 on success, a negative AVERROR on error
     */
    int av_frame_ref(AVFrame *dst, const AVFrame *src);
          Set up a new reference to the data described by src.
          Copy the frame properties from src to dst, and create a new reference for each AVBufferRef in src.
         If src is not reference counted, new data buffers are allocated in dst and the data in src's buffers is copied into dst's buffers.
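         A minimal sketch of the usual calling pattern (the helper name is hypothetical; src is assumed to be a valid, reference-counted frame):

    #include <libavutil/frame.h>

    // Minimal sketch: take an extra reference to the data held by "src".
    static int ref_then_release(const AVFrame *src)
    {
        AVFrame *dst = av_frame_alloc();   /* freshly allocated, as the warning above requires */
        if (!dst)
            return AVERROR(ENOMEM);

        int ret = av_frame_ref(dst, src);
        if (ret < 0) {
            av_frame_free(&dst);
            return ret;
        }
        /* ... use dst ... */
        av_frame_unref(dst);   /* drop this reference; src keeps its own */
        av_frame_free(&dst);
        return 0;
    }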

    2.5 av_frame_clone()        
    /**
     * Create a new frame that references the same data as src.
     *
     * This is a shortcut for av_frame_alloc()+av_frame_ref().
     *
     * @return newly created AVFrame on success, NULL on error.
     */
    AVFrame *av_frame_clone(const AVFrame *src);
           Create a new frame that references the same data as src; the buffers are shared and managed through reference counting.
           This function is equivalent to av_frame_alloc() + av_frame_ref().
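           Sketched out, av_frame_clone() therefore amounts to roughly the following (an illustration of the equivalence above, not the library's exact source):

    // Roughly what av_frame_clone(src) does:
    AVFrame *copy = av_frame_alloc();
    if (copy && av_frame_ref(copy, src) < 0)
        av_frame_free(&copy);            /* copy becomes NULL on failure */
    /* On success, "copy" shares src's reference-counted buffers. */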

    References:
    https://www.cnblogs.com/leisure_chn/p/10404502.html
    https://blog.csdn.net/leixiaohua1020/article/details/14214577
    https://www.ffmpeg.org/doxygen/4.1/index.html
    https://blog.csdn.net/qq_25333681/article/details/89743660
    Copyright notice: this is an original article by CSDN blogger "yangguoyu8023", licensed under CC 4.0 BY-SA; please include the original source link and this notice when reposting.
    Original article: https://blog.csdn.net/yangguoyu8023/article/details/107496339
