  • How Stagefright in Android 4.2.2 maintains the codec data flow

    These are my own notes and summaries from reading the source code. Please credit the source when reposting, thanks.

    Feedback and discussion are welcome. QQ: 1037701636  Email: gzzaigcn2012@gmail.com

    Android source version: 4.2.2; hardware platform: Allwinner A31

    Preface:

    The previous posts mostly covered the Stagefright control flow: the creation of MediaExtractor, AwesomePlayer, StagefrightPlayer and OMXCodec in the Android architecture, and the creation of the underlying OMXNodeInstance.

    They also analyzed the architecture of the lowest-level OMX plugin library and codec components, and how to create an OMX plugin of our own.

    Another key to understanding the source architecture is the data flow, which is where we start here. We will analyze the codec buffers in Stagefright:

    1. Back to the source code of the OMXCodec creation process:

    status_t AwesomePlayer::initVideoDecoder(uint32_t flags) {
    .......
        mVideoSource = OMXCodec::Create(
                mClient.interface(), mVideoTrack->getFormat(),  // mClient.interface(): the BpOMX proxy; the video track's format
                false,                                          // createEncoder = false, we want a decoder
                mVideoTrack,
                NULL, flags, USE_SURFACE_ALLOC ? mNativeWindow : NULL);  // create the decoder mVideoSource
    
        if (mVideoSource != NULL) {
            int64_t durationUs;
            if (mVideoTrack->getFormat()->findInt64(kKeyDuration, &durationUs)) {
                Mutex::Autolock autoLock(mMiscStateLock);
                if (mDurationUs < 0 || durationUs > mDurationUs) {
                    mDurationUs = durationUs;
                }
            }
    
            status_t err = mVideoSource->start();  // start the OMXCodec decoder; this runs its init()
    .............
    }

    In the earlier post Android4.2.2下Stagefright多媒体架构中的A31的OMX插件和Codec组件 (on the A31 OMX plugin and Codec components in the Android 4.2.2 Stagefright multimedia framework) we already analyzed OMXCodec::Create in detail. Here we look at what mVideoSource->start() does, i.e. the handling in OMXCodec::start():

    status_t OMXCodec::start(MetaData *meta) {
        Mutex::Autolock autoLock(mLock);
    ........
        return init();  // perform initialization
    }

    The call to init() here is where the buffers are requested, laying the groundwork for the subsequent stream operations:

    status_t OMXCodec::init() {
        // mLock is held.
    .........
        err = allocateBuffers();  // allocate the buffers
        if (err != (status_t)OK) {
            return err;
        }
    
        if (mQuirks & kRequiresLoadedToIdleAfterAllocation) {
            err = mOMX->sendCommand(mNode, OMX_CommandStateSet, OMX_StateIdle);
            CHECK_EQ(err, (status_t)OK);
    
            setState(LOADED_TO_IDLE);
        }
    ............
    }
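    For orientation, the following is a minimal sketch, not framework code, of the state ladder that init() starts climbing here: buffers are allocated on both ports while the component is still in the Loaded state, and the component is then driven Loaded -> Idle -> Executing before any buffer traffic starts. The state names mirror OMXCodec's internal states seen above; the transition helper itself is purely illustrative.

    enum OmxCodecState { LOADED, LOADED_TO_IDLE, IDLE_TO_EXECUTING, EXECUTING };

    // Illustrative only: the order in which OMXCodec walks its states while
    // bringing the component up (real transitions are driven by sendCommand()
    // plus the component's OMX_EventCmdComplete callbacks).
    static OmxCodecState nextState(OmxCodecState s) {
        switch (s) {
            case LOADED:            return LOADED_TO_IDLE;     // allocateBuffers() done, send OMX_StateIdle
            case LOADED_TO_IDLE:    return IDLE_TO_EXECUTING;  // component reached Idle, send OMX_StateExecuting
            case IDLE_TO_EXECUTING: return EXECUTING;          // component reached Executing, buffer traffic may start
            default:                return EXECUTING;
        }
    }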

    Let's look at the implementation of allocateBuffers.

    2. A look at the implementation of allocateBuffersOnPort

    status_t OMXCodec::allocateBuffers() {
        status_t err = allocateBuffersOnPort(kPortIndexInput);   // allocate buffers on the input port
    
        if (err != OK) {
            return err;
        }
    
        return allocateBuffersOnPort(kPortIndexOutput);          // allocate buffers on the output port
    }

    Buffers are requested and allocated for the input port and the output port respectively. A decoder needs the input port to hold the source data awaiting decoding and the output port to hold the decoded data, which also matches how the hardware works.

    Taking the input buffer allocation as an example:

    status_t OMXCodec::allocateBuffersOnPort(OMX_U32 portIndex) {
    .......
        OMX_PARAM_PORTDEFINITIONTYPE def;
        InitOMXParams(&def);
        def.nPortIndex = portIndex;  // the port being set up (input here)
    
        err = mOMX->getParameter(
                mNode, OMX_IndexParamPortDefinition, &def, sizeof(def));  // fetch the port definition into def
    ..........
                    err = mOMX->allocateBuffer(
                            mNode, portIndex, def.nBufferSize, &buffer,
                            &info.mData);
    ........
            info.mBuffer = buffer;  // buffer_id returned by the component, identifying the underlying buffer
            info.mStatus = OWNED_BY_US;
            info.mMem = mem;
            info.mMediaBuffer = NULL;
     ...........
            mPortBuffers[portIndex].push(info);  // record this buffer in mPortBuffers[portIndex]
    

    The process above breaks down into three steps:

    Step 1: fetch the current parameters of the underlying decoder component's port. These parameters are normally given their initial configuration when the OMX codec is set up, as described in the previous post.

    Step 2: call allocateBuffer. This call is ultimately carried out by the underlying OMX component; its implementation will be analyzed as part of the A31's low-level OMX codec component processing flow.

    Step 3: fill in the BufferInfo (info) for each allocated buffer and keep it in mPortBuffers[portIndex] (mPortBuffers[0] for the input port).

    This completes buffer allocation for the input and output ports, laying the groundwork for the buffers used in subsequent decoding.
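    To make the bookkeeping concrete, here is a small self-contained sketch of the per-port buffer table that allocateBuffersOnPort builds up. The field and state names follow the excerpt above; the types and the helper are assumptions for illustration, not the framework definitions.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    enum BufferStatus { OWNED_BY_US, OWNED_BY_COMPONENT, OWNED_BY_CLIENT };

    struct BufferInfo {
        uint32_t     mBuffer;   // buffer_id handed back by the OMX component
        BufferStatus mStatus;   // who currently owns the buffer
        void        *mData;     // CPU-visible address of the buffer
        size_t       mSize;     // capacity in bytes
    };

    enum { kPortIndexInput = 0, kPortIndexOutput = 1 };

    struct PortBufferTable {
        std::vector<BufferInfo> mPortBuffers[2];   // [0] = input port, [1] = output port

        void add(int portIndex, const BufferInfo &info) {
            mPortBuffers[portIndex].push_back(info);   // mirrors mPortBuffers[portIndex].push(info)
        }
    };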

    3. MediaPlayer starts playback

    The start() API call enters MediaPlayerService::Client and then goes through StagefrightPlayer and AwesomePlayer in turn.

    This triggers the video event that drives playback:

    void AwesomePlayer::postVideoEvent_l(int64_t delayUs) {
        ATRACE_CALL();
    
        if (mVideoEventPending) {
            return;
        }
    
        mVideoEventPending = true;
        mQueue.postEventWithDelay(mVideoEvent, delayUs < 0 ? 10000 : delayUs);
    }
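    As a toy stand-in (this is not TimedEventQueue), the delayed-event pattern at work here can be sketched as below: an event is posted with a delay, and its handler, the onVideoEvent() discussed next, performs one read/render step and then re-posts itself, which is what keeps playback ticking.

    #include <chrono>
    #include <cstdint>
    #include <functional>
    #include <thread>

    // Toy delayed-event queue: one detached thread per posting, purely to
    // illustrate postEventWithDelay() semantics; the real TimedEventQueue
    // keeps an ordered queue serviced by a single thread.
    struct ToyEventQueue {
        void postEventWithDelay(std::function<void()> event, int64_t delayUs) {
            std::thread([event, delayUs] {
                std::this_thread::sleep_for(std::chrono::microseconds(delayUs));
                event();   // e.g. a handler that reads, renders, and re-posts itself
            }).detach();
        }
    };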

    From the analysis in the previous post, the handler for this event is AwesomePlayer::onVideoEvent(). That function is rather long, so we pull out its core, the read() call, for analysis:

       status_t err = mVideoSource->read(&mVideoBuffer, &options);  // read the next decoded frame into mVideoBuffer; this is actually OMXCodec::read()

    The essence of read() is obtaining video data that can be rendered: it reads the raw data from the video source and drives the decoder so that displayable data is produced.

    4. The implementation of the read() function

    As one might expect, read() is a fairly involved process. We start the analysis from OMXCodec::read():

    status_t OMXCodec::read(
            MediaBuffer **buffer, const ReadOptions *options) {
        status_t err = OK;
        *buffer = NULL;
    
        Mutex::Autolock autoLock(mLock);
    
            drainInputBuffers();  // fill the input buffers with source data
    
            if (mState == EXECUTING) {
                // Otherwise mState == RECONFIGURING and this code will trigger
                // after the output port is reenabled.
                fillOutputBuffers();
            }
        }
    
    ...........
    }

    The core logic of read() boils down to drainInputBuffers() and fillOutputBuffers(); we go through them in turn.

    5. drainInputBuffers(): reading the video data awaiting decoding into the decoder's input port

    Its processing is fairly involved; the code is analyzed in the following three parts:

    (1)

    bool OMXCodec::drainInputBuffer(BufferInfo *info) {
       if (mCodecSpecificDataIndex < mCodecSpecificData.size()) {
            CHECK(!(mFlags & kUseSecureInputBuffers));
    
            const CodecSpecificData *specific =
                mCodecSpecificData[mCodecSpecificDataIndex];
    
            size_t size = specific->mSize;
    
            if (!strcasecmp(MEDIA_MIMETYPE_VIDEO_AVC, mMIME)
                    && !(mQuirks & kWantsNALFragments)) {
                static const uint8_t kNALStartCode[4] =
                        { 0x00, 0x00, 0x00, 0x01 };
    
                CHECK(info->mSize >= specific->mSize + 4);
    
                size += 4;
    
                memcpy(info->mData, kNALStartCode, 4);
                memcpy((uint8_t *)info->mData + 4,
                       specific->mData, specific->mSize);
            } else {
                CHECK(info->mSize >= specific->mSize);
                memcpy(info->mData, specific->mData, specific->mSize);  // copy the codec-specific data
            }
    
            mNoMoreOutputData = false;
    
            CODEC_LOGV("calling emptyBuffer with codec specific data");
    
            status_t err = mOMX->emptyBuffer(
                    mNode, info->mBuffer, 0, size,
                    OMX_BUFFERFLAG_ENDOFFRAME | OMX_BUFFERFLAG_CODECCONFIG,
                    0);  // hand the codec-config buffer to the component
            CHECK_EQ(err, (status_t)OK);
    
            info->mStatus = OWNED_BY_COMPONENT;
    
            ++mCodecSpecificDataIndex;
            return true;
        }
    ...............(1)

    This first part extracts the codec-specific data and copies it into the storage at info->mData. Whether it exists depends on the source format: for containers such as MP4, the mCodecSpecificData entries were added when the OMXCodec was created and configureCodec() ran; presumably these are the special configuration fields the decoder needs (for AVC, the SPS/PPS parameter sets).

    Whether this step is needed therefore depends on the source format. Once this data has been submitted, the actual video source data is read.
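    The AVC branch above can be summarized with the following self-contained sketch: when the component does not want bare NAL fragments, the 4-byte Annex-B start code 00 00 00 01 is prepended to the codec-specific blob before it is copied into the input buffer and submitted with OMX_BUFFERFLAG_CODECCONFIG. The helper name and signature are illustrative, not framework code.

    #include <cassert>
    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // Pack one codec-specific blob into an input buffer, optionally prefixed
    // with the Annex-B start code, and return the byte count that would be
    // passed to emptyBuffer(). Illustrative sketch only.
    static size_t packCodecConfig(uint8_t *dst, size_t dstCap,
                                  const uint8_t *config, size_t configSize,
                                  bool wantsNalFragments) {
        static const uint8_t kNALStartCode[4] = { 0x00, 0x00, 0x00, 0x01 };

        if (!wantsNalFragments) {
            assert(dstCap >= configSize + 4);
            memcpy(dst, kNALStartCode, 4);
            memcpy(dst + 4, config, configSize);
            return configSize + 4;
        }

        assert(dstCap >= configSize);
        memcpy(dst, config, configSize);
        return configSize;
    }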

    (2)

      for (;;) {
            MediaBuffer *srcBuffer;
            if (mSeekTimeUs >= 0) {
                if (mLeftOverBuffer) {
                    mLeftOverBuffer->release();
                    mLeftOverBuffer = NULL;
                }
    
                MediaSource::ReadOptions options;
                options.setSeekTo(mSeekTimeUs, mSeekMode);
    
                mSeekTimeUs = -1;
                mSeekMode = ReadOptions::SEEK_CLOSEST_SYNC;
                mBufferFilled.signal();
    
                err = mSource->read(&srcBuffer, &options);  // read actual data from the video source; here this is MPEG4Source::read()
    
                if (err == OK) {
                    int64_t targetTimeUs;
                    if (srcBuffer->meta_data()->findInt64(
                                kKeyTargetTime, &targetTimeUs)
                            && targetTimeUs >= 0) {
                        CODEC_LOGV("targetTimeUs = %lld us", targetTimeUs);
                        mTargetTimeUs = targetTimeUs;
                    } else {
                        mTargetTimeUs = -1;
                    }
                }
            } else if (mLeftOverBuffer) {
                srcBuffer = mLeftOverBuffer;
                mLeftOverBuffer = NULL;
    
                err = OK;
            } else {
                err = mSource->read(&srcBuffer);
            }
    
            if (err != OK) {
                signalEOS = true;
                mFinalStatus = err;
                mSignalledEOS = true;
                mBufferFilled.signal();
                break;
            }
    
            if (mFlags & kUseSecureInputBuffers) {
                info = findInputBufferByDataPointer(srcBuffer->data());
                CHECK(info != NULL);
            }
    
            size_t remainingBytes = info->mSize - offset;  // space left in the input buffer for video data

            if (srcBuffer->range_length() > remainingBytes) {  // the fragment just read does not fit in the remaining space
                if (offset == 0) {
                    CODEC_LOGE(
                         "Codec's input buffers are too small to accomodate "
                         "buffer read from source (info->mSize = %d, srcLength = %d)",
                         info->mSize, srcBuffer->range_length());
    
                    srcBuffer->release();
                    srcBuffer = NULL;
    
                    setState(ERROR);
                    return false;
                }
    
                mLeftOverBuffer = srcBuffer;  // remember the unread buffer for the next call
                break;
            }
    
            bool releaseBuffer = true;
            if (mFlags & kStoreMetaDataInVideoBuffers) {
                    releaseBuffer = false;
                    info->mMediaBuffer = srcBuffer;
            }
    
            if (mFlags & kUseSecureInputBuffers) {
                    // Data in "info" is already provided at this time.
    
                    releaseBuffer = false;
    
                    CHECK(info->mMediaBuffer == NULL);
                    info->mMediaBuffer = srcBuffer;
            } else {
                CHECK(srcBuffer->data() != NULL) ;
                memcpy((uint8_t *)info->mData + offset,
                        (const uint8_t *)srcBuffer->data()
                            + srcBuffer->range_offset(),
                        srcBuffer->range_length());  // copy source data into the input buffer; length is srcBuffer->range_length()
            }
    
            int64_t lastBufferTimeUs;
            CHECK(srcBuffer->meta_data()->findInt64(kKeyTime, &lastBufferTimeUs));
            CHECK(lastBufferTimeUs >= 0);
            if (mIsEncoder && mIsVideo) {
                mDecodingTimeList.push_back(lastBufferTimeUs);
            }
    
            if (offset == 0) {
                timestampUs = lastBufferTimeUs;
            }
    
            offset += srcBuffer->range_length();  // advance the offset
    
            if (!strcasecmp(MEDIA_MIMETYPE_AUDIO_VORBIS, mMIME)) {
                CHECK(!(mQuirks & kSupportsMultipleFramesPerInputBuffer));
                CHECK_GE(info->mSize, offset + sizeof(int32_t));
    
                int32_t numPageSamples;
                if (!srcBuffer->meta_data()->findInt32(
                            kKeyValidSamples, &numPageSamples)) {
                    numPageSamples = -1;
                }
    
                memcpy((uint8_t *)info->mData + offset,
                       &numPageSamples,
                       sizeof(numPageSamples));
    
                offset += sizeof(numPageSamples);
            }
    
            if (releaseBuffer) {
                srcBuffer->release();
                srcBuffer = NULL;
            }
    
            ++n;
    
            if (!(mQuirks & kSupportsMultipleFramesPerInputBuffer)) {
                break;
            }
    
            int64_t coalescedDurationUs = lastBufferTimeUs - timestampUs;
    
            if (coalescedDurationUs > 250000ll) {
                // Don't coalesce more than 250ms worth of encoded data at once.
                break;
            }
        }...........

    This part is the key to extracting the source data, done mainly through err = mSource->read(&srcBuffer, &options). mSource was passed in when the codec was created; it is in fact the track source produced by the MediaExtractor that matches the container format. For example, when the MP4 parser MPEG4Extractor is set up it creates a new MPEG4Source, so what is called here is MPEG4Source::read(), which effectively maintains the whole raw video stream waiting to be decoded.

    After the read, the for loop keeps copying the stream to be decoded into the underlying buffer. It can keep reading only while the current source fragment is smaller than the space remaining in the underlying input-port buffer; once srcBuffer->range_length() > remainingBytes, the loop breaks and moves on to the next step.

    The loop also exits once more than 250 ms of data has been coalesced into one buffer.

    This keeps the processing efficient.

    In the end the raw video data is stored in the underlying input buffer at info->mData.
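    The two exit conditions just described can be restated as the small helpers below, a sketch under the assumption that offset never exceeds the buffer size; the real loop interleaves these checks with seek, EOS and metadata handling.

    #include <cstddef>
    #include <cstdint>

    // Decision 1: does the next source fragment fit into the remaining space
    // of the current input buffer? If not, it is kept as mLeftOverBuffer.
    static bool fragmentFits(size_t bufferSize, size_t offset, size_t fragLen) {
        return fragLen <= bufferSize - offset;
    }

    // Decision 2: after appending a fragment, keep packing more into the same
    // buffer? Only if the quirk allows multiple frames per buffer and no more
    // than ~250 ms of encoded data has been coalesced so far.
    static bool keepCoalescing(bool supportsMultipleFrames,
                               int64_t lastBufferTimeUs, int64_t firstTimestampUs) {
        if (!supportsMultipleFrames) {
            return false;   // one fragment per input buffer
        }
        return (lastBufferTimeUs - firstTimestampUs) <= 250000ll;
    }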

    (3)

        err = mOMX->emptyBuffer(
                mNode, info->mBuffer, 0, offset,
                flags, timestampUs);

    This triggers the underlying decoder component to process the data. That part is left for the later analysis of the A31's low-level codec API operations.
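    For reference, the flags word handed to emptyBuffer() in this path typically carries OMX_BUFFERFLAG_ENDOFFRAME, with OMX_BUFFERFLAG_EOS added once the source has reported end of stream. A small sketch of building it; the flag names and numeric values follow the standard OMX IL header (OMX_Core.h), while the helper itself is only illustrative.

    #include <cstdint>

    // Stand-in constants mirroring the OpenMAX IL buffer flags (OMX_Core.h).
    static const uint32_t kBufferFlagEOS        = 0x00000001;  // OMX_BUFFERFLAG_EOS
    static const uint32_t kBufferFlagEndOfFrame = 0x00000010;  // OMX_BUFFERFLAG_ENDOFFRAME

    // Build the flags argument for emptyBuffer() as used in this input path.
    static uint32_t makeEmptyBufferFlags(bool endOfFrame, bool signalEOS) {
        uint32_t flags = 0;
        if (endOfFrame) flags |= kBufferFlagEndOfFrame;
        if (signalEOS)  flags |= kBufferFlagEOS;
        return flags;
    }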

    6. fillOutputBuffers: filling the output-port buffers, i.e. the actual decoding:

    void OMXCodec::fillOutputBuffers() {
        CHECK_EQ((int)mState, (int)EXECUTING);
    ...........
         Vector<BufferInfo> *buffers = &mPortBuffers[kPortIndexOutput];  // the output port
        for (size_t i = 0; i < buffers->size(); ++i) {
            BufferInfo *info = &buffers->editItemAt(i);
            if (info->mStatus == OWNED_BY_US) {
                fillOutputBuffer(&buffers->editItemAt(i));
            }
        }
    }
    void OMXCodec::fillOutputBuffer(BufferInfo *info) {
        CHECK_EQ((int)info->mStatus, (int)OWNED_BY_US);
    
        if (mNoMoreOutputData) {
            CODEC_LOGV("There is no more output data available, not "
                 "calling fillOutputBuffer");
            return;
        }
    
        CODEC_LOGV("Calling fillBuffer on buffer %p", info->mBuffer);
        status_t err = mOMX->fillBuffer(mNode, info->mBuffer);
    
        if (err != OK) {
            CODEC_LOGE("fillBuffer failed w/ error 0x%08x", err);
    
            setState(ERROR);
            return;
        }
    
        info->mStatus = OWNED_BY_COMPONENT;
    }
    

    From the code above, fillOutputBuffer is much simpler than drainInputBuffers.

    What they have in common is that both ultimately hand control over to the underlying decoder.

    7. Waiting for decoded data to be filled into the output buffer; OMXCodecObserver handles the callback

    The wait for decoding to finish is implemented in read() by the following code:

        while (mState != ERROR && !mNoMoreOutputData && mFilledBuffers.empty()) {
            if ((err = waitForBufferFilled_l()) != OK) {  // wait for a buffer to be filled
                return err;
            }
        }

    This shows that as long as mFilledBuffers is empty, the thread waits (pthread_cond_timedwait). The thread is woken by a callback from the underlying component; the callback was registered with the underlying codec node, and it ultimately lands in OMXCodecObserver:

    struct OMXCodecObserver : public BnOMXObserver {
        OMXCodecObserver() {
        }
    
        void setCodec(const sp<OMXCodec> &target) {
            mTarget = target;
        }
    
        // from IOMXObserver
        virtual void onMessage(const omx_message &msg) {
            sp<OMXCodec> codec = mTarget.promote();
    
            if (codec.get() != NULL) {
                Mutex::Autolock autoLock(codec->mLock);
                codec->on_message(msg);  // dispatch to OMXCodec::on_message
                codec.clear();
            }
        }
    

    So the messages are ultimately handled by OMXCodec::on_message(). This mainly covers the EMPTY_BUFFER_DONE and FILL_BUFFER_DONE messages; here we look at the callback after FILL_BUFFER_DONE completes:

    void OMXCodec::on_message(const omx_message &msg) {
        if (mState == ERROR) {
            /*
             * only drop EVENT messages, EBD and FBD are still
             * processed for bookkeeping purposes
             */
            if (msg.type == omx_message::EVENT) {
                ALOGW("Dropping OMX EVENT message - we're in ERROR state.");
                return;
            }
        }
    
        switch (msg.type) {
            case omx_message::FILL_BUFFER_DONE:  // callback from the component: an output buffer has been filled
                ..............
                mFilledBuffers.push_back(i);  // keep the index of the filled output buffer in mFilledBuffers
                mBufferFilled.signal();       // wake the read thread waiting for a decoded frame
    

    Here, then, the read thread is woken up.
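    The wait/wake handshake between read() and the FILL_BUFFER_DONE callback can be reduced to the self-contained sketch below. OMXCodec uses Android's Mutex/Condition (with a timed wait in waitForBufferFilled_l); std::mutex and std::condition_variable are used here only for illustration, and the class name is made up.

    #include <condition_variable>
    #include <cstddef>
    #include <list>
    #include <mutex>

    struct FilledBufferQueue {
        std::mutex              lock;           // plays the role of mLock
        std::condition_variable bufferFilled;   // plays the role of mBufferFilled
        std::list<size_t>       filledBuffers;  // plays the role of mFilledBuffers

        // read() side: block until the callback has reported a filled output buffer.
        size_t waitForFilledBuffer() {
            std::unique_lock<std::mutex> autoLock(lock);
            bufferFilled.wait(autoLock, [this] { return !filledBuffers.empty(); });
            size_t index = filledBuffers.front();
            filledBuffers.pop_front();
            return index;
        }

        // FILL_BUFFER_DONE side: record the buffer index and wake the reader.
        void onFillBufferDone(size_t index) {
            std::lock_guard<std::mutex> autoLock(lock);
            filledBuffers.push_back(index);
            bufferFilled.notify_one();
        }
    };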


    8. Extracting a usable decoded frame

        size_t index = *mFilledBuffers.begin();
        mFilledBuffers.erase(mFilledBuffers.begin());
    
        BufferInfo *info = &mPortBuffers[kPortIndexOutput].editItemAt(index);  // fetch the decoded output buffer's info
        CHECK_EQ((int)info->mStatus, (int)OWNED_BY_US);
        info->mStatus = OWNED_BY_CLIENT;
    
        info->mMediaBuffer->add_ref();  // the caller now holds a reference
        if (mSkipCutBuffer != NULL) {
            mSkipCutBuffer->submit(info->mMediaBuffer);
        }
        *buffer = info->mMediaBuffer;
    

    With the thread woken, the BufferInfo for the corresponding output-port buffer is fetched here, and its MediaBuffer is what read() finally returns.
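    Pulling sections 5 to 8 together, a buffer's ownership moves as summarized in the sketch below. The status values are the ones OMXCodec actually uses; the event names and the transition helper are only a summary device.

    enum BufferStatus { OWNED_BY_US, OWNED_BY_COMPONENT, OWNED_BY_CLIENT };

    enum BufferEvent {
        SUBMIT_TO_COMPONENT,   // emptyBuffer() / fillBuffer() hands the buffer to the OMX component
        COMPONENT_DONE,        // EMPTY_BUFFER_DONE / FILL_BUFFER_DONE returns it to OMXCodec
        RETURNED_TO_CLIENT     // read() hands the decoded MediaBuffer to AwesomePlayer
    };

    static BufferStatus transition(BufferStatus current, BufferEvent e) {
        switch (e) {
            case SUBMIT_TO_COMPONENT: return OWNED_BY_COMPONENT;
            case COMPONENT_DONE:      return OWNED_BY_US;
            case RETURNED_TO_CLIENT:  return OWNED_BY_CLIENT;
        }
        return current;
    }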

    9. Sending the decoded frame to display

    After steps 5, 6, 7 and 8, read() finally returns mVideoBuffer, ready for display; what follows is how it is sent to the screen.

    In the code below, a renderer mVideoRenderer is created to display the decoded video:

        if ((mNativeWindow != NULL)
                && (mVideoRendererIsPreview || mVideoRenderer == NULL)) {  // create the renderer the first time around
            mVideoRendererIsPreview = false;

            initRenderer_l();  // initialize the renderer (AwesomeNativeWindowRenderer or AwesomeLocalRenderer)
        }

        if (mVideoRenderer != NULL) {
            mSinceLastDropped++;
            mVideoRenderer->render(mVideoBuffer);  // render, i.e. display, the current buffer
            if (!mVideoRenderingStarted) {
                mVideoRenderingStarted = true;
                notifyListener_l(MEDIA_INFO, MEDIA_INFO_RENDERING_START);
            }
        }

    void AwesomePlayer::initRenderer_l() {
        ATRACE_CALL();
    
        if (mNativeWindow == NULL) {
            return;
        }
    
        sp<MetaData> meta = mVideoSource->getFormat();
    
        int32_t format;
        const char *component;
        int32_t decodedWidth, decodedHeight;
        CHECK(meta->findInt32(kKeyColorFormat, &format));
        CHECK(meta->findCString(kKeyDecoderComponent, &component));
        CHECK(meta->findInt32(kKeyWidth, &decodedWidth));
        CHECK(meta->findInt32(kKeyHeight, &decodedHeight));
    
        int32_t rotationDegrees;
        if (!mVideoTrack->getFormat()->findInt32(
                    kKeyRotation, &rotationDegrees)) {
            rotationDegrees = 0;
        }
    
        mVideoRenderer.clear();
    
        // Must ensure that mVideoRenderer's destructor is actually executed
        // before creating a new one.
        IPCThreadState::self()->flushCommands();
    
        // Even if set scaling mode fails, we will continue anyway
        setVideoScalingMode_l(mVideoScalingMode);
        if (USE_SURFACE_ALLOC
                && !strncmp(component, "OMX.", 4)
                && strncmp(component, "OMX.google.", 11)
                && strcmp(component, "OMX.Nvidia.mpeg2v.decode")) {  // hardware rendering path, excluding the decoders above
            // Hardware decoders avoid the CPU color conversion by decoding
            // directly to ANativeBuffers, so we must use a renderer that
            // just pushes those buffers to the ANativeWindow.
            mVideoRenderer =
                new AwesomeNativeWindowRenderer(mNativeWindow, rotationDegrees);  // the usual hardware rendering path
        } else {
            // Other decoders are instantiated locally and as a consequence
            // allocate their buffers in local address space.  This renderer
            // then performs a color conversion and copy to get the data
            // into the ANativeBuffer.
            mVideoRenderer = new AwesomeLocalRenderer(mNativeWindow, meta);
        }
    }

    There are two branches for creating the renderer. A component name starting with OMX.google. means the underlying decoder is a software decoder, in which case the so-called local renderer, which is really a software renderer, is used. In our case a hardware OMX decoder is in use, so the AwesomeNativeWindowRenderer is chosen; its structure is as follows:
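    The selection rule quoted above can be restated as the small predicate below; the component-name tests are exactly the ones in initRenderer_l, while the function itself is only an illustrative restatement.

    #include <cstring>

    // True: hardware OMX decoder that outputs ANativeBuffers directly, so use
    // AwesomeNativeWindowRenderer; false: use AwesomeLocalRenderer, which
    // color-converts and copies into the ANativeBuffer.
    static bool useNativeWindowRenderer(const char *component, bool useSurfaceAlloc) {
        return useSurfaceAlloc
            && !strncmp(component, "OMX.", 4)
            && strncmp(component, "OMX.google.", 11)
            && strcmp(component, "OMX.Nvidia.mpeg2v.decode");
    }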

    struct AwesomeNativeWindowRenderer : public AwesomeRenderer {
        AwesomeNativeWindowRenderer(
                const sp<ANativeWindow> &nativeWindow,
                int32_t rotationDegrees)
            : mNativeWindow(nativeWindow) {
            applyRotation(rotationDegrees);
        }
    
        virtual void render(MediaBuffer *buffer) {
            ATRACE_CALL();
            int64_t timeUs;
            CHECK(buffer->meta_data()->findInt64(kKeyTime, &timeUs));
            native_window_set_buffers_timestamp(mNativeWindow.get(), timeUs * 1000);
            status_t err = mNativeWindow->queueBuffer(
                    mNativeWindow.get(), buffer->graphicBuffer().get(), -1);  // queue the buffer directly to the native window for display
            if (err != 0) {
                ALOGE("queueBuffer failed with error %s (%d)", strerror(-err),
                        -err);
                return;
            }
    
            sp<MetaData> metaData = buffer->meta_data();
            metaData->setInt32(kKeyRendered, 1);
        }
    

    It is not complicated: it simply implements the AwesomeRenderer interface's render(), which is ultimately called to display the buffer. The very familiar queueBuffer appears here; you can look back at my post Android4.2.2 SurfaceFlinger之图形渲染queueBuffer实现和VSYNC的存在感 (on the queueBuffer implementation and VSYNC in SurfaceFlinger). Through the application's native window mNativeWindow (the player's VideoView extends SurfaceView, and SurfaceView creates a local Surface, which derives from the native window class), the current buffer is submitted to the SurfaceFlinger service for display. The details are not expanded here.

    With this we have covered the data-flow side of encoding and decoding under Stagefright; the complexity in the code centers mainly on emptyBuffer and fillBuffer.

    Of course, given limited ability, many details have not been analyzed in depth; discussion and feedback are welcome.

    Copyright notice: this is an original post by the blogger; do not repost without permission.
