
    NuPlayer Playback Source Code Analysis: DecoderBase

    [Date: 2017-02] [Status: Open]
    [Keywords: android, nuplayer, open-source player, playback framework, DecoderBase, MediaCodec]

    0 Overview

    DecoderBase is a subclass of AHandler whose main job is decoding. Within the MediaPlayer framework, decoding is normally delegated to MediaCodec, which plays a role similar to FFmpeg's libavcodec. Its responsibilities cover decoder initialization, the decode loop itself, and interaction with other modules such as the Source and the Renderer.

    NuPlayer::Decoder is a subclass of DecoderBase.

    This is the sixth article in my series on the NuPlayer playback framework.

    1 How NuPlayer calls DecoderBase

    NuPlayer holds two DecoderBase pointers as member variables:

        sp<DecoderBase> mVideoDecoder;
        sp<DecoderBase> mAudioDecoder;
    

    The following uses the video decoder, mVideoDecoder, to illustrate the call flow.

    // case kWhatSetVideoSurface:
    mVideoDecoder->setVideoSurface(surface);
    
    // case kWhatConfigPlayback:
    if (mVideoDecoder != NULL) {
        float rate = getFrameRate();
        if (rate > 0) {
            sp<AMessage> params = new AMessage();
            params->setFloat("operating-rate", rate * mPlaybackSettings.mSpeed);
            mVideoDecoder->setParameters(params);
        }
    }
    
    // called from many places
    instantiateDecoder(false, &mVideoDecoder);
    
    // case kWhatVideoNotify: -- case kWhatAudioNotify: 
    mVideoDecoder.clear();
    
    // NuPlayer::onStart()
    mVideoDecoder->setRenderer(mRenderer);
    
    mVideoDecoder->getStats();
    mVideoDecoder->signalResume(needNotify);
    

    The key initialization happens in instantiateDecoder. A simplified version is shown below (audio- and subtitle-related initialization has been removed):

    status_t NuPlayer::instantiateDecoder(
            bool audio, sp<DecoderBase> *decoder, bool checkAudioModeChange) {
        // fetch the raw audio/video stream format from the Source
        sp<AMessage> format = mSource->getFormat(audio);
    
        if (format == NULL) {
            return UNKNOWN_ERROR;
        } else {
            status_t err;
            if (format->findInt32("err", &err) && err) {
                return err;
            }
        }
        format->setInt32("priority", 0 /* realtime */);
    
        {
            sp<AMessage> notify = new AMessage(kWhatVideoNotify, this);
            ++mVideoDecoderGeneration;
            notify->setInt32("generation", mVideoDecoderGeneration);
    
            *decoder = new Decoder(
                    notify, mSource, mPID, mRenderer, mSurface, mCCDecoder);
        }
        (*decoder)->init();
        (*decoder)->configure(format);
    
        return OK;
    }
    

    2 The NuPlayer::DecoderBase class

    The DecoderBase class is declared as follows:

    struct NuPlayer::DecoderBase : public AHandler {
        DecoderBase(const sp<AMessage> &notify);
    
        void configure(const sp<AMessage> &format);
        void init();
        void setParameters(const sp<AMessage> &params);
    
        // Synchronous call to ensure decoder will not request or send out data.
        void pause();
    
        void setRenderer(const sp<Renderer> &renderer);
        virtual status_t setVideoSurface(const sp<Surface> &) { return INVALID_OPERATION; }
    
        status_t getInputBuffers(Vector<sp<ABuffer> > *dstBuffers) const;
        void signalFlush();
        void signalResume(bool notifyComplete);
        void initiateShutdown();
    
        virtual sp<AMessage> getStats() const;
    
    protected:
        virtual ~DecoderBase();
        virtual void onMessageReceived(const sp<AMessage> &msg);
    
        virtual void onConfigure(const sp<AMessage> &format) = 0;
        virtual void onSetParameters(const sp<AMessage> &params) = 0;
        virtual void onSetRenderer(const sp<Renderer> &renderer) = 0;
        virtual void onGetInputBuffers(Vector<sp<ABuffer> > *dstBuffers) = 0;
        virtual void onResume(bool notifyComplete) = 0;
        virtual void onFlush() = 0;
        virtual void onShutdown(bool notifyComplete) = 0;
    
        void onRequestInputBuffers();
        virtual bool doRequestBuffers() = 0;
        virtual void handleError(int32_t err);
    
        sp<AMessage> mNotify;
        int32_t mBufferGeneration;
        bool mPaused;
        sp<AMessage> mStats;
    
    private:
        sp<ALooper> mDecoderLooper;
        bool mRequestInputBuffersPending;
    
        DISALLOW_EVIL_CONSTRUCTORS(DecoderBase);
    };
    
    

    Judging from the declaration, DecoderBase builds a decoder framework and message pump on top of AHandler/ALooper: each call to a public interface is turned into a message and dispatched to the corresponding onXXX handler.
    Below, the configure call is used as an example of this dispatch logic in DecoderBase. Its implementation is:

    void NuPlayer::DecoderBase::configure(const sp<AMessage> &format) {
        sp<AMessage> msg = new AMessage(kWhatConfigure, this);
        msg->setMessage("format", format);
        msg->post();
    }
    

    This function simply posts a kWhatConfigure message; the corresponding message handling is:

    void NuPlayer::DecoderBase::onMessageReceived(const sp<AMessage> &msg) {
    
        switch (msg->what()) {
            case kWhatConfigure:
            {
                sp<AMessage> format;
                CHECK(msg->findMessage("format", &format));
                onConfigure(format);
                break;
            }
    		// ...
    

    This in turn calls DecoderBase::onConfigure. Note that onConfigure is a pure virtual function, so DecoderBase itself provides no implementation; the actual work is done in the subclass.
    The other public interfaces follow essentially the same pattern as configure, as sketched below.
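
    For example, judging from the class declaration above, signalFlush() should post a message whose handler forwards to the pure virtual onFlush(). The following is a minimal sketch of that pattern, assuming the message is named kWhatFlush; the real source may differ in detail:

    // Hedged sketch: how another DecoderBase public call maps onto its onXXX handler.
    // kWhatFlush is an assumed message name used only for illustration.
    void NuPlayer::DecoderBase::signalFlush() {
        sp<AMessage> msg = new AMessage(kWhatFlush, this);
        msg->post(); // asynchronous: handled on the decoder's looper thread
    }
    
    // in NuPlayer::DecoderBase::onMessageReceived(...)
    case kWhatFlush:
    {
        onFlush(); // pure virtual, implemented by the subclass (e.g. NuPlayer::Decoder)
        break;
    }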

    Within NuPlayer, DecoderBase has two subclasses: Decoder and DecoderPassThrough (judging from how NuPlayer uses it, the latter is only relevant to audio decoding).
    We will focus on NuPlayer::Decoder.

    3 NuPlayer::Decoder interfaces and key members

    This is the class that actually calls MediaCodec to perform decoding. Only the main public and protected interfaces and the core member variables are shown here:

    struct NuPlayer::Decoder : public DecoderBase {
        Decoder(const sp<AMessage> &notify,
                const sp<Source> &source, pid_t pid,
                const sp<Renderer> &renderer = NULL, const sp<Surface> &surface = NULL,
                const sp<CCDecoder> &ccDecoder = NULL);
    
        virtual sp<AMessage> getStats() const;
    
        // sets the output surface of video decoders.
        virtual status_t setVideoSurface(const sp<Surface> &surface);
    
    protected:
        virtual ~Decoder();
    
        virtual void onMessageReceived(const sp<AMessage> &msg);
    
        virtual void onConfigure(const sp<AMessage> &format);
        virtual void onSetParameters(const sp<AMessage> &params);
        virtual void onSetRenderer(const sp<Renderer> &renderer);
        virtual void onGetInputBuffers(Vector<sp<ABuffer> > *dstBuffers);
        virtual void onResume(bool notifyComplete);
        virtual void onFlush();
        virtual void onShutdown(bool notifyComplete);
        virtual bool doRequestBuffers();
    
    private:
        sp<Surface> mSurface;
        sp<Source> mSource;
        sp<Renderer> mRenderer;
    
        sp<AMessage> mInputFormat;
        sp<AMessage> mOutputFormat;
        sp<MediaCodec> mCodec; // the decoder
        sp<ALooper> mCodecLooper;
    
        List<sp<AMessage> > mPendingInputMessages;
    
        Vector<sp<ABuffer> > mInputBuffers; // input buffers
        Vector<sp<ABuffer> > mOutputBuffers; // output buffers
        Vector<bool> mInputBufferIsDequeued;
        Vector<MediaBuffer *> mMediaBuffers;
        Vector<size_t> mDequeuedInputBuffers;
    
        DISALLOW_EVIL_CONSTRUCTORS(Decoder);
    };
    

    Most of these interfaces are public and protected member functions inherited from DecoderBase. The most important member here is mCodec, and the flow analysis that follows revolves around it; a background sketch of the MediaCodec lifecycle is given below.
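
    As background, here is a minimal, hedged sketch of the MediaCodec lifecycle that Decoder wraps: create by MIME type, configure, start, feed compressed input, drain decoded output, then stop and release. The function name decodeOneAccessUnitSketch and the synchronous dequeue style are illustrative assumptions only; NuPlayer::Decoder actually drives MediaCodec in asynchronous callback mode, as Part 5 shows.

    // Hedged sketch of the MediaCodec lifecycle wrapped by NuPlayer::Decoder.
    // The synchronous dequeue style is used only to make the lifecycle obvious;
    // the real Decoder registers a callback and reacts to kWhatCodecNotify messages.
    static void decodeOneAccessUnitSketch(
            const sp<AMessage> &format,     // e.g. from Source::getFormat()
            const sp<Surface> &surface,     // video output surface
            const sp<ABuffer> &accessUnit,  // one compressed frame
            int64_t timeUs) {
        sp<ALooper> looper = new ALooper;
        looper->setName("codec-looper");
        looper->start();
    
        sp<MediaCodec> codec = MediaCodec::CreateByType(
                looper, "video/avc", false /* encoder */);
        codec->configure(format, surface, NULL /* crypto */, 0 /* flags */);
        codec->start();
    
        size_t inIx;
        if (codec->dequeueInputBuffer(&inIx, 10000ll) == OK) {
            sp<ABuffer> inBuf;
            codec->getInputBuffer(inIx, &inBuf);
            // copy one access unit into the codec's input buffer, then queue it
            memcpy(inBuf->data(), accessUnit->data(), accessUnit->size());
            codec->queueInputBuffer(inIx, 0, accessUnit->size(), timeUs, 0 /* flags */);
        }
    
        size_t outIx, offset, size;
        int64_t ptsUs;
        uint32_t outFlags;
        if (codec->dequeueOutputBuffer(&outIx, &offset, &size,
                                       &ptsUs, &outFlags, 10000ll) == OK) {
            codec->renderOutputBufferAndRelease(outIx); // hand the frame to the surface
        }
    
        codec->stop();
        codec->release();
    }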

    4 DecoderBase/Decoder implementation details

    Constructor and destructor

    These two functions mainly create and destroy the looper and release the MediaCodec-related resources:

    NuPlayer::Decoder::Decoder(
            const sp<AMessage> &notify,
            const sp<Source> &source,
            pid_t pid,
            const sp<Renderer> &renderer,
            const sp<Surface> &surface,
            const sp<CCDecoder> &ccDecoder)
        : DecoderBase(notify),
          mSurface(surface),
          mSource(source),
          mRenderer(renderer)
    {
        mCodecLooper = new ALooper;
        mCodecLooper->setName("NPDecoder-CL");
        mCodecLooper->start(false, false, ANDROID_PRIORITY_AUDIO);
    }
    
    void NuPlayer::Decoder::releaseAndResetMediaBuffers() {
        for (size_t i = 0; i < mMediaBuffers.size(); i++) {
            if (mMediaBuffers[i] != NULL) {
                mMediaBuffers[i]->release();
                mMediaBuffers.editItemAt(i) = NULL;
            }
        }
        mMediaBuffers.resize(mInputBuffers.size());
        for (size_t i = 0; i < mMediaBuffers.size(); i++) {
            mMediaBuffers.editItemAt(i) = NULL;
        }
        mInputBufferIsDequeued.clear();
        mInputBufferIsDequeued.resize(mInputBuffers.size());
        for (size_t i = 0; i < mInputBufferIsDequeued.size(); i++) {
            mInputBufferIsDequeued.editItemAt(i) = false;
        }
    
        mPendingInputMessages.clear();
        mDequeuedInputBuffers.clear();
        mSkipRenderingUntilMediaTimeUs = -1;
    }
    
    NuPlayer::Decoder::~Decoder() {
        mCodec->release();
        releaseAndResetMediaBuffers();
    }
    

    init() and configure() implementations

    init() is simple: it just associates the handler with the looper:

    void NuPlayer::DecoderBase::init() {
        mDecoderLooper->registerHandler(this);
    }
    

    configure() is ultimately implemented in onConfigure:

    void NuPlayer::Decoder::onConfigure(const sp<AMessage> &format) {
        CHECK(mCodec == NULL);
    
        AString mime;
        CHECK(format->findString("mime", &mime)); // the concrete audio/video MIME type must be present
    
        mIsAudio = !strncasecmp("audio/", mime.c_str(), 6);
        mIsVideoAVC = !strcasecmp(MEDIA_MIMETYPE_VIDEO_AVC, mime.c_str());
    
        mComponentName = mime;
        mComponentName.append(" decoder");
        // create the MediaCodec based on the MIME type
        mCodec = MediaCodec::CreateByType(
                mCodecLooper, mime.c_str(), false /* encoder */, NULL /* err */, mPid);
        int32_t secure = 0;
        if (format->findInt32("secure", &secure) && secure != 0) {
            if (mCodec != NULL) {
                mCodec->getName(&mComponentName);
                mComponentName.append(".secure");
                mCodec->release();
                mCodec = MediaCodec::CreateByComponentName(
                        mCodecLooper, mComponentName.c_str(), NULL /* err */, mPid);
            }
        }
        if (mCodec == NULL) {
            handleError(UNKNOWN_ERROR);
            return;
        }
        mIsSecure = secure;
    
        mCodec->getName(&mComponentName);
    
        status_t err;
        if (mSurface != NULL) {
            // disconnect from surface as MediaCodec will reconnect
            err = native_window_api_disconnect(
                    mSurface.get(), NATIVE_WINDOW_API_MEDIA);
            // We treat this as a warning, as this is a preparatory step.
            // Codec will try to connect to the surface, which is where
            // any error signaling will occur.
            ALOGW_IF(err != OK, "failed to disconnect from surface: %d", err);
        }
        err = mCodec->configure(
                format, mSurface, NULL /* crypto */, 0 /* flags */);
        if (err != OK) {
            ALOGE("Failed to configure %s decoder (err=%d)", mComponentName.c_str(), err);
            mCodec->release();
            mCodec.clear();
            handleError(err);
            return;
        }
        rememberCodecSpecificData(format);
    
        // the following should work in configured state
        CHECK_EQ((status_t)OK, mCodec->getOutputFormat(&mOutputFormat));
        CHECK_EQ((status_t)OK, mCodec->getInputFormat(&mInputFormat));
    
        mStats->setString("mime", mime.c_str());
        mStats->setString("component-name", mComponentName.c_str());
    
        if (!mIsAudio) {
            int32_t width, height;
            if (mOutputFormat->findInt32("width", &width)
                    && mOutputFormat->findInt32("height", &height)) {
                mStats->setInt32("width", width);
                mStats->setInt32("height", height);
            }
        }
    
        sp<AMessage> reply = new AMessage(kWhatCodecNotify, this);
        mCodec->setCallback(reply);
    
        err = mCodec->start();
        if (err != OK) {
            ALOGE("Failed to start %s decoder (err=%d)", mComponentName.c_str(), err);
            mCodec->release();
            mCodec.clear();
            handleError(err);
            return;
        }
    
        releaseAndResetMediaBuffers();
    
        mPaused = false;
        mResumePending = false;
    }
    

    setParameters, setRenderer, setVideoSurface

    setParameters is straightforward: it simply forwards the parameters to MediaCodec:

    void NuPlayer::Decoder::onSetParameters(const sp<AMessage> &params) {
        if (mCodec == NULL) {
            ALOGW("onSetParameters called before codec is created.");
            return;
        }
        mCodec->setParameters(params);
    }
    

    The main purpose of setRenderer is to kick off the decode loop, which is implemented internally as message-driven polling:

    void NuPlayer::Decoder::onSetRenderer(const sp<Renderer> &renderer) {
        bool hadNoRenderer = (mRenderer == NULL);
        mRenderer = renderer;
        if (hadNoRenderer && mRenderer != NULL) {
            // this means that the widevine legacy source is ready
            onRequestInputBuffers();
        }
    }
    
    void NuPlayer::DecoderBase::onRequestInputBuffers() {
        if (mRequestInputBuffersPending) {
            return;
        }
    
        // doRequestBuffers() return true if we should request more data
        if (doRequestBuffers()) {
            mRequestInputBuffersPending = true;
    
            sp<AMessage> msg = new AMessage(kWhatRequestInputBuffers, this);
            msg->post(10 * 1000ll); // re-post the same message after 10 ms
        }
    }
    
    // from NuPlayer::DecoderBase::onMessageReceived
    case kWhatRequestInputBuffers:
    {
        mRequestInputBuffersPending = false;
        onRequestInputBuffers();
        break;
    }
    

    setVideoSurface is implemented as follows:

    status_t NuPlayer::Decoder::setVideoSurface(const sp<Surface> &surface) {
        sp<AMessage> msg = new AMessage(kWhatSetVideoSurface, this);
    
        msg->setObject("surface", surface);
        sp<AMessage> response;
        status_t err = msg->postAndAwaitResponse(&response);
        if (err == OK && response != NULL) {
            CHECK(response->findInt32("err", &err));
        }
        return err;
    }
    

    The corresponding message handling is shown below; note the calls into the native_window API:

    case kWhatSetVideoSurface:
    {
        sp<AReplyToken> replyID;
        CHECK(msg->senderAwaitsResponse(&replyID));
    
        sp<RefBase> obj;
        CHECK(msg->findObject("surface", &obj));
        sp<Surface> surface = static_cast<Surface *>(obj.get()); // non-null
        int32_t err = INVALID_OPERATION;
        // NOTE: in practice mSurface is always non-null, but checking here for completeness
        if (mCodec != NULL && mSurface != NULL) {
            // TODO: once AwesomePlayer is removed, remove this automatic connecting
            // to the surface by MediaPlayerService.
            // at this point MediaPlayerService::client has already connected to the
            // surface, which MediaCodec does not expect
            err = native_window_api_disconnect(surface.get(), NATIVE_WINDOW_API_MEDIA);
            if (err == OK) {
                err = mCodec->setSurface(surface);
                if (err == OK) {
                    // reconnect to the old surface as MPS::Client will expect to
                    // be able to disconnect from it.
                    (void)native_window_api_connect(mSurface.get(), NATIVE_WINDOW_API_MEDIA);
                    mSurface = surface;
                }
            }
            if (err != OK) {
                // reconnect to the new surface on error as MPS::Client will expect to
                // be able to disconnect from it.
                (void)native_window_api_connect(surface.get(), NATIVE_WINDOW_API_MEDIA);
            }
        }
    
        sp<AMessage> response = new AMessage;
        response->setInt32("err", err);
        response->postReply(replyID);
        break;
    }
    

    signalFlush, signalResume, initiateShutdown

    signalFlush is implemented as follows; it mainly calls the flush interfaces of the Renderer and MediaCodec:

    void NuPlayer::Decoder::onFlush() {
        doFlush(true);
    
        sp<AMessage> notify = mNotify->dup();
        notify->setInt32("what", kWhatFlushCompleted);
        notify->post();
    }
    
    void NuPlayer::Decoder::doFlush(bool notifyComplete) {
        if (mCCDecoder != NULL) {
            mCCDecoder->flush();
        }
    
        if (mRenderer != NULL) {
            mRenderer->flush(mIsAudio, notifyComplete);
            mRenderer->signalTimeDiscontinuity();
        }
    
        status_t err = OK;
        if (mCodec != NULL) {
            err = mCodec->flush();
            mCSDsToSubmit = mCSDsForCurrentFormat; // copy operator
            ++mBufferGeneration;
        }
    
        if (err != OK) {
            ALOGE("failed to flush %s (err=%d)", mComponentName.c_str(), err);
            handleError(err);
            // finish with posting kWhatFlushCompleted.
            // we attempt to release the buffers even if flush fails.
        }
        releaseAndResetMediaBuffers();
        mPaused = true;
    }
    

    signalResume is simpler and calls MediaCodec directly:

    void NuPlayer::Decoder::onResume(bool notifyComplete) {
        mPaused = false;
    
        if (notifyComplete) {
            mResumePending = true;
        }
        mCodec->start();
    }
    

    initiateShutdown mainly shuts the decoder down; its implementation is:

    void NuPlayer::Decoder::onShutdown(bool notifyComplete) {
        status_t err = OK;
    
        if (mCodec != NULL) {
            err = mCodec->release();
            mCodec = NULL;
            ++mBufferGeneration;
    
            if (mSurface != NULL) {
                // reconnect to surface as MediaCodec disconnected from it
                status_t error = native_window_api_connect(mSurface.get(), NATIVE_WINDOW_API_MEDIA);       
            }
            mComponentName = "decoder";
        }
        releaseAndResetMediaBuffers();
    
        if (err != OK) {
            handleError(err);
            // finish with posting kWhatShutdownCompleted.
        }
    
        if (notifyComplete) {
            sp<AMessage> notify = mNotify->dup();
            notify->setInt32("what", kWhatShutdownCompleted);
            notify->post();
            mPaused = true;
        }
    }
    

    getInputBuffers, getStats

    getInputBuffers simply calls through to MediaCodec:

    void NuPlayer::Decoder::onGetInputBuffers(
            Vector<sp<ABuffer> > *dstBuffers) {
        CHECK_EQ((status_t)OK, mCodec->getWidevineLegacyBuffers(dstBuffers));
    }
    

    getStats is implemented as follows; it reports the total number of decoded frames and the numbers of dropped input and output frames:

    sp<AMessage> NuPlayer::Decoder::getStats() const {
        mStats->setInt64("frames-total", mNumFramesTotal);
        mStats->setInt64("frames-dropped-input", mNumInputFramesDropped);
        mStats->setInt64("frames-dropped-output", mNumOutputFramesDropped);
        return mStats;
    }
    

    5 The Decoder decode flow

    The previous sections analyzed the code interface by interface, but the core decode flow is not actually in those interfaces.
    The decode process boils down to: fetch compressed input data, let MediaCodec decode it, take back the decoded data, and render it.
    When onConfigure was introduced in Part 4, it contained the following line:

        mCodec->setCallback(reply);
    

    This routes MediaCodec's notifications into Decoder's own message queue, where they are handled in onMessageReceived. A simplified sketch of that delivery mechanism follows.
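
    Roughly speaking, for each codec event MediaCodec duplicates the message registered via setCallback(), fills in a callbackID plus the event-specific fields, and posts it. The sketch below is an assumption-based illustration of that delivery (the field names match what the handler further below reads); it is not the literal MediaCodec source:

    // Hedged sketch of how MediaCodec delivers "input buffer available" through
    // the callback message set with setCallback(reply). Decoder receives it as
    // kWhatCodecNotify because that is the "what" of the registered message.
    static void notifyInputAvailableSketch(const sp<AMessage> &callback, int32_t index) {
        sp<AMessage> msg = callback->dup();                          // copy of "reply"
        msg->setInt32("callbackID", MediaCodec::CB_INPUT_AVAILABLE);
        msg->setInt32("index", index);
        msg->post();                            // ends up in Decoder::onMessageReceived()
    }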

    Decoding actually starts from setRenderer, whose implementation was covered in Part 4. The looping logic comes from onRequestInputBuffers, which calls doRequestBuffers, implemented as follows:

    // returns true if we should request more data
    bool NuPlayer::Decoder::doRequestBuffers() {
        // mRenderer is only NULL if we have a legacy widevine source that
        // is not yet ready. In this case we must not fetch input.
        if (isDiscontinuityPending() || mRenderer == NULL) {
            return false;
        }
        status_t err = OK;
        while (err == OK && !mDequeuedInputBuffers.empty()) {
            size_t bufferIx = *mDequeuedInputBuffers.begin();
            sp<AMessage> msg = new AMessage();
            msg->setSize("buffer-ix", bufferIx);
            err = fetchInputData(msg); // fetch one input access unit
            if (err != OK && err != ERROR_END_OF_STREAM) {
                // if EOS, need to queue EOS buffer
                break;
            }
            mDequeuedInputBuffers.erase(mDequeuedInputBuffers.begin());
    
            if (!mPendingInputMessages.empty()
                    || !onInputBufferFetched(msg)) {
                mPendingInputMessages.push_back(msg); // queue the fetched data in this pending list
            }
        }
    
        return err == -EWOULDBLOCK
                && mSource->feedMoreTSData() == OK;
    }
    

    If doRequestBuffers returns true, onRequestInputBuffers keeps re-posting the kWhatRequestInputBuffers message, which drives the normal decode loop.

    Next is the handling of the messages coming back from MediaCodec; the code is concentrated in onMessageReceived:

    case kWhatCodecNotify:
    {
        int32_t cbID;
        CHECK(msg->findInt32("callbackID", &cbID));
    
        if (mPaused) {
            break;
        }
    
        switch (cbID) {
            case MediaCodec::CB_INPUT_AVAILABLE:
            { // an input buffer is ready to be filled
                int32_t index;
                CHECK(msg->findInt32("index", &index));
    
                handleAnInputBuffer(index);
                break;
            }
    
            case MediaCodec::CB_OUTPUT_AVAILABLE:
            { // a buffer was decoded successfully; the output data is available here
                int32_t index;
                size_t offset;
                size_t size;
                int64_t timeUs;
                int32_t flags;
    
                CHECK(msg->findInt32("index", &index));
                CHECK(msg->findSize("offset", &offset));
                CHECK(msg->findSize("size", &size));
                CHECK(msg->findInt64("timeUs", &timeUs));
                CHECK(msg->findInt32("flags", &flags));
    
                handleAnOutputBuffer(index, offset, size, timeUs, flags);
                break;
            }
    
            case MediaCodec::CB_OUTPUT_FORMAT_CHANGED:
            { // the output format has changed
                sp<AMessage> format;
                CHECK(msg->findMessage("format", &format));
    
                handleOutputFormatChange(format);
                break;
            }
    
            case MediaCodec::CB_ERROR:
            { // an error occurred and must be handled
                status_t err;
                CHECK(msg->findInt32("err", &err));
                handleError(err);
                break;
            }
    
            default:
            { // any other callback ID is unexpected
                TRESPASS();
                break;
            }
        }
    
        break;
    }
    

    First, let's look at how input data is fed to MediaCodec:

    bool NuPlayer::Decoder::handleAnInputBuffer(size_t index) {
        if (isDiscontinuityPending()) {
            return false;
        }
    
        sp<ABuffer> buffer;
        mCodec->getInputBuffer(index, &buffer); // first get the available input buffer from MediaCodec
    
        if (buffer == NULL) {
            handleError(UNKNOWN_ERROR);
            return false;
        }
    
        if (index >= mInputBuffers.size()) {
            for (size_t i = mInputBuffers.size(); i <= index; ++i) {
                mInputBuffers.add();
                mMediaBuffers.add();
                mInputBufferIsDequeued.add();
                mMediaBuffers.editItemAt(i) = NULL;
                mInputBufferIsDequeued.editItemAt(i) = false;
            }
        }
        mInputBuffers.editItemAt(index) = buffer;
    
        //CHECK_LT(bufferIx, mInputBuffers.size());
    
        if (mMediaBuffers[index] != NULL) {
            mMediaBuffers[index]->release();
            mMediaBuffers.editItemAt(index) = NULL;
        }
        mInputBufferIsDequeued.editItemAt(index) = true;
    
        if (!mCSDsToSubmit.isEmpty()) {
            sp<AMessage> msg = new AMessage();
            msg->setSize("buffer-ix", index);
    
            sp<ABuffer> buffer = mCSDsToSubmit.itemAt(0);
            ALOGI("[%s] resubmitting CSD", mComponentName.c_str());
            msg->setBuffer("buffer", buffer);
            mCSDsToSubmit.removeAt(0);
            if (!onInputBufferFetched(msg)) {
                handleError(UNKNOWN_ERROR);
                return false;
            }
            return true;
        }
    
        while (!mPendingInputMessages.empty()) {
            sp<AMessage> msg = *mPendingInputMessages.begin();
            if (!onInputBufferFetched(msg)) { // this is where the data is actually filled in
                break;
            }
            mPendingInputMessages.erase(mPendingInputMessages.begin());
        }
    
        if (!mInputBufferIsDequeued.editItemAt(index)) {
            return true;
        }
    
        mDequeuedInputBuffers.push_back(index);
    
        onRequestInputBuffers();
        return true;
    }
    

    Now let's look at onInputBufferFetched, which is where MediaCodec is actually driven:

    bool NuPlayer::Decoder::onInputBufferFetched(const sp<AMessage> &msg) {
        size_t bufferIx;
        CHECK(msg->findSize("buffer-ix", &bufferIx));
        CHECK_LT(bufferIx, mInputBuffers.size());
        sp<ABuffer> codecBuffer = mInputBuffers[bufferIx];
    
        sp<ABuffer> buffer;
        bool hasBuffer = msg->findBuffer("buffer", &buffer);
        if (buffer == NULL /* includes !hasBuffer */) {
            int32_t streamErr = ERROR_END_OF_STREAM;
            CHECK(msg->findInt32("err", &streamErr) || !hasBuffer);
    
            CHECK(streamErr != OK);
    
            // attempt to queue EOS, i.e. notify MediaCodec of the end-of-stream flag
            status_t err = mCodec->queueInputBuffer(
                    bufferIx, 0, 0, 0, MediaCodec::BUFFER_FLAG_EOS);
            if (err == OK) {
                mInputBufferIsDequeued.editItemAt(bufferIx) = false;
            } else if (streamErr == ERROR_END_OF_STREAM) {
                streamErr = err;
                // err will not be ERROR_END_OF_STREAM
            }
    
            if (streamErr != ERROR_END_OF_STREAM) {
                handleError(streamErr);
            }
        } else {
            sp<AMessage> extra;
            if (buffer->meta()->findMessage("extra", &extra) && extra != NULL) {
                int64_t resumeAtMediaTimeUs;
                if (extra->findInt64("resume-at-mediaTimeUs", &resumeAtMediaTimeUs)) {
                    mSkipRenderingUntilMediaTimeUs = resumeAtMediaTimeUs;
                }
            }
    
            int64_t timeUs = 0;
            uint32_t flags = 0;
            CHECK(buffer->meta()->findInt64("timeUs", &timeUs));
    
            int32_t eos, csd;
            // we do not expect SYNCFRAME for decoder
            if (buffer->meta()->findInt32("eos", &eos) && eos) {
                flags |= MediaCodec::BUFFER_FLAG_EOS;
            } else if (buffer->meta()->findInt32("csd", &csd) && csd) {
                flags |= MediaCodec::BUFFER_FLAG_CODECCONFIG;
            }
    
            // copy into codec buffer
            if (buffer != codecBuffer) {
                if (buffer->size() > codecBuffer->capacity()) {
                    handleError(ERROR_BUFFER_TOO_SMALL);
                    mDequeuedInputBuffers.push_back(bufferIx);
                    return false;
                }
                codecBuffer->setRange(0, buffer->size());
                memcpy(codecBuffer->data(), buffer->data(), buffer->size()); // the actual data copy
            }
        // this is where the buffer is actually handed to the decoder
            status_t err = mCodec->queueInputBuffer(
                            bufferIx,
                            codecBuffer->offset(),
                            codecBuffer->size(),
                            timeUs,
                            flags);
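        // NOTE: mediaBuffer below is a MediaBuffer* obtained from the fetched input
        // buffer (legacy widevine path) in the full source; its declaration was
        // trimmed from this excerpt.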
            if (err != OK) {
                if (mediaBuffer != NULL) {
                    mediaBuffer->release();
                }
                handleError(err);
            } else {
                mInputBufferIsDequeued.editItemAt(bufferIx) = false;
                if (mediaBuffer != NULL) {
                    CHECK(mMediaBuffers[bufferIx] == NULL);
                    mMediaBuffers.editItemAt(bufferIx) = mediaBuffer;
                }
            }
        }
        return true;
    }
    

    That completes the preparation of data before decoding.
    Decoded data is handled in handleAnOutputBuffer, which responds to the MediaCodec::CB_OUTPUT_AVAILABLE message:

    bool NuPlayer::Decoder::handleAnOutputBuffer(size_t index, size_t offset,
             size_t size, int64_t timeUs, int32_t flags) {
        sp<ABuffer> buffer;
        mCodec->getOutputBuffer(index, &buffer);
    
        if (index >= mOutputBuffers.size()) {
            for (size_t i = mOutputBuffers.size(); i <= index; ++i) {
                mOutputBuffers.add();
            }
        }
    
        mOutputBuffers.editItemAt(index) = buffer;
    
        buffer->setRange(offset, size);
        buffer->meta()->clear();
        buffer->meta()->setInt64("timeUs", timeUs);
    
        bool eos = flags & MediaCodec::BUFFER_FLAG_EOS;
        // we do not expect CODECCONFIG or SYNCFRAME for decoder
    
        sp<AMessage> reply = new AMessage(kWhatRenderBuffer, this);
        reply->setSize("buffer-ix", index);
        reply->setInt32("generation", mBufferGeneration);
    
        if (eos) {
            buffer->meta()->setInt32("eos", true);
            reply->setInt32("eos", true);
        } else if (mSkipRenderingUntilMediaTimeUs >= 0) {
            if (timeUs < mSkipRenderingUntilMediaTimeUs) {
                reply->post();
                return true;
            }
    
            mSkipRenderingUntilMediaTimeUs = -1;
        }
    
        mNumFramesTotal += !mIsAudio;
    
        // wait until 1st frame comes out to signal resume complete
        notifyResumeCompleteIfNecessary();
    
        if (mRenderer != NULL) {
            // send the buffer to the renderer for display
            mRenderer->queueBuffer(mIsAudio, buffer, reply);
            if (eos && !isDiscontinuityPending()) {
                mRenderer->queueEOS(mIsAudio, ERROR_END_OF_STREAM);
            }
        }
    
        return true;
    }
    

    One more callback remains, MediaCodec::CB_OUTPUT_FORMAT_CHANGED, whose handler is handleOutputFormatChange:

    void NuPlayer::Decoder::handleOutputFormatChange(const sp<AMessage> &format) {
        if (!mIsAudio) {
            int32_t width, height;
            if (format->findInt32("width", &width)
                    && format->findInt32("height", &height)) {
                mStats->setInt32("width", width);
                mStats->setInt32("height", height);
            }
            sp<AMessage> notify = mNotify->dup();
            notify->setInt32("what", kWhatVideoSizeChanged);
            notify->setMessage("format", format);
            notify->post(); // tell NuPlayer that the video resolution has changed
        } else if (mRenderer != NULL) { // audio case: reopen the AudioSink
            uint32_t flags;
            int64_t durationUs;
            bool hasVideo = (mSource->getFormat(false /* audio */) != NULL);
            if (getAudioDeepBufferSetting() // override regardless of source duration
                    || (!hasVideo
                            && mSource->getDuration(&durationUs) == OK
                            && durationUs > AUDIO_SINK_MIN_DEEP_BUFFER_DURATION_US)) {
                flags = AUDIO_OUTPUT_FLAG_DEEP_BUFFER;
            } else {
                flags = AUDIO_OUTPUT_FLAG_NONE;
            }
    
            status_t err = mRenderer->openAudioSink(
                    format, false /* offloadOnly */, hasVideo, flags, NULL /* isOffloaed */);
            if (err != OK) {
                handleError(err);
            }
        }
    }
    

    The last piece is how display is handled: the kWhatRenderBuffer message posted by handleAnOutputBuffer is processed as follows:

    case kWhatRenderBuffer:
    {
        if (!isStaleReply(msg)) {
            onRenderBuffer(msg);
        }
        break;
    }
    
    void NuPlayer::Decoder::onRenderBuffer(const sp<AMessage> &msg) {
        status_t err;
        int32_t render;
        size_t bufferIx;
        int32_t eos;
        CHECK(msg->findSize("buffer-ix", &bufferIx));
    
        if (!mIsAudio) {
            int64_t timeUs;
            sp<ABuffer> buffer = mOutputBuffers[bufferIx];
            buffer->meta()->findInt64("timeUs", &timeUs);
        }
        // render or release the buffer
        if (msg->findInt32("render", &render) && render) {
            int64_t timestampNs;
            CHECK(msg->findInt64("timestampNs", &timestampNs));
            err = mCodec->renderOutputBufferAndRelease(bufferIx, timestampNs);
        } else {
            mNumOutputFramesDropped += !mIsAudio;
            err = mCodec->releaseOutputBuffer(bufferIx);
        }
        if (err != OK) {
            handleError(err);
        }
        if (msg->findInt32("eos", &eos) && eos  && isDiscontinuityPending()) {
            finishHandleDiscontinuity(true /* flushOnTimeChange */);
        }
    }
    

    This completes the analysis of the whole flow.

    6 Summary

    To recap, this article walked through the implementation of NuPlayer::DecoderBase and its subclass NuPlayer::Decoder: taking the demuxed data, decoding it through MediaCodec to obtain raw frames, and then rendering them.
    It is meant as a reference for further study of the NuPlayer code.

    Original article: https://www.cnblogs.com/tocy/p/6-nuplayer-decoderbase-code-analysis.html