  • NuPlayer Playback Framework (4): Renderer Source Code Analysis

    [Date: 2016-11] [Status: Open]
    [Keywords: android, nuplayer, open-source player, playback framework, renderer, render]

    0 Introduction

    In earlier articles we walked through the NuPlayer implementation; this one focuses on a single component: the renderer (Renderer).
    Functionally, the Renderer's main responsibilities are:

    • Buffering decoded (raw) audio/video data
    • Audio playback (out to the sound card)
    • Video display (out to the graphics card)
    • Audio/video synchronization
    • Other operations that support playback control
    • Other interfaces for querying rendering state/properties

    The rest of this article goes through the Renderer's public interface and its implementation to explain the processing logic.

    This is the fourth article in my NuPlayer playback framework series.

    1 NuPlayer::Renderer public interface and main members

    // code from ~/frameworks/av/media/libmediaplayerservice/nuplayer/NuPlayerRenderer.h
    struct NuPlayer::Renderer : public AHandler {
        Renderer(const sp<MediaPlayerBase::AudioSink> &sink,
                 const sp<AMessage> &notify, uint32_t flags = 0);
    
        static size_t AudioSinkCallback(MediaPlayerBase::AudioSink *audioSink,
                void *data, size_t size, void *me,
                MediaPlayerBase::AudioSink::cb_event_t event);
    	// buffer decoded (raw) audio/video data
        void queueBuffer(bool audio,
                const sp<ABuffer> &buffer, const sp<AMessage> &notifyConsumed);
    
        void queueEOS(bool audio, status_t finalResult);
    
        status_t setPlaybackSettings(const AudioPlaybackRate &rate /* sanitized */);
        status_t getPlaybackSettings(AudioPlaybackRate *rate /* nonnull */);
        status_t setSyncSettings(const AVSyncSettings &sync, float videoFpsHint);
        status_t getSyncSettings(AVSyncSettings *sync /* nonnull */, float *videoFps /* nonnull */);
    
        void flush(bool audio, bool notifyComplete);
    
        void signalTimeDiscontinuity();
    
        void signalAudioSinkChanged();
    
        void signalDisableOffloadAudio();
        void signalEnableOffloadAudio();
    
        void pause();
        void resume();
    
        void setVideoFrameRate(float fps);
    
        status_t getCurrentPosition(int64_t *mediaUs);
        int64_t getVideoLateByUs();
    
        status_t openAudioSink( const sp<AMessage> &format, bool offloadOnly, 
    		bool hasVideo, uint32_t flags, bool *isOffloaded);
        void closeAudioSink();
    
    private:
    	struct QueueEntry {
            sp<ABuffer> mBuffer;
            sp<AMessage> mNotifyConsumed;
            size_t mOffset;
            status_t mFinalResult;
            int32_t mBufferOrdinal;
        };
    
        static const int64_t kMinPositionUpdateDelayUs;
    
        sp<MediaPlayerBase::AudioSink> mAudioSink;
        bool mUseVirtualAudioSink;
        sp<AMessage> mNotify;
        Mutex mLock;
        uint32_t mFlags;
        List<QueueEntry> mAudioQueue; // audio buffer queue
        List<QueueEntry> mVideoQueue; // video buffer queue
        uint32_t mNumFramesWritten;
        sp<VideoFrameScheduler> mVideoScheduler;
    	sp<MediaClock> mMediaClock;
        float mPlaybackRate; // audio track rate
    
    };
    

    The first thing to notice is that Renderer is itself a subclass of AHandler. Remember how AHandler and ALooper work together? The corresponding ALooper lives inside NuPlayer, in a member named mRendererLooper; a rough sketch of how it is wired up is shown below.
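
    For reference, here is a rough, abridged sketch of how NuPlayer sets this up when playback starts (based on NuPlayer::onStart(); details such as generation counters are omitted and may differ between Android versions):

    // Abridged sketch, not verbatim NuPlayer code
    sp<AMessage> notify = new AMessage(kWhatRendererNotify, this); // renderer events come back to NuPlayer
    mRenderer = new Renderer(mAudioSink, notify, flags);

    mRendererLooper = new ALooper;                                 // dedicated looper thread for the renderer
    mRendererLooper->setName("NuPlayerRenderer");
    mRendererLooper->start(false, false, ANDROID_PRIORITY_AUDIO);
    mRendererLooper->registerHandler(mRenderer);                   // Renderer messages are handled on this thread
    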

    2 Renderer interfaces called from NuPlayer

    First, a recap of the calls we saw in the NuPlayer source-code walkthrough.

    • Constructor/destructor
    • Playback-control parameters: setPlaybackSettings/getPlaybackSettings/setVideoFrameRate/setSyncSettings/getSyncSettings
    • AudioSink management: openAudioSink/closeAudioSink
    • Control interfaces: pause/flush/resume/queueEOS
    • Audio state updates: signalEnableOffloadAudio/signalDisableOffloadAudio
    • Raw audio/video data input: queueBuffer
      NuPlayer never calls this one explicitly; instead it hands the Renderer to the decoders, which call it themselves (see the sketch after the snippet below):
    if (mVideoDecoder != NULL) {
        mVideoDecoder->setRenderer(mRenderer);
    }
    if (mAudioDecoder != NULL) {
        mAudioDecoder->setRenderer(mRenderer); 
    }
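
    Once a decoder owns the renderer, every decoded output buffer is handed over through queueBuffer(). Roughly, the decoder side looks like the sketch below (based on NuPlayerDecoder; kWhatRenderBuffer and the message fields are from that class and may differ by version). The reply message is what the Renderer later stores as mNotifyConsumed and posts back once the buffer has been rendered or dropped.

    // Abridged sketch from the decoder side, not verbatim code
    sp<AMessage> reply = new AMessage(kWhatRenderBuffer, this); // posted back by the Renderer when consumed
    reply->setSize("buffer-ix", index);                         // which codec output buffer to render/release
    if (mRenderer != NULL) {
        mRenderer->queueBuffer(mIsAudio, buffer, reply);        // buffer->meta() carries the "timeUs" timestamp
    }
    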
    

    3 A closer look at the Renderer interfaces

    Constructor and destructor

    The most important thing the constructor does is create a MediaClock, used for synchronization and timekeeping. The key lines:

        mMediaClock = new MediaClock;
        mPlaybackRate = mPlaybackSettings.mSpeed;
        mMediaClock->setPlaybackRate(mPlaybackRate);
    

    Since the Renderer (an AHandler) is reference-counted and managed through sp<> smart pointers, the destructor rarely needs special attention, but it is worth a quick look:

    NuPlayer::Renderer::~Renderer() {
        if (offloadingAudio()) {
            mAudioSink->stop(); // the cleanup is mostly about the AudioSink
            mAudioSink->flush();
            mAudioSink->close();
        }
    }
    

    Playback-control parameter interfaces

    Audio playback settings: setPlaybackSettings/getPlaybackSettings

    The declarations and the parameter type:

    status_t setPlaybackSettings(const AudioPlaybackRate &rate /* sanitized */);
    status_t getPlaybackSettings(AudioPlaybackRate *rate /* nonnull */);
    
    struct AudioPlaybackRate {
        float mSpeed; // playback speed multiplier
        float mPitch; // pitch
        enum AudioTimestretchStretchMode  mStretchMode; // time-stretch mode
        enum AudioTimestretchFallbackMode mFallbackMode; // fallback mode
    };
    

    In practice these interfaces mainly control the audio playback rate; the setter ultimately forwards the value to mMediaClock->setPlaybackRate(). Like most Renderer entry points, the public call itself only posts a message to the looper thread, as sketched below.
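
    Roughly, the pattern looks like this (abridged from NuPlayerRenderer.cpp; error paths trimmed). The public setter posts an AMessage and waits for the reply, while the actual work happens in the matching onConfigPlayback() handler on the looper thread:

    status_t NuPlayer::Renderer::setPlaybackSettings(const AudioPlaybackRate &rate) {
        sp<AMessage> msg = new AMessage(kWhatConfigPlayback, this);
        writeToAMessage(msg, rate);                           // serialize the rate into the message
        sp<AMessage> response;
        status_t err = msg->postAndAwaitResponse(&response);  // runs onConfigPlayback() on the looper thread
        if (err == OK && response != NULL) {
            CHECK(response->findInt32("err", &err));          // result reported back by onConfigPlayback()
        }
        return err;
    }
    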

    Video frame rate: setVideoFrameRate

    The prototype is void setVideoFrameRate(float fps);. Its single parameter is the video frame rate (fps), which the implementation simply hands to mVideoScheduler:

    void NuPlayer::Renderer::onSetVideoFrameRate(float fps) {
        if (mVideoScheduler == NULL) {
            mVideoScheduler = new VideoFrameScheduler();
        }
        mVideoScheduler->init(fps);
    }
    

    A/V sync settings: setSyncSettings/getSyncSettings

    Declarations and the main parameter type:

    status_t setSyncSettings(const AVSyncSettings &sync, float videoFpsHint);
    status_t getSyncSettings(AVSyncSettings *sync /* nonnull */, float *videoFps /* nonnull */);
    
    // from ~/frameworks/av/include/media/AVSyncSettings.h
    struct AVSyncSettings {
        AVSyncSource mSource; // synchronization reference source
        AVSyncAudioAdjustMode mAudioAdjustMode; // how audio is adjusted
        float mTolerance; // maximum tolerated rate adjustment
        AVSyncSettings()
            : mSource(AVSYNC_SOURCE_DEFAULT),
              mAudioAdjustMode(AVSYNC_AUDIO_ADJUST_MODE_DEFAULT),
              mTolerance(.044f) { }
    };
    

    Looking at the implementation, the Renderer does not actually support setSyncSettings; it only checks that the default sync source is requested. The check is:

    status_t NuPlayer::Renderer::onConfigSync(const AVSyncSettings &sync, float videoFpsHint __unused) {
        if (sync.mSource != AVSYNC_SOURCE_DEFAULT) {
            return BAD_VALUE;
        }
        // TODO: support sync sources
        return INVALID_OPERATION;
    }
    

    MediaClock, AudioSink, and VideoFrameScheduler, which appear here, each get their own section later.

    AudioSink management: openAudioSink/closeAudioSink

    These create and close the AudioSink. Declarations:

    status_t openAudioSink(
            const sp<AMessage> &format,
            bool offloadOnly,
            bool hasVideo,
            uint32_t flags,
            bool *isOffloaded);
    void closeAudioSink();
    

    Both interfaces come up again later.

    Control interfaces: pause/flush/resume/queueEOS

    pause/resume

    Pause and resume are implemented similarly; pause ends up in onPause:

    void NuPlayer::Renderer::onPause() {
        if (mPaused) {
            return;
        }
    
        {
            Mutex::Autolock autoLock(mLock);
            // we do not increment audio drain generation so that we fill audio buffer during pause.
            ++mVideoDrainGeneration;
            prepareForMediaRenderingStart_l();
            mPaused = true;
            mMediaClock->setPlaybackRate(0.0); // set the rate to 0.0; the reason is explained later
        }
    
        mDrainAudioQueuePending = false;
        mDrainVideoQueuePending = false;
    
        // Note: audio data may not have been decoded, and the AudioSink may not be opened.
        mAudioSink->pause();
        startAudioOffloadPauseTimeout();
    }
    

    Pausing is ultimately achieved through mMediaClock->setPlaybackRate() and mAudioSink->pause().
    resume ends up in onResume:

    void NuPlayer::Renderer::onResume() {
        if (!mPaused) {
            return;
        }
    
        // Note: audio data may not have been decoded, and the AudioSink may not be opened.
        cancelAudioOffloadPauseTimeout();
        if (mAudioSink->ready()) {
            status_t err = mAudioSink->start();
            if (err != OK) {
                ALOGE("cannot start AudioSink err %d", err);
                notifyAudioTearDown(kDueToError);
            }
        }
    
        {
            Mutex::Autolock autoLock(mLock);
            mPaused = false;
            // rendering started message may have been delayed if we were paused.
            if (mRenderingDataDelivered) {
                notifyIfMediaRenderingStarted_l();
            }
            // configure audiosink as we did not do it when pausing
            if (mAudioSink != NULL && mAudioSink->ready()) {
                mAudioSink->setPlaybackRate(mPlaybackSettings);
            }
    
            mMediaClock->setPlaybackRate(mPlaybackRate);
    
            if (!mAudioQueue.empty()) {
                postDrainAudioQueue_l();
            }
        }
    
        if (!mVideoQueue.empty()) {
            postDrainVideoQueue();
        }
    }
    

    Resuming essentially relies on mAudioSink->start() and mMediaClock->setPlaybackRate(), and it also re-arms draining of the audio and video queues (postDrainAudioQueue_l/postDrainVideoQueue).

    flush

    There is a flush for audio and one for video. The audio side mainly relies on the AudioSink pause/flush/start calls; the video side clears the buffer queue and calls mVideoScheduler->restart(). For the details see NuPlayer::Renderer::onFlush; a rough sketch of its structure follows.
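
    Based on that description, the overall shape of onFlush is roughly the following (a heavily abridged sketch, not the verbatim code; generation counters and the flush-complete notification are omitted):

    // Abridged sketch of NuPlayer::Renderer::onFlush(), not verbatim
    void NuPlayer::Renderer::onFlush(const sp<AMessage> &msg) {
        int32_t audio;
        CHECK(msg->findInt32("audio", &audio));

        if (audio) {
            flushQueue(&mAudioQueue);       // drop all queued audio entries
            mAudioSink->pause();
            mAudioSink->flush();            // discard data already written to the sink
            if (!mPaused) {
                mAudioSink->start();        // keep playing if we are not paused
            }
        } else {
            flushQueue(&mVideoQueue);       // drop all queued video entries
            if (mVideoScheduler != NULL) {
                mVideoScheduler->restart(); // render times are no longer continuous
            }
        }
        // both paths finish by notifying flush completion if it was requested
    }
    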

    queueEOS

    Queues an end-of-stream marker. The work is done in onQueueEOS:

    void NuPlayer::Renderer::onQueueEOS(const sp<AMessage> &msg) {
        int32_t audio;
        CHECK(msg->findInt32("audio", &audio));
    
        if (dropBufferIfStale(audio, msg)) {
            return;
        }
    
        int32_t finalResult;
        CHECK(msg->findInt32("finalResult", &finalResult));
    
        QueueEntry entry;
        entry.mOffset = 0;
        entry.mFinalResult = finalResult;
    
        if (audio) { // audio EOS
            Mutex::Autolock autoLock(mLock);
            if (mAudioQueue.empty() && mSyncQueues) {
                syncQueuesDone_l();
            }
            mAudioQueue.push_back(entry);
            postDrainAudioQueue_l();
        } else { // video EOS
            if (mVideoQueue.empty() && getSyncQueues()) {
                Mutex::Autolock autoLock(mLock);
                syncQueuesDone_l();
            }
            mVideoQueue.push_back(entry);
            postDrainVideoQueue();
        }
    }
    

    Raw audio/video data input: queueBuffer

    This call never shows up in NuPlayer itself; it is made by the audio/video decoders. It hands decoded raw audio/video data over to the rendering side, where it is buffered and synchronized. The main implementation (abridged):

    void NuPlayer::Renderer::onQueueBuffer(const sp<AMessage> &msg) {
        int32_t audio;
        CHECK(msg->findInt32("audio", &audio));
    
        if (dropBufferIfStale(audio, msg)) {
            return;
        }
    
        sp<ABuffer> buffer;
        CHECK(msg->findBuffer("buffer", &buffer)); // the incoming data lives here

        sp<AMessage> notifyConsumed;
        CHECK(msg->findMessage("notifyConsumed", &notifyConsumed)); // posted back once the buffer is consumed

        QueueEntry entry;
        entry.mBuffer = buffer;
        entry.mNotifyConsumed = notifyConsumed;
        entry.mOffset = 0;
        entry.mFinalResult = OK;
        entry.mBufferOrdinal = ++mTotalBuffersQueued;
    	// push the data onto the audio or video buffer queue
        if (audio) {
            Mutex::Autolock autoLock(mLock);
            mAudioQueue.push_back(entry);
            postDrainAudioQueue_l();
        } else {
            mVideoQueue.push_back(entry);
            postDrainVideoQueue();
        }
    	// the code below handles the initial synchronization of the two queues
        Mutex::Autolock autoLock(mLock);
        if (!mSyncQueues || mAudioQueue.empty() || mVideoQueue.empty()) {
            return;
        }
    
        sp<ABuffer> firstAudioBuffer = (*mAudioQueue.begin()).mBuffer;
        sp<ABuffer> firstVideoBuffer = (*mVideoQueue.begin()).mBuffer;
    
        if (firstAudioBuffer == NULL || firstVideoBuffer == NULL) {
            // EOS signalled on either queue.
            syncQueuesDone_l();
            return;
        }
    
        int64_t firstAudioTimeUs;
        int64_t firstVideoTimeUs;
        CHECK(firstAudioBuffer->meta()
                ->findInt64("timeUs", &firstAudioTimeUs));
        CHECK(firstVideoBuffer->meta()
                ->findInt64("timeUs", &firstVideoTimeUs));
    
        int64_t diff = firstVideoTimeUs - firstAudioTimeUs;
    
        ALOGV("queueDiff = %.2f secs", diff / 1E6);
    
        if (diff > 100000ll) {
            // Audio data starts More than 0.1 secs before video.
            // Drop some audio.
    
            (*mAudioQueue.begin()).mNotifyConsumed->post();
            mAudioQueue.erase(mAudioQueue.begin());
            return;
        }
    
        syncQueuesDone_l();
    }
    

    4 A quick look at MediaClock

    As the name suggests, MediaClock is a media clock used for synchronization and timing. It is a utility class provided by libstagefright. Its interface:

    struct MediaClock : public RefBase {
        MediaClock();
    
        void setStartingTimeMedia(int64_t startingTimeMediaUs);
        void clearAnchor();
        void updateAnchor( int64_t anchorTimeMediaUs,
                int64_t anchorTimeRealUs, int64_t maxTimeMediaUs = INT64_MAX);
    
        void updateMaxTimeMedia(int64_t maxTimeMediaUs);
    
        void setPlaybackRate(float rate);
        float getPlaybackRate() const;
    
        // query the media time corresponding to real time |realUs|; the result is stored in |outMediaUs|
        status_t getMediaTime( int64_t realUs, int64_t *outMediaUs,
                bool allowPastMaxTime = false) const;
        // query the real time corresponding to media time |targetMediaUs|; the result is stored in |outRealUs|
        status_t getRealTimeFor(int64_t targetMediaUs, int64_t *outRealUs) const;
    
    private:
        mutable Mutex mLock;

        int64_t mAnchorTimeMediaUs;
        int64_t mAnchorTimeRealUs;
        int64_t mMaxTimeMediaUs;
        int64_t mStartingTimeMediaUs;
    
        float mPlaybackRate;
    };
    

    I split this class into two parts: code that just assigns or returns values with no real logic, and code that actually computes something. The simple part first; these functions only set or return members.

    // code from ~/frameworks/av/media/libstagefright/MediaClock.cpp
    MediaClock::MediaClock() : mAnchorTimeMediaUs(-1), mAnchorTimeRealUs(-1),
          mMaxTimeMediaUs(INT64_MAX), mStartingTimeMediaUs(-1), mPlaybackRate(1.0) {}
    
    MediaClock::~MediaClock() {}
    
    void MediaClock::setStartingTimeMedia(int64_t startingTimeMediaUs) {
        mStartingTimeMediaUs = startingTimeMediaUs;
    }
    
    void MediaClock::clearAnchor() {
        mAnchorTimeMediaUs = -1;
        mAnchorTimeRealUs = -1;
    }
    
    void MediaClock::updateMaxTimeMedia(int64_t maxTimeMediaUs) {
        mMaxTimeMediaUs = maxTimeMediaUs;
    }
    
    float MediaClock::getPlaybackRate() const {
        Mutex::Autolock autoLock(mLock);
        return mPlaybackRate;
    }
    

    The next part implements the clock's core functionality: mapping between media time and real time. (The code is abridged; only the core logic is kept.)

    void MediaClock::updateAnchor(
            int64_t anchorTimeMediaUs, // media timestamp at the anchor
            int64_t anchorTimeRealUs, // real (system) time at the anchor
            int64_t maxTimeMediaUs) {
        int64_t nowUs = ALooper::GetNowUs(); // current system time
        int64_t nowMediaUs = anchorTimeMediaUs + (nowUs - anchorTimeRealUs) * (double)mPlaybackRate; // project the anchor to "now" to keep the error small
    
        if (maxTimeMediaUs != -1) {
            mMaxTimeMediaUs = maxTimeMediaUs;
        }
        mAnchorTimeRealUs = nowUs;
        mAnchorTimeMediaUs = nowMediaUs;
    }
    
    void MediaClock::setPlaybackRate(float rate) {
        CHECK_GE(rate, 0.0);
        if (mAnchorTimeRealUs == -1) {
            mPlaybackRate = rate;
            return;
        }
    
        int64_t nowUs = ALooper::GetNowUs();
        mAnchorTimeMediaUs += (nowUs - mAnchorTimeRealUs) * (double)mPlaybackRate;
        mAnchorTimeRealUs = nowUs;
        mPlaybackRate = rate;
    }
    
    // The two functions below map MediaTime <--> realTime; the underlying math follows from updateAnchor
    status_t MediaClock::getMediaTime(int64_t realUs, int64_t *outMediaUs, bool allowPastMaxTime) const {
        return getMediaTime_l(realUs, outMediaUs, allowPastMaxTime);
    }
    
    status_t MediaClock::getMediaTime_l(int64_t realUs, int64_t *outMediaUs, bool allowPastMaxTime) const {
        if (mAnchorTimeRealUs == -1) {
            return NO_INIT;
        }
    
        int64_t mediaUs = mAnchorTimeMediaUs
                + (realUs - mAnchorTimeRealUs) * (double)mPlaybackRate;
        if (mediaUs > mMaxTimeMediaUs && !allowPastMaxTime) {
            mediaUs = mMaxTimeMediaUs;
        }
        if (mediaUs < mStartingTimeMediaUs) {
            mediaUs = mStartingTimeMediaUs;
        }
        if (mediaUs < 0) {
            mediaUs = 0;
        }
        *outMediaUs = mediaUs;
        return OK;
    }
    
    status_t MediaClock::getRealTimeFor(int64_t targetMediaUs, int64_t *outRealUs) const {
        if (outRealUs == NULL) {
            return BAD_VALUE;
        }
    
        if (mPlaybackRate == 0.0) {
            return NO_INIT;
        }
    
        int64_t nowUs = ALooper::GetNowUs();
        int64_t nowMediaUs;
        status_t status =
                getMediaTime_l(nowUs, &nowMediaUs, true /* allowPastMaxTime */);
        if (status != OK) {
            return status;
        }
        *outRealUs = (targetMediaUs - nowMediaUs) / (double)mPlaybackRate + nowUs;
        return OK;
    }
    

    Remember that when we covered Renderer::pause() earlier, mPlaybackRate was set to 0? Looking at the calculation above, the reason should now be clear.
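
    To make that concrete, here is a short worked example with hypothetical numbers:

    // Hypothetical numbers, for illustration only:
    // suppose updateAnchor() last ran with mAnchorTimeMediaUs = 5,000,000us at real time t0.
    // pause() then calls setPlaybackRate(0.0); that call first moves the anchor to "now"
    // and only then stores the new rate. For any later real time t:
    //   mediaUs = mAnchorTimeMediaUs + (t - mAnchorTimeRealUs) * 0.0
    //           = mAnchorTimeMediaUs                 // frozen at the pause position
    // so getCurrentPosition() keeps reporting the same value until onResume() restores
    // mPlaybackRate and the clock starts advancing again.
    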
    What is also interesting is how mPlaybackRate feeds into the Renderer's own calls. Here is the implementation of the current-position query:

    status_t NuPlayer::Renderer::getCurrentPosition(int64_t *mediaUs) {
    	// note that this calls MediaClock::getMediaTime() directly
        status_t result = mMediaClock->getMediaTime(ALooper::GetNowUs(), mediaUs);
        if (result == OK) {
            return result;
        }
    
        // MediaClock has not been initialized yet; try to initialize it
        {
            AudioTimestamp ts; // an alternative way to derive the clock
            status_t res = mAudioSink->getTimestamp(ts);
            if (res != OK) {
                return result;
            }
    
            // AudioSink has rendered some frames.
            int64_t nowUs = ALooper::GetNowUs();
            int64_t nowMediaUs = mAudioSink->getPlayedOutDurationUs(nowUs)
                    + mAudioFirstAnchorTimeMediaUs;
            mMediaClock->updateAnchor(nowMediaUs, nowUs, -1);
        }
    
        return mMediaClock->getMediaTime(ALooper::GetNowUs(), mediaUs);
    }
    

    At this point it is fairly clear what MediaClock does, but the question remains: where does A/V synchronization actually happen, and how?

    5 A quick look at AudioSink

    The following explanation comes from a Google group:

    AudioTrack is the hardware audio sink. AudioSink is used for in-memory
    decode and potentially other applications where output doesn't go
    straight to hardware.

    In other words, AudioTrack is the audio sink that maps onto the hardware, while the AudioSink abstraction also covers in-memory decoding and other cases where the output does not go straight to an audio device.
    In the earlier article on the MediaPlayer Interface & State we saw that MediaPlayerBase contains an abstract class called AudioSink. Here is its interface:

    class AudioSink : public RefBase {
    public:
        enum cb_event_t {
            CB_EVENT_FILL_BUFFER,   // Request to write more data to buffer.
            CB_EVENT_STREAM_END,    // Sent after all the buffers queued in AF and HW are played
                                    // back (after stop is called)
            CB_EVENT_TEAR_DOWN      // The AudioTrack was invalidated due to use case change:
                                    // Need to re-evaluate offloading options
        };
    
        // Callback returns the number of bytes actually written to the buffer.
        typedef size_t (*AudioCallback)(
                AudioSink *audioSink, void *buffer, size_t size, void *cookie, cb_event_t event);
    
        virtual             ~AudioSink() {}
        virtual bool        ready() const = 0; // audio output is open and ready
        virtual ssize_t     bufferSize() const = 0;
        virtual ssize_t     frameCount() const = 0;
        virtual ssize_t     channelCount() const = 0;
        virtual ssize_t     frameSize() const = 0;
        virtual uint32_t    latency() const = 0;
        virtual float       msecsPerFrame() const = 0;
        virtual status_t    getPosition(uint32_t *position) const = 0;
        virtual status_t    getTimestamp(AudioTimestamp &ts) const = 0;
        virtual int64_t     getPlayedOutDurationUs(int64_t nowUs) const = 0;
        virtual status_t    getFramesWritten(uint32_t *frameswritten) const = 0;
        virtual audio_session_t getSessionId() const = 0;
        virtual audio_stream_type_t getAudioStreamType() const = 0;
        virtual uint32_t    getSampleRate() const = 0;
        virtual int64_t     getBufferDurationInUs() const = 0;
    
        // If no callback is specified, use the "write" API below to submit audio data.
        virtual status_t    open(
                uint32_t sampleRate, int channelCount, audio_channel_mask_t channelMask,
                audio_format_t format=AUDIO_FORMAT_PCM_16_BIT,
                int bufferCount=DEFAULT_AUDIOSINK_BUFFERCOUNT,
                AudioCallback cb = NULL,
                void *cookie = NULL,
                audio_output_flags_t flags = AUDIO_OUTPUT_FLAG_NONE,
                const audio_offload_info_t *offloadInfo = NULL,
                bool doNotReconnect = false,
                uint32_t suggestedFrameCount = 0) = 0;
    
        virtual status_t    start() = 0;
    
        /* Input parameter |size| is in byte units stored in |buffer|.
         * Data is copied over and actual number of bytes written (>= 0)
         * is returned, or no data is copied and a negative status code
         * is returned (even when |blocking| is true).
         * When |blocking| is false, AudioSink will immediately return after
         * part of or full |buffer| is copied over.
         * When |blocking| is true, AudioSink will wait to copy the entire
         * buffer, unless an error occurs or the copy operation is
         * prematurely stopped.
         */
        virtual ssize_t     write(const void* buffer, size_t size, bool blocking = true) = 0;
    
        virtual void        stop() = 0;
        virtual void        flush() = 0;
        virtual void        pause() = 0;
        virtual void        close() = 0;
    
        virtual status_t    setPlaybackRate(const AudioPlaybackRate& rate) = 0;
        virtual status_t    getPlaybackRate(AudioPlaybackRate* rate /* nonnull */) = 0;
        virtual bool        needsTrailingPadding() { return true; }
    
        virtual status_t    setParameters(const String8& /* keyValuePairs */) { return NO_ERROR; }
        virtual String8     getParameters(const String8& /* keys */) { return String8::empty(); }
    };
    

    As the Renderer constructor shows, the AudioSink is passed in by NuPlayer. Clearly, this abstraction is what lets the Renderer drive an AudioSink (or any of its subclasses) without knowing the concrete type. In practice the AudioSink can also act as a playback-time reference, as in the getCurrentPosition() implementation above. The open/close/start/stop/flush/pause/write calls are all used inside the Renderer and come up again in the synchronization discussion below; a minimal usage sketch follows.
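
    For illustration only, here is a minimal, hypothetical usage sketch of the interface declared above (this is not Renderer code; pcmData/pcmSize are placeholders):

    // Hypothetical usage of MediaPlayerBase::AudioSink, not actual Renderer code
    status_t playPcmChunk(const sp<MediaPlayerBase::AudioSink> &sink,
                          const void *pcmData, size_t pcmSize) {
        status_t err = sink->open(44100 /* sampleRate */, 2 /* channelCount */,
                                  AUDIO_CHANNEL_OUT_STEREO, AUDIO_FORMAT_PCM_16_BIT);
        if (err != OK) {
            return err;
        }
        sink->start();
        // non-blocking write, the same mode the Renderer uses in onDrainAudioQueue()
        ssize_t written = sink->write(pcmData, pcmSize, false /* blocking */);
        ALOGV("wrote %zd of %zu bytes", written, pcmSize);
        sink->stop();
        sink->close();
        return OK;
    }
    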

    6 A quick look at VideoFrameScheduler

    Judging by the name it sounds similar to MediaClock, only specialized for video frames. It is another utility class provided by libstagefright. What it really does is adjust video render times against the display's VSYNC so that frames are presented cleanly and tearing is avoided. Its public interface:

    struct VideoFrameScheduler : public RefBase {
        VideoFrameScheduler();
    
        // (re)initialize the scheduler with a given frame rate
        void init(float videoFps = -1);
        // use only when video render times become discontinuous, e.g. after a seek
        void restart();
        // compute the adjusted presentation time (in nanoseconds) for a frame given renderTime
        nsecs_t schedule(nsecs_t renderTime);

        // returns the vsync period of the primary display
        nsecs_t getVsyncPeriod();
        // returns the frame rate
        float getFrameRate();
        void release();
    };
    

    I won't walk through the internals; what matters here is how the Renderer uses it. The Renderer calls the following VideoFrameScheduler interfaces:

    mVideoScheduler = new VideoFrameScheduler();
    mVideoScheduler->init(fps);
    
    mVideoScheduler->restart(); // the calls below all happen in postDrainVideoQueue
    realTimeUs = mVideoScheduler->schedule(realTimeUs * 1000) / 1000;
    int64_t twoVsyncsUs = 2 * (mVideoScheduler->getVsyncPeriod() / 1000);
    
    

    7 How is A/V synchronization implemented?

    Looking only at the Renderer's interface there is nothing that obviously deals with synchronization: just a handful of control calls (flush/pause/resume) plus queueBuffer/queueEOS. The key lies in the ALooper/AHandler mechanism: the real synchronization happens inside the message handlers, dispatched roughly as sketched below. Let's start with audio.
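
    For orientation, the dispatch pattern looks roughly like this (an abridged sketch; the real switch in NuPlayerRenderer.cpp handles more messages):

    // Abridged sketch of NuPlayer::Renderer::onMessageReceived(), not verbatim
    void NuPlayer::Renderer::onMessageReceived(const sp<AMessage> &msg) {
        switch (msg->what()) {
            case kWhatQueueBuffer:     onQueueBuffer(msg);   break;
            case kWhatQueueEOS:        onQueueEOS(msg);      break;
            case kWhatFlush:           onFlush(msg);         break;
            case kWhatPause:           onPause();            break;
            case kWhatResume:          onResume();           break;
            // kWhatDrainAudioQueue / kWhatDrainVideoQueue cases are shown below
            default:                   TRESPASS();           break;
        }
    }
    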

    Audio synchronization in the Renderer

    The starting point is audio PCM data arriving in Renderer::queueBuffer(), which posts a kWhatQueueBuffer message handled by Renderer::onQueueBuffer(). The full code was shown in the queueBuffer section above; here we only follow the audio path. The basic logic is simple: store the incoming buffer and schedule a drain of the audio queue.

    QueueEntry entry; 
    Mutex::Autolock autoLock(mLock);
    mAudioQueue.push_back(entry);
    postDrainAudioQueue_l();
    

    Next, postDrainAudioQueue_l: it is essentially a few boundary checks plus posting a kWhatDrainAudioQueue message.

    void NuPlayer::Renderer::postDrainAudioQueue_l(int64_t delayUs) {
        if (mAudioQueue.empty()) return;
    
        mDrainAudioQueuePending = true;
        sp<AMessage> msg = new AMessage(kWhatDrainAudioQueue, this);
        msg->setInt32("drainGeneration", mAudioDrainGeneration);
        msg->post(delayUs);
    }
    

    So let's see how that message is handled.

            case kWhatDrainAudioQueue:
            {
                mDrainAudioQueuePending = false;
                if (onDrainAudioQueue()) {
                    uint32_t numFramesPlayed;
                    CHECK_EQ(mAudioSink->getPosition(&numFramesPlayed), (status_t)OK);
                    uint32_t numFramesPendingPlayout = mNumFramesWritten - numFramesPlayed;
    
                    // how much playable data is still buffered in the audio sink
                    int64_t delayUs = mAudioSink->msecsPerFrame() * numFramesPendingPlayout * 1000ll;
                    if (mPlaybackRate > 1.0f) {
                        delayUs /= mPlaybackRate;
                    }
    
                    // use half of that delay before the next refresh (note the deliberate overlap)
                    delayUs /= 2;
                    // cap the delay based on the sink buffer size
                    const int64_t maxDrainDelayUs = std::max(
                            mAudioSink->getBufferDurationInUs(), (int64_t)500000 /* half second */);
                    ALOGD_IF(delayUs > maxDrainDelayUs, "postDrainAudioQueue long delay: %lld > %lld",
                            (long long)delayUs, (long long)maxDrainDelayUs);
                    Mutex::Autolock autoLock(mLock);
                    postDrainAudioQueue_l(delayUs); // the same message is re-posted here, forming a timer loop
                }
                break;
            }
    

    There is still no obvious sync mechanism here, but we now know what triggers audio playback: the message is posted both from queueBuffer and from its own handler, essentially acting as a timer. The last piece is onDrainAudioQueue(). The code:

    bool NuPlayer::Renderer::onDrainAudioQueue() {
        uint32_t numFramesPlayed;
        if (mAudioSink->getPosition(&numFramesPlayed) != OK) {      
            drainAudioQueueUntilLastEOS();
            ALOGW("onDrainAudioQueue(): audio sink is not ready");
            return false;
        }
    
        uint32_t prevFramesWritten = mNumFramesWritten;
        while (!mAudioQueue.empty()) {
            QueueEntry *entry = &*mAudioQueue.begin();
    
            mLastAudioBufferDrained = entry->mBufferOrdinal;
    
            if (entry->mBuffer == NULL) {
                // EOS handling code removed here
            }
    
            // ignore 0-sized buffer which could be EOS marker with no data
            if (entry->mOffset == 0 && entry->mBuffer->size() > 0) {
                int64_t mediaTimeUs;
                CHECK(entry->mBuffer->meta()->findInt64("timeUs", &mediaTimeUs));
                ALOGV("onDrainAudioQueue: rendering audio at media time %.2f secs",
                        mediaTimeUs / 1E6);
                onNewAudioMediaTime(mediaTimeUs);
            }
    
            size_t copy = entry->mBuffer->size() - entry->mOffset;
            ssize_t written = mAudioSink->write(entry->mBuffer->data() + entry->mOffset,
                                                copy, false /* blocking */);
            if (written < 0) {/* ...忽略异常处理部分代码 */}
    
            entry->mOffset += written;
            size_t remainder = entry->mBuffer->size() - entry->mOffset;
            if ((ssize_t)remainder < mAudioSink->frameSize()) {
                if (remainder > 0) { // discard the fractional bytes so writes stay frame-aligned
                    ALOGW("Corrupted audio buffer has fractional frames, discarding %zu bytes.", remainder);
                    entry->mOffset += remainder;
                    copy -= remainder;
                }
    
                entry->mNotifyConsumed->post();
                mAudioQueue.erase(mAudioQueue.begin());
                entry = NULL;
            }
    
            size_t copiedFrames = written / mAudioSink->frameSize();
            mNumFramesWritten += copiedFrames;
    
            {
                Mutex::Autolock autoLock(mLock);
                int64_t maxTimeMedia;
                maxTimeMedia = mAnchorTimeMediaUs +
                            (int64_t)(max((long long)mNumFramesWritten - mAnchorNumFramesWritten, 0LL)
                                    * 1000LL * mAudioSink->msecsPerFrame());
                mMediaClock->updateMaxTimeMedia(maxTimeMedia);
    
                notifyIfMediaRenderingStarted_l();
            }
    
            if (written != (ssize_t)copy) {
                // A short count was received from AudioSink::write()
                //
                // AudioSink write is called in non-blocking mode.
                // It may return with a short count when:
                //
                // 1) Size to be copied is not a multiple of the frame size. Fractional frames are
                //    discarded.
                // 2) The data to be copied exceeds the available buffer in AudioSink.
                // 3) An error occurs and data has been partially copied to the buffer in AudioSink.
                // 4) AudioSink is an AudioCache for data retrieval, and the AudioCache is exceeded.
    
                // (Case 1)
                // Must be a multiple of the frame size.  If it is not a multiple of a frame size, it
                // needs to fail, as we should not carry over fractional frames between calls.
                CHECK_EQ(copy % mAudioSink->frameSize(), 0);
    
                // (Case 2, 3, 4)
                // Return early to the caller.
                // Beware of calling immediately again as this may busy-loop if you are not careful.
                ALOGV("AudioSink write short frame count %zd < %zu", written, copy);
                break;
            }
        }
    
        // calculate whether we need to reschedule another write.
        bool reschedule = !mAudioQueue.empty()
                && (!mPaused
                    || prevFramesWritten != mNumFramesWritten); // permit pause to fill buffers
        //ALOGD("reschedule:%d  empty:%d  mPaused:%d  prevFramesWritten:%u  mNumFramesWritten:%u",
        //        reschedule, mAudioQueue.empty(), mPaused, prevFramesWritten, mNumFramesWritten);
        return reschedule;
    }
    

    The most important updates here are the call to onNewAudioMediaTime() and the mNumFramesWritten counter.
    The remaining piece of code (the second half of onQueueBuffer shown earlier) handles the A/V queue logic for the boundary cases:

        sp<ABuffer> firstAudioBuffer = (*mAudioQueue.begin()).mBuffer;
        sp<ABuffer> firstVideoBuffer = (*mVideoQueue.begin()).mBuffer;
    
        if (firstAudioBuffer == NULL || firstVideoBuffer == NULL) {
            // EOS was signalled on one of the queues; finish syncing them
            syncQueuesDone_l();
            return;
        }
    
        int64_t firstAudioTimeUs;
        int64_t firstVideoTimeUs;
        CHECK(firstAudioBuffer->meta()
                ->findInt64("timeUs", &firstAudioTimeUs));
        CHECK(firstVideoBuffer->meta()
                ->findInt64("timeUs", &firstVideoTimeUs));
    
        int64_t diff = firstVideoTimeUs - firstAudioTimeUs;
        if (diff > 100000ll) {
            // the first audio timestamp is more than 0.1s earlier than the first video timestamp; drop some audio
    
            (*mAudioQueue.begin()).mNotifyConsumed->post();
            mAudioQueue.erase(mAudioQueue.begin());
            return;
        }
    
        syncQueuesDone_l();
    

    Video synchronization in the Renderer

    Much like the audio side, the entry point is Renderer::queueBuffer(); the difference appears in Renderer::onQueueBuffer():

    // for video, the data is pushed onto the video queue and a drain is scheduled
    mVideoQueue.push_back(entry);
    postDrainVideoQueue();
    

    Following the same approach as before, the next stop is postDrainVideoQueue(), which is where most of the A/V sync logic lives.

    void NuPlayer::Renderer::postDrainVideoQueue() {
        if (mVideoQueue.empty()) {
            return;
        }
    
        QueueEntry &entry = *mVideoQueue.begin();
    
        sp<AMessage> msg = new AMessage(kWhatDrainVideoQueue, this); // this message actually drains the video queue and drives display
        msg->setInt32("drainGeneration", getDrainGeneration(false /* audio */));
    
        if (entry.mBuffer == NULL) {
            // EOS doesn't carry a timestamp.
            msg->post();
            mDrainVideoQueuePending = true;
            return;
        }
    
        bool needRepostDrainVideoQueue = false;
        int64_t delayUs;
        int64_t nowUs = ALooper::GetNowUs();
        int64_t realTimeUs;
    	int64_t mediaTimeUs;
        CHECK(entry.mBuffer->meta()->findInt64("timeUs", &mediaTimeUs));
        if (mFlags & FLAG_REAL_TIME) {        
            realTimeUs = mediaTimeUs;
        } else {
            {
                Mutex::Autolock autoLock(mLock);
                if (mAnchorTimeMediaUs < 0) { // no sync anchor set yet: display immediately
                    mMediaClock->updateAnchor(mediaTimeUs, nowUs, mediaTimeUs);
                    mAnchorTimeMediaUs = mediaTimeUs;
                    realTimeUs = nowUs;
                } else if (!mVideoSampleReceived) { // the first frame has not been shown yet: display immediately
                    // Always render the first video frame.
                    realTimeUs = nowUs;
                } else if (mAudioFirstAnchorTimeMediaUs < 0 // before audio starts playing, video drives the clock
                    || mMediaClock->getRealTimeFor(mediaTimeUs, &realTimeUs) == OK) {
                    realTimeUs = getRealTimeUs(mediaTimeUs, nowUs);
                } else if (mediaTimeUs - mAudioFirstAnchorTimeMediaUs >= 0) { // video is ahead of audio: wait
                    needRepostDrainVideoQueue = true; 
                    realTimeUs = nowUs;
                } else {
                    realTimeUs = nowUs;
                }
            }
    
            // Heuristics to handle situation when media time changed without a
            // discontinuity. If we have not drained an audio buffer that was
            // received after this buffer, repost in 10 msec. Otherwise repost
            // in 500 msec.
            delayUs = realTimeUs - nowUs;
            int64_t postDelayUs = -1;
            if (delayUs > 500000) {
                postDelayUs = 500000;
                if (mHasAudio && (mLastAudioBufferDrained - entry.mBufferOrdinal) <= 0) {
                    postDelayUs = 10000;
                }
            } else if (needRepostDrainVideoQueue) {
                // CHECK(mPlaybackRate > 0);
                // CHECK(mAudioFirstAnchorTimeMediaUs >= 0);
                // CHECK(mediaTimeUs - mAudioFirstAnchorTimeMediaUs >= 0);
                postDelayUs = mediaTimeUs - mAudioFirstAnchorTimeMediaUs;
                postDelayUs /= mPlaybackRate;
            }
    
            if (postDelayUs >= 0) {
                msg->setWhat(kWhatPostDrainVideoQueue);
                msg->post(postDelayUs);
                mVideoScheduler->restart();
                ALOGI("possible video time jump of %dms or uninitialized media clock, retrying in %dms",
                        (int)(delayUs / 1000), (int)(postDelayUs / 1000));
                mDrainVideoQueuePending = true;
                return;
            }
        }
    
        realTimeUs = mVideoScheduler->schedule(realTimeUs * 1000) / 1000;
        int64_t twoVsyncsUs = 2 * (mVideoScheduler->getVsyncPeriod() / 1000);
    
        delayUs = realTimeUs - nowUs;
    	// the main purpose of the code above is to compute this delay
        ALOGW_IF(delayUs > 500000, "unusually high delayUs: %" PRId64, delayUs);
        // post 2 display refreshes before rendering is due
        msg->post(delayUs > twoVsyncsUs ? delayUs - twoVsyncsUs : 0);
    
        mDrainVideoQueuePending = true;
    }
    

    The main thing here is posting a delayed kWhatDrainVideoQueue message. Here is how it gets handled:

            case kWhatDrainVideoQueue:
            {
                int32_t generation;
                CHECK(msg->findInt32("drainGeneration", &generation));
                if (generation != getDrainGeneration(false /* audio */)) {
                    break;
                }
    
                mDrainVideoQueuePending = false;
                onDrainVideoQueue();
                postDrainVideoQueue(); // note: re-posting here effectively turns this into a timer
                break;
            }
    

    That calls straight into onDrainVideoQueue(); let's see how it is implemented:

    void NuPlayer::Renderer::onDrainVideoQueue() {
        if (mVideoQueue.empty()) {
            return;
        }
    
        QueueEntry *entry = &*mVideoQueue.begin();
        if (entry->mBuffer == NULL) {
            // ... EOS handling omitted
        }
    
        int64_t nowUs = ALooper::GetNowUs();
        int64_t realTimeUs;
        int64_t mediaTimeUs = -1;
        if (mFlags & FLAG_REAL_TIME) {
            CHECK(entry->mBuffer->meta()->findInt64("timeUs", &realTimeUs));
        } else {
            CHECK(entry->mBuffer->meta()->findInt64("timeUs", &mediaTimeUs));
            realTimeUs = getRealTimeUs(mediaTimeUs, nowUs);
        }
    
        bool tooLate = false;
        if (!mPaused) {
            setVideoLateByUs(nowUs - realTimeUs);
            tooLate = (mVideoLateByUs > 40000);
    
            if (tooLate) {
                ALOGV("video late by %lld us (%.2f secs)",
                     (long long)mVideoLateByUs, mVideoLateByUs / 1E6);
            } else {
                int64_t mediaUs = 0;
                mMediaClock->getMediaTime(realTimeUs, &mediaUs);
                ALOGV("rendering video at media time %.2f secs",
                        (mFlags & FLAG_REAL_TIME ? realTimeUs :
                        mediaUs) / 1E6);
    
                if (!(mFlags & FLAG_REAL_TIME)
                        && mLastAudioMediaTimeUs != -1
                        && mediaTimeUs > mLastAudioMediaTimeUs) {
                    // If audio ends before video, video continues to drive media clock.
                    // Also smooth out videos >= 10fps.
                    mMediaClock->updateMaxTimeMedia(mediaTimeUs + 100000);
                }
            }
        } else {
            setVideoLateByUs(0);
            if (!mVideoSampleReceived && !mHasAudio) {
                // This will ensure that the first frame after a flush won't be used as anchor
                // when renderer is in paused state, because resume can happen any time after seek.
                Mutex::Autolock autoLock(mLock);
                clearAnchorTime_l();
            }
        }
    
        // Always render the first video frame while keeping stats on A/V sync.
        if (!mVideoSampleReceived) {
            realTimeUs = nowUs;
            tooLate = false;
        }
    
        entry->mNotifyConsumed->setInt64("timestampNs", realTimeUs * 1000ll); // everything computed above is consumed here
        entry->mNotifyConsumed->setInt32("render", !tooLate);
        entry->mNotifyConsumed->post(); // note: this actually posts a message back to the decoder, which triggers display
        mVideoQueue.erase(mVideoQueue.begin());
        entry = NULL;
    
        mVideoSampleReceived = true;
    
        if (!mPaused) { // notify the NuPlayer layer that rendering has started
            if (!mVideoRenderingStarted) {
                mVideoRenderingStarted = true;
                notifyVideoRenderingStart();
            }
            Mutex::Autolock autoLock(mLock);
            notifyIfMediaRenderingStarted_l();
        }
    }
    

    To sum up this part: reading the code shows that NuPlayer::Renderer synchronizes with video as the reference point: audio that falls behind is simply dropped, while video frames must be displayed. The synchronization work lives mainly in the video drain path (onDrainVideoQueue) and the audio drain path (onDrainAudioQueue). Both audio and video rendering are driven by timer-like repeated messages; video display ultimately relies on the actual decoder, and audio playback on the AudioSink interface.

    8 Summary

    This article is based on a reading of the NuPlayer::Renderer code, and it took quite a while to put together; I still wonder whether everything here is exactly right.
    Apologies for the long delay. There is a lot of code in the text, so feel free to skim whatever is not to your taste.
    The Renderer touches quite a few components, including NuPlayer, AudioSink, MediaClock, and VideoFrameScheduler. Some details still deserve deeper analysis, but the overall picture should now be clear.
    Only now do I realize the gap between understanding something and writing it up clearly. More practice needed.

  • Original article: https://www.cnblogs.com/tocy/p/4-nuplayer-renderer-source-code-analysis.html