  • Android M AudioPolicy Analysis

    1. AudioPolicyService Basics

    In the Android system, AudioPolicy is responsible for audio "policy" decisions. Together with AudioFlinger it forms the two services of the Android audio system: one manages audio "routing", the other manages audio "devices". In Android M, both services are loaded by MediaServer during system startup.

    AudioPolicyService performs the following main tasks in the Android audio system:

    ① Manage input and output devices, including their connection/disconnection state, and device selection and switching

    ② Manage the system's audio policy, e.g. the handling required when music plays during a call, or a call arrives while music is playing

    ③ Manage system volume

    ④ Pass certain audio parameters from the upper layers down to the lower layers

    2. AudioPolicyService Startup Flow

    AudioPolicyService runs in the mediaserver process and starts along with it.

    // frameworks/av/media/mediaserver/main_mediaserver.cpp
    int main(int argc __unused, char** argv)
    {
        ......
        ......
        AudioFlinger::instantiate();
        MediaPlayerService::instantiate();
        AudioPolicyService::instantiate();
        ......
    }

    AudioFlinger::instantiate() is not an internal member of AudioFlinger itself; it is provided by the BinderService class. Several services, including AudioFlinger and AudioPolicyService, inherit from this common Binder service base class, whose implementation is in BinderService.h.

    // frameworks/native/include/binder/BinderService.h
    static void instantiate() { publish(); }

    The function is a single line that calls its own publish(), so let's analyze publish():

    static status_t publish(bool allowIsolated = false)
    {
        sp<IServiceManager> sm(defaultServiceManager());
        return sm->addService(String16(SERVICE::getServiceName()), new SERVICE(), allowIsolated);
    }

    SERVICE is a template parameter defined in this file; since it is AudioPolicyService that called instantiate(), SERVICE here is AudioPolicyService:

    //  frameworks/native/include/binder/BinderService.h
    template<typename SERVICE>
    class BinderService

    As we can see, what publish() does is obtain a proxy to ServiceManager, create (new) an instance of the service whose instantiate() was called, and add it to ServiceManager.
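The template mechanism can be sketched outside of Android with a mock registry. All names here (MockServiceManager, BinderServiceLike, FakePolicyService) are hypothetical stand-ins, not AOSP code; only the service name "media.audio_policy" matches the real registration.

```cpp
#include <map>
#include <memory>
#include <string>

// Hypothetical stand-in for ServiceManager: a name -> service map.
struct MockServiceManager {
    std::map<std::string, std::shared_ptr<void>> services;
    void addService(const std::string& name, std::shared_ptr<void> svc) {
        services[name] = std::move(svc);
    }
};

MockServiceManager& defaultServiceManager() {
    static MockServiceManager sm;
    return sm;
}

// Mirrors BinderService<SERVICE>: publish() news up the concrete service
// and registers it under SERVICE::getServiceName().
template <typename SERVICE>
struct BinderServiceLike {
    static void instantiate() { publish(); }
    static void publish() {
        defaultServiceManager().addService(SERVICE::getServiceName(),
                                           std::make_shared<SERVICE>());
    }
};

// Plays the role of AudioPolicyService in this sketch.
struct FakePolicyService : BinderServiceLike<FakePolicyService> {
    static std::string getServiceName() { return "media.audio_policy"; }
};
```

Calling FakePolicyService::instantiate() registers the service under its name, which is exactly the effect publish() has on the real ServiceManager.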

      So the next step is to analyze AudioPolicyService's constructor:

    // frameworks/av/services/audiopolicy/service/AudioPolicyService.cpp
    AudioPolicyService::AudioPolicyService() : BnAudioPolicyService(), 
                                        mpAudioPolicy(NULL),
                                        mAudioPolicyManager(NULL),   
                                        mAudioPolicyClient(NULL), 
                                        mPhoneState(AUDIO_MODE_INVALID)
    {
    }

     Its constructor, however, does nothing beyond initializing a few member variables. So where does AudioPolicyService actually perform its initialization? Looking at the constructor again: AudioPolicyService inherits from BnAudioPolicyService, and tracing the hierarchy upward we find its ultimate ancestor is RefBase. By the semantics of Android's strong pointers, onFirstRef() is called on an object the first time it is strongly referenced. With that target in mind, let's see whether AudioPolicyService::onFirstRef() contains the information we care about.

    // frameworks/av/services/audiopolicy/service/AudioPolicyService.cpp
    void AudioPolicyService::onFirstRef()
    {
        ......
        // used to play tones
        mTonePlaybackThread = new AudioCommandThread(String8("ApmTone"), this);
        // used to execute audio commands
        mAudioCommandThread = new AudioCommandThread(String8("ApmAudio"), this);
        // used to execute output commands
        mOutputCommandThread = new AudioCommandThread(String8("ApmOutput"), this);
        #ifdef USE_LEGACY_AUDIO_POLICY
        // USE_LEGACY_AUDIO_POLICY is not defined in the current source tree,
        // so this branch is not taken and we fall through to #else
        ......
        #else
        ALOGI("AudioPolicyService CSTOR in new mode");
        mAudioPolicyClient = new AudioPolicyClient(this);
        mAudioPolicyManager = createAudioPolicyManager(mAudioPolicyClient);
        #endif
    }

     So on its first strong reference, AudioPolicyService creates three AudioCommandThreads and an AudioPolicyManager.
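The onFirstRef() trigger can be illustrated with a minimal stand-in for RefBase/sp<>. RefBaseLike, SpLike, and FakeService are simplified, hypothetical names; the real classes live in system/core/libutils and are considerably more involved (weak references, thread safety, etc.).

```cpp
// Simplified mimic of android::RefBase: onFirstRef() fires exactly once,
// when the strong reference count goes from 0 to 1.
class RefBaseLike {
public:
    virtual ~RefBaseLike() = default;
    void incStrong() { if (mStrong++ == 0) onFirstRef(); }
    void decStrong() { if (--mStrong == 0) delete this; }
protected:
    virtual void onFirstRef() {}
private:
    int mStrong = 0;
};

// Simplified mimic of android::sp<T> (copy semantics omitted).
template <typename T>
class SpLike {
public:
    explicit SpLike(T* p) : mPtr(p) { if (mPtr) mPtr->incStrong(); }
    ~SpLike() { if (mPtr) mPtr->decStrong(); }
    T* operator->() const { return mPtr; }
private:
    T* mPtr;
};

// Plays the role of AudioPolicyService: the real initialization happens
// in onFirstRef(), not in the constructor.
class FakeService : public RefBaseLike {
public:
    bool initialized = false;
protected:
    void onFirstRef() override { initialized = true; }
};
```

Wrapping a freshly constructed object in the smart pointer is what fires onFirstRef(), which is exactly why AudioPolicyService defers its setup there rather than doing it in the constructor.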

      Let's look at what happens after the three AudioCommandThreads are created. First, it directly news an AudioPolicyClient; the AudioPolicyClient class is defined in AudioPolicyService.h:

    //frameworks/av/services/audiopolicy/service/AudioPolicyService.h
    class AudioPolicyClient : public AudioPolicyClientInterface
    {
        ......
    }

    Its implementation is in frameworks/av/services/audiopolicy/service/AudioPolicyClientImpl.cpp. After creating the AudioPolicyClient, it creates an AudioPolicyManager object by calling createAudioPolicyManager(). Let's see how createAudioPolicyManager() creates the AudioPolicyManager:

    //frameworks/av/services/audiopolicy/manager/AudioPolicyFactory.cpp
    
    extern "C" AudioPolicyInterface* createAudioPolicyManager(
            AudioPolicyClientInterface *clientInterface)
    {
        return new AudioPolicyManager(clientInterface);
    }

    As we can see, it directly news an AudioPolicyManager, passing in the AudioPolicyClient created just before.

    //  frameworks/av/services/audiopolicy/managerdefault/AudioPolicyManager.cpp
    AudioPolicyManager::AudioPolicyManager(AudioPolicyClientInterface *clientInterface)
    {
        ......
        mEngine->setObserver(this);
        ......
    }

    mEngine->setObserver(this) binds mApmObserver to the AudioPolicyManager object, so calls to mApmObserver->xxx() in frameworks/av/services/audiopolicy/enginedefault/Engine.cpp invoke member functions of the AudioPolicyManager class. When the Engine class needs a member function of AudioPolicyManager it uses mApmObserver->xxx(), and when AudioPolicyManager needs a member function of Engine it uses mEngine->xxx().
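This two-way wiring is a plain observer pattern. A minimal sketch of the shape (ObserverLike, EngineLike, and ManagerLike are hypothetical names; the dummy device id is illustrative only):

```cpp
// Role of AudioPolicyManagerObserver: the surface Engine may call back into.
struct ObserverLike {
    virtual ~ObserverLike() = default;
    virtual int getForcedOutputDevice() const = 0;
};

// Role of Engine: holds a back-pointer installed via setObserver().
class EngineLike {
public:
    void setObserver(ObserverLike* o) { mApmObserver = o; }
    int pickDevice() const { return mApmObserver->getForcedOutputDevice(); }
private:
    ObserverLike* mApmObserver = nullptr;
};

// Role of AudioPolicyManager: owns the engine and registers itself
// as its observer, mirroring mEngine->setObserver(this).
class ManagerLike : public ObserverLike {
public:
    ManagerLike() { mEngine.setObserver(this); }
    int getForcedOutputDevice() const override { return 42; }  // dummy device id
    EngineLike mEngine;
};
```

The design keeps Engine free of a hard dependency on AudioPolicyManager: it only sees the observer interface, while the manager sees the engine through its own interface.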

    3. AudioPolicyManager Analysis

    Most of AudioPolicyService's functionality, such as audio policy management, input/output device management, device switching, and volume adjustment, is implemented by AudioPolicyManager. AudioPolicyManager is created by AudioPolicyService via

    mAudioPolicyManager = createAudioPolicyManager(mAudioPolicyClient);

    Let's look at how createAudioPolicyManager() creates the AudioPolicyManager:

    // frameworks/av/services/audiopolicy/manager/AudioPolicyFactory.cpp
    extern "C" AudioPolicyInterface* createAudioPolicyManager(
            AudioPolicyClientInterface *clientInterface)
    {
        return new AudioPolicyManager(clientInterface);
    }

    It simply news an AudioPolicyManager, so we can go straight to AudioPolicyManager's constructor:

    //frameworks/av/services/audiopolicy/managerdefault/AudioPolicyManager.cpp
    AudioPolicyManager::AudioPolicyManager(AudioPolicyClientInterface *clientInterface)
    {
        //1. load the audio_policy.conf configuration file (/vendor/etc/audio_policy.conf)
        ConfigParsingUtils::loadAudioPolicyConfig(....);
        //2. initialize the volume curves for each audio stream type
        mEngine->initializeVolumeCurves(mSpeakerDrcEnabled);
        //3. load the audio policy HAL libraries
        mHwModules[i]->mHandle = mpClientInterface->loadHwModule(mHwModules[i]->mName);
        //4. open the output devices
        mpClientInterface->openOutput(....);
        //5. save the output device descriptor object
        addOutput(output, outputDesc);
        //6. set the output device
        setOutputDevice(....);
        //7. update devices and outputs
        updateDevicesAndOutputs();
    }

    The rough flow is as follows.

     1. Load the audio_policy.conf configuration file (/vendor/etc/audio_policy.conf, with /system/etc/audio_policy.conf as fallback; the emulator instead uses defaultAudioPolicyConfig())

      During its construction, AudioPolicyManager loads the audio devices by parsing the audio_policy.conf configuration file. Android defines a hardware abstraction layer for each type of audio interface; reference HAL implementations, and the libraries they build into, are:

    hardware/libhardware/modules/audio/          -> audio.primary.default.so

    external/bluetooth/bluedroid/audio_a2dp_hw/  -> audio.a2dp.default.so

    hardware/libhardware/modules/usbaudio/       -> audio.usb.default.so

      Each audio interface defines its own inputs and outputs; one interface can have multiple inputs or outputs, and each input or output can support multiple devices. By reading audio_policy.conf, the system learns the parameters of the audio interfaces it supports. AudioPolicyManager loads /vendor/etc/audio_policy.conf first; if that file does not exist, it falls back to /system/etc/audio_policy.conf. Once all audio interfaces are loaded, AudioPolicyManager knows every supported audio interface's parameters and can make decisions about audio output.

      audio_policy.conf can define multiple audio interfaces at once; each interface contains several outputs and inputs, each output or input supports multiple I/O modes, and each mode in turn supports several devices.

    ConfigParsingUtils::loadAudioPolicyConfig(....) has two parts. The first parses the global tags. The second parses the audio_hw_modules tag, whose child tags each represent a hardware module; both the primary and r_submix hardware modules are parsed into mHwModules. Each hardware module has outputs and inputs child tags: every child of outputs is parsed into that mHwModules entry's mOutputProfiles, and every child of inputs into its mInputProfiles.
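An abridged illustration of the file format (the values below are generic examples, not any particular product's configuration):

```
global_configuration {
  attached_output_devices AUDIO_DEVICE_OUT_SPEAKER
  default_output_device AUDIO_DEVICE_OUT_SPEAKER
  attached_input_devices AUDIO_DEVICE_IN_BUILTIN_MIC
}

audio_hw_modules {
  primary {
    outputs {
      primary {
        sampling_rates 44100
        channel_masks AUDIO_CHANNEL_OUT_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_OUT_SPEAKER|AUDIO_DEVICE_OUT_WIRED_HEADSET
        flags AUDIO_OUTPUT_FLAG_PRIMARY
      }
    }
    inputs {
      primary {
        sampling_rates 8000|16000|44100
        channel_masks AUDIO_CHANNEL_IN_MONO|AUDIO_CHANNEL_IN_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_IN_BUILTIN_MIC
      }
    }
  }
}
```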

    2. Initialize the volume curves for each audio stream type

    mEngine->initializeVolumeCurves(mSpeakerDrcEnabled);

    3. Load the audio policy HAL libraries

    mHwModules[i]->mHandle = mpClientInterface->loadHwModule(mHwModules[i]->mName);
    
    //frameworks/av/services/audiopolicy/service/AudioPolicyClientImpl.cpp
    audio_module_handle_t AudioPolicyService::AudioPolicyClient::loadHwModule(const char *name)
    {
        sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
        if (af == 0) {
            ALOGW("%s: could not get AudioFlinger", __func__);
            return 0;
        }
        return af->loadHwModule(name);
    }

    This directly calls AudioFlinger::loadHwModule(). Let's step into AudioFlinger.cpp and see what it does:

    // When AudioPolicyManager is constructed, it reads the vendor's audio description
    // file audio_policy.conf and opens the audio interfaces accordingly;
    // this process eventually reaches AudioFlinger::loadHwModule
    audio_module_handle_t AudioFlinger::loadHwModule(const char *name)
    {
        if (name == NULL) {
            return 0;
        }
        if (!settingsAllowed()) {
            return 0;
        }
        Mutex::Autolock _l(mLock);
        return loadHwModule_l(name);
    }

    This ends up in loadHwModule_l(); let's continue into it:

    //name takes one of the following values:
    static const char * const audio_interfaces[] = {
        AUDIO_HARDWARE_MODULE_ID_PRIMARY, // primary audio device, must exist
        AUDIO_HARDWARE_MODULE_ID_A2DP,    // Bluetooth A2DP audio
        AUDIO_HARDWARE_MODULE_ID_USB,     // USB audio
    };
    
    audio_module_handle_t AudioFlinger::loadHwModule_l(const char *name)
    {
        //1. check whether this interface has already been loaded
        for (size_t i = 0; i < mAudioHwDevs.size(); i++) {
            if (strncmp(mAudioHwDevs.valueAt(i)->moduleName(), name, strlen(name)) == 0) {
                ALOGW("loadHwModule() module %s already loaded", name);
                return mAudioHwDevs.keyAt(i);
            }
        }

        //2. load the audio interface
        audio_hw_device_t *dev;
        int rc = load_audio_interface(name, &dev);
        ......
        //3. initialize it
        rc = dev->init_check(dev);
        //4. add it to the global map
        audio_module_handle_t handle = nextUniqueId();
        mAudioHwDevs.add(handle, new AudioHwDevice(handle, name, dev, flags));
        return handle;
    }

    The second step loads the specified audio interface (primary, a2dp, or usb) via load_audio_interface(), which loads the library the device needs, then opens the device and creates an audio_hw_device_t instance. The library name for an audio interface follows a fixed format: for a2dp, for example, the module may be named audio.a2dp.so or audio.a2dp.default.so. There are two main search paths, /system/lib/hw and /vendor/lib/hw. Let's see how this is implemented:

    //frameworks/av/services/audioflinger/AudioFlinger.cpp
    static int load_audio_interface(const char *if_name, audio_hw_device_t **dev)
    {
        ......
        //1. look up the audio module (AUDIO_HARDWARE_MODULE_ID is defined as "audio")
        rc = hw_get_module_by_class(AUDIO_HARDWARE_MODULE_ID, if_name, &mod);
        //2. open the audio device
        rc = audio_hw_device_open(mod, dev);
        ......
        return rc;
    }

    //hardware/libhardware/hardware.c
    int hw_get_module_by_class(const char *class_id, const char *inst,
                               const struct hw_module_t **module)
    {
        int i = 0;
        char prop[PATH_MAX] = {0};
        char path[PATH_MAX] = {0};
        char name[PATH_MAX] = {0};
        char prop_name[PATH_MAX] = {0};
    
        // build the module name string
        if (inst)
            snprintf(name, PATH_MAX, "%s.%s", class_id, inst);
        else
            strlcpy(name, class_id, PATH_MAX);
    
        /*
         * Here we rely on the fact that calling dlopen multiple times on
         * the same .so will simply increment a refcount (and not load
         * a new copy of the library).
         * We also assume that dlopen() is thread-safe.
         */
        // look it up via the system properties
        /* First try a property specific to the class and possibly instance */
        snprintf(prop_name, sizeof(prop_name), "ro.hardware.%s", name);
        if (property_get(prop_name, prop, NULL) > 0) {
            if (hw_module_exists(path, sizeof(path), name, prop) == 0) {
                goto found;
            }
        }
    
        /* Loop through the configuration variants looking for a module */
        for (i=0 ; i<HAL_VARIANT_KEYS_COUNT; i++) {
            if (property_get(variant_keys[i], prop, NULL) == 0) {
                continue;
            }
            if (hw_module_exists(path, sizeof(path), name, prop) == 0) {
                goto found;
            }
        }
    
        /* Nothing found, try the default */
        if (hw_module_exists(path, sizeof(path), name, "default") == 0) {
            goto found;
        }
    
        return -ENOENT;
    
    found:
        /* load the module, if this fails, we're doomed, and we should not try
         * to load a different variant. */
        // load it
        return load(class_id, path, module);
    }

    The string path is assembled into /system/lib/hw/audio.xxx.so and the load() function is called; load() opens /system/lib/hw/audio.xxx.so via handle = dlopen(path, RTLD_NOW) and returns a handle.

    //hardware/libhardware/hardware.c
    static int load(const char *id,
            const char *path,
            const struct hw_module_t **pHmi)
    {
        ......
        // open the shared library with dlopen()
        handle = dlopen(path, RTLD_NOW);
        ......
        // resolve the module's hmi structure with dlsym()
        const char *sym = HAL_MODULE_INFO_SYM_AS_STR;
        hmi = (struct hw_module_t *)dlsym(handle, sym);
        ......
    }
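The dlopen()/dlsym() mechanics here are standard POSIX, so the same pattern can be exercised against any shared library. The sketch below (resolveSymbol is a hypothetical helper) resolves a symbol the way load() resolves the HAL module info structure; on older glibc you may need to link with -ldl:

```cpp
#include <dlfcn.h>

// Open a shared library and resolve one exported symbol, mirroring how
// load() resolves HAL_MODULE_INFO_SYM_AS_STR from an audio HAL .so.
// The handle is intentionally kept open, just as hardware.c's load() does.
void* resolveSymbol(const char* path, const char* sym) {
    void* handle = dlopen(path, RTLD_NOW);  // maps the .so into the process
    if (handle == nullptr) return nullptr;
    return dlsym(handle, sym);              // look up the exported symbol
}
```

As a stand-alone demonstration, resolving "cos" from libm works the same way resolving the HMI symbol from audio.primary.default.so would on a device.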

    4. Open the output devices

    //frameworks/av/services/audiopolicy/service/AudioPolicyClientImpl.cpp
    status_t AudioPolicyService::AudioPolicyClient::openOutput(audio_module_handle_t module,
                                                               audio_io_handle_t *output,
                                                               audio_config_t *config,
                                                               audio_devices_t *devices,
                                                               const String8& address,
                                                               uint32_t *latencyMs,
                                                               audio_output_flags_t flags)
    {
        sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
        if (af == 0) {
            ALOGW("%s: could not get AudioFlinger", __func__);
            return PERMISSION_DENIED;
        }
        return af->openOutput(module, output, config, devices, address, latencyMs, flags);
    }

    This in turn calls AudioFlinger::openOutput():

    status_t AudioFlinger::openOutput(audio_module_handle_t module,
                                      audio_io_handle_t *output,
                                      audio_config_t *config,
                                      audio_devices_t *devices,
                                      const String8& address,
                                      uint32_t *latencyMs,
                                      audio_output_flags_t flags)
    {
        ALOGI("openOutput(), module %d Device %x, SamplingRate %d, Format %#08x, Channels %x, flags %x",
                  module,
                  (devices != NULL) ? *devices : 0,
                  config->sample_rate,
                  config->format,
                  config->channel_mask,
                  flags);
    
        if (*devices == AUDIO_DEVICE_NONE) {
            return BAD_VALUE;
        }
    
        Mutex::Autolock _l(mLock);
        sp<PlaybackThread> thread = openOutput_l(module, output, config, *devices, address, flags);
        if (thread != 0) {
            *latencyMs = thread->latency();
    
            // notify client processes of the new output creation
            thread->ioConfigChanged(AUDIO_OUTPUT_OPENED);
    
            // the first primary output opened designates the primary hw device
            if ((mPrimaryHardwareDev == NULL) && (flags & AUDIO_OUTPUT_FLAG_PRIMARY)) {
                ALOGI("Using module %d has the primary audio interface", module);
                mPrimaryHardwareDev = thread->getOutput()->audioHwDev;
    
                AutoMutex lock(mHardwareLock);
                mHardwareStatus = AUDIO_HW_SET_MODE;
                mPrimaryHardwareDev->hwDevice()->set_mode(mPrimaryHardwareDev->hwDevice(), mMode);
                mHardwareStatus = AUDIO_HW_IDLE;
            }
            return NO_ERROR;
        }
    
        return NO_INIT;
    }

    Now let's look at openOutput_l():

    //The input parameter module is the audio interface id obtained from loadHwModule earlier;
    //it is used to look up the corresponding AudioHwDevice object in mAudioHwDevs.
    //This method also adds the opened output to the mPlaybackThreads list.
    sp<AudioFlinger::PlaybackThread> AudioFlinger::openOutput_l(audio_module_handle_t module,
                                                                audio_io_handle_t *output,
                                                                audio_config_t *config,
                                                                audio_devices_t devices,
                                                                const String8& address,
                                                                audio_output_flags_t flags)
    {
        //1. find the matching audio interface
        AudioHwDevice *outHwDev = findSuitableHwDev_l(module, devices);
        if (outHwDev == NULL) {
            return 0;
        }

        audio_hw_device_t *hwDevHal = outHwDev->hwDevice();
        if (*output == AUDIO_IO_HANDLE_NONE) {
            *output = nextUniqueId();
        }

        //2. open an output stream on the device, creating the Audio HAL output object
        AudioStreamOut *outputStream = NULL;
        status_t status = outHwDev->openOutputStream(
                &outputStream,
                *output,
                devices,
                flags,
                config,
                address.string());

        //3. create the playback thread
        if (status == NO_ERROR) {
            PlaybackThread *thread;
            if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
                thread = new OffloadThread(this, outputStream, *output, devices, mSystemReady);
                ALOGV("openOutput_l() created offload output: ID %d thread %p", *output, thread);
            } else if ((flags & AUDIO_OUTPUT_FLAG_DIRECT)
                    || !isValidPcmSinkFormat(config->format)
                    || !isValidPcmSinkChannelMask(config->channel_mask)) {
                thread = new DirectOutputThread(this, outputStream, *output, devices, mSystemReady);
                ALOGV("openOutput_l() created direct output: ID %d thread %p", *output, thread);
            } else {
                // the usual case: create a mixer thread; output, which represents
                // the AudioStreamOut object, is passed along as well
                thread = new MixerThread(this, outputStream, *output, devices, mSystemReady);
                ALOGV("openOutput_l() created mixer output: ID %d thread %p", *output, thread);
            }
            mPlaybackThreads.add(*output, thread); // register the playback thread
            return thread;
        }
        ......
    }
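The three-way choice among OffloadThread, DirectOutputThread, and MixerThread reduces to the flag/format test above. As a standalone sketch (ThreadKind and pickThreadKind are hypothetical names, and the flag bit values are illustrative, not the real audio_output_flags_t constants):

```cpp
// Illustrative stand-ins for the relevant AUDIO_OUTPUT_FLAG_* bits.
enum : unsigned {
    FLAG_DIRECT           = 0x1,
    FLAG_COMPRESS_OFFLOAD = 0x2,
};

enum class ThreadKind { Offload, Direct, Mixer };

// Mirrors the branch order in openOutput_l(): offload first, then direct
// (also forced by a non-mixable PCM sink format or channel mask),
// else the common mixer path.
ThreadKind pickThreadKind(unsigned flags, bool validPcmSink) {
    if (flags & FLAG_COMPRESS_OFFLOAD) return ThreadKind::Offload;
    if ((flags & FLAG_DIRECT) || !validPcmSink) return ThreadKind::Direct;
    return ThreadKind::Mixer;
}
```

Note the ordering matters: an offload request wins even if the direct flag is also set, and a format the mixer cannot handle forces the direct path even without any flags.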
  • Original article: https://www.cnblogs.com/CoderTian/p/5705742.html