Oboe: From Usage to Source Code


Introduction

Oboe is an open-source, low-latency audio library. This article studies Oboe from basic usage down to its source code.

Using Oboe

Oboe's official site provides fairly detailed guidance, and the library is straightforward to use. Take a sine-wave player as an example:

#include <oboe/Oboe.h>
#include <math.h>
using namespace oboe;

class OboeSinePlayer: public oboe::AudioStreamDataCallback {
public:

    virtual ~OboeSinePlayer() = default;

    // Call this from Activity onResume()
    int32_t startAudio() {
        std::lock_guard<std::mutex> lock(mLock);
        oboe::AudioStreamBuilder builder;
        // The builder set methods can be chained for convenience.
        Result result = builder.setSharingMode(oboe::SharingMode::Exclusive)
                ->setPerformanceMode(oboe::PerformanceMode::LowLatency)
                ->setChannelCount(kChannelCount)
                ->setSampleRate(kSampleRate)
                ->setSampleRateConversionQuality(oboe::SampleRateConversionQuality::Medium)
                ->setFormat(oboe::AudioFormat::Float)
                ->setDataCallback(this)
                ->openStream(mStream);
        if (result != Result::OK) return (int32_t) result;

        // Typically, start the stream after querying some stream information, as well as some input from the user
        result = mStream->requestStart();
        return (int32_t) result;
    }
   
    // Call this from Activity onPause()
    void stopAudio() {
        // Stop, close and delete in case not already closed.
        std::lock_guard<std::mutex> lock(mLock);
        if (mStream) {
            mStream->stop();
            mStream->close();
            mStream.reset();
        }
    }

    oboe::DataCallbackResult onAudioReady(oboe::AudioStream *oboeStream, void *audioData, int32_t numFrames) override {
        float *floatData = (float *) audioData;
        for (int i = 0; i < numFrames; ++i) {
            float sampleValue = kAmplitude * sinf(mPhase);
            for (int j = 0; j < kChannelCount; j++) {
                floatData[i * kChannelCount + j] = sampleValue;
            }
            mPhase += mPhaseIncrement;
            if (mPhase >= kTwoPi) mPhase -= kTwoPi;
        }
        return oboe::DataCallbackResult::Continue;
    }

private:
    std::mutex         mLock;
    std::shared_ptr<oboe::AudioStream> mStream;

    // Stream params
    static int constexpr kChannelCount = 2;
    static int constexpr kSampleRate = 48000;
    // Wave params, these could be instance variables in order to modify at runtime
    static float constexpr kAmplitude = 0.5f;
    static float constexpr kFrequency = 440;
    static float constexpr kPI = M_PI;
    static float constexpr kTwoPi = kPI * 2;
    static double constexpr mPhaseIncrement = kFrequency * kTwoPi / (double) kSampleRate;
    // Keeps track of where the wave is
    float mPhase = 0.0;
};
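
For completeness, here is a minimal sketch of how such a player might be driven from an Activity over JNI. The package, class, and function names below are placeholders for illustration, not part of the Oboe API:

#include <jni.h>
#include <memory>

// Hypothetical JNI bridge for OboeSinePlayer; the Java-side names are placeholders.
static std::unique_ptr<OboeSinePlayer> sPlayer;

extern "C" JNIEXPORT void JNICALL
Java_com_example_oboedemo_MainActivity_nativeStartAudio(JNIEnv *, jobject) {
    if (!sPlayer) sPlayer = std::make_unique<OboeSinePlayer>();
    sPlayer->startAudio();   // typically called from Activity onResume()
}

extern "C" JNIEXPORT void JNICALL
Java_com_example_oboedemo_MainActivity_nativeStopAudio(JNIEnv *, jobject) {
    if (sPlayer) sPlayer->stopAudio();   // typically called from Activity onPause()
}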

Source Code Walkthrough

The entry point into Oboe is AudioStreamBuilder::openStream:

Result AudioStreamBuilder::openStream(AudioStream **streamPP) {
    auto result = isValidConfig();
    if (result != Result::OK) {
        LOGW("%s() invalid config %d", __func__, result);
        return result;
    }

    LOGI("%s() %s -------- %s --------",
         __func__, getDirection() == Direction::Input ? "INPUT" : "OUTPUT", getVersionText());

    if (streamPP == nullptr) {
        return Result::ErrorNull;
    }
    *streamPP = nullptr;

    AudioStream *streamP = nullptr;

    // Maybe make a FilterInputStream.
    AudioStreamBuilder childBuilder(*this);
    // Check need for conversion and modify childBuilder for optimal stream.
    bool conversionNeeded = QuirksManager::getInstance().isConversionNeeded(*this, childBuilder); // If adjustment is allowed, this may internally tweak the channel count, sample rate, etc.
    // Do we need to make a child stream and convert.
    // This is a recursive call: when adjustment is needed, a copy of the builder opens the stream. Since childBuilder has already been adjusted, this cannot recurse infinitely.
    if (conversionNeeded) {
        AudioStream *tempStream;
        result = childBuilder.openStream(&tempStream);
        if (result != Result::OK) {
            return result;
        }

        if (isCompatible(*tempStream)) {
            // The child stream would work as the requested stream so we can just use it directly.
            *streamPP = tempStream;
            return result;
        } else {
            AudioStreamBuilder parentBuilder = *this;
            // Build a stream that is as close as possible to the childStream.
            if (getFormat() == oboe::AudioFormat::Unspecified) {
                parentBuilder.setFormat(tempStream->getFormat());
            }
            if (getChannelCount() == oboe::Unspecified) {
                parentBuilder.setChannelCount(tempStream->getChannelCount());
            }
            if (getSampleRate() == oboe::Unspecified) {
                parentBuilder.setSampleRate(tempStream->getSampleRate());
            }
            if (getFramesPerDataCallback() == oboe::Unspecified) {
                parentBuilder.setFramesPerCallback(tempStream->getFramesPerDataCallback());
            }

            // Use childStream in a FilterAudioStream.
            LOGI("%s() create a FilterAudioStream for data conversion.", __func__);
            FilterAudioStream *filterStream = new FilterAudioStream(parentBuilder, tempStream);
            result = filterStream->configureFlowGraph();
            if (result !=  Result::OK) {
                filterStream->close();
                delete filterStream;
                // Just open streamP the old way.
            } else {
                streamP = static_cast<AudioStream *>(filterStream);
            }
        }
    }

    if (streamP == nullptr) {
        streamP = build();
        if (streamP == nullptr) {
            return Result::ErrorNull;
        }
    }
    // This logic is Oboe's device-compatibility workaround mechanism.
    // If MMAP has a problem in this case then disable it temporarily.
    bool wasMMapOriginallyEnabled = AAudioExtensions::getInstance().isMMapEnabled();
    bool wasMMapTemporarilyDisabled = false;
    if (wasMMapOriginallyEnabled) {
        bool isMMapSafe = QuirksManager::getInstance().isMMapSafe(childBuilder);
        if (!isMMapSafe) {
            AAudioExtensions::getInstance().setMMapEnabled(false);
            wasMMapTemporarilyDisabled = true;
        }
    }
    result = streamP->open();
    if (wasMMapTemporarilyDisabled) {
        AAudioExtensions::getInstance().setMMapEnabled(wasMMapOriginallyEnabled); // restore original
    }
    if (result == Result::OK) {

        int32_t  optimalBufferSize = -1;
        // Use a reasonable default buffer size.
        if (streamP->getDirection() == Direction::Input) {
            // For input, small size does not improve latency because the stream is usually
            // run close to empty. And a low size can result in XRuns so always use the maximum.
            optimalBufferSize = streamP->getBufferCapacityInFrames();
        } else if (streamP->getPerformanceMode() == PerformanceMode::LowLatency
                && streamP->getDirection() == Direction::Output)  { // Output check is redundant.
            optimalBufferSize = streamP->getFramesPerBurst() *
                                    kBufferSizeInBurstsForLowLatencyStreams;
        }
        if (optimalBufferSize >= 0) {
            auto setBufferResult = streamP->setBufferSizeInFrames(optimalBufferSize);
            if (!setBufferResult) {
                LOGW("Failed to setBufferSizeInFrames(%d). Error was %s",
                     optimalBufferSize,
                     convertToText(setBufferResult.error()));
            }
        }

        *streamPP = streamP;
    } else {
        delete streamP;
    }
    return result;
}

This function accomplishes the following:

  1. Decides whether the requested parameters need adjusting; if so, it creates a copy of itself and calls openStream on that copy.
  2. After the stream is created, runs compatibility checks. QuirksManager defines workarounds for specific chipsets, and applications can make good use of this capability; it is the secret behind Oboe's strong device compatibility.
  3. Calls open to actually open the stream.
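
A practical consequence of step 1 is that an application can leave parameters Unspecified and simply query the negotiated values after opening. A minimal sketch (error handling trimmed; the log tag is arbitrary):

#include <android/log.h>
#include <oboe/Oboe.h>

bool openWithDefaults(std::shared_ptr<oboe::AudioStream> &stream) {
    oboe::AudioStreamBuilder builder;
    // Leave sample rate, format and channel count Unspecified;
    // openStream() negotiates them and may insert a FilterAudioStream.
    builder.setDirection(oboe::Direction::Output)
           ->setPerformanceMode(oboe::PerformanceMode::LowLatency);
    if (builder.openStream(stream) != oboe::Result::OK) return false;
    __android_log_print(ANDROID_LOG_INFO, "OboeDemo",
                        "granted: rate=%d channels=%d framesPerBurst=%d",
                        stream->getSampleRate(), stream->getChannelCount(),
                        stream->getFramesPerBurst());
    return true;
}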

Android offers two native audio APIs, OpenSL ES and AAudio. Let's first look at the class structure:

[class diagram]

The class diagram shows the following:

  1. For low latency and better compatibility, Oboe re-decides the hardware parameters when necessary; if they differ from the requested parameters, it resamples internally.
  2. QuirksManager contains compatibility handling for Samsung and Qualcomm chipsets.
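
For reference, the backend can also be pinned explicitly through the builder. A small sketch (OpenSL ES is forced here purely for demonstration; the default, Unspecified, lets Oboe prefer AAudio on API 27+):

#include <oboe/Oboe.h>

void openOnOpenSLES() {
    oboe::AudioStreamBuilder builder;
    // Force the OpenSL ES backend instead of letting Oboe choose.
    builder.setAudioApi(oboe::AudioApi::OpenSLES);

    std::shared_ptr<oboe::AudioStream> stream;
    if (builder.openStream(stream) == oboe::Result::OK) {
        // getAudioApi() reports which backend actually serves the stream.
        oboe::AudioApi api = stream->getAudioApi();
        (void) api;
        stream->close();
    }
}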

Next, let's look at the OpenSL ES open, taking capture as the example:

Result AudioInputStreamOpenSLES::open() {
    ...
    // Initialize the OpenSL ES engine
    Result oboeResult = AudioStreamOpenSLES::open();
    if (Result::OK != oboeResult) return oboeResult;

    SLuint32 bitsPerSample = static_cast<SLuint32>(getBytesPerSample() * kBitsPerByte);

    // configure audio sink
    mBufferQueueLength = calculateOptimalBufferQueueLength(); // Compute the optimal buffer queue length: at least twice the callback size
    SLDataLocator_AndroidSimpleBufferQueue loc_bufq = {
            SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE,    // locatorType
            static_cast<SLuint32>(mBufferQueueLength)};   // numBuffers

    // Define the audio data format.
    SLDataFormat_PCM format_pcm = {
            SL_DATAFORMAT_PCM,       // formatType
            static_cast<SLuint32>(mChannelCount),           // numChannels
            static_cast<SLuint32>(mSampleRate * kMillisPerSecond), // milliSamplesPerSec
            bitsPerSample,                      // bitsPerSample
            bitsPerSample,                      // containerSize;
            channelCountToChannelMask(mChannelCount), // channelMask
            getDefaultByteOrder(),
    };

    SLDataSink audioSink = {&loc_bufq, &format_pcm};

    /**
     * API 23 (Marshmallow) introduced support for floating-point data representation and an
     * extended data format type: SLAndroidDataFormat_PCM_EX for recording streams (playback streams
     * got this in API 21). If running on API 23+ use this newer format type, creating it from our
     * original format.
     */
    SLAndroidDataFormat_PCM_EX format_pcm_ex;
    if (getSdkVersion() >= __ANDROID_API_M__) {
        SLuint32 representation = OpenSLES_ConvertFormatToRepresentation(getFormat());
        // Fill in the format structure.
        format_pcm_ex = OpenSLES_createExtendedFormat(format_pcm, representation);
        // Use in place of the previous format.
        audioSink.pFormat = &format_pcm_ex;
    }


    // configure audio source
    SLDataLocator_IODevice loc_dev = {SL_DATALOCATOR_IODEVICE,
                                      SL_IODEVICE_AUDIOINPUT,
                                      SL_DEFAULTDEVICEID_AUDIOINPUT,
                                      NULL};
    SLDataSource audioSrc = {&amp;loc_dev, NULL};

    SLresult result = EngineOpenSLES::getInstance().createAudioRecorder(&amp;mObjectInterface,
                                                                        &amp;audioSrc,
                                                                        &amp;audioSink);

    if (SL_RESULT_SUCCESS != result) {
        LOGE("createAudioRecorder() result:%s", getSLErrStr(result));
        goto error;
    }

    // Configure the stream.
    result = (*mObjectInterface)->GetInterface(mObjectInterface,
                                            SL_IID_ANDROIDCONFIGURATION,
                                            &amp;configItf);

    if (SL_RESULT_SUCCESS != result) {
        LOGW("%s() GetInterface(SL_IID_ANDROIDCONFIGURATION) failed with %s",
             __func__, getSLErrStr(result));
    } else {
        if (getInputPreset() == InputPreset::VoicePerformance) {
            LOGD("OpenSL ES does not support InputPreset::VoicePerformance. Use VoiceRecognition.");
            mInputPreset = InputPreset::VoiceRecognition;
        }
        SLuint32 presetValue = OpenSLES_convertInputPreset(getInputPreset());
        result = (*configItf)->SetConfiguration(configItf,
                                         SL_ANDROID_KEY_RECORDING_PRESET,
                                         &presetValue,
                                         sizeof(SLuint32));
        if (SL_RESULT_SUCCESS != result
                && presetValue != SL_ANDROID_RECORDING_PRESET_VOICE_RECOGNITION) {
            presetValue = SL_ANDROID_RECORDING_PRESET_VOICE_RECOGNITION;
            LOGD("Setting InputPreset %d failed. Using VoiceRecognition instead.", getInputPreset());
            mInputPreset = InputPreset::VoiceRecognition;
            (*configItf)->SetConfiguration(configItf,
                                             SL_ANDROID_KEY_RECORDING_PRESET,
                                             &presetValue,
                                             sizeof(SLuint32));
        }

        result = configurePerformanceMode(configItf);
        if (SL_RESULT_SUCCESS != result) {
            goto error;
        }
    }

    result = (*mObjectInterface)->Realize(mObjectInterface, SL_BOOLEAN_FALSE);
    if (SL_RESULT_SUCCESS != result) {
        LOGE("Realize recorder object result:%s", getSLErrStr(result));
        goto error;
    }

    result = (*mObjectInterface)->GetInterface(mObjectInterface, SL_IID_RECORD, &mRecordInterface);
    if (SL_RESULT_SUCCESS != result) {
        LOGE("GetInterface RECORD result:%s", getSLErrStr(result));
        goto error;
    }

    result = finishCommonOpen(configItf);
    if (SL_RESULT_SUCCESS != result) {
        goto error;
    }

    setState(StreamState::Open);
    return Result::OK;

error:
    close(); // Clean up various OpenSL objects and prevent resource leaks.
    return Result::ErrorInternal; // TODO convert error from SLES to OBOE
}

This mainly uses the OpenSL ES interfaces to create the recorder. These calls land in libOpenSLES; for example, EngineOpenSLES::getInstance().createAudioRecorder corresponds to this implementation:

static SLresult IEngine_CreateAudioRecorder(SLEngineItf self, SLObjectItf *pRecorder,
    SLDataSource *pAudioSrc, SLDataSink *pAudioSnk, SLuint32 numInterfaces,
    const SLInterfaceID *pInterfaceIds, const SLboolean *pInterfaceRequired)
{
    SL_ENTER_INTERFACE

#if (USE_PROFILES & USE_PROFILES_OPTIONAL) || defined(ANDROID)
    if (NULL == pRecorder) {
        result = SL_RESULT_PARAMETER_INVALID;
    } else {
        *pRecorder = NULL;
        unsigned exposedMask;
        const ClassTable *pCAudioRecorder_class = objectIDtoClass(SL_OBJECTID_AUDIORECORDER);
        if (NULL == pCAudioRecorder_class) {
            result = SL_RESULT_FEATURE_UNSUPPORTED;
        } else {
            result = checkInterfaces(pCAudioRecorder_class, numInterfaces,
                    pInterfaceIds, pInterfaceRequired, &amp;exposedMask, NULL);
        }

        if (SL_RESULT_SUCCESS == result) {

            // Construct our new AudioRecorder instance
            CAudioRecorder *thiz = (CAudioRecorder *) construct(pCAudioRecorder_class, exposedMask,
                    self);
            if (NULL == thiz) {
                result = SL_RESULT_MEMORY_FAILURE;
            } else {

                do {

                    // Initialize fields not associated with any interface

                    // Default data source in case of failure in checkDataSource
                    thiz->mDataSource.mLocator.mLocatorType = SL_DATALOCATOR_NULL;
                    thiz->mDataSource.mFormat.mFormatType = SL_DATAFORMAT_NULL;

                    // Default data sink in case of failure in checkDataSink
                    thiz->mDataSink.mLocator.mLocatorType = SL_DATALOCATOR_NULL;
                    thiz->mDataSink.mFormat.mFormatType = SL_DATAFORMAT_NULL;

                    // These fields are set to real values by
                    // android_audioRecorder_checkSourceSink.  Note that the data sink is
                    // always PCM buffer queue, so we know the channel count and sample rate early.
                    thiz->mNumChannels = UNKNOWN_NUMCHANNELS;
                    thiz->mSampleRateMilliHz = UNKNOWN_SAMPLERATE;
#ifdef ANDROID
                    // placement new (explicit constructor)
                    // FIXME unnecessary once those fields are encapsulated in one class, rather
                    //   than a structure
                    // Create the capture object
                    (void) new (&thiz->mAudioRecord) android::sp<android::AudioRecord>();
                    (void) new (&thiz->mCallbackProtector)
                            android::sp<android::CallbackProtector>();
                    thiz->mRecordSource = AUDIO_SOURCE_DEFAULT;
#endif

                    // Check the source and sink parameters, and make a local copy of all parameters
                    result = checkDataSource("pAudioSrc", pAudioSrc, &thiz->mDataSource,
                            DATALOCATOR_MASK_IODEVICE, DATAFORMAT_MASK_NULL);
                    if (SL_RESULT_SUCCESS != result) {
                        break;
                    }
                    result = checkDataSink("pAudioSnk", pAudioSnk, &thiz->mDataSink,
                            DATALOCATOR_MASK_URI
#ifdef ANDROID
                            | DATALOCATOR_MASK_ANDROIDSIMPLEBUFFERQUEUE
#endif
                            , DATAFORMAT_MASK_MIME | DATAFORMAT_MASK_PCM | DATAFORMAT_MASK_PCM_EX
                    );
                    if (SL_RESULT_SUCCESS != result) {
                        break;
                    }

                    // It would be unsafe to ever refer to the application pointers again
                    pAudioSrc = NULL;
                    pAudioSnk = NULL;

                    // check the audio source and sink parameters against platform support
#ifdef ANDROID
                    result = android_audioRecorder_checkSourceSink(thiz);
                    if (SL_RESULT_SUCCESS != result) {
                        SL_LOGE("Cannot create AudioRecorder: invalid source or sink");
                        break;
                    }
#endif

#ifdef ANDROID
                    // Allocate memory for buffer queue
                    SLuint32 locatorType = thiz->mDataSink.mLocator.mLocatorType;
                    if (locatorType == SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE) {
                        thiz->mBufferQueue.mNumBuffers =
                            thiz->mDataSink.mLocator.mBufferQueue.numBuffers;
                        // inline allocation of circular Buffer Queue mArray, up to a typical max
                        if (BUFFER_HEADER_TYPICAL >= thiz->mBufferQueue.mNumBuffers) {
                            thiz->mBufferQueue.mArray = thiz->mBufferQueue.mTypical;
                        } else {
                            // Avoid possible integer overflow during multiplication; this arbitrary
                            // maximum is big enough to not interfere with real applications, but
                            // small enough to not overflow.
                            if (thiz->mBufferQueue.mNumBuffers >= 256) {
                                result = SL_RESULT_MEMORY_FAILURE;
                                break;
                            }
                            thiz->mBufferQueue.mArray = (BufferHeader *) malloc((thiz->mBufferQueue.
                                    mNumBuffers + 1) * sizeof(BufferHeader));
                            if (NULL == thiz->mBufferQueue.mArray) {
                                result = SL_RESULT_MEMORY_FAILURE;
                                break;
                            }
                        }
                        thiz->mBufferQueue.mFront = thiz->mBufferQueue.mArray;
                        thiz->mBufferQueue.mRear = thiz->mBufferQueue.mArray;
                    }
#endif

                    // platform-specific initialization
#ifdef ANDROID
                    android_audioRecorder_create(thiz);
#endif

                } while (0);

                if (SL_RESULT_SUCCESS != result) {
                    IObject_Destroy(&amp;thiz->mObject.mItf);
                } else {
                    IObject_Publish(&amp;thiz->mObject);
                    // return the new audio recorder object
                    *pRecorder = &amp;thiz->mObject.mItf;
                }
            }

        }

    }
#else
    result = SL_RESULT_FEATURE_UNSUPPORTED;
#endif

    SL_LEAVE_INTERFACE
}

Inside libOpenSLES there is a struct, SLEngineItf_, that holds creation methods not just for audio but for other media objects as well; here we only care about IEngine_CreateAudioRecorder. The construct call above creates a CAudioRecorder, whose layout is:

/*typedef*/ struct CAudioRecorder_struct {
    // mandated interfaces
    IObject mObject;
#ifdef ANDROID
#define INTERFACES_AudioRecorder 14 // see MPH_to_AudioRecorder in MPH_to.c for list of interfaces
#else
#define INTERFACES_AudioRecorder 9  // see MPH_to_AudioRecorder in MPH_to.c for list of interfaces
#endif
    SLuint8 mInterfaceStates2[INTERFACES_AudioRecorder - INTERFACES_Default];
    IDynamicInterfaceManagement mDynamicInterfaceManagement;
    IRecord mRecord;
    IAudioEncoder mAudioEncoder;
    // optional interfaces
    IBassBoost mBassBoost;
    IDynamicSource mDynamicSource;
    IEqualizer mEqualizer;
    IVisualization mVisualization;
    IVolume mVolume;
#ifdef ANDROID
    IBufferQueue mBufferQueue;
    IAndroidConfiguration mAndroidConfiguration;
    IAndroidAcousticEchoCancellation  mAcousticEchoCancellation;
    IAndroidAutomaticGainControl mAutomaticGainControl;
    IAndroidNoiseSuppression mNoiseSuppression;
#endif
    // remaining are per-instance private fields not associated with an interface
    DataLocatorFormat mDataSource;
    DataLocatorFormat mDataSink;
    // cached data for this instance
    SLuint8 mNumChannels;   // initially UNKNOWN_NUMCHANNELS, then const once it is known,
                            // range 1 <= x <= FCC_8
    SLuint32 mSampleRateMilliHz;// initially UNKNOWN_SAMPLERATE, then const once it is known
    // implementation-specific data for this instance
#ifdef ANDROID
    // FIXME consolidate the next several variables into ARecorder class to avoid placement new
    enum AndroidObjectType mAndroidObjType;
    android::sp<android::AudioRecord> mAudioRecord;
    android::sp<android::CallbackProtector> mCallbackProtector;
    audio_source_t mRecordSource;
    SLuint32 mPerformanceMode;
#endif
} /*CAudioRecorder*/;

This struct holds both the capture parameters and the capture object itself. Next, an android::AudioRecord is created and assigned to mAudioRecord, at which point the recorder is fully set up.
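
A note on the C "object" model seen throughout libOpenSLES: every interface handle is a pointer to a pointer to a struct of function pointers, and a method call dereferences that table and passes the handle back in as the first argument. A schematic sketch of the calling convention (not actual libOpenSLES code):

#include <SLES/OpenSLES.h>

// Schematic: SLObjectItf points at a vtable-like struct, so
// "(*obj)->Method(obj, ...)" is the C equivalent of a virtual call.
SLresult realizeRecorder(SLObjectItf recorderObject) {
    // (*recorderObject) yields the function-pointer table; Realize is one
    // entry, and the object itself is passed back in as 'self'.
    return (*recorderObject)->Realize(recorderObject, SL_BOOLEAN_FALSE);
}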

Now let's look at how playback is created:

Result AudioOutputStreamOpenSLES::open() {
...
    // Initialize the OpenSL ES engine
    Result oboeResult = AudioStreamOpenSLES::open();
    if (Result::OK != oboeResult)  return oboeResult;
    // Create the output mixer
    SLresult result = OutputMixerOpenSL::getInstance().open();
    if (SL_RESULT_SUCCESS != result) {
        AudioStreamOpenSLES::close();
        return Result::ErrorInternal;
    }

    SLuint32 bitsPerSample = static_cast<SLuint32>(getBytesPerSample() * kBitsPerByte);

    // configure audio source
    mBufferQueueLength = calculateOptimalBufferQueueLength();
    SLDataLocator_AndroidSimpleBufferQueue loc_bufq = {
            SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE,    // locatorType
            static_cast<SLuint32>(mBufferQueueLength)};   // numBuffers

    // Define the audio data format.
    SLDataFormat_PCM format_pcm = {
            SL_DATAFORMAT_PCM,       // formatType
            static_cast<SLuint32>(mChannelCount),           // numChannels
            static_cast<SLuint32>(mSampleRate * kMillisPerSecond),    // milliSamplesPerSec
            bitsPerSample,                      // bitsPerSample
            bitsPerSample,                      // containerSize;
            channelCountToChannelMask(mChannelCount), // channelMask
            getDefaultByteOrder(),
    };

    SLDataSource audioSrc = {&loc_bufq, &format_pcm};

    /**
     * API 21 (Lollipop) introduced support for floating-point data representation and an extended
     * data format type: SLAndroidDataFormat_PCM_EX. If running on API 21+ use this newer format
     * type, creating it from our original format.
     */
    SLAndroidDataFormat_PCM_EX format_pcm_ex;
    if (getSdkVersion() >= __ANDROID_API_L__) {
        SLuint32 representation = OpenSLES_ConvertFormatToRepresentation(getFormat());
        // Fill in the format structure.
        format_pcm_ex = OpenSLES_createExtendedFormat(format_pcm, representation);
        // Use in place of the previous format.
        audioSrc.pFormat = &format_pcm_ex;
    }
    // Create the player
    result = OutputMixerOpenSL::getInstance().createAudioPlayer(&mObjectInterface,
                                                                          &audioSrc);
    if (SL_RESULT_SUCCESS != result) {
        LOGE("createAudioPlayer() result:%s", getSLErrStr(result));
        goto error;
    }

    // Configure the stream.
    result = (*mObjectInterface)->GetInterface(mObjectInterface,
                                               SL_IID_ANDROIDCONFIGURATION,
                                               (void *)&configItf);
    if (SL_RESULT_SUCCESS != result) {
        LOGW("%s() GetInterface(SL_IID_ANDROIDCONFIGURATION) failed with %s",
             __func__, getSLErrStr(result));
    } else {
        result = configurePerformanceMode(configItf);
        if (SL_RESULT_SUCCESS != result) {
            goto error;
        }

        SLuint32 presetValue = OpenSLES_convertOutputUsage(getUsage());
        result = (*configItf)->SetConfiguration(configItf,
                                                SL_ANDROID_KEY_STREAM_TYPE,
                                                &presetValue,
                                                sizeof(presetValue));
        if (SL_RESULT_SUCCESS != result) {
            goto error;
        }
    }

    result = (*mObjectInterface)->Realize(mObjectInterface, SL_BOOLEAN_FALSE);
    if (SL_RESULT_SUCCESS != result) {
        LOGE("Realize player object result:%s", getSLErrStr(result));
        goto error;
    }

    result = (*mObjectInterface)->GetInterface(mObjectInterface, SL_IID_PLAY, &mPlayInterface);
    if (SL_RESULT_SUCCESS != result) {
        LOGE("GetInterface PLAY result:%s", getSLErrStr(result));
        goto error;
    }

    result = finishCommonOpen(configItf);
    if (SL_RESULT_SUCCESS != result) {
        goto error;
    }

    setState(StreamState::Open);
    return Result::OK;

error:
    close();  // Clean up various OpenSL objects and prevent resource leaks.
    return Result::ErrorInternal; // TODO convert error from SLES to OBOE
}

Here too the player is created through the OpenSL ES interface; the implementation is:

static SLresult IEngine_CreateAudioPlayer(SLEngineItf self, SLObjectItf *pPlayer,
    SLDataSource *pAudioSrc, SLDataSink *pAudioSnk, SLuint32 numInterfaces,
    const SLInterfaceID *pInterfaceIds, const SLboolean *pInterfaceRequired)
{
    SL_ENTER_INTERFACE

    if (NULL == pPlayer) {
       result = SL_RESULT_PARAMETER_INVALID;
    } else {
        *pPlayer = NULL;
        unsigned exposedMask, requiredMask;
        const ClassTable *pCAudioPlayer_class = objectIDtoClass(SL_OBJECTID_AUDIOPLAYER);
        assert(NULL != pCAudioPlayer_class);
        result = checkInterfaces(pCAudioPlayer_class, numInterfaces,
            pInterfaceIds, pInterfaceRequired, &exposedMask, &requiredMask);
        if (SL_RESULT_SUCCESS == result) {

            // Construct our new AudioPlayer instance
            CAudioPlayer *thiz = (CAudioPlayer *) construct(pCAudioPlayer_class, exposedMask, self);
            if (NULL == thiz) {
                result = SL_RESULT_MEMORY_FAILURE;
            } else {

                do {

                    // Initialize private fields not associated with an interface

                    // Default data source in case of failure in checkDataSource
                    thiz->mDataSource.mLocator.mLocatorType = SL_DATALOCATOR_NULL;
                    thiz->mDataSource.mFormat.mFormatType = SL_DATAFORMAT_NULL;

                    // Default data sink in case of failure in checkDataSink
                    thiz->mDataSink.mLocator.mLocatorType = SL_DATALOCATOR_NULL;
                    thiz->mDataSink.mFormat.mFormatType = SL_DATAFORMAT_NULL;

                    // Default is no per-channel mute or solo
                    thiz->mMuteMask = 0;
                    thiz->mSoloMask = 0;

                    // Will be set soon for PCM buffer queues, or later by platform-specific code
                    // during Realize or Prefetch
                    thiz->mNumChannels = UNKNOWN_NUMCHANNELS;
                    thiz->mSampleRateMilliHz = UNKNOWN_SAMPLERATE;

                    // More default values, in case destructor needs to be called early
                    thiz->mDirectLevel = 0; // no attenuation
#ifdef USE_OUTPUTMIXEXT
                    thiz->mTrack = NULL;
                    thiz->mGains[0] = 1.0f;
                    thiz->mGains[1] = 1.0f;
                    thiz->mDestroyRequested = SL_BOOLEAN_FALSE;
#endif
#ifdef USE_SNDFILE
                    thiz->mSndFile.mPathname = NULL;
                    thiz->mSndFile.mSNDFILE = NULL;
                    memset(&thiz->mSndFile.mSfInfo, 0, sizeof(SF_INFO));
                    memset(&thiz->mSndFile.mMutex, 0, sizeof(pthread_mutex_t));
                    thiz->mSndFile.mEOF = SL_BOOLEAN_FALSE;
                    thiz->mSndFile.mWhich = 0;
                    memset(thiz->mSndFile.mBuffer, 0, sizeof(thiz->mSndFile.mBuffer));
#endif
#ifdef ANDROID
                    // placement new (explicit constructor)
                    // FIXME unnecessary once those fields are encapsulated in one class, rather
                    //   than a structure
                    //###(void) new (&thiz->mAudioTrack) android::sp<android::AudioTrack>();
                    (void) new (&thiz->mTrackPlayer) android::sp<android::TrackPlayerBase>();
                    (void) new (&thiz->mCallbackProtector)
                            android::sp<android::CallbackProtector>();
                    (void) new (&thiz->mAuxEffect) android::sp<android::AudioEffect>();
                    (void) new (&thiz->mAPlayer) android::sp<android::GenericPlayer>();
                    // Android-specific POD fields are initialized in android_audioPlayer_create,
                    // and assume calloc or memset 0 during allocation
#endif

                    // Check the source and sink parameters against generic constraints,
                    // and make a local copy of all parameters in case other application threads
                    // change memory concurrently.

                    result = checkDataSource("pAudioSrc", pAudioSrc, &thiz->mDataSource,
                            DATALOCATOR_MASK_URI | DATALOCATOR_MASK_ADDRESS |
                            DATALOCATOR_MASK_BUFFERQUEUE
#ifdef ANDROID
                            | DATALOCATOR_MASK_ANDROIDFD | DATALOCATOR_MASK_ANDROIDSIMPLEBUFFERQUEUE
                            | DATALOCATOR_MASK_ANDROIDBUFFERQUEUE
#endif
                            , DATAFORMAT_MASK_MIME | DATAFORMAT_MASK_PCM | DATAFORMAT_MASK_PCM_EX);

                    if (SL_RESULT_SUCCESS != result) {
                        break;
                    }

                    result = checkDataSink("pAudioSnk", pAudioSnk, &thiz->mDataSink,
                            DATALOCATOR_MASK_OUTPUTMIX                  // for playback
#ifdef ANDROID
                            | DATALOCATOR_MASK_ANDROIDSIMPLEBUFFERQUEUE // for decode to a BQ
                            | DATALOCATOR_MASK_BUFFERQUEUE              // for decode to a BQ
#endif
                            , DATAFORMAT_MASK_NULL
#ifdef ANDROID
                            | DATAFORMAT_MASK_PCM | DATAFORMAT_MASK_PCM_EX  // for decode to PCM
#endif
                            );
                    if (SL_RESULT_SUCCESS != result) {
                        break;
                    }

                    // It would be unsafe to ever refer to the application pointers again
                    pAudioSrc = NULL;
                    pAudioSnk = NULL;

                    // Check that the requested interfaces are compatible with data source and sink
                    result = checkSourceSinkVsInterfacesCompatibility(&thiz->mDataSource,
                            &thiz->mDataSink, pCAudioPlayer_class, requiredMask);
                    if (SL_RESULT_SUCCESS != result) {
                        break;
                    }

                    // copy the buffer queue count from source locator (for playback) / from the
                    // sink locator (for decode on ANDROID build) to the buffer queue interface
                    // we have already range-checked the value down to a smaller width
                    SLuint16 nbBuffers = 0;
                    bool usesAdvancedBufferHeaders = false;
                    bool usesSimpleBufferQueue = false;
                    // creating an AudioPlayer which decodes AAC ADTS buffers to a PCM buffer queue
                    //  will cause usesAdvancedBufferHeaders and usesSimpleBufferQueue to be true
                    switch (thiz->mDataSource.mLocator.mLocatorType) {
                    case SL_DATALOCATOR_BUFFERQUEUE:
#ifdef ANDROID
                    case SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE:
#endif
                        usesSimpleBufferQueue = true;
                        nbBuffers = (SLuint16) thiz->mDataSource.mLocator.mBufferQueue.numBuffers;
                        assert(SL_DATAFORMAT_PCM == thiz->mDataSource.mFormat.mFormatType
                                || SL_ANDROID_DATAFORMAT_PCM_EX
                                    == thiz->mDataSource.mFormat.mFormatType);
                        thiz->mNumChannels = thiz->mDataSource.mFormat.mPCM.numChannels;
                        thiz->mSampleRateMilliHz = thiz->mDataSource.mFormat.mPCM.samplesPerSec;
                        break;
#ifdef ANDROID
                    case SL_DATALOCATOR_ANDROIDBUFFERQUEUE:
                        usesAdvancedBufferHeaders = true;
                        nbBuffers = (SLuint16) thiz->mDataSource.mLocator.mABQ.numBuffers;
                        thiz->mAndroidBufferQueue.mNumBuffers = nbBuffers;
                        break;
#endif
                    default:
                        nbBuffers = 0;
                        break;
                    }
#ifdef ANDROID
                    switch (thiz->mDataSink.mLocator.mLocatorType) {
                    case SL_DATALOCATOR_BUFFERQUEUE:
                    case SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE:
                        usesSimpleBufferQueue = true;
                        nbBuffers = thiz->mDataSink.mLocator.mBufferQueue.numBuffers;
                        assert(SL_DATAFORMAT_PCM == thiz->mDataSink.mFormat.mFormatType
                                || SL_ANDROID_DATAFORMAT_PCM_EX
                                    == thiz->mDataSink.mFormat.mFormatType);
                        // FIXME The values specified by the app are meaningless. We get the
                        // real values from the decoder.  But the data sink checks currently require
                        // that the app specify these useless values.  Needs doc/fix.
                        // Instead use the "unknown" values, as needed by prepare completion.
                        // thiz->mNumChannels = thiz->mDataSink.mFormat.mPCM.numChannels;
                        // thiz->mSampleRateMilliHz = thiz->mDataSink.mFormat.mPCM.samplesPerSec;
                        thiz->mNumChannels = UNKNOWN_NUMCHANNELS;
                        thiz->mSampleRateMilliHz = UNKNOWN_SAMPLERATE;
                        break;
                    default:
                        // leave nbBuffers unchanged
                        break;
                    }
#endif
                    thiz->mBufferQueue.mNumBuffers = nbBuffers;

                    // check the audio source and sink parameters against platform support
#ifdef ANDROID
                    result = android_audioPlayer_checkSourceSink(thiz);
                    if (SL_RESULT_SUCCESS != result) {
                        break;
                    }
#endif

#ifdef USE_SNDFILE
                    result = SndFile_checkAudioPlayerSourceSink(thiz);
                    if (SL_RESULT_SUCCESS != result) {
                        break;
                    }
#endif

#ifdef USE_OUTPUTMIXEXT
                    result = IOutputMixExt_checkAudioPlayerSourceSink(thiz);
                    if (SL_RESULT_SUCCESS != result) {
                        break;
                    }
#endif

                    // Allocate memory for buffer queue
                    if (usesAdvancedBufferHeaders) {
#ifdef ANDROID
                        // locator is SL_DATALOCATOR_ANDROIDBUFFERQUEUE
                        result = initializeAndroidBufferQueueMembers(thiz);
#else
                        assert(false);
#endif
                    }

                    if (usesSimpleBufferQueue) {
                        // locator is SL_DATALOCATOR_BUFFERQUEUE
                        //         or SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE
                        result = initializeBufferQueueMembers(thiz);
                    }

                    // used to store the data source of our audio player
                    thiz->mDynamicSource.mDataSource = &thiz->mDataSource.u.mSource;

                    // platform-specific initialization
#ifdef ANDROID
                    android_audioPlayer_create(thiz);
#endif

                } while (0);

                if (SL_RESULT_SUCCESS != result) {
                    IObject_Destroy(&thiz->mObject.mItf);
                } else {
                    IObject_Publish(&thiz->mObject);
                    // return the new audio player object
                    *pPlayer = &thiz->mObject.mItf;
                }

            }
        }

    }

    SL_LEAVE_INTERFACE
}

android_audioPlayer_create creates the player, but at this point the player is not yet initialized:

void android_audioPlayer_create(CAudioPlayer *pAudioPlayer) {

    // pAudioPlayer->mAndroidObjType has been set in android_audioPlayer_checkSourceSink()
    // and if it was == INVALID_TYPE, then IEngine_CreateAudioPlayer would never call us
    assert(INVALID_TYPE != pAudioPlayer->mAndroidObjType);

    // These initializations are in the same order as the field declarations in classes.h

    // FIXME Consolidate initializations (many of these already in IEngine_CreateAudioPlayer)
    // mAndroidObjType: see above comment
    pAudioPlayer->mAndroidObjState = ANDROID_UNINITIALIZED;
    pAudioPlayer->mSessionId = (audio_session_t) android::AudioSystem::newAudioUniqueId(
            AUDIO_UNIQUE_ID_USE_SESSION);
    pAudioPlayer->mPIId = PLAYER_PIID_INVALID;

    // placeholder: not necessary yet as session ID lifetime doesn't extend beyond player
    // android::AudioSystem::acquireAudioSessionId(pAudioPlayer->mSessionId);

    pAudioPlayer->mStreamType = ANDROID_DEFAULT_OUTPUT_STREAM_TYPE;
    pAudioPlayer->mPerformanceMode = ANDROID_PERFORMANCE_MODE_DEFAULT;

    // mAudioTrack lifecycle is handled through mTrackPlayer
    pAudioPlayer->mTrackPlayer = new android::TrackPlayerBase();
    assert(pAudioPlayer->mTrackPlayer != 0);
    pAudioPlayer->mCallbackProtector = new android::CallbackProtector();
    // mAPLayer
    // mAuxEffect

    pAudioPlayer->mAuxSendLevel = 0;
    pAudioPlayer->mAmplFromDirectLevel = 1.0f; // matches initial mDirectLevel value
    pAudioPlayer->mDeferredStart = false;

    // This section re-initializes interface-specific fields that
    // can be set or used regardless of whether the interface is
    // exposed on the AudioPlayer or not

    switch (pAudioPlayer->mAndroidObjType) {
    case AUDIOPLAYER_FROM_PCM_BUFFERQUEUE:
        pAudioPlayer->mPlaybackRate.mMinRate = AUDIOTRACK_MIN_PLAYBACKRATE_PERMILLE;
        pAudioPlayer->mPlaybackRate.mMaxRate = AUDIOTRACK_MAX_PLAYBACKRATE_PERMILLE;
        break;
    case AUDIOPLAYER_FROM_URIFD:
        pAudioPlayer->mPlaybackRate.mMinRate = MEDIAPLAYER_MIN_PLAYBACKRATE_PERMILLE;
        pAudioPlayer->mPlaybackRate.mMaxRate = MEDIAPLAYER_MAX_PLAYBACKRATE_PERMILLE;
        break;
    default:
        // use the default range
        break;
    }

}

TrackPlayerBase is only a shell at this point; it still needs a real AudioTrack. So when does that get initialized? open() contains a Realize call on the OpenSL ES interface, and for an AudioPlayer the implementation is:

SLresult CAudioPlayer_Realize(void *self, SLboolean async)
{
    CAudioPlayer *thiz = (CAudioPlayer *) self;
    SLresult result = SL_RESULT_SUCCESS;

#ifdef ANDROID
    result = android_audioPlayer_realize(thiz, async);
#endif

#ifdef USE_SNDFILE
    result = SndFile_Realize(thiz);
#endif

    // At this point the channel count and sample rate might still be unknown,
    // depending on the data source and the platform implementation.
    // If they are unknown here, then they will be determined during prefetch.

    return result;
}

Now let's look at android_audioPlayer_realize:

SLresult android_audioPlayer_realize(CAudioPlayer *pAudioPlayer, SLboolean async) {

    SLresult result = SL_RESULT_SUCCESS;
    SL_LOGV("Realize pAudioPlayer=%p", pAudioPlayer);
    AudioPlayback_Parameters app;
    app.sessionId = pAudioPlayer->mSessionId;
    app.streamType = pAudioPlayer->mStreamType;

    switch (pAudioPlayer->mAndroidObjType) {

    //-----------------------------------
    // AudioTrack
    case AUDIOPLAYER_FROM_PCM_BUFFERQUEUE: {
        // initialize platform-specific CAudioPlayer fields

        SLDataFormat_PCM *df_pcm = (SLDataFormat_PCM *)
                pAudioPlayer->mDynamicSource.mDataSource->pFormat;

        uint32_t sampleRate = sles_to_android_sampleRate(df_pcm->samplesPerSec);

        audio_channel_mask_t channelMask;
        channelMask = sles_to_audio_output_channel_mask(df_pcm->channelMask);

        // To maintain backward compatibility with previous releases, ignore
        // channel masks that are not indexed.
        if (channelMask == AUDIO_CHANNEL_INVALID
                || audio_channel_mask_get_representation(channelMask)
                        == AUDIO_CHANNEL_REPRESENTATION_POSITION) {
            channelMask = audio_channel_out_mask_from_count(df_pcm->numChannels);
            SL_LOGI("Emulating old channel mask behavior "
                    "(ignoring positional mask %#x, using default mask %#x based on "
                    "channel count of %d)", df_pcm->channelMask, channelMask,
                    df_pcm->numChannels);
        }
        SL_LOGV("AudioPlayer: mapped SLES channel mask %#x to android channel mask %#x",
            df_pcm->channelMask,
            channelMask);

        checkAndSetPerformanceModePre(pAudioPlayer);

        audio_output_flags_t policy;
        switch (pAudioPlayer->mPerformanceMode) {
        case ANDROID_PERFORMANCE_MODE_POWER_SAVING:
            policy = AUDIO_OUTPUT_FLAG_DEEP_BUFFER;
            break;
        case ANDROID_PERFORMANCE_MODE_NONE:
            policy = AUDIO_OUTPUT_FLAG_NONE;
            break;
        case ANDROID_PERFORMANCE_MODE_LATENCY_EFFECTS:
            policy = AUDIO_OUTPUT_FLAG_FAST;
            break;
        case ANDROID_PERFORMANCE_MODE_LATENCY:
        default:
            policy = (audio_output_flags_t)(AUDIO_OUTPUT_FLAG_FAST | AUDIO_OUTPUT_FLAG_RAW);
            break;
        }

        int32_t notificationFrames;
        if ((policy & AUDIO_OUTPUT_FLAG_FAST) != 0) {
            // negative notificationFrames is the number of notifications (sub-buffers) per track
            // buffer for details see the explanation at frameworks/av/include/media/AudioTrack.h
            notificationFrames = -pAudioPlayer->mBufferQueue.mNumBuffers;
        } else {
            notificationFrames = 0;
        }

        android::AudioTrack* pat = new android::AudioTrack(
                pAudioPlayer->mStreamType,                           // streamType
                sampleRate,                                          // sampleRate
                sles_to_android_sampleFormat(df_pcm),                // format
                channelMask,                                         // channel mask
                0,                                                   // frameCount
                policy,                                              // flags
                audioTrack_callBack_pullFromBuffQueue,               // callback
                (void *) pAudioPlayer,                               // user
                notificationFrames,                                  // see comment above
                pAudioPlayer->mSessionId);

        // Set it here so it can be logged by the destructor if the open failed.
        pat->setCallerName(ANDROID_OPENSLES_CALLER_NAME);

        android::status_t status = pat->initCheck();
        if (status != android::NO_ERROR) {
            // AudioTracks are meant to be refcounted, so their dtor is protected.
            static_cast<void>(android::sp<android::AudioTrack>(pat));

            SL_LOGE("AudioTrack::initCheck status %u", status);
            // FIXME should return a more specific result depending on status
            result = SL_RESULT_CONTENT_UNSUPPORTED;
            return result;
        }

        pAudioPlayer->mTrackPlayer->init(pat, android::PLAYER_TYPE_SLES_AUDIOPLAYER_BUFFERQUEUE,
                usageForStreamType(pAudioPlayer->mStreamType));

        // update performance mode according to actual flags granted to AudioTrack
        checkAndSetPerformanceModePost(pAudioPlayer);

        // initialize platform-independent CAudioPlayer fields

        pAudioPlayer->mNumChannels = df_pcm->numChannels;
        pAudioPlayer->mSampleRateMilliHz = df_pcm->samplesPerSec; // Note: bad field name in SL ES

        // This use case does not have a separate "prepare" step
        pAudioPlayer->mAndroidObjState = ANDROID_READY;

        // If there is a JavaAudioRoutingProxy associated with this player, hook it up...
        JNIEnv* j_env = NULL;
        jclass clsAudioTrack = NULL;
        jmethodID midRoutingProxy_connect = NULL;
        if (pAudioPlayer->mAndroidConfiguration.mRoutingProxy != NULL &&
                (j_env = android::AndroidRuntime::getJNIEnv()) != NULL &&
                (clsAudioTrack = j_env->FindClass("android/media/AudioTrack")) != NULL &&
                (midRoutingProxy_connect =
                    j_env->GetMethodID(clsAudioTrack, "deferred_connect", "(J)V")) != NULL) {
            j_env->ExceptionClear();
            j_env->CallVoidMethod(pAudioPlayer->mAndroidConfiguration.mRoutingProxy,
                                  midRoutingProxy_connect,
                                  (jlong)pAudioPlayer->mTrackPlayer->mAudioTrack.get());
            if (j_env->ExceptionCheck()) {
                SL_LOGE("Java exception releasing player routing object.");
                result = SL_RESULT_INTERNAL_ERROR;
                pAudioPlayer->mTrackPlayer->mAudioTrack.clear();
                return result;
            }
        }
    }
        break;

    //-----------------------------------
    // MediaPlayer
    case AUDIOPLAYER_FROM_URIFD: {
        pAudioPlayer->mAPlayer = new android::LocAVPlayer(&app, false /*hasVideo*/);
        pAudioPlayer->mAPlayer->init(sfplayer_handlePrefetchEvent,
                        (void*)pAudioPlayer /*notifUSer*/);

        switch (pAudioPlayer->mDataSource.mLocator.mLocatorType) {
            case SL_DATALOCATOR_URI: {
                // The legacy implementation ran Stagefright within the application process, and
                // so allowed local pathnames specified by URI that were openable by
                // the application but were not openable by mediaserver.
                // The current implementation runs Stagefright (mostly) within mediaserver,
                // which runs as a different UID and likely a different current working directory.
                // For backwards compatibility with any applications which may have relied on the
                // previous behavior, we convert an openable file URI into an FD.
                // Note that unlike SL_DATALOCATOR_ANDROIDFD, this FD is owned by us
                // and so we close it as soon as we've passed it (via Binder dup) to mediaserver.
                const char *uri = (const char *)pAudioPlayer->mDataSource.mLocator.mURI.URI;
                if (!isDistantProtocol(uri)) {
                    // don't touch the original uri, we may need it later
                    const char *pathname = uri;
                    // skip over an optional leading file:// prefix
                    if (!strncasecmp(pathname, "file://", 7)) {
                        pathname += 7;
                    }
                    // attempt to open it as a file using the application's credentials
                    int fd = ::open(pathname, O_RDONLY);
                    if (fd >= 0) {
                        // if open is successful, then check to see if it's a regular file
                        struct stat statbuf;
                        if (!::fstat(fd, &statbuf) && S_ISREG(statbuf.st_mode)) {
                            // treat similarly to an FD data locator, but
                            // let setDataSource take responsibility for closing fd
                            pAudioPlayer->mAPlayer->setDataSource(fd, 0, statbuf.st_size, true);
                            break;
                        }
                        // we were able to open it, but it's not a file, so let mediaserver try
                        (void) ::close(fd);
                    }
                }
                // if either the URI didn't look like a file, or open failed, or not a file
                pAudioPlayer->mAPlayer->setDataSource(uri);
                } break;
            case SL_DATALOCATOR_ANDROIDFD: {
                int64_t offset = (int64_t)pAudioPlayer->mDataSource.mLocator.mFD.offset;
                pAudioPlayer->mAPlayer->setDataSource(
                        (int)pAudioPlayer->mDataSource.mLocator.mFD.fd,
                        offset == SL_DATALOCATOR_ANDROIDFD_USE_FILE_SIZE ?
                                (int64_t)PLAYER_FD_FIND_FILE_SIZE : offset,
                        (int64_t)pAudioPlayer->mDataSource.mLocator.mFD.length);
                }
                break;
            default:
                SL_LOGE(ERROR_PLAYERREALIZE_UNKNOWN_DATASOURCE_LOCATOR);
                break;
        }

        if (pAudioPlayer->mObject.mEngine->mAudioManager == 0) {
            SL_LOGE("AudioPlayer realize: no audio service, player will not be registered");
            pAudioPlayer->mPIId = 0;
        } else {
            pAudioPlayer->mPIId = pAudioPlayer->mObject.mEngine->mAudioManager->trackPlayer(
                    android::PLAYER_TYPE_SLES_AUDIOPLAYER_URI_FD,
                    usageForStreamType(pAudioPlayer->mStreamType), AUDIO_CONTENT_TYPE_UNKNOWN,
                    pAudioPlayer->mTrackPlayer);
        }
        }
        break;

    //-----------------------------------
    // StreamPlayer
    case AUDIOPLAYER_FROM_TS_ANDROIDBUFFERQUEUE: {
        android::StreamPlayer* splr = new android::StreamPlayer(&app, false /*hasVideo*/,
                &pAudioPlayer->mAndroidBufferQueue, pAudioPlayer->mCallbackProtector);
        pAudioPlayer->mAPlayer = splr;
        splr->init(sfplayer_handlePrefetchEvent, (void*)pAudioPlayer);
        }
        break;

    //-----------------------------------
    // AudioToCbRenderer
    case AUDIOPLAYER_FROM_URIFD_TO_PCM_BUFFERQUEUE: {
        android::AudioToCbRenderer* decoder = new android::AudioToCbRenderer(&app);
        pAudioPlayer->mAPlayer = decoder;
        // configures the callback for the sink buffer queue
        decoder->setDataPushListener(adecoder_writeToBufferQueue, pAudioPlayer);
        // configures the callback for the notifications coming from the SF code
        decoder->init(sfplayer_handlePrefetchEvent, (void*)pAudioPlayer);

        switch (pAudioPlayer->mDataSource.mLocator.mLocatorType) {
        case SL_DATALOCATOR_URI:
            decoder->setDataSource(
                    (const char*)pAudioPlayer->mDataSource.mLocator.mURI.URI);
            break;
        case SL_DATALOCATOR_ANDROIDFD: {
            int64_t offset = (int64_t)pAudioPlayer->mDataSource.mLocator.mFD.offset;
            decoder->setDataSource(
                    (int)pAudioPlayer->mDataSource.mLocator.mFD.fd,
                    offset == SL_DATALOCATOR_ANDROIDFD_USE_FILE_SIZE ?
                            (int64_t)PLAYER_FD_FIND_FILE_SIZE : offset,
                            (int64_t)pAudioPlayer->mDataSource.mLocator.mFD.length);
            }
            break;
        default:
            SL_LOGE(ERROR_PLAYERREALIZE_UNKNOWN_DATASOURCE_LOCATOR);
            break;
        }

        }
        break;

    //-----------------------------------
    // AacBqToPcmCbRenderer
    case AUDIOPLAYER_FROM_ADTS_ABQ_TO_PCM_BUFFERQUEUE: {
        android::AacBqToPcmCbRenderer* bqtobq = new android::AacBqToPcmCbRenderer(&app,
                &pAudioPlayer->mAndroidBufferQueue);
        // configures the callback for the sink buffer queue
        bqtobq->setDataPushListener(adecoder_writeToBufferQueue, pAudioPlayer);
        pAudioPlayer->mAPlayer = bqtobq;
        // configures the callback for the notifications coming from the SF code,
        // but also implicitly configures the AndroidBufferQueue from which ADTS data is read
        pAudioPlayer->mAPlayer->init(sfplayer_handlePrefetchEvent, (void*)pAudioPlayer);
        }
        break;

    //-----------------------------------
    default:
        SL_LOGE(ERROR_PLAYERREALIZE_UNEXPECTED_OBJECT_TYPE_D, pAudioPlayer->mAndroidObjType);
        result = SL_RESULT_INTERNAL_ERROR;
        break;
    }

    if (result == SL_RESULT_SUCCESS) {
        // proceed with effect initialization
        // initialize EQ
        // FIXME use a table of effect descriptors when adding support for more effects

        // No session effects allowed even in latency with effects performance mode because HW
        // accelerated effects are only tolerated as post processing in this mode
        if ((pAudioPlayer->mAndroidObjType != AUDIOPLAYER_FROM_PCM_BUFFERQUEUE) ||
                ((pAudioPlayer->mPerformanceMode != ANDROID_PERFORMANCE_MODE_LATENCY) &&
                 (pAudioPlayer->mPerformanceMode != ANDROID_PERFORMANCE_MODE_LATENCY_EFFECTS))) {
            if (memcmp(SL_IID_EQUALIZER, &pAudioPlayer->mEqualizer.mEqDescriptor.type,
                    sizeof(effect_uuid_t)) == 0) {
                SL_LOGV("Need to initialize EQ for AudioPlayer=%p", pAudioPlayer);
                android_eq_init(pAudioPlayer->mSessionId, &pAudioPlayer->mEqualizer);
            }
            // initialize BassBoost
            if (memcmp(SL_IID_BASSBOOST, &pAudioPlayer->mBassBoost.mBassBoostDescriptor.type,
                    sizeof(effect_uuid_t)) == 0) {
                SL_LOGV("Need to initialize BassBoost for AudioPlayer=%p", pAudioPlayer);
                android_bb_init(pAudioPlayer->mSessionId, &pAudioPlayer->mBassBoost);
            }
            // initialize Virtualizer
            if (memcmp(SL_IID_VIRTUALIZER, &pAudioPlayer->mVirtualizer.mVirtualizerDescriptor.type,
                       sizeof(effect_uuid_t)) == 0) {
                SL_LOGV("Need to initialize Virtualizer for AudioPlayer=%p", pAudioPlayer);
                android_virt_init(pAudioPlayer->mSessionId, &pAudioPlayer->mVirtualizer);
            }
        }
    }

    // initialize EffectSend
    // FIXME initialize EffectSend

    return result;
}

This is where the AudioTrack gets created; TrackPlayerBase::init is then called with the newly created AudioTrack.
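
Pulling the pieces together, the OpenSL ES object lifecycle that a playback stream goes through is roughly Create → Realize → GetInterface → SetPlayState. A condensed sketch of that sequence (error checks omitted):

#include <SLES/OpenSLES.h>

// CreateAudioPlayer() only constructs the CAudioPlayer shell;
// Realize() is what actually creates and initializes the AudioTrack.
void startPlayer(SLEngineItf engineItf, SLDataSource *source, SLDataSink *sink) {
    SLObjectItf playerObject;
    (*engineItf)->CreateAudioPlayer(engineItf, &playerObject,
                                    source, sink, 0, NULL, NULL);

    // Runs CAudioPlayer_Realize -> android_audioPlayer_realize.
    (*playerObject)->Realize(playerObject, SL_BOOLEAN_FALSE);

    // Fetch the play interface and start rendering.
    SLPlayItf playItf;
    (*playerObject)->GetInterface(playerObject, SL_IID_PLAY, &playItf);
    (*playItf)->SetPlayState(playItf, SL_PLAYSTATE_PLAYING);
}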

Next, the AAudio open flow:

Result AudioStreamAAudio::open() {
    Result result = Result::OK;

    if (mAAudioStream != nullptr) {
        return Result::ErrorInvalidState;
    }

    result = AudioStream::open();
    if (result != Result::OK) {
        return result;
    }

    AAudioStreamBuilder *aaudioBuilder;
    result = static_cast<Result>(mLibLoader->createStreamBuilder(&aaudioBuilder));
    if (result != Result::OK) {
        return result;
    }

    // Do not set INPUT capacity below 4096 because that prevents us from getting a FAST track
    // when using the Legacy data path.
    // If the app requests > 4096 then we allow it but we are less likely to get LowLatency.
    // See internal bug b/80308183 for more details.
    // Fixed in Q but let's still clip the capacity because high input capacity
    // does not increase latency.
    int32_t capacity = mBufferCapacityInFrames;
    constexpr int kCapacityRequiredForFastLegacyTrack = 4096; // matches value in AudioFlinger
    if (OboeGlobals::areWorkaroundsEnabled()
            && mDirection == oboe::Direction::Input
            && capacity != oboe::Unspecified
            && capacity < kCapacityRequiredForFastLegacyTrack
            && mPerformanceMode == oboe::PerformanceMode::LowLatency) {
        capacity = kCapacityRequiredForFastLegacyTrack;
        LOGD("AudioStreamAAudio.open() capacity changed from %d to %d for lower latency",
             static_cast<int>(mBufferCapacityInFrames), capacity);
    }
    mLibLoader->builder_setBufferCapacityInFrames(aaudioBuilder, capacity);

    // Channel mask was added in SC_V2. Given the corresponding channel count of selected channel
    // mask may be different from selected channel count, the last set value will be respected.
    // If channel count is set after channel mask, the previously set channel mask will be cleared.
    // If channel mask is set after channel count, the channel count will be automatically
    // calculated from selected channel mask. In that case, only set channel mask when the API
    // is available and the channel mask is specified.
    if (mLibLoader->builder_setChannelMask != nullptr && mChannelMask != ChannelMask::Unspecified) {
        mLibLoader->builder_setChannelMask(aaudioBuilder,
                                           static_cast<aaudio_channel_mask_t>(mChannelMask));
    } else {
        mLibLoader->builder_setChannelCount(aaudioBuilder, mChannelCount);
    }
    mLibLoader->builder_setDeviceId(aaudioBuilder, mDeviceId);
    mLibLoader->builder_setDirection(aaudioBuilder, static_cast<aaudio_direction_t>(mDirection));
    mLibLoader->builder_setFormat(aaudioBuilder, static_cast<aaudio_format_t>(mFormat));
    mLibLoader->builder_setSampleRate(aaudioBuilder, mSampleRate);
    mLibLoader->builder_setSharingMode(aaudioBuilder,
                                       static_cast<aaudio_sharing_mode_t>(mSharingMode));
    mLibLoader->builder_setPerformanceMode(aaudioBuilder,
                                           static_cast<aaudio_performance_mode_t>(mPerformanceMode));

    // These were added in P so we have to check for the function pointer.
    if (mLibLoader->builder_setUsage != nullptr) {
        mLibLoader->builder_setUsage(aaudioBuilder,
                                     static_cast<aaudio_usage_t>(mUsage));
    }

    if (mLibLoader->builder_setContentType != nullptr) {
        mLibLoader->builder_setContentType(aaudioBuilder,
                                           static_cast<aaudio_content_type_t>(mContentType));
    }

    if (mLibLoader->builder_setInputPreset != nullptr) {
        aaudio_input_preset_t inputPreset = mInputPreset;
        if (getSdkVersion() <= __ANDROID_API_P__ && inputPreset == InputPreset::VoicePerformance) {
            LOGD("InputPreset::VoicePerformance not supported before Q. Using VoiceRecognition.");
            inputPreset = InputPreset::VoiceRecognition; // most similar preset
        }
        mLibLoader->builder_setInputPreset(aaudioBuilder,
                                           static_cast<aaudio_input_preset_t>(inputPreset));
    }

    if (mLibLoader->builder_setSessionId != nullptr) {
        mLibLoader->builder_setSessionId(aaudioBuilder,
                                         static_cast<aaudio_session_id_t>(mSessionId));
    }

    // These were added in S so we have to check for the function pointer.
    if (mLibLoader->builder_setPackageName != nullptr && !mPackageName.empty()) {
        mLibLoader->builder_setPackageName(aaudioBuilder,
                                           mPackageName.c_str());
    }

    if (mLibLoader->builder_setAttributionTag != nullptr && !mAttributionTag.empty()) {
        mLibLoader->builder_setAttributionTag(aaudioBuilder,
                                           mAttributionTag.c_str());
    }

    if (isDataCallbackSpecified()) {
        mLibLoader->builder_setDataCallback(aaudioBuilder, oboe_aaudio_data_callback_proc, this);
        mLibLoader->builder_setFramesPerDataCallback(aaudioBuilder, getFramesPerDataCallback());

        if (!isErrorCallbackSpecified()) {
            // The app did not specify a callback so we should specify
            // our own so the stream gets closed and stopped.
            mErrorCallback = &mDefaultErrorCallback;
        }
        mLibLoader->builder_setErrorCallback(aaudioBuilder, internalErrorCallback, this);
    }
    // Else if the data callback is not being used then the write method will return an error
    // and the app can stop and close the stream.

    // ============= OPEN THE STREAM ================
    {
        AAudioStream *stream = nullptr;
        result = static_cast<Result>(mLibLoader->builder_openStream(aaudioBuilder, &stream));
        mAAudioStream.store(stream);
    }
    if (result != Result::OK) {
        // Warn developer because ErrorInternal is not very informative.
        if (result == Result::ErrorInternal && mDirection == Direction::Input) {
            LOGW("AudioStreamAAudio.open() may have failed due to lack of "
                 "audio recording permission.");
        }
        goto error2;
    }

    // Query and cache the stream properties
    mDeviceId = mLibLoader->stream_getDeviceId(mAAudioStream);
    mChannelCount = mLibLoader->stream_getChannelCount(mAAudioStream);
    mSampleRate = mLibLoader->stream_getSampleRate(mAAudioStream);
    mFormat = static_cast<AudioFormat>(mLibLoader->stream_getFormat(mAAudioStream));
    mSharingMode = static_cast<SharingMode>(mLibLoader->stream_getSharingMode(mAAudioStream));
    mPerformanceMode = static_cast<PerformanceMode>(
            mLibLoader->stream_getPerformanceMode(mAAudioStream));
    mBufferCapacityInFrames = mLibLoader->stream_getBufferCapacity(mAAudioStream);
    mBufferSizeInFrames = mLibLoader->stream_getBufferSize(mAAudioStream);
    mFramesPerBurst = mLibLoader->stream_getFramesPerBurst(mAAudioStream);

    // These were added in P so we have to check for the function pointer.
    if (mLibLoader->stream_getUsage != nullptr) {
        mUsage = static_cast<Usage>(mLibLoader->stream_getUsage(mAAudioStream));
    }
    if (mLibLoader->stream_getContentType != nullptr) {
        mContentType = static_cast<ContentType>(mLibLoader->stream_getContentType(mAAudioStream));
    }
    if (mLibLoader->stream_getInputPreset != nullptr) {
        mInputPreset = static_cast<InputPreset>(mLibLoader->stream_getInputPreset(mAAudioStream));
    }
    if (mLibLoader->stream_getSessionId != nullptr) {
        mSessionId = static_cast<SessionId>(mLibLoader->stream_getSessionId(mAAudioStream));
    } else {
        mSessionId = SessionId::None;
    }

    if (mLibLoader->stream_getChannelMask != nullptr) {
        mChannelMask = static_cast<ChannelMask>(mLibLoader->stream_getChannelMask(mAAudioStream));
    }

    LOGD("AudioStreamAAudio.open() format=%d, sampleRate=%d, capacity = %d",
            static_cast<int>(mFormat), static_cast<int>(mSampleRate),
            static_cast<int>(mBufferCapacityInFrames));

    calculateDefaultDelayBeforeCloseMillis();

error2:
    mLibLoader->builder_delete(aaudioBuilder);
    LOGD("AudioStreamAAudio.open: AAudioStream_Open() returned %s",
         mLibLoader->convertResultToText(static_cast<aaudio_result_t>(result)));
    return result;
}

The key step here is dlopen-ing libaaudio.so and resolving the AAudio entry points from it; inside AAudio itself, the capture stream is then created through the audio service (audioserver).
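
Conceptually the loader boils down to the classic dlopen/dlsym pattern; here is a stripped-down sketch (Oboe's real AAudioLoader resolves and caches many more symbols, and the helper name below is ours):

Code language: cpp
#include <dlfcn.h>
#include <aaudio/AAudio.h>

// Sketch: resolve libaaudio.so at runtime so the same binary still loads on
// devices that predate AAudio (Oboe then falls back to OpenSL ES).
typedef aaudio_result_t (*CreateBuilderFn)(AAudioStreamBuilder **);

static CreateBuilderFn loadCreateStreamBuilder() {
    void *handle = dlopen("libaaudio.so", RTLD_NOW);
    if (handle == nullptr) {
        return nullptr; // AAudio not present on this device
    }
    return reinterpret_cast<CreateBuilderFn>(
            dlsym(handle, "AAudio_createStreamBuilder"));
}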

Next, let's look at how OpenSL ES starts capture. The entry point is requestStart, which differs from a plain start in that it is asynchronous:

Code language: cpp
Result AudioInputStreamOpenSLES::requestStart() {
    LOGD("AudioInputStreamOpenSLES(): %s() called", __func__);
    std::lock_guard<std::mutex> lock(mLock);
    StreamState initialState = getState();
    switch (initialState) {
        case StreamState::Starting:
        case StreamState::Started:
            return Result::OK;
        case StreamState::Closed:
            return Result::ErrorClosed;
        default:
            break;
    }

    // We use a callback if the user requests one
    // OR if we have an internal callback to fill the blocking IO buffer.
    setDataCallbackEnabled(true);

    setState(StreamState::Starting);
    Result result = setRecordState_l(SL_RECORDSTATE_RECORDING);
    if (result == Result::OK) {
        setState(StreamState::Started);
        // Enqueue the first buffer to start the streaming.
        // This does not call the callback function.
        enqueueCallbackBuffer(mSimpleBufferQueueInterface);
    } else {
        setState(initialState);
    }
    return result;
}
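
Since requestStart only moves the state to Starting and returns, an app that needs to block until the stream is actually running can use Oboe's waitForStateChange. A usage sketch (the helper name startAndWait is ours):

Code language: cpp
#include <oboe/Oboe.h>

// Sketch: start a stream asynchronously, then wait (with a timeout) until it
// has left the Starting state.
oboe::Result startAndWait(std::shared_ptr<oboe::AudioStream> &stream) {
    oboe::Result result = stream->requestStart();
    if (result != oboe::Result::OK) return result;

    oboe::StreamState nextState = oboe::StreamState::Uninitialized;
    constexpr int64_t kTimeoutNanos = 100 * 1000 * 1000; // 100 ms
    return stream->waitForStateChange(oboe::StreamState::Starting,
                                      &nextState, kTimeoutNanos);
}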

The main implementation is in setRecordState_l:

Code language: cpp
Result AudioInputStreamOpenSLES::setRecordState_l(SLuint32 newState) {
    LOGD("AudioInputStreamOpenSLES::%s(%u)", __func__, newState);
    Result result = Result::OK;

    if (mRecordInterface == nullptr) {
        LOGW("AudioInputStreamOpenSLES::%s() mRecordInterface is null", __func__);
        return Result::ErrorInvalidState;
    }
    SLresult slResult = (*mRecordInterface)->SetRecordState(mRecordInterface, newState);
    //LOGD("AudioInputStreamOpenSLES::%s(%u) returned %u", __func__, newState, slResult);
    if (SL_RESULT_SUCCESS != slResult) {
        LOGE("AudioInputStreamOpenSLES::%s(%u) returned error %s",
                __func__, newState, getSLErrStr(slResult));
        result = Result::ErrorInternal; // TODO review
    }
    return result;
}

Now look inside SetRecordState:

Code language: cpp
static SLresult IRecord_SetRecordState(SLRecordItf self, SLuint32 state)
{
    SL_ENTER_INTERFACE

    switch (state) {
    case SL_RECORDSTATE_STOPPED:
    case SL_RECORDSTATE_PAUSED:
    case SL_RECORDSTATE_RECORDING:
        {
        IRecord *thiz = (IRecord *) self;
        interface_lock_exclusive(thiz);
        thiz->mState = state;
#ifdef ANDROID
        android_audioRecorder_setRecordState(InterfaceToCAudioRecorder(thiz), state);
#endif
        interface_unlock_exclusive(thiz);
        result = SL_RESULT_SUCCESS;
        }
        break;
    default:
        result = SL_RESULT_PARAMETER_INVALID;
        break;
    }

    SL_LEAVE_INTERFACE
}

Looking at android_audioRecorder_setRecordState, we can see capture actually being started:

Code language: cpp
void android_audioRecorder_setRecordState(CAudioRecorder* ar, SLuint32 state) {
    SL_LOGV("android_audioRecorder_setRecordState(%p, %u) entering", ar, state);

    if (ar->mAudioRecord == 0) {
        return;
    }

    switch (state) {
     case SL_RECORDSTATE_STOPPED:
         ar->mAudioRecord->stop();
         break;
     case SL_RECORDSTATE_PAUSED:
         // Note that pausing is treated like stop as this implementation only records to a buffer
         //  queue, so there is no notion of destination being "opened" or "closed" (See description
         //  of SL_RECORDSTATE in specification)
         ar->mAudioRecord->stop();
         break;
     case SL_RECORDSTATE_RECORDING:
         ar->mAudioRecord->start();
         break;
     default:
         break;
     }

}

At this point capture has started. So how does the captured data get called back to us? There is a registration step in the earlier open:

Code language: cpp
SLresult AudioStreamOpenSLES::finishCommonOpen(SLAndroidConfigurationItf configItf) {
    SLresult result = registerBufferQueueCallback();
    if (SL_RESULT_SUCCESS != result) {
        return result;
    }

    result = updateStreamParameters(configItf);
    if (SL_RESULT_SUCCESS != result) {
        return result;
    }

    Result oboeResult = configureBufferSizes(mSampleRate);
    if (Result::OK != oboeResult) {
        return (SLresult) oboeResult;
    }

    allocateFifo();

    calculateDefaultDelayBeforeCloseMillis();

    return SL_RESULT_SUCCESS;
}
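
configureBufferSizes and allocateFifo set up the buffering between the OpenSL ES callbacks and the app's callback size. The same latency/underrun trade-off is exposed to apps through setBufferSizeInFrames; a common tuning pattern looks roughly like this (the helper name tuneBuffer is ours):

Code language: cpp
#include <oboe/Oboe.h>

// Sketch: start with two bursts of buffering for low latency, then grow the
// buffer if underruns (XRuns) show up.
void tuneBuffer(std::shared_ptr<oboe::AudioStream> &stream) {
    int32_t burst = stream->getFramesPerBurst();
    stream->setBufferSizeInFrames(burst * 2);

    auto xRuns = stream->getXRunCount();
    if (xRuns && xRuns.value() > 0) {
        // Trade one extra burst of latency for fewer glitches.
        stream->setBufferSizeInFrames(stream->getBufferSizeInFrames() + burst);
    }
}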

Let's look at registerBufferQueueCallback:

Code language: cpp
SLresult AudioStreamOpenSLES::registerBufferQueueCallback() {
    // The BufferQueue
    SLresult result = (*mObjectInterface)->GetInterface(mObjectInterface, SL_IID_ANDROIDSIMPLEBUFFERQUEUE,
                                                &mSimpleBufferQueueInterface);
    if (SL_RESULT_SUCCESS != result) {
        LOGE("get buffer queue interface:%p result:%s",
             mSimpleBufferQueueInterface,
             getSLErrStr(result));
    } else {
        // Register the BufferQueue callback
        result = (*mSimpleBufferQueueInterface)->RegisterCallback(mSimpleBufferQueueInterface,
                                                                  bqCallbackGlue, this);
        if (SL_RESULT_SUCCESS != result) {
            LOGE("RegisterCallback result:%s", getSLErrStr(result));
        }
    }
    return result;
}

This takes us back into OpenSL ES. Here is what RegisterCallback does:

Code language: cpp
SLresult IBufferQueue_RegisterCallback(SLBufferQueueItf self,
    slBufferQueueCallback callback, void *pContext)
{
    SL_ENTER_INTERFACE

    IBufferQueue *thiz = (IBufferQueue *) self;
    interface_lock_exclusive(thiz);
    // verify pre-condition that media object is in the SL_PLAYSTATE_STOPPED state
    if (SL_PLAYSTATE_STOPPED == getAssociatedState(thiz)) {
        thiz->mCallback = callback;
        thiz->mContext = pContext;
        result = SL_RESULT_SUCCESS;
    } else {
        result = SL_RESULT_PRECONDITIONS_VIOLATED;
    }
    interface_unlock_exclusive(thiz);

    SL_LEAVE_INTERFACE
}
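
For comparison, an app talking to OpenSL ES directly performs essentially the same registration. A trimmed sketch of the standard NDK capture setup (the names onBufferDone/setupRecorder are ours; error checks omitted):

Code language: cpp
#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>

// Sketch of the raw OpenSL ES capture setup that Oboe wraps: create a
// recorder, fetch the Android simple buffer queue, register a callback.
static void onBufferDone(SLAndroidSimpleBufferQueueItf bq, void *context) {
    // A real app copies the filled buffer out and enqueues the next one here.
}

void setupRecorder(SLEngineItf engine, SLDataSource *source, SLDataSink *sink) {
    SLObjectItf recorderObject = nullptr;
    const SLInterfaceID ids[1] = {SL_IID_ANDROIDSIMPLEBUFFERQUEUE};
    const SLboolean req[1] = {SL_BOOLEAN_TRUE};

    (*engine)->CreateAudioRecorder(engine, &recorderObject, source, sink,
                                   1, ids, req);
    (*recorderObject)->Realize(recorderObject, SL_BOOLEAN_FALSE);

    SLAndroidSimpleBufferQueueItf bufferQueue = nullptr;
    (*recorderObject)->GetInterface(recorderObject,
            SL_IID_ANDROIDSIMPLEBUFFERQUEUE, &bufferQueue);
    (*bufferQueue)->RegisterCallback(bufferQueue, onBufferDone, nullptr);
}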

Likewise, a callback is also passed in when the native AudioRecord is created:

Code language: cpp
    ar->mAudioRecord = new android::AudioRecord(
            ar->mRecordSource,     // source
            sampleRate,            // sample rate in Hertz
            sles_to_android_sampleFormat(df_pcm),               // format
            channelMask,           // channel mask
            android::String16(),   // app ops
            0,                     // frameCount
            audioRecorder_callback,// callback_t  // capture data callback
            (void*)ar,             // user, callback data, here the AudioRecorder
            0,                     // notificationFrames
            AUDIO_SESSION_ALLOCATE,
            android::AudioRecord::TRANSFER_CALLBACK,
                                   // transfer type
            policy);   

Here is what the callback does:

Code language: cpp
static void audioRecorder_callback(int event, void* user, void *info) {
    //SL_LOGV("audioRecorder_callback(%d, %p, %p) entering", event, user, info);

    CAudioRecorder *ar = (CAudioRecorder *)user;

    if (!android::CallbackProtector::enterCbIfOk(ar->mCallbackProtector)) {
        // it is not safe to enter the callback (the track is about to go away)
        return;
    }

    void * callbackPContext = NULL;

    switch (event) {
    case android::AudioRecord::EVENT_MORE_DATA: {
        slBufferQueueCallback callback = NULL;
        android::AudioRecord::Buffer* pBuff = (android::AudioRecord::Buffer*)info;

        // push data to the buffer queue
        interface_lock_exclusive(&ar->mBufferQueue);

        if (ar->mBufferQueue.mState.count != 0) {
            assert(ar->mBufferQueue.mFront != ar->mBufferQueue.mRear);

            BufferHeader *oldFront = ar->mBufferQueue.mFront;
            BufferHeader *newFront = &oldFront[1];

            size_t availSink = oldFront->mSize - ar->mBufferQueue.mSizeConsumed;
            size_t availSource = pBuff->size;
            size_t bytesToCopy = availSink < availSource ? availSink : availSource;
            void *pDest = (char *)oldFront->mBuffer + ar->mBufferQueue.mSizeConsumed;
            memcpy(pDest, pBuff->raw, bytesToCopy);

            if (bytesToCopy < availSink) {
                // can't consume the whole or rest of the buffer in one shot
                ar->mBufferQueue.mSizeConsumed += availSource;
                // pBuff->size is already equal to bytesToCopy in this case
            } else {
                // finish pushing the buffer or push the buffer in one shot
                pBuff->size = bytesToCopy;
                ar->mBufferQueue.mSizeConsumed = 0;
                if (newFront == &ar->mBufferQueue.mArray[ar->mBufferQueue.mNumBuffers + 1]) {
                    newFront = ar->mBufferQueue.mArray;
                }
                ar->mBufferQueue.mFront = newFront;

                ar->mBufferQueue.mState.count--;
                ar->mBufferQueue.mState.playIndex++;

                // data has been copied to the buffer, and the buffer queue state has been updated
                // we will notify the client if applicable
                callback = ar->mBufferQueue.mCallback;
                // save callback data
                callbackPContext = ar->mBufferQueue.mContext;
            }
        } else { // empty queue
            // no destination to push the data
            pBuff->size = 0;
        }

        interface_unlock_exclusive(&ar->mBufferQueue);

        // notify client
        if (NULL != callback) {
            (*callback)(&ar->mBufferQueue.mItf, callbackPContext);
        }
        }
        break;

    case android::AudioRecord::EVENT_OVERRUN:
        audioRecorder_handleOverrun_lockRecord(ar);
        break;

    case android::AudioRecord::EVENT_MARKER:
        audioRecorder_handleMarker_lockRecord(ar);
        break;

    case android::AudioRecord::EVENT_NEW_POS:
        audioRecorder_handleNewPos_lockRecord(ar);
        break;

    case android::AudioRecord::EVENT_NEW_IAUDIORECORD:
        // ignore for now
        break;

    default:
        SL_LOGE("Encountered unknown AudioRecord event %d for CAudioRecord %p", event, ar);
        break;
    }

    ar->mCallbackProtector->exitCb();
}
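
The mFront/mSizeConsumed bookkeeping above is just a fixed circular array of BufferHeader slots, with one spare slot so that full and empty can be told apart. A standalone sketch of the same consume-and-advance logic (the struct names are ours):

Code language: cpp
#include <cstddef>

// Sketch of the wilhelm buffer-queue bookkeeping: the consumer (the
// AudioRecord callback) advances mFront; Enqueue advances the rear.
struct BufferHeader {
    void  *mBuffer;
    size_t mSize;
};

struct SimpleQueue {
    static constexpr int kNumBuffers = 4;
    BufferHeader mArray[kNumBuffers + 1]; // one spare slot, as in wilhelm
    BufferHeader *mFront = mArray;
    int mCount = 0;
    size_t mSizeConsumed = 0;

    // Consume up to `bytes` of the front buffer; pop it once fully consumed.
    void consume(size_t bytes) {
        BufferHeader *oldFront = mFront;
        size_t avail = oldFront->mSize - mSizeConsumed;
        if (bytes < avail) {
            mSizeConsumed += bytes;      // partial read, keep the buffer
        } else {
            mSizeConsumed = 0;           // buffer finished, advance the front
            BufferHeader *newFront = oldFront + 1;
            if (newFront == &mArray[kNumBuffers + 1]) {
                newFront = mArray;       // wrap around
            }
            mFront = newFront;
            mCount--;
        }
    }
};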

audioRecorder_callback is also where the buffer queue's callback fires; here is the callback that was registered:

Code language: cpp
// This callback handler is called every time a buffer has been processed by OpenSL ES.
static void bqCallbackGlue(SLAndroidSimpleBufferQueueItf bq, void *context) {
    bool shouldStopStream = (reinterpret_cast<AudioStreamOpenSLES *>(context))
            ->processBufferCallback(bq);
    if (shouldStopStream) {
        (reinterpret_cast<AudioStreamOpenSLES *>(context))->requestStop();
    }
}

Next, processBufferCallback:

Code language: cpp
bool AudioStreamOpenSLES::processBufferCallback(SLAndroidSimpleBufferQueueItf bq) {
    bool shouldStopStream = false;
    // Ask the app callback to process the buffer.
    DataCallbackResult result =
            fireDataCallback(mCallbackBuffer[mCallbackBufferIndex].get(), mFramesPerCallback);
    if (result == DataCallbackResult::Continue) {
        // Pass the buffer to OpenSLES.
        SLresult enqueueResult = enqueueCallbackBuffer(bq);
        if (enqueueResult != SL_RESULT_SUCCESS) {
            LOGE("%s() returned %d", __func__, enqueueResult);
            shouldStopStream = true;
        }
        // Update Oboe client position with frames handled by the callback.
        if (getDirection() == Direction::Input) {
            mFramesRead += mFramesPerCallback;
        } else {
            mFramesWritten += mFramesPerCallback;
        }
    } else if (result == DataCallbackResult::Stop) {
        LOGD("Oboe callback returned Stop");
        shouldStopStream = true;
    } else {
        LOGW("Oboe callback returned unexpected value = %d", result);
        shouldStopStream = true;
    }
    if (shouldStopStream) {
        mCallbackBufferIndex = 0;
    }
    return shouldStopStream;
}

fireDataCallback is where the data is handed back to the app:

Code language: cpp
DataCallbackResult AudioStream::fireDataCallback(void *audioData, int32_t numFrames) {
    if (!isDataCallbackEnabled()) {
        LOGW("AudioStream::%s() called with data callback disabled!", __func__);
        return DataCallbackResult::Stop; // Should not be getting called
    }

    DataCallbackResult result;
    if (mDataCallback) {
        result = mDataCallback->onAudioReady(this, audioData, numFrames);
    } else {
        result = onDefaultCallback(audioData, numFrames);
    }
    // On Oreo, we might get called after returning stop.
    // So block that here.
    setDataCallbackEnabled(result == DataCallbackResult::Continue);

    return result;
}
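
On the app side, the input direction is symmetric to the playback example at the beginning of this article: for Direction::Input, onAudioReady hands the app freshly captured frames to read. A minimal sketch (the PeakMeter class is ours):

Code language: cpp
#include <oboe/Oboe.h>
#include <algorithm>
#include <cmath>

// Sketch: an input-direction data callback that just tracks the peak level
// of the captured audio. Real apps should avoid locks/allocation in here.
class PeakMeter : public oboe::AudioStreamDataCallback {
public:
    oboe::DataCallbackResult onAudioReady(oboe::AudioStream *stream,
                                          void *audioData,
                                          int32_t numFrames) override {
        const float *input = static_cast<const float *>(audioData);
        int32_t numSamples = numFrames * stream->getChannelCount();
        for (int32_t i = 0; i < numSamples; ++i) {
            mPeak = std::max(mPeak, std::fabs(input[i]));
        }
        return oboe::DataCallbackResult::Continue;
    }

    float peak() const { return mPeak; }

private:
    float mPeak = 0.0f;
};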

Now for the playback callback. The callback passed in when the AudioTrack is created is as follows:

Code language: cpp
        android::AudioTrack* pat = new android::AudioTrack(
                pAudioPlayer->mStreamType,                           // streamType
                sampleRate,                                          // sampleRate
                sles_to_android_sampleFormat(df_pcm),                // format
                channelMask,                                         // channel mask
                0,                                                   // frameCount
                policy,                                              // flags
                audioTrack_callBack_pullFromBuffQueue,               // callback
                (void *) pAudioPlayer,                               // user
                notificationFrames,                                  // see comment above
                pAudioPlayer->mSessionId);

That callback is audioTrack_callBack_pullFromBuffQueue. Here is what it does:

Code language: cpp
//-----------------------------------------------------------------------------
// Callback associated with an AudioTrack of an SL ES AudioPlayer that gets its data
// from a buffer queue. This will not be called once the AudioTrack has been destroyed.
static void audioTrack_callBack_pullFromBuffQueue(int event, void* user, void *info) {
    CAudioPlayer *ap = (CAudioPlayer *)user;

    if (!android::CallbackProtector::enterCbIfOk(ap->mCallbackProtector)) {
        // it is not safe to enter the callback (the track is about to go away)
        return;
    }

    void * callbackPContext = NULL;
    switch (event) {

    case android::AudioTrack::EVENT_MORE_DATA: {
        //SL_LOGV("received event EVENT_MORE_DATA from AudioTrack TID=%d", gettid());
        slPrefetchCallback prefetchCallback = NULL;
        void *prefetchContext = NULL;
        SLuint32 prefetchEvents = SL_PREFETCHEVENT_NONE;
        android::AudioTrack::Buffer* pBuff = (android::AudioTrack::Buffer*)info;

        // retrieve data from the buffer queue
        interface_lock_exclusive(&ap->mBufferQueue);

        if (ap->mBufferQueue.mCallbackPending) {
            // call callback with lock not held
            slBufferQueueCallback callback = ap->mBufferQueue.mCallback;
            if (NULL != callback) {
                callbackPContext = ap->mBufferQueue.mContext;
                interface_unlock_exclusive(&ap->mBufferQueue);
                (*callback)(&ap->mBufferQueue.mItf, callbackPContext);
                interface_lock_exclusive(&ap->mBufferQueue);
                ap->mBufferQueue.mCallbackPending = false;
            }
        }

        if (ap->mBufferQueue.mState.count != 0) {
            //SL_LOGV("nbBuffers in queue = %u",ap->mBufferQueue.mState.count);
            assert(ap->mBufferQueue.mFront != ap->mBufferQueue.mRear);

            BufferHeader *oldFront = ap->mBufferQueue.mFront;
            BufferHeader *newFront = &oldFront[1];

            size_t availSource = oldFront->mSize - ap->mBufferQueue.mSizeConsumed;
            size_t availSink = pBuff->size;
            size_t bytesToCopy = availSource < availSink ? availSource : availSink;
            void *pSrc = (char *)oldFront->mBuffer + ap->mBufferQueue.mSizeConsumed;
            memcpy(pBuff->raw, pSrc, bytesToCopy);

            if (bytesToCopy < availSource) {
                ap->mBufferQueue.mSizeConsumed += bytesToCopy;
                // pBuff->size is already equal to bytesToCopy in this case
            } else {
                // consumed an entire buffer, dequeue
                pBuff->size = bytesToCopy;
                ap->mBufferQueue.mSizeConsumed = 0;
                if (newFront ==
                        &ap->mBufferQueue.mArray
                            [ap->mBufferQueue.mNumBuffers + 1])
                {
                    newFront = ap->mBufferQueue.mArray;
                }
                ap->mBufferQueue.mFront = newFront;

                ap->mBufferQueue.mState.count--;
                ap->mBufferQueue.mState.playIndex++;
                ap->mBufferQueue.mCallbackPending = true;
            }
        } else { // empty queue
            // signal no data available
            pBuff->size = 0;

            // signal we're at the end of the content, but don't pause (see note in function)
            audioPlayer_dispatch_headAtEnd_lockPlay(ap, false /*set state to paused?*/, false);

            // signal underflow to prefetch status itf
            if (IsInterfaceInitialized(&ap->mObject, MPH_PREFETCHSTATUS)) {
                ap->mPrefetchStatus.mStatus = SL_PREFETCHSTATUS_UNDERFLOW;
                ap->mPrefetchStatus.mLevel = 0;
                // callback or no callback?
                prefetchEvents = ap->mPrefetchStatus.mCallbackEventsMask &
                        (SL_PREFETCHEVENT_STATUSCHANGE | SL_PREFETCHEVENT_FILLLEVELCHANGE);
                if (SL_PREFETCHEVENT_NONE != prefetchEvents) {
                    prefetchCallback = ap->mPrefetchStatus.mCallback;
                    prefetchContext  = ap->mPrefetchStatus.mContext;
                }
            }

            // stop the track so it restarts playing faster when new data is enqueued
            ap->mTrackPlayer->stop();
        }
        interface_unlock_exclusive(&ap->mBufferQueue);

        // notify client
        if (NULL != prefetchCallback) {
            assert(SL_PREFETCHEVENT_NONE != prefetchEvents);
            // spec requires separate callbacks for each event
            if (prefetchEvents & SL_PREFETCHEVENT_STATUSCHANGE) {
                (*prefetchCallback)(&ap->mPrefetchStatus.mItf, prefetchContext,
                        SL_PREFETCHEVENT_STATUSCHANGE);
            }
            if (prefetchEvents & SL_PREFETCHEVENT_FILLLEVELCHANGE) {
                (*prefetchCallback)(&ap->mPrefetchStatus.mItf, prefetchContext,
                        SL_PREFETCHEVENT_FILLLEVELCHANGE);
            }
        }
    }
    break;

    case android::AudioTrack::EVENT_MARKER:
        //SL_LOGI("received event EVENT_MARKER from AudioTrack");
        audioTrack_handleMarker_lockPlay(ap);
        break;

    case android::AudioTrack::EVENT_NEW_POS:
        //SL_LOGI("received event EVENT_NEW_POS from AudioTrack");
        audioTrack_handleNewPos_lockPlay(ap);
        break;

    case android::AudioTrack::EVENT_UNDERRUN:
        //SL_LOGI("received event EVENT_UNDERRUN from AudioTrack");
        audioTrack_handleUnderrun_lockPlay(ap);
        break;

    case android::AudioTrack::EVENT_NEW_IAUDIOTRACK:
        // ignore for now
        break;

    case android::AudioTrack::EVENT_BUFFER_END:
    case android::AudioTrack::EVENT_LOOP_END:
    case android::AudioTrack::EVENT_STREAM_END:
        // These are unexpected so fall through
        FALLTHROUGH_INTENDED;
    default:
        // FIXME where does the notification of SL_PLAYEVENT_HEADMOVING fit?
        SL_LOGE("Encountered unknown AudioTrack event %d for CAudioPlayer %p", event,
                (CAudioPlayer *)user);
        break;
    }

    ap->mCallbackProtector->exitCb();
}

As you can see, the callback again comes from ap->mBufferQueue.mCallback, registered during open just as on the capture side. Next, let's look at how AAudio starts:

Code language: cpp
Result AudioStreamAAudio::requestStart() {
    std::lock_guard<std::mutex> lock(mLock);
    AAudioStream *stream = mAAudioStream.load();
    if (stream != nullptr) {
        // Avoid state machine errors in O_MR1.
        if (getSdkVersion() <= __ANDROID_API_O_MR1__) {
            StreamState state = static_cast<StreamState>(mLibLoader->stream_getState(stream));
            if (state == StreamState::Starting || state == StreamState::Started) {
                // WARNING: On P, AAudio is returning ErrorInvalidState for Output and OK for Input.
                return Result::OK;
            }
        }
        if (isDataCallbackSpecified()) {
            setDataCallbackEnabled(true);
        }
        mStopThreadAllowed = true;
        return static_cast<Result>(mLibLoader->stream_requestStart(stream));
    } else {
        return Result::ErrorClosed;
    }
}

This simply calls straight into the libaaudio interface. Next, let's trace where the callback comes from; when opening the AAudio stream there is the following logic:

Code language: cpp
if (isDataCallbackSpecified()) {
        mLibLoader->builder_setDataCallback(aaudioBuilder, oboe_aaudio_data_callback_proc, this);
        mLibLoader->builder_setFramesPerDataCallback(aaudioBuilder, getFramesPerDataCallback());

        if (!isErrorCallbackSpecified()) {
            // The app did not specify a callback so we should specify
            // our own so the stream gets closed and stopped.
            mErrorCallback = &mDefaultErrorCallback;
        }
        mLibLoader->builder_setErrorCallback(aaudioBuilder, internalErrorCallback, this);
    }

Now look at oboe_aaudio_data_callback_proc:

Code language: cpp
static aaudio_data_callback_result_t oboe_aaudio_data_callback_proc(
        AAudioStream *stream,
        void *userData,
        void *audioData,
        int32_t numFrames) {

    AudioStreamAAudio *oboeStream = reinterpret_cast<AudioStreamAAudio*>(userData);
    if (oboeStream != nullptr) {
        return static_cast<aaudio_data_callback_result_t>(
                oboeStream->callOnAudioReady(stream, audioData, numFrames));

    } else {
        return static_cast<aaudio_data_callback_result_t>(DataCallbackResult::Stop);
    }
}
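
The registration Oboe performs here is the plain AAudio NDK callback pattern. For reference, doing the same thing directly against libaaudio looks roughly like this (the names myCallback/openWithCallback are ours; error checks trimmed):

Code language: cpp
#include <aaudio/AAudio.h>

// Sketch: register a data callback directly with AAudio, which is what
// builder_setDataCallback ultimately calls through to.
static aaudio_data_callback_result_t myCallback(AAudioStream *stream,
                                                void *userData,
                                                void *audioData,
                                                int32_t numFrames) {
    // Fill (output) or read (input) numFrames worth of audioData here.
    return AAUDIO_CALLBACK_RESULT_CONTINUE;
}

AAudioStream *openWithCallback() {
    AAudioStreamBuilder *builder = nullptr;
    AAudio_createStreamBuilder(&builder);
    AAudioStreamBuilder_setDataCallback(builder, myCallback, nullptr);

    AAudioStream *stream = nullptr;
    AAudioStreamBuilder_openStream(builder, &stream);
    AAudioStreamBuilder_delete(builder);
    return stream;
}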

With the MMAP mechanism there is no need to copy the data; the buffer address is passed through directly:

Code language: cpp
DataCallbackResult AudioStreamAAudio::callOnAudioReady(AAudioStream * /*stream*/,
                                                                 void *audioData,
                                                                 int32_t numFrames) {
    DataCallbackResult result = fireDataCallback(audioData, numFrames);
    if (result == DataCallbackResult::Continue) {
        return result;
    } else {
        if (result == DataCallbackResult::Stop) {
            LOGD("Oboe callback returned DataCallbackResult::Stop");
        } else {
            LOGE("Oboe callback returned unexpected value = %d", result);
        }

        // Returning Stop caused various problems before S. See #1230
        if (OboeGlobals::areWorkaroundsEnabled() && getSdkVersion() <= __ANDROID_API_R__) {
            launchStopThread();
            return DataCallbackResult::Continue;
        } else {
            return DataCallbackResult::Stop; // OK >= API_S
        }
    }
}

At this point the data again reaches fireDataCallback (the same function we saw in the OpenSL ES path), from which it is delivered to the app:

Code language: cpp
DataCallbackResult AudioStream::fireDataCallback(void *audioData, int32_t numFrames) {
    if (!isDataCallbackEnabled()) {
        LOGW("AudioStream::%s() called with data callback disabled!", __func__);
        return DataCallbackResult::Stop; // Should not be getting called
    }

    DataCallbackResult result;
    if (mDataCallback) {
        result = mDataCallback->onAudioReady(this, audioData, numFrames);
    } else {
        result = onDefaultCallback(audioData, numFrames);
    }
    // On Oreo, we might get called after returning stop.
    // So block that here.
    setDataCallbackEnabled(result == DataCallbackResult::Continue);

    return result;
}

That concludes the walkthrough of Oboe's overall flow. Oboe's strength is that it brings its own compatibility layer while still exploiting the low-latency mechanisms of OpenSL ES and AAudio; design-wise, Oboe and AAudio look very much alike.
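
In practice an app can let that compatibility layer pick the backend and then check which one was chosen; a small closing sketch (the helper name logBackend is ours):

Code language: cpp
#include <oboe/Oboe.h>
#include <android/log.h>

// Sketch: log whether the compatibility layer selected AAudio or OpenSL ES.
void logBackend(std::shared_ptr<oboe::AudioStream> &stream) {
    const char *api = (stream->getAudioApi() == oboe::AudioApi::AAudio)
                              ? "AAudio" : "OpenSL ES";
    __android_log_print(ANDROID_LOG_INFO, "OboeDemo",
                        "stream is using %s, framesPerBurst = %d",
                        api, stream->getFramesPerBurst());
}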
