FFmpeg Notes (11): Transcoding with FFmpeg, and Playing a Network Audio Stream on Android

Photo: West Lake, Hangzhou

Approach:
1. FFmpeg decodes the MP3 into PCM audio data.
2. An OpenSL ES engine creates an AudioPlayer, which internally drives AudioTrack.

Error encountered: Error #include nested too deeply
Cause: two C header files #include each other. Fixes:

  • 1. Extract the part shared by both headers into a separate header file.
  • 2. Add include guards: #ifndef ... #define ... #endif

The x86 build failed to produce a .so; a version incompatibility is suspected, since other related .so files build fine for x86. I will update this section once the cause is found.

Writing NDK code in Android Studio is quite convenient:

Project structure: (screenshot omitted)

Java code:

package com.hejunlin.ffmpegaudio;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.view.View;
import android.widget.EditText;
import android.widget.TextView;
public class MainActivity extends AppCompatActivity {
    private EditText mInput;
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        mInput = (EditText) findViewById(R.id.et_input);
        mInput.setText("http://qzone.60dj.com/huiyuan/201704/19/201704190533197825_35285.mp3");
        findViewById(R.id.bt_play).setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                NativePlayer.play(mInput.getText().toString().trim());
            }
        });
        findViewById(R.id.bt_pause).setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                NativePlayer.stop();
            }
        });
    }
}

NativePlayer:

package com.hejunlin.ffmpegaudio;
/**
 * Created by hejunlin on 17/5/6.
 */
public class NativePlayer {
    static {
        System.loadLibrary("NativePlayer");
    }
    public static native void play(String url);
    public static native void stop();
}

Layout file:

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/activity_main"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="com.hejunlin.ffmpegaudio.MainActivity">
    <TextView
        android:id="@+id/tv_input"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:padding="10dp"
        android:layout_marginTop="30dp"
        android:text="Play URL:"
        android:textSize="20sp"/>
    <EditText
        android:id="@+id/et_input"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_toRightOf="@id/tv_input"
        android:padding="10dp"/>
    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_below="@id/et_input"
        android:orientation="horizontal">
        <Button
            android:id="@+id/bt_play"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginTop="10dp"
            android:layout_marginLeft="60dp"
            android:background="@drawable/button_shape"
            android:textColor="@color/white"
            android:text="Play" />
        <Button
            android:id="@+id/bt_pause"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_marginTop="10dp"
            android:background="@drawable/button_shape"
            android:textColor="@color/white"
            android:layout_marginLeft="80dp"
            android:text="Pause" />
    </LinearLayout>
</RelativeLayout>

JNI code: OpenSL_ES_Core.c

//
// Created by hejunlin on 17/5/6.
//
#include "OpenSL_ES_Core.h"
#include "FFmpegCore.h"
#include <assert.h>
#include <jni.h>
#include <string.h>
#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>
// for native asset manager
#include <sys/types.h>
#include <android/asset_manager.h>
#include <android/asset_manager_jni.h>
#include "log.h"
// engine interfaces
static SLObjectItf engineObject = NULL;
static SLEngineItf engineEngine;
// output mix interfaces
static SLObjectItf outputMixObject = NULL;
static SLEnvironmentalReverbItf outputMixEnvironmentalReverb = NULL;
// buffer queue player interfaces
static SLObjectItf bqPlayerObject = NULL;
static SLPlayItf bqPlayerPlay;
static SLAndroidSimpleBufferQueueItf bqPlayerBufferQueue;
static SLEffectSendItf bqPlayerEffectSend;
static SLMuteSoloItf bqPlayerMuteSolo;
static SLVolumeItf bqPlayerVolume;
// aux effect on the output mix, used by the buffer queue player
static const SLEnvironmentalReverbSettings reverbSettings =
        SL_I3DL2_ENVIRONMENT_PRESET_STONECORRIDOR;
static void *buffer;
static size_t bufferSize;
// this callback handler is called every time a buffer finishes playing
void bqPlayerCallback(SLAndroidSimpleBufferQueueItf bq, void *context)
{
    LOGD(">> buffere queue callback");
    assert(bq == bqPlayerBufferQueue);
    bufferSize = 0;
    //assert(NULL == context);
    getPCM(&buffer, &bufferSize);
    // for streaming playback, replace this test by logic to find and fill the next buffer
    if (NULL != buffer && 0 != bufferSize) {
        SLresult result;
        // enqueue another buffer
        result = (*bqPlayerBufferQueue)->Enqueue(bqPlayerBufferQueue, buffer,
                                                 bufferSize);
        // the most likely other result is SL_RESULT_BUFFER_INSUFFICIENT,
        // which for this code example would indicate a programming error
        assert(SL_RESULT_SUCCESS == result);
        (void)result;
    }
}
void initOpenSLES()
{
    LOGD(">> initOpenSLES...");
    SLresult result;
    // 1. create the engine
    result = slCreateEngine(&engineObject, 0, NULL, 0, NULL, NULL);
    LOGD(">> initOpenSLES... step 1, result = %d", result);
    // 2. realize the engine
    result = (*engineObject)->Realize(engineObject, SL_BOOLEAN_FALSE);
    LOGD(">> initOpenSLES...step 2, result = %d", result);
    // 3. get the engine interface, which is needed in order to create other objects
    result = (*engineObject)->GetInterface(engineObject, SL_IID_ENGINE, &engineEngine);
    LOGD(">> initOpenSLES...step 3, result = %d", result);
    // 4. create the output mix, with environmental reverb as a non-required interface
    const SLInterfaceID ids[1] = {SL_IID_ENVIRONMENTALREVERB};
    const SLboolean req[1] = {SL_BOOLEAN_FALSE};
    result = (*engineEngine)->CreateOutputMix(engineEngine, &outputMixObject, 1, ids, req);
    LOGD(">> initOpenSLES...step 4, result = %d", result);
    // 5. realize the output mix
    result = (*outputMixObject)->Realize(outputMixObject, SL_BOOLEAN_FALSE);
    LOGD(">> initOpenSLES...step 5, result = %d", result);
    // 6. get the environmental reverb interface
    // this could fail if the environmental reverb effect is not available,
    // either because the feature is not present, excessive CPU load, or
    // the required MODIFY_AUDIO_SETTINGS permission was not requested and granted
    result = (*outputMixObject)->GetInterface(outputMixObject, SL_IID_ENVIRONMENTALREVERB,
                                              &outputMixEnvironmentalReverb);
    if (SL_RESULT_SUCCESS == result) {
        result = (*outputMixEnvironmentalReverb)->SetEnvironmentalReverbProperties(
                outputMixEnvironmentalReverb, &reverbSettings);
        LOGD(">> initOpenSLES...step 6, result = %d", result);
    }
}
// init buffer queue
void initBufferQueue(int rate, int channel, int bitsPerSample)
{
    LOGD(">> initBufferQueue");
    SLresult result;
    // configure audio source
    SLDataLocator_AndroidSimpleBufferQueue loc_bufq = {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};
    SLDataFormat_PCM format_pcm;
    format_pcm.formatType = SL_DATAFORMAT_PCM;
    format_pcm.numChannels = channel;
    format_pcm.samplesPerSec = rate * 1000;
    format_pcm.bitsPerSample = bitsPerSample;
    format_pcm.containerSize = 16;
    if (channel == 2)
        format_pcm.channelMask = SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT;
    else
        format_pcm.channelMask = SL_SPEAKER_FRONT_CENTER;
    format_pcm.endianness = SL_BYTEORDER_LITTLEENDIAN;
    SLDataSource audioSrc = {&loc_bufq, &format_pcm};
    // configure audio sink
    SLDataLocator_OutputMix loc_outmix = {SL_DATALOCATOR_OUTPUTMIX, outputMixObject};
    SLDataSink audioSnk = {&loc_outmix, NULL};
    // create audio player
    const SLInterfaceID ids[3] = {SL_IID_BUFFERQUEUE, SL_IID_EFFECTSEND,
            /*SL_IID_MUTESOLO,*/ SL_IID_VOLUME};
    const SLboolean req[3] = {SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE,
            /*SL_BOOLEAN_TRUE,*/ SL_BOOLEAN_TRUE};
    result = (*engineEngine)->CreateAudioPlayer(engineEngine, &bqPlayerObject, &audioSrc, &audioSnk,
                                                3, ids, req);
    assert(SL_RESULT_SUCCESS == result);
    (void)result;
    // realize the player
    result = (*bqPlayerObject)->Realize(bqPlayerObject, SL_BOOLEAN_FALSE);
    assert(SL_RESULT_SUCCESS == result);
    (void)result;
    // get the play interface
    result = (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_PLAY, &bqPlayerPlay);
    assert(SL_RESULT_SUCCESS == result);
    (void)result;
    // get the buffer queue interface
    result = (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_BUFFERQUEUE,
                                             &bqPlayerBufferQueue);
    assert(SL_RESULT_SUCCESS == result);
    (void)result;
    // register callback on the buffer queue
    result = (*bqPlayerBufferQueue)->RegisterCallback(bqPlayerBufferQueue, bqPlayerCallback, NULL);
    assert(SL_RESULT_SUCCESS == result);
    (void)result;
    // get the effect send interface
    result = (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_EFFECTSEND,
                                             &bqPlayerEffectSend);
    assert(SL_RESULT_SUCCESS == result);
    (void)result;
    // get the volume interface
    result = (*bqPlayerObject)->GetInterface(bqPlayerObject, SL_IID_VOLUME, &bqPlayerVolume);
    assert(SL_RESULT_SUCCESS == result);
    (void)result;
    // set the player's state to playing
    result = (*bqPlayerPlay)->SetPlayState(bqPlayerPlay, SL_PLAYSTATE_PLAYING);
    assert(SL_RESULT_SUCCESS == result);
    (void)result;
}
// stop the native audio system
void stop()
{
    // destroy buffer queue audio player object, and invalidate all associated interfaces
    if (bqPlayerObject != NULL) {
        (*bqPlayerObject)->Destroy(bqPlayerObject);
        bqPlayerObject = NULL;
        bqPlayerPlay = NULL;
        bqPlayerBufferQueue = NULL;
        bqPlayerEffectSend = NULL;
        bqPlayerMuteSolo = NULL;
        bqPlayerVolume = NULL;
    }
    // destroy output mix object, and invalidate all associated interfaces
    if (outputMixObject != NULL) {
        (*outputMixObject)->Destroy(outputMixObject);
        outputMixObject = NULL;
        outputMixEnvironmentalReverb = NULL;
    }
    // destroy engine object, and invalidate all associated interfaces
    if (engineObject != NULL) {
        (*engineObject)->Destroy(engineObject);
        engineObject = NULL;
        engineEngine = NULL;
    }
    // release the FFmpeg decoder
    releaseFFmpeg();
}
void play(char *url)
{
    int rate, channel;
    LOGD("...get url=%s", url);
    // 1. initialize the FFmpeg decoder
    initFFmpeg(&rate, &channel, url);
    // 2. initialize OpenSL ES
    initOpenSLES();
    // 3. initialize the buffer queue player and start playback
    initBufferQueue(rate, channel, SL_PCMSAMPLEFORMAT_FIXED_16);
    // 4. prime the queue by invoking the callback once
    bqPlayerCallback(bqPlayerBufferQueue, NULL);
}

FFmpegCore.c

#include "log.h"
#include "FFmpegCore.h"
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"
#include "libswresample/swresample.h"
#include "libavutil/samplefmt.h"
#include "libavutil/opt.h"   // needed for av_opt_set_int / av_opt_set_sample_fmt
#include <stdlib.h>          // needed for malloc / realloc / free
#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>
uint8_t *outputBuffer;
size_t outputBufferSize;
AVPacket packet;
int audioStream;
AVFrame *aFrame;
SwrContext *swr;
AVFormatContext *aFormatCtx;
AVCodecContext *aCodecCtx;
int initFFmpeg(int *rate, int *channel, char *url) {
    av_register_all();
    aFormatCtx = avformat_alloc_context();
    LOGD("ffmpeg get url=:%s", url);
    // URL of the network audio stream
    char *file_name = url;
    // Open audio file
    if (avformat_open_input(&aFormatCtx, file_name, NULL, NULL) != 0) {
        LOGE("Couldn't open file:%s\n", file_name);
        return -1; // Couldn't open file
    }
    // Retrieve stream information
    if (avformat_find_stream_info(aFormatCtx, NULL) < 0) {
        LOGE("Couldn't find stream information.");
        return -1;
    }
    // Find the first audio stream
    int i;
    audioStream = -1;
    for (i = 0; i < aFormatCtx->nb_streams; i++) {
        if (aFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO &&
            audioStream < 0) {
            audioStream = i;
        }
    }
    if (audioStream == -1) {
        LOGE("Couldn't find audio stream!");
        return -1;
    }
    // Get a pointer to the codec context for the audio stream
    aCodecCtx = aFormatCtx->streams[audioStream]->codec;
    // Find the decoder for the audio stream
    AVCodec *aCodec = avcodec_find_decoder(aCodecCtx->codec_id);
    if (!aCodec) {
        LOGE("Unsupported codec!");
        return -1;
    }
    if (avcodec_open2(aCodecCtx, aCodec, NULL) < 0) {
        LOGE("Could not open codec.");
        return -1; // Could not open codec
    }
    aFrame = av_frame_alloc();
    // set up the sample-format converter (to S16 interleaved)
    swr = swr_alloc();
    av_opt_set_int(swr, "in_channel_layout",  aCodecCtx->channel_layout, 0);
    av_opt_set_int(swr, "out_channel_layout", aCodecCtx->channel_layout,  0);
    av_opt_set_int(swr, "in_sample_rate",     aCodecCtx->sample_rate, 0);
    av_opt_set_int(swr, "out_sample_rate",    aCodecCtx->sample_rate, 0);
    av_opt_set_sample_fmt(swr, "in_sample_fmt",  aCodecCtx->sample_fmt, 0);
    av_opt_set_sample_fmt(swr, "out_sample_fmt", AV_SAMPLE_FMT_S16,  0);
    swr_init(swr);
    // allocate the PCM output buffer
    outputBufferSize = 8192;
    outputBuffer = (uint8_t *) malloc(sizeof(uint8_t) * outputBufferSize);
    // return the sample rate and channel count to the caller
    *rate = aCodecCtx->sample_rate;
    *channel = aCodecCtx->channels;
    return 0;
}
// fetch decoded PCM data; invoked repeatedly from the buffer-queue callback
int getPCM(void **pcm, size_t *pcmSize) {
    LOGD(">> getPcm");
    while (av_read_frame(aFormatCtx, &packet) >= 0) {
        int frameFinished = 0;
        // Is this a packet from the audio stream?
        if (packet.stream_index == audioStream) {
            avcodec_decode_audio4(aCodecCtx, aFrame, &frameFinished, &packet);
            if (frameFinished) {
                // data_size is the number of bytes the decoded audio occupies
                int data_size = av_samples_get_buffer_size(
                        aFrame->linesize, aCodecCtx->channels,
                        aFrame->nb_samples, aCodecCtx->sample_fmt, 1);
                LOGD(">> getPcm data_size=%d", data_size);
                // note: data_size reflects the decoder's native sample format,
                // not the S16 output, so this realloc sizing may be off
                if (data_size > outputBufferSize) {
                    outputBufferSize = data_size;
                    outputBuffer = (uint8_t *) realloc(outputBuffer,
                                                       sizeof(uint8_t) * outputBufferSize);
                }
                // convert the audio to S16 interleaved PCM
                swr_convert(swr, &outputBuffer, aFrame->nb_samples,
                            (uint8_t const **) aFrame->extended_data,
                            aFrame->nb_samples);
                // hand the PCM buffer back to the caller
                *pcm = outputBuffer;
                *pcmSize = data_size;
                return 0;
            }
        }
    }
    return -1;
}
// release FFmpeg resources
int releaseFFmpeg()
{
    av_packet_unref(&packet);
    free(outputBuffer);
    av_frame_free(&aFrame);
    avcodec_close(aCodecCtx);
    avformat_close_input(&aFormatCtx);
    return 0;
}

NativePlayer.c

//
// Created by hejunlin on 17/5/6.
//
#include "log.h"
#include "com_hejunlin_ffmpegaudio_NativePlayer.h"
#include "OpenSL_ES_Core.h"
JNIEXPORT void JNICALL
Java_com_hejunlin_ffmpegaudio_NativePlayer_play(JNIEnv *env, jclass type, jstring url_) {
    const char *url = (*env)->GetStringUTFChars(env, url_, 0);
    LOGD("start playaudio... url=%s", url);
    play((char *) url);  // play() takes a non-const char *
    (*env)->ReleaseStringUTFChars(env, url_, url);
}
JNIEXPORT void JNICALL
Java_com_hejunlin_ffmpegaudio_NativePlayer_stop(JNIEnv *env, jclass type) {
    LOGD("stop");
    stop();
}

Either CMake or ndk-build can build the project; the output is libNativePlayer.so.
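The post doesn't include the build script itself; a minimal CMakeLists.txt along these lines could work. The source paths, include directory, and the avutil-55 library name are assumptions (the other library names appear in the runtime log below):

```cmake
# Hypothetical CMakeLists.txt sketch; adjust paths to your project layout.
cmake_minimum_required(VERSION 3.4.1)

add_library(NativePlayer SHARED
        src/main/jni/NativePlayer.c
        src/main/jni/OpenSL_ES_Core.c
        src/main/jni/FFmpegCore.c)

# Prebuilt shared FFmpeg libraries, imported by name
foreach(ff avcodec-57 avformat-57 avutil-55 swresample-2 swscale-4)
    add_library(${ff} SHARED IMPORTED)
    set_target_properties(${ff} PROPERTIES IMPORTED_LOCATION
            ${CMAKE_SOURCE_DIR}/libs/${ANDROID_ABI}/lib${ff}.so)
endforeach()

target_include_directories(NativePlayer PRIVATE src/main/jni/include)

# OpenSLES, android and log are NDK system libraries
target_link_libraries(NativePlayer
        avcodec-57 avformat-57 avutil-55 swresample-2 swscale-4
        OpenSLES android log)
```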

Result: (screenshot omitted)

Log output:

05-07 10:14:04.573 D/Surface: Surface::setBuffersDimensions(this=0xf45b6300,w=1080,h=1920)
05-07 10:14:04.577 W/linker: libNativePlayer.so: unused DT entry: type 0x6ffffffe arg 0x1414
05-07 10:14:04.577 W/linker: libNativePlayer.so: unused DT entry: type 0x6fffffff arg 0x4
05-07 10:14:04.578 W/linker: libavcodec-57.so: unused DT entry: type 0x6ffffffe arg 0x5da4
05-07 10:14:04.578 W/linker: libavcodec-57.so: unused DT entry: type 0x6fffffff arg 0x2
05-07 10:14:04.578 W/linker: libavformat-57.so: unused DT entry: type 0x6ffffffe arg 0x6408
05-07 10:14:04.578 W/linker: libavformat-57.so: unused DT entry: type 0x6fffffff arg 0x2
05-07 10:14:04.578 W/linker: libswresample-2.so: unused DT entry: type 0x6ffffffe arg 0xcd4
05-07 10:14:04.578 W/linker: libswresample-2.so: unused DT entry: type 0x6fffffff arg 0x1
05-07 10:14:04.578 W/linker: libswscale-4.so: unused DT entry: type 0x6ffffffe arg 0xd70
05-07 10:14:04.578 W/linker: libswscale-4.so: unused DT entry: type 0x6fffffff arg 0x1
05-07 10:14:04.589 D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/NativePlayer.c: Java_com_hejunlin_ffmpegaudio_NativePlayer_play:start playaudio... url=http://qzone.60dj.com/huiyuan/201704/19/201704190533197825_35285.mp3
05-07 10:14:04.589 D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/OpenSL_ES_Core.c: play:...get url=http://qzone.60dj.com/huiyuan/201704/19/201704190533197825_35285.mp3
05-07 10:14:04.590 D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/FFmpegCore.c: initFFmpeg:ffmpeg get url=:http://qzone.60dj.com/huiyuan/201704/19/201704190533197825_35285.mp3
05-07 10:14:04.696 D/libc-netbsd: getaddrinfo: qzone.60dj.com get result from proxy >>
05-07 10:14:04.949 D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/OpenSL_ES_Core.c: initOpenSLES:>> initOpenSLES...
05-07 10:14:04.950 D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/OpenSL_ES_Core.c: initOpenSLES:>> initOpenSLES... step 1, result = 0
05-07 10:14:04.950 D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/OpenSL_ES_Core.c: initOpenSLES:>> initOpenSLES...step 2, result = 0
05-07 10:14:04.950 D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/OpenSL_ES_Core.c: initOpenSLES:>> initOpenSLES...step 3, result = 0
05-07 10:14:04.950 D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/OpenSL_ES_Core.c: initOpenSLES:>> initOpenSLES...step 4, result = 0
05-07 10:14:04.950 D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/OpenSL_ES_Core.c: initOpenSLES:>> initOpenSLES...step 5, result = 0
05-07 10:14:04.950 W/libOpenSLES: Leaving Object::GetInterface (SL_RESULT_FEATURE_UNSUPPORTED)
05-07 10:14:04.950 D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/OpenSL_ES_Core.c: initBufferQueue:>> initBufferQueue
05-07 10:14:04.951 V/AudioTrack: set(): streamType 3, sampleRate 44100, format 0x1, channelMask 0x3, frameCount 0, flags #0, notificationFrames 0, sessionId 774, transferType 0
05-07 10:14:04.951 V/AudioTrack: set() streamType 3 frameCount 0 flags 0000
05-07 10:14:04.951 D/AudioTrack: audiotrack 0xf459cd80 set Type 3, rate 44100, fmt 1, chn 3, fcnt 0, flags 0000
05-07 10:14:04.951 D/AudioTrack: mChannelMask 0x3
05-07 10:14:04.953 V/AudioTrack: createTrack_l() output 2 afLatency 21
05-07 10:14:04.953 V/AudioTrack: afFrameCount=1024, minBufCount=1, afSampleRate=48000, afLatency=21
05-07 10:14:04.953 V/AudioTrack: minFrameCount: 2822, afFrameCount=1024, minBufCount=3, sampleRate=44100, afSampleRate=48000, afLatency=21
05-07 10:14:04.954 D/AudioTrackCenter: addTrack, trackId:0xdaf0c000, frameCount:2822, sampleRate:44100, trackPtr:0xf459cd80
05-07 10:14:04.954 D/AudioTrackCenter: mAfSampleRate 48000, sampleRate 44100, AfFrameCount 1024 , mAfSampleRate 48000, frameCount 2822
05-07 10:14:04.979 D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/OpenSL_ES_Core.c: bqPlayerCallback:>> buffere queue callback
05-07 10:14:04.979 D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/FFmpegCore.c: getPCM:>> getPcm
05-07 10:14:04.979 D/AudioTrackShared: front(0x0), mIsOut 1, avail 2822, mFrameCount 2822, filled 0
05-07 10:14:04.979 V/AudioTrack: obtainBuffer(940) returned 2822 = 940 + 1882 err 0
05-07 10:14:04.979 D/AudioTrackShared: front(0x0), mIsOut 1, interrupt() FUTEX_WAKE
05-07 10:14:04.979 D/AudioTrack: audiotrack 0xf459cd80 stop done
05-07 10:14:04.980 D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/FFmpegCore.c: getPCM:>> getPcm data_size=188
05-07 10:14:04.980 D/AudioTrackShared: front(0x0), mIsOut 1, avail 2822, mFrameCount 2822, filled 0
05-07 10:14:04.980 V/AudioTrack: obtainBuffer(940) returned 2822 = 940 + 1882 err 0
05-07 10:14:04.980 D/AudioTrackShared: front(0x0), mIsOut 1, avail 2775, mFrameCount 2822, filled 47
05-07 10:14:04.980 V/AudioTrack: obtainBuffer(893) returned 2775 = 893 + 1882 err 0
05-07 10:14:04.980 D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/OpenSL_ES_Core.c: bqPlayerCallback:>> buffere queue callback
05-07 10:14:04.980 D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/FFmpegCore.c: getPCM:>> getPcm
05-07 10:14:04.980 D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/FFmpegCore.c: getPCM:>> getPcm data_size=4608
05-07 10:14:04.980 D/AudioTrackShared: front(0x0), mIsOut 1, avail 1882, mFrameCount 2822, filled 940
05-07 10:14:04.980 V/AudioTrack: obtainBuffer(940) returned 1882 = 940 + 942 err 0
05-07 10:14:04.980 D/AudioTrackShared: front(0x0), mIsOut 1, avail 1623, mFrameCount 2822, filled 1199
05-07 10:14:04.980 V/AudioTrack: obtainBuffer(681) returned 1623 = 681 + 942 err 0
05-07 10:14:04.980 D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/OpenSL_ES_Core.c: bqPlayerCallback:>> buffere queue callback
05-07 10:14:04.980 D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/FFmpegCore.c: getPCM:>> getPcm
05-07 10:14:04.981 D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/FFmpegCore.c: getPCM:>> getPcm data_size=4608
05-07 10:14:04.981 D/AudioTrackShared: front(0x0), mIsOut 1, avail 942, mFrameCount 2822, filled 1880
05-07 10:14:04.981 V/AudioTrack: obtainBuffer(940) returned 942 = 940 + 2 err 0
05-07 10:14:04.981 D/AudioTrackShared: front(0x0), mIsOut 1, avail 471, mFrameCount 2822, filled 2351
05-07 10:14:04.981 V/AudioTrack: obtainBuffer(469) returned 471 = 469 + 2 err 0
05-07 10:14:04.981 D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/OpenSL_ES_Core.c: bqPlayerCallback:>> buffere queue callback
05-07 10:14:04.981 D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/FFmpegCore.c: getPCM:>> getPcm
05-07 10:14:04.981 D//Users/hejunlin/AndroidStudioProjects/FFmpegAudio/app/src/main/jni/FFmpegCore.c: getPCM:>> getPcm data_size=4608

Originally published on the WeChat official account 何俊林 (DriodDeveloper).

Original publication date: 2017-05-08
