How to use an AudioUnit to play an audio stream from a server?

Stack Overflow user
Asked on 2016-01-04 08:33:31
2 answers · 776 views · 0 followers · Score 1
- (void)openPlayThreadWithRtmpURL:(NSString *)rtmpURL {
spx_int16_t *input_buffer;

do {
    if (self.rtmpDelegate) {
        [self.rtmpDelegate evenCallbackWithEvent:2000];
    }

    //init speex decoder and config;
    speex_bits_init(&dbits);
    dec_state = speex_decoder_init(&speex_wb_mode);

    speex_decoder_ctl(dec_state, SPEEX_GET_FRAME_SIZE, &dec_frame_size);

    input_buffer = malloc(dec_frame_size * sizeof(short));

    NSLog(@"Init Speex decoder success frame_size = %d",dec_frame_size);

    //init rtmp
    pPlayRtmp = RTMP_Alloc();
    RTMP_Init(pPlayRtmp);
    NSLog(@"Play RTMP_Init %@\n", rtmpURL);

    if (!RTMP_SetupURL(pPlayRtmp, (char*)[rtmpURL UTF8String])) {
        NSLog(@"Play RTMP_SetupURL error\n");
        if(self.rtmpDelegate) {
            [self.rtmpDelegate evenCallbackWithEvent:2002];
        }
        break;
    }

    if (!RTMP_Connect(pPlayRtmp, NULL) || !RTMP_ConnectStream(pPlayRtmp, 0)) {
        NSLog(@"Play RTMP_Connect or RTMP_ConnectStream error\n");
        if(self.rtmpDelegate) {
            [self.rtmpDelegate evenCallbackWithEvent:2002];
        }
        break;
    }

    if(self.rtmpDelegate) {
        [self.rtmpDelegate evenCallbackWithEvent:2001];
    }
    NSLog(@"Player RTMP_Connected \n");

    RTMPPacket rtmp_pakt = {0};
    isStartPlay = YES;
    while (isStartPlay && RTMP_ReadPacket(pPlayRtmp, &rtmp_pakt)) {
        if (RTMPPacket_IsReady(&rtmp_pakt)) {
            if (!rtmp_pakt.m_nBodySize) {
                continue;
            }
            if (rtmp_pakt.m_packetType == RTMP_PACKET_TYPE_AUDIO) {
                NSLog(@"Audio size = %d head = %d time = %d", rtmp_pakt.m_nBodySize, rtmp_pakt.m_body[0], rtmp_pakt.m_nTimeStamp);
                speex_bits_read_from(&dbits, rtmp_pakt.m_body + 1, rtmp_pakt.m_nBodySize - 1);
                speex_decode_int(dec_state, &dbits, input_buffer);  //audioData in the input_buffer
                //do something...



            } else if (rtmp_pakt.m_packetType == RTMP_PACKET_TYPE_VIDEO) {
                // handle video packets
            } else if (rtmp_pakt.m_packetType == RTMP_PACKET_TYPE_INVOKE) {
                // handle invoke packets
                NSLog(@"RTMP_PACKET_TYPE_INVOKE");
                RTMP_ClientPacket(pPlayRtmp,&rtmp_pakt);
            } else if (rtmp_pakt.m_packetType == RTMP_PACKET_TYPE_INFO) {
                // handle info packets
                //NSLog(@"RTMP_PACKET_TYPE_INFO");
            } else if (rtmp_pakt.m_packetType == RTMP_PACKET_TYPE_FLASH_VIDEO) {
                // other data: an aggregate of FLV tags
                int index = 0;
                while (1) {
                    int StreamType; //1-byte
                    int MediaSize; //3-byte
                    int TiMMER; //3-byte
                    int Reserve; //4-byte
                    char* MediaData; //MediaSize-byte
                    int TagLen; //4-byte

                    StreamType = rtmp_pakt.m_body[index];
                    index += 1;
                    MediaSize = bigThreeByteToInt(rtmp_pakt.m_body + index);
                    index += 3;
                    TiMMER = bigThreeByteToInt(rtmp_pakt.m_body + index);
                    index += 3;
                    Reserve = bigFourByteToInt(rtmp_pakt.m_body + index);
                    index += 4;
                    MediaData = rtmp_pakt.m_body + index;
                    index += MediaSize;
                    TagLen = bigFourByteToInt(rtmp_pakt.m_body + index);
                    index += 4;
                    //NSLog(@"bodySize:%d   index:%d",rtmp_pakt.m_nBodySize,index);
                    //LOGI("StreamType:%d MediaSize:%d  TiMMER:%d TagLen:%d\n", StreamType, MediaSize, TiMMER, TagLen);
                    if (StreamType == 0x08) {
                        // audio tag
                        //int MediaSize = bigThreeByteToInt(rtmp_pakt.m_body+1);
                        //  LOGI("FLASH audio size:%d  head:%d time:%d\n", MediaSize, MediaData[0], TiMMER);
                        speex_bits_read_from(&dbits, MediaData + 1, MediaSize - 1);
                        speex_decode_int(dec_state, &dbits, input_buffer);

                        //[mAudioPlayer putAudioData:input_buffer];
                        //  putAudioQueue(output_buffer,dec_frame_size);
                    } else if (StreamType == 0x09) {
                        // video tag
                        //  LOGI( "video size:%d  head:%d\n", MediaSize, MediaData[0]);
                    }
                    if (rtmp_pakt.m_nBodySize == index) {
                        break;
                    }
                }
            }
            RTMPPacket_Free(&rtmp_pakt);
        }
    }
    if (isStartPlay) {
        if(self.rtmpDelegate) {
            [self.rtmpDelegate evenCallbackWithEvent:2005];
        }
        isStartPlay = NO;
    }
} while (0);
[mAudioPlayer stopPlay];
if (self.rtmpDelegate) {
    [self.rtmpDelegate evenCallbackWithEvent:2004];
}
if (RTMP_IsConnected(pPlayRtmp)) {
    RTMP_Close(pPlayRtmp);
}
RTMP_Free(pPlayRtmp);
free(input_buffer);
speex_bits_destroy(&dbits);
speex_decoder_destroy(dec_state);

}

This is my custom method. rtmpURL is an NSString holding the streaming server address. With this method I can receive the encoded audio stream from the server, and then I use the Speex decoder to decode the data I receive, like this:
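The FLV-tag parsing branch above relies on two big-endian helpers, bigThreeByteToInt and bigFourByteToInt, whose definitions are not shown in the question. A minimal sketch of what they presumably do, assuming the input bytes are unsigned big-endian (the FLV tag header layout the loop walks through):

```c
#include <stdint.h>

// Read a 3-byte big-endian unsigned integer (e.g. the FLV tag DataSize
// and Timestamp fields, which the question calls MediaSize and TiMMER).
static int bigThreeByteToInt(const char *p) {
    const uint8_t *b = (const uint8_t *)p;
    return (b[0] << 16) | (b[1] << 8) | b[2];
}

// Read a 4-byte big-endian unsigned integer (e.g. the FLV
// PreviousTagSize field, which the question calls TagLen).
static int bigFourByteToInt(const char *p) {
    const uint8_t *b = (const uint8_t *)p;
    return ((int)b[0] << 24) | (b[1] << 16) | (b[2] << 8) | b[3];
}
```

Casting through `uint8_t` matters here: `m_body` is a `char *`, and on platforms where `char` is signed, shifting a negative byte would corrupt the result.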

//init speex decoder and config;
speex_bits_init(&dbits);
dec_state = speex_decoder_init(&speex_wb_mode);

speex_decoder_ctl(dec_state, SPEEX_GET_FRAME_SIZE, &dec_frame_size);

input_buffer = malloc(dec_frame_size * sizeof(short));

NSLog(@"Init Speex decoder success frame_size = %d",dec_frame_size);

if (rtmp_pakt.m_packetType == RTMP_PACKET_TYPE_AUDIO) {
    NSLog(@"Audio size = %d head = %d time = %d", rtmp_pakt.m_nBodySize, rtmp_pakt.m_body[0], rtmp_pakt.m_nTimeStamp);
    speex_bits_read_from(&dbits, rtmp_pakt.m_body + 1, rtmp_pakt.m_nBodySize - 1);
    speex_decode_int(dec_state, &dbits, input_buffer);  //audioData in the input_buffer
    //do something...
}

Now the decoded audio data is stored in input_buffer, and this is where I am stuck: how do I use an AudioUnit to play this audio data? Below is my render callback:

OSStatus playCallback(void                            *inRefCon,
                      AudioUnitRenderActionFlags      *ioActionFlags,
                      const AudioTimeStamp            *inTimeStamp,
                      UInt32                          inBusNumber,
                      UInt32                          inNumberFrames,
                      AudioBufferList                 *ioData) {
    AudioPlayer *THIS = (__bridge AudioPlayer *)inRefCon;
    //How do I use the AudioUnit to play the audio stream from server?

    return noErr;
}

I hope someone can clear up my confusion. If you have used an AudioUnit before, thank you!


2 Answers

Stack Overflow user

Accepted answer

Posted on 2016-01-05 07:44:18

In playCallback, you need to copy the audio into the ioData buffers. For example:

memcpy (ioData->mBuffers[0].mData,  input_buffer + offset, numBytes );
// increase offset based on how many frames it requests.

The input parameter inNumberFrames is the number of frames it is ready for. This may be fewer than the number of frames in input_buffer, so you need to keep track of your playback position.

I don't know your audio format; it should be given in your stream's basic description. Taking into account mono/stereo, bytes per channel, and of course inNumberFrames, you calculate how many bytes need to be copied.
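The advice above (copy into ioData, track the playback position, compute the byte count from the stream format) is usually implemented as a ring buffer shared between the decode thread and the render callback. A sketch in plain C; the capacity, the 16-bit mono format, and all function names here are illustrative assumptions, not part of the original answer:

```c
#include <stdint.h>
#include <stddef.h>

#define RING_CAPACITY 8192  // capacity in samples; assumed size

typedef struct {
    int16_t data[RING_CAPACITY];
    size_t readPos;   // advanced only by the render callback
    size_t writePos;  // advanced only by the decode thread
} PCMRing;

// Decode thread: push decoded samples (e.g. one Speex frame of
// dec_frame_size samples). Returns how many were accepted.
static size_t ringWrite(PCMRing *r, const int16_t *src, size_t n) {
    size_t written = 0;
    while (written < n && (r->writePos - r->readPos) < RING_CAPACITY) {
        r->data[r->writePos % RING_CAPACITY] = src[written++];
        r->writePos++;
    }
    return written;  // may be short if the ring is full
}

// Render callback: fill exactly `frames` samples, padding with
// silence on underrun so the output never plays stale data.
static void ringRead(PCMRing *r, int16_t *dst, size_t frames) {
    for (size_t i = 0; i < frames; i++) {
        if (r->readPos < r->writePos) {
            dst[i] = r->data[r->readPos % RING_CAPACITY];
            r->readPos++;
        } else {
            dst[i] = 0;  // underrun: output silence
        }
    }
}

// The answer's byte-count formula: frames * channels * bytes per
// sample (16-bit mono => 2 bytes per frame).
static size_t bytesForFrames(uint32_t inNumberFrames, int channels, int bytesPerSample) {
    return (size_t)inNumberFrames * channels * bytesPerSample;
}
```

Inside playCallback one would then call something like `ringRead(&ring, ioData->mBuffers[0].mData, inNumberFrames)` for interleaved 16-bit mono. The monotonically increasing read/write positions make this a single-producer/single-consumer design, which matters because the render callback runs on a real-time thread and must not block on a lock.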

Score: 0

Stack Overflow user

Posted on 2016-01-04 09:22:09

There are some very good resources here: link

Score: 0
Original page content provided by Stack Overflow; translation supported by Tencent Cloud's engine.
Original link:

https://stackoverflow.com/questions/34587244
