I use an audio unit of type VoiceProcessingIO to capture speech without echo. In the render callback I grab the sample data, then zero out the buffer so that nothing is played back. Now, after capturing the audio, I need to convert its sample rate from 48000 Hz to 16000 Hz and then run the result through a low-pass filter.
I don't know how to configure several audio units so that they connect to each other and pass data along the chain.
I know I have to use kAudioUnitSubType_AUConverter for the converter and kAudioUnitSubType_LowPassFilter for the filter.
I would be grateful for any kind of help.
P.S. I found this blog post describing a similar problem, but the author's question was never answered. I also don't understand why the author uses two converters there, I'm unsure about his use of the Remote IO type, and I don't follow why he connects the buses in that particular order.
public static class SoundSettings
{
    public static readonly int SampleRate = 16000;
    public static readonly int Channels = 1;
    public static readonly int BytesPerSample = 2;
    public static readonly int FramesPerPacket = 1;
}
private void SetupAudioSession()
{
    AudioSession.Initialize();
    AudioSession.Category = AudioSessionCategory.PlayAndRecord;
    AudioSession.Mode = AudioSessionMode.GameChat;
    AudioSession.PreferredHardwareIOBufferDuration = 0.08f;
}
private void PrepareAudioUnit()
{
    _srcFormat = new AudioStreamBasicDescription
    {
        Format = AudioFormatType.LinearPCM,
        FormatFlags = AudioFormatFlags.LinearPCMIsSignedInteger |
                      AudioFormatFlags.LinearPCMIsPacked,
        SampleRate = AudioSession.CurrentHardwareSampleRate,
        FramesPerPacket = SoundSettings.FramesPerPacket,
        BytesPerFrame = SoundSettings.BytesPerSample * SoundSettings.Channels,
        BytesPerPacket = SoundSettings.FramesPerPacket *
                         SoundSettings.BytesPerSample *
                         SoundSettings.Channels,
        BitsPerChannel = SoundSettings.BytesPerSample * 8,
        ChannelsPerFrame = SoundSettings.Channels,
        Reserved = 0
    };

    var audioComponent = AudioComponent.FindComponent(AudioTypeOutput.VoiceProcessingIO);
    _audioUnit = new AudioUnit.AudioUnit(audioComponent);

    // Element 1 is the input (microphone) bus, element 0 the output (speaker) bus.
    _audioUnit.SetEnableIO(true, AudioUnitScopeType.Input, 1);
    _audioUnit.SetEnableIO(true, AudioUnitScopeType.Output, 0);

    _audioUnit.SetFormat(_srcFormat, AudioUnitScopeType.Input, 0);
    _audioUnit.SetFormat(_srcFormat, AudioUnitScopeType.Output, 1);

    _audioUnit.SetRenderCallback(this.RenderCallback, AudioUnitScopeType.Input, 0);
}
private AudioUnitStatus RenderCallback(
    AudioUnitRenderActionFlags actionFlags,
    AudioTimeStamp timeStamp,
    uint busNumber,
    uint numberFrames,
    AudioBuffers data)
{
    // Pull the microphone samples from the input element (bus 1).
    var status = _audioUnit.Render(ref actionFlags, timeStamp, 1, numberFrames, data);
    if (status != AudioUnitStatus.OK)
    {
        return status;
    }

    var dataByteSize = (int)data[0].DataByteSize;
    var msgArray = new byte[dataByteSize];
    Marshal.Copy(data[0].Data, msgArray, 0, dataByteSize);

    var msg = _msgFactory.CreateAudioMsg(msgArray, msgArray.Length, (++_lastIndex));
    this.OnMsgReady(msg);

    // Zero the buffer so nothing reaches playback.
    var array = new byte[dataByteSize];
    Marshal.Copy(array, 0, data[0].Data, dataByteSize);

    return AudioUnitStatus.NoError;
}
Posted on 2020-10-03 03:33:34
Here is an example of a function that connects two audio units (note that the source and the destination must have the same stream format for the connection to succeed):
OSStatus connectAudioUnits(AudioUnit source, AudioUnit destination,
                           AudioUnitElement sourceOutput,
                           AudioUnitElement destinationInput)
{
    AudioUnitConnection connection;
    connection.sourceAudioUnit    = source;
    connection.sourceOutputNumber = sourceOutput;
    connection.destInputNumber    = destinationInput;

    return AudioUnitSetProperty(destination,
                                kAudioUnitProperty_MakeConnection,
                                kAudioUnitScope_Input,
                                destinationInput,
                                &connection,
                                sizeof(connection));
}
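Applied to the chain in the question, the wiring would look roughly like this (pseudocode sketch, not verified code; it assumes the converter and low-pass units have already been created with AudioComponentFindNext/AudioComponentInstanceNew, and that the converter's output stream format has been set to 16000 Hz mono so the formats match at the low-pass input):

```
// pseudocode: VoiceProcessingIO mic -> AUConverter -> AULowPass
// VoiceProcessingIO delivers microphone audio on output element 1
connectAudioUnits(vpioUnit,      converterUnit, 1, 0);
connectAudioUnits(converterUnit, lowPassUnit,   0, 0);
// initialize the units only after the connections are made,
// then pull the filtered audio by calling AudioUnitRender on lowPassUnit
```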
https://stackoverflow.com/questions/63968743