I'm trying to record audio on Android with AudioRecord, split the left and right channels of the recording into two separate files, and then convert each to WAV so it can be played back on the phone. But the recorded files play back too fast and at a higher pitch.
I have read all the examples and wrote this code, but I'm not sure which part is causing the problem.
This is my AudioRecord definition:
minBufLength = AudioTrack.getMinBufferSize(48000,AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, 48000, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT, minBufLength);
Then I read the short data, convert the shorts to bytes, and finally split them into separate byte arrays for the two channels:
shortData = new short[minBufLength/2];
int readSize = recorder.read(shortData,0,minBufLength/2);
byte bData[] = short2byte(shortData);
for(int i = 0; i < readSize/2; i++)
{
final int offset = i * 2 * 2; // two bytes per sample and 2 channels
rightChannelFos.write(bData, offset , 2);
leftChannelFos.write(bData, offset + 2 , 2 );
}
File rightChannelF1 = new File("/sdcard/rightChannelaudio"); // The location of your PCM file
File leftChannelF1 = new File("/sdcard/leftChannelaudio"); // The location of your PCM file
File rightChannelF2 = new File("/sdcard/rightChannelaudio.wav"); // The location where you want your WAV file
File leftChannelF2 = new File("/sdcard/leftChannelaudio.wav"); // The location where you want your WAV file
rawToWave(rightChannelF1, rightChannelF2);
rawToWave(leftChannelF1, leftChannelF2);
// convert short to byte
private byte[] short2byte(short[] sData) {
int shortArrsize = sData.length;
byte[] bytes = new byte[shortArrsize * 2];
for (int i = 0; i < shortArrsize; i++) {
bytes[i * 2] = (byte) (sData[i] & 0x00FF);
bytes[(i * 2) + 1] = (byte) (sData[i] >> 8);
sData[i] = 0;
}
return bytes;
}
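As a side note, the same little-endian short-to-byte conversion can also be written with a ByteBuffer; this is only an equivalent sketch (the name short2byteNio is hypothetical), not part of the original code:
// equivalent conversion using ByteBuffer in little-endian order (hypothetical helper)
private byte[] short2byteNio(short[] sData) {
    ByteBuffer buffer = ByteBuffer.allocate(sData.length * 2).order(ByteOrder.LITTLE_ENDIAN);
    buffer.asShortBuffer().put(sData);
    return buffer.array();
}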
Here is the rawToWave function. I have not included the other write functions to keep the post simple.
private void rawToWave(final File rawFile, final File waveFile) throws IOException {
byte[] rawData = new byte[(int) rawFile.length()];
DataInputStream input = null;
try {
input = new DataInputStream(new FileInputStream(rawFile));
input.read(rawData);
} finally {
if (input != null) {
input.close();
}
}
DataOutputStream output = null;
try {
output = new DataOutputStream(new FileOutputStream(waveFile));
// WAVE header
// see http://ccrma.stanford.edu/courses/422/projects/WaveFormat/
writeString(output, "RIFF"); // chunk id
writeInt(output, 36 + rawData.length); // chunk size
writeString(output, "WAVE"); // format
writeString(output, "fmt "); // subchunk 1 id
writeInt(output, 16); // subchunk 1 size
writeShort(output, (short) 1); // audio format (1 = PCM)
writeShort(output, (short) 2); // number of channels
writeInt(output, 48000); // sample rate
writeInt(output, 48000 * 2); // byte rate
writeShort(output, (short) 2); // block align
writeShort(output, (short) 16); // bits per sample
writeString(output, "data"); // subchunk 2 id
writeInt(output, rawData.length); // subchunk 2 size
// Audio data (conversion big endian -> little endian)
short[] shorts = new short[rawData.length / 2];
ByteBuffer.wrap(rawData).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shorts);
ByteBuffer bytes = ByteBuffer.allocate(shorts.length * 2);
for (short s : shorts) {
bytes.putShort(s);
}
output.write(fullyReadFileToBytes(rawFile));
} finally {
if (output != null) {
output.close();
}
}
}
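The omitted helpers are typically plain little-endian writers, since DataOutputStream's own writeInt/writeShort methods are big-endian while the WAV header fields must be little-endian. A minimal sketch of what they usually look like (my reconstruction, not the original code):
private void writeInt(final DataOutputStream output, final int value) throws IOException {
    // write a 32-bit integer in little-endian byte order
    output.write(value);
    output.write(value >> 8);
    output.write(value >> 16);
    output.write(value >> 24);
}

private void writeShort(final DataOutputStream output, final short value) throws IOException {
    // write a 16-bit integer in little-endian byte order
    output.write(value);
    output.write(value >> 8);
}

private void writeString(final DataOutputStream output, final String value) throws IOException {
    // write the characters as raw ASCII bytes ("RIFF", "WAVE", ...)
    for (int i = 0; i < value.length(); i++) {
        output.write(value.charAt(i));
    }
}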
UPDATE:
I'm adding this as an update in case anyone else runs into the same problem. For reasons I don't understand, the channel-splitting loop did not work, so I updated the byte array of each channel separately. Since this is a 16-bit scheme, each sample takes 2 bytes, so the samples in the raw data are interleaved as LLRRLLRR, which is why the loop should be based on the following:
for(int i = 0; i < readSize; i= i + 2)
{
leftChannelAudioData[i] = bData[2*i];
leftChannelAudioData[i+1] = bData[2*i+1];
rightChannelAudioData[i] = bData[2*i+2];
rightChannelAudioData[i+1] = bData[2*i+3];
}
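For completeness, leftChannelAudioData and rightChannelAudioData are per-read byte arrays; a minimal sketch of how they might be sized and flushed after each read, assuming the stream variables from the snippets above (the sizing itself is my assumption, not part of the original post):
// sizing: one interleaved frame (LLRR) is 4 bytes, so when readSize shorts
// (left + right combined) are read, each channel receives readSize bytes
byte[] leftChannelAudioData  = new byte[readSize];
byte[] rightChannelAudioData = new byte[readSize];
// ... run the de-interleaving loop above, then flush each channel to its PCM file
leftChannelFos.write(leftChannelAudioData, 0, readSize);
rightChannelFos.write(rightChannelAudioData, 0, readSize);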
Answer (posted 2021-12-21 06:40:19):
In your WAV header you declare a two-channel (stereo) output format:
writeShort(output, (short) 2); // number of channels
If so, then the byte rate should be 48000 * 4 (2 bytes per sample * 2 channels), and for the same reason the block align should be 4.
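Applied to the header-writing code from the question, the stereo-consistent fields would look like this:
writeShort(output, (short) 2);   // number of channels (stereo)
writeInt(output, 48000);         // sample rate
writeInt(output, 48000 * 4);     // byte rate = sampleRate * channels * bytesPerSample
writeShort(output, (short) 4);   // block align = channels * bytesPerSample
writeShort(output, (short) 16);  // bits per sample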
Also, you would then need to write each sample twice, because your output is stereo: once for each channel. For example:
rightChannelFos.write(bData, offset , 2);
rightChannelFos.write(bData, offset , 2);
leftChannelFos.write(bData, offset + 2 , 2 );
leftChannelFos.write(bData, offset + 2 , 2 );
But the simpler solution is just to change the output format to mono (1 channel):
writeShort(output, (short) 1); // number of channels
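For the mono output the remaining size fields should stay consistent with a single 16-bit channel at 48 kHz, roughly:
writeShort(output, (short) 1);   // number of channels (mono)
writeInt(output, 48000);         // sample rate
writeInt(output, 48000 * 2);     // byte rate = sampleRate * 1 channel * 2 bytes
writeShort(output, (short) 2);   // block align = 1 channel * 2 bytes
writeShort(output, (short) 16);  // bits per sample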
UPD
For the input buffer you need to choose a size large enough (say, 1 second long) so that it does not overflow while you read it in small chunks; the system keeps filling it while you process your data. For example:
recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, 48000, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT, 48000 * 4); // 1 second long
You can keep your read buffer small, but of some predefined size (e.g. 1024-4096 samples). When you call recorder.read, it returns the actual amount of data obtained, which never exceeds the buffer size passed as a parameter nor the data currently available in the recorder's internal buffer.
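A sketch of that pattern: a one-second internal AudioRecord buffer combined with a smaller fixed-size read buffer, where only the readSize shorts actually returned are processed (the 4096-sample chunk and the isRecording flag are assumptions, not from the original answer):
int sampleRate = 48000;
// internal buffer: 1 second of 16-bit stereo = sampleRate * 2 channels * 2 bytes
AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate,
        AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT, sampleRate * 4);

short[] shortData = new short[4096];          // small read chunk (in shorts)
recorder.startRecording();
while (isRecording) {
    int readSize = recorder.read(shortData, 0, shortData.length);
    if (readSize > 0) {
        // process only the readSize shorts that were actually delivered
        byte[] bData = short2byte(shortData);
        // ... de-interleave and write the two channels as shown above
    }
}
recorder.stop();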
https://stackoverflow.com/questions/70436367