I'm trying to use the library https://github.com/AnthumChris/opus-stream-decoder/
I have an OPUS-encoded audio stream coming from a high-quality microphone (2 ch, 48 kHz) (though I'm looping music over it to test this). I know the stream is valid because I can hear it if I run:
websocat --binary ws://third-i.local/api/sound - | mpv -
(this opens the websocket and streams its output to mpv (mplayer)).
But when I play it in the browser, all I hear is a tiny bit of sound roughly every second. The sound itself sounds fine though (I believe each bit is just a small fragment of the music).
Here is the JS code I wrote to listen to the stream in the browser:
let audioWorker: any;
let exampleSocket;
let opusDecoder: any;
let audioCtx: any;
let startTime = 0;
let counter = 0;

function startAudio() {
  /*
  const host = document.location.hostname;
  const scheme = document.location.protocol.startsWith("https") ? "wss" : "ws";
  const uri = `${scheme}://${host}/api/sound`;
  */
  const uri = "ws://third-i.local/api/sound";

  audioCtx = new AudioContext();
  startTime = 100 / 1000;
  exampleSocket = new WebSocket(uri);
  exampleSocket.binaryType = "arraybuffer";

  opusDecoder = new OpusStreamDecoder({onDecode});
  exampleSocket.onmessage = (event) => opusDecoder.ready.then(
    () => opusDecoder.decode(new Uint8Array(event.data))
  );
  exampleSocket.onclose = () => console.log("socket is closed!!");
}

function onDecode({left, right, samplesDecoded, sampleRate}: any) {
  const source = audioCtx.createBufferSource();
  const buffer = audioCtx.createBuffer(2, samplesDecoded, sampleRate);
  buffer.copyToChannel(left, 0);
  buffer.copyToChannel(right, 1);
  source.buffer = buffer;
  source.connect(audioCtx.destination);
  source.start(startTime);
  startTime += buffer.duration;
}
https://github.com/BigBoySystems/third-i-frontend/blob/play-audio/src/App.tsx#L54-L88
Posted on 2020-11-14 18:08:07
The scheduling problem comes from creating the AudioContext at the same time as the WebSocket, which adds the connection time to the AudioContext's schedule.
In other words, the AudioContext's clock starts running the moment you create it, but since you create it at the same time as the WebSocket (which at that point has only just started connecting), the schedule ends up off by however long the WebSocket takes to connect to the upstream and receive its first bytes.
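To make the failure mode concrete, here is a minimal timeline sketch. The ~2 s connection delay and the onChunk helper are illustrative assumptions, not something from the question:

const audioCtx = new AudioContext(); // the context's clock starts ticking here
let startTime = 0.1;                 // playback planned 100 ms from "now"

// Suppose the WebSocket takes ~2 s to connect and deliver its first frame.
// By the time the first decoded chunk arrives, audioCtx.currentTime is
// already ~2.0 while startTime is still 0.1. The Web Audio API plays a
// source immediately when start(when) is called with `when` in the past,
// so every chunk fires the moment it arrives, and any network jitter
// between arrivals becomes an audible gap: tiny bits of sound.
function onChunk(buffer: AudioBuffer) {
  const source = audioCtx.createBufferSource();
  source.buffer = buffer;
  source.connect(audioCtx.destination);
  source.start(startTime);      // startTime < currentTime, so: play now
  startTime += buffer.duration; // stays ~1.9 s behind the clock forever
}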
Here is your code with the fix:
let audioStreamSocket;
let opusDecoder: any;
let audioCtx: AudioContext;
let startTime: number;

function startAudio() {
  const host = document.location.hostname;
  const scheme = document.location.protocol.startsWith("https") ? "wss" : "ws";
  const uri = `${scheme}://${host}/api/sound`;

  audioStreamSocket = new WebSocket(uri);
  audioStreamSocket.binaryType = "arraybuffer";

  opusDecoder = new OpusStreamDecoder({ onDecode });
  audioStreamSocket.onmessage = (event) =>
    opusDecoder.ready.then(() => opusDecoder.decode(new Uint8Array(event.data)));
}

function onDecode({ left, right, samplesDecoded, sampleRate }: any) {
  if (audioCtx === undefined) {
    // See how we create the AudioContext only after some data has been received
    // and successfully decoded <=====================================
    console.log("Audio stream connected");
    audioCtx = new AudioContext();
    startTime = 0.1;
  }

  const source = audioCtx.createBufferSource();
  const buffer = audioCtx.createBuffer(2, samplesDecoded, sampleRate);
  buffer.copyToChannel(left, 0);
  buffer.copyToChannel(right, 1);
  source.buffer = buffer;
  source.connect(audioCtx.destination);
  source.start(startTime);
  startTime += buffer.duration;
}
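One further hardening step you may want (my own suggestion, not part of the answer above): if decoding or the network ever stalls, audioCtx.currentTime can overtake startTime again and the same choppiness returns. Re-anchoring the schedule with a small safety margin recovers from such underruns; the guard below and its 0.1 s margin are assumptions, not from the original code:

function rescheduleIfLate() {
  // If we have fallen behind real time, jump the schedule slightly ahead
  // so the next chunk is queued in the future instead of firing "now".
  if (audioCtx !== undefined && startTime < audioCtx.currentTime) {
    startTime = audioCtx.currentTime + 0.1;
  }
}

Calling rescheduleIfLate() at the top of onDecode, before the buffer is scheduled, keeps subsequent chunks queued in the future at the cost of a brief, one-time skip after a stall.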
https://stackoverflow.com/questions/64663507