
Sound scheduling issue when playing OPUS from a WebSocket

Stack Overflow user
Asked on 2020-11-03 21:11:27
1 answer · 119 views · 0 followers · 6 votes

I am trying to use the library https://github.com/AnthumChris/opus-stream-decoder/

I have an OPUS-encoded audio stream from a high-quality microphone (2 ch, 48 kHz) (though I am looping music over it for testing). I know the stream is valid because I can hear it if I run:

websocat --binary ws://third-i.local/api/sound - | mpv -

(This opens the websocket and pipes its output into mpv (mplayer).)

But when I play it in the browser, all I hear is a tiny bit of sound roughly every second, although the sound itself is fine (I believe each bit is just a small fragment of the music).

Here is the JS code I wrote to listen in the browser:

let audioWorker: any;
let exampleSocket;
let opusDecoder: any;
let audioCtx: any;
let startTime = 0;
let counter = 0;

function startAudio() {
  /*
  const host = document.location.hostname;
  const scheme = document.location.protocol.startsWith("https") ? "wss" : "ws";
  const uri = `${scheme}://${host}/api/sound`;
  */
  const uri = "ws://third-i.local/api/sound";
  audioCtx = new AudioContext();
  startTime = 100 / 1000;
  exampleSocket = new WebSocket(uri);
  exampleSocket.binaryType = "arraybuffer";
  opusDecoder = new OpusStreamDecoder({onDecode});
  exampleSocket.onmessage = (event) => opusDecoder.ready.then(
    () => opusDecoder.decode(new Uint8Array(event.data))
  );
  exampleSocket.onclose = () => console.log("socket is closed!!");
}

function onDecode({left, right, samplesDecoded, sampleRate}: any) {
  const source = audioCtx.createBufferSource();
  const buffer = audioCtx.createBuffer(2, samplesDecoded, sampleRate);
  buffer.copyToChannel(left, 0);
  buffer.copyToChannel(right, 1);
  source.buffer = buffer;
  source.connect(audioCtx.destination);
  source.start(startTime);
  startTime += buffer.duration;
}

https://github.com/BigBoySystems/third-i-frontend/blob/play-audio/src/App.tsx#L54-L88


1 Answer

Stack Overflow user

Accepted answer

Posted on 2020-11-14 18:08:07

The scheduling problem happens because you create the AudioContext at the same time as the WebSocket, which adds the connection time to the AudioContext's schedule.

In other words, the AudioContext's clock starts running the moment you create it; but because you create it at the same instant as the WebSocket (which at that point has only begun connecting), your schedule ends up off by however long the WebSocket takes to connect upstream and receive its first bytes.
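
You can make the drift visible by logging the AudioContext clock when the first frame arrives. This is just an illustrative sketch (not part of the fix), reusing the ws://third-i.local/api/sound endpoint from your code:

const ctx = new AudioContext();                        // the clock starts ticking immediately
const ws = new WebSocket("ws://third-i.local/api/sound");
ws.binaryType = "arraybuffer";
ws.onmessage = () => {
  // By the time the first frame arrives, the clock has already advanced by the
  // whole connect-plus-first-byte delay, so a startTime of 0.1 lies in the past.
  console.log(`first data at t=${ctx.currentTime.toFixed(3)}s`);
};

Any buffer scheduled at a time earlier than currentTime starts playing immediately, which is consistent with the fragmented playback you describe.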

Here is your code, fixed:

let audioStreamSocket;
let opusDecoder: any;
let audioCtx: AudioContext;
let startTime: number;

function startAudio() {
  const host = document.location.hostname;
  const scheme = document.location.protocol.startsWith("https") ? "wss" : "ws";
  const uri = `${scheme}://${host}/api/sound`;
  audioStreamSocket = new WebSocket(uri);
  audioStreamSocket.binaryType = "arraybuffer";
  opusDecoder = new OpusStreamDecoder({ onDecode });
  audioStreamSocket.onmessage = (event) =>
    opusDecoder.ready.then(() => opusDecoder.decode(new Uint8Array(event.data)));
}

function onDecode({ left, right, samplesDecoded, sampleRate }: any) {
  if (audioCtx === undefined) {
    // See how we create the AudioContext only after some data has been received
    // and successfully decoded <=====================================
    console.log("Audio stream connected");
    audioCtx = new AudioContext();
    startTime = 0.1;
  }
  const source = audioCtx.createBufferSource();
  const buffer = audioCtx.createBuffer(2, samplesDecoded, sampleRate);
  buffer.copyToChannel(left, 0);
  buffer.copyToChannel(right, 1);
  source.buffer = buffer;
  source.connect(audioCtx.destination);
  source.start(startTime);
  startTime += buffer.duration;
}
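
One extra safeguard you may want on top of this (my own suggestion, not part of the fix above): if the network ever stalls for longer than the audio you have queued, startTime can fall behind audioCtx.currentTime again, and source.start() with a past timestamp plays immediately, so fragments pile up. A hypothetical guard inside onDecode, just before source.start(startTime), avoids that:

  if (startTime < audioCtx.currentTime) {
    // never schedule in the past; keep a small 50 ms safety margin
    startTime = audioCtx.currentTime + 0.05;
  }
  source.start(startTime);
  startTime += buffer.duration;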
2 votes
Original page content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/64663507
