
Reading an audio file for the Web-Audio API Analyzer (Node.js server, JS frontend)
Stack Overflow user

Asked on 2019-06-25 05:09:33 · 1 answer · 1.7K views · 0 followers · 2 votes

I set up a simple Node.js server to serve a .wav file to my local frontend.

Code language: javascript
require('dotenv').config();
const debugBoot = require('debug')('boot');
const cors = require('cors')
const express = require('express');
const app = express();
app.set('port', process.env.PORT || 3000);

app.use(cors());
app.use(express.static('public'));


const server = app.listen(app.get('port'), () => {
    const port = server.address().port;
    debugBoot('Server running at http://localhost:' + port);
});
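
Note: with app.use(express.static('public')), the URL fetched by the frontend below resolves only if the .wav file sits at the matching path inside public/; the layout sketched here is an assumption inferred from that URL:

public/
└── audio/
    └── 8bars60bpmOnlyKick.wav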

On my local frontend, I receive the file like this:

Code language: javascript
fetch('http://localhost:3000/audio/8bars60bpmOnlyKick.wav').then(response => process(response.body));

function process(stream) {
    console.log(stream);
    const context = new AudioContext();
    const analyser = context.createAnalyser();
    // fails here: response.body is a ReadableStream, not a MediaStream
    const source = context.createMediaStreamSource(stream);
    source.connect(analyser);
    const data = new Uint8Array(analyser.frequencyBinCount);
}

I want to pipe the stream into AudioContext().createMediaStreamSource. I can do that with a MediaStream, e.g. from the microphone.
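
For comparison, a minimal sketch of the working microphone case mentioned above (assuming the user grants audio permission):

Code language: javascript

navigator.mediaDevices.getUserMedia({ audio: true }).then(micStream => {
    const context = new AudioContext();
    // works: micStream is an actual MediaStream
    const source = context.createMediaStreamSource(micStream);
    const analyser = context.createAnalyser();
    source.connect(analyser);
});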

But with the ReadableStream I get the error Failed to execute 'createMediaStreamSource' on 'AudioContext': parameter 1 is not of type 'MediaStream'.

I want to serve/receive the audio in a way that lets me plug it into the Web Audio API and use the analyser. It does not have to be a stream if there is another solution.


1 Answer

Stack Overflow user

Answered on 2019-06-26 15:20:52

I basically merged these two examples together:

https://www.youtube.com/watch?v=hYNJGPnmwls (https://codepen.io/jakealbaugh/pen/jvQweW/)

And below is the example from the web-audio API:

https://github.com/mdn/webaudio-examples/blob/master/audio-analyser/index.html

Code language: javascript
let audioContext;
let audioBuffer;
let sourceNode;
let analyserNode;
let javascriptNode;
let audioData = null;
let audioPlaying = false;
let sampleSize = 1024;  // number of samples to collect before analyzing data
let frequencyDataArray;     // array to hold frequency domain data
// Global Variables for the Graphics
let canvasWidth = 512;
let canvasHeight = 256;
let ctx;

document.addEventListener("DOMContentLoaded", function () {
    ctx = document.body.querySelector('canvas').getContext("2d");
    // the AudioContext is the primary 'container' for all your audio node objects
    try {
        audioContext = new AudioContext();
    } catch (e) {
        alert('Web Audio API is not supported in this browser');
    }
    // When the Start button is clicked, finish setting up the audio nodes, play the sound,
    // gather samples for the analysis, update the canvas
    document.body.querySelector('#start_button').addEventListener('click', function (e) {
        e.preventDefault();
        // Set up the audio Analyser, the Source Buffer and javascriptNode
        initCanvas();
        setupAudioNodes();
        javascriptNode.onaudioprocess = function () {
            // get the frequency domain data for this sample
            analyserNode.getByteFrequencyData(frequencyDataArray);
            // draw the display if the audio is playing
            console.log(frequencyDataArray)
            draw();
        };
        loadSound();
    });

    document.body.querySelector("#stop_button").addEventListener('click', function(e) {
        e.preventDefault();
        sourceNode.stop(0);
        audioPlaying = false;
    });

    function loadSound() {
        fetch('http://localhost:3000/audio/8bars60bpmOnlyKick.wav').then(response => {
            response.arrayBuffer().then(function (buffer) {
                audioContext.decodeAudioData(buffer).then((audioBuffer) => {
                    console.log('audioBuffer', audioBuffer);
                    // {length: 1536000, duration: 32, sampleRate: 48000, numberOfChannels: 2}
                    audioData = audioBuffer;
                    playSound(audioBuffer);
                });
            });
        })
    }

    function setupAudioNodes() {
        sourceNode = audioContext.createBufferSource();
        analyserNode = audioContext.createAnalyser();
        analyserNode.fftSize = 4096;
        javascriptNode = audioContext.createScriptProcessor(sampleSize, 1, 1);
        // Create the array for the data values
        frequencyDataArray = new Uint8Array(analyserNode.frequencyBinCount);
        // Now connect the nodes together
        sourceNode.connect(audioContext.destination);
        sourceNode.connect(analyserNode);
        analyserNode.connect(javascriptNode);
        javascriptNode.connect(audioContext.destination);
    }

    function initCanvas() {
        ctx.fillStyle = 'hsl(280, 100%, 10%)';
        ctx.fillRect(0, 0, canvasWidth, canvasHeight);
    }

    // Play the audio once
    function playSound(buffer) {
        sourceNode.buffer = buffer;
        sourceNode.start(0);    // Play the sound now
        sourceNode.loop = false;
        audioPlaying = true;
    }

    function draw() {
        const data = frequencyDataArray;
        const dataLength = frequencyDataArray.length;
        console.log("data", data);

        const h = canvasHeight / dataLength;
        // draw on the right edge
        const x = canvasWidth - 1;

        // copy the old image and move one left
        let imgData = ctx.getImageData(1, 0, canvasWidth - 1, canvasHeight);
        ctx.fillRect(0, 0, canvasWidth, canvasHeight);
        ctx.putImageData(imgData, 0, 0);

        for (let i = 0; i < dataLength; i++) {
            // console.log(data)
            let rat = data[i] / 255;
            let hue = Math.round(((rat * 120) + 280) % 360); // rotate hue from 280°, wrapping at 360°
            let sat = '100%';
            let lit = 10 + (70 * rat) + '%';
            // console.log("rat %s, hue %s, lit %s", rat, hue, lit);
            ctx.beginPath();
            ctx.strokeStyle = `hsl(${hue}, ${sat}, ${lit})`;
            ctx.moveTo(x, canvasHeight - (i * h));
            ctx.lineTo(x, canvasHeight - (i * h + h));
            ctx.stroke();
        }
    }
});

Let me briefly explain what each part does:

Create the AudioContext

When the DOM has loaded, the AudioContext is created.

Load the audio file and convert it into an AudioBuffer

Then I load the sound from the backend server (the loadSound function in the code above). The response is first converted into an ArrayBuffer, which is then decoded into an AudioBuffer. This is essentially the main solution to the question above.
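
For reference, the same fetch-and-decode chain can be written as a compact async/await sketch (same URL and globals as above; just a stylistic alternative, not part of the original answer):

Code language: javascript

async function loadSound() {
    const response = await fetch('http://localhost:3000/audio/8bars60bpmOnlyKick.wav');
    const buffer = await response.arrayBuffer();                     // raw bytes
    const audioBuffer = await audioContext.decodeAudioData(buffer);  // decoded PCM
    audioData = audioBuffer;
    playSound(audioBuffer);
}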

Process the AudioBuffer

To give more context on how the loaded audio file can be used, I included the rest of the file.

To process the AudioBuffer further, a source node is created and the buffer is assigned to it: sourceNode.buffer = buffer.
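
One caveat (standard Web Audio behaviour, not stated in the original answer): an AudioBufferSourceNode can only be start()ed once, so replaying the sound means creating a fresh source node and reusing the decoded buffer, e.g. with a hypothetical helper like:

Code language: javascript

// hypothetical helper: replay a decoded buffer through the same node graph
function replaySound(buffer) {
    sourceNode = audioContext.createBufferSource();  // source nodes are one-shot
    sourceNode.connect(audioContext.destination);
    sourceNode.connect(analyserNode);
    sourceNode.buffer = buffer;
    sourceNode.start(0);
}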

The javascriptNode behaves like a stream on which you can access the analyser's output.
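
Note that ScriptProcessorNode is deprecated. A common alternative, sketched here (not part of the original answer), is to poll the analyser from requestAnimationFrame instead of onaudioprocess:

Code language: javascript

// hypothetical replacement for the onaudioprocess callback above
function pollAnalyser() {
    analyserNode.getByteFrequencyData(frequencyDataArray);
    draw();
    if (audioPlaying) requestAnimationFrame(pollAnalyser);
}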

1 vote
Original link:

https://stackoverflow.com/questions/56743980
