# Make Your Robot Sing You an Original Song with TensorFlow

DeepMind published a paper called `WaveNet` that advanced the state of the art in music generation and text-to-speech. This post takes a much simpler route: it builds a small audio generator in TensorFlow by training a Restricted Boltzmann Machine (RBM) on MIDI piano rolls.

#### 1. Import packages:

```
import numpy as np
import pandas as pd
import msgpack
import glob
import tensorflow as tf
from tensorflow.python.ops import control_flow_ops
from tqdm import tqdm
import midi_manipulation
```

#### 2. Define hyperparameters:

```
lowest_note = midi_manipulation.lowerBound #the index of the lowest note on the piano roll
highest_note = midi_manipulation.upperBound #the index of the highest note on the piano roll
note_range = highest_note-lowest_note #the note range
```

```
num_timesteps  = 15 #This is the number of timesteps that we will create at a time
n_visible      = 2*note_range*num_timesteps #This is the size of the visible layer.
n_hidden       = 50 #This is the size of the hidden layer
```

```
num_epochs = 200 #The number of training epochs that we are going to run. For each epoch we go through the entire data set.
batch_size = 100 #The number of training examples that we are going to send through the RBM at a time.
lr         = tf.constant(0.005, tf.float32) #The learning rate of our model
```
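To make the sizes concrete: each timestep of the piano roll stores two flags per pitch ("note on" and "note articulated"), so a training vector has `2*note_range*num_timesteps` elements. Assuming the bounds commonly shipped with `midi_manipulation` (lowerBound = 24, upperBound = 102 — check your copy of the module, this is an assumption):

```python
# Assumed piano-roll bounds; the real values come from midi_manipulation.
lowest_note, highest_note = 24, 102
note_range = highest_note - lowest_note        # 78 possible pitches
num_timesteps = 15

# Two flags (on / articulated) per pitch per timestep:
n_visible = 2 * note_range * num_timesteps     # size of the RBM's visible layer
print(note_range, n_visible)
```

With these bounds the visible layer holds 2340 units, which is the vector length each training example must be reshaped to below.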

#### 3. Define variables:

x is the data fed into the network; W stores the weight matrix, i.e. the connections between the two layers; we also need two bias vectors, bh for the hidden layer and bv for the visible layer.

```
x  = tf.placeholder(tf.float32, [None, n_visible], name="x") #The placeholder variable that holds our data
W  = tf.Variable(tf.random_normal([n_visible, n_hidden], stddev=0.01), name="W") #The weight matrix that stores the edge weights
bh = tf.Variable(tf.zeros([1, n_hidden], tf.float32), name="bh") #The bias vector for the hidden layer
bv = tf.Variable(tf.zeros([1, n_visible], tf.float32), name="bv") #The bias vector for the visible layer
```

gibbs_sample is an algorithm for drawing samples from a joint probability distribution; here it alternates between sampling the hidden layer given the visible layer and the visible layer given the hidden layer.
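Neither `sample` nor `gibbs_sample` is defined in this excerpt; in the original tutorial `sample` draws a 0/1 vector from a matrix of Bernoulli probabilities, and `gibbs_sample(k)` runs k alternations between the layers. A minimal NumPy sketch of one Gibbs step (names and toy shapes are illustrative, not the tutorial's exact code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample(probs, rng):
    # Draw an independent 0/1 sample from each Bernoulli probability
    return (rng.random(probs.shape) < probs).astype(np.float32)

def gibbs_step(v, W, bh, bv, rng):
    # Visible -> hidden -> visible: one step of the Gibbs chain
    h = sample(sigmoid(v @ W + bh), rng)
    v_new = sample(sigmoid(h @ W.T + bv), rng)
    return v_new

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.01, size=(6, 4))
bh, bv = np.zeros((1, 4)), np.zeros((1, 6))
v = rng.integers(0, 2, size=(3, 6)).astype(np.float32)  # a toy batch of 3 visible vectors
v1 = gibbs_step(v, W, bh, bv, rng)
print(v1.shape)
```

Running `gibbs_sample(1)` in the TensorFlow code below does exactly one such step, starting from the placeholder `x`.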

```
#The sample of x
x_sample = gibbs_sample(1)
#The sample of the hidden nodes, starting from the visible state of x
h = sample(tf.sigmoid(tf.matmul(x, W) + bh))
#The sample of the hidden nodes, starting from the visible state of x_sample
h_sample = sample(tf.sigmoid(tf.matmul(x_sample, W) + bh))
```

#### 4. Update variables:

```
size_bt = tf.cast(tf.shape(x)[0], tf.float32)
W_adder  = tf.multiply(lr/size_bt, tf.subtract(tf.matmul(tf.transpose(x), h), tf.matmul(tf.transpose(x_sample), h_sample)))
bv_adder = tf.multiply(lr/size_bt, tf.reduce_sum(tf.subtract(x, x_sample), 0, True))
bh_adder = tf.multiply(lr/size_bt, tf.reduce_sum(tf.subtract(h, h_sample), 0, True))
#When we do sess.run(updt), TensorFlow will run all 3 update steps
updt = [W.assign_add(W_adder), bv.assign_add(bv_adder), bh.assign_add(bh_adder)]
```

#### 5. Next, run the graph:

##### 1. First, initialize the variables
```
with tf.Session() as sess:
    #First, we train the model
    #initialize the variables of the model
    init = tf.global_variables_initializer()
    sess.run(init)
```

```
    #songs is the list of piano-roll matrices loaded from the MIDI files (loading code is omitted in this excerpt)
    for epoch in tqdm(range(num_epochs)):
        for song in songs:
            #The songs are stored in a time x notes format. The size of each song is timesteps_in_song x 2*note_range
            #Here we reshape the songs so that each training example is a vector with num_timesteps x 2*note_range elements
            song = np.array(song)
            song = song[:(song.shape[0]//num_timesteps)*num_timesteps]
            song = np.reshape(song, [song.shape[0]//num_timesteps, song.shape[1]*num_timesteps])
```
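The slicing and reshape above truncate each song to a multiple of num_timesteps and then pack num_timesteps consecutive rows into one flat training vector. A small NumPy example with toy sizes (2*note_range = 4, num_timesteps = 3):

```python
import numpy as np

num_timesteps = 3
song = np.arange(8 * 4).reshape(8, 4)   # 8 timesteps, 4 note features each

# Truncate to a multiple of num_timesteps, then flatten groups of rows
song = song[:(song.shape[0] // num_timesteps) * num_timesteps]
examples = np.reshape(song, [song.shape[0] // num_timesteps,
                             song.shape[1] * num_timesteps])
print(examples.shape)  # (2, 12): two training vectors of 3 concatenated timesteps
```

The last 2 of the 8 timesteps are discarded because they do not fill a whole window; each remaining row is one visible-layer vector for the RBM.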
##### 2. Next, train the RBM, one batch at a time
```
            for i in range(1, len(song), batch_size):
                tr_x = song[i:i+batch_size]
                sess.run(updt, feed_dict={x: tr_x})
```

##### 3. Now run the Gibbs chain to generate music

```
    sample = gibbs_sample(1).eval(session=sess, feed_dict={x: np.zeros((10, n_visible))})
    for i in range(sample.shape[0]):
        if not any(sample[i,:]):
            continue
        #Here we reshape the vector to be time x notes, and then save the vector as a midi file
        S = np.reshape(sample[i,:], (num_timesteps, 2*note_range))
```
##### 4. Finally, save the generated chords as MIDI files
```
        midi_manipulation.noteStateMatrixToMidi(S, "generated_chord_{}".format(i))
```
