# Deep Learning Algorithms (Issue 30): Denoising and Sparse Autoencoders and Their Implementation

#### TensorFlow Implementation of a Denoising Autoencoder

```python
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
X_noisy = X + tf.random_normal(tf.shape(X))  # corrupt the inputs with Gaussian noise
[...]
hidden1 = activation(tf.matmul(X_noisy, weights1) + biases1)
[...]
reconstruction_loss = tf.reduce_mean(tf.square(outputs - X))  # MSE against the clean inputs
[...]
```
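The key point above is that the network sees the noisy inputs, while the reconstruction loss compares the outputs to the clean `X`. A minimal NumPy sketch of that corruption step (the batch shape and seed are illustrative, not from the original post):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical mini-batch: 4 examples, 3 features each.
X = rng.random((4, 3)).astype(np.float32)

# Corrupt the inputs with standard Gaussian noise, mirroring
# the tf.random_normal(tf.shape(X)) line above.
X_noisy = X + rng.standard_normal(X.shape).astype(np.float32)

# If the "autoencoder" simply copied its (noisy) input, the MSE
# against the clean X would measure exactly the injected noise:
outputs = X_noisy
reconstruction_loss = np.mean((outputs - X) ** 2)
```

Training then pushes the network to undo the corruption, since only reproducing the clean `X` drives this loss toward zero.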

```python
from tensorflow.contrib.layers import dropout

keep_prob = 0.7

is_training = tf.placeholder_with_default(False, shape=(), name='is_training')
X = tf.placeholder(tf.float32, shape=[None, n_inputs])
X_drop = dropout(X, keep_prob, is_training=is_training)
[...]
hidden1 = activation(tf.matmul(X_drop, weights1) + biases1)
[...]
reconstruction_loss = tf.reduce_mean(tf.square(outputs - X))  # MSE
[...]
```
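During training, dropout zeroes each input with probability `1 - keep_prob` and scales the survivors by `1 / keep_prob`, so the expected activation is unchanged; at test time (`is_training=False`) the inputs pass through untouched. A rough NumPy sketch of that "inverted dropout" behavior (the function and values are illustrative):

```python
import numpy as np

def inverted_dropout(x, keep_prob, rng):
    # Zero each element with probability 1 - keep_prob and scale the
    # survivors by 1 / keep_prob, so that E[output] == x.
    mask = rng.random(x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

rng = np.random.default_rng(0)
x = np.ones(100_000)
x_drop = inverted_dropout(x, keep_prob=0.7, rng=rng)

survivors = np.mean(x_drop > 0)  # fraction of units kept, about 0.7
mean_out = x_drop.mean()         # expected value preserved, near 1.0
```

The rescaling is why no special correction is needed when the same graph is run with `is_training=False`.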

```python
sess.run(training_op, feed_dict={X: X_batch, is_training: True})
```

#### TensorFlow Implementation of a Sparse Autoencoder

```python
def kl_divergence(p, q):
    # KL divergence between two Bernoulli distributions with means p and q
    return p * tf.log(p / q) + (1 - p) * tf.log((1 - p) / (1 - q))

learning_rate = 0.01
sparsity_target = 0.1
sparsity_weight = 0.2

[...] # Build a normal autoencoder (in this example the coding layer is hidden1)

hidden1_mean = tf.reduce_mean(hidden1, axis=0)  # mean activation of each coding unit over the batch
sparsity_loss = tf.reduce_sum(kl_divergence(sparsity_target, hidden1_mean))
reconstruction_loss = tf.reduce_mean(tf.square(outputs - X))  # MSE
loss = reconstruction_loss + sparsity_weight * sparsity_loss
training_op = optimizer.minimize(loss)
```
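The sparsity penalty is easy to check numerically outside TensorFlow. A small NumPy sketch of the same `kl_divergence` formula (the mean activations below are made-up values, not from the original post):

```python
import numpy as np

def kl_divergence(p, q):
    # KL divergence between two Bernoulli distributions with means p and q.
    # Zero when q == p, and grows as q drifts away from the target p.
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

sparsity_target = 0.1

# Hypothetical mean activations of three coding units over a batch:
# one exactly on target, one too active, one nearly silent.
hidden1_mean = np.array([0.1, 0.3, 0.02])

sparsity_loss = np.sum(kl_divergence(sparsity_target, hidden1_mean))
```

Only units whose average activation strays from the 0.1 target contribute to the penalty, which is what pushes most codings toward being inactive most of the time.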

```python
# sigmoid keeps the coding activations in (0, 1), as the KL penalty requires
hidden1 = tf.nn.sigmoid(tf.matmul(X, weights1) + biases1)
```

```python
[...]
logits = tf.matmul(hidden1, weights2) + biases2
outputs = tf.nn.sigmoid(logits)

reconstruction_loss = tf.reduce_sum(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=X, logits=logits))
```
