# Tutorial | Implementing Time-Series Classification with LSTM and CNN in TensorFlow

GitHub project: https://github.com/healthDataScience/deep-learning-HAR

### Convolutional Neural Network (CNN)

```python
import tensorflow as tf

# seq_len, n_channels, n_classes are set during preprocessing
graph = tf.Graph()
with graph.as_default():
    inputs_ = tf.placeholder(tf.float32, [None, seq_len, n_channels], name='inputs')
    labels_ = tf.placeholder(tf.float32, [None, n_classes], name='labels')
    keep_prob_ = tf.placeholder(tf.float32, name='keep')
    learning_rate_ = tf.placeholder(tf.float32, name='learning_rate')
```

```python
with graph.as_default():
    # (batch, 128, 9) -> (batch, 32, 18)
    conv1 = tf.layers.conv1d(inputs=inputs_, filters=18, kernel_size=2, strides=1,
                             padding='same', activation=tf.nn.relu)
    max_pool_1 = tf.layers.max_pooling1d(inputs=conv1, pool_size=4, strides=4, padding='same')

    # (batch, 32, 18) -> (batch, 8, 36)
    conv2 = tf.layers.conv1d(inputs=max_pool_1, filters=36, kernel_size=2, strides=1,
                             padding='same', activation=tf.nn.relu)
    max_pool_2 = tf.layers.max_pooling1d(inputs=conv2, pool_size=4, strides=4, padding='same')

    # (batch, 8, 36) -> (batch, 2, 72)
    conv3 = tf.layers.conv1d(inputs=max_pool_2, filters=72, kernel_size=2, strides=1,
                             padding='same', activation=tf.nn.relu)
    max_pool_3 = tf.layers.max_pooling1d(inputs=conv3, pool_size=4, strides=4, padding='same')
```

1. Compute the softmax cross-entropy loss, the standard loss measure for multi-class problems.
2. Predict class labels as the class with the highest probability, and compute the accuracy.
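As a sanity check of what these two steps compute, here is a small NumPy illustration with hypothetical logits and one-hot labels (not part of the tutorial's graph):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # subtract max for numerical stability
    return e / e.sum(axis=1, keepdims=True)

logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 1.5, 0.3]])
labels = np.array([[1, 0, 0],
                   [0, 0, 1]])  # one-hot

# mean cross-entropy over the batch, as softmax_cross_entropy_with_logits computes
cost = -np.mean(np.sum(labels * np.log(softmax(logits)), axis=1))

# accuracy: fraction of rows where the predicted class matches the label
accuracy = np.mean(np.argmax(logits, 1) == np.argmax(labels, 1))
print(cost, accuracy)  # accuracy is 0.5: the first row is right, the second wrong
```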

```python
with graph.as_default():
    # Flatten the output of the last pooling layer and apply dropout
    flat = tf.reshape(max_pool_3, (-1, 2 * 72))
    flat = tf.nn.dropout(flat, keep_prob=keep_prob_)

    # Predictions
    logits = tf.layers.dense(flat, n_classes)

    # Cost function and optimizer
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels_))
    optimizer = tf.train.AdamOptimizer(learning_rate_).minimize(cost)

    # Accuracy
    correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(labels_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
```

### Long Short-Term Memory Network (LSTM)

LSTMs are very popular for text data, and have produced striking results in sentiment analysis, machine translation, and text generation. Because this problem likewise involves classifying sequences, an LSTM is a strong candidate.

```python
with graph.as_default():
    # Construct the LSTM inputs and LSTM cells
    lstm_in = tf.transpose(inputs_, [1, 0, 2])       # reshape into (seq_len, N, channels)
    lstm_in = tf.reshape(lstm_in, [-1, n_channels])  # now (seq_len*N, n_channels)

    # To cells
    lstm_in = tf.layers.dense(lstm_in, lstm_size, activation=None)

    # Open up the tensor into a list of seq_len pieces
    lstm_in = tf.split(lstm_in, seq_len, 0)

    # Build one cell per layer; reusing a single cell object across layers
    # causes variable-sharing errors in later TF 1.x releases
    cells = [tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.BasicLSTMCell(lstm_size),
                                           output_keep_prob=keep_prob_)
             for _ in range(lstm_layers)]
    cell = tf.contrib.rnn.MultiRNNCell(cells)
    initial_state = cell.zero_state(batch_size, tf.float32)
```
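The three reshaping steps above can be traced with NumPy to confirm the shapes. The sizes below are dummies chosen for illustration, and a plain matrix multiply stands in for the dense layer:

```python
import numpy as np

N, seq_len, n_channels, lstm_size = 4, 128, 9, 27

x = np.zeros((N, seq_len, n_channels))   # batch of input windows
x = np.transpose(x, (1, 0, 2))           # (seq_len, N, n_channels): time-major
x = x.reshape(-1, n_channels)            # (seq_len*N, n_channels)

W = np.zeros((n_channels, lstm_size))    # stand-in for the dense projection
x = x @ W                                # (seq_len*N, lstm_size)

pieces = np.split(x, seq_len, axis=0)    # list of seq_len inputs for static_rnn
print(len(pieces), pieces[0].shape)      # 128 (4, 27)
```

Because the transpose happens first, each contiguous block of N rows corresponds to one timestep, so the split yields exactly the per-step `(batch, lstm_size)` tensors that `static_rnn` expects.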

```python
with graph.as_default():
    outputs, final_state = tf.contrib.rnn.static_rnn(cell, lstm_in, dtype=tf.float32,
                                                     initial_state=initial_state)

    # We only need the last output tensor to pass into a classifier
    logits = tf.layers.dense(outputs[-1], n_classes, name='logits')

    # Cost function and optimizer
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels_))

    # Accuracy
    correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(labels_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
```

### Comparison with Traditional Methods

Classic machine-learning approaches to the HAR task: https://github.com/bhimmetoglu/talks-and-lectures/tree/master/MachineLearning/HAR
