# A Small Classification Problem with an LSTM

```
import numpy as np
import tensorflow as tf  # TensorFlow 1.x API
from random import shuffle
```

The input has 2^20 possible combinations in total, so we generate all of them:

```
train_input = ['{0:020b}'.format(i) for i in range(2**20)]
shuffle(train_input)
train_input = [list(map(int, i)) for i in train_input]  # wrap map() in list() for Python 3
```

train_input now looks like this (each sample is a list of 20 bits):

```
[1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
[0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1]
[0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1]
```
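
As a quick check on the `'{0:020b}'` format string (a sketch using only the standard library):

```
# '{0:020b}' renders an integer as a zero-padded 20-bit binary string
sample = '{0:020b}'.format(5)
print(sample)         # '00000000000000000101'
bits = list(map(int, sample))
print(sum(bits))      # 2 ones
```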

```
ti = []
for i in train_input:
    temp_list = []
    for j in i:
        temp_list.append([j])
    ti.append(np.array(temp_list))

train_input = ti
```

After this conversion each sample is a (20, 1) array, one time step per bit:

```
[[1] [0] [0] [0] [1] [1] [1] [0] [1] [0] [0] [0] [0] [1] [0] [0] [0] [1] [0] [0]]
```
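
The nested loop above can equivalently be sketched with a numpy reshape; shape (20, 1) means 20 time steps of 1 feature each, which is the per-sample layout `dynamic_rnn` expects:

```
import numpy as np

sample = [1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
arr = np.array(sample).reshape(20, 1)  # one column: 20 time steps x 1 feature
print(arr.shape)  # (20, 1)
```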

```
train_output = []

for i in train_input:
    count = 0
    for j in i:
        if j[0] == 1:
            count += 1
    temp_list = [0] * 21
    temp_list[count] = 1
    train_output.append(temp_list)
```

train_output: the position of the single 1 says how many 1s the corresponding input contains; each label has length 21 (counts 0 through 20):

```
[0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```
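
The same label construction can be sketched with `np.eye`: count the ones, then pick that row of a 21×21 identity matrix as the one-hot label:

```
import numpy as np

sample = np.array([[1], [0], [0], [0], [1], [1], [1], [0], [1], [0],
                   [0], [0], [0], [1], [0], [0], [0], [1], [0], [0]])
count = int(sample.sum())               # number of 1s in the input
one_hot = np.eye(21, dtype=int)[count]  # 21 classes: 0..20 ones
print(count)  # 7
```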

```
NUM_EXAMPLES = 10000
test_input = train_input[NUM_EXAMPLES:]
test_output = train_output[NUM_EXAMPLES:]  # everything beyond 10,000

train_input = train_input[:NUM_EXAMPLES]
train_output = train_output[:NUM_EXAMPLES]  # till 10,000
```

```
data = tf.placeholder(tf.float32, [None, 20, 1])
target = tf.placeholder(tf.float32, [None, 21])
```

```
num_hidden = 24
# cell = tf.nn.rnn_cell.LSTMCell(num_hidden, state_is_tuple=True)
cell = tf.contrib.rnn.LSTMCell(num_hidden, state_is_tuple=True)
```

`val, _ = tf.nn.dynamic_rnn(cell, data, dtype=tf.float32)`

```
val = tf.transpose(val, [1, 0, 2])
last = tf.gather(val, int(val.get_shape()[0]) - 1)
```
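
What this transpose/gather pair does can be sketched in numpy: `dynamic_rnn` returns `[batch, time, hidden]`; transposing to `[time, batch, hidden]` and taking the last row picks each sequence's final output (the shapes below are illustrative):

```
import numpy as np

batch, time_steps, hidden = 4, 20, 24
outputs = np.random.rand(batch, time_steps, hidden)    # stand-in for dynamic_rnn output
last_step = np.transpose(outputs, (1, 0, 2))[-1]       # [time, batch, hidden] -> last time step
print(last_step.shape)  # (4, 24)
# equivalent to simply slicing outputs[:, -1, :]
assert np.allclose(last_step, outputs[:, -1, :])
```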

```
weight = tf.Variable(tf.truncated_normal([num_hidden, int(target.get_shape()[1])]))
bias = tf.Variable(tf.constant(0.1, shape=[target.get_shape()[1]]))
```

`prediction = tf.nn.softmax(tf.matmul(last, weight) + bias)`

```
cross_entropy = -tf.reduce_sum(target * tf.log(tf.clip_by_value(prediction, 1e-10, 1.0)))

# the optimizer was never defined above; Adam with default settings is a reasonable choice here
optimizer = tf.train.AdamOptimizer()
minimize = optimizer.minimize(cross_entropy)
```

```
mistakes = tf.not_equal(tf.argmax(target, 1), tf.argmax(prediction, 1))
error = tf.reduce_mean(tf.cast(mistakes, tf.float32))
```
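
The error metric is just the fraction of mismatched argmaxes; a numpy sketch on made-up labels and predictions:

```
import numpy as np

y_true = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])
y_pred = np.array([[0.1, 0.8, 0.1], [0.2, 0.7, 0.1], [0.1, 0.1, 0.8]])
mismatch = np.argmax(y_true, 1) != np.argmax(y_pred, 1)  # which rows are misclassified
err = mismatch.astype(np.float32).mean()
print(err)  # only the second row is wrong -> 1/3
```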

```
init_op = tf.global_variables_initializer()  # tf.initialize_all_variables() is deprecated
sess = tf.Session()
sess.run(init_op)
```

```
batch_size = 1000
no_of_batches = len(train_input) // batch_size  # integer division for Python 3
epoch = 600
```
```
for i in range(epoch):
    ptr = 0
    for j in range(no_of_batches):
        inp, out = train_input[ptr:ptr + batch_size], train_output[ptr:ptr + batch_size]
        ptr += batch_size
        sess.run(minimize, {data: inp, target: out})
    print("Epoch", str(i))

incorrect = sess.run(error, {data: test_input, target: test_output})

print(sess.run(prediction, {data: [[[1], [0], [0], [1], [1], [0], [1], [1], [1], [0], [1], [0], [0], [1], [1], [0], [1], [1], [1], [0]]]}))
print('Epoch {:2d} error {:3.1f}%'.format(i + 1, 100 * incorrect))

sess.close()
```

```
[[  2.80220238e-08   3.24575727e-10   5.68697936e-11   3.57573054e-10
    9.62089857e-08   1.30921896e-08   2.14473985e-08   5.21751364e-10
    2.29034747e-08   8.47907577e-10   3.60394756e-06   2.30961153e-03
    9.82593179e-01   1.50928665e-02   4.23395448e-07   1.06428047e-07
    6.70640388e-09   1.78888765e-10   3.22445395e-08   3.09186134e-08
    3.70296416e-09]]

Epoch 600 error 0.3%
```
