In practice, the job of tf.nn.embedding_lookup is simply to pick out, from the given embedding data, the vectors stored at the requested row indices.
tf.nn.embedding_lookup(params, ids, partition_strategy='mod', name=None, validate_indices=True, max_norm=None)
Parameters (see the official documentation for the full details):
params is the tensor we hand in, typically an embedding table whose entries follow a uniform distribution on [0, 1] or a standard normal distribution; an existing array can be turned into a tensor with tf.convert_to_tensor. ids lists the positions in params that we want to fetch.
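Before the toy examples, it may help to see the usual setup: params as a trainable variable and ids as the token indices fed into the graph. The following is a minimal sketch assuming TF 1.x; the vocabulary size, embedding dimension, and ids are made-up numbers for illustration:
import tensorflow as tf

vocab_size, embed_dim = 1000, 64  # hypothetical sizes

# params: a trainable table initialized from a uniform distribution.
embeddings = tf.Variable(
    tf.random_uniform([vocab_size, embed_dim], -1.0, 1.0))

# ids: the rows we want to fetch.
token_ids = tf.constant([3, 17, 42])
vectors = tf.nn.embedding_lookup(embeddings, token_ids)
# Passing max_norm=1.0 here would additionally clip any returned
# vector whose L2 norm exceeds 1.0.

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(vectors).shape)  # (3, 64)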
An example:
import numpy as np
import tensorflow as tf

data = np.array([[[2], [1]], [[3], [4]], [[6], [7]]])
data = tf.convert_to_tensor(data)
lk = [[0, 1], [1, 0], [0, 0]]
lookup_data = tf.nn.embedding_lookup(data, lk)
sess = tf.Session()  # no variables here, so no initializer is needed
First, let's look at the shapes of the pieces involved:
In [76]: data.shape
Out[76]: (3, 2, 1)
In [77]: np.array(lk).shape
Out[77]: (3, 2)
In [78]: lookup_data
Out[78]: <tf.Tensor 'embedding_lookup_8:0' shape=(3, 2, 2, 1) dtype=int64>
How is that output shape produced? Here is the key rule:
Each value in lk is used as an index into the first dimension of the embedding data, and the vector found at that index is gathered into the output. The output shape is always lk's shape concatenated with data's shape minus its first dimension: here (3, 2) + (2, 1) = (3, 2, 2, 1). It also follows that every index in lk must be at most the size of data's first dimension minus one (here, at most 2). The same rule is checked with a short NumPy sketch after the output below.
Concretely, the result is:
In [79]: data
Out[79]:
array([[[2],
        [1]],

       [[3],
        [4]],

       [[6],
        [7]]])
In [80]: lk
Out[80]: [[0, 1], [1, 0], [0, 0]]
# lk[0], i.e. [0, 1], picks rows 0 and 1 of data, so the first entry of
# sess.run(lookup_data) below is exactly [[2], [1]], [[3], [4]]
In [81]: sess.run(lookup_data)
Out[81]:
array([[[[2],
         [1]],

        [[3],
         [4]]],


       [[[3],
         [4]],

        [[2],
         [1]]],


       [[[2],
         [1]],

        [[2],
         [1]]]])
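As noted above, on a single in-memory tensor this lookup is an ordinary gather along the first axis, so the shape rule can be verified with plain NumPy via np.take. A sketch, independent of TensorFlow:
import numpy as np

data = np.array([[[2], [1]], [[3], [4]], [[6], [7]]])
lk = np.array([[0, 1], [1, 0], [0, 0]])

# Gather along axis 0: each entry of lk selects one row of data,
# so the result shape is lk.shape + data.shape[1:].
gathered = np.take(data, lk, axis=0)
print(gathered.shape)  # (3, 2) + (2, 1) -> (3, 2, 2, 1)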
import numpy as np
import tensorflow as tf

c = np.random.random([10, 1])           # a 10-row table of random scalars
b = tf.nn.embedding_lookup(c, [1, 3])   # fetch rows 1 and 3

with tf.Session() as sess:
    print(sess.run(b))
    print(c)
Output:
[[ 0.77505197]
 [ 0.20635818]]
[[ 0.23976515]
 [ 0.77505197]
 [ 0.08798201]
 [ 0.20635818]
 [ 0.37183035]
 [ 0.24753178]
 [ 0.17718483]
 [ 0.38533808]
 [ 0.93345168]
 [ 0.02634772]]
Analysis: the output consists of the rows of c at indices 1 and 3 (the second and fourth rows), matching the ids [1, 3].
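For a single (unpartitioned) params tensor like this, tf.nn.embedding_lookup behaves the same as tf.gather, or plain NumPy row indexing. A quick sketch to confirm:
import numpy as np
import tensorflow as tf

c = np.random.random([10, 1])
lookup = tf.nn.embedding_lookup(c, [1, 3])
gather = tf.gather(c, [1, 3])  # same row selection on a single tensor

with tf.Session() as sess:
    lookup_val, gather_val = sess.run([lookup, gather])
    print(np.array_equal(lookup_val, gather_val))  # True
    print(np.array_equal(lookup_val, c[[1, 3]]))   # True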
import numpy as np
import tensorflow as tf

a = [[0.1, 0.2, 0.3], [1.1, 1.2, 1.3], [2.1, 2.2, 2.3], [3.1, 3.2, 3.3], [4.1, 4.2, 4.3]]
a = np.asarray(a)
idx1 = tf.Variable([0, 2, 3, 1], dtype=tf.int32)                  # 1-D ids
idx2 = tf.Variable([[0, 2, 3, 1], [4, 0, 2, 2]], dtype=tf.int32)  # 2-D ids
out1 = tf.nn.embedding_lookup(a, idx1)
out2 = tf.nn.embedding_lookup(a, idx2)
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    print(sess.run(out1))
    print(out1)
    print('==================')
    print(sess.run(out2))
    print(out2)
Output:
[[ 0.1  0.2  0.3]
 [ 2.1  2.2  2.3]
 [ 3.1  3.2  3.3]
 [ 1.1  1.2  1.3]]
Tensor("embedding_lookup:0", shape=(4, 3), dtype=float64)
==================
[[[ 0.1  0.2  0.3]
  [ 2.1  2.2  2.3]
  [ 3.1  3.2  3.3]
  [ 1.1  1.2  1.3]]

 [[ 4.1  4.2  4.3]
  [ 0.1  0.2  0.3]
  [ 2.1  2.2  2.3]
  [ 2.1  2.2  2.3]]]
Tensor("embedding_lookup_1:0", shape=(2, 4, 3), dtype=float64)
References:
https://blog.csdn.net/uestc_c2_403/article/details/72779417
https://www.jianshu.com/p/abea0d9d2436
https://www.cnblogs.com/gaofighting/p/9625868.html
https://blog.csdn.net/yangfengling1023/article/details/82910951