A Brief Look at Word2vec and a TensorFlow Implementation



An overview of word2vec

Word2Vec is a model for computing word vectors, proposed by Mikolov and colleagues at Google.

  • Input: a large corpus of already-tokenized text
  • Output: a dense vector representing each word

The real significance of word vectors is that they turn natural language into vectors a computer can work with. Compared with bag-of-words or TF-IDF models, word vectors capture a word's context and semantics and can measure how similar two words are, which makes them valuable in text classification, sentiment analysis, and many other natural language processing tasks.

The classic word-vector example: $\vec{man}-\vec{woman}\approx\vec{king}-\vec{queen}$

gensim already wraps a word2vec implementation in Python, so if you have a corpus you can train directly; see, for example, the word2vec experiments on the Chinese and English Wikipedia corpora.

Being able to train word vectors with gensim does not mean you really understand word2vec; it only means you can read documentation and call an API.
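For reference, training with gensim takes only a few lines. A minimal sketch, assuming gensim 4.x (older versions use size instead of vector_size) and a toy, pre-tokenized corpus:

# Minimal gensim sketch: train word vectors on a toy, pre-tokenized corpus.
from gensim.models import Word2Vec

sentences = [
    ["the", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"],
    ["the", "cat", "sat", "on", "the", "mat"],
]

# sg=1 selects skip-gram, sg=0 selects CBOW (both are discussed below).
model = Word2Vec(sentences, vector_size=300, window=2, min_count=1, sg=1)

vector = model.wv["cat"]                 # a 300-dimensional dense vector
print(model.wv.most_similar("cat"))      # nearest neighbours by cosine similarity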

Word2vec in detail

In a nutshell, word2vec is a three-layer neural network. The only prerequisites for understanding the implementation are neural networks and logistic regression.

Neural network structure

[Figure: word2vec architecture]

The figure above is a simplified flow chart of word2vec. Assume the vocabulary contains 10,000 words and the word vectors have 300 dimensions (according to Stanford CS224d, word vectors usually range from 25 to 1000 dimensions, and 300 is a good choice). Taking a single training example, the parts are as follows.

1. Input layer: the input is the one-hot representation of a word, a vector of length 10,000. Suppose the word is "ants" and its ID in the vocabulary is i; then the i-th component of the input vector is 1 and all the others are 0: [0, 0, ..., 0, 0, 1, 0, 0, ..., 0, 0].

2. Hidden layer: the number of hidden units equals the length of the word vector, so the hidden-layer parameters form a [10000, 300] matrix. This parameter matrix is, in fact, the table of word vectors. Recall how matrix multiplication works: a one-hot row vector times a matrix simply picks out the i-th row of the matrix. Passing through the hidden layer therefore maps the 10,000-dimensional one-hot vector to the 300-dimensional word vector we are after.

[Figure: matrix multiplication]
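A quick NumPy check of this "multiplication is lookup" fact (the matrix is random and the word ID is arbitrary; the shapes follow the running example):

import numpy as np

vocab_size, embed_dim = 10000, 300
W_hidden = np.random.rand(vocab_size, embed_dim)   # hidden-layer parameters = word-vector table

word_id = 42                                       # say this is the ID of "ants" (arbitrary)
one_hot = np.zeros(vocab_size)
one_hot[word_id] = 1.0

# A one-hot row vector times the matrix simply selects row word_id.
assert np.allclose(one_hot @ W_hidden, W_hidden[word_id])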

3. Output layer: the output layer has one unit per vocabulary word (10,000 in total), with a parameter matrix of shape [300, 10000]. The word vector is multiplied by this matrix and normalized with a softmax, giving a 10,000-dimensional vector again; each component is the probability that the corresponding vocabulary word appears in the same context as the input word (here, "ants").

[Figure: output layer]

The figure computes the probability that "car" co-occurs with "ants"; the 300-dimensional column vector for "car" is one column of the output-layer parameter matrix. Since that matrix is [300, 10000], the co-occurrence probability is computed for every word in the vocabulary. The output-layer parameters are of no further use once training is finished.
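Continuing the NumPy sketch with random toy parameters, the output layer turns a 300-dimensional word vector into 10,000 probabilities:

import numpy as np

vocab_size, embed_dim = 10000, 300
hidden = np.random.rand(embed_dim)                 # the 300-dim word vector of the input word
W_output = np.random.rand(embed_dim, vocab_size)   # [300, 10000] output-layer parameters

scores = hidden @ W_output                         # one raw score per vocabulary word

# Softmax: turn the scores into a probability for every word in the vocabulary.
probs = np.exp(scores - scores.max())
probs /= probs.sum()
print(probs.shape, round(probs.sum(), 6))          # (10000,) 1.0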

4. Training: a training example (x, y) has both an input and an output. We know which word actually co-occurs with "ants", so y is also a 10,000-dimensional (one-hot) vector. As in logistic regression, the loss function is the cross-entropy between the network's final output vector and y, and it is minimized with stochastic gradient descent.

[Figure: cross-entropy]
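Written out, with y the one-hot target and $\hat{y}$ the softmax output, the per-example loss reduces to the negative log-probability assigned to the word that actually co-occurs:

$$L(\hat{y},y)=-\sum_{i=1}^{10000} y_i\log\hat{y}_i=-\log\hat{y}_t$$

where t is the index of the observed context word (the only nonzero component of y).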

The steps above cover one word as input and one context word as output, but the real situation is more involved. What exactly counts as context? Do we use one word to predict the words around it, or use the surrounding words to predict a single word? This is where the two training schemes used in practice, skip-gram and CBOW, come in.

Skip-gram and CBOW

Skip-gram: the core idea is to predict the surrounding words from the center word. Suppose the center word is "cat" and the window size is 2; then we use "cat" to predict the two words on its left and the two words on its right. Here "cat" is the network's input and each predicted word is a label. The figure below shows an example:

[Figure: skip-gram training samples]

Here the window size is 2; the center word moves one position at a time until the whole text has been traversed. Each move of the center word produces at most 4 training pairs (input, label).
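To make the pair generation concrete, here is a tiny pure-Python sketch with a made-up sentence (the real batch generator used for training appears in the TensorFlow section below):

# Skip-gram pair generation with window size 2 on a toy sentence.
text = "the quick brown fox jumps over the lazy dog".split()
window = 2

pairs = []
for i, center in enumerate(text):
    for j in range(max(0, i - window), min(len(text), i + window + 1)):
        if j != i:
            pairs.append((center, text[j]))   # (input, label)

print(pairs[:4])
# [('the', 'quick'), ('the', 'brown'), ('quick', 'the'), ('quick', 'brown')]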

CBOW (continuous bag-of-words): once you understand skip-gram, CBOW is simply the other way around: all of the surrounding words are used to predict the center word, so each move of the center word produces only one training example. With the same example text as above, CBOW would generate the following 4 training samples:

  1. ([quick, brown], the)
  2. ([the, brown, fox], quick)
  3. ([the, quick, fox, jumps], brown)
  4. ([quick, brown, jumps, over], fox)

Now the input may contain as many as 4 words while the label is a single word. What do we do? The answer is simple: take the average. After the hidden layer, the 4 input words have been mapped to four 300-dimensional vectors; averaging them gives the input to the next layer, as in the sketch below.
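A minimal sketch of that averaging step, with random toy vectors standing in for the hidden-layer outputs of the four context words:

import numpy as np

embed_dim = 300
context_vectors = np.random.rand(4, embed_dim)   # e.g. the vectors for quick, brown, jumps, over

# CBOW input to the output layer: the element-wise mean of the context vectors.
cbow_input = context_vectors.mean(axis=0)
print(cbow_input.shape)                          # (300,)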

Comparing the two models: skip-gram produces more training samples and captures more of the fine-grained semantic detail between words, so with a large enough, good enough corpus skip-gram tends to beat CBOW. With a small corpus there is not enough data to capture those details, and the averaging in CBOW can actually give better results.

Negative sampling

In actual training, still assuming a 10,000-word vocabulary and 300-dimensional vectors, each layer of the network has 3 million parameters (10,000 × 300), and the output layer is effectively a classification problem with 10,000 possible classes. As you can imagine, the computational cost is enormous.

Mikolov et al. proposed several optimizations; here we focus on negative sampling.

The idea behind negative sampling is almost offensively simple. We know the network's softmax produces a vector in which only the component for the correct word should be the largest; every other word is a negative sample. Instead of using all of them, we pick just 5 negative samples, so the output vector becomes only 6-dimensional (1 positive plus 5 negatives) and the parameters involved drop from 3 million to 1,800 (6 × 300). It looks like a lazy shortcut, but in practice it works very well and greatly speeds up training.

Remember that each training step normally nudges every parameter of the network a little. In word2vec, a single training example does not touch all of the parameters: if the input word is "cat", then of the 3 million hidden-layer parameters only the 300 belonging to "cat" are updated, because the hidden layer's output depends only on those 300 parameters.

Negative sampling works precisely because we do not need many negative samples. In the paper, Mikolov et al. recommend 5-20 negative samples for small datasets and 2-5 for large ones.
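As a rough illustration, this is the per-example skip-gram negative-sampling objective from the Mikolov et al. paper, written in NumPy with made-up vectors (the TensorFlow code below uses the closely related tf.nn.sampled_softmax_loss instead):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

embed_dim = 300
center = np.random.rand(embed_dim)        # hidden-layer vector of the input word
positive = np.random.rand(embed_dim)      # output vector of the true context word
negatives = np.random.rand(5, embed_dim)  # output vectors of 5 sampled negative words

# Pull the true pair together and push the 5 negative pairs apart; minimize the negative log-likelihood.
loss = -np.log(sigmoid(positive @ center)) - np.sum(np.log(sigmoid(-negatives @ center)))
print(loss)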

How exactly are the negative samples chosen? The paper gives the following formula:

$$P(w_i)=\frac{f(w_i)^{3/4}}{\sum_{j=0}^{n} f(w_j)^{3/4}}$$

Here f(w) is the word's frequency. As the formula shows, the choice of negative samples depends only on word frequency: the more frequent a word, the more likely it is to be picked.
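A quick NumPy illustration with made-up frequencies (the 3/4 exponent is the value used in the paper):

import numpy as np

word_freq = np.array([100.0, 50.0, 10.0, 1.0])   # toy word counts f(w)

probs = word_freq ** 0.75
probs /= probs.sum()
print(probs)   # more frequent words get a higher chance of being drawn as negatives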

TensorFlow implementation

Finally, let's get hands-on with TensorFlow. The code follows one of the assignments from Udacity's Deep Learning course.

Here we only train 128-dimensional word vectors and visualize them with t-SNE. It is good practice for understanding word2vec in depth; for real applications, gensim is still the recommended tool.

# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
%matplotlib inline
from __future__ import print_function
import collections
import math
import numpy as np
import os
import random
import tensorflow as tf
import zipfile
from matplotlib import pylab
from six.moves import range
from six.moves.urllib.request import urlretrieve
from sklearn.manifold import TSNE

Download the data from the source website if necessary.

url = 'http://mattmahoney.net/dc/'

def maybe_download(filename, expected_bytes):
  """Download a file if not present, and make sure it's the right size."""
  if not os.path.exists(filename):
    filename, _ = urlretrieve(url + filename, filename)
  statinfo = os.stat(filename)
  if statinfo.st_size == expected_bytes:
    print('Found and verified %s' % filename)
  else:
    print(statinfo.st_size)
    raise Exception(
      'Failed to verify ' + filename + '. Can you get to it with a browser?')
  return filename

filename = maybe_download('text8.zip', 31344016)

# Output: Found and verified text8.zip

Read the data into a string.

def read_data(filename):
  """Extract the first file enclosed in a zip file as a list of words"""
  with zipfile.ZipFile(filename) as f:
    data = tf.compat.as_str(f.read(f.namelist()[0])).split()
  return data

words = read_data(filename)
print('Data size %d' % len(words))

# Output: Data size 17005207

Build the dictionary and replace rare words with UNK token.

vocabulary_size = 50000

def build_dataset(words):
  count = [['UNK', -1]]
  count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
  dictionary = dict()
  for word, _ in count:
    dictionary[word] = len(dictionary)
  data = list()
  unk_count = 0
  for word in words:
    if word in dictionary:
      index = dictionary[word]
    else:
      index = 0  # dictionary['UNK']
      unk_count = unk_count + 1
    data.append(index)
  count[0][1] = unk_count
  reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
  return data, count, dictionary, reverse_dictionary

data, count, dictionary, reverse_dictionary = build_dataset(words)
print('Most common words (+UNK)', count[:5])
print('Sample data', data[:10])
del words  # Hint to reduce memory.
Most common words (+UNK) [['UNK', 418391], ('the', 1061396), ('of', 593677), ('and', 416629), ('one', 411764)]
Sample data [5239, 3084, 12, 6, 195, 2, 3137, 46, 59, 156]

Function to generate a training batch for the skip-gram model.

data_index = 0

def generate_batch(batch_size, num_skips, skip_window):
  global data_index
  assert batch_size % num_skips == 0
  assert num_skips <= 2 * skip_window
  batch = np.ndarray(shape=(batch_size), dtype=np.int32)
  labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
  span = 2 * skip_window + 1  # [ skip_window target skip_window ]
  buffer = collections.deque(maxlen=span)
  for _ in range(span):
    buffer.append(data[data_index])
    data_index = (data_index + 1) % len(data)
  for i in range(batch_size // num_skips):
    target = skip_window  # target label at the center of the buffer
    targets_to_avoid = [skip_window]
    for j in range(num_skips):
      while target in targets_to_avoid:
        target = random.randint(0, span - 1)
      targets_to_avoid.append(target)
      batch[i * num_skips + j] = buffer[skip_window]
      labels[i * num_skips + j, 0] = buffer[target]
    buffer.append(data[data_index])
    data_index = (data_index + 1) % len(data)
  return batch, labels

print('data:', [reverse_dictionary[di] for di in data[:8]])

for num_skips, skip_window in [(2, 1), (4, 2)]:
  data_index = 0
  batch, labels = generate_batch(batch_size=8, num_skips=num_skips, skip_window=skip_window)
  print('\nwith num_skips = %d and skip_window = %d:' % (num_skips, skip_window))
  print('    batch:', [reverse_dictionary[bi] for bi in batch])
  print('    labels:', [reverse_dictionary[li] for li in labels.reshape(8)])
data: ['anarchism', 'originated', 'as', 'a', 'term', 'of', 'abuse', 'first']

with num_skips = 2 and skip_window = 1:
    batch: ['originated', 'originated', 'as', 'as', 'a', 'a', 'term', 'term']
    labels: ['anarchism', 'as', 'originated', 'a', 'as', 'term', 'a', 'of']

with num_skips = 4 and skip_window = 2:
    batch: ['as', 'as', 'as', 'as', 'a', 'a', 'a', 'a']
    labels: ['originated', 'term', 'anarchism', 'a', 'of', 'as', 'originated', 'term']

Skip-Gram

Train a skip-gram model.

batch_size = 128
embedding_size = 128  # Dimension of the embedding vector.
skip_window = 1       # How many words to consider left and right.
num_skips = 2         # How many times to reuse an input to generate a label.
# We pick a random validation set to sample nearest neighbors. Here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16     # Random set of words to evaluate similarity on.
valid_window = 100  # Only pick dev samples in the head of the distribution.
valid_examples = np.array(random.sample(range(valid_window), valid_size))
####### important #########
num_sampled = 64    # Number of negative examples to sample.

graph = tf.Graph()

with graph.as_default(), tf.device('/cpu:0'):
  # Input data.
  train_dataset = tf.placeholder(tf.int32, shape=[batch_size])
  train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
  valid_dataset = tf.constant(valid_examples, dtype=tf.int32)

  # Variables.
  embeddings = tf.Variable(
    tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
  softmax_weights = tf.Variable(
    tf.truncated_normal([vocabulary_size, embedding_size],
                        stddev=1.0 / math.sqrt(embedding_size)))
  softmax_biases = tf.Variable(tf.zeros([vocabulary_size]))

  # Model.
  # Look up embeddings for inputs.
  embed = tf.nn.embedding_lookup(embeddings, train_dataset)
  # Compute the softmax loss, using a sample of the negative labels each time.
  loss = tf.reduce_mean(
    tf.nn.sampled_softmax_loss(weights=softmax_weights, biases=softmax_biases, inputs=embed,
                               labels=train_labels, num_sampled=num_sampled, num_classes=vocabulary_size))

  # Optimizer.
  # Note: The optimizer will optimize the softmax_weights AND the embeddings.
  # This is because the embeddings are defined as a variable quantity and the
  # optimizer's `minimize` method will by default modify all variable quantities
  # that contribute to the tensor it is passed.
  # See docs on `tf.train.Optimizer.minimize()` for more details.
  optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)

  # Compute the similarity between minibatch examples and all embeddings.
  # We use the cosine distance:
  norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
  normalized_embeddings = embeddings / norm
  valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset)
  similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
num_steps = 100001

with tf.Session(graph=graph) as session:
  tf.global_variables_initializer().run()
  print('Initialized')
  average_loss = 0
  for step in range(num_steps):
    batch_data, batch_labels = generate_batch(
      batch_size, num_skips, skip_window)
    feed_dict = {train_dataset: batch_data, train_labels: batch_labels}
    _, l = session.run([optimizer, loss], feed_dict=feed_dict)
    average_loss += l
    if step % 2000 == 0:
      if step > 0:
        average_loss = average_loss / 2000
      # The average loss is an estimate of the loss over the last 2000 batches.
      print('Average loss at step %d: %f' % (step, average_loss))
      average_loss = 0
    # note that this is expensive (~20% slowdown if computed every 500 steps)
    if step % 10000 == 0:
      sim = similarity.eval()
      for i in range(valid_size):
        valid_word = reverse_dictionary[valid_examples[i]]
        top_k = 8  # number of nearest neighbors
        nearest = (-sim[i, :]).argsort()[1:top_k+1]
        log = 'Nearest to %s:' % valid_word
        for k in range(top_k):
          close_word = reverse_dictionary[nearest[k]]
          log = '%s %s,' % (log, close_word)
        print(log)
  final_embeddings = normalized_embeddings.eval()
num_points = 400

tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
two_d_embeddings = tsne.fit_transform(final_embeddings[1:num_points+1, :])
def plot(embeddings, labels):
  assert embeddings.shape[0] >= len(labels), 'More labels than embeddings'
  pylab.figure(figsize=(15, 15))  # in inches
  for i, label in enumerate(labels):
    x, y = embeddings[i, :]
    pylab.scatter(x, y)
    pylab.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points',
                   ha='right', va='bottom')
  pylab.show()

words = [reverse_dictionary[i] for i in range(1, num_points+1)]
plot(two_d_embeddings, words)

[Figure: t-SNE visualization of the skip-gram embeddings]

CBOW

data_index_cbow = 0

def get_cbow_batch(batch_size, num_skips, skip_window):
  global data_index_cbow
  assert batch_size % num_skips == 0
  assert num_skips <= 2 * skip_window
  batch = np.ndarray(shape=(batch_size), dtype=np.int32)
  labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
  span = 2 * skip_window + 1  # [ skip_window target skip_window ]
  buffer = collections.deque(maxlen=span)
  for _ in range(span):
    buffer.append(data[data_index_cbow])
    data_index_cbow = (data_index_cbow + 1) % len(data)
  for i in range(batch_size // num_skips):
    target = skip_window  # target label at the center of the buffer
    targets_to_avoid = [skip_window]
    for j in range(num_skips):
      while target in targets_to_avoid:
        target = random.randint(0, span - 1)
      targets_to_avoid.append(target)
      batch[i * num_skips + j] = buffer[skip_window]
      labels[i * num_skips + j, 0] = buffer[target]
    buffer.append(data[data_index_cbow])
    data_index_cbow = (data_index_cbow + 1) % len(data)
  # Re-pack the skip-gram pairs: the context words become the CBOW input,
  # the center words become the CBOW labels.
  cbow_batch = np.ndarray(shape=(batch_size), dtype=np.int32)
  cbow_labels = np.ndarray(shape=(batch_size // (skip_window * 2), 1), dtype=np.int32)
  for i in range(batch_size):
    cbow_batch[i] = labels[i]
  cbow_batch = np.reshape(cbow_batch, [batch_size // (skip_window * 2), skip_window * 2])
  for i in range(batch_size // (skip_window * 2)):
    # center word
    cbow_labels[i] = batch[2 * skip_window * i]
  return cbow_batch, cbow_labels

# actual batch_size = batch_size // (2 * skip_window)
batch_size = 128
embedding_size = 128  # Dimension of the embedding vector.
skip_window = 1       # How many words to consider left and right.
num_skips = 2         # How many times to reuse an input to generate a label.
# We pick a random validation set to sample nearest neighbors. Here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16     # Random set of words to evaluate similarity on.
valid_window = 100  # Only pick dev samples in the head of the distribution.
valid_examples = np.array(random.sample(range(valid_window), valid_size))
####### important #########
num_sampled = 64    # Number of negative examples to sample.

graph = tf.Graph()

with graph.as_default(), tf.device('/cpu:0'):
  # Input data.
  train_dataset = tf.placeholder(tf.int32, shape=[batch_size // (skip_window * 2), skip_window * 2])
  train_labels = tf.placeholder(tf.int32, shape=[batch_size // (skip_window * 2), 1])
  valid_dataset = tf.constant(valid_examples, dtype=tf.int32)

  # Variables.
  embeddings = tf.Variable(
    tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
  softmax_weights = tf.Variable(
    tf.truncated_normal([vocabulary_size, embedding_size],
                        stddev=1.0 / math.sqrt(embedding_size)))
  softmax_biases = tf.Variable(tf.zeros([vocabulary_size]))

  # Model.
  # Look up embeddings for inputs: shape [batch, 2 * skip_window, embedding_size].
  embed = tf.nn.embedding_lookup(embeddings, train_dataset)
  # Average the 2 * skip_window context-word embeddings of each example.
  embed = tf.reduce_mean(embed, 1)
  # Compute the softmax loss, using a sample of the negative labels each time.
  loss = tf.reduce_mean(
    tf.nn.sampled_softmax_loss(weights=softmax_weights, biases=softmax_biases, inputs=embed,
                               labels=train_labels, num_sampled=num_sampled, num_classes=vocabulary_size))

  # Optimizer.
  # Note: The optimizer will optimize the softmax_weights AND the embeddings.
  # This is because the embeddings are defined as a variable quantity and the
  # optimizer's `minimize` method will by default modify all variable quantities
  # that contribute to the tensor it is passed.
  # See docs on `tf.train.Optimizer.minimize()` for more details.
  optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)

  # Compute the similarity between minibatch examples and all embeddings.
  # We use the cosine distance:
  norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
  normalized_embeddings = embeddings / norm
  valid_embeddings = tf.nn.embedding_lookup(
    normalized_embeddings, valid_dataset)
  similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
num_steps = 100001

with tf.Session(graph=graph) as session:
  tf.global_variables_initializer().run()
  print('Initialized')
  average_loss = 0
  for step in range(num_steps):
    batch_data, batch_labels = get_cbow_batch(
      batch_size, num_skips, skip_window)
    feed_dict = {train_dataset: batch_data, train_labels: batch_labels}
    _, l = session.run([optimizer, loss], feed_dict=feed_dict)
    average_loss += l
    if step % 2000 == 0:
      if step > 0:
        average_loss = average_loss / 2000
      # The average loss is an estimate of the loss over the last 2000 batches.
      print('Average loss at step %d: %f' % (step, average_loss))
      average_loss = 0
    # note that this is expensive (~20% slowdown if computed every 500 steps)
    if step % 10000 == 0:
      sim = similarity.eval()
      for i in range(valid_size):
        valid_word = reverse_dictionary[valid_examples[i]]
        top_k = 8  # number of nearest neighbors
        nearest = (-sim[i, :]).argsort()[1:top_k+1]
        log = 'Nearest to %s:' % valid_word
        for k in range(top_k):
          close_word = reverse_dictionary[nearest[k]]
          log = '%s %s,' % (log, close_word)
        print(log)
  final_embeddings = normalized_embeddings.eval()
num_points = 400

tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
two_d_embeddings = tsne.fit_transform(final_embeddings[1:num_points+1, :])
# Labels must line up with the rows of two_d_embeddings (word IDs 1..num_points).
words = [reverse_dictionary[i] for i in range(1, num_points+1)]
plot(two_d_embeddings, words)

[Figure: t-SNE visualization of the CBOW embeddings]

References

1. Le Q. V., Mikolov T. Distributed Representations of Sentences and Documents. 2014, 4: II-1188.

2. Mikolov T., Sutskever I., Chen K., et al. Distributed Representations of Words and Phrases and their Compositionality. Advances in Neural Information Processing Systems, 2013, 26: 3111-3119.

3. Word2Vec Tutorial - The Skip-Gram Model

4. Udacity Deep Learning

5. Stanford CS224d, Lectures 2 and 3

Original article: https://www.jianshu.com/p/b779f8219f74
