An Overview of How Word2vec Works, with a TensorFlow Implementation


Introduction to word2vec

Word2Vec is a word-vector model proposed by Mikolov and colleagues at Google.

  • Input: a large amount of pre-segmented (tokenized) text
  • Output: a dense vector representing each word

The significance of word vectors is that they turn natural language into vectors a computer can work with. Compared with bag-of-words or TF-IDF models, word vectors capture a word's context and semantics and can measure similarity between words, which makes them valuable in text classification, sentiment analysis, and many other natural language processing tasks.

The classic word-vector example: $\vec{man}-\vec{woman}\approx\vec{king}-\vec{queen}$

gensim already ships a Python implementation of word2vec, so if you have a corpus you can train directly; see, for example, the Word2Vec experiments on the Chinese and English Wikipedia corpora.
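For a sense of how little code the gensim route takes, here is a minimal sketch. It assumes gensim 4.x and a pre-tokenized corpus file named corpus.txt (one sentence per line); the file name and hyperparameters are placeholders, not from the original article.

from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

sentences = LineSentence('corpus.txt')            # already-segmented text, one sentence per line
model = Word2Vec(sentences, vector_size=300,      # 300-dimensional word vectors
                 window=5, min_count=5, sg=1)     # sg=1 -> skip-gram, sg=0 -> CBOW

# The classic analogy: vec(king) - vec(man) + vec(woman) is close to vec(queen)
print(model.wv.most_similar(positive=['king', 'woman'], negative=['man'], topn=3))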

Being able to train word vectors with gensim does not mean you really understand word2vec; it only means you can read documentation and call an API.

How word2vec is implemented

In a nutshell, word2vec is a three-layer neural network. To understand the implementation you need some background in neural networks and logistic regression.

Network structure

[Figure: word2vec overview diagram]

The figure above is a simplified flow of Word2vec. Assume the vocabulary contains 10,000 words and the word vectors are 300-dimensional (according to the Stanford CS224d lectures, word vectors typically have 25 to 1000 dimensions, and 300 is a good choice). Taking a single training sample as an example, each part is described in turn below.

1. Input layer: the input is a word's one-hot representation, a vector of length 10,000. Suppose the word is "ants" and its ID in the vocabulary is i; then the i-th component of the input vector is 1 and all others are 0: [0, 0, ..., 0, 0, 1, 0, 0, ..., 0, 0].

2. Hidden layer: the number of hidden units equals the word-vector length. The hidden layer's parameters form a [10000, 300] matrix, and this parameter matrix is in fact the word vectors themselves. Recall matrix multiplication: a one-hot row vector times a matrix simply picks out row i of the matrix. Passing through the hidden layer therefore maps the 10,000-dimensional one-hot vector to the 300-dimensional word vector we ultimately want.

[Figure: matrix multiplication]
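A toy sketch of this lookup (a 10-word vocabulary and 3-dimensional vectors instead of 10000 x 300; the values are random): multiplying a one-hot row vector by the hidden-layer matrix just selects row i, i.e. that word's vector.

import numpy as np

vocab_size, embed_dim = 10, 3
W_hidden = np.random.randn(vocab_size, embed_dim)   # this matrix *is* the word vectors

i = 6                                     # ID of the input word, e.g. "ants"
one_hot = np.zeros(vocab_size)
one_hot[i] = 1.0

hidden = one_hot @ W_hidden               # the hidden-layer "computation"
assert np.allclose(hidden, W_hidden[i])   # identical to simply reading off row i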

3. Output layer: the output layer has as many neurons as there are words, 10,000, with a parameter matrix of shape [300, 10000]. The word vector is multiplied by this matrix and then normalized with softmax, producing a 10,000-dimensional vector again; each component is the probability that the corresponding vocabulary word appears in the same context as the input word (here, "ants").

[Figure: output layer]

The figure above computes the probability that "car" co-occurs with "ants"; the 300-dimensional column vector for "car" is one column of the output-layer parameter matrix. Since that matrix is [300, 10000], the co-occurrence probability with "ants" is computed for every word in the vocabulary. The output-layer parameters are of no further use once training is finished.

4. Training: a training sample (x, y) has both an input and an output. We know which word actually co-occurs with "ants", so y is also a 10,000-dimensional (one-hot) vector. As in logistic regression, the loss function is the cross-entropy between the network's final output vector and y, and it is minimized with stochastic gradient descent.

[Figure: cross-entropy]
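A toy sketch of the output layer and loss (3-dimensional hidden vector and a 10-word vocabulary instead of 300 and 10,000; all values are made up): project the hidden vector back to vocabulary size, apply softmax, and take the cross-entropy against the one-hot label for the word that actually co-occurred with the input.

import numpy as np

vocab_size, embed_dim = 10, 3
hidden = np.random.randn(embed_dim)              # word vector from the hidden layer
W_out = np.random.randn(embed_dim, vocab_size)   # [300, 10000] in the article

logits = hidden @ W_out
probs = np.exp(logits - logits.max())            # softmax, shifted for numerical stability
probs /= probs.sum()

target = 2                                       # ID of the observed context word
loss = -np.log(probs[target])                    # cross-entropy with a one-hot y
print(loss)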

The steps above describe one word as input and one context word as output, but the real setting is more involved. What counts as context? Do we use one word to predict its neighbors, or several surrounding words to predict one word? This is where the two models actually used for training, skip-gram and CBOW, come in.

skip-gram and CBOW

skip-gram: the core idea is to predict the surrounding words from the center word. Suppose the center word is "cat" and the window size is 2; then from "cat" we predict the two words to its left and the two to its right. Here "cat" is the network's input and each predicted word is a label. The figure below gives an example:

[Figure: skip-gram]

Here the window size is 2 and the center word slides one position at a time over the whole text. Each move of the center word yields at most 4 training pairs (input, label).
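A simplified sketch of this pairing (this is only an illustration, not the generate_batch function used in the TensorFlow code later; the sentence is just an example):

sentence = ['the', 'quick', 'brown', 'fox', 'jumps', 'over', 'the', 'lazy', 'dog']
window = 2

pairs = []
for i, center in enumerate(sentence):
    for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
        if j != i:
            pairs.append((center, sentence[j]))   # (input, label)

print(pairs[:4])
# [('the', 'quick'), ('the', 'brown'), ('quick', 'the'), ('quick', 'brown')]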

CBOW (continuous bag-of-words): if you understand skip-gram, CBOW is simply the reverse: all the surrounding words are used to predict the center word, so each move of the center word produces only one training sample. With the same example as above, CBOW would produce the following 4 training samples:

  1. ([quick, brown], the)
  2. ([the, brown, fox], quick)
  3. ([the, quick, fox, jumps], brown)
  4. ([quick, brown, jumps, over], fox)

Now the input is typically 4 words while the label is a single word. How is that handled? Simply by averaging: after the hidden layer, the 4 input words are mapped to four 300-dimensional vectors; averaging those four vectors gives the input to the next layer, as the sketch below illustrates.
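A toy sketch of the CBOW averaging step (a 50-word vocabulary and 4-dimensional vectors instead of 10000 x 300; the word IDs are made up): look up the four context embeddings and average them into a single vector, which is what gets fed to the output layer.

import numpy as np

embed_dim = 4
embeddings = np.random.randn(50, embed_dim)   # toy embedding matrix

context_ids = [11, 7, 23, 42]                 # e.g. [quick, brown, jumps, over]
context_vectors = embeddings[context_ids]     # shape (4, embed_dim)
hidden = context_vectors.mean(axis=0)         # shape (embed_dim,): averaged input to the next layer
print(hidden.shape)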

Comparing the two: skip-gram produces more training samples and captures finer-grained semantic detail between words, so given a large enough, good enough corpus it outperforms CBOW. With a small corpus there isn't enough data to capture that detail, and CBOW's averaging can actually work better.

Negative sampling

In actual training, still assuming a 10,000-word vocabulary and 300-dimensional vectors, each layer of the network has 3 million parameters (10,000 x 300), and the output layer amounts to a 10,000-class classification problem. As you can imagine, the computational cost is enormous.

Mikolov and colleagues proposed several optimizations; here we focus on negative sampling.

The idea behind negative sampling is almost embarrassingly simple. The network's softmax output is a vector in which only the component with the highest probability corresponds to the correct word; all the others are negative samples. If we keep just 5 negative samples, the output vector is effectively only 6-dimensional, and the parameters involved drop from 3 million to 1,800 (6 x 300). It looks lazy, but it works very well in practice and greatly improves training efficiency.

Recall that each training step normally nudges all of the network's parameters slightly. In word2vec, a single training sample does not touch every parameter. If the input word is "cat", the hidden layer has 3 million parameters, but this step only updates the 300 parameters corresponding to "cat", because the hidden layer's output depends only on those 300 parameters.
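A toy sketch (10-word vocabulary, 4-dimensional vectors; all values are made up) of why one training step touches only one row of the hidden-layer matrix: the gradient of the loss with respect to W_hidden is the outer product of the one-hot input and the error signal reaching the hidden layer, which is zero everywhere except row i.

import numpy as np

vocab_size, embed_dim = 10, 4
W_hidden = np.random.randn(vocab_size, embed_dim)

i = 3                                     # ID of the input word, e.g. "cat"
x = np.zeros(vocab_size)
x[i] = 1.0                                # one-hot input

grad_hidden = np.random.randn(embed_dim)  # error signal arriving at the hidden layer
grad_W = np.outer(x, grad_hidden)         # dL/dW_hidden

print(np.nonzero(grad_W.any(axis=1))[0])  # -> [3]: only row i receives an update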

Negative sampling is effective, and not many negative samples are needed. Mikolov et al. report in the paper that 5-20 negative samples work well for small datasets and 2-5 for large datasets.

How are the negative-sample words chosen? The paper gives the following formula:

$P(w_i)=\dfrac{f(w_i)^{3/4}}{\sum_{j=0}^{n} f(w_j)^{3/4}}$

Here f(w) is the word's frequency. As you can see, the choice of negative samples depends only on word frequency: the more frequent a word, the more likely it is to be chosen.
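A small sketch of this selection rule (the vocabulary and counts are made up for illustration): raise each word's frequency to the 3/4 power, normalize into a probability distribution, and draw negative samples from it. In the real implementation the positive (target) word is also excluded from the draw.

import numpy as np

vocab = ['the', 'cat', 'sat', 'on', 'mat']
freqs = np.array([1000., 50., 20., 300., 10.])

probs = freqs ** 0.75
probs /= probs.sum()

negative_ids = np.random.choice(len(vocab), size=2, replace=False, p=probs)
print([vocab[i] for i in negative_ids])   # frequent words are picked more often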

TensorFlow implementation

Finally, let's get hands-on with TensorFlow. This follows an assignment from Udacity Deep Learning.

Here we only train 128-dimensional word vectors and visualize them with t-SNE. It is a good exercise for understanding word2vec in depth; for practical work, gensim is still the recommended choice.

# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
%matplotlib inline
from __future__ import print_function
import collections
import math
import numpy as np
import os
import random
import tensorflow as tf
import zipfile
from matplotlib import pylab
from six.moves import range
from six.moves.urllib.request import urlretrieve
from sklearn.manifold import TSNE

Download the data from the source website if necessary.

url = 'http://mattmahoney.net/dc/'

def maybe_download(filename, expected_bytes):
    """Download a file if not present, and make sure it's the right size."""
    if not os.path.exists(filename):
        filename, _ = urlretrieve(url + filename, filename)
    statinfo = os.stat(filename)
    if statinfo.st_size == expected_bytes:
        print('Found and verified %s' % filename)
    else:
        print(statinfo.st_size)
        raise Exception(
            'Failed to verify ' + filename + '. Can you get to it with a browser?')
    return filename

filename = maybe_download('text8.zip', 31344016)

Found and verified text8.zip

Read the data into a string.

def read_data(filename):
    """Extract the first file enclosed in a zip file as a list of words"""
    with zipfile.ZipFile(filename) as f:
        data = tf.compat.as_str(f.read(f.namelist()[0])).split()
    return data

words = read_data(filename)
print('Data size %d' % len(words))

Data size 17005207

Build the dictionary and replace rare words with UNK token.

vocabulary_size = 50000

def build_dataset(words):
    count = [['UNK', -1]]
    count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
    dictionary = dict()
    for word, _ in count:
        dictionary[word] = len(dictionary)
    data = list()
    unk_count = 0
    for word in words:
        if word in dictionary:
            index = dictionary[word]
        else:
            index = 0  # dictionary['UNK']
            unk_count = unk_count + 1
        data.append(index)
    count[0][1] = unk_count
    reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
    return data, count, dictionary, reverse_dictionary

data, count, dictionary, reverse_dictionary = build_dataset(words)
print('Most common words (+UNK)', count[:5])
print('Sample data', data[:10])
del words  # Hint to reduce memory.

Most common words (+UNK) [['UNK', 418391], ('the', 1061396), ('of', 593677), ('and', 416629), ('one', 411764)]
Sample data [5239, 3084, 12, 6, 195, 2, 3137, 46, 59, 156]

Function to generate a training batch for the skip-gram model.

data_index = 0

def generate_batch(batch_size, num_skips, skip_window):
    global data_index
    assert batch_size % num_skips == 0
    assert num_skips <= 2 * skip_window
    batch = np.ndarray(shape=(batch_size), dtype=np.int32)
    labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
    span = 2 * skip_window + 1  # [ skip_window target skip_window ]
    buffer = collections.deque(maxlen=span)
    for _ in range(span):
        buffer.append(data[data_index])
        data_index = (data_index + 1) % len(data)
    for i in range(batch_size // num_skips):
        target = skip_window  # target label at the center of the buffer
        targets_to_avoid = [skip_window]
        for j in range(num_skips):
            while target in targets_to_avoid:
                target = random.randint(0, span - 1)
            targets_to_avoid.append(target)
            batch[i * num_skips + j] = buffer[skip_window]
            labels[i * num_skips + j, 0] = buffer[target]
        buffer.append(data[data_index])
        data_index = (data_index + 1) % len(data)
    return batch, labels

print('data:', [reverse_dictionary[di] for di in data[:8]])

for num_skips, skip_window in [(2, 1), (4, 2)]:
    data_index = 0
    batch, labels = generate_batch(batch_size=8, num_skips=num_skips, skip_window=skip_window)
    print('\nwith num_skips = %d and skip_window = %d:' % (num_skips, skip_window))
    print('    batch:', [reverse_dictionary[bi] for bi in batch])
    print('    labels:', [reverse_dictionary[li] for li in labels.reshape(8)])

data: ['anarchism', 'originated', 'as', 'a', 'term', 'of', 'abuse', 'first']

with num_skips = 2 and skip_window = 1:
    batch: ['originated', 'originated', 'as', 'as', 'a', 'a', 'term', 'term']
    labels: ['anarchism', 'as', 'originated', 'a', 'as', 'term', 'a', 'of']

with num_skips = 4 and skip_window = 2:
    batch: ['as', 'as', 'as', 'as', 'a', 'a', 'a', 'a']
    labels: ['originated', 'term', 'anarchism', 'a', 'of', 'as', 'originated', 'term']

Skip-Gram

Train a skip-gram model.

batch_size = 128
embedding_size = 128  # Dimension of the embedding vector.
skip_window = 1       # How many words to consider left and right.
num_skips = 2         # How many times to reuse an input to generate a label.
# We pick a random validation set to sample nearest neighbors. here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16     # Random set of words to evaluate similarity on.
valid_window = 100  # Only pick dev samples in the head of the distribution.
valid_examples = np.array(random.sample(range(valid_window), valid_size))
####### important #########
num_sampled = 64  # Number of negative examples to sample.

graph = tf.Graph()

with graph.as_default(), tf.device('/cpu:0'):
    # Input data.
    train_dataset = tf.placeholder(tf.int32, shape=[batch_size])
    train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
    valid_dataset = tf.constant(valid_examples, dtype=tf.int32)

    # Variables.
    embeddings = tf.Variable(
        tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
    softmax_weights = tf.Variable(
        tf.truncated_normal([vocabulary_size, embedding_size],
                            stddev=1.0 / math.sqrt(embedding_size)))
    softmax_biases = tf.Variable(tf.zeros([vocabulary_size]))

    # Model.
    # Look up embeddings for inputs.
    embed = tf.nn.embedding_lookup(embeddings, train_dataset)
    # Compute the softmax loss, using a sample of the negative labels each time.
    loss = tf.reduce_mean(
        tf.nn.sampled_softmax_loss(weights=softmax_weights, biases=softmax_biases,
                                   inputs=embed, labels=train_labels,
                                   num_sampled=num_sampled, num_classes=vocabulary_size))

    # Optimizer.
    # Note: The optimizer will optimize the softmax_weights AND the embeddings.
    # This is because the embeddings are defined as a variable quantity and the
    # optimizer's `minimize` method will by default modify all variable quantities
    # that contribute to the tensor it is passed.
    # See docs on `tf.train.Optimizer.minimize()` for more details.
    optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)

    # Compute the similarity between minibatch examples and all embeddings.
    # We use the cosine distance:
    norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
    normalized_embeddings = embeddings / norm
    valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset)
    similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
num_steps = 100001

with tf.Session(graph=graph) as session:
    tf.global_variables_initializer().run()
    print('Initialized')
    average_loss = 0
    for step in range(num_steps):
        batch_data, batch_labels = generate_batch(
            batch_size, num_skips, skip_window)
        feed_dict = {train_dataset: batch_data, train_labels: batch_labels}
        _, l = session.run([optimizer, loss], feed_dict=feed_dict)
        average_loss += l
        if step % 2000 == 0:
            if step > 0:
                average_loss = average_loss / 2000
            # The average loss is an estimate of the loss over the last 2000 batches.
            print('Average loss at step %d: %f' % (step, average_loss))
            average_loss = 0
        # note that this is expensive (~20% slowdown if computed every 500 steps)
        if step % 10000 == 0:
            sim = similarity.eval()
            for i in range(valid_size):
                valid_word = reverse_dictionary[valid_examples[i]]
                top_k = 8  # number of nearest neighbors
                nearest = (-sim[i, :]).argsort()[1:top_k + 1]
                log = 'Nearest to %s:' % valid_word
                for k in range(top_k):
                    close_word = reverse_dictionary[nearest[k]]
                    log = '%s %s,' % (log, close_word)
                print(log)
    final_embeddings = normalized_embeddings.eval()
num_points = 400

tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
two_d_embeddings = tsne.fit_transform(final_embeddings[1:num_points + 1, :])
def plot(embeddings, labels):
    assert embeddings.shape[0] >= len(labels), 'More labels than embeddings'
    pylab.figure(figsize=(15, 15))  # in inches
    for i, label in enumerate(labels):
        x, y = embeddings[i, :]
        pylab.scatter(x, y)
        pylab.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points',
                       ha='right', va='bottom')
    pylab.show()

words = [reverse_dictionary[i] for i in range(1, num_points + 1)]
plot(two_d_embeddings, words)

[Figure: skip-gram t-SNE visualization]

CBOW

data_index_cbow = 0

def get_cbow_batch(batch_size, num_skips, skip_window):
    global data_index_cbow
    assert batch_size % num_skips == 0
    assert num_skips <= 2 * skip_window
    batch = np.ndarray(shape=(batch_size), dtype=np.int32)
    labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
    span = 2 * skip_window + 1  # [ skip_window target skip_window ]
    buffer = collections.deque(maxlen=span)
    for _ in range(span):
        buffer.append(data[data_index_cbow])
        data_index_cbow = (data_index_cbow + 1) % len(data)
    for i in range(batch_size // num_skips):
        target = skip_window  # target label at the center of the buffer
        targets_to_avoid = [skip_window]
        for j in range(num_skips):
            while target in targets_to_avoid:
                target = random.randint(0, span - 1)
            targets_to_avoid.append(target)
            batch[i * num_skips + j] = buffer[skip_window]
            labels[i * num_skips + j, 0] = buffer[target]
        buffer.append(data[data_index_cbow])
        data_index_cbow = (data_index_cbow + 1) % len(data)
    # Turn the skip-gram pairs around: the context words become the input
    # and the center word becomes the label.
    cbow_batch = np.ndarray(shape=(batch_size), dtype=np.int32)
    cbow_labels = np.ndarray(shape=(batch_size // (skip_window * 2), 1), dtype=np.int32)
    for i in range(batch_size):
        cbow_batch[i] = labels[i]
    cbow_batch = np.reshape(cbow_batch, [batch_size // (skip_window * 2), skip_window * 2])
    for i in range(batch_size // (skip_window * 2)):
        # center word
        cbow_labels[i] = batch[2 * skip_window * i]
    return cbow_batch, cbow_labels

# actual batch_size = batch_size // (2 * skip_window)
batch_size = 128
embedding_size = 128  # Dimension of the embedding vector.
skip_window = 1       # How many words to consider left and right.
num_skips = 2         # How many times to reuse an input to generate a label.
# We pick a random validation set to sample nearest neighbors. here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16     # Random set of words to evaluate similarity on.
valid_window = 100  # Only pick dev samples in the head of the distribution.
valid_examples = np.array(random.sample(range(valid_window), valid_size))
####### important #########
num_sampled = 64  # Number of negative examples to sample.

graph = tf.Graph()

with graph.as_default(), tf.device('/cpu:0'):
    # Input data.
    train_dataset = tf.placeholder(tf.int32, shape=[batch_size // (skip_window * 2), skip_window * 2])
    train_labels = tf.placeholder(tf.int32, shape=[batch_size // (skip_window * 2), 1])
    valid_dataset = tf.constant(valid_examples, dtype=tf.int32)

    # Variables.
    embeddings = tf.Variable(
        tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
    softmax_weights = tf.Variable(
        tf.truncated_normal([vocabulary_size, embedding_size],
                            stddev=1.0 / math.sqrt(embedding_size)))
    softmax_biases = tf.Variable(tf.zeros([vocabulary_size]))

    # Model.
    # Look up embeddings for inputs.
    embed = tf.nn.embedding_lookup(embeddings, train_dataset)
    # reshape embed
    embed = tf.reshape(embed, (skip_window * 2, batch_size // (skip_window * 2), embedding_size))
    # average embed
    embed = tf.reduce_mean(embed, 0)
    # Compute the softmax loss, using a sample of the negative labels each time.
    loss = tf.reduce_mean(
        tf.nn.sampled_softmax_loss(weights=softmax_weights, biases=softmax_biases,
                                   inputs=embed, labels=train_labels,
                                   num_sampled=num_sampled, num_classes=vocabulary_size))

    # Optimizer.
    # Note: The optimizer will optimize the softmax_weights AND the embeddings.
    # This is because the embeddings are defined as a variable quantity and the
    # optimizer's `minimize` method will by default modify all variable quantities
    # that contribute to the tensor it is passed.
    # See docs on `tf.train.Optimizer.minimize()` for more details.
    optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)

    # Compute the similarity between minibatch examples and all embeddings.
    # We use the cosine distance:
    norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
    normalized_embeddings = embeddings / norm
    valid_embeddings = tf.nn.embedding_lookup(
        normalized_embeddings, valid_dataset)
    similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
num_steps = 100001

with tf.Session(graph=graph) as session:
    tf.global_variables_initializer().run()
    print('Initialized')
    average_loss = 0
    for step in range(num_steps):
        batch_data, batch_labels = get_cbow_batch(
            batch_size, num_skips, skip_window)
        feed_dict = {train_dataset: batch_data, train_labels: batch_labels}
        _, l = session.run([optimizer, loss], feed_dict=feed_dict)
        average_loss += l
        if step % 2000 == 0:
            if step > 0:
                average_loss = average_loss / 2000
            # The average loss is an estimate of the loss over the last 2000 batches.
            print('Average loss at step %d: %f' % (step, average_loss))
            average_loss = 0
        # note that this is expensive (~20% slowdown if computed every 500 steps)
        if step % 10000 == 0:
            sim = similarity.eval()
            for i in range(valid_size):
                valid_word = reverse_dictionary[valid_examples[i]]
                top_k = 8  # number of nearest neighbors
                nearest = (-sim[i, :]).argsort()[1:top_k + 1]
                log = 'Nearest to %s:' % valid_word
                for k in range(top_k):
                    close_word = reverse_dictionary[nearest[k]]
                    log = '%s %s,' % (log, close_word)
                print(log)
    final_embeddings = normalized_embeddings.eval()
num_points = 400

tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
two_d_embeddings = tsne.fit_transform(final_embeddings[1:num_points + 1, :])
words = [reverse_dictionary[i] for i in range(1, num_points + 1)]
plot(two_d_embeddings, words)

[Figure: CBOW t-SNE visualization]

References

1. Le Q V, Mikolov T. Distributed Representations of Sentences and Documents[J]. 2014, 4: II-1188.

2. Mikolov T, Sutskever I, Chen K, et al. Distributed Representations of Words and Phrases and their Compositionality[J]. Advances in Neural Information Processing Systems, 2013, 26: 3111-3119.

3. Word2Vec Tutorial - The Skip-Gram Model

4. Udacity Deep Learning

5. Stanford CS224d, Lectures 2-3

Original link: https://www.jianshu.com/p/b779f8219f74

Originally published on the WeChat public account 人工智能LeadAI (atleadai)

Original publication date: 2018-05-08
