[Original] Improving Clustering by Combining KMeans with a Deep-Learning AutoEncoder

Charlotte77 · 2016-04-08

I've been working on user profiling these past few days. The features are the amounts each user spent per goods category; the raw data (a sample) looks like this:

id  goods_name      goods_amount
1   男士手袋         1882.0
2   淑女装           2491.0
2   女士手袋         345.0
4   基础内衣         328.0
5   商务正装         4985.0
5   时尚             969.0
5   女饰品           86.0
6   专业运动         399.0
6   童装(中大童)     2033.0
6   男士配件         38.0

Notice that one id can have several purchase records, so this data can't be used as-is. I wrote a Python script, test.py, to reshape it:

#!/usr/bin/python
# coding: utf-8
# Author: Charlotte
import pandas as pd
import numpy as np
import time

# Load the data file (substitute your own file in the format shown above)
x = pd.read_table('test.txt', sep=" ")

# Drop NULL values (dropna returns a new DataFrame, so re-assign it)
x = x.dropna()

a1 = list(x.iloc[:, 0])  # user ids
a2 = list(x.iloc[:, 1])  # goods names
a3 = list(x.iloc[:, 2])  # amounts

# A holds the distinct goods categories
dicta = dict(zip(a2, zip(a1, a3)))
A = list(dicta.keys())
# B holds the distinct user ids
B = list(set(a1))

# Build the goods-category -> column-index dictionary
a = np.arange(len(A))
lista = list(a)
dict_class = dict(zip(A, lista))
print(dict_class)

f = open('class.txt', 'w')
for k, v in dict_class.items():
    f.write(str(k) + '\t' + str(v) + '\n')
f.close()

# Time the assignment step
start = time.time()  # time.clock() was removed in Python 3.8

# One big dictionary mapping user id -> per-category amount vector
dictall = {}
for i in range(len(a1)):
    if a1[i] in dictall:
        value = dictall[a1[i]]
    else:
        value = list(np.zeros(len(A)))
    j = dict_class[a2[i]]
    value[j] = a3[i]
    dictall[a1[i]] = value

# Turn the dictionary into a DataFrame, users as rows
dictall1 = pd.DataFrame(dictall)
dictall_matrix = dictall1.T
print(dictall_matrix)

end = time.time()
print("Assignment took: %f s" % (end - start))

Output:

{'\xe4\xb8\x93\xe4\xb8\x9a\xe8\xbf\x90\xe5\x8a\xa8': 4, '\xe7\x94\xb7\xe5\xa3\xab\xe6\x89\x8b\xe8\xa2\x8b': 1, '\xe5\xa5\xb3\xe5\xa3\xab\xe6\x89\x8b\xe8\xa2\x8b': 2, '\xe7\xab\xa5\xe8\xa3\x85\xef\xbc\x88\xe4\xb8\xad\xe5\xa4\xa7\xe7\xab\xa5)': 3, '\xe7\x94\xb7\xe5\xa3\xab\xe9\x85\x8d\xe4\xbb\xb6': 9, '\xe5\x9f\xba\xe7\xa1\x80\xe5\x86\x85\xe8\xa1\xa3': 8, '\xe6\x97\xb6\xe5\xb0\x9a': 6, '\xe6\xb7\x91\xe5\xa5\xb3\xe8\xa3\x85': 7, '\xe5\x95\x86\xe5\x8a\xa1\xe6\xad\xa3\xe8\xa3\x85': 5, '\xe5\xa5\xb3\xe9\xa5\xb0\xe5\x93\x81': 0}

    0     1    2     3    4     5    6     7    8   9
1   0  1882    0     0    0     0    0     0    0   0
2   0     0  345     0    0     0    0  2491    0   0
4   0     0    0     0    0     0    0     0  328   0
5  86     0    0     0    0  4985  969     0    0   0
6   0     0    0  2033  399     0    0     0    0  38
Assignment took: 0.004497 s

(Because the script originally ran under Python 2 on Linux, the dict keys above print as raw UTF-8 byte escapes; class.txt holds the readable mapping:)
专业运动    4
男士手袋    1
女士手袋    2
童装(中大童)    3
男士配件    9
基础内衣    8
时尚    6
淑女装    7
商务正装    5
女饰品    0

This dictall_matrix is exactly the format we feed to the model: each column is a goods category and each row is a user id.
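As an aside, a minimal sketch of the same reshaping using pandas alone, assuming the same three-column test.txt as above. One behavioural difference worth noting: aggfunc='sum' adds up duplicate (id, goods_name) pairs, whereas the loop above keeps only the last amount seen.

import pandas as pd

x = pd.read_table('test.txt', sep=" ").dropna()
# Pivot to one row per user id, one column per goods category, zeros elsewhere
matrix = x.pivot_table(index='id', columns='goods_name',
                       values='goods_amount', aggfunc='sum', fill_value=0)
print(matrix)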

Now let's run the AE (auto-encoder) model. Briefly, an AE is simple: three layers, input - hidden - output. You feed the data in, encode it, then decode it; the cost function measures the "difference" between the output and the input, and the smaller that difference, the better the objective value. In other words, you put n-dimensional data in and get n-dimensional data back out. What's the point of that? Mainly that it lets you compress the data. If your input is high-dimensional, say the real features number in the thousands, throwing all of them at the algorithm won't necessarily work well, because not all features are useful. With an AE you can compress the input down to m dimensions (the number of hidden-layer nodes); if the reconstructed output is close in scale to the original data, the hidden-layer representation is usable. Seen this way it resembles dimensionality reduction, though AEs have many more uses than that; see 梁博's blog post for details.
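To pin down that "difference" (the original just says there is a formula): the code below accumulates, in the variable Jw, the mean squared reconstruction error over the M samples,

$$J_w = \frac{1}{M}\sum_{i=1}^{M}\big\lVert x^{(i)} - \hat{x}^{(i)} \big\rVert^2,$$

where $x^{(i)}$ is an input vector and $\hat{x}^{(i)}$ is the output layer's reconstruction of it.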

梁博's post is written in C++, though; here is a Python version (open-source code, with a few small changes):

#!/usr/bin/python
# coding: utf-8

import numpy as np
from sklearn import preprocessing

class AutoEncoder():
    """ Auto Encoder
    layer      1     2    ...    ...    L-1    L
      W        0     1    ...    ...    L-2
      B        0     1    ...    ...    L-2
      Z              0     1     ...    L-3    L-2
      A              0     1     ...    L-3    L-2
    """

    def __init__(self, X, Y, nNodes):
        # training samples (for an autoencoder, Y == X)
        self.X = X
        self.Y = Y
        # number of samples
        self.M = len(self.X)
        # number of layers
        self.nLayers = len(nNodes)
        # nodes per layer
        self.nNodes = nNodes
        # network parameters
        self.W = list()
        self.B = list()
        self.dW = list()
        self.dB = list()
        self.A = list()
        self.Z = list()
        self.delta = list()
        for iLayer in range(self.nLayers - 1):
            self.W.append(np.random.rand(nNodes[iLayer] * nNodes[iLayer + 1]).reshape(nNodes[iLayer], nNodes[iLayer + 1]))
            self.B.append(np.random.rand(nNodes[iLayer + 1]))
            self.dW.append(np.zeros([nNodes[iLayer], nNodes[iLayer + 1]]))
            self.dB.append(np.zeros(nNodes[iLayer + 1]))
            self.A.append(np.zeros(nNodes[iLayer + 1]))
            self.Z.append(np.zeros(nNodes[iLayer + 1]))
            self.delta.append(np.zeros(nNodes[iLayer + 1]))

        # value of the cost function
        self.Jw = 0.0
        # activation function (logistic sigmoid)
        self.sigmod = lambda z: 1.0 / (1.0 + np.exp(-z))
        # learning rate
        self.alpha = 2.5
        # number of training iterations
        self.steps = 10000

    def BackPropAlgorithm(self):
        # reset accumulated cost and gradients
        self.Jw = 0.0
        for iLayer in range(self.nLayers - 1):
            self.dW[iLayer].fill(0.0)
            self.dB[iLayer].fill(0.0)
        # propagation (iterate over the M samples)
        for i in range(self.M):
            # forward propagation
            for iLayer in range(self.nLayers - 1):
                if iLayer == 0:  # first layer
                    self.Z[iLayer] = np.dot(self.X[i], self.W[iLayer])
                else:
                    self.Z[iLayer] = np.dot(self.A[iLayer - 1], self.W[iLayer])
                self.A[iLayer] = self.sigmod(self.Z[iLayer] + self.B[iLayer])
            # back propagation
            for iLayer in range(self.nLayers - 1)[::-1]:  # reverse order
                if iLayer == self.nLayers - 2:  # output layer
                    self.delta[iLayer] = -(self.X[i] - self.A[iLayer]) * (self.A[iLayer] * (1 - self.A[iLayer]))
                    self.Jw += np.dot(self.Y[i] - self.A[iLayer], self.Y[i] - self.A[iLayer]) / self.M
                else:
                    # propagate the error back through the NEXT layer's weights;
                    # the original used self.W[iLayer].T, which only happens to work
                    # for symmetric layer sizes, so W[iLayer+1] is the general form
                    self.delta[iLayer] = np.dot(self.W[iLayer + 1], self.delta[iLayer + 1]) * (self.A[iLayer] * (1 - self.A[iLayer]))
                # accumulate dW and dB (outer product of layer input and delta)
                if iLayer == 0:
                    self.dW[iLayer] += self.X[i][:, np.newaxis] * self.delta[iLayer][:, np.newaxis].T
                else:
                    self.dW[iLayer] += self.A[iLayer - 1][:, np.newaxis] * self.delta[iLayer][:, np.newaxis].T
                self.dB[iLayer] += self.delta[iLayer]
        # gradient-descent update
        for iLayer in range(self.nLayers - 1):
            self.W[iLayer] -= (self.alpha / self.M) * self.dW[iLayer]
            self.B[iLayer] -= (self.alpha / self.M) * self.dB[iLayer]

    def PlainAutoEncoder(self):
        for i in range(self.steps):
            self.BackPropAlgorithm()
            print("step:%d" % i, "Jw=%f" % self.Jw)

    def ValidateAutoEncoder(self):
        for i in range(self.M):
            print(self.X[i])
            for iLayer in range(self.nLayers - 1):
                if iLayer == 0:  # input layer
                    self.Z[iLayer] = np.dot(self.X[i], self.W[iLayer])
                else:
                    self.Z[iLayer] = np.dot(self.A[iLayer - 1], self.W[iLayer])
                self.A[iLayer] = self.sigmod(self.Z[iLayer] + self.B[iLayer])
                print("\t layer=%d" % iLayer, self.A[iLayer])

# Load the user-by-category matrix: one id per line, tab, space-separated amounts
data = []
index = []
f = open('./data_matrix.txt', 'r')
for line in f.readlines():
    ss = line.replace('\n', '').split('\t')
    index.append(ss[0])
    data.append([float(s) for s in ss[1].split(' ')])
f.close()

x = np.array(data)
# standardize to zero mean and unit variance
xx = preprocessing.scale(x)
nNodes = np.array([10, 5, 10])
ae3 = AutoEncoder(xx, xx, nNodes)
ae3.PlainAutoEncoder()
ae3.ValidateAutoEncoder()

# The toy example that produced the output shown below:
# xx = np.array([[0,0,0,0,0,0,0,1], [0,0,0,0,0,0,1,0], [0,0,0,0,0,1,0,0], [0,0,0,0,1,0,0,0], [0,0,0,1,0,0,0,0], [0,0,1,0,0,0,0,0]])
# nNodes = np.array([8, 3, 8])
# ae2 = AutoEncoder(xx,xx,nNodes)
# ae2.PlainAutoEncoder()
# ae2.ValidateAutoEncoder()

The results below come from the toy example (the commented-out 8-3-8 setup above); the real data ran on a server. This is just so you can see what the output means:

[0 0 0 0 0 0 0 1]
     layer=0 [ 0.76654705  0.04221051  0.01185895]
     layer=1 [  4.67403977e-03   5.18624788e-03   2.03185410e-02   1.24383559e-02
   1.54423619e-02   1.69197292e-03   2.34471751e-05   9.72956513e-01]
[0 0 0 0 0 0 1 0]
     layer=0 [ 0.08178768  0.96348458  0.98583155]
     layer=1 [  8.18926274e-04   7.30041977e-04   1.06452565e-02   9.94423121e-03
   3.47329848e-03   1.32582980e-02   9.80648863e-01   8.42319408e-08]
[0 0 0 0 0 1 0 0]
     layer=0 [ 0.04752084  0.01144966  0.67313608]
     layer=1 [  4.38577163e-03   4.12704649e-03   1.83408905e-02   1.59209302e-05
   2.32400619e-02   9.71429772e-01   1.78538577e-02   2.20897151e-03]
[0 0 0 0 1 0 0 0]
     layer=0 [ 0.00819346  0.37410028  0.0207633 ]
     layer=1 [  8.17965283e-03   7.94760145e-03   4.59916741e-05   2.03558668e-02
   9.68811657e-01   2.09241369e-02   6.19909778e-03   1.51964053e-02]
[0 0 0 1 0 0 0 0]
     layer=0 [ 0.88632868  0.9892662   0.07575306]
     layer=1 [  1.15787916e-03   1.25924912e-03   3.72748604e-03   9.79510789e-01
   1.09439392e-02   7.81892291e-08   1.06705286e-02   1.77993321e-02]
[0 0 1 0 0 0 0 0]
     layer=0 [ 0.9862938   0.2677048   0.97331042]
     layer=1 [  6.03115828e-04   6.37411444e-04   9.75530999e-01   4.06825647e-04
   2.66386294e-07   1.27802666e-02   8.66599313e-03   1.06025228e-02]

You can see clearly that the layer=1 output matches the original input, so we can take the layer=0 activations as the new, dimension-reduced data.
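To actually hand those layer-0 activations to KMeans, something like the following helper can be used. encode_all is not part of the original class, just a hedged sketch built on its W, B and sigmod attributes as defined above:

import numpy as np

def encode_all(ae):
    # Hypothetical helper (not in the original code): run only the first
    # (encoding) layer for every sample and stack the hidden activations.
    codes = []
    for i in range(ae.M):
        z = np.dot(ae.X[i], ae.W[0])
        codes.append(ae.sigmod(z + ae.B[0]))
    return np.array(codes)  # shape (M, nNodes[1]), e.g. (M, 5) for [10, 5, 10]

# reduced = encode_all(ae3)  # this matrix is what the clustering step receives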

The last step is the clustering itself, which is the easy part: with the sklearn package it only takes a few lines:

#!/usr/bin/python
# coding: utf-8
# Author: Charlotte

import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn import preprocessing
from sklearn import metrics

# Load the data
data = pd.read_table('data_new.txt', header=None, sep=" ")
x = data.iloc[:, 1:142]   # feature columns (the original's .ix is long gone; .iloc works)
card = data.iloc[:, 0]    # user ids
x1 = np.array(x)
xx = preprocessing.scale(x1)
num_clusters = 5

# n_jobs=-1 can be added back on older scikit-learn; it was removed from KMeans later
clf = KMeans(n_clusters=num_clusters, n_init=1, verbose=1)
clf.fit(xx)
print(clf.labels_)
labels = clf.labels_
# score is the silhouette coefficient
score = metrics.silhouette_score(xx, labels)
# clf.inertia_ helps judge whether the cluster count is right:
# a smaller within-cluster sum of squares means tighter clusters
print(clf.inertia_)
print(score)
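A side note on choosing num_clusters: clf.inertia_ always shrinks as the number of clusters grows, so comparing it across different k needs care. A minimal sketch (not from the original post) that sweeps k over the xx matrix above and prints both metrics:

for k in range(2, 10):
    km = KMeans(n_clusters=k, n_init=10).fit(xx)
    s = metrics.silhouette_score(xx, km.labels_)
    print("k=%d  inertia=%.3f  silhouette=%.3f" % (k, km.inertia_, s))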

This dataset is only an example; with so few dimensions the effect is not obvious. The real data is about 300,000 rows by 142 dimensions, preprocessed with a MapReduce job and then reduced to 50 dimensions with the AE model. The clf.inertia_ and silhouette coefficients of the two versions differ markedly:

                     clf.inertia_      silhouette
baseline             252666.064229     0.676239435
after AE reduction   662.704257502     0.962147623

So the clf.inertia_ from clustering the raw features directly is several orders of magnitude larger than the clf.inertia_ after running the AE model, and the AE's effect is quite significant.

The above was put together quickly; if you spot any mistakes, corrections are welcome :)
