
# A Python Implementation of the Perceptron from 《统计学习方法》 (Statistical Learning Methods)

* In what form is information stored, or remembered?

* How does information contained in storage, or in memory, influence recognition and behavior?

The theory to be presented here takes the empiricist, or "connectionist" position with regard to these questions. The theory has been developed for a hypothetical nervous system, or machine, called a perceptron. The perceptron is designed to illustrate some of the fundamental properties of intelligent systems in general.

《统计学习方法》 describes this algorithm in detail and gives a fairly elegant proof of its convergence. The code below implements the algorithms from the book's chapter on the perceptron.
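The model, loss, and stochastic-gradient updates implemented below can be summarized as follows (notation follows the book's perceptron chapter; $M$ denotes the set of misclassified samples):

```latex
f(x) = \operatorname{sign}(w \cdot x + b)

L(w, b) = -\sum_{x_i \in M} y_i \,(w \cdot x_i + b)

w \leftarrow w + \eta\, y_i x_i, \qquad b \leftarrow b + \eta\, y_i
```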

```python
import numpy


class Perceptron:

    def __init__(self, training_set=None, iterations=5, learning_rate=0.5):
        # avoid a mutable default argument
        self._training_set = training_set if training_set is not None else []
        self._iterations = iterations
        self._learning_rate = learning_rate
        self._setup()

    def _setup(self):
        if len(self._training_set) > 0:
            # each sample is [x_1, ..., x_n, y]; the last element is the label
            self._weight = numpy.zeros(len(self._training_set[0][:-1]))
            self._bias = 0.0
            self._alpha = numpy.zeros(len(self._training_set))
        else:
            self._weight = None
            self._bias = None
            self._alpha = None

    def raw_training(self):
        self._setup()
        if self._weight is None:
            return
        for _ in range(self._iterations):
            for sample in self._training_set:
                prediction = numpy.dot(self._weight, sample[:-1]) + self._bias
                # a sample is misclassified when y * (w . x + b) <= 0
                if sample[-1] * prediction <= 0:
                    self._weight = self._weight + numpy.multiply(
                        self._learning_rate * sample[-1], sample[:-1])
                    self._bias = self._bias + self._learning_rate * sample[-1]
        print("[+] Raw Perceptron - Stochastic Gradient Descent")
        print("[+] weight ", self._weight)
        print("[+] bias ", self._bias)
        print("[+] learning rate ", self._learning_rate)
        print("[+] iterations ", self._iterations)

    def calculate_gram_matrix(self):
        dim = len(self._training_set)
        self._gram = numpy.zeros((dim, dim))
        for row in range(dim):
            for col in range(dim):
                self._gram[row][col] = numpy.dot(
                    self._training_set[row][:-1], self._training_set[col][:-1])
        print("[+] Gram matrix for duality training:")
        print(self._gram)

    def duality_training(self):
        self._setup()
        if self._weight is None:
            return
        # precompute the Gram matrix of all pairwise inner products
        self.calculate_gram_matrix()
        dim = len(self._training_set)
        for _ in range(self._iterations):
            sample_idx = 0
            for sample in self._training_set:
                prediction = 0.0
                for idx in range(dim):
                    prediction += (self._alpha[idx] * self._training_set[idx][-1]
                                   * self._gram[sample_idx][idx])
                prediction = sample[-1] * (prediction + self._bias)
                if prediction <= 0:
                    self._alpha[sample_idx] += self._learning_rate
                    self._bias += self._learning_rate * sample[-1]
                sample_idx += 1
        # recover the primal weight vector: w = sum_i alpha_i * y_i * x_i
        self._weight = numpy.zeros(len(self._training_set[0][:-1]))
        for idx in range(dim):
            self._weight += (self._alpha[idx] * self._training_set[idx][-1]
                             * numpy.array(self._training_set[idx][:-1]))
        print("[+] Duality Perceptron - Stochastic Gradient Descent")
        print("[+] weight ", self._weight)
        print("[+] bias ", self._bias)
        print("[+] learning rate ", self._learning_rate)
        print("[+] iterations ", self._iterations)


if __name__ == '__main__':
    training_set = [[3, 3, 1], [4, 3, 1], [1, 1, -1]]
    perceptron = Perceptron(training_set=training_set, iterations=5,
                            learning_rate=1.0)
    perceptron.raw_training()
    perceptron.duality_training()
```
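As a quick sanity check, here is a minimal vectorized sketch of the primal update, independent of the class above (the function name `train_perceptron` and the `max_sweeps` cap are my own, not from the post). Cycling through the samples in order with learning rate 1.0, it converges on the post's data to w = (1, 1), b = -3, i.e. the separating hyperplane x1 + x2 - 3 = 0:

```python
import numpy as np

def train_perceptron(X, y, lr=1.0, max_sweeps=100):
    """Primal perceptron: cycle through samples, update on every mistake."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(max_sweeps):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:   # misclassified or on the boundary
                w += lr * yi * xi
                b += lr * yi
                mistakes += 1
        if mistakes == 0:                       # converged: every sample separated
            break
    return w, b

X = np.array([[3.0, 3.0], [4.0, 3.0], [1.0, 1.0]])
y = np.array([1, 1, -1])
w, b = train_perceptron(X, y)
print(w, b)   # [1. 1.] -3.0
```

Unlike the class above, this sketch stops as soon as a full sweep produces no mistakes, rather than always running a fixed number of iterations.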

* duality_training corresponds to the dual form of the perceptron in the book. It first computes the Gram matrix of the training samples (done by calculate_gram_matrix), and then, during training, looks inner products up in _gram instead of recomputing them for every update.
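The dual form can also be sketched compactly and independently of the class above (the helper name `train_dual` is mine). All decision values are read from the precomputed Gram matrix, only alpha and b are updated, and the primal weight is recovered afterward via w = Σ α_i y_i x_i; on the post's data with learning rate 1.0 this yields α = (2, 0, 5), b = -3, and hence w = (1, 1):

```python
import numpy as np

def train_dual(X, y, lr=1.0, max_sweeps=100):
    """Dual perceptron: all inner products come from the precomputed Gram matrix."""
    n = len(X)
    gram = X @ X.T                      # gram[i, j] = x_i . x_j
    alpha, b = np.zeros(n), 0.0
    for _ in range(max_sweeps):
        mistakes = 0
        for i in range(n):
            # decision value uses Gram-matrix lookups, never the raw features
            if y[i] * (np.sum(alpha * y * gram[:, i]) + b) <= 0:
                alpha[i] += lr
                b += lr * y[i]
                mistakes += 1
        if mistakes == 0:               # converged: no sample misclassified
            break
    w = (alpha * y) @ X                 # recover primal weight: sum_i alpha_i y_i x_i
    return alpha, w, b

X = np.array([[3.0, 3.0], [4.0, 3.0], [1.0, 1.0]])
y = np.array([1, 1, -1])
alpha, w, b = train_dual(X, y)
print(alpha, w, b)                      # [2. 0. 5.] [1. 1.] -3.0
```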

Reference:

* http://www.cs.columbia.edu/~mcollins/courses/6998-2012/notes/perc.converge.pdf

* Originally published at https://kuaibao.qq.com/s/20181009G1QCN700?refer=cp_1026
