Programming English: The KNN Algorithm

School of Computer Science

The University of Adelaide

Artificial Intelligence

Assignment 2

Semester 1, 2018

due 11:55pm, Thursday 14th May 2018

Introduction

In this assignment, you will develop several classification models to classify noisy input images into the classes square or circle, as shown in Fig. 1.

Figure 1: Samples of noisy images labelled as square (left) and circle (right).

Your classification models will use the training and testing sets (available with this assignment), which contain many image samples labelled as square or circle.  Your task is to write Python code that can be run in a Jupyter Notebook session and that will train and validate the following classification models:

1) K nearest neighbour (KNN) classifier [35 marks].  For the KNN classifier, you can only use standard Python libraries (e.g., numpy) to implement all aspects of the training and testing algorithms.  You will need to implement two functions: a) one to build a K-d tree from the training set (this function takes the training samples and labels as its parameters), and b) another to test the KNN classifier and compute the classification accuracy, where the parameters are K and the test images and labels.  Using matplotlib, plot a graph of the evolution of classification accuracy for the training and testing sets as a function of K, where K = 1 to 10.  Clearly identify the value of K at which generalisation is best.
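
A minimal sketch of how these two functions and the accuracy plot could look is given below. It assumes the data have already been loaded into flattened numpy arrays X_train, y_train, X_test, y_test with integer labels (e.g. 0 = square, 1 = circle); these names and the label encoding are assumptions, not part of the hand-out.

```python
# Sketch only: X_* are 2-D arrays of flattened images, y_* are integer labels.
import heapq
import numpy as np
import matplotlib.pyplot as plt

def build_kdtree(X, y, depth=0):
    """Recursively build a K-d tree; each node stores one sample and its label."""
    if len(X) == 0:
        return None
    axis = depth % X.shape[1]                  # cycle through the feature axes
    order = np.argsort(X[:, axis])
    X, y = X[order], y[order]
    mid = len(X) // 2                          # median split on the current axis
    return {"point": X[mid], "label": y[mid], "axis": axis,
            "left":  build_kdtree(X[:mid],     y[:mid],     depth + 1),
            "right": build_kdtree(X[mid + 1:], y[mid + 1:], depth + 1)}

def _knn_search(node, query, k, heap):
    """Collect the k nearest neighbours of `query` as a max-heap of (-distance, label)."""
    if node is None:
        return
    dist = float(np.linalg.norm(query - node["point"]))
    if len(heap) < k:
        heapq.heappush(heap, (-dist, node["label"]))
    elif dist < -heap[0][0]:
        heapq.heapreplace(heap, (-dist, node["label"]))
    diff = query[node["axis"]] - node["point"][node["axis"]]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    _knn_search(near, query, k, heap)
    # Visit the far side only if the splitting plane could still hide a closer point.
    if len(heap) < k or abs(diff) < -heap[0][0]:
        _knn_search(far, query, k, heap)

def knn_accuracy(tree, k, X, y):
    """Classify every sample by majority vote over its k nearest neighbours."""
    correct = 0
    for query, label in zip(X, y):
        heap = []
        _knn_search(tree, query, k, heap)
        votes = [lab for _, lab in heap]
        correct += int(max(set(votes), key=votes.count) == label)
    return correct / len(X)

tree = build_kdtree(X_train, y_train)
ks = range(1, 11)
train_acc = [knn_accuracy(tree, k, X_train, y_train) for k in ks]
test_acc  = [knn_accuracy(tree, k, X_test,  y_test)  for k in ks]
plt.plot(ks, train_acc, label="training"); plt.plot(ks, test_acc, label="testing")
plt.xlabel("K"); plt.ylabel("classification accuracy"); plt.legend(); plt.show()
```

The best value of K can then be read off the testing curve.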

2) Decision tree classifier [35 marks].  For the decision tree classifier, you can only use standard Python libraries (e.g., numpy) to implement all aspects of the training and testing algorithms.  Essentially you will need to implement two functions: a) one to train the decision tree using the training samples and labels plus a pre-pruning parameter giving the minimum information content a node must have for splitting to continue, and b) another to test the decision tree and compute the classification accuracy (similarly to the KNN classifier, the test function takes the test images and labels among its parameters and returns the classification accuracy).  Using matplotlib, plot a graph of the evolution of classification accuracy for the training and testing sets as a function of the information content, where information content = 0 to 0.5 bits.  Clearly identify the value of information content at which generalisation is best.
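
Under the same assumptions (flattened numpy arrays, integer 0/1 labels), one possible sketch of the decision-tree functions is shown below. To keep it short, each pixel is tried with a single candidate threshold (its mean value at the current node) and a depth cap is added as a safety net; a real submission might search thresholds more thoroughly.

```python
# Sketch only: X is a 2-D numpy array of flattened images, y a numpy array of
# integer class labels (0 = square, 1 = circle is assumed).
import numpy as np

def entropy(y):
    """Shannon entropy (information content) of a label vector, in bits."""
    if len(y) == 0:
        return 0.0
    p = np.bincount(y) / len(y)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def train_tree(X, y, min_info, depth=0, max_depth=12):
    """Grow a tree; a node is not split further once its information content
    drops to min_info or below (pre-pruning).  max_depth is just a safety cap
    added for this sketch."""
    node_entropy = entropy(y)
    majority = int(np.bincount(y).argmax())
    if node_entropy <= min_info or depth >= max_depth:
        return {"leaf": True, "label": majority}
    best_gain, best = 0.0, None
    for f in range(X.shape[1]):
        t = X[:, f].mean()                     # single candidate threshold per feature
        left = X[:, f] <= t
        if left.all() or not left.any():
            continue
        gain = node_entropy - (left.mean() * entropy(y[left])
                               + (~left).mean() * entropy(y[~left]))
        if gain > best_gain:
            best_gain, best = gain, (f, t, left)
    if best is None:                           # no split improves purity
        return {"leaf": True, "label": majority}
    f, t, left = best
    return {"leaf": False, "feature": f, "threshold": t,
            "left":  train_tree(X[left],  y[left],  min_info, depth + 1, max_depth),
            "right": train_tree(X[~left], y[~left], min_info, depth + 1, max_depth)}

def tree_predict(node, x):
    while not node["leaf"]:
        node = node["left"] if x[node["feature"]] <= node["threshold"] else node["right"]
    return node["label"]

def tree_accuracy(node, X, y):
    preds = np.array([tree_predict(node, x) for x in X])
    return float((preds == y).mean())
```

The same matplotlib pattern as in the KNN sketch can then sweep min_info from 0 to 0.5 bits and record the training and testing accuracies at each value.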

3) Convolutional neural network (CNN) classifier [20 marks].  For the convolutional neural network, you are allowed to use Keras with the TensorFlow backend, similar to the example shown in the code provided.  The CNN structure is the LeNet structure used in the lectures.  Using matplotlib, please plot a graph of the evolution of accuracy for the training and testing sets as a function of the number of epochs, where the max number of epochs is 200.  Clearly identify the number of epochs at which generalisation is best.
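
A hedged sketch of this part is given below, using a common LeNet-style layout via tensorflow.keras (the exact layer sizes from the lecture may differ, and with the standalone Keras package the imports would start with `keras.` instead). It assumes the images have been reshaped to (n, height, width, 1) float arrays scaled to [0, 1], with integer labels.

```python
# LeNet-style sketch; the layer sizes are assumptions, not the lecture's exact ones.
import matplotlib.pyplot as plt
from tensorflow.keras import layers, models

h, w = X_train.shape[1], X_train.shape[2]      # assumes (n, h, w, 1) input arrays
model = models.Sequential([
    layers.Conv2D(6, (5, 5), padding="same", activation="relu", input_shape=(h, w, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(16, (5, 5), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(120, activation="relu"),
    layers.Dense(84, activation="relu"),
    layers.Dense(2, activation="softmax"),     # two classes: square, circle
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=200, batch_size=32,
                    validation_data=(X_test, y_test), verbose=0)

# Depending on the Keras version the keys may be "acc"/"val_acc" instead.
plt.plot(history.history["accuracy"], label="training")
plt.plot(history.history["val_accuracy"], label="testing")
plt.xlabel("epoch"); plt.ylabel("classification accuracy"); plt.legend(); plt.show()
```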

Sample code that trains and tests a multi-layer perceptron classifier and that can run on a Jupyter Notebook session is provided, and it is expected that the submitted code can run on a Jupyter Notebook session in a similar manner.  A held-out test set will be used to test the generalisation of the implemented classification models, but this held-out set will only be available after the assignment deadline – please note that this held-out set will contain samples obtained from the same distributions used to generate the training and testing sets.

You must write the program yourself in Python, and the code must be a single file that can run on a Jupyter Notebook session (file type .ipynb).  You will only get marks for the parts that you implemented yourself.  If you use a library package or language function call for training or testing a KNN or a decision tree classifier, then you will be limited to 50% of the available marks (noting that this assignment is a hurdle for the course).  If there is evidence you have simply copied code from the web, you will be awarded no marks and referred for plagiarism.

Submission

You must submit, by the due date, two files:

1. An .ipynb file containing your code, with the three classifiers and all implementations described above.

2. A PDF file with a short written report detailing your implementation in no more than one page, together with the following results:

a) The training and testing accuracies at the best generalisation operating point for each type of classifier, using a table [5 marks]:

                     Training Accuracy    Testing Accuracy
K=1 NN
K=10 NN
DT (IC = 0 bits)
DT (IC = 0.5 bits)
CNN

b) The running times of the training and testing algorithms for each type of classifier, using a table [5 marks] (one way to measure these times is sketched after the table):

                     Training Time    Testing Time
K=1 NN
K=10 NN
DT (IC = 0 bits)
DT (IC = 0.5 bits)
CNN
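
One way these running times could be measured is sketched below with time.perf_counter, using the function names from the KNN sketch above; the same pattern applies to the other classifiers and settings.

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn once and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Example for the K=1 NN row; repeat for the other classifiers/settings.
tree, training_time = timed(build_kdtree, X_train, y_train)
test_accuracy, testing_time = timed(knn_accuracy, tree, 1, X_test, y_test)
```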

c) Bonus question: How can the classification accuracy of the decision tree classifier be improved?  Please implement your idea (hint: dimensionality reduction) [10 marks].
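
One possible take on the bonus, assuming the decision-tree functions from the earlier sketch, is to project the flattened images onto their leading principal components (computed with plain numpy) before growing the tree, so that splits are made on a few informative directions rather than on individual noisy pixels. The number of components and the min_info value below are illustrative choices.

```python
import numpy as np

def pca_fit(X, n_components=10):
    """Return the mean and the top principal directions of X (rows = samples)."""
    mean = X.mean(axis=0)
    # SVD of the centred data; rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def pca_transform(X, mean, components):
    return (X - mean) @ components.T

mean, comps = pca_fit(X_train, n_components=10)      # 10 components: illustrative
tree = train_tree(pca_transform(X_train, mean, comps), y_train, min_info=0.1)
acc = tree_accuracy(tree, pca_transform(X_test, mean, comps), y_test)
```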

Total number of marks: 100 + 10 bonus marks

This assignment is due 11:55pm on Thursday 14th May 2018. If your submission is late, the maximum mark you can obtain will be reduced by 25% per day (or part thereof) past the due date or any extension you are granted.

This assignment relates to the following ACS CBOK areas: abstraction, design, hardware and software, data and information, HCI and programming.
