Deep Learning Code Series: Deep Learning Toolbox for Matlab

A Matlab toolbox for Deep Learning

Matlab/Octave toolbox for deep learning. Includes Deep Belief Nets, Stacked Autoencoders, Convolutional Neural Nets, Convolutional Autoencoders and vanilla Neural Nets. Each method has examples to get you started.

Deep Learning is a new subfield of machine learning that focuses on learning deep hierarchical models of data. It is inspired by the human brain's apparent deep (layered, hierarchical) architecture. A good overview of the theory of Deep Learning is Learning Deep Architectures for AI.

For a more informal introduction, see the following videos by Geoffrey Hinton and Andrew Ng.

· The Next Generation of Neural Networks (Hinton, 2007)

· Recent Developments in Deep Learning (Hinton, 2010)

· Unsupervised Feature Learning and Deep Learning (Ng, 2011)

Directories included in the toolbox

NN/ - A library for Feedforward Backpropagation Neural Networks

CNN/ - A library for Convolutional Neural Networks

DBN/ - A library for Deep Belief Networks

SAE/ - A library for Stacked Auto-Encoders

CAE/ - A library for Convolutional Auto-Encoders

UTIL/ - Utility functions used by the libraries

DATA/ - Data used by the examples

TESTS/ - Unit tests to verify the toolbox is working

For references on each library, check REFS.md.
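
To run the examples, the library folders must be on the Matlab/Octave path. A minimal setup sketch, assuming Matlab is started in the toolbox root (folder names as listed above):

% Add the toolbox libraries to the path (run from the toolbox root)
addpath('NN');    % feedforward backpropagation nets
addpath('CNN');   % convolutional nets
addpath('DBN');   % deep belief nets
addpath('SAE');   % stacked auto-encoders
addpath('CAE');   % convolutional auto-encoders
addpath('UTIL');  % shared utilities (sigm, visualize, normalize, ...)
addpath('DATA');  % mnist_uint8.mat used by the examples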

Example: Deep Belief Network

function test_example_DBN
load mnist_uint8;
train_x = double(train_x) / 255;
test_x  = double(test_x)  / 255;
train_y = double(train_y);
test_y  = double(test_y);
%%  ex1 train a 100 hidden unit RBM and visualize its weights
rand('state',0)
dbn.sizes = [100];
opts.numepochs =   1;
opts.batchsize = 100;
opts.momentum  =   0;
opts.alpha     =   1;
dbn = dbnsetup(dbn, train_x, opts);
dbn = dbntrain(dbn, train_x, opts);
figure; visualize(dbn.rbm{1}.W');   %  Visualize the RBM weights
%%  ex2 train a 100-100 hidden unit DBN and use its weights to initialize a NN
rand('state',0)
%train dbn
dbn.sizes = [100 100];
opts.numepochs =   1;
opts.batchsize = 100;
opts.momentum  =   0;
opts.alpha     =   1;
dbn = dbnsetup(dbn, train_x, opts);
dbn = dbntrain(dbn, train_x, opts);
%unfold dbn to nn
nn = dbnunfoldtonn(dbn, 10);
nn.activation_function = 'sigm';
%train nn
opts.numepochs =  1;
opts.batchsize = 100;
nn = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);
assert(er < 0.10, 'Too big error');
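
Under the hood, dbntrain fits each RBM layer with one-step contrastive divergence (CD-1). Below is a minimal sketch of the update for a single mini-batch, assuming v1 holds the batch rows, W the hidden-by-visible weight matrix, and b/c the visible/hidden biases; sigm is the logistic function from UTIL/. The toolbox's rbmtrain implements the full loop, including the opts.momentum term omitted here.

% CD-1 weight update for one RBM mini-batch (illustrative sketch)
h1 = sigm(repmat(c', size(v1,1), 1) + v1 * W');  % up-pass: hidden probabilities
h1 = double(h1 > rand(size(h1)));                % sample binary hidden states
v2 = sigm(repmat(b', size(h1,1), 1) + h1 * W);   % down-pass: reconstruct the input
h2 = sigm(repmat(c', size(v2,1), 1) + v2 * W');  % up-pass on the reconstruction
dW = (h1' * v1 - h2' * v2) / size(v1,1);         % positive minus negative statistics
W  = W + alpha * dW;                             % opts.alpha is this learning rate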

Example: Stacked Auto-Encoders

function test_example_SAE
load mnist_uint8;
train_x = double(train_x)/255;
test_x  = double(test_x)/255;
train_y = double(train_y);
test_y  = double(test_y);
%%  ex1 train a 100 hidden unit SDAE and use it to initialize a FFNN
%  Setup and train a stacked denoising autoencoder (SDAE)
rand('state',0)
sae = saesetup([784 100]);
sae.ae{1}.activation_function       = 'sigm';
sae.ae{1}.learningRate              = 1;
sae.ae{1}.inputZeroMaskedFraction   = 0.5;
opts.numepochs =   1;
opts.batchsize = 100;
sae = saetrain(sae, train_x, opts);
visualize(sae.ae{1}.W{1}(:,2:end)')   % plot the learned features (the first column of W holds the bias)
% Use the SDAE to initialize a FFNN
nn = nnsetup([784 100 10]);
nn.activation_function              = 'sigm';
nn.learningRate                     = 1;
nn.W{1} = sae.ae{1}.W{1};
% Train the FFNN
opts.numepochs =   1;
opts.batchsize = 100;
nn = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);
assert(er < 0.16, 'Too big error');
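
The inputZeroMaskedFraction option is what makes this a denoising autoencoder: during training, that fraction of each input is randomly set to zero and the network must still reconstruct the clean input. A minimal sketch of the corruption step (illustrative; the toolbox applies it per mini-batch inside nntrain):

% Zero-mask corruption for a denoising autoencoder (sketch)
frac    = 0.5;                                     % sae.ae{1}.inputZeroMaskedFraction
noisy_x = train_x .* (rand(size(train_x)) > frac); % drop roughly half of the pixels
% the autoencoder is then trained to map noisy_x back to the clean train_x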

Example: Convolutional Neural Nets

function test_example_CNN
load mnist_uint8;
train_x = double(reshape(train_x',28,28,60000))/255;
test_x = double(reshape(test_x',28,28,10000))/255;
train_y = double(train_y');
test_y = double(test_y');
%% ex1 Train a 6c-2s-12c-2s Convolutional neural network
% will run 1 epoch in about 200 seconds and get around 11% error.
% With 100 epochs you'll get around 1.2% error
rand('state',0)
cnn.layers = {
 struct('type', 'i') %input layer
 struct('type', 'c', 'outputmaps', 6, 'kernelsize', 5)%convolution layer
 struct('type', 's', 'scale', 2) %sub sampling layer
 struct('type', 'c', 'outputmaps', 12, 'kernelsize', 5) %convolution layer
 struct('type', 's', 'scale', 2) %subsampling layer
};
cnn = cnnsetup(cnn, train_x, train_y);
opts.alpha = 1;
opts.batchsize = 50;
opts.numepochs = 1;
cnn = cnntrain(cnn, train_x, train_y, opts);
[er, bad] = cnntest(cnn, test_x, test_y);
%plot mean squared error
figure; plot(cnn.rL);
assert(er<0.12, 'Too big error');
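
To see why this architecture fits the 28x28 MNIST digits: each 5x5 convolution shrinks a feature map by 4 pixels in each dimension, and each scale-2 subsampling halves it. A quick check of the resulting map sizes:

% Feature map sizes through the 6c-2s-12c-2s network (28x28 input)
sz = 28;
sz = sz - 5 + 1;    % conv, kernelsize 5  -> 24x24, 6 maps
sz = sz / 2;        % subsample, scale 2  -> 12x12, 6 maps
sz = sz - 5 + 1;    % conv, kernelsize 5  ->  8x8, 12 maps
sz = sz / 2;        % subsample, scale 2  ->  4x4, 12 maps
fv = sz * sz * 12;  % 192 features feed the fully connected output layer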

Example: Neural Networks

function test_example_NN
load mnist_uint8;
train_x = double(train_x) / 255;
test_x  = double(test_x)  / 255;
train_y = double(train_y);
test_y  = double(test_y);
% normalize inputs to zero mean and unit variance; apply the training-set mu/sigma to the test set
[train_x, mu, sigma] = zscore(train_x);
test_x = normalize(test_x, mu, sigma);
%% ex1 vanilla neural net
rand('state',0)
nn = nnsetup([784 100 10]);
opts.numepochs =  1;   %  Number of full sweeps through data
opts.batchsize = 100;  %  Take a mean gradient step over this many samples
[nn, L] = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);
assert(er < 0.08, 'Too big error');
%% ex2 neural net with L2 weight decay
rand('state',0)
nn = nnsetup([784 100 10]);
nn.weightPenaltyL2 = 1e-4;  %  L2 weight decay
opts.numepochs =  1;        %  Number of full sweeps through data
opts.batchsize = 100;       %  Take a mean gradient step over this many samples
nn = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);
assert(er < 0.1, 'Too big error');
%% ex3 neural net with dropout
rand('state',0)
nn = nnsetup([784 100 10]);
nn.dropoutFraction = 0.5;   %  Dropout fraction 
opts.numepochs =  1;        %  Number of full sweeps through data
opts.batchsize = 100;       %  Take a mean gradient step over this many samples
nn = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);
assert(er < 0.1, 'Too big error');
%% ex4 neural net with sigmoid activation function
rand('state',0)
nn = nnsetup([784 100 10]);
nn.activation_function = 'sigm';    %  Sigmoid activation function
nn.learningRate = 1;                %  Sigm requires a lower learning rate
opts.numepochs =  1;                %  Number of full sweeps through data
opts.batchsize = 100;               %  Take a mean gradient step over this many samples
nn = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);
assert(er < 0.1, 'Too big error');
%% ex5 plotting functionality
rand('state',0)
nn = nnsetup([784 20 10]);
opts.numepochs         = 5;            %  Number of full sweeps through data
nn.output              = 'softmax';    %  use softmax output
opts.batchsize         = 1000;         %  Take a mean gradient step over this many samples
opts.plot              = 1;            %  enable plotting
nn = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);
assert(er < 0.1, 'Too big error');
%% ex6 neural net with a validation set and plotting of validation and training error
% split the training data into training and validation sets
vx   = train_x(1:10000,:);
tx = train_x(10001:end,:);
vy   = train_y(1:10000,:);
ty = train_y(10001:end,:);
rand('state',0)
nn                      = nnsetup([784 20 10]);     
nn.output               = 'softmax';                   %  use softmax output
opts.numepochs          = 5;                           %  Number of full sweeps through data
opts.batchsize          = 1000;                        %  Take a mean gradient step over this many samples
opts.plot               = 1;                           %  enable plotting
nn = nntrain(nn, tx, ty, opts, vx, vy);                %  nntrain takes validation set as last two arguments (optionally)
[er, bad] = nntest(nn, test_x, test_y);
assert(er < 0.1, 'Too big error');
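
For reference, the nn.dropoutFraction option in ex3 implements standard dropout: while training, each hidden unit is zeroed with that probability; at test time the activations are scaled down instead. A minimal sketch of the idea on one activation matrix a (the training flag is hypothetical; the toolbox handles this inside nnff):

% Dropout on a hidden layer activation matrix a (sketch)
p = 0.5;                           % nn.dropoutFraction
if training
    a = a .* (rand(size(a)) > p);  % randomly drop units during training
else
    a = a * (1 - p);               % rescale at test time to match expected activity
end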


Originally published on the WeChat public account 量化投资与机器学习 (ZXL_LHTZ_JQXX).

Original publication date: 2015-11-07
