Machine Learning in Action | Chapter 2: Linear Regression Models

Linear Regression (LinearRegression)

This class implements classic ordinary least squares regression and is the most basic of the linear regression classes.

class sklearn.linear_model.LinearRegression(fit_intercept=True, normalize=False, copy_X=True, n_jobs=1)

Parameters:

fit_intercept : boolean, optional. Whether to calculate the intercept for the model. If set to False, no intercept is computed (i.e., the data is assumed to be already centered).

normalize : boolean, optional, default False. If True, X is normalized before the regression. This parameter is ignored when fit_intercept is set to False.

copy_X : boolean, optional, default True. If True, X is copied; otherwise it may be overwritten.

n_jobs : int, optional, default 1. The number of jobs to use for the computation. If set to -1, all CPUs are used.
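To make these parameters concrete, here is a minimal sketch of constructing and fitting a LinearRegression on synthetic data (the data and coefficient values below are made up purely for illustration):

# Minimal LinearRegression sketch on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X = rng.randn(50, 3)                       # 50 samples, 3 features
y = X @ np.array([1.5, -2.0, 0.5]) + 3.0   # known coefficients plus an intercept of 3.0

# fit_intercept=True (the default) estimates the 3.0 offset;
# n_jobs=-1 would instead use all CPUs for the computation.
reg = LinearRegression(fit_intercept=True, copy_X=True)
reg.fit(X, y)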

Attributes

coef_ : array, shape (n_features,) or (n_targets, n_features) (see the theory notes for why the shape varies). The coefficients of the linear model.

residues_ : array, shape (n_targets,) or (1,) or empty. Sum of residuals. Squared Euclidean 2-norm for each target passed during the fit. If the linear regression problem is under-determined (the number of linearly independent rows of the training matrix is less than its number of linearly independent columns), this is an empty array. If the target vector passed during the fit is 1-dimensional, this is a (1,) shape array. New in version 0.18.

intercept_ : array. The intercept of the model.
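Continuing the sketch above, the fitted attributes can be read off directly; on this noiseless synthetic data, coef_ and intercept_ recover the generating values (the outputs shown in the comments are approximate):

print(reg.coef_)       # approximately [ 1.5 -2.   0.5], shape (n_features,)
print(reg.intercept_)  # approximately 3.0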

Methods

fit(X, y, sample_weight=None)

Fits the linear model. A fit method like this one appears in most of the other estimator classes we will meet later.

Parameters:

X : numpy array or sparse matrix, shape [n_samples, n_features]. The training data.

y : numpy array, shape [n_samples, n_targets]. The target values.

sample_weight : numpy array, shape [n_samples]. An individual weight for each sample.
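As a small sketch of the sample_weight argument (continuing the example above; the weights here are chosen arbitrarily), the call below doubles the influence of the first 25 samples:

w = np.ones(50)
w[:25] = 2.0                    # up-weight the first half of the samples
reg.fit(X, y, sample_weight=w)  # weighted least-squares fit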

get_params(deep=True)

Get the parameters for this estimator.

Parameters:

deep : boolean, optional. If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params : mapping of string to any. Parameter names mapped to their values.
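For example, continuing the sketch above, get_params returns the constructor arguments as a plain dict, which is what tools such as GridSearchCV rely on:

print(reg.get_params())
# e.g. {'copy_X': True, 'fit_intercept': True, 'n_jobs': 1, 'normalize': False}
# (the exact set of keys depends on the scikit-learn version)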

predict(X)

Predicts with the trained linear model and returns an array of shape (n_samples,) containing the predicted values.

Parameters:

X : {array-like, sparse matrix}, shape (n_samples, n_features). The test samples.
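Continuing the sketch, predictions for new samples are a single call (X_new here is hypothetical test data):

X_new = rng.randn(5, 3)      # 5 hypothetical test samples with the same 3 features
y_pred = reg.predict(X_new)  # array of shape (5,)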

score(X, y, sample_weight=None)

Returns the coefficient of determination R^2 of the prediction. R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0, and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an R^2 score of 0.0.

Parameters:

X : array-like, shape = (n_samples, n_features). Test samples.

y : array-like, shape = (n_samples,) or (n_samples, n_outputs). True values for X.

sample_weight : array-like, shape = [n_samples], optional. Sample weights.

Returns:

score : float. R^2 of self.predict(X) w.r.t. y.
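To see that score really is the R^2 from the formula above, here is a sketch that computes it by hand and compares (u and v follow the definitions above, continuing the running example):

y_pred = reg.predict(X)
u = ((y - y_pred) ** 2).sum()      # residual sum of squares
v = ((y - y.mean()) ** 2).sum()    # total sum of squares
print(1 - u / v, reg.score(X, y))  # the two values agree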

set_params(**params)

Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Returns:

self
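A brief sketch of both forms; the Pipeline below is only there to illustrate the <component>__<parameter> syntax for nested objects:

reg.set_params(fit_intercept=False)  # returns self, so calls can be chained

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
pipe = Pipeline([('scale', StandardScaler()), ('lr', LinearRegression())])
pipe.set_params(lr__fit_intercept=False)  # step name + '__' + parameter name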

Ridge Regression (Ridge)

class sklearn.linear_model.Ridge(alpha=1.0, fit_intercept=True, normalize=False, copy_X=True, max_iter=None, tol=0.001, solver='auto', random_state=None)

Ridge regression is linear regression whose loss function is the linear least-squares function with an L2 regularization term added.

Parameters:

alpha : {float, array-like}, shape (n_targets,). The regularization parameter: it controls the regularization strength and must be a positive float. In general, larger values mean stronger regularization.

copy_X : boolean, optional, default True. If True, X is copied; otherwise it may be overwritten.

fit_intercept : boolean, optional. Whether to calculate the intercept for the model. If set to False, no intercept is computed (i.e., the data is assumed to be already centered).

max_iter : int, optional. The maximum number of iterations for the conjugate gradient solver. For 'sparse_cg' and 'lsqr' the default is taken from scipy.sparse.linalg; for 'sag' the default is 1000.

normalize : boolean, optional, default False. If True, X is normalized before the regression. This parameter is ignored when fit_intercept is set to False.

solver : {'auto', 'svd', 'cholesky', 'lsqr', 'sparse_cg', 'sag'}. The computational routine to use. 'auto' chooses the solver automatically based on the type of data. 'svd' uses a Singular Value Decomposition of X to compute the Ridge coefficients; it is more stable for singular matrices than 'cholesky'. 'cholesky' uses the standard scipy.linalg.solve function to obtain a closed-form solution. 'sparse_cg' uses the conjugate gradient solver as found in scipy.sparse.linalg.cg; as an iterative algorithm, this solver is more appropriate than 'cholesky' for large-scale data (with the possibility to set tol and max_iter). 'lsqr' uses the dedicated regularized least-squares routine scipy.sparse.linalg.lsqr; it is the fastest but may not be available in old scipy versions, and it also uses an iterative procedure. 'sag' uses Stochastic Average Gradient descent; it also uses an iterative procedure and is often faster than other solvers when both n_samples and n_features are large. Note that the fast convergence of 'sag' is only guaranteed on features with approximately the same scale; you can preprocess the data with a scaler from sklearn.preprocessing. The last four solvers support both dense and sparse data; however, only 'sag' supports sparse input when fit_intercept is True. New in version 0.17: Stochastic Average Gradient descent solver. (A short sketch comparing two of the solvers follows this list.)

tol : float. The precision of the solution.

random_state : int seed, RandomState instance, or None (default). The seed of the pseudo random number generator to use when shuffling the data. Used only by the 'sag' solver. New in version 0.17: random_state to support Stochastic Average Gradient.
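Here is a minimal sketch comparing a direct solver with the iterative 'sag' solver on the same data; as noted above, 'sag' converges fastest on features of similar scale, so the data is standardized first (the alpha value and data are arbitrary):

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
X = StandardScaler().fit_transform(rng.randn(100, 5))
y = rng.randn(100)

for solver in ('cholesky', 'sag'):
    model = Ridge(alpha=1.0, solver=solver, random_state=0)
    model.fit(X, y)
    print(solver, model.coef_[:2])  # the solvers agree up to the solver tolerance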

Attributes

coef_ : array, shape (n_features,) or (n_targets, n_features) (see the theory notes for why the shape varies). The coefficients of the linear model.

intercept_ : array. The intercept of the model.

n_iter_ : the actual number of iterations for each target. Only available for the 'sag' and 'lsqr' solvers; the others return None.
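Continuing the solver sketch above, n_iter_ illustrates the difference between the iterative and direct solvers:

m = Ridge(alpha=1.0, solver='sag', random_state=0).fit(X, y)
print(m.n_iter_)  # per-target iteration count for the iterative 'sag' solver
m = Ridge(alpha=1.0, solver='cholesky').fit(X, y)
print(m.n_iter_)  # None for the direct solvers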

Methods

fit(X, y, sample_weight=None)

Fits the ridge regression model.

Parameters:

X : numpy array or sparse matrix, shape [n_samples, n_features]. The training data.

y : numpy array, shape [n_samples, n_targets]. The target values.

sample_weight : numpy array, shape [n_samples]. An individual weight for each sample.

get_params(deep=True)

Get the parameters for this estimator.

Parameters:

deep : boolean, optional. If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params : mapping of string to any. Parameter names mapped to their values.

predict(X)

Predict using the linear model.

Parameters:

X : {array-like, sparse matrix}, shape = (n_samples, n_features). Samples.

Returns:

C : array, shape = (n_samples,). Returns predicted values.

score(X, y, sample_weight=None)

Returns the coefficient of determination R^2 of the prediction. R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0, and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an R^2 score of 0.0.

Parameters:

X : array-like, shape = (n_samples, n_features). Test samples.

y : array-like, shape = (n_samples,) or (n_samples, n_outputs). True values for X.

sample_weight : array-like, shape = [n_samples], optional. Sample weights.

Returns:

score : float. R^2 of self.predict(X) w.r.t. y.

set_params(**params)

Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Returns:

self

Example:

from sklearn.linear_model import Ridge
import numpy as np

n_samples, n_features = 10, 5
np.random.seed(0)
y = np.random.randn(n_samples)
X = np.random.randn(n_samples, n_features)
clf = Ridge(alpha=1.0)
clf.fit(X, y)
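After fitting, the Ridge object exposes the same interface as LinearRegression, so for instance:

print(clf.coef_)        # array of shape (5,)
print(clf.intercept_)
print(clf.score(X, y))  # in-sample R^2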

Originally published on the WeChat official account 人工智能LeadAI (atleadai).

Original publication date: 2017-09-04.
