# Machine Learning in Action | Chapter 2: Linear Regression Models

`class sklearn.linear_model.LinearRegression(fit_intercept=True, normalize=False, copy_X=True, n_jobs=1)`

`coef_` : array of shape `(n_features,)` or `(n_targets, n_features)` (see the theory notes for why). The coefficients of the linear model.

`residues_` : array of shape (n_targets,), (1,), or empty. Sum of residuals: the squared Euclidean 2-norm for each target passed during the fit. If the linear regression problem is under-determined (the number of linearly independent rows of the training matrix is less than its number of linearly independent columns), this is an empty array. If the target vector passed during the fit is 1-dimensional, this is an array of shape (1,). New in version 0.18.

`intercept_` : array. The intercept term.
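A minimal sketch of inspecting these attributes after fitting (the toy data below is mine, constructed so the true parameters are recoverable exactly):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data generated exactly as y = 1 + 2*x0 + 3*x1,
# so ordinary least squares recovers the true parameters.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = 1 + X @ np.array([2.0, 3.0])

reg = LinearRegression().fit(X, y)
print(reg.coef_)       # close to [2., 3.]
print(reg.intercept_)  # close to 1.0
```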

`fit(X, y, sample_weight=None)`

Fit the linear model.

`get_params(deep=True)`

Get parameters for this estimator.

Parameters: `deep` : boolean, optional. If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns: `params` : mapping of string to any. Parameter names mapped to their values.

`predict(X)`

Predict using the linear model.

`score(X, y, sample_weight=None)`

Returns the coefficient of determination R^2 of the prediction. R^2 is defined as (1 - u/v), where u is the residual sum of squares `((y_true - y_pred) ** 2).sum()` and v is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`. The best possible score is 1.0, and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an R^2 score of 0.0.

Parameters: `X` : array-like, shape (n_samples, n_features). Test samples. `y` : array-like, shape (n_samples,) or (n_samples, n_outputs). True values for X. `sample_weight` : array-like, shape (n_samples,), optional. Sample weights.

Returns: `score` : float. R^2 of `self.predict(X)` w.r.t. y.
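A quick sketch verifying the formula above: computing 1 - u/v by hand matches what `score` returns (the data here is synthetic, invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic regression problem with a little noise
rng = np.random.RandomState(0)
X = rng.randn(50, 3)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.randn(50)

reg = LinearRegression().fit(X, y)
y_pred = reg.predict(X)

u = ((y - y_pred) ** 2).sum()    # residual sum of squares
v = ((y - y.mean()) ** 2).sum()  # total sum of squares
r2_manual = 1 - u / v

print(r2_manual, reg.score(X, y))  # the two values agree
```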

`set_params(**params)`

Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it's possible to update each component of a nested object.

Returns: `self`
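A short sketch of the nested naming convention, using a pipeline whose step names ("scale", "ridge") are my own choices:

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge

pipe = Pipeline([("scale", StandardScaler()), ("ridge", Ridge(alpha=1.0))])

# Nested parameters are addressed as <step>__<param>
pipe.set_params(ridge__alpha=0.5)
print(pipe.get_params()["ridge__alpha"])  # 0.5
```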

`class sklearn.linear_model.Ridge(alpha=1.0, fit_intercept=True, normalize=False, copy_X=True, max_iter=None, tol=0.001, solver='auto', random_state=None)`

`coef_` : array of shape `(n_features,)` or `(n_targets, n_features)` (see the theory notes for why). The coefficients of the linear model.

`intercept_` : array. The intercept term.

`n_iter_` : the actual number of iterations for each target. Only available for the `sag` and `lsqr` solvers; the others return None.
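A sketch of the `n_iter_` behavior described above, comparing an iterative solver against a direct one (the data is random and purely illustrative):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
X = rng.randn(100, 5)
y = rng.randn(100)

# Iterative solvers such as 'sag' record their iteration counts
clf_sag = Ridge(alpha=1.0, solver="sag", max_iter=1000).fit(X, y)
print(clf_sag.n_iter_)   # array of iteration counts, one per target

# Direct solvers such as 'cholesky' leave the attribute as None
clf_chol = Ridge(alpha=1.0, solver="cholesky").fit(X, y)
print(clf_chol.n_iter_)  # None
```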

`fit(X, y, sample_weight=None)`

Fit the Ridge regression model.

`get_params(deep=True)`

Get parameters for this estimator. Parameters: `deep` : boolean, optional. If True, will return the parameters for this estimator and contained subobjects that are estimators. Returns: `params` : mapping of string to any. Parameter names mapped to their values.

`predict(X)`

Predict using the linear model. Parameters: `X` : {array-like, sparse matrix}, shape (n_samples, n_features). Samples. Returns: `C` : array, shape (n_samples,). Returns predicted values.

`score(X, y, sample_weight=None)`

Returns the coefficient of determination R^2 of the prediction. R^2 is defined as (1 - u/v), where u is the residual sum of squares `((y_true - y_pred) ** 2).sum()` and v is the total sum of squares `((y_true - y_true.mean()) ** 2).sum()`. The best possible score is 1.0, and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an R^2 score of 0.0. Parameters: `X` : array-like, shape (n_samples, n_features). Test samples. `y` : array-like, shape (n_samples,) or (n_samples, n_outputs). True values for X. `sample_weight` : array-like, shape (n_samples,), optional. Sample weights. Returns: `score` : float. R^2 of `self.predict(X)` w.r.t. y.

`set_params(**params)`

Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form `<component>__<parameter>` so that it's possible to update each component of a nested object. Returns: `self`

```python
from sklearn.linear_model import Ridge
import numpy as np

n_samples, n_features = 10, 5
np.random.seed(0)
y = np.random.randn(n_samples)
X = np.random.randn(n_samples, n_features)
clf = Ridge(alpha=1.0)
clf.fit(X, y)
```
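Continuing with the same toy data, a sketch (variable names are my own) of the effect of `alpha`: the L2 penalty grows with it, so the coefficient norm shrinks toward zero:

```python
import numpy as np
from sklearn.linear_model import Ridge

n_samples, n_features = 10, 5
np.random.seed(0)
y = np.random.randn(n_samples)
X = np.random.randn(n_samples, n_features)

# Larger alpha => stronger L2 penalty => smaller coefficient norm
norms = [np.linalg.norm(Ridge(alpha=a).fit(X, y).coef_)
         for a in (0.01, 1.0, 100.0)]
print(norms)  # strictly decreasing sequence
```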
