scikit-learn LinearRegression 1.1.1 Ordinary Least Squares


The ordinary linear regression model:

:math:`\hat{y}(w, x) = w_0 + w_1 x_1 + \dots + w_p x_p`

In this formula, :math:`w = (w_1, \dots, w_p)` are the weights (some books and articles call them parameters or coefficients), and :math:`w_0` is the bias b. In linear regression, an optimization procedure finds the best-fitting w and b, which are then used to make predictions.
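To make the formula concrete, here is a minimal sketch that computes a single prediction by hand; the values of w, b, and x are made up purely for illustration:

import numpy as np

# Illustrative values only: these weights and bias are assumptions for this sketch
w = np.array([2.0, -1.0])  # weights w_1, w_2
b = 0.5                    # bias (intercept) w_0

x = np.array([3.0, 4.0])   # one sample with two features
y_hat = np.dot(w, x) + b   # y_hat = w_1*x_1 + w_2*x_2 + b
print(y_hat)               # 2.0*3.0 + (-1.0)*4.0 + 0.5 = 2.5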

sklearn example application:

LinearRegression fits a linear model with coefficients :math:`w = (w_1, ..., w_p)` so as to minimize the sum of squared errors between the observed responses in the dataset and the responses predicted by the linear approximation; this is the core idea of least squares. In mathematical form it solves:

:math:`\min_{w} ||Xw - y||_2^2`

(Xw is the vector of predicted values; y is the vector of true values.)
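To see the objective in action, here is a minimal sketch that minimizes ||Xw - y||_2^2 directly with NumPy's least-squares solver; the toy data points are the same ones used in the console example below:

import numpy as np

# Toy data matching the console example that follows
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
y = np.array([0.0, 1.0, 2.0])

# np.linalg.lstsq minimizes ||Xw - y||_2^2, the same objective as OLS
w, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
print(w)  # [ 0.5  0.5], matching clf.coef_ below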

The LinearRegression model is fit by calling its fit method on X and y (X is the input, y is the output); the coefficients of the fitted linear model are stored in the coef_ attribute.

>>> from sklearn import linear_model
>>> clf = linear_model.LinearRegression()
>>> clf.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
>>> clf.coef_
array([ 0.5,  0.5])

However, the coefficient estimates of ordinary least squares rely on the model terms being independent. When the terms are correlated and the columns of the design matrix X are approximately linearly dependent, the design matrix becomes close to singular. The least-squares estimate is then highly sensitive to random errors in the observed response and can have large variance. This situation of multicollinearity can arise in practice, for example in data collected without an experimental design.
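A quick sketch of this sensitivity, using synthetic data invented here: two nearly identical columns make the fitted coefficients swing wildly under small perturbations of the response.

import numpy as np
from sklearn import linear_model

rng = np.random.RandomState(0)
x1 = rng.rand(50)
x2 = x1 + 1e-6 * rng.randn(50)  # second column nearly collinear with the first
X = np.column_stack([x1, x2])
y = x1 + x2 + 0.01 * rng.randn(50)

# Refit on small random perturbations of y: the coefficients vary enormously,
# even though the predictions themselves barely change
for _ in range(3):
    clf = linear_model.LinearRegression()
    clf.fit(X, y + 0.01 * rng.randn(50))
    print(clf.coef_)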

A concrete LinearRegression usage example:

This example takes a single feature (column index 2) of the diabetes dataset bundled with sklearn's datasets module and, together with the labels, trains a model and draws a simple two-dimensional scatter plot with a fitted straight line. The line minimizes the sum of squared vertical (y-axis) distances from the points to the line. The script also computes metrics such as the residual sum of squares and the variance score.

Script output:

Coefficients: [ 938.23786125]
Residual sum of squares: 2548.07
Variance score: 0.47

Here Coefficients is the weight w obtained by the least-squares fit.

import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets, linear_model

# Load the diabetes dataset
diabetes = datasets.load_diabetes()

# Use only one feature (column index 2)
diabetes_X = diabetes.data[:, np.newaxis, 2]

# Split the data into training/testing sets
diabetes_X_train = diabetes_X[:-20]
diabetes_X_test = diabetes_X[-20:]

# Split the targets into training/testing sets, matching the X split
diabetes_y_train = diabetes.target[:-20]
diabetes_y_test = diabetes.target[-20:]

# Create a linear regression model object
regr = linear_model.LinearRegression()

# Train the model by passing the training set and its labels to fit()
regr.fit(diabetes_X_train, diabetes_y_train)

# The coefficients: the fitted weights w
print('Coefficients: \n', regr.coef_)

# The mean squared error between predictions and true values
# (predict() returns the predicted values)
print("Residual sum of squares: %.2f"
      % np.mean((regr.predict(diabetes_X_test) - diabetes_y_test) ** 2))

# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(diabetes_X_test, diabetes_y_test))

# Plot the outputs with matplotlib
plt.scatter(diabetes_X_test, diabetes_y_test, color='black')
plt.plot(diabetes_X_test, regr.predict(diabetes_X_test), color='blue', linewidth=3)
plt.xticks(())
plt.yticks(())
plt.show()

Summary of selected attributes and methods:

Attributes:

coef_ : array of shape (n_features,) or (n_targets, n_features). The estimated coefficients for the linear regression problem, i.e. the w that appears in the equation. If multiple targets are passed during the fit (y is 2-D), this is a 2-D array of shape (n_targets, n_features); if only one target is passed, this is a 1-D array of length n_features.

residues_ : array of shape (n_targets,) or (1,), or empty. Sum of residuals: the squared Euclidean 2-norm for each target passed during the fit. If the linear regression problem is under-determined (the number of linearly independent rows of the training matrix is less than its number of linearly independent columns), this is an empty array. If the target vector passed during the fit is 1-dimensional, this is a (1,)-shaped array. New in version 0.18.

intercept_ : the intercept, i.e. the b value.
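A short sketch of reading these attributes after a fit, using toy data made up here (points on the line y = 2x + 1):

import numpy as np
from sklearn import linear_model

# Toy data on the line y = 2*x + 1
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])

clf = linear_model.LinearRegression().fit(X, y)
print(clf.coef_)       # [ 2.] -- the weights w
print(clf.intercept_)  # 1.0  -- the bias b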

Methods:

__init__(fit_intercept=True, normalize=False, copy_X=True, n_jobs=1)

fit(X, y, sample_weight=None)

Purpose: fit the linear model.

Parameters:
X : training data (independent variables), numpy array of shape [n_samples, n_features]
y : labels (dependent variable), numpy array of shape [n_samples, n_targets]
sample_weight : individual weight for each sample, of shape [n_samples]
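For instance, sample_weight can be used to down-weight suspect samples; the data and weight values below are arbitrary illustration values:

import numpy as np
from sklearn import linear_model

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 2.0, 10.0])  # the last point is an outlier

# Down-weighting the outlier pulls the fit back toward the other points
weights = np.array([1.0, 1.0, 1.0, 0.1])
clf = linear_model.LinearRegression()
clf.fit(X, y, sample_weight=weights)
print(clf.coef_, clf.intercept_)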

get_params(deep=True)

Get parameters for this estimator.

Parameters:
deep : boolean, optional. If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : mapping of string to any. Parameter names mapped to their values.
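For example (output shown for the 0.18-era API documented here; later releases have different constructor parameters):

from sklearn import linear_model

clf = linear_model.LinearRegression()
print(clf.get_params())
# {'copy_X': True, 'fit_intercept': True, 'n_jobs': 1, 'normalize': False}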

predict(X)

Purpose: make predictions with the fitted linear model.

Parameters:
X : data to predict on, of shape (n_samples, n_features)

Returns:
array of shape (n_samples,)
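A minimal sketch, with toy training data invented here:

import numpy as np
from sklearn import linear_model

X_train = np.array([[0.0], [1.0], [2.0]])
y_train = np.array([0.0, 1.0, 2.0])
clf = linear_model.LinearRegression().fit(X_train, y_train)

X_new = np.array([[3.0], [4.0]])  # shape (n_samples, n_features)
print(clf.predict(X_new))         # [ 3.  4.], shape (n_samples,)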

score(X, y, sample_weight=None)

Returns the coefficient of determination R^2 of the prediction. The coefficient R^2 is defined as (1 - u/v), where u is the residual sum of squares ((y_true - y_pred) ** 2).sum() and v is the total sum of squares ((y_true - y_true.mean()) ** 2).sum(). The best possible score is 1.0, and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected value of y, disregarding the input features, would get an R^2 score of 0.0.

Parameters:
X : array-like, shape = (n_samples, n_features). Test samples.
y : array-like, shape = (n_samples) or (n_samples, n_outputs). True values for X.
sample_weight : array-like, shape = [n_samples], optional. Sample weights.

Returns:
score : float. R^2 of self.predict(X) w.r.t. y.

set_params(**params)

Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter>, so that it is possible to update each component of a nested object.

Returns:
self
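To check the R^2 definition, here is a sketch that computes it by hand and compares it with score(); the toy data are invented here:

import numpy as np
from sklearn import linear_model

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.5, 0.9, 2.1, 3.2])

clf = linear_model.LinearRegression().fit(X, y)
y_pred = clf.predict(X)

u = ((y - y_pred) ** 2).sum()    # residual sum of squares
v = ((y - y.mean()) ** 2).sum()  # total sum of squares
print(1 - u / v)                 # R^2 computed by hand
print(clf.score(X, y))           # same value from score()

# set_params updates hyperparameters in place and returns the estimator itself
clf.set_params(fit_intercept=False)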

This article draws on a translation of the scikit-learn documentation published on GitHub.
