100 Days of Machine Learning, Day 3: Multiple Linear Regression

Step 1: Data Preprocessing

Import the libraries

import pandas as pd
import numpy as np

Import the dataset

dataset = pd.read_csv('50_Startups.csv')
X = dataset.iloc[:, :-1].values
Y = dataset.iloc[:, 4].values
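
In the standard 50_Startups.csv, the first four columns (R&D Spend, Administration, Marketing Spend, State) form the feature matrix X, and the fifth column, Profit, is the target Y.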

Encode the categorical data

from sklearn.preprocessing import LabelEncoder, OneHotEncoder
labelencoder = LabelEncoder()
X[:, 3] = labelencoder.fit_transform(X[:, 3])
onehotencoder = OneHotEncoder(categorical_features=[3])
X = onehotencoder.fit_transform(X).toarray()
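
Note: the categorical_features argument was removed from OneHotEncoder in scikit-learn 0.22, so the snippet above only runs on older versions. A minimal modern sketch, assuming the State strings still sit in column 3, uses ColumnTransformer; it places the encoded columns first, so the dummy-trap slice in the next step still works:

# Sketch for scikit-learn >= 0.22; not part of the original code
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
ct = ColumnTransformer([('state', OneHotEncoder(), [3])], remainder='passthrough')
X = ct.fit_transform(X)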

Avoid the dummy variable trap

X = X[:, 1:]
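
Dropping the first dummy column avoids the dummy variable trap: the full set of one-hot columns always sums to one and is therefore perfectly collinear with the regression intercept. On newer scikit-learn versions, OneHotEncoder(drop='first') achieves the same thing during encoding.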

Split the dataset into training and test sets

from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)

Step 2: Train the multiple linear regression model on the training set

from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, Y_train)

Step 3: Predict the results on the test set

y_pred = regressor.predict(X_test)
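
The walkthrough stops at prediction. As a quick sanity check (not part of the original code), the R² score on the test set gauges how well the model fits:

# Minimal evaluation sketch: R^2 on the held-out test set
from sklearn.metrics import r2_score
print(r2_score(Y_test, y_pred))  # closer to 1.0 means a better fit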

◆◆◆◆◆

· Previous · Posts ·

Data:

Country,Age,Salary,Purchased

France,44,72000,No

Spain,27,48000,Yes

Germany,30,54000,No

Spain,38,61000,No

Germany,40,,Yes

France,35,58000,Yes

Spain,,52000,No

France,48,79000,Yes

Germany,50,83000,No

France,37,67000,Yes
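
Note the two blank cells above (one Age and one Salary); these are the missing values that Step 3 in the code below fills in by mean imputation.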

Code:

#Day 1: Data Preprocessing

#Step 1: Importing the libraries

import numpy as np

import pandas as pd

#Step 2: Importing dataset

dataset = pd.read_csv('../datasets/Data.csv')

X = dataset.iloc[:, :-1].values

Y = dataset.iloc[:, 3].values

print("Step 2: Importing dataset")

print("X")

print(X)

print("Y")

print(Y)

#Step 3: Handling the missing data

from sklearn.preprocessing import Imputer

imputer = Imputer(missing_values="NaN", strategy="mean", axis=0)

imputer = imputer.fit(X[:, 1:3])

X[:, 1:3] = imputer.transform(X[:, 1:3])
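
Imputer was removed in scikit-learn 0.22. Its modern replacement is SimpleImputer, which always works column-wise (there is no axis argument) and expects np.nan rather than the string "NaN"; a minimal sketch:

# Sketch for scikit-learn >= 0.22; not part of the original code
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(missing_values=np.nan, strategy="mean")
X[:, 1:3] = imputer.fit_transform(X[:, 1:3])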

print("---------------------")

print("Step 3: Handling the missing data")

print("step2")

print("X")

print(X)

#Step 4: Encoding categorical data

from sklearn.preprocessing import LabelEncoder, OneHotEncoder

labelencoder_X = LabelEncoder()

X[:, 0] = labelencoder_X.fit_transform(X[:, 0])

#Creating a dummy variable

onehotencoder = OneHotEncoder(categorical_features=[0])

X = onehotencoder.fit_transform(X).toarray()

labelencoder_Y = LabelEncoder()

Y = labelencoder_Y.fit_transform(Y)
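
Here LabelEncoder maps the Purchased labels (No/Yes) to the integers 0/1 so the target can be used numerically.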

print("---------------------")

print("Step 4: Encoding categorical data")

print("X")

print(X)

print("Y")

print(Y)

#Step 5: Splitting the dataset into training and test sets

from sklearn.model_selection import train_test_split

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)

print("---------------------")

print("Step 5: Splitting the datasets into training sets and Test sets")

print("X_train")

print(X_train)

print("X_test")

print(X_test)

print("Y_train")

print(Y_train)

print("Y_test")

print(Y_test)

#Step 6: Feature Scaling

from sklearn.preprocessing import StandardScaler

sc_X = StandardScaler()

X_train = sc_X.fit_transform(X_train)

X_test = sc_X.transform(X_test)
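
Note that the scaler is fitted on the training set only and merely applied to the test set; reusing the training mean and variance keeps test-set information from leaking into the preprocessing.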

print("---------------------")

print("Step 6: Feature Scaling")

print("X_train")

print(X_train)

print("X_test")

print(X_test)

Original article: https://kuaibao.qq.com/s/20190202G0K00H00?refer=cp_1026