I built a KNN classification model. Unfortunately, my model only reaches a little over 80% accuracy, and I would like to get a better result. Could I get some tips? Maybe I'm using too many predictors?
My data: https://www.openml.org/search?type=data&sort=runs&id=53&status=active
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix, accuracy_score, f1_score
from sklearn.model_selection import GridSearchCV
heart_disease = pd.read_csv('heart_disease.csv', sep=';', decimal=',')
y = heart_disease['heart_disease']
X = heart_disease.drop(["heart_disease"], axis=1)
correlation_matrix = heart_disease.corr()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=123)
scaler = MinMaxScaler(feature_range=(-1,1))
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
knn_3 = KNeighborsClassifier(3, n_jobs = -1)
knn_3.fit(X_train, y_train)
y_train_pred = knn_3.predict(X_train)
labels = ['0', '1']
print('Training set')
print(pd.DataFrame(confusion_matrix(y_train, y_train_pred), index = labels, columns = labels))
print(accuracy_score(y_train, y_train_pred))
print(f1_score(y_train, y_train_pred))
y_test_pred = knn_3.predict(X_test)
print('Test set')
print(pd.DataFrame(confusion_matrix(y_test, y_test_pred), index = labels, columns = labels))
print(accuracy_score(y_test, y_test_pred))
print(f1_score(y_test, y_test_pred))
hyperparameters = {'n_neighbors' : range(1, 15), 'weights': ['uniform','distance']}
knn_best = GridSearchCV(KNeighborsClassifier(), hyperparameters, n_jobs = -1, error_score = 'raise')
knn_best.fit(X_train,y_train)
knn_best.best_params_
y_train_pred_best = knn_best.predict(X_train)
y_test_pred_best = knn_best.predict(X_test)
print('Training set')
print(pd.DataFrame(confusion_matrix(y_train, y_train_pred_best), index = labels, columns = labels))
print(accuracy_score(y_train, y_train_pred_best))
print(f1_score(y_train, y_train_pred_best))
print('Test set')
print(pd.DataFrame(confusion_matrix(y_test, y_test_pred_best), index = labels, columns = labels))
print(accuracy_score(y_test, y_test_pred_best))
print(f1_score(y_test, y_test_pred_best))
Posted on 2022-12-03 04:53:57
There are a few things you can try to improve the accuracy of your KNN model.
First, you can try tuning the model's hyperparameters, such as the number of nearest neighbors to consider, or the distance metric used to measure similarity between points.
To tune the hyperparameters of a KNN model, you can use techniques such as cross-validation and grid search to try different combinations of hyperparameters and find the combination that works best for your data.
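Your snippet already runs `GridSearchCV` over `n_neighbors` and `weights`; one thing it does not search is the distance metric. A minimal sketch of an extended grid (using synthetic stand-in data, since the heart-disease CSV is not reproduced here; the metric choices are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for the heart-disease data
X, y = make_classification(n_samples=300, n_features=10, random_state=123)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=123)

scaler = MinMaxScaler(feature_range=(-1, 1))
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Search the distance metric in addition to k and the weighting scheme
param_grid = {
    'n_neighbors': range(1, 15),
    'weights': ['uniform', 'distance'],
    'metric': ['euclidean', 'manhattan', 'chebyshev'],
}
search = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5, n_jobs=-1)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```

On your real data, the best metric may differ from the Euclidean default, which is why it is worth including in the grid.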
You can also try preprocessing your data to make it more suitable for KNN. For example, you could use a technique such as principal component analysis (PCA) to reduce the dimensionality of the data. This helps remove redundancy in the data and reduces the number of dimensions, which can make it easier for KNN to find the nearest neighbors.
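A minimal sketch of the PCA idea, again on synthetic stand-in data. Passing a float to `n_components` keeps enough components to explain that fraction of the variance; `0.95` here is an illustrative choice, not a recommendation from the answer:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the heart-disease data
X, y = make_classification(n_samples=300, n_features=10, random_state=123)

# Scale, keep the components explaining ~95% of the variance, then classify
pipe = Pipeline([
    ('scale', StandardScaler()),
    ('pca', PCA(n_components=0.95)),
    ('knn', KNeighborsClassifier(n_neighbors=5)),
])
scores = cross_val_score(pipe, X, y, cv=5)
print('mean CV accuracy:', scores.mean())

pipe.fit(X, y)
print('components kept:', pipe.named_steps['pca'].n_components_)
```

Putting the scaler and PCA inside a `Pipeline` ensures they are re-fit on each training fold, so the cross-validation score is not leaked.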
Additionally, you could try a different classification algorithm altogether, such as logistic regression or a decision tree. These algorithms may be better suited to your data and may produce better results than KNN.
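A quick way to check this is to cross-validate each candidate model side by side. The sketch below uses synthetic stand-in data and arbitrary illustrative settings (`max_iter=1000`, `max_depth=4` are not from the answer):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the heart-disease data
X, y = make_classification(n_samples=300, n_features=10, random_state=123)

models = {
    'knn': make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    'logreg': make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    'tree': DecisionTreeClassifier(max_depth=4, random_state=123),
}
results = {}
for name, model in models.items():
    results[name] = cross_val_score(model, X, y, cv=5).mean()
    print(f'{name}: {results[name]:.3f}')
```

KNN and logistic regression are wrapped with a scaler because both are sensitive to feature scale; the tree is not, so it is left unscaled.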
Another thing you can try is using ensemble methods, such as bagging or boosting, to combine multiple KNN models and potentially improve their accuracy. Ensemble methods can effectively reduce overfitting and improve a model's generalization.
Posted on 2022-12-03 05:10:35
This is just a small piece of an answer, for finding the best value of n_neighbors.
import numpy as np

errlist = []  # collect the test error rate for each candidate k
for i in range(1, 40):  # candidate values for n_neighbors
    knn_i = KNeighborsClassifier(n_neighbors=i)  # the parameter is n_neighbors, not k_neighbors
    knn_i.fit(X_train, y_train)
    errlist.append(np.mean(knn_i.predict(X_test) != y_test))  # fraction of misclassified test points
Plot the error curve to see the best n_neighbors:
import matplotlib.pyplot as plt

plt.plot(range(1, 40), errlist)
plt.show()
Feel free to change the numbers in the range.
https://stackoverflow.com/questions/74666866