I am an NLP beginner working on a project, and I need to compute the accuracy of several different approaches; however, whenever I run my code I hit memory errors. For example, I keep getting "Unable to allocate 14.2 GiB for an array with shape (38045, 50000) and data type float64", even though I cast to the uint8 data type and changed the memory-allocation settings in Windows' advanced options (see the quick size estimate after the code). My code is below.
import sklearn
import numpy as np
import sklearn.feature_extraction.text
import pandas as pd
df = pd.read_csv('amprocessed.csv')
labels = df.iloc[:, 0]
import sklearn.model_selection
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(max_features=50000, dtype="uint8")
#vectorizer = TfidfVectorizer()
X = (vectorizer.fit_transform(df["Source"]).toarray()).astype(dtype="uint8")
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
xscale = scaler.fit_transform(X).astype(dtype=np.uint8)
from sklearn import svm
x_train, x_test, y_train, y_test = sklearn.model_selection.train_test_split(xscale, labels, test_size=0.2, random_state=42)
clf = svm.SVC(kernel='linear') # Linear Kernel
clf.fit(x_train, y_train).astype(dtype=np.uint8)
y_pred = clf.predict(x_test)
from sklearn import metrics
print("Accuracy:",metrics.accuracy_score(y_test, y_pred)
Posted on 2021-10-14 07:43:21
The problem here is that you are converting the output of CountVectorizer into an np.array. CountVectorizer outputs a sparse matrix (scipy.sparse.csr.csr_matrix), which is an efficient way of storing this kind of data.
Instead of an np.array of shape (50000,) per document, in which almost every value is 0 and only a handful are 1, the sparse matrix only stores the values that are not 0. This dramatically reduces the memory footprint, as this example shows:
from scipy.sparse import csr_matrix
import numpy as np
import sys
X = np.zeros((100_000))
X[0] = 1
print(f'size (bytes) of np.array {sys.getsizeof(X)}')
X_sparse = csr_matrix(X)
print(f'size (bytes) of Sparse Matrix {sys.getsizeof(X_sparse)}')
Output:
size (bytes) of np.array 800104
size (bytes) of Sparse Matrix 48
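One caveat worth adding (my note, not part of the original answer): sys.getsizeof only measures the sparse matrix's Python wrapper object, not its underlying buffers. Summing the CSR data and index arrays gives the real footprint, which is still only a few dozen bytes here:

# Actual CSR buffers: stored values, their column indices, and the row pointer array.
buffers = X_sparse.data.nbytes + X_sparse.indices.nbytes + X_sparse.indptr.nbytes
print(f'size (bytes) of CSR buffers {buffers}')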
So you should modify your preprocessing code to keep the sparse matrix, i.e. drop the .toarray() call:
X = vectorizer.fit_transform(df["Source"])
Besides that, the fit call should simply be written as:
clf.fit(x_train, y_train)
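Putting both fixes together, a minimal sketch of the revised pipeline could look like the following. Note that the switch from MinMaxScaler to MaxAbsScaler is my assumption, not part of the original answer: MinMaxScaler rejects sparse input, while MaxAbsScaler accepts it and preserves sparsity, and SVC can be fit on a sparse matrix directly.

# Hypothetical end-to-end revision that keeps the data sparse throughout.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import MaxAbsScaler  # assumption: sparse-friendly replacement for MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn import svm, metrics

df = pd.read_csv('amprocessed.csv')
labels = df.iloc[:, 0]

vectorizer = CountVectorizer(max_features=50000, dtype="uint8")
X = vectorizer.fit_transform(df["Source"])      # stays a scipy.sparse CSR matrix

scaler = MaxAbsScaler()                         # scales each feature to [0, 1] without densifying
X_scaled = scaler.fit_transform(X)

x_train, x_test, y_train, y_test = train_test_split(
    X_scaled, labels, test_size=0.2, random_state=42)

clf = svm.SVC(kernel='linear')                  # SVC accepts sparse input directly
clf.fit(x_train, y_train)
y_pred = clf.predict(x_test)
print("Accuracy:", metrics.accuracy_score(y_test, y_pred))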
https://stackoverflow.com/questions/69558340