I am trying to train a sequence-to-sequence model for machine translation. I used a publicly available .txt dataset with two columns, English to German (one pair per line, the languages separated by a tab): http://www.manythings.org/anki/deu-eng.zip This works fine. However, when I try to use my own dataset, I run into a problem.
My own DataFrame looks like this:
Column 1 Column 2
0 English a German a
1 English b German b
2 English c German c
3 English d German d
4 ... ...
To use it with the same script, I saved this DataFrame to a .txt file as follows (the goal being, again, one pair per line with a tab separating the languages):
df.to_csv("dataset.txt", index=False, sep='\t')
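(Aside: with `index=False` alone, `to_csv` still writes the header line `Column 1<TAB>Column 2` as the first line of the file, which downstream code will then treat as a sentence pair. A minimal sketch of suppressing it with `header=False`, using a hypothetical one-row frame:)

```python
import pandas as pd

# hypothetical one-row frame matching the table above
df = pd.DataFrame({"Column 1": ["English a"], "Column 2": ["German a"]})

# header=False drops the "Column 1\tColumn 2" line,
# leaving exactly one tab-separated pair per row
df.to_csv("dataset.txt", index=False, header=False, sep='\t')

print(open("dataset.txt", encoding="utf-8").read())
```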
The problem occurs in the code that cleans the data:
import re
import string
from pickle import dump
from unicodedata import normalize
from numpy import array

# load doc into memory
def load_doc(filename):
    # open the file as read only
    file = open(filename, mode='rt', encoding='utf-8')
    # read all text
    text = file.read()
    # close the file
    file.close()
    return text

# split a loaded document into sentences
def to_pairs(doc):
    lines = doc.strip().split('\n')
    pairs = [line.split('\t') for line in lines]

# clean a list of lines
def clean_pairs(lines):
    cleaned = list()
    # prepare regex for char filtering
    re_print = re.compile('[^%s]' % re.escape(string.printable))
    # prepare translation table for removing punctuation
    table = str.maketrans('', '', string.punctuation)
    for pair in lines:
        clean_pair = list()
        for line in pair:
            # normalize unicode characters
            line = normalize('NFD', line).encode('ascii', 'ignore')
            line = line.decode('UTF-8')
            # tokenize on white space
            line = line.split()
            # convert to lowercase
            line = [word.lower() for word in line]
            # remove punctuation from each token
            line = [word.translate(table) for word in line]
            # remove non-printable chars from each token
            line = [re_print.sub('', w) for w in line]
            # remove tokens with numbers in them
            line = [word for word in line if word.isalpha()]
            # store as string
            clean_pair.append(' '.join(line))
        cleaned.append(clean_pair)
    print(array(cleaned))
    return array(cleaned)  # something goes wrong here

# save a list of clean sentences to file
def save_clean_data(sentences, filename):
    dump(sentences, open(filename, 'wb'))
    print('Saved: %s' % filename)

# load dataset
filename = 'data/dataset.txt'
doc = load_doc(filename)
# split into english-german pairs
pairs = to_pairs(doc)
# clean sentences
clean_pairs = clean_pairs(pairs)
# save clean pairs to file
save_clean_data(clean_pairs, 'english-german.pkl')
# spot check
for i in range(100):
    print('[%s] => [%s]' % (clean_pairs[i,0], clean_pairs[i,1]))
The last line throws the following error:
IndexError Traceback (most recent call last)
<ipython-input-2-052d883ebd4c> in <module>()
72 # spot check
73 for i in range(100):
---> 74 print('[%s] => [%s]' % (clean_pairs[i,0], clean_pairs[i,1]))
75
76 # load a clean dataset
IndexError: too many indices for array
One odd thing is that the output of the following line differs between the standard dataset and my own dataset:
# Standard dataset:
return array(cleaned)
[['hi' 'hallo']
 ['hi' 'gru gott']
 ['run' 'lauf']]

# My own dataset:
return array(cleaned)
[list(['hi', 'hallo'])
 list(['hi', 'gru gott'])
 list(['run', 'lauf'])]
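(This difference can be reproduced in isolation, with made-up rows rather than the actual dataset: NumPy builds a 2-D string array from equal-length rows, but falls back to a 1-D object array of lists when the row lengths differ, and the latter rejects `a[i, 0]`-style indexing:)

```python
import numpy as np

# equal-length rows -> a true 2-D array; a[i, 0] indexing works
a = np.array([['hi', 'hallo'], ['run', 'lauf']])
print(a.shape)      # (2, 2)
print(a[0, 0])      # hi

# unequal-length rows -> a 1-D object array of lists
b = np.array([['hi', 'hallo'], ['run']], dtype=object)
print(b.shape)      # (2,)
print(type(b[0]))   # <class 'list'>
# b[0, 0] raises IndexError: too many indices for array
```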
Can anyone explain what the problem is and how to fix it?

Posted on 2018-07-08 05:00:54
clean_pairs is a Python list. The core language's list has no formal concept of multidimensional arrays, so the clean_pairs[i,0] syntax you are using does not work; it should be clean_pairs[i][0].

You probably got that idea from working with Pandas, which supports this kind of indexing via a more sophisticated n-d array data structure.
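In other words (a minimal illustration with made-up data):

```python
# Python lists take one index per bracket
clean_pairs = [['hi', 'hallo'], ['run', 'lauf']]
print(clean_pairs[0][1])   # hallo

# clean_pairs[0, 1] would instead raise
# TypeError: list indices must be integers or slices, not tuple
```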
I am still confused by your code, though. It looks like you are saving a DataFrame to a TSV (tab-separated) file, and then manually parsing the TSV and running text transformations on it? There is a lot wrong with that.

You also have some other problems, at least in the code as posted. For example, your to_pairs function (another thing you should leave to a library, if one exists) does not return anything.
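For instance, a sketch of letting pandas do the parsing instead of re-splitting the TSV by hand (the sample text, file handling via an in-memory buffer, and column names here are all assumptions, not the asker's data):

```python
from io import StringIO

import pandas as pd

# stand-in for dataset.txt; header=None because the file is assumed
# to contain only tab-separated sentence pairs, no header row
tsv = "Hi.\tHallo!\nRun!\tLauf!\n"
df = pd.read_csv(StringIO(tsv), sep='\t', header=None,
                 names=['english', 'german'])

# vectorized lowercasing instead of a hand-rolled token loop
for col in ('english', 'german'):
    df[col] = df[col].str.lower()

# yields a proper 2-D array, so pairs[i, 0] indexing works
pairs = df.to_numpy()
print(pairs[0, 1])   # hallo!
```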
https://stackoverflow.com/questions/51225909