I am having trouble running a simple Python job with Hadoop Streaming. I have tried all the suggestions from earlier posts about a similar error, but I still have the problem.
I ran the code outside Hadoop and it works fine.
Update: I run the code outside Hadoop Streaming with the following command:
cat file |python mapper.py -n 5 -r 0.4 |sort|python reducer.py -f 3618
This works fine... but now I need to run it with Hadoop Streaming:
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar \
-D mapreduce.job.reduces=5 \
-files lr \
-mapper "python lr/mapper.py -n 5 -r 0.4" \
-reducer "python lr/reducer.py -f 3618" \
-input training \
-output models
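For context, -files lr ships the whole lr directory to each task's working directory, and the mapper/reducer commands are then resolved relative to it. A quick local sanity check is sketched below; it assumes lr/ also contains the lrsgd.py and utils.py modules that reducer.py imports, which is not confirmed by the job command itself.

# Sketch: verify that the directory shipped with -files actually contains
# everything the streaming commands need (assumed layout, not confirmed).
import os

required = ["mapper.py", "reducer.py", "lrsgd.py", "utils.py"]
missing = [name for name in required if not os.path.exists(os.path.join("lr", name))]
print("missing from lr/:", missing if missing else "nothing")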
The Hadoop Streaming job fails. I looked at the logs, but I did not see anything that tells me why this happens.
I have the following mapper.py:
#!/usr/bin/env python
import sys
import random
from optparse import OptionParser

parser = OptionParser()
parser.add_option("-n", "--model-num", action="store", dest="n_model",
                  help="number of models to train", type="int")
parser.add_option("-r", "--sample-ratio", action="store", dest="ratio",
                  help="ratio to sample for each ensemble", type="float")
options, args = parser.parse_args(sys.argv)

random.seed(8803)
r = options.ratio

for line in sys.stdin:
    # TODO
    # Note: The following lines are only there to help
    # you get started (and to have a 'runnable' program).
    # You may need to change some or all of the lines below.
    # Follow the pseudocode given in the PDF.
    key = random.randint(0, options.n_model-1)
    value = line.strip()
    for i in range(1, options.n_model+1):
        m = random.random()
        if m < r:
            print "%d\t%s" % (i, value)
and my reducer.py:
#!/usr/bin/env python
import sys
import pickle
from optparse import OptionParser

from lrsgd import LogisticRegressionSGD
from utils import parse_svm_light_line

parser = OptionParser()
parser.add_option("-e", "--eta", action="store", dest="eta",
                  default=0.01, help="step size", type="float")
parser.add_option("-c", "--Regularization-Constant", action="store", dest="C",
                  default=0.0, help="regularization strength", type="float")
parser.add_option("-f", "--feature-num", action="store", dest="n_feature",
                  help="number of features", type="int")
options, args = parser.parse_args(sys.argv)

classifier = LogisticRegressionSGD(options.eta, options.C, options.n_feature)

for line in sys.stdin:
    key, value = line.split("\t", 1)
    value = value.strip()
    X, y = parse_svm_light_line(value)
    classifier.fit(X, y)

pickle.dump(classifier, sys.stdout)
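A side note on the last line: the default pickle text protocol contains newline characters, and Streaming treats every line the reducer writes as a separate output record, so the pickled model can end up split across records. A possible workaround, purely my own assumption and not part of the original assignment, is to base64-encode the model and emit it as a single key/value line; the helper name emit_model below is hypothetical.

# Sketch: emit the trained model as one tab-separated text line so the
# streaming output stays line-oriented (base64 keeps the pickle ASCII-safe).
import base64
import pickle
import sys

def emit_model(model_id, classifier):
    blob = base64.b64encode(pickle.dumps(classifier)).decode("ascii")
    sys.stdout.write("%s\t%s\n" % (model_id, blob))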
It runs fine when I run it outside Hadoop, but when I run it with Hadoop Streaming it gives the error:
17/02/07 07:44:34 INFO mapreduce.Job: Task Id : attempt_1486438814591_0038_m_000001_2, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 2
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:322)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:535)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
Posted on 2022-04-04 06:06:17
hadoop jar /home/maria_dev/hadoop-streaming-2.7.3.jar \
-file ./mapper.py -mapper 'python mapper.py' \
-file ./reducer.py -reducer 'python reducer.py' \
-input /user/maria_dev/wordcount/worddata.txt \
-output /user/maria_dev/output
This worked for me. Make sure every file path is correct. Initially I forgot to specify -file for the two Python scripts, and the job did not work.
Posted on 2022-08-10 02:47:33
For complete noobs (like me): make sure the first line of your .py files contains the following:
#!/usr/bin/env python
It is more than just a comment, so don't delete it by accident!
https://stackoverflow.com/questions/42084411