I have tried using a random forest model to predict a stream of examples, but I can't seem to use the model to classify them. Here is the pyspark code:
sc = SparkContext(appName="App")
model = RandomForest.trainClassifier(trainingData, numClasses=2, categoricalFeaturesInfo={}, impurity='gini', numTrees=150)
ssc = StreamingContext(sc, 1)
lines = ssc.socketTextStream(hostname, int(port))
parsedLines = lines.map(parse)
parsedLines.pprint()
predictions = parsedLines.map(lambda event: model.predict(event.features))

The error returned when this runs on the cluster:
Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.

Is there any way to use a model trained on static data to predict streaming examples?
Thanks everyone, I really appreciate it!
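The parse helper used in parsedLines = lines.map(parse) is not shown in the question. A minimal sketch of what it might look like, assuming each socket line is a comma-separated record of "label,f1,f2,...". A namedtuple stands in for pyspark.mllib.regression.LabeledPoint here so the sketch runs without a Spark installation; in the actual job you would return a LabeledPoint instead:

```python
from collections import namedtuple

# Stand-in for pyspark.mllib.regression.LabeledPoint so this sketch
# runs standalone; it exposes the same .label / .features attributes.
Point = namedtuple("Point", ["label", "features"])

def parse(line):
    # Hypothetical input format: "label,f1,f2,..." (all floats).
    values = [float(x) for x in line.split(",")]
    return Point(values[0], values[1:])
```

Whatever the real format is, the key point is that each parsed record exposes a features attribute, which is what event.features accesses in the map above.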
Posted on 2016-04-25 14:30:29
Yes, you can use a model trained on static data. The problem you are experiencing is not related to streaming at all: you simply cannot use a JVM-based model inside an action or a transformation (see How to use Java/Scala function from an action or a transformation? for an explanation of why). Instead, you should apply the predict method to a complete RDD, for example using transform on the DStream:
from pyspark.mllib.tree import RandomForest
from pyspark.mllib.util import MLUtils
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from operator import attrgetter
sc = SparkContext("local[2]", "foo")
ssc = StreamingContext(sc, 1)
data = MLUtils.loadLibSVMFile(sc, 'data/mllib/sample_libsvm_data.txt')
trainingData, testData = data.randomSplit([0.7, 0.3])
model = RandomForest.trainClassifier(
    trainingData, numClasses=2, categoricalFeaturesInfo={}, numTrees=3
)
(ssc
    .queueStream([testData])
    # Extract features
    .map(attrgetter("features"))
    # Predict
    .transform(lambda _, rdd: model.predict(rdd))
    .pprint())
ssc.start()
ssc.awaitTerminationOrTimeout(10)

https://stackoverflow.com/questions/36838024