I am using Spark 2.0.2 to extract association rules from some data, but some of the rules in the result look strange, for example:
[MUJI, ROEM, 西单科技广场] => [Bauhaus], 2.0
The "2.0" is the confidence of the rule. Isn't confidence supposed to be the conditional probability of the consequent given the antecedent, and therefore no greater than 1.0?
Posted on 2017-02-22 01:02:56
Key point: transactions != freqItemsets
Solution: use spark.mllib's FPGrowth instead; it accepts an RDD of transactions and computes the frequent itemsets automatically.
Hi, I figured it out. These strange rules appeared because my input FreqItemset data, freqItemsets, was wrong. Let's get into the details. I use just three original transactions, ("a"), ("a","b","c"), ("a","b","d"), each appearing once.
At first I thought Spark would automatically count the frequencies of all sub-itemsets, and the only thing I had to do was create the freqItemsets like this (as the official example shows):
val freqItemsets = sc.parallelize(Seq(
  new FreqItemset(Array("a"), 1),
  new FreqItemset(Array("a", "b", "d"), 1),
  new FreqItemset(Array("a", "b", "c"), 1)
))

This is why it went wrong: the parameter of AssociationRules is the frequent itemsets with their real counts, not the raw transactions, and I had confused these two definitions.
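This also explains where a confidence like 2.0 can come from. AssociationRules builds a candidate rule X => Y from the frequent itemsets and computes its confidence as

    confidence(X => Y) = freq(X ∪ Y) / freq(X)

With true supports this ratio can never exceed 1.0: every transaction containing X ∪ Y also contains X, so freq(X) >= freq(X ∪ Y). Once the supplied counts violate that property, the ratio can be anything; for instance, if the input says freq(X ∪ Y) = 2 but freq(X) = 1, the rule is reported with confidence 2 / 1 = 2.0.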
Based on these three transactions, the freqItemsets should instead be:
new FreqItemset(Array("a"), 3),//because "a" appears three times in three transactions
new FreqItemset(Array("b"), 2),//"b" appears two times
new FreqItemset(Array("c"), 1),
new FreqItemset(Array("d"), 1),
new FreqItemset(Array("a","b"), 2),// "a" and "b" totally appears two times
new FreqItemset(Array("a","c"), 1),
new FreqItemset(Array("a","d"), 1),
new FreqItemset(Array("b","d"), 1),
new FreqItemset(Array("b","c"), 1)
new FreqItemset(Array("a","b","d"), 1),
new FreqItemset(Array("a", "b","c"), 1)您可以使用以下代码来完成这一统计工作
import org.apache.spark.mllib.fpm.AssociationRules
import org.apache.spark.mllib.fpm.AssociationRules.FreqItemset
import play.api.libs.json.Json // Play JSON, matching the Json.toJson / Json.parse calls below

val transactions = sc.parallelize(Seq(
  Array("a"),
  Array("a", "b", "c"),
  Array("a", "b", "d")
))

val freqItemsets = transactions
  // enumerate every non-empty sub-itemset of each transaction
  .flatMap(arr => (1 to arr.length).flatMap(i => arr.combinations(i)))
  // use the sorted JSON form as a stable key, so ["b","a"] and ["a","b"] count together
  .map(a => (Json.toJson(a.sorted).toString(), 1))
  .reduceByKey(_ + _)
  .map(m => new FreqItemset(Json.parse(m._1).as[Array[String]], m._2.toLong))
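As a quick sanity check, you can print what was computed and compare it with the expected list above (FreqItemset exposes the itemset as items and its count as freq):

// Optional check: prints each itemset with its count, e.g. [a,b]: 2
freqItemsets.collect().foreach { is =>
  println(s"[${is.items.mkString(",")}]: ${is.freq}")
}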
// then use freqItemsets like the example code
val ar = new AssociationRules()
  .setMinConfidence(0.8)
val results = ar.run(freqItemsets)
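To inspect the result, each Rule exposes antecedent, consequent, and confidence; printing them follows the same pattern as the official example. With the corrected freqItemsets, every confidence stays at or below 1.0:

results.collect().foreach { rule =>
  println(s"[${rule.antecedent.mkString(",")}] => " +
    s"[${rule.consequent.mkString(",")}]: ${rule.confidence}")
}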
Simply put, we can use FPGrowth instead of AssociationRules; it accepts an RDD of transactions directly:
import org.apache.spark.mllib.fpm.FPGrowth

val fpg = new FPGrowth()
  .setMinSupport(0.2)
  .setNumPartitions(10)
val model = fpg.run(transactions) // transactions is defined in the previous code

That's it.
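For completeness, the fitted FPGrowthModel exposes both the computed frequent itemsets (model.freqItemsets) and rule generation (model.generateAssociationRules), so none of the manual counting above is needed. A minimal sketch, reusing the 0.8 minimum confidence from before:

// Frequent itemsets found by FP-Growth
model.freqItemsets.collect().foreach { is =>
  println(s"[${is.items.mkString(",")}]: ${is.freq}")
}

// Rules derived from those itemsets
model.generateAssociationRules(0.8).collect().foreach { rule =>
  println(s"[${rule.antecedent.mkString(",")}] => " +
    s"[${rule.consequent.mkString(",")}]: ${rule.confidence}")
}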
Source: https://stackoverflow.com/questions/42335003