I'm a beginner in Scala, and I have a DataFrame that looks like this (abbreviated):
root
|-- contigName: string (nullable = true)
|-- start: long (nullable = true)
|-- end: long (nullable = true)
|-- names: array (nullable = true)
| |-- element: string (containsNull = true)
|-- referenceAllele: string (nullable = true)
|-- alternateAlleles: array (nullable = true)
| |-- element: string (containsNull = true)

I'm simply trying to groupBy the names column:
display(dataframe.groupBy("names"))
A very simple operation, but I get:
notebook:1: error: overloaded method value display with alternatives:
[A](data: Seq[A])(implicit evidence$1: reflect.runtime.universe.TypeTag[A])Unit <and>
(dataset: org.apache.spark.sql.Dataset[_],streamName: String,trigger: org.apache.spark.sql.streaming.Trigger,checkpointLocation: String)Unit <and>
(model: org.apache.spark.ml.classification.DecisionTreeClassificationModel)Unit <and>
(model: org.apache.spark.ml.regression.DecisionTreeRegressionModel)Unit <and>
(model: org.apache.spark.ml.clustering.KMeansModel)Unit <and>
(model: org.apache.spark.mllib.clustering.KMeansModel)Unit <and>
(documentable: com.databricks.dbutils_v1.WithHelpMethods)Unit
cannot be applied to (org.apache.spark.sql.RelationalGroupedDataset)
display(dataframe.groupBy("names"))

How can I display this grouped data?
Some of the solutions I've seen posted are very complicated; I don't think this is a duplicate, and what I want is very simple.
Posted on 2019-09-19 03:16:33
groupBy returns a RelationalGroupedDataset, not a DataFrame, which is why display can't accept it. You need to apply an aggregate function to turn it back into a DataFrame, e.g. count() or agg(): dataframe.groupBy("names").count() or dataframe.groupBy("names").agg(max("end")).
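As an illustration, here is a minimal self-contained sketch; the sample data and the local SparkSession setup are assumptions for demonstration, not from the question, but the column names follow the schema above:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{count, max}

object GroupByExample extends App {
  // Hypothetical local session for demonstration purposes.
  val spark = SparkSession.builder()
    .appName("groupBy-example")
    .master("local[*]")
    .getOrCreate()
  import spark.implicits._

  // Made-up sample rows matching the question's schema (names, start, end).
  val dataframe = Seq(
    (Seq("rs1"), 100L, 200L),
    (Seq("rs1"), 300L, 400L),
    (Seq("rs2"), 150L, 250L)
  ).toDF("names", "start", "end")

  // groupBy alone yields a RelationalGroupedDataset; agg() turns it back
  // into a DataFrame that display()/show() can render.
  val grouped = dataframe
    .groupBy("names")
    .agg(count("*").as("n"), max("end").as("maxEnd"))

  grouped.show()
  spark.stop()
}
```

In a Databricks notebook you would pass `grouped` to display() instead of calling show().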
If you need to group by each individual name, you can explode the "names" array before the groupBy:
dataframe
.withColumn("name", explode(col("names")))
.drop("names")
.groupBy("name")
  .count() // or other aggregate functions inside agg()

https://stackoverflow.com/questions/57998206