I'm very new to Scala and PySpark, and I have to convert this piece of code written in Scala to PySpark. Can someone help me understand the Scala syntax so I can convert it?
val df= spark.read.parquet(s"$basePath/dod_m/")
.select(df2.map(x => col(x._1).as(x._2)).toList :_*)
Posted on 2022-10-22 07:25:26
Most likely, df2 here is a plain Scala collection.
If it were a DataFrame, df2.map(x => col(x._1).as(x._2)) would fail with error: value _1 is not a member of org.apache.spark.sql.Row. Indeed, the map function on a DataFrame gives you Row objects to work with, not tuples (the PySpark sketch below illustrates the same distinction).
If it were a Dataset of (String, String), df2.map(x => col(x._1).as(x._2)) would instead produce error: Unable to find encoder for type org.apache.spark.sql.Column. And if you defined such an encoder, you would get the fairly clear error: value toList is not a member of org.apache.spark.sql.Dataset[org.apache.spark.sql.Column].
RDDs do not have a toList method either.
So let's treat df2 as a Scala collection of (String, String) pairs. df2.map(x => col(x._1).as(x._2)).toList is then about renaming columns: the old name is the first element of each tuple, and the new name is the second.
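Since the end goal is PySpark, note that the same distinction carries over there: mapping over a DataFrame goes through Row objects, while a plain Python list of pairs plays the role of the Scala collection. A minimal sketch with illustrative data:
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()

# Mapping over a DataFrame hands you Row objects, accessed by field name:
df = spark.createDataFrame([(1, 2), (4, 5)], ['a', 'c'])
df.rdd.map(lambda row: (row['a'], row['c'])).collect()  # [(1, 2), (4, 5)]

# A plain Python list of (old, new) pairs is the analogue of the Scala collection df2:
df2 = [('a', 'b'), ('c', 'd')]
[old for old, new in df2]  # ['a', 'c']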
An example in Scala:
val df2 = Seq(("a", "b"), ("c", "d"))
val df = Seq((1, 2), (4, 5)).toDF("a", "c")
// running this in a shell, we see that it is about renaming columns
df2.map(x => col(x._1).as(x._2)).toList
//res2: List[org.apache.spark.sql.Column] = List(a AS b, c AS d)
Let's try it:
df.show
+---+---+
| a| c|
+---+---+
| 1| 2|
| 4| 5|
+---+---+
df.select(df2.map(x => col(x._1).as(x._2)).toList :_*).show
+---+---+
| b| d|
+---+---+
| 1| 2|
| 4| 5|
+---+---+
In Python:
df2 = [("a", "b"), ("c", "d")]
df = spark.createDataFrame([(1, 2), (4, 5)], ['a', 'c'])
import pyspark.sql.functions as f
df.select([f.col(x[0]).alias(x[1]) for x in df2]).show()
+---+---+
| b| d|
+---+---+
| 1| 2|
| 4| 5|
+---+---+
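Putting it together, the original snippet would become something like the following in PySpark. This is only a sketch: base_path and the contents of df2 are placeholders here, since both are defined elsewhere in the original code.
from pyspark.sql import SparkSession
import pyspark.sql.functions as f

spark = SparkSession.builder.getOrCreate()

base_path = "/some/base/path"             # placeholder for the original $basePath
df2 = [("old_name", "new_name")]          # placeholder list of (old, new) column pairs

df = (spark.read.parquet(base_path + "/dod_m/")
      .select([f.col(old).alias(new) for old, new in df2]))
Note that select accepts a Python list directly, so there is no need for Scala's :_* splat; *-unpacking (df.select(*cols)) works as well.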
https://stackoverflow.com/questions/74155555