The Spark Thrift server tries to load the full dataset into memory before transferring it over JDBC. On the JDBC client I get the error:

org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of 48 tasks (XX GB) is bigger than spark.driver.maxResultSize
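One common workaround for this error is to raise spark.driver.maxResultSize (and driver memory to match) when starting the Thrift server. A hedged sketch; the sizes below are placeholders, and setting the limit to 0 disables the check entirely:

```shell
# Start the Thrift server with a larger cap on collected result size.
# 8g / 12g are placeholder values - tune them to your driver's actual memory.
./sbin/start-thriftserver.sh \
  --conf spark.driver.maxResultSize=8g \
  --conf spark.driver.memory=12g
```

Note that the driver still materializes the whole result set, so this only moves the ceiling; it does not make the transfer streaming.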
I am running a local Spark 2.4.0 instance:

import org.apache.spark.sql.hive.HiveContext
val hc = new HiveContext(sc)  // sc is an existing SparkContext

Looking at the HiveContext.sql() code, I see it is now just a wrapper around SparkSession.sql(). The suggestion is to build the session with enableHiveSupport() instead.
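Since HiveContext is deprecated in Spark 2.x, the recommended entry point is a Hive-enabled SparkSession. A minimal sketch of that route (the app name is a placeholder):

```scala
import org.apache.spark.sql.SparkSession

// Build a Hive-enabled session instead of the deprecated HiveContext.
val spark = SparkSession.builder()
  .appName("thrift-debug")   // placeholder name
  .master("local[*]")        // local instance, as in the question
  .enableHiveSupport()
  .getOrCreate()

// spark.sql() is what HiveContext.sql() delegates to in 2.x.
val df = spark.sql("SHOW TABLES")
df.show()
```

This gives the same Hive metastore access as HiveContext did, through the single SparkSession API.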