I'm trying to read a table from SQL Server and apply partitioning while reading it. Before reading the data, I want to obtain the lowerBound and upperBound, along these lines (the snippet got cut off):

    boundsDF = spark.read.format('jdbc') \
        .option('driver', …
    [x for x in boundsDF.rdd.collect()]
    mindate = …
When I run the Spark application for the table sync, the error message looks like this:

    at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:219)
    at org.apache.spark.sql.execution.datasources.jdbc…
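One thing the stack trace does show: Spark is going through `com.mysql.cj.jdbc.NonRegisteringDriver` even though the source is SQL Server, which suggests the wrong JDBC driver class is being resolved for the connection. Below is a minimal sketch of the intended two-step pattern (compute the bounds first, then do a partitioned read) with the SQL Server driver pinned explicitly via the `driver` option. The URL, table, and column names are placeholders, and the `partitioned_read_options` helper is hypothetical, not part of any Spark API:

```python
# Sketch, assuming placeholder names: build the option map for a partitioned
# JDBC read from SQL Server, pinning the driver class explicitly so Spark does
# not fall back to another registered driver (e.g. com.mysql.cj.jdbc.Driver).

def partitioned_read_options(url, table, column, lower, upper, num_partitions):
    """Return the options for spark.read.format('jdbc') with partitioning."""
    return {
        "url": url,                    # e.g. jdbc:sqlserver://host:1433;databaseName=mydb
        "dbtable": table,
        "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver",
        "partitionColumn": column,     # must be a numeric, date, or timestamp column
        "lowerBound": str(lower),
        "upperBound": str(upper),
        "numPartitions": str(num_partitions),
    }

# Intended use (requires a SparkSession and the mssql-jdbc jar on the classpath):
#
#   bounds = (spark.read.format("jdbc")
#             .option("url", url)
#             .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
#             .option("query", "SELECT MIN(id) AS lo, MAX(id) AS hi FROM dbo.events")
#             .load()
#             .first())
#   df = (spark.read.format("jdbc")
#         .options(**partitioned_read_options(
#             url, "dbo.events", "id", bounds["lo"], bounds["hi"], 8))
#         .load())
```

Note that `lowerBound`/`upperBound` only control how the partition ranges are split across tasks; they do not filter rows, so rows outside the bounds still land in the first and last partitions.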