I set mode('overwrite') on the saveAsTable() call to 'spark_no_bucket_table1', and it failed with:

pyspark.sql.utils.AnalysisException: Can not create the managed table('`spark_no_bucket_table1`'). ...
I then created a bucketed table:

spark.sql("CREATE TABLE sample_bucket(n INT, v INT) PARTITIONED BY (c STRING) CLUSTERED BY (n) INTO 3 BUCKETS")

and tried to insert the data from the DataFrame df into the sample_bucket table:

spark.sql("INSERT OVERWRITE TABLE sample_bucket PARTITION(c) SELECT n, v, c FROM df")

which fails with:

`sample_bucket` is bucketed but S...