
Apache Sedona (Geospark) SQL with Java: ClassNotFoundException

Stack Overflow user
Asked on 2021-01-13 13:56:04
1 answer · 630 views · 0 followers · 1 vote

I am using the latest snapshot of Apache Sedona (1.3.2-SNAPSHOT) with Apache Spark 3.0.1 on a Docker cluster to do some geospatial work.

While trying the first example from the tutorial section (http://sedona.apache.org/tutorial/sql/), I get a NoClassDefFoundError caused by a ClassNotFoundException:

    SparkSession sparkSession = SparkSession.builder()
            .appName("de.oth.GeosparkDemoApplication")
            .master("local")
            .config("spark.serializer", KryoSerializer.class.getName())
            .config("spark.kryo.registrator", GeoSparkVizKryoRegistrator.class.getName())
            .getOrCreate();

    GeoSparkSQLRegistrator.registerAll(sparkSession);
    Dataset<Row> rawDf = sparkSession
            .read()
            .format("csv")
            .option("delimiter", "\t")
            .option("header", "false")
            .load("/spark-apps/usa-county.tsv");
    rawDf.createOrReplaceTempView("rawdf");
    rawDf.show();

    Dataset<Row> spatialDf = sparkSession.sql(
            "SELECT ST_GeomFromWKT(rawdf._c0) AS countyshape, rawdf._c1, rawdf._c2 FROM rawdf");
    spatialDf.createOrReplaceTempView("spatialdf");
    spatialDf.show();

    spatialDf.printSchema();

Error:

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/sql/catalyst/expressions/codegen/CodegenFallback$class
at org.apache.spark.sql.geosparksql.expressions.ST_GeomFromWKT.<init>(Constructors.scala:118)
at org.apache.spark.sql.geosparksql.expressions.ST_GeomFromWKT$.apply(Constructors.scala:117)
at org.apache.spark.sql.geosparksql.expressions.ST_GeomFromWKT$.apply(Constructors.scala:117)
at org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry.lookupFunction(FunctionRegistry.scala:121)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.lookupFunction(SessionCatalog.scala:1439)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveFunctions$$anonfun$apply$16$$anonfun$applyOrElse$102.$anonfun$applyOrElse$105(Analyzer.scala:1944)
at org.apache.spark.sql.catalyst.analysis.package$.withPosition(package.scala:53)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveFunctions$$anonfun$apply$16$$anonfun$applyOrElse$102.applyOrElse(Analyzer.scala:1944)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveFunctions$$anonfun$apply$16$$anonfun$applyOrElse$102.applyOrElse(Analyzer.scala:1927)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$1(TreeNode.scala:309)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:72)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:309)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDown$3(TreeNode.scala:314)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:399)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:237)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:397)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:350)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:314)
at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$transformExpressionsDown$1(QueryPlan.scala:96)
at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$1(QueryPlan.scala:118)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:72)
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpression$1(QueryPlan.scala:118)
at org.apache.spark.sql.catalyst.plans.QueryPlan.recursiveTransform$1(QueryPlan.scala:129)
at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$3(QueryPlan.scala:134)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
at scala.collection.immutable.List.foreach(List.scala:392)
at scala.collection.TraversableLike.map(TraversableLike.scala:238)
at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
at scala.collection.immutable.List.map(List.scala:298)
at org.apache.spark.sql.catalyst.plans.QueryPlan.recursiveTransform$1(QueryPlan.scala:134)
at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$4(QueryPlan.scala:139)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:237)
at org.apache.spark.sql.catalyst.plans.QueryPlan.mapExpressions(QueryPlan.scala:139)
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsDown(QueryPlan.scala:96)
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressions(QueryPlan.scala:87)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveFunctions$$anonfun$apply$16.applyOrElse(Analyzer.scala:1927)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveFunctions$$anonfun$apply$16.applyOrElse(Analyzer.scala:1925)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUp$3(AnalysisHelper.scala:90)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:72)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUp$1(AnalysisHelper.scala:90)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUp(AnalysisHelper.scala:86)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUp$(AnalysisHelper.scala:84)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUp(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveFunctions$.apply(Analyzer.scala:1925)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveFunctions$.apply(Analyzer.scala:1923)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:149)
at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
at scala.collection.immutable.List.foldLeft(List.scala:89)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:146)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:138)
at scala.collection.immutable.List.foreach(List.scala:392)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:138)
at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:176)
at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:170)
at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:130)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:116)
at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:88)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:116)
at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:154)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)
at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:153)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:68)
at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:133)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:133)
at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:68)
at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:66)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:58)
at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:607)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:602)
at de.oth.GeosparkDemoApplication.main(GeosparkDemoApplication.java:34)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:928)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.sql.catalyst.expressions.codegen.CodegenFallback$class
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 90 more

Executing the SELECT without the ST_GeomFromWKT function, i.e. using a plain SQL statement, works fine. The WKT file is well-formed and displays correctly. The build.gradle does contain all the necessary dependencies for Spark and GeoSpark.

Dependencies declared in build.gradle:

    dependencies {
        testCompile group: 'junit', name: 'junit', version: '4.12'
        compile group: 'org.json', name: 'json', version: '20200518'
        compile group: 'com.fasterxml.jackson.core', name: 'jackson-databind', version: '2.12.0-rc1'
        compile group: 'com.fasterxml.jackson.core', name: 'jackson-databind', version: '2.0.1'
        compile group: 'com.fasterxml.jackson.dataformat', name: 'jackson-dataformat-csv', version: '2.11.3'
        compile group: 'org.apache.spark', name: 'spark-core_2.12', version: '3.0.1'
        compile group: 'org.apache.spark', name: 'spark-sql_2.12', version: '3.0.1'
        compile group: 'org.apache.spark', name: 'spark-streaming_2.12', version: '3.0.1'
        compile group: 'org.apache.spark', name: 'spark-streaming-kafka-0-10_2.12', version: '3.0.1'
        compile group: 'org.apache.logging.log4j', name: 'log4j-api', version: '2.7'
        compile group: 'org.apache.logging.log4j', name: 'log4j-core', version: '2.7'
        compile group: 'org.apache.spark', name: 'spark-sql-kafka-0-10_2.12', version: '3.0.1'
        compile('org.apache.logging.log4j:log4j-slf4j-impl:2.7')
        compile group: 'org.apache.kafka', name: 'kafka-clients', version: '2.5.0'
        compile group: 'org.datasyslab', name: 'geospark', version: '1.3.2-SNAPSHOT'
        compile group: 'org.datasyslab', name: 'geospark-sql_2.3', version: '1.3.2-SNAPSHOT'
        compile group: 'org.datasyslab', name: 'geospark-viz_2.3', version: '1.3.2-SNAPSHOT'
    }

1 Answer

Stack Overflow user

Accepted answer

Posted on 2021-05-31 12:11:58

GeoSpark has moved to Apache Sedona. The old org.datasyslab artifacts were built against Spark 2.x / Scala 2.11 (the missing CodegenFallback$class is the Scala 2.11 trait-implementation class, which no longer exists under Scala 2.12), so they are binary-incompatible with Spark 3.0.1. Import the dependencies matching your Spark version instead, as follows:

<dependency>
  <groupId>org.apache.sedona</groupId>
  <artifactId>sedona-python-adapter-3.0_2.12</artifactId>
  <version>1.0.1-incubating</version>
</dependency>
<dependency>
  <groupId>org.apache.sedona</groupId>
  <artifactId>sedona-viz-3.0_2.12</artifactId>
  <version>1.0.1-incubating</version>
</dependency> 


<!-- https://mvnrepository.com/artifact/org.datasyslab/geotools-wrapper -->
<dependency>
    <groupId>org.datasyslab</groupId>
    <artifactId>geotools-wrapper</artifactId>
    <version>geotools-24.1</version>
</dependency>
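Since the question uses Gradle rather than Maven, the coordinates above translate to roughly the following Gradle declarations. This is a sketch: the group, artifact, and version strings are taken verbatim from the Maven snippet above, but the configuration names assume a modern Gradle Java project, so verify against the Sedona download page.

```groovy
dependencies {
    // Sedona core + SQL for Spark 3.0 / Scala 2.12 (the python-adapter jar bundles both)
    implementation group: 'org.apache.sedona', name: 'sedona-python-adapter-3.0_2.12', version: '1.0.1-incubating'
    implementation group: 'org.apache.sedona', name: 'sedona-viz-3.0_2.12', version: '1.0.1-incubating'
    // GeoTools wrapper needed by Sedona for CRS transforms and shapefile reading
    implementation group: 'org.datasyslab', name: 'geotools-wrapper', version: 'geotools-24.1'
}
```

The three org.datasyslab geospark* dependencies from the question's build.gradle should be removed at the same time to avoid classpath conflicts, and the Java code updated to the renamed registrator classes (GeoSparkSQLRegistrator became SedonaSQLRegistrator in org.apache.sedona.sql.utils, and GeoSparkVizKryoRegistrator became SedonaVizKryoRegistrator; check the Sedona tutorial for the exact package names).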

Source: https://sedona.apache.org/download/maven-coordinates/

Votes: 1
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/65703387
