
Got an error testing Spark Scala with Maven: java.lang.NoClassDefFoundError

Stack Overflow user
Asked on 2018-08-17 13:50:09
1 answer · 934 views · 0 followers · 0 votes

I am trying to test Spark with Scala using Maven in Scala IDE (Eclipse), but I keep getting this error:

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream
    at org.apache.spark.SparkConf.loadFromSystemProperties(SparkConf.scala:73)
    at org.apache.spark.SparkConf.<init>(SparkConf.scala:68)
    at org.apache.spark.SparkConf.<init>(SparkConf.scala:55)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:904)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:901)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901)
    at com.SimpleApp$.main(SimpleApp.scala:7)
    at com.SimpleApp.main(SimpleApp.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FSDataInputStream
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 9 more

The program I am trying to run is the quick start example from the Spark documentation:

import org.apache.spark.sql.SparkSession

object SimpleApp {

  def main(args: Array[String]) {
    val logFile = "YOUR_SPARK_HOME/README.md" // Should be some file on your system
    val spark = SparkSession.builder.appName("Simple Application").getOrCreate()
    val logData = spark.read.textFile(logFile).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println(s"Lines with a: $numAs, Lines with b: $numBs")
    spark.stop()
  }
}

I am using Spark 2.2.0 and Scala 2.11.7. My pom.xml contains:

<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>2.2.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-sql -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.2.0</version>
</dependency>

I followed the solution from another thread: NoClassDefFoundError com.apache.hadoop.fs.FSDataInputStream when execute spark-shell

but it did not work for me. What I have in my spark-env.sh file is:

# If 'hadoop' binary is on your PATH
export SPARK_DIST_CLASSPATH=$(hadoop classpath)

# With explicit path to 'hadoop' binary
export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath)

# Passing a Hadoop configuration directory
export SPARK_DIST_CLASSPATH=$(hadoop --config /usr/local/hadoop/etc/hadoop classpath)
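An aside worth checking (not from the original post): spark-env.sh is only sourced by Spark's launch scripts, such as spark-submit and spark-shell, so a program launched directly from Eclipse never picks it up. A minimal sketch of how to confirm this from inside the application:

// Minimal sketch: check whether SPARK_DIST_CLASSPATH is visible to this JVM.
// When launching from an IDE rather than via spark-submit, it usually is not,
// which would explain why the spark-env.sh fix has no effect here.
println(sys.env.getOrElse("SPARK_DIST_CLASSPATH", "<not set in this JVM>"))

If it prints "<not set in this JVM>", the Hadoop jars have to come from the Maven classpath instead.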

Can anybody help me with this? Thank you for your help.

Update: Devesh's answer solved part of my problem. However, I have another issue:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
18/08/17 10:34:03 INFO SparkContext: Running Spark version 2.2.0
18/08/17 10:34:03 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/08/17 10:34:03 WARN Utils: Your hostname, toshiba0 resolves to a loopback address: 127.0.1.1; using 192.168.1.217 instead (on interface wlp2s0)
18/08/17 10:34:03 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
18/08/17 10:34:03 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: A master URL must be set in your configuration
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:376)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2509)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:909)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:901)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901)
    at com.SimpleApp$.main(SimpleApp.scala:11)
    at com.SimpleApp.main(SimpleApp.scala)
18/08/17 10:34:03 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" org.apache.spark.SparkException: A master URL must be set in your configuration
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:376)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2509)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:909)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:901)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901)
    at com.SimpleApp$.main(SimpleApp.scala:11)
    at com.SimpleApp.main(SimpleApp.scala)

I do not know why Spark says my loopback address is 127.0.1.1. I checked my configuration in /etc/network/interfaces: it is set to auto loopback, and pinging 127.0.0.1 works.
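The loopback warning itself is usually harmless for local runs; on many Debian/Ubuntu systems 127.0.1.1 comes from the hostname entry in /etc/hosts, and the log above already suggests setting SPARK_LOCAL_IP to override it. To pin the bind address in code instead, one option is spark.driver.bindAddress (a sketch; that property exists from Spark 2.1 onwards):

// Minimal sketch: bind the driver to the loopback address explicitly.
// "spark.driver.bindAddress" is available from Spark 2.1 onwards.
val spark = SparkSession.builder
  .appName("Simple Application")
  .master("local[2]")
  .config("spark.driver.bindAddress", "127.0.0.1")
  .getOrCreate()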

I followed the solution from this link: Error initializing SparkContext: A master URL must be set in your configuration

and added the following code, since I am running on a laptop. But it still does not work.

val conf = new SparkConf().setMaster("local[2]")

I do not know what is wrong with my settings. Thanks!
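A note on the snippet above (an aside, not from the original thread): a SparkConf created on its own is never consulted; it has to be handed to the session builder, or the master set on the builder directly, for it to take effect. A minimal sketch, assuming Spark 2.x:

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// Minimal sketch: the SparkConf only takes effect if it is passed to
// the builder; creating it on its own is not enough.
val conf = new SparkConf().setMaster("local[2]")
val spark = SparkSession.builder
  .appName("Simple Application")
  .config(conf) // without this, the conf above is never used
  .getOrCreate()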


1 answer

Stack Overflow user

Answered on 2018-08-17 15:54:58

Just add the following to your Maven pom.xml file:

<!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-client -->
<dependency>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-client</artifactId>
     <version>2.7.0</version>
</dependency>
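After re-importing the Maven project, the missing class should resolve. As a one-line sanity check (a sketch, not part of the original answer):

// Minimal sketch: this only compiles and runs when hadoop-client (which
// pulls in hadoop-common) is on the classpath, i.e. exactly the class
// the NoClassDefFoundError above was complaining about.
import org.apache.hadoop.fs.FSDataInputStream
println(classOf[FSDataInputStream].getName)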

In previous versions of Spark you had to create a SparkConf and SparkContext to interact with Spark, whereas in Spark 2.0 onwards the same can be achieved through SparkSession, without explicitly creating SparkConf, SparkContext, or SQLContext, since they are encapsulated within SparkSession.

Sample code snippet:

import org.apache.spark.sql.SparkSession

object SimpleApp {

  def main(args: Array[String]) {
    val logFile = "YOUR_SPARK_HOME/README.md" // some file on the system
    // Setting the master on the builder makes the app runnable directly
    // from the IDE; when submitting via spark-submit, the master is
    // usually supplied on the command line instead.
    val spark = SparkSession
      .builder
      .appName("Simple Application")
      .master("local[2]")
      .getOrCreate()
    val logData = spark.read.textFile(logFile).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println(s"Lines with a: $numAs, Lines with b: $numBs")
    spark.stop() // release resources when done
  }
}
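To make the encapsulation point above concrete (a sketch, not from the original answer, reusing the spark session built in the snippet): the older entry points are still reachable as fields on the session.

// Minimal sketch: SparkSession wraps the older entry points.
val sc = spark.sparkContext       // the underlying SparkContext
val sqlCtx = spark.sqlContext     // the SQLContext, kept for backwards compatibility
println(sc.master)                // "local[2]" for the session built above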
0 votes
The original content of this page was provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/51889115
