
org.apache.spark.sql.AnalysisException: Table or view not found: `traintext`.`train`; line 1 pos 14;


Congratulations, friend, you've run into the same problem I did. Here's how I solved it.

The problem:

org.apache.spark.sql.AnalysisException: Table or view not found: `traintext`.`train`; line 1 pos 14;
'Project [*]
+- 'UnresolvedRelation `traintext`.`train`

	at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:71)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:67)
	at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:128)
	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:127)
	at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:127)
	at scala.collection.immutable.List.foreach(List.scala:381)
	at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:127)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:67)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:57)
	at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:48)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:63)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
	at com.iflytek.test.ReadData$.main(ReadData.scala:24)
	at com.iflytek.test.ReadData.main(ReadData.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
	at org.apache.oozie.action.hadoop.SparkMain.runSpark(SparkMain.java:372)
	at org.apache.oozie.action.hadoop.SparkMain.run(SparkMain.java:282)
	at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:64)
	at org.apache.oozie.action.hadoop.SparkMain.main(SparkMain.java:82)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:234)
	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:459)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
log4j:WARN No appenders could be found for logger (org.apache.spark.SparkContext).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

The 'UnresolvedRelation' in the plan above means Spark's analyzer could not find `traintext`.`train` in the catalog it is using: either the query names the wrong table, or Spark is not connected to the Hive metastore at all.

Solution:

1. First, check your own code and rule out a mistake in the query itself:

import org.apache.spark.sql.SparkSession

object ReadData {
  def main(args: Array[String]): Unit = {

    // Database and table names come in as command-line arguments.
    val database = args(0)
    val table = args(1)

    // enableHiveSupport() is required; without it Spark uses its own
    // catalog and never sees tables defined in Hive.
    val spark = SparkSession
      .builder()
      .appName("spark sql example")
      .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
      .enableHiveSupport()
      .getOrCreate()

    val sql = s"select * from $database.$table"
    val data = spark.sql(sql)

    data.show()
  }
}
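If the code looks correct, it also helps to ask Spark's catalog directly whether it can see the table before querying it. A minimal sketch, assuming Spark 2.x and reusing the spark, database and table values from the code above:

    // Ask the catalog whether it can actually resolve the table.
    if (!spark.catalog.tableExists(database, table)) {
      println(s"Catalog cannot resolve $database.$table")
      // Listing databases shows what Spark can see; if only "default"
      // appears, Spark is not reading your Hive configuration (see step 2).
      spark.catalog.listDatabases().show(false)
    }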
2. Check whether your project actually contains hive-site.xml (this is the key step, and it was exactly my mistake).
So where do you find that file?
On one of the cluster nodes, search for it: find / -name hive-site.xml
Once you have found it, copy it into your project's resources directory (e.g. src/main/resources). When you package the project, the file lands at the root of the jar, and the hive-site.xml at the classpath root is picked up automatically.
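For reference, the piece of hive-site.xml that matters most here is the metastore address. Below is a minimal illustrative example; the thrift host is a placeholder, so keep your cluster's actual file rather than copying this:

    <configuration>
      <property>
        <!-- Location of the Hive metastore service.
             Placeholder host; 9083 is the usual default port. -->
        <name>hive.metastore.uris</name>
        <value>thrift://your-metastore-host:9083</value>
      </property>
    </configuration>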
Why is this file needed? For Spark to query data stored in Hive, it has to know how to reach the Hive metastore, and hive-site.xml carries that information. Without it, Spark falls back to a local embedded metastore that contains none of your Hive tables, which is exactly why the table cannot be found.
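To confirm from inside the job that Hive support actually took effect, a small sketch along these lines can help (assuming Spark 2.x, where this setting is exposed through the session conf):

    // Prints "hive" when the Hive catalog is in use, "in-memory" when
    // the hive-site.xml / enableHiveSupport() wiring did not take effect.
    println(spark.conf.get("spark.sql.catalogImplementation"))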