
Reading the Hive v1.2.1 Source Code

Author: sparkexpert
Published 2022-05-07 13:44:07 (originally posted 2015-10-14 on the author's personal blog)
Column: 大数据智能实战 (Big Data Intelligence in Practice)

While running Spark SQL on Hive, every attempt to access MySQL failed; the log invariably showed:

15/09/21 11:12:20 INFO MetaStoreDirectSql: MySQL check failed, assuming we are not on mysql: Lexical error at line 1, column 5.   Encountered: "@" (64), after : "".

15/09/21 11:12:20 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.

15/09/21 11:12:20 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.

15/09/21 11:12:21 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.

15/09/21 11:12:21 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.

15/09/21 11:12:21 INFO Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing

15/09/21 11:12:21 INFO ObjectStore: Initialized ObjectStore

15/09/21 11:12:21 INFO HiveMetaStore: Added admin role in metastore

15/09/21 11:12:21 INFO HiveMetaStore: Added public role in metastore

15/09/21 11:12:21 INFO HiveMetaStore: No user is added in admin role, since config is empty

15/09/21 11:12:21 INFO SessionState: No Tez session required at this point. hive.execution.engine=mr.

15/09/21 11:12:21 INFO PerfLogger:

15/09/21 11:12:21 INFO PerfLogger:

15/09/21 11:12:21 INFO Driver: Concurrency mode is disabled, not creating a lock manager

15/09/21 11:12:21 INFO PerfLogger:

15/09/21 11:12:21 INFO PerfLogger:

15/09/21 11:12:21 INFO ParseDriver: Parsing command: CREATE TABLE IF NOT EXISTS src (key INT, value STRING)

15/09/21 11:12:21 INFO ParseDriver: Parse Completed

15/09/21 11:12:21 INFO PerfLogger:

15/09/21 11:12:21 INFO PerfLogger:

15/09/21 11:12:21 INFO SemanticAnalyzer: Starting Semantic Analysis

15/09/21 11:12:21 INFO SemanticAnalyzer: Creating table src position=27

15/09/21 11:12:21 INFO HiveMetaStore: 0: get_table : db=default tbl=src

15/09/21 11:12:21 INFO audit: ugi=ndscbigdata ip=unknown-ip-addr cmd=get_table : db=default tbl=src

15/09/21 11:12:21 INFO HiveMetaStore: 0: get_database: default

15/09/21 11:12:21 INFO audit: ugi=ndscbigdata ip=unknown-ip-addr cmd=get_database: default

15/09/21 11:12:21 INFO Driver: Semantic Analysis Completed

15/09/21 11:12:21 INFO PerfLogger:

15/09/21 11:12:21 INFO Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)

15/09/21 11:12:21 INFO PerfLogger:

15/09/21 11:12:21 INFO PerfLogger:

15/09/21 11:12:21 INFO Driver: Starting command: CREATE TABLE IF NOT EXISTS src (key INT, value STRING)

15/09/21 11:12:21 INFO PerfLogger:

15/09/21 11:12:21 INFO PerfLogger:

15/09/21 11:12:21 INFO PerfLogger:

15/09/21 11:12:21 INFO DDLTask: Default to LazySimpleSerDe for table src

15/09/21 11:12:21 INFO HiveMetaStore: 0: create_table: Table(tableName:src, dbName:default, owner:ndscbigdata, createTime:1442805141, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:key, type:int, comment:null), FieldSchema(name:value, type:string, comment:null)], location:null, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[], parameters:{}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE)

15/09/21 11:12:21 INFO audit: ugi=ndscbigdata ip=unknown-ip-addr cmd=create_table: Table(tableName:src, dbName:default, owner:ndscbigdata, createTime:1442805141, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:key, type:int, comment:null), FieldSchema(name:value, type:string, comment:null)], location:null, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), partitionKeys:[], parameters:{}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE)

15/09/21 11:12:21 ERROR RetryingHMSHandler: MetaException(message:file:/user/hive/warehouse/src is not a directory or unable to create one)

at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1239)

at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1294)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:497)

at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)

at com.sun.proxy.$Proxy21.create_table_with_environment_context(Unknown Source)

at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:558)

at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:547)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:497)

at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)

at com.sun.proxy.$Proxy22.createTable(Unknown Source)

at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:613)

at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4189)

at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:281)

at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)

at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)

at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1503)

at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1270)

at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1088)

at org.apache.hadoop.hive.ql.Driver.run(Driver.java:911)

at org.apache.hadoop.hive.ql.Driver.run(Driver.java:901)

at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:329)

at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:310)

at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:139)

at org.apache.spark.sql.hive.client.ClientWrapper.runHive(ClientWrapper.scala:310)

at org.apache.spark.sql.hive.client.ClientWrapper.runSqlHive(ClientWrapper.scala:300)

at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:472)

at org.apache.spark.sql.hive.execution.HiveNativeCommand.run(HiveNativeCommand.scala:33)

at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)

at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)

at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)

at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)

at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)

at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148)

at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)

at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:939)

at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:939)

at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:144)

at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:128)

at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)

at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:744)

at org.apache.spark.examples.sql.hive.HiveFromSpark$.main(HiveFromSpark.scala:50)

at org.apache.spark.examples.sql.hive.HiveFromSpark.main(HiveFromSpark.scala)
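
Two things stand out in this log. First, the early line "MySQL check failed, assuming we are not on mysql: Lexical error ... Encountered: @" comes from MetaStoreDirectSql probing whether the backing store understands MySQL's @@session syntax; the probe failing strongly suggests the metastore was not actually talking to MySQL at all (most likely hive-site.xml was not on Spark's classpath, so Hive fell back to an embedded Derby store). Second, the real failure is the MetaException: with no warehouse configured, hive.metastore.warehouse.dir defaults to the local path file:/user/hive/warehouse, which the process could not create. The bottom frames of the trace show this is Spark's bundled HiveFromSpark example issuing the CREATE TABLE through a HiveContext; a minimal sketch of that path, with the warehouse directory set explicitly, looks roughly like this (the HDFS URI is a placeholder for illustration, not taken from the original setup):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object HiveFromSparkSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("HiveFromSparkSketch"))
    val hiveContext = new HiveContext(sc)

    // Without a hive-site.xml on the classpath, the warehouse defaults to the
    // local path file:/user/hive/warehouse, which produced the MetaException
    // above. hdfs://namenode:9000 is a placeholder, not from the original post.
    hiveContext.setConf("hive.metastore.warehouse.dir",
      "hdfs://namenode:9000/user/hive/warehouse")

    // The same DDL that appears in the log above.
    hiveContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")

    sc.stop()
  }
}

Alternatively, keeping the default local path but pre-creating /user/hive/warehouse with write permission for the submitting user should make the same error go away; the sketch only illustrates where the failing configuration lives.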

Since online material about this problem is scarce, and following what little there is never led anywhere, I decided to compile the Hive source myself and trace the problem directly. It took half a day, but I finally got it working.

Build environment: Eclipse
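
For reference, Hive 1.2.1 is a Maven build. Per the Hive 1.x developer documentation (the exact flags here are my assumption, not from the original post), something like the following from the source root compiles the tree and generates the project files needed to import the modules into Eclipse for debugging:

mvn clean install -DskipTests -Phadoop-2
mvn eclipse:eclipse -DdownloadSources -DdownloadJavadocs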
