
Error importing data from CSV into SnappyData in Java

Stack Overflow user
Asked on 2018-08-06 18:14:38
2 answers · 90 views · 0 following · Score 1

My table schema in Scala is:

snSession.sql("create table category_subscriber( id int,catId int,brandId int,domains int,osId int,rType int,rTime int,ctId int,icmpId int,setId int,rAt int,cyId int)使用列选项(存储桶'5',PARTITION_BY 'ID',OVERFLOW 'true',EVICTION_BY 'LRUHEAPPERCENT‘)");

My Java code is:

Statement statement = snappy.createStatement();
            statement.execute("CREATE EXTERNAL TABLE CATEGORY_SUBSCRIBER USING com.databricks.spark.csv OPTIONS(path '/home/sys1010/Desktop/category_sub.csv', header 'true', inferSchema 'true', nullValue 'NULL', maxCharsPerColumn '4096';");

I get the following error when importing the data from CSV into SnappyData through Java:

INFO: Starting client on '172.16.20.28' with ID='1965|2018/08/06 15:38:58.573 IST' Source-Revision=e6cfbfdb0f14ee87261381934075b7f37672a99d
Aug 06, 2018 3:38:59 PM snappydump.SnappyOps upsert
SEVERE: null
java.sql.SQLException: (SQLState=42X01 Severity=20000) (Server=172.16.20.28/172.16.20.28[1528] Thread=ThriftProcessor-3) Syntax error: org.apache.spark.sql.ParseException: Invalid input 'U', expected tableSchema or 'EOI' (line 1, column 1):
USING com.databricks.spark.csv OPTIONS(path '/home/sys1010/Desktop/category_sub.csv', header 'true', inferSchema 'true', nullValue 'NULL', maxCharsPerColumn '4096'
^;;.
    at io.snappydata.thrift.SnappyDataService$execute_result$execute_resultStandardScheme.read(SnappyDataService.java:7033)
    at io.snappydata.thrift.SnappyDataService$execute_result$execute_resultStandardScheme.read(SnappyDataService.java:7010)
    at io.snappydata.thrift.SnappyDataService$execute_result.read(SnappyDataService.java:6949)
    at io.snappydata.org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86)
    at io.snappydata.thrift.SnappyDataService$Client.recv_execute(SnappyDataService.java:256)
    at io.snappydata.thrift.SnappyDataService$Client.execute(SnappyDataService.java:239)
    at io.snappydata.thrift.internal.ClientService.execute(ClientService.java:889)
    at io.snappydata.thrift.internal.ClientStatement.execute(ClientStatement.java:720)
    at io.snappydata.thrift.internal.ClientStatement.execute(ClientStatement.java:371)
    at snappydump.SnappyOps.upsert(SnappyOps.java:29)
    at snappydump.SnappyDump.menu(SnappyDump.java:51)
    at snappydump.SnappyDump.main(SnappyDump.java:39)
Caused by: java.rmi.ServerException: Server STACK: java.sql.SQLSyntaxErrorException(42X01): Syntax error: org.apache.spark.sql.ParseException: Invalid input 'U', expected tableSchema or 'EOI' (line 1, column 1):
USING com.databricks.spark.csv OPTIONS(path '/home/sys1010/Desktop/category_sub.csv', header 'true', inferSchema 'true', nullValue 'NULL', maxCharsPerColumn '4096'
^;;.
    at com.pivotal.gemfirexd.internal.iapi.error.StandardException.newException(StandardException.java:214)
    at com.pivotal.gemfirexd.internal.engine.Misc.processFunctionException(Misc.java:776)
    at com.pivotal.gemfirexd.internal.engine.Misc.processFunctionException(Misc.java:757)
    at com.pivotal.gemfirexd.internal.engine.sql.execute.SnappySelectResultSet.setup(SnappySelectResultSet.java:284)
    at com.pivotal.gemfirexd.internal.engine.distributed.message.GfxdFunctionMessage.executeFunction(GfxdFunctionMessage.java:332)
    at com.pivotal.gemfirexd.internal.engine.distributed.message.GfxdFunctionMessage.executeFunction(GfxdFunctionMessage.java:274)
    at com.pivotal.gemfirexd.internal.engine.sql.execute.SnappyActivation.executeOnLeadNode(SnappyActivation.java:338)
    at com.pivotal.gemfirexd.internal.engine.sql.execute.SnappyActivation.executeWithResultSet(SnappyActivation.java:202)
    at com.pivotal.gemfirexd.internal.engine.sql.execute.SnappyActivation.execute(SnappyActivation.java:158)
    at com.pivotal.gemfirexd.internal.impl.sql.GenericActivationHolder.execute(GenericActivationHolder.java:462)
    at com.pivotal.gemfirexd.internal.impl.sql.GenericPreparedStatement.execute(GenericPreparedStatement.java:586)
    at com.pivotal.gemfirexd.internal.impl.jdbc.EmbedStatement.executeStatement(EmbedStatement.java:2175)
    at com.pivotal.gemfirexd.internal.impl.jdbc.EmbedStatement.execute(EmbedStatement.java:1289)
    at com.pivotal.gemfirexd.internal.impl.jdbc.EmbedStatement.execute(EmbedStatement.java:1006)
    at com.pivotal.gemfirexd.internal.impl.jdbc.EmbedStatement.execute(EmbedStatement.java:972)
    at io.snappydata.thrift.server.SnappyDataServiceImpl.execute(SnappyDataServiceImpl.java:1704)
    at io.snappydata.thrift.SnappyDataService$Processor$execute.getResult(SnappyDataService.java:1511)
    at io.snappydata.thrift.SnappyDataService$Processor$execute.getResult(SnappyDataService.java:1495)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at io.snappydata.thrift.server.SnappyDataServiceImpl$Processor.process(SnappyDataServiceImpl.java:201)
    at io.snappydata.thrift.server.SnappyThriftServerThreadPool$WorkerProcess.run(SnappyThriftServerThreadPool.java:270)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at io.snappydata.thrift.server.SnappyThriftServer$1.lambda$newThread$0(SnappyThriftServer.java:143)
    at java.lang.Thread.run(Thread.java:748)

The data in the CSV file is tab-separated, as shown below:

59314315    22  0   50  0   4   1531506600  0   87152   0   1531582029  0   2018-07-31
53865527    22  0   50  0   4   1531506600  0   87152   0   1531582037  0   2018-07-31
42637344    22  0   50  0   4   1531506600  0   87122   0   1531582142  0   2018-07-31
20501400    22  0   50  0   4   1531506600  0   87122   0   1531582263  0   2018-07-31
17067216    22  0   50  0   4   1531506600  0   87122   0   1531582291  0   2018-07-31
70845365    22  0   50  0   4   1531506600  0   86362   0   1531582308  0   2018-07-31
83702601    22  0   50  0   4   1531506600  0   87122   0   1531582373  0   2018-07-31

Can anyone help me?


2 Answers

Stack Overflow user

Posted on 2018-08-06 18:30:13

There is a syntax error in the statement: the OPTIONS(...) list is missing its closing parenthesis. The corrected statement is:

Statement statement = snappy.createStatement();
statement.execute("CREATE EXTERNAL TABLE CATEGORY_SUBSCRIBER USING com.databricks.spark.csv OPTIONS(path '/home/sys1010/Desktop/category_sub.csv', header 'true', inferSchema 'true', nullValue 'NULL', maxCharsPerColumn '4096')");
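As a minimal end-to-end sketch, this is how the corrected statement could be used from Java, under some assumptions that are not part of the original answer: the JDBC URL, the staging table name CATEGORY_SUBSCRIBER_STAGE, the explicit tab delimiter for the tab-separated file, and the final INSERT ... SELECT into the column table are all illustrative.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CsvImportSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical JDBC URL; point it at your SnappyData locator's client host and port.
        try (Connection snappy = DriverManager.getConnection("jdbc:snappydata://localhost:1527/");
             Statement statement = snappy.createStatement()) {

            // Corrected CREATE EXTERNAL TABLE: the OPTIONS(...) list is closed before the final quote.
            // The delimiter option is an assumption, added because the file is tab-separated.
            statement.execute("CREATE EXTERNAL TABLE CATEGORY_SUBSCRIBER_STAGE "
                    + "USING com.databricks.spark.csv OPTIONS("
                    + "path '/home/sys1010/Desktop/category_sub.csv', "
                    + "header 'true', inferSchema 'true', nullValue 'NULL', "
                    + "maxCharsPerColumn '4096', delimiter '\t')");

            // Assumed follow-up step: copy the staged rows into the column table,
            // assuming the staged columns line up one-to-one with category_subscriber's columns.
            statement.execute("INSERT INTO category_subscriber "
                    + "SELECT * FROM CATEGORY_SUBSCRIBER_STAGE");
        }
    }
}

The essential fix is only the closing parenthesis; the staging-plus-insert split is just one common way to move the rows into the column table afterwards.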

Score: 0

Stack Overflow user

Posted on 2018-08-07 07:09:20

CREATE EXTERNAL TABLE <name> USING csv OPTIONS (....) should also work. CSV is a built-in data source now.
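A minimal sketch of that variant, reusing a Statement obtained as in the previous answer; the table name CATEGORY_SUBSCRIBER_CSV and the tab delimiter are illustrative assumptions rather than details from this answer:

// Same external-table idea, but with the built-in csv source instead of com.databricks.spark.csv.
Statement stmt = snappy.createStatement();
stmt.execute("CREATE EXTERNAL TABLE CATEGORY_SUBSCRIBER_CSV "
        + "USING csv OPTIONS("
        + "path '/home/sys1010/Desktop/category_sub.csv', "
        + "header 'true', inferSchema 'true', delimiter '\t', nullValue 'NULL')");
stmt.close();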

Score: 0
Original link: https://stackoverflow.com/questions/51705302
