Hive Multi-Delimiter Support Example


1. Problem Description

How do you load a data file whose fields are separated by a multi-character delimiter into a Hive table? The sample data is as follows.

The field delimiter is "@#$":

```
test1@#$test1name@#$test2value
test2@#$test2name@#$test2value
test3@#$test3name@#$test4value
```

The goal is to load the sample data above into a Hive table (multi_delimiter_test) with the following structure:

| Column | Type |
|:----|:----|
| s1 | String |
| s2 | String |
| s3 | String |

2. Hive Multi-Delimiter Support

Hive 0.14 and later support multi-character field delimiters through MultiDelimitSerDe; see https://cwiki.apache.org/confluence/display/Hive/MultiDelimitSerDe
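As a minimal sketch of the DDL shape this SerDe expects (the full, tested statement for this walkthrough appears in section 3; the table and column names here are placeholders):

```sql
-- Minimal shape only: declare the SerDe and the multi-character delimiter.
CREATE TABLE t (c1 string, c2 string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe'
WITH SERDEPROPERTIES ("field.delim"="@#$")
STORED AS TEXTFILE;
```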

3. Implementation

  • Test environment

The test environment is CDH 5.11.1 with Hive 1.1.0, running on Red Hat 6.5.

  • Steps

1. Prepare the multi-delimiter data file and upload it to the corresponding HDFS directory

```
[ec2-user@ip-172-31-8-141 ~]$ cat multi_delimiter_test.dat
test1@#$test1name@#$test2value
test2@#$test2name@#$test2value
test3@#$test3name@#$test4value
[ec2-user@ip-172-31-8-141 ~]$ hadoop dfs -put multi_delimiter_test.dat /fayson/multi_delimiter_test
[ec2-user@ip-172-31-8-141 ~]$ hadoop dfs -ls /fayson/multi_delimiter_test
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Found 1 items
-rw-r--r--   3 user_r supergroup         93 2017-08-23 03:24 /fayson/multi_delimiter_test/multi_delimiter_test.dat
[ec2-user@ip-172-31-8-141 ~]$
```
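One aside not shown in the session above: `hadoop dfs -put` assumes the target directory already exists (otherwise the data lands in a file named /fayson/multi_delimiter_test rather than inside a directory). If needed, create it first, using the non-deprecated `hadoop fs` form:

```
# Assumption: /fayson/multi_delimiter_test may not exist yet on HDFS
hadoop fs -mkdir -p /fayson/multi_delimiter_test
```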

2. Create a table over the prepared multi-delimiter file

```sql
create external table multi_delimiter_test(
  s1 string,
  s2 string,
  s3 string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe'
WITH SERDEPROPERTIES ("field.delim"="@#$")
stored as textfile location '/fayson/multi_delimiter_test';
```
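As a quick sanity check that is not part of the original walkthrough, the table metadata can be inspected to confirm the SerDe took effect:

```sql
-- The "SerDe Library" field of the output should read
-- org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe
DESCRIBE FORMATTED multi_delimiter_test;
```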

3. Test

```
2: jdbc:hive2://localhost:10000/default> select * from multi_delimiter_test;
+--------------------------+--------------------------+--------------------------+--+
|  multi_delimiter_test.s1  |  multi_delimiter_test.s2  |  multi_delimiter_test.s3  |
+--------------------------+--------------------------+--------------------------+--+
| test1                    | test1name                | test2value               |
| test2                    | test2name                | test2value               |
| test3                    | test3name                | test4value               |
+--------------------------+--------------------------+--------------------------+--+
2: jdbc:hive2://localhost:10000/default> select count(*) from multi_delimiter_test;
INFO  : Ended Job = job_1503469952834_0006
INFO  : MapReduce Jobs Launched:
INFO  : Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 3.25 sec   HDFS Read: 6755 HDFS Write: 2 SUCCESS
INFO  : Total MapReduce CPU Time Spent: 3 seconds 250 msec
INFO  : Completed executing command(queryId=hive_20170823041818_ce58aae2-e6db-4eed-b6af-652235a6e66a); Time taken: 33.286 seconds
INFO  : OK
+------+--+
| _c0  |
+------+--+
| 3    |
+------+--+
1 row selected (33.679 seconds)
2: jdbc:hive2://localhost:10000/def
```

4. Common Issues

1. count queries fail

  • Error logs

Running a count query through beeline fails:

```
2: jdbc:hive2://localhost:10000/default> select count(*) from multi_delimiter_test;
INFO  : Compiling command(queryId=hive_20170823035959_f1b11a9b-757d-4d9b-b8a7-6d4ab1c00a97): select count(*) from multi_delimiter_test
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:_c0, type:bigint, comment:null)], properties:null)
INFO  : Completed compiling command(queryId=hive_20170823035959_f1b11a9b-757d-4d9b-b8a7-6d4ab1c00a97); Time taken: 0.291 seconds
INFO  : Executing command(queryId=hive_20170823035959_f1b11a9b-757d-4d9b-b8a7-6d4ab1c00a97): select count(*) from multi_delimiter_test
INFO  : Query ID = hive_20170823035959_f1b11a9b-757d-4d9b-b8a7-6d4ab1c00a97
INFO  : Total jobs = 1
INFO  : Launching Job 1 out of 1
INFO  : Starting task [Stage-1:MAPRED] in serial mode
INFO  : Number of reduce tasks determined at compile time: 1
INFO  : In order to change the average load for a reducer (in bytes):
INFO  :    set hive.exec.reducers.bytes.per.reducer=<number>
INFO  : In order to limit the maximum number of reducers:
INFO  :    set hive.exec.reducers.max=<number>
INFO  : In order to set a constant number of reducers:
INFO  :    set mapreduce.job.reduces=<number>
INFO  : number of splits:1
INFO  : Submitting tokens for job: job_1503469952834_0002
INFO  : Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameservice1, Ident: (token for hive: HDFS_DELEGATION_TOKEN owner=hive/ip-172-31-8-141.ap-southeast-1.compute.internal@CLOUDERA.COM, renewer=yarn, realUser=, issueDate=1503475160778, maxDate=1504079960778, sequenceNumber=27, masterKeyId=9)
INFO  : The url to track the job: http://ip-172-31-9-186.ap-southeast-1.compute.internal:8088/proxy/application_1503469952834_0002/
INFO  : Starting Job = job_1503469952834_0002, Tracking URL = http://ip-172-31-9-186.ap-southeast-1.compute.internal:8088/proxy/application_1503469952834_0002/
INFO  : Kill Command = /opt/cloudera/parcels/CDH-5.10.2-1.cdh5.10.2.p0.5/lib/hadoop/bin/hadoop job -kill job_1503469952834_0002
INFO  : Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
INFO  : 2017-08-23 03:59:32,039 Stage-1 map = 0%,  reduce = 0%
INFO  : 2017-08-23 04:00:08,106 Stage-1 map = 100%,  reduce = 100%
ERROR : Ended Job = job_1503469952834_0002 with errors
ERROR : FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
INFO  : MapReduce Jobs Launched:
INFO  : Stage-Stage-1: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
INFO  : Total MapReduce CPU Time Spent: 0 msec
INFO  : Completed executing command(queryId=hive_20170823035959_f1b11a9b-757d-4d9b-b8a7-6d4ab1c00a97); Time taken: 48.737 seconds
Error: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=2)
```

Running the same query from the Hive shell reports the following error:

```
Error: java.lang.RuntimeException: Error in configuring object
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:449)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
        ... 9 more
Caused by: java.lang.RuntimeException: Error in configuring object
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
        at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:38)
        ... 14 more
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
        ... 17 more
Caused by: java.lang.RuntimeException: Map operator initialization failed
        at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.configure(ExecMapper.java:147)
        ... 22 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.ClassNotFoundException: Class org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe not found
        at org.apache.hadoop.hive.ql.exec.MapOperator.getConvertedOI(MapOperator.java:323)
        at org.apache.hadoop.hive.ql.exec.MapOperator.setChildren(MapOperator.java:333)
        at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.configure(ExecMapper.java:116)
        ... 22 more
Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe not found
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2105)
        at org.apache.hadoop.hive.ql.plan.PartitionDesc.getDeserializer(PartitionDesc.java:140)
        at org.apache.hadoop.hive.ql.exec.MapOperator.getConvertedOI(MapOperator.java:297)
        ... 24 more
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 ms
```
  • Root cause analysis

The org.apache.hadoop.hive.contrib.serde2.MultiDelimitSerDe class lives in hive-contrib.jar.

Non-aggregating queries (such as the select * above) run fine, while aggregate functions fail. A plain select * can be answered by a fetch task inside HiveServer2, which has hive-contrib.jar on its classpath, whereas count(*) launches a MapReduce job; the failure therefore means a jar dependency is missing when the MapReduce task runs. MapReduce jobs execute on YARN, so it is the YARN runtime environment that lacks hive-contrib.jar.
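A quick way to confirm where the class lives, sketched under the assumption of the standard CDH parcel layout and its versionless hive-contrib.jar symlink (adjust the path or jar name to your install):

```
# List the jar's contents and look for the SerDe class
jar tf /opt/cloudera/parcels/CDH/lib/hive/lib/hive-contrib.jar | grep MultiDelimitSerDe
```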

  • Solution

On every node of the CDH cluster, copy hive-contrib-1.1.0-cdh5.10.2.jar into YARN's lib directory:

```
sudo scp -r /opt/cloudera/parcels/CDH/lib/hive/lib/hive-contrib-1.1.0-cdh5.10.2.jar /opt/cloudera/parcels/CDH/lib/hadoop-yarn/lib/
```
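To verify the copy landed on a given node (same path assumptions as above):

```
ls /opt/cloudera/parcels/CDH/lib/hadoop-yarn/lib/ | grep hive-contrib
```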

Re-run the count statement; it now succeeds.
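If copying the jar to every node is undesirable, a session-scoped alternative (a sketch assuming the same CDH parcel path) is to add the jar inside the beeline/Hive session; ADD JAR ships it to launched jobs through the distributed cache, but it must be repeated in every new session:

```sql
-- Adds hive-contrib to the session classpath and to jobs launched from it
ADD JAR /opt/cloudera/parcels/CDH/lib/hive/lib/hive-contrib-1.1.0-cdh5.10.2.jar;
SELECT count(*) FROM multi_delimiter_test;
```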


Originally published on 2017-08-30 via the WeChat public account Hadoop实操 (gh_c4c535955d0f).
