Hadoop Basics Tutorial - Chapter 12 Hive: Advanced (12.1 Built-in Functions) (Draft)

Chapter 12 Hive: Advanced

12.1 Built-in Functions

To make it easier to test Hive's built-in functions, first create a dummy table analogous to Oracle's dual pseudo-table:

hive> create table dual(value string);
OK
Time taken: 0.117 seconds
hive>
hive> insert into dual values("test");
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = root_20170820093018_106fdbe1-3d77-4fbb-b200-3b3d56007858
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1503220733636_0016, Tracking URL = http://node1:8088/proxy/application_1503220733636_0016/
Kill Command = /opt/hadoop-2.7.3/bin/hadoop job  -kill job_1503220733636_0016
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2017-08-20 09:30:34,522 Stage-1 map = 0%,  reduce = 0%
2017-08-20 09:30:44,205 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.76 sec
MapReduce Total cumulative CPU time: 1 seconds 760 msec
Ended Job = job_1503220733636_0016
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to directory hdfs://cetc/user/hive/warehouse/dual/.hive-staging_hive_2017-08-20_09-30-18_395_4589036656384871958-1/-ext-10000
Loading data to table default.dual
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1   Cumulative CPU: 1.76 sec   HDFS Read: 3674 HDFS Write: 73 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 760 msec
OK
Time taken: 28.658 seconds
hive>
hive> select 1+1 from dual;
OK
2
Time taken: 0.176 seconds, Fetched: 1 row(s)
hive> select 7%3 from dual;
OK
1
Time taken: 0.212 seconds, Fetched: 1 row(s)
hive>
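Note: newer Hive releases (reportedly 0.13 and later) allow SELECT without a FROM clause for constant expressions, so the dual table is a convenience rather than a requirement. A minimal sketch, assuming such a release:

select 1 + 1;    -- expected: 2
select 7 % 3;    -- expected: 1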

12.1.1 Standard Functions

(1) Date functions

hive> select to_date('2016-08-31 08:30:00') from dual;
OK
2016-08-31
Time taken: 0.129 seconds, Fetched: 1 row(s) 
hive> select year('2016-08-31 08:30:00') from dual;
OK
2016
Time taken: 0.207 seconds, Fetched: 1 row(s)
hive> select month('2016-08-31 08:30:00') from dual;
OK
8
Time taken: 0.156 seconds, Fetched: 1 row(s)
hive> 
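Besides to_date, year, and month, Hive ships several other date helpers. A short, hedged sketch (the results in the comments are what these functions should return, not captured output):

select day('2016-08-31 08:30:00') from dual;            -- 31
select datediff('2016-09-02', '2016-08-31') from dual;  -- 2
select date_add('2016-08-31', 1) from dual;             -- 2016-09-01
select date_sub('2016-08-31', 1) from dual;             -- 2016-08-30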

(2) Math functions

hive> select sqrt(2) from dual;
OK
1.4142135623730951
Time taken: 0.17 seconds, Fetched: 1 row(s) 
hive> select abs(-11) from dual;
OK
11
Time taken: 0.167 seconds, Fetched: 1 row(s)
hive> select floor(3.56) from dual;
OK
3
Time taken: 0.134 seconds, Fetched: 1 row(s)
hive> select ceil(3.123) from dual;
OK
4
Time taken: 0.131 seconds, Fetched: 1 row(s)
hive> select round(3.23456) from dual;
OK
3.0
Time taken: 0.138 seconds, Fetched: 1 row(s)
hive> select round(3.23456,3) from dual;
OK
3.235
Time taken: 0.291 seconds, Fetched: 1 row(s)
hive>
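A few more math functions worth knowing; this is an illustrative sketch, with expected results in comments rather than captured output:

select pow(2, 10) from dual;   -- 1024.0 (returns a double)
select pmod(-7, 3) from dual;  -- 2, a non-negative modulus; compare -7 % 3, which yields -1
select rand() from dual;       -- a pseudo-random double in [0, 1)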

(3) String functions

hive> select length('hadoop') from dual;
OK
6
Time taken: 0.28 seconds, Fetched: 1 row(s)
hive> select reverse('hadoop') from dual;
OK
poodah
Time taken: 0.257 seconds, Fetched: 1 row(s)
hive> select substr('hadoop',2) from dual;
OK
adoop
Time taken: 0.221 seconds, Fetched: 1 row(s)
hive> select trim(' hadoop  ') from dual;
OK
hadoop
Time taken: 0.267 seconds, Fetched: 1 row(s)
hive>
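Other frequently used string functions include concat, upper/lower, and split, which returns an array. An illustrative sketch, with expected results in comments:

select concat('hadoop', '-', 'hive') from dual;  -- hadoop-hive
select upper('hadoop') from dual;                -- HADOOP
select lower('HIVE') from dual;                  -- hive
select split('a,b,c', ',') from dual;            -- ["a","b","c"]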

12.1.2 Aggregate Functions

hive> select count(1) from emp;
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = root_20170824100247_a5b82db6-3a76-41bb-9f33-1c23c06209da
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1503582553611_0003, Tracking URL = http://node1:8088/proxy/application_1503582553611_0003/
Kill Command = /opt/hadoop-2.7.3/bin/hadoop job  -kill job_1503582553611_0003
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2017-08-24 10:03:07,501 Stage-1 map = 0%,  reduce = 0%
2017-08-24 10:03:21,721 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.49 sec
2017-08-24 10:03:33,482 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 5.88 sec
MapReduce Total cumulative CPU time: 5 seconds 880 msec
Ended Job = job_1503582553611_0003
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 5.88 sec   HDFS Read: 9088 HDFS Write: 102 SUCCESS
Total MapReduce CPU Time Spent: 5 seconds 880 msec
OK
13
Time taken: 47.277 seconds, Fetched: 1 row(s)
hive>
hive> select avg(sal) from emp;
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = root_20170824095107_1f447d4c-f008-491d-8537-00fc4a0d45ea
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1503582553611_0001, Tracking URL = http://node1:8088/proxy/application_1503582553611_0001/
Kill Command = /opt/hadoop-2.7.3/bin/hadoop job  -kill job_1503582553611_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2017-08-24 09:51:36,596 Stage-1 map = 0%,  reduce = 0%
2017-08-24 09:51:55,721 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.95 sec
2017-08-24 09:52:10,207 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 6.01 sec
MapReduce Total cumulative CPU time: 6 seconds 10 msec
Ended Job = job_1503582553611_0001
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 6.01 sec   HDFS Read: 9818 HDFS Write: 118 SUCCESS
Total MapReduce CPU Time Spent: 6 seconds 10 msec
OK
2077.0833333333335
Time taken: 65.158 seconds, Fetched: 1 row(s)
hive>
hive> select max(sal) from emp where did=30;
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = root_20170824095233_ba070012-65fb-42da-89de-f748d33fc9b9
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1503582553611_0002, Tracking URL = http://node1:8088/proxy/application_1503582553611_0002/
Kill Command = /opt/hadoop-2.7.3/bin/hadoop job  -kill job_1503582553611_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2017-08-24 09:52:52,526 Stage-1 map = 0%,  reduce = 0%
2017-08-24 09:53:07,359 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 4.0 sec
2017-08-24 09:53:18,303 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 6.7 sec
MapReduce Total cumulative CPU time: 6 seconds 700 msec
Ended Job = job_1503582553611_0002
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 6.7 sec   HDFS Read: 10207 HDFS Write: 106 SUCCESS
Total MapReduce CPU Time Spent: 6 seconds 700 msec
OK
2850.0
Time taken: 46.863 seconds, Fetched: 1 row(s)
hive>
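Aggregate functions are usually combined with GROUP BY to compute one result per group. A sketch against the same emp table (sal and did are the columns already queried above; output is omitted since it depends on the table's data):

select did, count(1), avg(sal), max(sal), min(sal)
from emp
group by did;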

12.1.3 Table-Generating Functions

hive> select array(1,2,3) from dual;
OK
[1,2,3]
Time taken: 0.371 seconds, Fetched: 1 row(s)
hive> select explode(array(1,2,3)) from dual;
OK
1
2
3
Time taken: 0.265 seconds, Fetched: 3 row(s)
hive>
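explode is a table-generating function (UDTF), so it cannot be freely mixed with ordinary columns in the same SELECT list; the usual workaround is LATERAL VIEW, which joins each generated row back to its source row. A minimal sketch using the dual table created above:

select d.value, t.num
from dual d
lateral view explode(array(1,2,3)) t as num;
-- expected: three rows pairing 'test' with 1, 2, and 3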
