Real-Time Data Lake: Streaming Writes to Hudi with Flink CDC

1. Environment Setup

• Flink 1.12.2_2.11
• Hudi 0.9.0-SNAPSHOT (master branch)
• Spark 2.4.5, Hadoop 3.1.3, Hive 3.1.2

2. Writing to Hudi with Flink CDC

Create the MySQL table and seed it with a few rows as follows:

create table users(
    id bigint auto_increment primary key,
    name varchar(20) null,
    birthday timestamp default CURRENT_TIMESTAMP not null,
    ts timestamp default CURRENT_TIMESTAMP not null
);

-- Insert a few rows
insert into users (name) values ('hello');
insert into users (name) values ('world');
insert into users (name) values ('iceberg');
insert into users (id, name) values (4, 'spark');
insert into users (name) values ('hudi');

select * from users;

update users set name = 'hello spark' where id = 5;
delete from users where id = 5;

Start the Flink SQL client and create the source and sink tables:

$FLINK_HOME/bin/sql-client.sh embedded

-- 1. Create the mysql-cdc source table
CREATE TABLE mysql_users (
    id BIGINT PRIMARY KEY NOT ENFORCED,
    name STRING,
    birthday TIMESTAMP(3),
    ts TIMESTAMP(3)
) WITH (
    'connector' = 'mysql-cdc',
    'hostname' = 'localhost',
    'port' = '3306',
    'username' = 'root',
    'password' = '123456',
    'server-time-zone' = 'Asia/Shanghai',
    'database-name' = 'mydb',
    'table-name' = 'users'
);

-- 2. Create the Hudi sink table
CREATE TABLE hudi_users2 (
    id BIGINT PRIMARY KEY NOT ENFORCED,
    name STRING,
    birthday TIMESTAMP(3),
    ts TIMESTAMP(3),
    `partition` VARCHAR(20)
) PARTITIONED BY (`partition`) WITH (
    'connector' = 'hudi',
    'table.type' = 'MERGE_ON_READ',
    'path' = 'hdfs://localhost:9000/hudi/hudi_users2',
    'read.streaming.enabled' = 'true',
    'read.streaming.check-interval' = '1'
);

-- 3. Write from mysql-cdc into Hudi; this submits a Flink job
INSERT INTO hudi_users2 SELECT *, DATE_FORMAT(birthday, 'yyyyMMdd') FROM mysql_users;

Once the Flink job has been submitted successfully, you can check it in the Flink web UI.

You can also inspect the Hudi data path on HDFS. These directories only appear after Flink has completed 5 checkpoints (the default, which can be changed); at the beginning there is only a single .hoodie folder.
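
As a quick sanity check, you can also query the Hudi table from the same SQL client. A minimal sketch using the hudi_users2 table defined above; since 'read.streaming.enabled' is already 'true', this SELECT should start a continuous streaming read rather than a one-off batch scan:

-- Sketch: streaming read of the Hudi table created above; rows written by the
-- CDC pipeline should show up here once checkpoints have committed them.
SELECT * FROM hudi_users2;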

After running insert, update, and delete statements against MySQL, the data becomes queryable from Hive, Spark SQL, or Presto once compaction has produced Parquet files (this article only covers the Hive and Spark SQL tests). Note that if no Parquet files have been generated yet, the Parquet-based external tables we create below will return no data.
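
For reference, compaction is controlled by options on the Hudi sink table. Below is a minimal sketch assuming the Hudi 0.9 Flink connector option names ('compaction.async.enabled', 'compaction.trigger.strategy', 'compaction.delta_commits'); the table name and values are illustrative, not the exact settings used in this test:

-- Hypothetical table used only to illustrate the compaction-related options.
CREATE TABLE hudi_users2_compaction_demo (
    id BIGINT PRIMARY KEY NOT ENFORCED,
    name STRING,
    birthday TIMESTAMP(3),
    ts TIMESTAMP(3),
    `partition` VARCHAR(20)
) PARTITIONED BY (`partition`) WITH (
    'connector' = 'hudi',
    'table.type' = 'MERGE_ON_READ',
    'path' = 'hdfs://localhost:9000/hudi/hudi_users2_compaction_demo',
    'compaction.async.enabled' = 'true',            -- run compaction inside the Flink job
    'compaction.trigger.strategy' = 'num_commits',  -- trigger on a number of delta commits
    'compaction.delta_commits' = '5'                -- compact every 5 delta commits (the default)
);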

3. Querying the Hudi Table from Hive

cd $HIVE_HOME
mkdir auxlib

Then copy hudi-hadoop-mr-bundle-0.9.0-SNAPSHOT.jar into the auxlib directory.

Log in to Hive with Beeline:

beeline -u jdbc:hive2://localhost:10000 -n hadoop hadoop
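
If the auxlib approach is not convenient, a possible per-session alternative (a sketch, not what this article does; the jar path is hypothetical and must be reachable from the HiveServer2 host) is to add the bundle once connected:

-- Hypothetical path; point it at wherever the bundle jar was built or copied.
ADD JAR /path/to/hudi-hadoop-mr-bundle-0.9.0-SNAPSHOT.jar;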

Create external tables pointing at the Hudi path. There are two ways to define them:

-- Option 1: INPUTFORMAT is org.apache.hudi.hadoop.HoodieParquetInputFormat.
-- This only reads what is already in the Parquet data files, so rows that were
-- just updated or deleted will not show up.
CREATE EXTERNAL TABLE `hudi_users_2`(
  `_hoodie_commit_time` string,
  `_hoodie_commit_seqno` string,
  `_hoodie_record_key` string,
  `_hoodie_partition_path` string,
  `_hoodie_file_name` string,
  `id` bigint,
  `name` string,
  `birthday` bigint,
  `ts` bigint)
PARTITIONED BY (`partition` string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT 'org.apache.hudi.hadoop.HoodieParquetInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION 'hdfs://localhost:9000/hudi/hudi_users2';

-- Option 2: INPUTFORMAT is org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat.
-- This reads newly written data in near real time: for a Merge On Read table it merges
-- the column-oriented Parquet base files with the row-based Avro log files before
-- presenting the result to the user.
CREATE EXTERNAL TABLE `hudi_users_2_mor`(
  `_hoodie_commit_time` string,
  `_hoodie_commit_seqno` string,
  `_hoodie_record_key` string,
  `_hoodie_partition_path` string,
  `_hoodie_file_name` string,
  `id` bigint,
  `name` string,
  `birthday` bigint,
  `ts` bigint)
PARTITIONED BY (`partition` string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT 'org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION 'hdfs://localhost:9000/hudi/hudi_users2';

-- Add partitions
alter table hudi_users_2 add if not exists partition(`partition`='20210414') location 'hdfs://localhost:9000/hudi/hudi_users2/20210414';
alter table hudi_users_2_mor add if not exists partition(`partition`='20210414') location 'hdfs://localhost:9000/hudi/hudi_users2/20210414';

-- Query a partition
select * from hudi_users_2 where `partition`='20210414';
select * from hudi_users_2_mor where `partition`='20210414';

Tables whose INPUTFORMAT is org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat cannot run aggregate queries in Hive 3.1.2.

For example, running select count(1) from hudi_users3_mor where `partition`='20210414'; fails.

Check the Hive log with tail -fn 100 hiveserver2.log.

The following setting is required: set hive.input.format = org.apache.hudi.hadoop.hive.HoodieCombineHiveInputFormat; For details, see this issue: https://github.com/apache/hudi/issues/2813, or the Alibaba Cloud documentation: https://help.aliyun.com/document_detail/193310.html?utm_content=g_1000230851&spm=5176.20966629.toubu.3.f2991ddcpxxvD1#title-ves-82n-odd
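
Putting the two steps together in the Beeline session, a sketch of the workaround referenced above looks like this:

-- Apply the input-format workaround, then retry the aggregation.
set hive.input.format = org.apache.hudi.hadoop.hive.HoodieCombineHiveInputFormat;
select count(1) from hudi_users3_mor where `partition` = '20210414';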

Running the query again after applying the setting still fails with the same error.

However, the same query succeeds locally with Hive 2.3.8, and a community member testing version 1.1 reported the same error, so the current guess is that this is a Hive version compatibility issue.

4. Querying the Hudi Table from Spark SQL

Copy hudi-spark-bundle_2.11-0.9.0-SNAPSHOT.jar to $SPARK_HOME/jars on every node.

Copy hudi-hadoop-mr-bundle-0.9.0-SNAPSHOT.jar to $HADOOP_HOME/share/hadoop/hdfs on every node, then restart Hadoop.

Create the tables; again, there are two ways:

CREATE EXTERNAL TABLE `hudi_users3_spark`(
  `_hoodie_commit_time` string,
  `_hoodie_commit_seqno` string,
  `_hoodie_record_key` string,
  `_hoodie_partition_path` string,
  `_hoodie_file_name` string,
  `id` bigint,
  `name` string,
  `birthday` bigint,
  `ts` bigint)
PARTITIONED BY (`partition` string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT 'org.apache.hudi.hadoop.HoodieParquetInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION 'hdfs://localhost:9000/hudi/hudi_users3';

alter table hudi_users3_spark add if not exists partition(`partition`='20210414') location 'hdfs://localhost:9000/hudi/hudi_users3/20210414';

select * from hudi_users3_spark where `partition`='20210414';

-- Table definition that can read the data in near real time
CREATE EXTERNAL TABLE `hudi_users3_spark_mor`(
  `_hoodie_commit_time` string,
  `_hoodie_commit_seqno` string,
  `_hoodie_record_key` string,
  `_hoodie_partition_path` string,
  `_hoodie_file_name` string,
  `id` bigint,
  `name` string,
  `birthday` bigint,
  `ts` bigint)
PARTITIONED BY (`partition` string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT 'org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION 'hdfs://localhost:9000/hudi/hudi_users3';

alter table hudi_users3_spark_mor add if not exists partition(`partition`='20210414') location 'hdfs://localhost:9000/hudi/hudi_users3/20210414';

select * from hudi_users3_spark_mor where `partition`='20210414';

To read the real-time Hudi data from Spark SQL, the following setting is required: set spark.sql.hive.convertMetastoreParquet=false;
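
For example, in the spark-sql shell, a short sketch combining the setting above with the real-time table defined earlier:

-- Disable the metastore Parquet conversion, then query the real-time table.
set spark.sql.hive.convertMetastoreParquet=false;
select * from hudi_users3_spark_mor where `partition` = '20210414';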

Note that queries will fail if the column types in the table definition do not match the Hudi schema, for example:

CREATE EXTERNAL TABLE `hudi_users3_spark_mor`(
  `_hoodie_commit_time` string,
  `_hoodie_commit_seqno` string,
  `_hoodie_record_key` string,
  `_hoodie_partition_path` string,
  `_hoodie_file_name` string,
  `id` string,
  `name` string,
  `birthday` string,
  `ts` string)
PARTITIONED BY (`partition` string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS INPUTFORMAT 'org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION 'hdfs://localhost:9000/hudi/hudi_users3';

With id, ts, and birthday all declared as string, the query fails. To read Hudi data with Spark SQL, the column types must match the Hudi schema exactly.

5. Next Steps

So far we have only tested Flink CDC writes to Hudi with a small amount of data. Next we plan to run the pipeline against production data to evaluate the performance and stability of writing to Hudi with Flink CDC.

6. References

• https://mp.weixin.qq.com/s/5h6VnXXPvEY3Brhb03ohBA
• https://github.com/MyLanPangzi/flink-demo/blob/main/docs/%E5%A2%9E%E9%87%8F%E5%9E%8B%E6%95%B0%E4%BB%93%E6%8E%A2%E7%B4%A2%EF%BC%9AFlink%20+%20Hudi.md
• https://blog.csdn.net/u010848845/article/details/115373576?spm=1001.2014.3001.5501
• https://otime.top/2021/01/hudi01-start/
• https://otime.top/2021/01/hudi03-write-query/


Copyright notice: This is an original article by CSDN blogger 「明喆_sama」, licensed under CC 4.0 BY-SA. Please include the original source link and this notice when reposting. Original link: https://blog.csdn.net/u010848845/article/details/115701329
