Fayson's GitHub: https://github.com/fayson/cdhproject
1. Purpose of This Document
In earlier articles Fayson has covered StreamSets in some depth; see "How to Install and Use StreamSets on CDH", "How to Use StreamSets to Incrementally Update Data from MySQL to Hive", "How to Use StreamSets to Write Changed Data from MySQL to Kudu in Real Time", "How to Use StreamSets to Ingest Kafka Data into Kudu in Real Time", "How to Use StreamSets to Write Changed Data from MySQL to HBase in Real Time", "How to Use StreamSets to Ingest Kafka Data into Hive Tables in Real Time", and "How to Use StreamSets to Ingest Nested JSON Data from Kafka into Hive Tables in Real Time". In this article Fayson describes how to use StreamSets to capture changed data from Oracle and write it to Kudu in real time. The StreamSets processing flow is as follows:
Contents covered:
1. Configuring StreamSets, creating the Pipeline, and testing
2. Summary
3. Appendix: script code
Test environment:
1. CentOS 7.2
2. CM and CDH version: CDH 5.14
3. Oracle 11.2.0.4
4. StreamSets 3.2.0
Prerequisites:
1. StreamSets is installed on the cluster and running normally.
2. Oracle and Oracle LogMiner are running normally.
2. Configuring StreamSets: Creating and Testing the Pipeline
1. Add a StreamSets Data Collector instance to the cdh04 node of the CDH cluster and start the service.
2. Open http://cdh04:18630/ in a browser to access the StreamSets web console. For this test scenario, create a new Pipeline named oracle_sdc_kudu; after saving it, begin building the data pipeline for real-time synchronization from the Oracle database.
3. For the Origin processor, select Oracle CDC Client.
4. For the Destination processor, select Kudu - Apache Kudu 1.6.0. Kudu 1.6.0 is already deployed in this CDH test cluster.
5. Per the test scenario requirements, configure the oracle_sdc_kudu Pipeline in StreamSets as an end-to-end real-time synchronization flow from the Oracle CDC Client origin to the Kudu destination.
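To illustrate how the end-to-end flow routes each change event, the sketch below (plain Python, not the StreamSets SDK) models the mapping that StreamSets uses: each record carries the header attribute sdc.operation.type, and the Kudu destination picks the matching write operation from it. The numeric codes shown are the standard StreamSets convention and are stated here as an assumption, not taken from the original article.

```python
# Sketch: how a CDC record's header decides the Kudu write operation.
# The code-to-operation mapping below follows the usual StreamSets
# sdc.operation.type convention (assumed, not from the original article).
SDC_OPERATION_TYPE = {
    1: "INSERT",   # row inserted in Oracle -> Kudu insert
    2: "DELETE",   # row deleted in Oracle  -> Kudu delete
    3: "UPDATE",   # row updated in Oracle  -> Kudu update
    4: "UPSERT",
}

def kudu_operation(record_header):
    """Map a record's sdc.operation.type header value to a Kudu operation name."""
    code = int(record_header["sdc.operation.type"])
    return SDC_OPERATION_TYPE.get(code, "UNSUPPORTED")

# Example: an Oracle UPDATE captured by LogMiner becomes a Kudu update.
print(kudu_operation({"sdc.operation.type": "3"}))  # UPDATE
```

This is why no transformation stage is needed between origin and destination: inserts, updates, and deletes verified later in this article all flow through the same pipeline.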
6. Adjust the key configuration of the Oracle CDC Client origin in StreamSets as needed, as follows.
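The original configuration screenshots are not reproduced here. As a reference only, the key Oracle CDC Client properties for this scenario would look roughly like the following; the connection string host and SID are placeholders (the hostname echoes the oracleasm shell prompt seen later in the appendix), while the schema, table, and Online Catalog dictionary source match the SQL used elsewhere in this article:

```
JDBC Connection String : jdbc:oracle:thin:@oracleasm:1521:orcl   -- host/SID assumed
Schema                 : TEST
Tables                 : SDC
Initial Change         : From Latest Change                      -- assumed
Dictionary Source      : Online Catalog
Operations             : INSERT, UPDATE, DELETE
```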
7. Adjust the key configuration of the Kudu destination in StreamSets as needed, as follows.
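Again in place of the original screenshot, a reference sketch of the key Kudu destination properties. The master address comes from the DDL in the appendix, and the table name carries the impala:: prefix because the Kudu table is created through Impala; the Change Log Format setting is an SDC 3.x option and is assumed here:

```
Kudu Masters      : cdh01:7051
Table Name        : impala::poc_ending.sdc
Change Log Format : Oracle CDC Client                            -- assumed
```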
8. Add the Oracle Database JDBC driver package ojdbc.jar to StreamSets.
For details, see: https://streamsets.com/documentation/datacollector/latest/help.
9. For this test scenario, once the configuration steps above are complete, run a validate operation; if validation passes, start the Pipeline to begin real-time Oracle-to-Kudu data synchronization.
After the StreamSets Pipeline is started, its normal running status, monitoring charts, and logs can be viewed in the console.
10. Verify real-time synchronization of inserted data by executing the following on Oracle:
insert into test.sdc(id,name) values('1','AAA');
insert into test.sdc(id,name) values('2','BBB');
insert into test.sdc(id,name) values('3','CCC');
insert into test.sdc(id,name) values('4','DDD');
insert into test.sdc(id,name) values('5','EEE');
insert into test.sdc(id,name) values('6','FFF');
commit;
Query the Kudu table through Impala to confirm the new rows:
select * from poc_ending.sdc;
select count(1) from poc_ending.sdc;
11. Verify real-time synchronization of updated data by executing the following on Oracle:
update test.sdc set name='111' where id ='1';
update test.sdc set name='222' where id ='2';
update test.sdc set name='333' where id ='3';
update test.sdc set name='444' where id ='4';
update test.sdc set name='555' where id ='5';
update test.sdc set name='666' where id ='6';
commit;
Query the Kudu table through Impala to confirm the updates:
select * from poc_ending.sdc;
select count(1) from poc_ending.sdc;
12. Verify real-time synchronization of deleted data by executing the following on Oracle:
delete from test.sdc where id ='1';
delete from test.sdc where id ='2';
delete from test.sdc where id ='3';
delete from test.sdc where id ='4';
delete from test.sdc where id ='5';
delete from test.sdc where id ='6';
commit;
Query the Kudu table through Impala to confirm the deletes:
select count(1) from poc_ending.sdc;
3. Summary
1. This setup meets the enterprise requirement for real-time data synchronization from an Oracle database to Hadoop (Kudu).
2. Cloudera Manager supports unified management of both the CDH service components and the StreamSets service.
3. StreamSets supports end-to-end real-time synchronization from Oracle Database 11.2.0.4 to Kudu 1.6.0. With the support of Oracle LogMiner, it can analyze the REDO log files produced by the Oracle database and replicate data-changing events such as INSERT/UPDATE/DELETE to Kudu in real time.
4. StreamSets provides a full-featured web console for visual configuration and management of data synchronization, runtime monitoring, alerting on exceptions, and event log viewing.
4. Appendix: Script Code
-- PERFORM on IMPALA WITH KUDU --------------------------------------------------
CREATE DATABASE IF NOT EXISTS POC_ENDING;
CREATE TABLE IF NOT EXISTS POC_ENDING.sdc(
id string,
name string,
PRIMARY KEY (id)
) PARTITION BY HASH(id) PARTITIONS 12
STORED AS KUDU
TBLPROPERTIES('kudu.master_addresses'='cdh01:7051');
-- PERFORM on ORACLE DATABASE --------------------------------------------------
-- The StreamSets Oracle CDC Client origin depends on an Oracle LogMiner session and the view v$logmnr_contents.
-- 1. Configure the data dictionary: the built-in procedure DBMS_LOGMNR_D.BUILD can use the current online catalog, a flat dictionary file, or a dictionary extracted into the redo logs.
-- 2. Add redo log files: the built-in procedure DBMS_LOGMNR.ADD_LOGFILE adds specific archived or online redo log files to the session.
-- 3. Start change analysis: after LogMiner is started with the built-in procedure DBMS_LOGMNR.START_LOGMNR, the analysis view v$logmnr_contents can be queried.
-- 4. Stop change analysis: the built-in procedure DBMS_LOGMNR.END_LOGMNR ends session-level LogMiner access.
SQL> select * from v$logmnr_contents;
EXCEPTION 1 ON ORACLE-SESSION-CLIENT:
ORA-01306: dbms_logmnr.start_logmnr() must be invoked before selecting from v$logmnr_contents
SQL> execute DBMS_LOGMNR.START_LOGMNR(OPTIONS=> DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
ORA-01292: no log file has been specified for the current LogMiner session
ORA-06512: at "SYS.DBMS_LOGMNR", line 58
ORA-06512: at line 1
SQL> execute DBMS_LOGMNR_D.BUILD(OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
SQL> execute DBMS_LOGMNR.START_LOGMNR(OPTIONS=> DBMS_LOGMNR.DICT_FROM_REDO_LOGS);
EXCEPTION 2 ON ORACLE-SESSION-CLIENT and STREAMSETS-WEBCONSOLE:
java.sql.SQLException: ORA-01292: no log file has been specified for the current LogMiner session
ORA-06512: at "SYS.DBMS_LOGMNR", line 58
ORA-06512: at line 1
[oracle@oracleasm ~]$ oerr ora 1292
01292, 00000, "no log file has been specified for the current LogMiner session"
// *Cause: No logfile has been specified for the LogMiner session.
// *Action: Specify atleast one log file.
EXCEPTION 3 ON ORACLE-SESSION-CLIENT and STREAMSETS-WEBCONSOLE:
java.sql.SQLException: ORA-01291: Not all logfiles corresponding to the time or scn range specified
ORA-06512: at "SYS.DBMS_LOGMNR", line 58
ORA-06512: at line 1
[root@oracleasm dmp]# oerr ora 1291
01291, 00000, "missing logfile"
// *Cause: Not all logfiles corresponding to the time or scn range specified
// have been added to the list.
// *Action: Check the v$logmnr_logs view to determine the missing scn
// range, and add the relevant logfiles.
SOLUTION:
SQL> select member from v$logfile;
/mnt/orcl/orcl/redo03.log
/mnt/orcl/orcl/redo02.log
/mnt/orcl/orcl/redo01.log
SQL> execute DBMS_LOGMNR.add_logfile('/mnt/orcl/orcl/redo01.log');
SQL> execute DBMS_LOGMNR.add_logfile('/mnt/orcl/orcl/redo02.log');
SQL> execute DBMS_LOGMNR.add_logfile('/mnt/orcl/orcl/redo03.log');
--------------------------------------------------
sqlplus / as sysdba;
SQL> show user;
--If "select log_mode from v$database" returns NOARCHIVELOG, enable archiving:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
--If supplemental_log_data_min, supplemental_log_data_pk, and supplemental_log_data_all in v$database are NO, enable supplemental logging:
SQL> alter database add supplemental log data;
SQL> alter database add supplemental log data (primary key) columns;
SQL> alter database add supplemental log data (all) columns;
SQL> alter system switch logfile;
SQL> select log_mode,supplemental_log_data_min,supplemental_log_data_pk,supplemental_log_data_all from V$database;
LOG_MODE SUPPLEMENTAL_LOG SUPPLE SUPPLE
------------------------ ---------------- ------ ------
ARCHIVELOG YES YES YES
SQL> --show parameter utl_file_dir;
SQL> --show parameter log_archive_dest_1;
SQL> --select * from v$log;
SQL> --select * from v$logfile;
SQL> --create table test.sdc (id varchar2(20), name varchar2(20));
SQL> create user test identified by test;
SQL> grant create session, alter session, execute_catalog_role,select any dictionary, select any transaction, select any table to test;
SQL> grant select on V$logmnr_parameters to test;
SQL> grant select on v$logmnr_logs to test;
SQL> grant select on v$archived_log to test;
SQL> grant select on test.sdc to test;
SQL> conn test/test;
SQL> show user;
SQL> desc v$logmnr_contents
SQL>
SQL> --execute DBMS_LOGMNR.END_LOGMNR;
SQL> execute DBMS_LOGMNR.add_logfile('/mnt/orcl/orcl/redo01.log');
SQL> execute DBMS_LOGMNR.add_logfile('/mnt/orcl/orcl/redo02.log');
SQL> execute DBMS_LOGMNR.add_logfile('/mnt/orcl/orcl/redo03.log');
SQL> execute DBMS_LOGMNR.START_LOGMNR(OPTIONS=> DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SQL> SELECT SCN,TIMESTAMP,SEQUENCE#,SQL_REDO,SQL_UNDO from v$logmnr_contents WHERE TABLE_NAME='SDC';
SQL> insert into test.sdc(id,name) values('1','AAA');
SQL> insert into test.sdc(id,name) values('2','BBB');
SQL> insert into test.sdc(id,name) values('3','CCC');
SQL> insert into test.sdc(id,name) values('4','DDD');
SQL> insert into test.sdc(id,name) values('5','EEE');
SQL> insert into test.sdc(id,name) values('6','FFF');
SQL> commit;
SQL> SELECT SCN,TIMESTAMP,SEQUENCE#,SQL_REDO,SQL_UNDO from v$logmnr_contents WHERE TABLE_NAME='SDC';
SQL> update test.sdc set name='111' where id ='1';
SQL> update test.sdc set name='222' where id ='2';
SQL> update test.sdc set name='333' where id ='3';
SQL> update test.sdc set name='444' where id ='4';
SQL> update test.sdc set name='555' where id ='5';
SQL> update test.sdc set name='666' where id ='6';
SQL> commit;
SQL> SELECT SCN,TIMESTAMP,SEQUENCE#,SQL_REDO,SQL_UNDO from v$logmnr_contents WHERE TABLE_NAME='SDC';
SQL> delete from test.sdc where id ='1';
SQL> delete from test.sdc where id ='2';
SQL> delete from test.sdc where id ='3';
SQL> delete from test.sdc where id ='4';
SQL> delete from test.sdc where id ='5';
SQL> delete from test.sdc where id ='6';
SQL> commit;
SQL> SELECT SCN,TIMESTAMP,SEQUENCE#,SQL_REDO,SQL_UNDO from v$logmnr_contents WHERE TABLE_NAME='SDC';
"To establish a heart for Heaven and Earth, to secure a livelihood for the people, to carry on the lost teachings of past sages, and to open an era of peace for all generations."
This is an original article; reposting is welcome. Please credit the source: the WeChat official account Hadoop实操 (Hadoop in Practice).