
Greenplum Cluster Performance Testing

Author: 用户1148526
Published: 2021-12-07 12:31:17
Column: Hadoop数据仓库

Contents

I. Host Planning

II. GP Parameter Settings

III. Query Comparison

1. Create the database

2. Create the schema

3. Set the schema search path

4. Create tables

5. Load data

6. Queries

7. Load testing

IV. Data Loading

1. External tables with the file protocol

2. External tables with the gpfdist protocol

3. copy

V. Partitioning

VI. VACUUM

I. Host Planning

114.112.77.199   master, segment
210.73.209.103   standby master, segment
140.210.73.67    segment

II. GP Parameter Settings

gpconfig -c max_connections -v 2500 -m 500
gpconfig -c max_prepared_transactions -v 500
gpconfig -c shared_buffers -v 5GB -m 32GB
gpconfig -c effective_cache_size -v 16GB -m 96GB
gpconfig -c maintenance_work_mem -v 1280MB -m 2GB
gpconfig -c checkpoint_completion_target -v 0.9
gpconfig -c wal_buffers -v 16MB -m 16MB
gpconfig -c checkpoint_segments -v 32 --skipvalidation
gpconfig -c effective_io_concurrency -v 200
gpconfig -c default_statistics_target -v 100
gpconfig -c random_page_cost -v 1.1
gpconfig -c log_statement -v none
gpconfig -c gp_enable_global_deadlock_detector -v on

III. Query Comparison

The comparison baseline is a set of slow queries from the new_moments production read replica (MySQL). The moments_dynamic table has 79,309,341 rows and the relation table has 499,194,521 rows.

psql -U dwtest -h mdw

1. Create the database

create database dw;

2. Create the schema

\c dw
create schema new_moments;

3. Set the schema search path

alter database dw set search_path to space,new_moments,public,pg_catalog;

4. Create tables

set search_path to new_moments;

(1) moments_dynamic

create table moments_dynamic (
    id bigint not null,
    create_time timestamp default ('now'::text)::timestamp(0) not null,
    update_time timestamp not null,
    userid bigint not null,
    dynamic_type smallint not null,
    dynamic_sub_type smallint not null default -1,
    dynamic_extern_id bigint not null,
    dynamic_status smallint not null default 0,
    primary key (userid, id)
) distributed by (userid);

comment on table moments_dynamic is 'dynamics feed';
comment on column moments_dynamic.id is 'id';
comment on column moments_dynamic.create_time is 'publish time';
comment on column moments_dynamic.update_time is 'update time';
comment on column moments_dynamic.userid is 'user id';
comment on column moments_dynamic.dynamic_type is 'dynamic type: 1 work, 2 share, 3 live stream, 4 open karaoke room, 5 take the mic in a new karaoke room';
comment on column moments_dynamic.dynamic_sub_type is 'dynamic subtype; work: (-1), share: (1 work, 2 room, 3 live stream, 4 contest), live stream: (-1)';
comment on column moments_dynamic.dynamic_extern_id is 'associated external id; work: (space_ac.avid), share: (space_share.shareid), live stream: (liveinfo.liveid)';

create sequence seq_moments_dynamic start 200000000;
alter table moments_dynamic alter column id set default nextval('seq_moments_dynamic');

It is recommended to use either a single-column primary key (to avoid data skew) or a commonly joined column (to exploit segment-local computation) as the distribution key, and the distribution-key columns should never be updated; otherwise an update may "Raise error when affecting tuple moved into another segment".
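To illustrate why the key choice matters, here is a small hypothetical Python sketch (a toy stand-in, not Greenplum's actual hash function): rows are assigned to segments by hashing the distribution key, so a high-cardinality key such as userid spreads rows evenly, while a low-cardinality key piles them onto a few segments:

```python
from collections import Counter

NUM_SEGMENTS = 3  # matches the three-host plan above; value is illustrative only

def segment_for(key, num_segments=NUM_SEGMENTS):
    # Toy stand-in for Greenplum's distribution hash: key -> segment id
    return hash(key) % num_segments

# High-cardinality key (like userid): rows spread evenly across segments
even = Counter(segment_for(userid) for userid in range(30_000))

# Low-cardinality key (like a status flag): rows pile onto one or two segments
skewed = Counter(segment_for(status) for status in [0] * 29_000 + [1] * 1_000)

print(dict(even))    # each segment holds ~10,000 rows
print(dict(skewed))  # one segment holds 29,000 rows, one holds 1,000
```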

(2) relation

create sequence seq_relation start 600000000;
create table relation (
    id bigint not null default nextval('seq_relation'),
    create_time timestamp default ('now'::text)::timestamp(0) not null,
    update_time timestamp default ('now'::text)::timestamp(0) not null,
    userid bigint not null,
    target bigint not null,
    new_attetion smallint not null default 0,
    relation_type smallint not null default 0,
    primary key (userid, id)
) distributed by (userid);

5. Load data

/**********
mysqldump -u root -p123456 -S /data2/3306/mysqldata/mysql.sock -t -T /storage new_moments moments_dynamic relation --fields-terminated-by='|' --single-transaction
scp /storage/*.txt gpadmin@114.112.77.199:/data/
**********/

psql
\timing on
\pset pager off
\c dw
copy moments_dynamic from '/data/moments_dynamic.txt' with delimiter '|';
copy relation from '/data/relation.txt' with delimiter '|';
analyze moments_dynamic;
analyze relation;

If copy fails with the following error:
ERROR: invalid byte sequence for encoding "UTF8": 0x00
first strip the \0 sequences from the file:

cat audit_obj_detail_article.txt | sed 's/\\0//g' > audit_obj_detail_article.txt.1

A faster, better approach:

perl -p -i -e "s/\x5c0//g" audit_obj_detail_article.txt
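For reference, the same cleanup can be sketched in Python (an illustrative helper, not part of the original toolchain); like the sed/perl one-liners above, it removes the two-character backslash-zero sequence in place:

```python
from pathlib import Path

def strip_backslash_zero(path: str) -> None:
    """Remove literal '\\0' (backslash + zero) sequences in place."""
    p = Path(path)
    p.write_bytes(p.read_bytes().replace(b"\\0", b""))

# Demo on a throwaway file with an embedded backslash-zero field
Path("/tmp/demo.txt").write_bytes(b"910732086|1|\\0|2021-10-07\n")
strip_backslash_zero("/tmp/demo.txt")
print(Path("/tmp/demo.txt").read_bytes())  # b'910732086|1||2021-10-07\n'
```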

# Generate the files with Kettle, then load them
# Select JDK 1.8

alternatives --config java

# Run the transformation from the command line; the text file is generated on the GP master

./pan.sh -file:/root/ffacelocal.ktr

# Log in to GP as gpadmin
psql

-- Connect to the target database
\c dw

-- Load the text files into GP tables
copy fidmapping_zz from '/data/zz/fidmapping.txt' with delimiter E'\t';
VACUUM FREEZE analyze fidmapping_zz;

-- ERROR:  invalid input syntax for integer: ""
-- CONTEXT:  COPY files_zz, line 16, column lvtime: ""
copy files_zz from '/data/zz/files.txt' with (FORMAT CSV, delimiter E'\t', FORCE_NULL(lvtime));
VACUUM FREEZE analyze files_zz;

copy fidmapping_yc from '/data/yc/fidmapping.txt' with delimiter E'\t';
VACUUM FREEZE analyze fidmapping_yc;

copy files_yc from '/data/yc/files.txt' with (FORMAT CSV, delimiter E'\t', FORCE_NULL(lvtime));
VACUUM FREEZE analyze files_yc;

6. Queries

select userid, target, relation_type, update_time
  from relation
 where userid = 910732086
   and relation_type in (1, 2) 
 order by update_time desc
 limit 500;

Time: 144 ms; MySQL: 10600 ms

select userid, target, relation_type, update_time
  from relation
 where userid = 717600270
   and relation_type in (1, 2) 
 order by update_time desc
 limit 30;

Time: 44 ms; MySQL: 9210 ms

select a.* 
          from moments_dynamic a -- force index (idx_id_all)
          join (select target 
                  from relation r 
                 where r.userid = 909122568 
                   and (r.relation_type = 1 or r.relation_type = 2) 
                 union all 
                select 909122568) b 
            on a.userid=b.target
         where dynamic_status = 0
           and id > 183653373 
         order by id desc
         limit 80;

Time: 1182 ms; MySQL: 1180 ms (with force index (idx_id_all))

Original query:

select t2.* 
  from (select a.id 
          from moments_dynamic a -- force index (idx_user_all)
          join (select target 
                  from relation r 
                 where r.userid = 918046590 
                   and (r.relation_type = 1 or r.relation_type = 2) 
                 union all 
                select 918046590) b 
            on a.userid=b.target
         where  dynamic_status = 0
           and dynamic_type in (1, 6, 8, 11, 13) 
         order by id desc
         limit 80) t1 
  join moments_dynamic t2 
    on t1.id = t2.id 
 order by t2.id desc;

Time: 1508 ms

Optimized query:

select a.*
          from moments_dynamic a -- force index (idx_user_all)
          join (select target 
                  from relation r 
                 where r.userid = 918046590 
                   and (r.relation_type = 1 or r.relation_type = 2) 
                 union all 
                select 918046590) b 
            on a.userid=b.target
         where  dynamic_status = 0
           and dynamic_type in (1, 6, 8, 11, 13) 
         order by id desc
         limit 80;

Time: 75 ms; MySQL: 170 ms (with force index (idx_user_all))

7. Load testing

create table work_heat_user_operate (
    userid bigint not null default -1,
    avid bigint not null,
    playcount bigint not null default 0,
    praisecount bigint not null default 0,
    commentcount bigint not null default 0,
    sharecount bigint not null default 0,
    updatetime timestamp default ('now'::text)::timestamp(0) not null,
    primary key (userid, avid)
) distributed by (userid);

/**********
mysqldump -u root -p123456 -S /data1/3306/mysqldata/mysql.sock -t -T /data2 space work_heat_user_operate --fields-terminated-by='|' --single-transaction
scp /data2/work_heat_user_operate.txt gpadmin@114.112.77.199:/data/
**********/

psql
\timing on
\pset pager off
\c dw
copy work_heat_user_operate from '/data/work_heat_user_operate.txt' with delimiter '|';
analyze work_heat_user_operate;

-- Confirm whether the data is skewed

select gp_segment_id, pg_relation_size('work_heat_user_operate')
  from gp_dist_random('gp_id') order by 2 desc;

-- The four queries below are all slow; not recommended

select gp_segment_id,count(*) from work_heat_user_operate group by gp_segment_id;
SELECT 'Example Table' AS "Table Name", 
    max(c) AS "Max Seg Rows", min(c) AS "Min Seg Rows", 
    (max(c)-min(c))*100.0/max(c) AS "Percentage Difference Between Max & Min" 
FROM (SELECT count(*) c, gp_segment_id FROM files_zz GROUP BY 2) AS a;
-- The skccoeff column is the skew coefficient (standard deviation relative to the mean); smaller is better
select * from gp_toolkit.gp_skew_coefficients; 
-- The siffraction column is the fraction of table-scan time spent idle; smaller is better, and above 10% the distribution key should be re-evaluated
select * from gp_toolkit.gp_skew_idle_fractions; 
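Both gp_toolkit views reduce to simple statistics over per-segment row counts. A sketch of the same metrics on hypothetical counts (the formulas approximate what the views and the "Percentage Difference" query above report):

```python
import statistics

# Hypothetical per-segment row counts for a table on 4 primaries
seg_rows = [1_050_000, 980_000, 1_020_000, 310_000]

# Roughly what gp_skew_coefficients reports: variability relative to the mean
skew_coeff = statistics.pstdev(seg_rows) / statistics.mean(seg_rows) * 100

# The "Percentage Difference Between Max & Min" from the query above
pct_diff = (max(seg_rows) - min(seg_rows)) * 100.0 / max(seg_rows)

print(f"skew coefficient ~ {skew_coeff:.1f}%, max/min difference = {pct_diff:.1f}%")
```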

-- Processing (computational) skew

select * from gp_toolkit.gp_workfile_usage_per_segment;
select * from gp_toolkit.gp_workfile_usage_per_query;

-- The official documentation's procedure for checking processing skew (https://docs.greenplum.org/6-14/admin_guide/distribution.html)

1. Find the database OID:

SELECT oid, datname FROM pg_database;

2. Check spill file sizes on all segments. If disk usage shows a significant and sustained difference, investigate the currently running queries for possible skew.

gpssh -f ~/all_host -e \
    "du -b /data[1-2]/primary/gpseg*/base/16385/pgsql_tmp/*" | \
    grep -v "du -b" | sort | awk -F" " '{ arr[$1] = arr[$1] + $2 ; tot = tot + $2 }; END \
    { for ( i in arr ) print "Segment node" i, arr[i], "bytes (" arr[i]/(1024**3)" GB)"; \
    print "Total", tot, "bytes (" tot/(1024**3)" GB)" }' -
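The awk pipeline above simply sums the du byte counts per segment host. The same aggregation, sketched in Python over hypothetical gpssh output lines:

```python
from collections import defaultdict

# Hypothetical gpssh output: "[host] <bytes> <path>" per spill file
lines = [
    "[sdw1] 1073741824 /data1/primary/gpseg0/base/16385/pgsql_tmp/pgsql_tmp_slice10_sort_15673_0002.1",
    "[sdw1] 536870912 /data2/primary/gpseg1/base/16385/pgsql_tmp/pgsql_tmp_slice10_sort_15673_0003.1",
    "[sdw2] 2147483648 /data1/primary/gpseg2/base/16385/pgsql_tmp/pgsql_tmp_slice10_sort_15674_0002.1",
]

per_host = defaultdict(int)
total = 0
for line in lines:
    host, nbytes, _path = line.split(None, 2)  # split on whitespace, keep path whole
    per_host[host] += int(nbytes)
    total += int(nbytes)

for host, b in per_host.items():
    print(f"Segment node {host} {b} bytes ({b / 1024**3:.2f} GB)")
print(f"Total {total} bytes ({total / 1024**3:.2f} GB)")
```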

3. Find the actual segment directories. This example looks specifically for sort files; not every spill file or skew case is caused by sort files, so customize the command as needed.

gpssh -f ~/all_host -e \
    "ls -l /data[1-2]/primary/gpseg*/base/16385/pgsql_tmp/*" \
    | grep -i sort | awk '{sub(/base.*tmp\//, ".../", $10); print $1,$6,$10}' | sort -k2 -n

4. ssh into the problem node and use lsof to find the PID of the process holding one of the sort files:

lsof /data1/primary/gpseg0/base/pgsql_tmp/pgsql_tmp_slice10_sort_15673_0002.1

5. Use ps with that PID to identify the database connection information:

ps -eaf | grep 15673

6. Check the pg_log files. The log line containing the command and connection details from the previous step's output should also contain the query. If the query is still running, the last query of that user and connection is the offending one. In nearly all cases, the remedy for processing skew is to rewrite the query; creating a temporary table can eliminate the skew, and the temporary table can be distributed randomly to force a two-stage aggregation.
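The randomly distributed temporary table works because it forces a two-stage aggregation: each segment aggregates its random share of a hot key locally, and the partial results are then merged, so no single segment processes all rows of the hot key. A toy sketch of the idea (hypothetical data, outside the database):

```python
import random
from collections import Counter

random.seed(42)  # deterministic demo
rows = ["hot_user"] * 9_000 + [f"user_{i}" for i in range(1_000)]  # one hot key

NUM_SEGMENTS = 4

# Stage 1: rows land on random "segments"; each aggregates its share locally
partials = [Counter() for _ in range(NUM_SEGMENTS)]
for key in rows:
    partials[random.randrange(NUM_SEGMENTS)][key] += 1

# Stage 2: merge the partial aggregates into the final result
final = Counter()
for p in partials:
    final.update(p)

max_local = max(p["hot_user"] for p in partials)
print(final["hot_user"], max_local)  # the hot key's 9,000 rows were split across segments
```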

-- View lock-wait information
select * from pg_catalog.gp_dist_wait_status();

-- Define a shortcut for listing active sessions
\set active_session 'select pid,usename,datname,application_name,client_addr,age(clock_timestamp(), query_start),query from pg_stat_activity where pid<>pg_backend_pid() and state=\'active\' order by query_start desc;'

IV. Data Loading

1. External tables with the file protocol

# Create a schema for external tables
create schema ext;

# Hex code of the delimiter
select to_hex(ascii('|'));

# Create the external table
\c dw gpadmin
set search_path to ext;
create external table test_ext (
    userid bigint,
    avid bigint,
    playcount bigint,
    praisecount bigint,
    commentcount bigint,
    sharecount bigint,
    updatetime timestamp
) location ('file://mdw/data/work_heat_user_operate.txt')
format 'text' (delimiter E'\x7c' null '\N');

# Load the data
insert into space.work_heat_user_operate select * from ext.test_ext;

Loading 1.3 billion rows (57 GB) this way took 4,370 seconds; copy took 1,571 seconds.
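A quick throughput comparison of the two load paths, using the figures above:

```python
size_gb = 57
rows = 1_300_000_000

for method, seconds in [("file external table", 4370), ("copy", 1571)]:
    mb_per_s = size_gb * 1024 / seconds
    rows_per_s = rows / seconds
    print(f"{method}: {mb_per_s:.1f} MB/s, {rows_per_s:,.0f} rows/s")
```

copy comes out roughly 2.8x faster on this data set (about 37 MB/s vs 13 MB/s).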

2. External tables with the gpfdist protocol

# Start the gpfdist service
nohup gpfdist -p 8080 -d /../ &

# Create the external table
\c dw gpadmin
set search_path to ext;
create external table test_ext1 (
    userid bigint,
    avid bigint,
    playcount bigint,
    praisecount bigint,
    commentcount bigint,
    sharecount bigint,
    updatetime timestamp
) location ('gpfdist://mdw:8080/data/work_heat_user_operate.txt')
format 'text' (delimiter '|');

# Load the data
insert into space.work_heat_user_operate select * from ext.test_ext1;

Time: 4370655.578 ms

3. copy

# Stream a table between clusters
psql -h src -d srcdb -c 'copy test to stdout' | psql -h des -d desdb -c 'copy test from stdin'

-- Unlike the copy command, which reads files on the master, \copy reads from the client's local filesystem
\copy test from '/tmp/file0' delimiter '|';

-- Specify the format explicitly
copy test from '/tmp/file0' with (format csv, delimiter '|', encoding 'latin1');

-- Export
copy (select * from test where a < 100) to '/tmp/file100.txt';
\copy (select * from test where a < 100) to '/tmp/file100.txt';

V. Partitioning

1. Use only one level of partitioning.
2. Use only range partitioning.
3. Partition only when each primary holds more than 5 million rows; e.g., with 24 primaries, consider partitioning only once the table exceeds 120 million rows.
4. Each leaf partition should hold roughly 1 to 10 million rows per primary; e.g., with 24 primaries, each leaf partition should hold at least 24 million rows.
5. Queries must include the partitioning condition so partition elimination can be used.
6. Avoid indexes on partitioned tables, and never index the partition column.
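These sizing rules are simple arithmetic; a hypothetical helper (the function name is mine, the thresholds are taken from the rules above) makes them checkable:

```python
def partitioning_advice(total_rows: int, num_primaries: int, leaf_partitions: int) -> list:
    """Apply the rule-of-thumb thresholds above: partition only beyond 5M rows
    per primary, and keep each leaf at 1M-10M rows per primary."""
    advice = []
    if total_rows / num_primaries < 5_000_000:
        advice.append("table too small to partition")
    leaf_rows_per_primary = total_rows / leaf_partitions / num_primaries
    if leaf_rows_per_primary < 1_000_000:
        advice.append("too many leaf partitions")
    elif leaf_rows_per_primary > 10_000_000:
        advice.append("too few leaf partitions")
    return advice

# 120M rows on 24 primaries split into 10 leaves -> 500k rows per leaf per primary
print(partitioning_advice(120_000_000, 24, 10))  # ['too many leaf partitions']
```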

VI. VACUUM

Run during the daily off-peak window:

-- Reclaim space held by expired rows, and freeze transaction IDs to prevent XID wraparound failures
VACUUM FREEZE tablename;

-- Analyze the table so the planner generates good plans
ANALYZE tablename;

Query dependencies on a table:

SELECT v.oid::regclass AS view,
       d.refobjid::regclass AS ref_object -- name of table
       -- d.refobjid::regproc AS ref_object -- name of function
  FROM pg_depend AS d -- objects that depend on a table
  JOIN pg_rewrite AS r -- rules depending on a table
    ON r.oid = d.objid
  JOIN pg_class AS v -- views for the rules
    ON v.oid = r.ev_class
 WHERE v.relkind = 'v' -- filter views only
   -- dependency must be a rule depending on a relation
   AND d.classid = 'pg_rewrite'::regclass
   AND d.deptype = 'n' -- normal dependency
   -- qualify object
   AND d.refclassid = 'pg_class'::regclass -- dependent table
   AND d.refobjid = 't1'::regclass
   -- AND d.refclassid = 'pg_proc'::regclass -- dependent function
   -- AND d.refobjid = 'f'::regproc
;

Query dependencies on a function:

SELECT v.oid::regclass AS view,
       -- d.refobjid::regclass AS ref_object -- name of table
       d.refobjid::regproc AS ref_object -- name of function
  FROM pg_depend AS d -- objects that depend on a table
  JOIN pg_rewrite AS r -- rules depending on a table
    ON r.oid = d.objid
  JOIN pg_class AS v -- views for the rules
    ON v.oid = r.ev_class
 WHERE v.relkind = 'v' -- filter views only
   -- dependency must be a rule depending on a relation
   AND d.classid = 'pg_rewrite'::regclass
   AND d.deptype = 'n' -- normal dependency
   -- qualify object
   -- AND d.refclassid = 'pg_class'::regclass -- dependent table
   -- AND d.refobjid = 't1'::regclass
   AND d.refclassid = 'pg_proc'::regclass -- dependent function
   AND d.refobjid = 'f'::regproc
;

Query dependencies on a table column:

SELECT v.oid::regclass AS view,
       d.refobjid::regclass AS ref_object, -- name of table
       a.attname AS col_name -- column name
  FROM pg_attribute AS a -- columns for a table
  JOIN pg_depend AS d -- objects that depend on a column
    ON d.refobjsubid = a.attnum AND d.refobjid = a.attrelid
  JOIN pg_rewrite AS r -- rules depending on the column
    ON r.oid = d.objid
  JOIN pg_class AS v -- views for the rules
    ON v.oid = r.ev_class
 WHERE v.relkind = 'v' -- filter views only
   -- dependency must be a rule depending on a relation
   AND d.classid = 'pg_rewrite'::regclass
   AND d.refclassid = 'pg_class'::regclass
   AND d.deptype = 'n' -- normal dependency
   AND a.attrelid = 't1'::regclass
   AND a.attname = 'id'
;

List the schema each view belongs to:

SELECT v.oid::regclass AS view,
       ns.nspname AS schema, -- view schema
       d.refobjid::regclass AS ref_object -- name of table
  FROM pg_depend AS d -- objects that depend on a table
  JOIN pg_rewrite AS r -- rules depending on a table
    ON r.oid = d.objid
  JOIN pg_class AS v -- views for the rules
    ON v.oid = r.ev_class
  JOIN pg_namespace AS ns -- schema information
    ON ns.oid = v.relnamespace
 WHERE v.relkind = 'v' -- filter views only
   -- dependency must be a rule depending on a relation
   AND d.classid = 'pg_rewrite'::regclass
   AND d.refclassid = 'pg_class'::regclass -- referenced objects in pg_class (tables and views)
   AND d.deptype = 'n' -- normal dependency
   -- qualify object
   AND ns.nspname NOT IN ('pg_catalog', 'information_schema', 'gp_toolkit') -- system schemas
   AND NOT (v.oid = d.refobjid) -- not self-referencing dependency
;

List view definitions:

SELECT v.relname AS view,
       d.refobjid::regclass AS ref_object,
       d.refobjsubid AS ref_col,
       'CREATE VIEW ' || v.relname || ' AS ' || pg_get_viewdef(v.oid) AS view_def
  FROM pg_depend AS d
  JOIN pg_rewrite AS r
    ON r.oid = d.objid
  JOIN pg_class AS v
    ON v.oid = r.ev_class
 WHERE NOT (v.oid = d.refobjid)
   AND d.refobjid = 't1'::regclass
 ORDER BY d.refobjsubid
;

List nested views:

WITH views AS (
    SELECT v.relname AS view,
           d.refobjid AS ref_object,
           v.oid AS view_oid,
           ns.nspname AS namespace
      FROM pg_depend AS d
      JOIN pg_rewrite AS r
        ON r.oid = d.objid
      JOIN pg_class AS v
        ON v.oid = r.ev_class
      JOIN pg_namespace AS ns
        ON ns.oid = v.relnamespace
     WHERE v.relkind = 'v'
       AND ns.nspname NOT IN ('pg_catalog', 'information_schema', 'gp_toolkit') -- exclude system schemas
       AND d.deptype = 'n' -- normal dependency
       AND NOT (v.oid = d.refobjid) -- not a self-referencing dependency
)
SELECT views.view,
       views.namespace AS schema,
       views.ref_object::regclass AS ref_view,
       ref_nspace.nspname AS ref_schema
  FROM views
  JOIN pg_depend AS dep
    ON dep.refobjid = views.view_oid
  JOIN pg_class AS class
    ON views.ref_object = class.oid
  JOIN pg_namespace AS ref_nspace
    ON class.relnamespace = ref_nspace.oid
 WHERE class.relkind = 'v'
   AND dep.deptype = 'n'
;

This article was shared from the author's personal site/blog; originally published 2021/10/07.
