
Zabbix Partition Optimization

xiaodongpan

The biggest bottleneck when running Zabbix is the database: keep Zabbix's data storage and alerting healthy and it works well as the backbone of a monitoring system. Zabbix stores its collected data mainly in the history and trends tables, and as time goes by these tables grow huge, performance degrades badly, and day-to-day monitoring suffers. Tuning MySQL can therefore improve Zabbix dramatically; this article does it by partitioning the relevant MySQL tables.

Principle

Partition the history and trends tables (and their variants) in the zabbix database by date, one partition per day, retaining 90 days of partitions (the script below actually keeps 90 days for the history* tables and 730 days for trends*).
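To make the target layout concrete, here is a minimal sketch (hypothetical partition names and dates, not the exact statement the stored procedures below generate): each table is RANGE-partitioned on its clock column, with one partition per day named pYYYYMMDD0000 and bounded by the Unix timestamp of the following midnight.

-- Illustration only: three hypothetical daily partitions on history.
ALTER TABLE history PARTITION BY RANGE (`clock`) (
    PARTITION p201709050000 VALUES LESS THAN (UNIX_TIMESTAMP('2017-09-06 00:00:00')),
    PARTITION p201709060000 VALUES LESS THAN (UNIX_TIMESTAMP('2017-09-07 00:00:00')),
    PARTITION p201709070000 VALUES LESS THAN (UNIX_TIMESTAMP('2017-09-08 00:00:00'))
);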

Detailed procedure

Operational impact: the change can be made online. While it runs, MySQL reads and writes slow down and Zabbix becomes sluggish; how long this lasts depends on the amount of data, typically around two hours.

Step 1

Log in to the Zabbix server's database host and standardize the MySQL configuration:

cat > /etc/my.cnf<<EOF
[mysqld]
datadir=/data/mysql
socket=/var/lib/mysql/mysql.sock
default-storage-engine = innodb
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
symbolic-links=0
max_connections=4096
innodb_buffer_pool_size=12G
max_allowed_packet = 32M
join_buffer_size=2M
sort_buffer_size=2M  
query_cache_size = 64M   
query_cache_limit = 4M    
thread_concurrency = 8
table_open_cache=1024
innodb_flush_log_at_trx_commit = 0

long_query_time = 1
log-slow-queries =/data/mysql/mysql-slow.log  

[mysqld_safe]
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid

#[mysql]
#socket=/data/mysql/mysql.sock
#
# include all files from the config directory
#
!includedir /etc/my.cnf.d
EOF

Note: be sure to set innodb_buffer_pool_size to roughly one third of physical memory.
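As a quick sanity check (a sketch assuming a CentOS 7 style host running MariaDB, which is what the log and pid paths in the config above suggest), you can read total memory from /proc/meminfo and then restart the service so the new my.cnf takes effect:

# Print total RAM in MB and a suggested buffer pool size of roughly one third of it.
awk '/MemTotal/ {printf "MemTotal: %d MB, suggested innodb_buffer_pool_size: %dM\n", $2/1024, $2/1024/3}' /proc/meminfo

# Restart the database so the new settings are applied (service name is an assumption).
systemctl restart mariadb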

Step 2

First confirm the Zabbix version: this procedure requires Zabbix 3.2.0 or later and must not be applied to versions below 3.2. The environment assumed here runs zabbix-3.2.6.
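One way to confirm the schema version from inside MySQL is to query Zabbix's dbversion table; a 3.2.x schema is expected to report a mandatory version of 3020000 (treat this as a sanity check rather than authoritative documentation):

-- Shows the mandatory/optional schema versions of the installed Zabbix database.
SELECT mandatory, optional FROM zabbix.dbversion;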

a. Import the stored procedures

#cat  partition.sql
DELIMITER $$
CREATE PROCEDURE `partition_create`(SCHEMANAME varchar(64), TABLENAME varchar(64), PARTITIONNAME varchar(64), CLOCK int)
BEGIN
       /*
          SCHEMANAME = The DB schema in which to make changes
          TABLENAME = The table to add the new partition to
          PARTITIONNAME = The name of the partition to create
       */
       /*
          Verify that the partition does not already exist
       */

       DECLARE RETROWS INT;
       SELECT COUNT(1) INTO RETROWS
       FROM information_schema.partitions
       WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND partition_description >= CLOCK;

       IF RETROWS = 0 THEN
               /*
                    1. Print a message indicating that a partition was created.
                    2. Create the SQL to create the partition.
                   3. Execute the SQL from #2.
               */
               SELECT CONCAT( "partition_create(", SCHEMANAME, ",",TABLENAME, ",", PARTITIONNAME, ",", CLOCK, ")" )AS msg;
               SET @sql = CONCAT( 'ALTER TABLE ', SCHEMANAME, '.', TABLENAME, ' ADDPARTITION (PARTITION ', PARTITIONNAME, ' VALUES LESS THAN (', CLOCK, '));' );
               PREPARE STMT FROM @sql;
               EXECUTE STMT;
               DEALLOCATE PREPARE STMT;
       END IF;
END$$
DELIMITER ;
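For reference, a hypothetical manual invocation of partition_create looks like this; in normal operation partition_maintenance (defined further down) computes the name and boundary itself:

-- Create the daily partition covering 2017-09-13 on zabbix.history:
-- rows with clock below the next midnight's Unix timestamp land in p201709130000.
CALL partition_create('zabbix', 'history', 'p201709130000', UNIX_TIMESTAMP('2017-09-14 00:00:00'));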

DELIMITER $$
CREATE PROCEDURE `partition_drop`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), DELETE_BELOW_PARTITION_DATE BIGINT)
BEGIN
       /*
           SCHEMANAME = The DB schema in which to make changes
           TABLENAME = The table with partitions to potentially delete
           DELETE_BELOW_PARTITION_DATE = Delete any partitions with names that are dates older than this one (yyyy-mm-dd)
        */
       DECLARE done INT DEFAULT FALSE;
       DECLARE drop_part_name VARCHAR(16);

       /*
          Get a list of all the partitions that are older than the date
          in DELETE_BELOW_PARTITION_DATE. All partitions are prefixed with
           a "p", so use SUBSTRING TOget rid of that character.
       */
       DECLARE myCursor CURSOR FOR
               SELECT partition_name
               FROM information_schema.partitions
                WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND CAST(SUBSTRING(partition_name FROM 2) AS UNSIGNED) < DELETE_BELOW_PARTITION_DATE;
       DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

       /*
          Create the basics for when we need to drop the partition.  Also, create
          @drop_partitions to hold a comma-delimited list of all partitions that
          should be deleted.
       */
       SET @alter_header = CONCAT("ALTER TABLE ", SCHEMANAME,".", TABLENAME, " DROP PARTITION ");
       SET @drop_partitions = "";

       /*
          Start looping through all the partitions that are too old.
       */
       OPEN myCursor;
       read_loop: LOOP
               FETCH myCursor INTO drop_part_name;
               IF done THEN
                        LEAVE read_loop;
               END IF;
                SET @drop_partitions = IF(@drop_partitions = "", drop_part_name, CONCAT(@drop_partitions, ",", drop_part_name));
       END LOOP;
       IF @drop_partitions != "" THEN
                /*
                    1. Build the SQL to drop all the necessary partitions.
                    2. Run the SQL to drop the partitions.
                    3. Print out the table partitions that were deleted.
               */
               SET @full_sql = CONCAT(@alter_header, @drop_partitions, ";");
               PREPARE STMT FROM @full_sql;
               EXECUTE STMT;
               DEALLOCATE PREPARE STMT;

               SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`,@drop_partitions AS `partitions_deleted`;
       ELSE
               /*
                    No partitions are being deleted, so print out "N/A" (Not applicable) to indicate
                   that no changes were made.
               */
               SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`,"N/A" AS `partitions_deleted`;
       END IF;
END$$
DELIMITER ;
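Likewise, a hypothetical manual call to partition_drop: the third argument is a cutoff date written as yyyymmdd0000, and every partition whose name encodes an older date is dropped in a single ALTER TABLE:

-- Drop all daily partitions of zabbix.history older than 2017-06-15
-- (partition names are compared numerically after stripping the leading "p").
CALL partition_drop('zabbix', 'history', 201706150000);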


DELIMITER $$
CREATE PROCEDURE `partition_maintenance`(SCHEMA_NAME VARCHAR(32), TABLE_NAME VARCHAR(32), KEEP_DATA_DAYS INT, HOURLY_INTERVAL INT, CREATE_NEXT_INTERVALS INT)
BEGIN
       DECLARE OLDER_THAN_PARTITION_DATE VARCHAR(16);
       DECLARE PARTITION_NAME VARCHAR(16);
       DECLARE OLD_PARTITION_NAME VARCHAR(16);
       DECLARE LESS_THAN_TIMESTAMP INT;
       DECLARE CUR_TIME INT;

        CALL partition_verify(SCHEMA_NAME, TABLE_NAME, HOURLY_INTERVAL);
       SET CUR_TIME = UNIX_TIMESTAMP(DATE_FORMAT(NOW(), '%Y-%m-%d 00:00:00'));

       SET @__interval = 1;
       create_loop: LOOP
               IF @__interval > CREATE_NEXT_INTERVALS THEN
                        LEAVE create_loop;
               END IF;

               SET LESS_THAN_TIMESTAMP = CUR_TIME + (HOURLY_INTERVAL * @__interval *3600);
               SET PARTITION_NAME = FROM_UNIXTIME(CUR_TIME + HOURLY_INTERVAL *(@__interval - 1) * 3600, 'p%Y%m%d%H00');
               IF(PARTITION_NAME != OLD_PARTITION_NAME) THEN
                         CALL partition_create(SCHEMA_NAME, TABLE_NAME, PARTITION_NAME, LESS_THAN_TIMESTAMP);
               END IF;
               SET @__interval=@__interval+1;
               SET OLD_PARTITION_NAME = PARTITION_NAME;
       END LOOP;

       SET OLDER_THAN_PARTITION_DATE = DATE_FORMAT(DATE_SUB(NOW(), INTERVAL KEEP_DATA_DAYS DAY), '%Y%m%d0000');
       CALL partition_drop(SCHEMA_NAME, TABLE_NAME, OLDER_THAN_PARTITION_DATE);

END$$
DELIMITER ;

DELIMITER $$
CREATE PROCEDURE `partition_verify`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), HOURLYINTERVAL INT(11))
BEGIN
       DECLARE PARTITION_NAME VARCHAR(16);
       DECLARE RETROWS INT(11);
       DECLARE FUTURE_TIMESTAMP TIMESTAMP;

       /*
        * Check if any partitions exist for the given SCHEMANAME.TABLENAME.
        */
       SELECT COUNT(1) INTO RETROWS
       FROM information_schema.partitions
       WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND partition_name IS NULL;

       /*
        * If partitions do not exist, go ahead and partition the table
        */
        IF RETROWS = 1 THEN
               /*
                 * Take the current date at 00:00:00 and add HOURLYINTERVAL to it.  This is the timestamp below which we will store values.
                 * We begin partitioning based on the beginning of a day.  This is because we don't want to generate a random partition
                 * that won't necessarily fall in line with the desired partition naming (ie: if the hour interval is 24 hours, we could
                 * end up creating a partition now named "p201403270600" when all other partitions will be like "p201403280000").
                 */
                SET FUTURE_TIMESTAMP = TIMESTAMPADD(HOUR, HOURLYINTERVAL, CONCAT(CURDATE(), " ", '00:00:00'));
               SET PARTITION_NAME = DATE_FORMAT(CURDATE(), 'p%Y%m%d%H00');

               -- Create the partitioning query
               SET @__PARTITION_SQL = CONCAT("ALTER TABLE ", SCHEMANAME,".", TABLENAME, " PARTITION BY RANGE(`clock`)");
               SET @__PARTITION_SQL = CONCAT(@__PARTITION_SQL, "(PARTITION ",PARTITION_NAME, " VALUES LESS THAN (",UNIX_TIMESTAMP(FUTURE_TIMESTAMP), "));");

               -- Run the partitioning query
               PREPARE STMT FROM @__PARTITION_SQL;
               EXECUTE STMT;
               DEALLOCATE PREPARE STMT;
       END IF;
END$$
DELIMITER ;

DELIMITER $$
CREATE PROCEDURE `partition_maintenance_all`(SCHEMA_NAME VARCHAR(32))
BEGIN
               CALL partition_maintenance(SCHEMA_NAME, 'history', 90, 24, 14);
               CALL partition_maintenance(SCHEMA_NAME, 'history_log', 90, 24, 14);
               CALL partition_maintenance(SCHEMA_NAME, 'history_str', 90, 24, 14);
               CALL partition_maintenance(SCHEMA_NAME, 'history_text', 90, 24, 14);
                CALL partition_maintenance(SCHEMA_NAME, 'history_uint', 90, 24, 14);
               CALL partition_maintenance(SCHEMA_NAME, 'trends', 730, 24, 14);
               CALL partition_maintenance(SCHEMA_NAME, 'trends_uint', 730, 24, 14);
END$$
DELIMITER ;

The statements above define the stored procedures that create and rotate the partitions. Save them as partition.sql and load them into the zabbix database:

mysql  -uzabbix -pzabbix zabbix  < partition.sql
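For reference, the five arguments of partition_maintenance are: schema name, table name, days of data to keep, partition interval in hours, and the number of future partitions to pre-create. If you need a retention other than the values wired into partition_maintenance_all, the procedure can also be called directly; the numbers below are just an example:

-- Example only: keep 30 days of history_uint, daily partitions, 14 future partitions pre-created.
CALL partition_maintenance('zabbix', 'history_uint', 30, 24, 14);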

b. Add a crontab entry that runs every day at 01:01:

crontab  -l > crontab.txt 
cat >> crontab.txt <<EOF
#zabbix partition_maintenance
01 01 * * *  mysql -uzabbix -pzabbix zabbix -e "CALL partition_maintenance_all('zabbix')" > /dev/null 2>&1
EOF
cat crontab.txt | crontab -

Note: adjust the password of the MySQL zabbix user to match your environment.

c. Run the maintenance once by hand (the first run takes a long time, so run it under nohup):

nohup mysql -uzabbix -pzabbix zabbix -e "CALL partition_maintenance_all('zabbix')" &> /root/partition.log &

Note: watch the output written to /root/partition.log.

d. Check the result

Log in to MySQL and inspect the history table (and the other partitioned tables):

MariaDB [zabbix]> show create table history;
| history | CREATE TABLE `history` (
  `itemid` bigint(20) unsigned NOT NULL,
  `clock` int(11) NOT NULL DEFAULT '0',
  `value` double(16,4) NOT NULL DEFAULT '0.0000',
  `ns` int(11) NOT NULL DEFAULT '0',
  KEY `history_1` (`itemid`,`clock`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
/*!50100 PARTITION BY RANGE (`clock`)
(PARTITION p201708280000 VALUES LESS THAN (1503936000) ENGINE = InnoDB,
 PARTITION p201708290000 VALUES LESS THAN (1504022400) ENGINE = InnoDB,
 PARTITION p201708300000 VALUES LESS THAN (1504108800) ENGINE = InnoDB,
 PARTITION p201708310000 VALUES LESS THAN (1504195200) ENGINE = InnoDB,
 PARTITION p201709010000 VALUES LESS THAN (1504281600) ENGINE = InnoDB,
 PARTITION p201709020000 VALUES LESS THAN (1504368000) ENGINE = InnoDB,
 PARTITION p201709030000 VALUES LESS THAN (1504454400) ENGINE = InnoDB,
 PARTITION p201709040000 VALUES LESS THAN (1504540800) ENGINE = InnoDB,
 PARTITION p201709050000 VALUES LESS THAN (1504627200) ENGINE = InnoDB,
 PARTITION p201709060000 VALUES LESS THAN (1504713600) ENGINE = InnoDB,
 PARTITION p201709070000 VALUES LESS THAN (1504800000) ENGINE = InnoDB,
 PARTITION p201709080000 VALUES LESS THAN (1504886400) ENGINE = InnoDB,
 PARTITION p201709090000 VALUES LESS THAN (1504972800) ENGINE = InnoDB,
 PARTITION p201709100000 VALUES LESS THAN (1505059200) ENGINE = InnoDB,
 PARTITION p201709110000 VALUES LESS THAN (1505145600) ENGINE = InnoDB) */ |

The output now contains a long list of PARTITION definitions, which confirms the configuration took effect. Keep watching the MySQL slow query log: usually by the day after the change the slow queries have almost disappeared, and the Zabbix dashboard should feel very responsive.
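To keep an eye on the rotation afterwards, information_schema can be queried directly; this lists every partition of history together with its upper clock bound and an approximate row count (the counts come from InnoDB statistics, so they are estimates):

SELECT partition_name, partition_description AS upper_clock, table_rows
FROM information_schema.partitions
WHERE table_schema = 'zabbix' AND table_name = 'history'
ORDER BY partition_ordinal_position;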
