
PostgreSQL crashes during bulk inserts using Node.js/Sequelize

Stack Overflow user
Asked on 2020-01-18 01:39:32
2 answers · 1.4K views · 0 followers · score 2

A Node.js application using the Sequelize.js ORM is performing bulk inserts against a PostgreSQL 11.2 server running in a Docker container on a Mac host system. Each bulk insert usually consists of about 1000-4000 rows, and the bulk-insert concurrency is 30, so there are at most 30 active insert operations at any given time.

Code language: javascript
const bulkInsert = async (payload) => {
    try {
        await sequelizeModelInstance.bulkCreate(payload);
    } catch (e) {
        console.log(e);
    }
}

const pLimit = require('p-limit')(30);

(async () => {
    const promises = data.map(d => pLimit(() => bulkInsert(d))); // pLimit() controls Promise concurrency
    const result = await Promise.all(promises);
})();

After some time, the PostgreSQL server starts returning the error "Connection terminated unexpectedly", followed by "the database system is in recovery mode".

After repeating this a few times and checking my logs, the error typically seems to occur when a batch of 30 bulk inserts is being executed and several of those inserts contain more than 100,000 rows each. For example, one particular crash happened while attempting three bulk inserts of 190,000, 650,000 and 150,000 rows alongside 27 inserts of 1000-4000 rows each.

System memory is not full, CPU load is normal, and there is plenty of free disk space.

Question: Is it normal to expect PostgreSQL to crash in this situation? If so, is there a PostgreSQL setting we can tune to allow larger bulk inserts? If this is caused by the large bulk inserts, does Sequelize.js have a feature that splits the bulk inserts up for us?

Running PostgreSQL 11.2 in a Docker container, TimescaleDB 1.5.1, Node v12.6.0, Sequelize 5.21.3, macOS 10.15.2.

PostgreSQL logs from immediately after the problem occurred:

2020-01-18 00:58:26.094 UTC [1] LOG:  server process (PID 199) was terminated by signal 9
2020-01-18 00:58:26.094 UTC [1] DETAIL:  Failed process was running: INSERT INTO "foo" ("id","opId","unix","side","price","amount","b","s","serverTimestamp") VALUES (89880,'5007564','1579219200961','front','0.0000784','35','undefined','undefined','2020-01-17 00:00:01.038 +00:00'),.........
2020-01-18 00:58:26.108 UTC [1] LOG:  terminating any other active server processes
2020-01-18 00:58:26.110 UTC [220] WARNING:  terminating connection because of crash of another server process
2020-01-18 00:58:26.110 UTC [220] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2020-01-18 00:58:26.110 UTC [220] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2020-01-18 00:58:26.148 UTC [214] WARNING:  terminating connection because of crash of another server process
2020-01-18 00:58:26.148 UTC [214] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2020-01-18 00:58:26.148 UTC [214] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2020-01-18 00:58:26.149 UTC [203] WARNING:  terminating connection because of crash of another server process
2020-01-18 00:58:26.149 UTC [203] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.

...

2020-01-18 00:58:30.098 UTC [1] LOG:  all server processes terminated; reinitializing
2020-01-18 00:58:30.240 UTC [223] FATAL:  the database system is in recovery mode
2020-01-18 00:58:30.241 UTC [222] LOG:  database system was interrupted; last known up at 2020-01-18 00:50:13 UTC
2020-01-18 00:58:30.864 UTC [224] FATAL:  the database system is in recovery mode
2020-01-18 00:58:31.604 UTC [225] FATAL:  the database system is in recovery mode
2020-01-18 00:58:32.297 UTC [226] FATAL:  the database system is in recovery mode
2020-01-18 00:58:32.894 UTC [227] FATAL:  the database system is in recovery mode
2020-01-18 00:58:33.394 UTC [228] FATAL:  the database system is in recovery mode
2020-01-18 01:00:55.911 UTC [222] LOG:  database system was not properly shut down; automatic recovery in progress
2020-01-18 01:00:56.856 UTC [222] LOG:  redo starts at 0/197C610
2020-01-18 01:01:55.662 UTC [229] FATAL:  the database system is in recovery mode

2 Answers

Stack Overflow user

Answer accepted

Posted on 2020-01-20 13:46:46

Your PostgreSQL server is probably being killed by the OOM killer (Out Of Memory killer) in Docker.

You can:

  1. Increase the memory available to Postgres; 2GB is a low value for the volume of operations you are running.
  2. Reduce the size of the bulk inserts and limit their concurrency (a sketch of this follows below the tuning notes).
  3. Tune your Postgres installation to fit your hardware:
- shared_buffers: As [recommended here](https://www.postgresql.org/docs/9.4/runtime-config-resource.html), this should be 25% of the memory in your system. This is only a recommendation; you should always benchmark, evaluate your scenario, and choose the values that fit your environment.
- work_mem: As explained [here](https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server):

This size is applied to each and every sort done by each user, and complex queries can use multiple working-memory sort buffers. Set it to 50MB and have 30 users submitting queries, and you are soon using 1.5GB of real memory. Furthermore, if a query involves merge sorts of 8 tables, that requires 8 times work_mem. You need to consider what you set max_connections to in order to size this parameter correctly. This is a setting where data warehouse systems, where users are submitting very large queries, can readily use many gigabytes of memory.

There are many configuration changes you can make in Postgres to improve performance and stability. Postgres will run fine with the default settings, but in production or heavy-load environments you will inevitably have to tune it.
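
For point 2, here is a minimal sketch of what splitting the large payloads and lowering the concurrency could look like in the asker's code. It reuses sequelizeModelInstance, data and p-limit from the question; the chunk size of 2000 rows and the concurrency of 5 are illustrative values to benchmark against your own setup, not figures from this answer.

Code language: javascript
const pLimit = require('p-limit')(5); // lower concurrency than the original 30

// Split one payload into fixed-size chunks so no single INSERT statement is huge
const chunk = (rows, size) =>
    Array.from({ length: Math.ceil(rows.length / size) }, (_, i) =>
        rows.slice(i * size, (i + 1) * size));

// Insert the chunks of one payload sequentially to keep memory usage bounded
const bulkInsertChunked = async (payload, chunkSize = 2000) => {
    for (const rows of chunk(payload, chunkSize)) {
        await sequelizeModelInstance.bulkCreate(rows);
    }
};

(async () => {
    // At most 5 payloads are in flight at any time, each sent 2000 rows per INSERT
    await Promise.all(data.map(d => pLimit(() => bulkInsertChunked(d))));
})();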

Recommended reading:

  1. Tuning Your PostgreSQL Server
  2. PostgreSQL documentation: Resource Consumption
  3. Configuring memory on PostgreSQL
Score: 1

Stack Overflow user

Posted on 2020-04-18 14:08:23

I ran into a similar problem while running migrations, but the solution can be applied to this question as well.

The idea is to splice your payload into manageable chunks. In my case, 100 records at a time seemed to be manageable.

Code language: javascript
const payload = require("./seeds/big-mama.json"); //around 715.000 records

module.exports = {
    up: (queryInterface) => {
        const records = payload.map(function (record) {
            record.createdAt = new Date();
            record.updatedAt = new Date();
            return record;
        });

        let lastQuery;
        while (records.length > 0) {
            lastQuery = queryInterface.bulkInsert(
                "Products",
                records.splice(0, 100),
                {}
            );
        }

        return lastQuery;
    },

    down: (queryInterface) => {
        return queryInterface.bulkDelete("Products", null, {});
    }
};
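
A possible variant of the same splicing idea, shown here only as a sketch: it awaits each 100-row insert before starting the next one, so the migration never has more than one INSERT in flight. This assumes your Sequelize tooling accepts an async up function (i.e. one that returns a promise), which sequelize-cli migrations do.

Code language: javascript
const payload = require("./seeds/big-mama.json");

module.exports = {
    up: async (queryInterface) => {
        const now = new Date();
        const records = payload.map((record) => ({ ...record, createdAt: now, updatedAt: now }));

        // Await each 100-row chunk before sending the next one,
        // keeping only a single INSERT in flight at any time.
        for (let i = 0; i < records.length; i += 100) {
            await queryInterface.bulkInsert("Products", records.slice(i, i + 100), {});
        }
    },

    down: (queryInterface) => {
        return queryInterface.bulkDelete("Products", null, {});
    }
};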
Score: 4
Original page content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/59796832
