When I run a bulk insert into a durable memory-optimized table and into a non-durable one, I get the same speed. Shouldn't a bulk insert into a non-durable memory-optimized table be faster than into a durable one? If so, what am I doing wrong here?
My test is below; it consistently takes about 30 seconds. This is on SQL Server 2016 SP1. The bulk insert loads 10 million rows from a CSV file I generated.
SQL
CREATE TABLE Users_ND (
    Id INT NOT NULL IDENTITY(1,1) PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 10000000),
    Username VARCHAR(200) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

CREATE TABLE Users_D (
    Id INT NOT NULL IDENTITY(1,1) PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 10000000),
    Username VARCHAR(200) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

SET STATISTICS TIME ON;
SET NOCOUNT ON;

BULK INSERT Users_ND
FROM 'users-huge.csv'
WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', BATCHSIZE = 1000000);

BULK INSERT Users_D
FROM 'users-huge.csv'
WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', BATCHSIZE = 1000000);

users-huge.csv
Id, Username
,user1
,user2
...
,user10000000

Posted on 2017-05-10 13:45:52
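The test file follows the shape shown above: a header row, then an empty Id column (so the IDENTITY column assigns values) and sequential usernames. A minimal sketch of a generator script in Python — the function name and output filename are illustrative, not from the original post:

```python
# Generate a CSV in the format used by the BULK INSERT test:
# header "Id, Username", then rows with an empty Id column so the
# table's IDENTITY(1,1) column fills in the Id values.
def generate_users_csv(path, rows):
    """Write `rows` lines of ',userN' after a header row."""
    with open(path, "w", newline="") as f:
        f.write("Id, Username\n")
        for i in range(1, rows + 1):
            f.write(",user%d\n" % i)

# Small demonstration file; the original post used 10 million rows.
generate_users_csv("users-small.csv", 5)
```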
It turned out I was seeing this because the source file for the bulk insert was stored on a slow HDD, so reading the file was the bottleneck — both tables were ingesting rows as fast as the disk could supply them, which is why the durability setting made no measurable difference.
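A quick way to confirm this kind of bottleneck is to measure raw sequential-read throughput of the source file and compare it against the load rate. A rough sketch in Python, assuming a hypothetical helper name; the chunk size is arbitrary:

```python
import os
import time

def read_throughput_mb_s(path, chunk=8 * 1024 * 1024):
    """Read the file sequentially and return throughput in MB/s.

    If this number is low (a consumer HDD often tops out near
    100 MB/s), the file read, not the target table, limits the
    bulk insert. OS caching can inflate the result on a re-read.
    """
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk):
            pass
    elapsed = time.perf_counter() - start
    return size / (1024 * 1024) / elapsed if elapsed > 0 else float("inf")
```

For example, a 30-second load of a ~3 GB CSV implies the source must sustain roughly 100 MB/s — right at a typical HDD's sequential limit.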
https://stackoverflow.com/questions/43813573