
Elastic Stack Best Practices: Comparing the Performance of 7.10.1 and 7.14.2

Author: 点火三周 | Published 2022-03-20 17:35:42 | From the Elastic Stack column

Tencent Cloud Elasticsearch Service recently made version 7.14.2 available. The release was fairly low-key: compared with Elastic's roughly monthly release cadence, domestic cloud vendors are more conservative, usually waiting several months after an upstream release before offering a stable version on their managed service. Even so, 7.14.2 is currently the newest Elasticsearch version available from the major domestic cloud vendors. Many users still wonder whether upgrading from 7.10.1 to 7.14.2 is worthwhile: what benefits does it bring, and could it cause problems? This series of articles therefore walks through the main differences between 7.10.1 and 7.14.2.

This article focuses on performance testing. Using Elastic's official benchmarking tool, esrally, we pick one fairly representative dataset and present its benchmark numbers.

Test terminology

rally: a car rally (the naming theme of the tool)

track: a race track; a benchmark scenario, i.e. the sample data and the load strategy used

team/car: the Elasticsearch instance(s) under test

race: one benchmark run

tournament: a championship; in esrally, a comparison between two races
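
To see how these terms surface in practice, here is a brief sketch of the corresponding esrally CLI commands (assuming esrally 2.x; the race IDs are placeholders):

```bash
# Tracks: the benchmark scenarios that ship with esrally
esrally list tracks

# Races: past benchmark runs, each identified by a race ID
esrally list races

# Tournament: a comparison between two races
esrally compare --baseline=<baseline-race-id> --contender=<contender-race-id>
```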

esrally provides a large number of datasets for testing. Rather than listing them all, we take the http_logs dataset as our example. Its path is rally/benchmarks/tracks/default/http_logs, and it contains the following files:

```
http_logs
├── challenges
├── files.txt
├── index.json
├── index-runtime-fields.json
├── operations
├── __pycache__
├── README.md
├── _tools
├── track.json
└── track.py
```

The benchmark rules are defined by track.json, which describes a complete benchmark scenario. It contains the following parts:

  • indices: index definitions
  • templates: index template definitions
  • corpora: the dataset files
  • operations: the concrete operations; this section is optional, since operations can also be defined inline within schedule or challenge
  • schedule: the load profile applied when executing the operations
  • challenge: the set of operations a race runs through; challenges separate different test scenarios (for example append vs. update) so that their results can be reported separately

The test data, i.e. the index settings and mappings, is defined by index.json. A minimal track.json skeleton is sketched below.
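
For orientation, here is a minimal, hypothetical track.json skeleton showing how these parts fit together; the file names, document count, and load parameters are illustrative and not taken from the real http_logs track:

```json
{
  "version": 2,
  "description": "Minimal example track",
  "indices": [
    { "name": "logs", "body": "index.json" }
  ],
  "corpora": [
    {
      "name": "logs-corpus",
      "documents": [
        { "source-file": "documents.json.bz2", "document-count": 1000000 }
      ]
    }
  ],
  "schedule": [
    {
      "operation": { "operation-type": "bulk", "bulk-size": 5000 },
      "warmup-time-period": 120,
      "clients": 8
    }
  ]
}
```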

Test parameters

For this comparison we chose http_logs. If you run git checkout 7.10 and then git checkout 7.14 in the rally/benchmarks/tracks/default/ directory and compare the two branches, you will find that http_logs is the only dataset that uses the match_only_text field type. Its index.json is as follows:

```json
{
  "settings": {
    "index.number_of_shards": {{ number_of_shards | default(5) }},
    "index.number_of_replicas": {{ number_of_replicas | default(0) }},
    "index.requests.cache.enable": false
  },
  "mappings": {
    "dynamic": "strict",
    "_source": {
      "enabled": {{ source_enabled | default(true) | tojson }}
    },
    "properties": {
      "@timestamp": {
        "format": "strict_date_optional_time||epoch_second",
        "type": "date"
      },
      "message": {
        "type": "keyword",
        "index": false,
        "doc_values": false
      },
      "clientip": {
        "type": "ip"
      },
      "request": {
        "type": "match_only_text",
        "fields": {
          "raw": {
            "ignore_above": 256,
            "type": "keyword"
          }
        }
      },
      "status": {
        "type": "integer"
      },
      "size": {
        "type": "integer"
      },
      "geoip" : {
        "properties" : {
          "country_name": { "type": "keyword" },
          "city_name": { "type": "keyword" },
          "location" : { "type" : "geo_point" }
        }
      }
    }
  }
}
```

A benchmark on this track therefore gives us a particularly clear view of the differences between 7.10 and 7.14.
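
For context, match_only_text was introduced in 7.14 as a space-optimized variant of text: it drops the frequencies, positions, and norms used for relevance scoring, so full-text queries still match but every hit gets a constant score. A minimal query sketch against the request field above (the index name is illustrative):

```json
POST /logs-benchmark/_search
{
  "query": {
    "match": { "request": "images logo" }
  }
}
```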

Test plan

To keep the test environment consistent across the two versions, the test proceeds as follows:

  • Create a 3-node Elasticsearch 7.10.1 cluster in the same VPC as the esrally server
  • Run the http_logs benchmark with esrally and save the results (the track checked out at the 7.10 git branch, with the shard count manually set to 3)
  • Upgrade the cluster in place to 7.14.2 and delete the indices written by the previous run
  • Run the http_logs benchmark again and save the results (the track checked out at the 7.14 git branch, shard count again set to 3)
  • Compare the two sets of results (a sketch of the commands follows this note)

Note that the results of both runs are also shipped to a separate Elasticsearch cluster for visual comparative analysis.
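
The corresponding esrally invocations might look roughly like this (a sketch assuming esrally 2.x; the host addresses and tags are placeholders, and the exact flags used in the original runs are not given in the article):

```bash
cd rally/benchmarks/tracks/default && git checkout 7.10

# Race against the 7.10.1 cluster; repeat with "git checkout 7.14"
# against the upgraded cluster
esrally race --track=http_logs \
  --target-hosts=10.0.0.1:9200,10.0.0.2:9200,10.0.0.3:9200 \
  --pipeline=benchmark-only \
  --track-params="number_of_shards:3" \
  --user-tag="version:7.10.1"

# Compare the two runs: baseline = 7.10.1, contender = 7.14.2
esrally list races
esrally compare --baseline=<race-id-7.10.1> --contender=<race-id-7.14.2>
```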

Analysis of the test results

Since the full results are quite long, they are attached as an appendix at the end of this article; here we give only the highlights.

Judging from the release notes, the changes from 7.10.1 to 7.14.2 are mainly functional, including:

  • Searchable snapshots GA (requires an Enterprise license)
  • Runtime fields
  • Fleet beta
  • Enhanced capabilities for the Lens visualization tool
  • Support for supervised machine learning
  • Officially announced support for the ARM architecture

So we should not expect the rally results to differ dramatically; instead, it is enough to highlight a few of the notable changes:

| Metric | Task | Baseline | Contender | Diff | Unit | Diff % |
|---|---|---|---|---|---|---|
| Store size | | 17.5779 | 17.1429 | -0.43506 | GB | -2.48% |
| Heap used for segments | | 0.435303 | 0.34417 | -0.09113 | MB | -20.94% |
| Heap used for doc values | | 0.19455 | 0.176033 | -0.01852 | MB | -9.52% |
| Heap used for terms | | 0.182465 | 0.125092 | -0.05737 | MB | -31.44% |
| Heap used for norms | | 0.0057373 | 0.000671387 | -0.00507 | MB | -88.30% |
| Heap used for points | | 0 | 0 | 0 | MB | 0.00% |
| Heap used for stored fields | | 0.0525513 | 0.0423737 | -0.01018 | MB | -19.37% |
| Segment count | | 74 | 52 | -22 | | -29.73% |
| Min Throughput | index-append | 108883 | 109834 | 951.141 | docs/s | +0.87% |
| Mean Throughput | index-append | 114922 | 115458 | 536.178 | docs/s | +0.47% |
| Median Throughput | index-append | 114935 | 115308 | 372.795 | docs/s | +0.32% |
| Max Throughput | index-append | 125157 | 125362 | 205.398 | docs/s | +0.16% |

  • Store size dropped by about 0.44 GB, mainly because the request field's type changed from text to match_only_text
  • Heap used for norms dropped by 88% for the same reason: match_only_text does not index the data used for relevance scoring
  • Indexing got somewhat faster, again for the same reason

The release notes also list performance optimizations for aggregations, which this benchmark cannot surface.

Summary

The most important additions in 7.14.2 relative to 7.10.1 are searchable snapshots and runtime fields; used well, these two features can substantially reduce storage costs. Combined with the new match_only_text field type, users with primarily logging workloads should consider upgrading.
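
As a quick illustration of runtime fields: a field can be computed at query time from indexed values or _source rather than being indexed up front. A minimal sketch against the mapping above (the index name and runtime field name are illustrative):

```json
POST /logs-benchmark/_search
{
  "runtime_mappings": {
    "size_kb": {
      "type": "double",
      "script": { "source": "emit(doc['size'].value / 1024.0)" }
    }
  },
  "fields": ["size_kb"],
  "query": { "range": { "status": { "gte": 500 } } }
}
```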

Users with search-oriented workloads, meanwhile, can test the Terms enum API and evaluate whether the field-value suggestions it provides add value to their application.
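
A minimal sketch of the terms enum API (new in 7.14) against the request.raw keyword sub-field defined above; it returns indexed terms that start with the given string (the index name is illustrative):

```json
POST /logs-benchmark/_terms_enum
{
  "field": "request.raw",
  "string": "/images"
}
```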

Some other changes, such as ARM architecture support and anonymous Dashboard access, are also worth weighing as operational and infrastructure features.

Appendix: full test results


Full output of the esrally race comparison (baseline: 7.10.1; contender: 7.14.2):


| Metric | Task | Baseline | Contender | Diff | Unit | Diff % |
|---|---|---|---|---|---|---|
| Cumulative indexing time of primary shards | | 149.806 | 154.628 | 4.82263 | min | +3.22% |
| Min cumulative indexing time across primary shard | | 0 | 0 | 0 | min | 0.00% |
| Median cumulative indexing time across primary shard | | 1.70811 | 1.66962 | -0.03849 | min | -2.25% |
| Max cumulative indexing time across primary shard | | 40.4544 | 40.6824 | 0.22798 | min | +0.56% |
| Cumulative indexing throttle time of primary shards | | 0 | 0 | 0 | min | 0.00% |
| Min cumulative indexing throttle time across primary shard | | 0 | 0 | 0 | min | 0.00% |
| Median cumulative indexing throttle time across primary shard | | 0 | 0 | 0 | min | 0.00% |
| Max cumulative indexing throttle time across primary shard | | 0 | 0 | 0 | min | 0.00% |
| Cumulative merge time of primary shards | | 118.814 | 120.12 | 1.30553 | min | +1.10% |
| Cumulative merge count of primary shards | | 334 | 725 | 391 | | +117.07% |
| Min cumulative merge time across primary shard | | 0 | 0 | 0 | min | 0.00% |
| Median cumulative merge time across primary shard | | 0.1528 | 0.134533 | -0.01827 | min | -11.95% |
| Max cumulative merge time across primary shard | | 37.9473 | 42.7596 | 4.81227 | min | +12.68% |
| Cumulative merge throttle time of primary shards | | 40.7357 | 40.1416 | -0.59412 | min | -1.46% |
| Min cumulative merge throttle time across primary shard | | 0 | 0 | 0 | min | 0.00% |
| Median cumulative merge throttle time across primary shard | | 0 | 0 | 0 | min | 0.00% |
| Max cumulative merge throttle time across primary shard | | 14.9642 | 13.7643 | -1.19992 | min | -8.02% |
| Cumulative refresh time of primary shards | | 19.005 | 15.0934 | -3.91163 | min | -20.58% |
| Cumulative refresh count of primary shards | | 2739 | 6612 | 3873 | | +141.40% |
| Min cumulative refresh time across primary shard | | 0 | 0 | 0 | min | 0.00% |
| Median cumulative refresh time across primary shard | | 0.217183 | 0.176142 | -0.04104 | min | -18.90% |
| Max cumulative refresh time across primary shard | | 4.91215 | 3.63148 | -1.28067 | min | -26.07% |
| Cumulative flush time of primary shards | | 2.5866 | 1.8533 | -0.7333 | min | -28.35% |
| Cumulative flush count of primary shards | | 132 | 131 | -1 | | -0.76% |
| Min cumulative flush time across primary shard | | 0 | 0 | 0 | min | 0.00% |
| Median cumulative flush time across primary shard | | 0.013025 | 0.00990833 | -0.00312 | min | -23.93% |
| Max cumulative flush time across primary shard | | 0.7248 | 0.53595 | -0.18885 | min | -26.06% |
| Total Young Gen GC time | | 159.686 | 201.877 | 42.191 | s | +26.42% |
| Total Young Gen GC count | | 17408 | 20202 | 2794 | | +16.05% |
| Total Old Gen GC time | | 0 | 0 | 0 | s | 0.00% |
| Total Old Gen GC count | | 0 | 0 | 0 | | 0.00% |
| Store size | | 17.5779 | 17.1429 | -0.43506 | GB | -2.48% |
| Translog size | | 0.0315703 | 0.0503274 | 0.01876 | GB | +59.41% |
| Heap used for segments | | 0.435303 | 0.34417 | -0.09113 | MB | -20.94% |
| Heap used for doc values | | 0.19455 | 0.176033 | -0.01852 | MB | -9.52% |
| Heap used for terms | | 0.182465 | 0.125092 | -0.05737 | MB | -31.44% |
| Heap used for norms | | 0.0057373 | 0.000671387 | -0.00507 | MB | -88.30% |
| Heap used for points | | 0 | 0 | 0 | MB | 0.00% |
| Heap used for stored fields | | 0.0525513 | 0.0423737 | -0.01018 | MB | -19.37% |
| Segment count | | 74 | 52 | -22 | | -29.73% |
| Min Throughput | index-append | 108883 | 109834 | 951.141 | docs/s | +0.87% |
| Mean Throughput | index-append | 114922 | 115458 | 536.178 | docs/s | +0.47% |
| Median Throughput | index-append | 114935 | 115308 | 372.795 | docs/s | +0.32% |
| Max Throughput | index-append | 125157 | 125362 | 205.398 | docs/s | +0.16% |
| 50th percentile latency | index-append | 321.779 | 327.485 | 5.70573 | ms | +1.77% |
| 90th percentile latency | index-append | 533.378 | 510.85 | -22.5285 | ms | -4.22% |
| 99th percentile latency | index-append | 1366.46 | 1121.45 | -245.005 | ms | -17.93% |
| 99.9th percentile latency | index-append | 2475.3 | 1959.62 | -515.684 | ms | -20.83% |
| 99.99th percentile latency | index-append | 2939.52 | 2447.88 | -491.638 | ms | -16.73% |
| 100th percentile latency | index-append | 3578.9 | 2579.56 | -999.335 | ms | -27.92% |
| 50th percentile service time | index-append | 321.78 | 327.485 | 5.70469 | ms | +1.77% |
| 90th percentile service time | index-append | 533.391 | 510.85 | -22.5415 | ms | -4.23% |
| 99th percentile service time | index-append | 1366.68 | 1121.45 | -245.228 | ms | -17.94% |
| 99.9th percentile service time | index-append | 2475.3 | 1959.62 | -515.684 | ms | -20.83% |
| 99.99th percentile service time | index-append | 2939.52 | 2447.88 | -491.638 | ms | -16.73% |
| 100th percentile service time | index-append | 3578.9 | 2579.56 | -999.335 | ms | -27.92% |
| error rate | index-append | 0 | 0 | 0 | % | 0.00% |
| Min Throughput | default | 20.0116 | 8.01225 | -11.9993 | ops/s | -59.96% |
| Mean Throughput | default | 20.0126 | 8.01323 | -11.9994 | ops/s | -59.96% |
| Median Throughput | default | 20.0125 | 8.0132 | -11.9993 | ops/s | -59.96% |
| Max Throughput | default | 20.0137 | 8.01444 | -11.9993 | ops/s | -59.96% |
| 50th percentile latency | default | 5.54554 | 6.18421 | 0.63867 | ms | +11.52% |
| 90th percentile latency | default | 9.59301 | 7.56175 | -2.03125 | ms | -21.17% |
| 99th percentile latency | default | 15.3999 | 15.3726 | -0.02733 | ms | -0.18% |
| 100th percentile latency | default | 17.7461 | 17.6093 | -0.13676 | ms | -0.77% |
| 50th percentile service time | default | 4.6762 | 4.39236 | -0.28384 | ms | -6.07% |
| 90th percentile service time | default | 8.69762 | 5.57245 | -3.12517 | ms | -35.93% |
| 99th percentile service time | default | 14.8464 | 13.572 | -1.27441 | ms | -8.58% |
| 100th percentile service time | default | 17.2429 | 15.1161 | -2.1268 | ms | -12.33% |
| error rate | default | 0 | 0 | 0 | % | 0.00% |
| Min Throughput | term | 49.1588 | 49.3021 | 0.14332 | ops/s | +0.29% |
| Mean Throughput | term | 49.2246 | 49.3303 | 0.1057 | ops/s | +0.21% |
| Median Throughput | term | 49.2245 | 49.3303 | 0.10575 | ops/s | +0.21% |
| Max Throughput | term | 49.2904 | 49.3584 | 0.06803 | ops/s | +0.14% |
| 50th percentile latency | term | 6.57769 | 6.38964 | -0.18805 | ms | -2.86% |
| 90th percentile latency | term | 9.02989 | 7.74706 | -1.28283 | ms | -14.21% |
| 99th percentile latency | term | 22.8338 | 16.6021 | -6.23172 | ms | -27.29% |
| 100th percentile latency | term | 23.0754 | 18.8683 | -4.20709 | ms | -18.23% |
| 50th percentile service time | term | 5.43497 | 5.41069 | -0.02429 | ms | -0.45% |
| 90th percentile service time | term | 8.17052 | 6.85421 | -1.31631 | ms | -16.11% |
| 99th percentile service time | term | 20.758 | 15.8222 | -4.93577 | ms | -23.78% |
| 100th percentile service time | term | 20.8855 | 18.2411 | -2.64443 | ms | -12.66% |
| error rate | term | 0 | 0 | 0 | % | 0.00% |
| Min Throughput | range | 24.4438 | 1.00463 | -23.4392 | ops/s | -95.89% |
| Mean Throughput | range | 24.559 | 1.00641 | -23.5526 | ops/s | -95.90% |
| Median Throughput | range | 24.5697 | 1.00616 | -23.5635 | ops/s | -95.90% |
| Max Throughput | range | 24.6527 | 1.00921 | -23.6435 | ops/s | -95.91% |
| 50th percentile latency | range | 13.2125 | 19.1614 | 5.94886 | ms | +45.02% |
| 90th percentile latency | range | 18.2012 | 20.5689 | 2.36766 | ms | +13.01% |
| 99th percentile latency | range | 25.3066 | 27.5328 | 2.22616 | ms | +8.80% |
| 100th percentile latency | range | 26.8753 | 32.5699 | 5.69456 | ms | +21.19% |
| 50th percentile service time | range | 12.1693 | 17.3338 | 5.16452 | ms | +42.44% |
| 90th percentile service time | range | 17.3298 | 18.4377 | 1.10789 | ms | +6.39% |
| 99th percentile service time | range | 24.5904 | 26.4062 | 1.81582 | ms | +7.38% |
| 100th percentile service time | range | 26.3773 | 31.8829 | 5.50554 | ms | +20.87% |
| error rate | range | 0 | 0 | 0 | % | 0.00% |
| Min Throughput | 200s-in-range | 24.9717 | 32.9441 | 7.9724 | ops/s | +31.93% |
| Mean Throughput | 200s-in-range | 24.9738 | 32.9475 | 7.97365 | ops/s | +31.93% |
| Median Throughput | 200s-in-range | 24.9739 | 32.9477 | 7.97375 | ops/s | +31.93% |
| Max Throughput | 200s-in-range | 24.9756 | 32.9506 | 7.97493 | ops/s | +31.93% |
| 50th percentile latency | 200s-in-range | 7.8419 | 8.81293 | 0.97103 | ms | +12.38% |
| 90th percentile latency | 200s-in-range | 11.291 | 11.4584 | 0.16737 | ms | +1.48% |
| 99th percentile latency | 200s-in-range | 20.3644 | 22.1515 | 1.78709 | ms | +8.78% |
| 100th percentile latency | 200s-in-range | 21.7352 | 26.3675 | 4.63231 | ms | +21.31% |
| 50th percentile service time | 200s-in-range | 6.96257 | 8.03834 | 1.07576 | ms | +15.45% |
| 90th percentile service time | 200s-in-range | 9.71676 | 10.5335 | 0.81677 | ms | +8.41% |
| 99th percentile service time | 200s-in-range | 19.5433 | 21.5298 | 1.98649 | ms | +10.16% |
| 100th percentile service time | 200s-in-range | 20.9495 | 25.7635 | 4.81396 | ms | +22.98% |
| error rate | 200s-in-range | 0 | 0 | 0 | % | 0.00% |
| Min Throughput | 400s-in-range | 49.8587 | 49.935 | 0.07635 | ops/s | +0.15% |
| Mean Throughput | 400s-in-range | 49.865 | 49.9358 | 0.07087 | ops/s | +0.14% |
| Median Throughput | 400s-in-range | 49.865 | 49.9358 | 0.07087 | ops/s | +0.14% |
| Max Throughput | 400s-in-range | 49.8712 | 49.9366 | 0.06539 | ops/s | +0.13% |
| 50th percentile latency | 400s-in-range | 4.68439 | 4.46375 | -0.22064 | ms | -4.71% |
| 90th percentile latency | 400s-in-range | 6.36301 | 5.03508 | -1.32793 | ms | -20.87% |
| 99th percentile latency | 400s-in-range | 11.4676 | 7.73441 | -3.73323 | ms | -32.55% |
| 100th percentile latency | 400s-in-range | 14.3764 | 8.90604 | -5.4704 | ms | -38.05% |
| 50th percentile service time | 400s-in-range | 3.84824 | 3.65788 | -0.19036 | ms | -4.95% |
| 90th percentile service time | 400s-in-range | 5.45247 | 4.27252 | -1.17995 | ms | -21.64% |
| 99th percentile service time | 400s-in-range | 10.9364 | 6.92335 | -4.01303 | ms | -36.69% |
| 100th percentile service time | 400s-in-range | 13.9616 | 8.23023 | -5.73134 | ms | -41.05% |
| error rate | 400s-in-range | 0 | 0 | 0 | % | 0.00% |
| Min Throughput | hourly_agg | 0.200432 | 0.200483 | 5e-05 | ops/s | +0.02% |
| Mean Throughput | hourly_agg | 0.200598 | 0.200667 | 7e-05 | ops/s | +0.03% |
| Median Throughput | hourly_agg | 0.200575 | 0.200641 | 7e-05 | ops/s | +0.03% |
| Max Throughput | hourly_agg | 0.200859 | 0.200957 | 0.0001 | ops/s | +0.05% |
| 50th percentile latency | hourly_agg | 2390.6 | 2341.55 | -49.0422 | ms | -2.05% |
| 90th percentile latency | hourly_agg | 2430.04 | 2445.32 | 15.2752 | ms | +0.63% |
| 99th percentile latency | hourly_agg | 2548.61 | 2616.33 | 67.7197 | ms | +2.66% |
| 100th percentile latency | hourly_agg | 2570.15 | 2677.89 | 107.741 | ms | +4.19% |
| 50th percentile service time | hourly_agg | 2388.67 | 2338.11 | -50.5587 | ms | -2.12% |
| 90th percentile service time | hourly_agg | 2427.58 | 2441.8 | 14.2227 | ms | +0.59% |
| 99th percentile service time | hourly_agg | 2545.41 | 2614.57 | 69.1639 | ms | +2.72% |
| 100th percentile service time | hourly_agg | 2566.94 | 2676.84 | 109.894 | ms | +4.28% |
| error rate | hourly_agg | 0 | 0 | 0 | % | 0.00% |
| Min Throughput | scroll | 25.0282 | 25.0329 | 0.00466 | pages/s | +0.02% |
| Mean Throughput | scroll | 25.0465 | 25.0541 | 0.0076 | pages/s | +0.03% |
| Median Throughput | scroll | 25.0423 | 25.0492 | 0.00693 | pages/s | +0.03% |
| Max Throughput | scroll | 25.0842 | 25.0979 | 0.01372 | pages/s | +0.05% |
| 50th percentile latency | scroll | 372.376 | 356.648 | -15.7281 | ms | -4.22% |
| 90th percentile latency | scroll | 388.204 | 368.826 | -19.378 | ms | -4.99% |
| 99th percentile latency | scroll | 478.402 | 508.923 | 30.5212 | ms | +6.38% |
| 100th percentile latency | scroll | 579.492 | 603.644 | 24.1517 | ms | +4.17% |
| 50th percentile service time | scroll | 370.456 | 354.889 | -15.5667 | ms | -4.20% |
| 90th percentile service time | scroll | 386.472 | 366.747 | -19.7249 | ms | -5.10% |
| 99th percentile service time | scroll | 476.037 | 507.359 | 31.3214 | ms | +6.58% |
| 100th percentile service time | scroll | 577.857 | 602.063 | 24.2059 | ms | +4.19% |
| error rate | scroll | 0 | 0 | 0 | % | 0.00% |
| Min Throughput | desc_sort_timestamp | 2.00365 | 0.501481 | -1.50217 | ops/s | -74.97% |
| Mean Throughput | desc_sort_timestamp | 2.00442 | 0.501799 | -1.50262 | ops/s | -74.97% |
| Median Throughput | desc_sort_timestamp | 2.00436 | 0.501774 | -1.50258 | ops/s | -74.97% |
| Max Throughput | desc_sort_timestamp | 2.00543 | 0.502214 | -1.50322 | ops/s | -74.96% |
| 50th percentile latency | desc_sort_timestamp | 38.4097 | 46.0606 | 7.65091 | ms | +19.92% |
| 90th percentile latency | desc_sort_timestamp | 45.1163 | 50.227 | 5.11068 | ms | +11.33% |
| 99th percentile latency | desc_sort_timestamp | 86.7128 | 57.0004 | -29.7125 | ms | -34.27% |
| 100th percentile latency | desc_sort_timestamp | 116.341 | 58.2039 | -58.1373 | ms | -49.97% |
| 50th percentile service time | desc_sort_timestamp | 36.8282 | 43.3239 | 6.4957 | ms | +17.64% |
| 90th percentile service time | desc_sort_timestamp | 43.8286 | 47.8833 | 4.05463 | ms | +9.25% |
| 99th percentile service time | desc_sort_timestamp | 85.613 | 54.4289 | -31.1841 | ms | -36.42% |
| 100th percentile service time | desc_sort_timestamp | 115.4 | 56.1846 | -59.2153 | ms | -51.31% |
| error rate | desc_sort_timestamp | 0 | 0 | 0 | % | 0.00% |
| Min Throughput | asc_sort_timestamp | 19.8487 | 0.501625 | -19.347 | ops/s | -97.47% |
| Mean Throughput | asc_sort_timestamp | 19.8708 | 0.501974 | -19.3689 | ops/s | -97.47% |
| Median Throughput | asc_sort_timestamp | 19.8719 | 0.501948 | -19.3699 | ops/s | -97.47% |
| Max Throughput | asc_sort_timestamp | 19.8894 | 0.502432 | -19.387 | ops/s | -97.47% |
| 50th percentile latency | asc_sort_timestamp | 10.2326 | 11.1864 | 0.95374 | ms | +9.32% |
| 90th percentile latency | asc_sort_timestamp | 14.6524 | 14.6433 | -0.00905 | ms | -0.06% |
| 99th percentile latency | asc_sort_timestamp | 20.8715 | 22.4271 | 1.55568 | ms | +7.45% |
| 100th percentile latency | asc_sort_timestamp | 20.9129 | 26.0099 | 5.09703 | ms | +24.37% |
| 50th percentile service time | asc_sort_timestamp | 9.35462 | 9.22597 | -0.12865 | ms | -1.38% |
| 90th percentile service time | asc_sort_timestamp | 13.5687 | 11.659 | -1.90971 | ms | -14.07% |
| 99th percentile service time | asc_sort_timestamp | 20.1754 | 19.3997 | -0.77577 | ms | -3.85% |
| 100th percentile service time | asc_sort_timestamp | 20.5059 | 23.3318 | 2.8259 | ms | +13.78% |
| error rate | asc_sort_timestamp | 0 | 0 | 0 | % | 0.00% |
| Min Throughput | desc_sort_with_after_timestamp | 0.406083 | 0.420066 | 0.01398 | ops/s | +3.44% |
| Mean Throughput | desc_sort_with_after_timestamp | 0.407612 | 0.421495 | 0.01388 | ops/s | +3.41% |
| Median Throughput | desc_sort_with_after_timestamp | 0.407734 | 0.421106 | 0.01337 | ops/s | +3.28% |
| Max Throughput | desc_sort_with_after_timestamp | 0.408331 | 0.428885 | 0.02055 | ops/s | +5.03% |
| 50th percentile latency | desc_sort_with_after_timestamp | 88694.3 | 24623 | -64071.3 | ms | -72.24% |
| 90th percentile latency | desc_sort_with_after_timestamp | 147380 | 40166.3 | -107214 | ms | -72.75% |
| 99th percentile latency | desc_sort_with_after_timestamp | 161067 | 43585.4 | -117482 | ms | -72.94% |
| 100th percentile latency | desc_sort_with_after_timestamp | 161808 | 43771.9 | -118036 | ms | -72.95% |
| 50th percentile service time | desc_sort_with_after_timestamp | 2444.96 | 2375.24 | -69.7207 | ms | -2.85% |
| 90th percentile service time | desc_sort_with_after_timestamp | 2525.34 | 2431.62 | -93.7266 | ms | -3.71% |
| 99th percentile service time | desc_sort_with_after_timestamp | 2633.91 | 2518.66 | -115.244 | ms | -4.38% |
| 100th percentile service time | desc_sort_with_after_timestamp | 2654.69 | 2522.89 | -131.792 | ms | -4.96% |
| error rate | desc_sort_with_after_timestamp | 0 | 0 | 0 | % | 0.00% |
| Min Throughput | asc_sort_with_after_timestamp | 0.500039 | 0.500356 | 0.00032 | ops/s | +0.06% |
| Mean Throughput | asc_sort_with_after_timestamp | 0.500393 | 0.500911 | 0.00052 | ops/s | +0.10% |
| Median Throughput | asc_sort_with_after_timestamp | 0.500278 | 0.500634 | 0.00036 | ops/s | +0.07% |
| Max Throughput | asc_sort_with_after_timestamp | 0.50152 | 0.503561 | 0.00204 | ops/s | +0.41% |
| 50th percentile latency | asc_sort_with_after_timestamp | 1881.78 | 1842.65 | -39.1326 | ms | -2.08% |
| 90th percentile latency | asc_sort_with_after_timestamp | 1940.92 | 1895.39 | -45.5221 | ms | -2.35% |
| 99th percentile latency | asc_sort_with_after_timestamp | 2034.58 | 2037.34 | 2.76288 | ms | +0.14% |
| 100th percentile latency | asc_sort_with_after_timestamp | 2052.14 | 2092.78 | 40.6455 | ms | +1.98% |
| 50th percentile service time | asc_sort_with_after_timestamp | 1880.12 | 1841.19 | -38.9304 | ms | -2.07% |
| 90th percentile service time | asc_sort_with_after_timestamp | 1940.07 | 1890.79 | -49.2852 | ms | -2.54% |
| 99th percentile service time | asc_sort_with_after_timestamp | 2032.96 | 2036.45 | 3.48926 | ms | +0.17% |
| 100th percentile service time | asc_sort_with_after_timestamp | 2050.01 | 2091.59 | 41.5772 | ms | +2.03% |
| error rate | asc_sort_with_after_timestamp | 0 | 0 | 0 | % | 0.00% |
| Min Throughput | desc-sort-timestamp-after-force-merge-1-seg | 0.997876 | 2.00258 | 1.0047 | ops/s | +100.68% |
| Mean Throughput | desc-sort-timestamp-after-force-merge-1-seg | 0.999108 | 2.00311 | 1.00401 | ops/s | +100.49% |
| Median Throughput | desc-sort-timestamp-after-force-merge-1-seg | 0.999184 | 2.00307 | 1.00389 | ops/s | +100.47% |
| Max Throughput | desc-sort-timestamp-after-force-merge-1-seg | 0.999384 | 2.0038 | 1.00442 | ops/s | +100.50% |
| 50th percentile latency | desc-sort-timestamp-after-force-merge-1-seg | 1012.41 | 204.456 | -807.955 | ms | -79.81% |
| 90th percentile latency | desc-sort-timestamp-after-force-merge-1-seg | 1125.1 | 217.552 | -907.552 | ms | -80.66% |
| 99th percentile latency | desc-sort-timestamp-after-force-merge-1-seg | 1250.83 | 256.428 | -994.398 | ms | -79.50% |
| 100th percentile latency | desc-sort-timestamp-after-force-merge-1-seg | 1265.08 | 274.396 | -990.682 | ms | -78.31% |
| 50th percentile service time | desc-sort-timestamp-after-force-merge-1-seg | 1000.71 | 203.032 | -797.677 | ms | -79.71% |
| 90th percentile service time | desc-sort-timestamp-after-force-merge-1-seg | 1028.83 | 215.9 | -812.935 | ms | -79.02% |
| 99th percentile service time | desc-sort-timestamp-after-force-merge-1-seg | 1144.34 | 255.44 | -888.897 | ms | -77.68% |
| 100th percentile service time | desc-sort-timestamp-after-force-merge-1-seg | 1158.71 | 273.284 | -885.423 | ms | -76.41% |
| error rate | desc-sort-timestamp-after-force-merge-1-seg | 0 | 0 | 0 | % | 0.00% |
| Min Throughput | asc-sort-timestamp-after-force-merge-1-seg | 49.9805 | 2.00617 | -47.9744 | ops/s | -95.99% |
| Mean Throughput | asc-sort-timestamp-after-force-merge-1-seg | 49.9809 | 2.00749 | -47.9734 | ops/s | -95.98% |
| Median Throughput | asc-sort-timestamp-after-force-merge-1-seg | 49.9809 | 2.00737 | -47.9736 | ops/s | -95.98% |
| Max Throughput | asc-sort-timestamp-after-force-merge-1-seg | 49.9813 | 2.00921 | -47.9721 | ops/s | -95.98% |
| 50th percentile latency | asc-sort-timestamp-after-force-merge-1-seg | 14.5971 | 28.6633 | 14.0661 | ms | +96.36% |
| 90th percentile latency | asc-sort-timestamp-after-force-merge-1-seg | 88.5212 | 34.0156 | -54.5056 | ms | -61.57% |
| 99th percentile latency | asc-sort-timestamp-after-force-merge-1-seg | 112.765 | 47.2066 | -65.5583 | ms | -58.14% |
| 100th percentile latency | asc-sort-timestamp-after-force-merge-1-seg | 113.786 | 49.9852 | -63.8007 | ms | -56.07% |
| 50th percentile service time | asc-sort-timestamp-after-force-merge-1-seg | 11.4171 | 26.4398 | 15.0227 | ms | +131.58% |
| 90th percentile service time | asc-sort-timestamp-after-force-merge-1-seg | 19.0515 | 31.5447 | 12.4931 | ms | +65.58% |
| 99th percentile service time | asc-sort-timestamp-after-force-merge-1-seg | 66.4443 | 45.7023 | -20.742 | ms | -31.22% |
| 100th percentile service time | asc-sort-timestamp-after-force-merge-1-seg | 69.5908 | 48.6049 | -20.9859 | ms | -30.16% |
| error rate | asc-sort-timestamp-after-force-merge-1-seg | 0 | 0 | 0 | % | 0.00% |
| Min Throughput | desc-sort-with-after-timestamp-after-force-merge-1-seg | 0.402276 | 0.411587 | 0.00931 | ops/s | +2.31% |
| Mean Throughput | desc-sort-with-after-timestamp-after-force-merge-1-seg | 0.410018 | 0.41961 | 0.00959 | ops/s | +2.34% |
| Median Throughput | desc-sort-with-after-timestamp-after-force-merge-1-seg | 0.410444 | 0.420725 | 0.01028 | ops/s | +2.50% |
| Max Throughput | desc-sort-with-after-timestamp-after-force-merge-1-seg | 0.411837 | 0.422242 | 0.01041 | ops/s | +2.53% |
| 50th percentile latency | desc-sort-with-after-timestamp-after-force-merge-1-seg | 87710 | 24481.7 | -63228.3 | ms | -72.09% |
| 90th percentile latency | desc-sort-with-after-timestamp-after-force-merge-1-seg | 144442 | 38871.5 | -105570 | ms | -73.09% |
| 99th percentile latency | desc-sort-with-after-timestamp-after-force-merge-1-seg | 157061 | 42545.7 | -114515 | ms | -72.91% |
| 100th percentile latency | desc-sort-with-after-timestamp-after-force-merge-1-seg | 157751 | 42715.3 | -115035 | ms | -72.92% |
| 50th percentile service time | desc-sort-with-after-timestamp-after-force-merge-1-seg | 2405.91 | 2350.46 | -55.4477 | ms | -2.30% |
| 90th percentile service time | desc-sort-with-after-timestamp-after-force-merge-1-seg | 2494.9 | 2408.06 | -86.8397 | ms | -3.48% |
| 99th percentile service time | desc-sort-with-after-timestamp-after-force-merge-1-seg | 2574.63 | 2634.66 | 60.036 | ms | +2.33% |
| 100th percentile service time | desc-sort-with-after-timestamp-after-force-merge-1-seg | 2576.38 | 2685.25 | 108.87 | ms | +4.23% |
| error rate | desc-sort-with-after-timestamp-after-force-merge-1-seg | 0 | 0 | 0 | % | 0.00% |
| Min Throughput | asc-sort-with-after-timestamp-after-force-merge-1-seg | 0.500202 | 0.500345 | 0.00014 | ops/s | +0.03% |
| Mean Throughput | asc-sort-with-after-timestamp-after-force-merge-1-seg | 0.500731 | 0.500896 | 0.00017 | ops/s | +0.03% |
| Median Throughput | asc-sort-with-after-timestamp-after-force-merge-1-seg | 0.500506 | 0.500628 | 0.00012 | ops/s | +0.02% |
| Max Throughput | asc-sort-with-after-timestamp-after-force-merge-1-seg | 0.502882 | 0.503477 | 0.0006 | ops/s | +0.12% |
| 50th percentile latency | asc-sort-with-after-timestamp-after-force-merge-1-seg | 1881.26 | 1837.43 | -43.8301 | ms | -2.33% |
| 90th percentile latency | asc-sort-with-after-timestamp-after-force-merge-1-seg | 1951.78 | 1906.24 | -45.5401 | ms | -2.33% |
| 99th percentile latency | asc-sort-with-after-timestamp-after-force-merge-1-seg | 2091.2 | 1975.08 | -116.124 | ms | -5.55% |
| 100th percentile latency | asc-sort-with-after-timestamp-after-force-merge-1-seg | 2097.85 | 1982.95 | -114.894 | ms | -5.48% |
| 50th percentile service time | asc-sort-with-after-timestamp-after-force-merge-1-seg | 1879.96 | 1836.31 | -43.6483 | ms | -2.32% |
| 90th percentile service time | asc-sort-with-after-timestamp-after-force-merge-1-seg | 1944.69 | 1905.23 | -39.4579 | ms | -2.03% |
| 99th percentile service time | asc-sort-with-after-timestamp-after-force-merge-1-seg | 2048.4 | 1973.62 | -74.7806 | ms | -3.65% |
| 100th percentile service time | asc-sort-with-after-timestamp-after-force-merge-1-seg | 2076.84 | 1981.47 | -95.3767 | ms | -4.59% |
| error rate | asc-sort-with-after-timestamp-after-force-merge-1-seg | 0 | 0 | 0 | % | 0.00% |
