
Elasticsearch index in red status

Stack Overflow user
Asked on 2020-10-30 00:24:29
1 answer · 1.9K views · 0 followers · 1 vote

Everything in my cluster was fine, but today I found that I have no packet logs and the shards' health is red:

When I run GET _cat/shards, I get this:

packetbeat-7.9.3-2020.10.28-000001                         2 p STARTED      11428    3.8mb 10.13.81.12 VSELK-MASTER-02
packetbeat-7.9.3-2020.10.28-000001                         2 r STARTED      11428    3.8mb 10.13.81.13 VSELK-MASTER-03
packetbeat-7.9.3-2020.10.28-000001                         9 r STARTED      11402    3.8mb 10.13.81.12 VSELK-MASTER-02
packetbeat-7.9.3-2020.10.28-000001                         9 p STARTED      11402    3.8mb 10.13.81.21 VSELK-DATA-01
packetbeat-7.9.3-2020.10.28-000001                         4 p STARTED      11619      4mb 10.13.81.21 VSELK-DATA-01
packetbeat-7.9.3-2020.10.28-000001                         4 r STARTED      11619    3.9mb 10.13.81.22 VSELK-DATA-02
packetbeat-7.9.3-2020.10.28-000001                         5 r STARTED      11567    3.8mb 10.13.81.21 VSELK-DATA-01
packetbeat-7.9.3-2020.10.28-000001                         5 p STARTED      11567    3.9mb 10.13.81.22 VSELK-DATA-02
packetbeat-7.9.3-2020.10.28-000001                         1 r STARTED      11553    3.8mb 10.13.81.11 VSELK-MASTER-01
packetbeat-7.9.3-2020.10.28-000001                         1 p STARTED      11553    3.9mb 10.13.81.22 VSELK-DATA-02
packetbeat-7.9.3-2020.10.28-000001                         7 r UNASSIGNED                              
packetbeat-7.9.3-2020.10.28-000001                         7 p UNASSIGNED                              
packetbeat-7.9.3-2020.10.28-000001                         6 r UNASSIGNED                              
packetbeat-7.9.3-2020.10.28-000001                         6 p UNASSIGNED                              
packetbeat-7.9.3-2020.10.28-000001                         8 r STARTED      11630      4mb 10.13.81.12 VSELK-MASTER-02
packetbeat-7.9.3-2020.10.28-000001                         8 p STARTED      11630    3.9mb 10.13.81.21 VSELK-DATA-01
packetbeat-7.9.3-2020.10.28-000001                         3 p STARTED      11495      4mb 10.13.81.12 VSELK-MASTER-02
packetbeat-7.9.3-2020.10.28-000001                         3 r STARTED      11495    3.7mb 10.13.81.13 VSELK-MASTER-03
packetbeat-7.9.3-2020.10.28-000001                         0 r STARTED      11713      4mb 10.13.81.11 VSELK-MASTER-01
packetbeat-7.9.3-2020.10.28-000001                         0 p STARTED      11713      4mb 10.13.81.22 VSELK-DATA-02

When I run GET /_cluster/allocation/explain, I get:

{
  "index" : "packetbeat-7.9.2-2020.10.22-000001",
  "shard" : 6,
  "primary" : true,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "ALLOCATION_FAILED",
    "at" : "2020-10-28T13:22:03.006Z",
    "failed_allocation_attempts" : 5,
    "details" : """failed shard on node [RCeMt0uXQie_ax_Sp22hLw]: failed to create shard, failure java.io.IOException: failed to obtain in-memory shard lock
    at org.elasticsearch.index.IndexService.createShard(IndexService.java:489)
    at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:763)
    at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:176)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:607)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:584)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:242)
    at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:504)
    at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:494)
    at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:471)
    at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:418)
    at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:162)
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:674)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
    at java.lang.Thread.run(Thread.java:832)
Caused by: [packetbeat-7.9.2-2020.10.22-000001/RRAnRZrrRZiihscJ3bymig][[packetbeat-7.9.2-2020.10.22-000001][6]] org.elasticsearch.env.ShardLockObtainFailedException: [packetbeat-7.9.2-2020.10.22-000001][6]: obtaining shard lock for [starting shard] timed out after [5000ms], lock already held for [closing shard] with age [199852ms]
    at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:869)
    at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:775)
    at org.elasticsearch.index.IndexService.createShard(IndexService.java:409)
    ... 16 more
""",
    "last_allocation_status" : "no"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes that hold an in-sync shard copy",
  "node_allocation_decisions" : [
    {
      "node_id" : "A_nOoYrdSSOAHNQrhfveNA",
      "node_name" : "VSELK-DATA-02",
      "transport_address" : "10.13.81.22:9300",
      "node_attributes" : {
        "ml.machine_memory" : "8365424640",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true",
        "data" : "cold",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "store" : {
        "found" : false
      }
    },
    {
      "node_id" : "RCeMt0uXQie_ax_Sp22hLw",
      "node_name" : "VSELK-MASTER-03",
      "transport_address" : "10.13.81.13:9300",
      "node_attributes" : {
        "ml.machine_memory" : "8365068288",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true",
        "data" : "hot",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "store" : {
        "in_sync" : true,
        "allocation_id" : "nMvn4c4vQp2efQQtIeKzlg"
      },
      "deciders" : [
        {
          "decider" : "max_retry",
          "decision" : "NO",
          "explanation" : """shard has exceeded the maximum number of retries [5] on failed allocation attempts - manually call [/_cluster/reroute?retry_failed=true] to retry, [unassigned_info[[reason=ALLOCATION_FAILED], at[2020-10-28T13:22:03.006Z], failed_attempts[5], failed_nodes[[hHHRtd5HTCKJgLTBtgDbOw, RCeMt0uXQie_ax_Sp22hLw]], delayed=false, details[failed shard on node [RCeMt0uXQie_ax_Sp22hLw]: failed to create shard, failure java.io.IOException: failed to obtain in-memory shard lock
    at org.elasticsearch.index.IndexService.createShard(IndexService.java:489)
    at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:763)
    at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:176)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:607)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:584)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:242)
    at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:504)
    at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:494)
    at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:471)
    at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:418)
    at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:162)
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:674)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
    at java.lang.Thread.run(Thread.java:832)
Caused by: [packetbeat-7.9.2-2020.10.22-000001/RRAnRZrrRZiihscJ3bymig][[packetbeat-7.9.2-2020.10.22-000001][6]] org.elasticsearch.env.ShardLockObtainFailedException: [packetbeat-7.9.2-2020.10.22-000001][6]: obtaining shard lock for [starting shard] timed out after [5000ms], lock already held for [closing shard] with age [199852ms]
    at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:869)
    at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:775)
    at org.elasticsearch.index.IndexService.createShard(IndexService.java:409)
    ... 16 more
], allocation_status[deciders_no]]]"""
        }
      ]
    },
    {
      "node_id" : "hHHRtd5HTCKJgLTBtgDbOw",
      "node_name" : "VSELK-MASTER-01",
      "transport_address" : "10.13.81.11:9300",
      "node_attributes" : {
        "ml.machine_memory" : "8365068288",
        "xpack.installed" : "true",
        "data" : "hot",
        "transform.node" : "true",
        "ml.max_open_jobs" : "20"
      },
      "node_decision" : "no",
      "store" : {
        "in_sync" : true,
        "allocation_id" : "ByqJGtQSQT-p8dCCfk3VlA"
      },
      "deciders" : [
        {
          "decider" : "max_retry",
          "decision" : "NO",
          "explanation" : """shard has exceeded the maximum number of retries [5] on failed allocation attempts - manually call [/_cluster/reroute?retry_failed=true] to retry, [unassigned_info[[reason=ALLOCATION_FAILED], at[2020-10-28T13:22:03.006Z], failed_attempts[5], failed_nodes[[hHHRtd5HTCKJgLTBtgDbOw, RCeMt0uXQie_ax_Sp22hLw]], delayed=false, details[failed shard on node [RCeMt0uXQie_ax_Sp22hLw]: failed to create shard, failure java.io.IOException: failed to obtain in-memory shard lock
    at org.elasticsearch.index.IndexService.createShard(IndexService.java:489)
    at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:763)
    at org.elasticsearch.indices.IndicesService.createShard(IndicesService.java:176)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.createShard(IndicesClusterStateService.java:607)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.createOrUpdateShards(IndicesClusterStateService.java:584)
    at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:242)
    at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:504)
    at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:494)
    at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:471)
    at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:418)
    at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:162)
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:674)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
    at java.lang.Thread.run(Thread.java:832)
Caused by: [packetbeat-7.9.2-2020.10.22-000001/RRAnRZrrRZiihscJ3bymig][[packetbeat-7.9.2-2020.10.22-000001][6]] org.elasticsearch.env.ShardLockObtainFailedException: [packetbeat-7.9.2-2020.10.22-000001][6]: obtaining shard lock for [starting shard] timed out after [5000ms], lock already held for [closing shard] with age [199852ms]
    at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:869)
    at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:775)
    at org.elasticsearch.index.IndexService.createShard(IndexService.java:409)
    ... 16 more
], allocation_status[deciders_no]]]"""
        }
      ]
    },
    {
      "node_id" : "k_SgmMDMRfGi-IFLbI-cRw",
      "node_name" : "VSELK-MASTER-02",
      "transport_address" : "10.13.81.12:9300",
      "node_attributes" : {
        "ml.machine_memory" : "8365056000",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true",
        "data" : "hot",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "store" : {
        "found" : false
      }
    },
    {
      "node_id" : "r4V_KqZDQ7mYi7AZea5eXQ",
      "node_name" : "VSELK-DATA-01",
      "transport_address" : "10.13.81.21:9300",
      "node_attributes" : {
        "ml.machine_memory" : "8365424640",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true",
        "data" : "warm",
        "transform.node" : "true"
      },
      "node_decision" : "no",
      "store" : {
        "found" : false
      }
    }
  ]
}

Can someone tell me the cause of this error and how to fix it? (Note that my cluster has 5 nodes, 3 master nodes and 2 data nodes, and they are all up.)

Thanks for your help!


1 answer

Stack Overflow user

Accepted answer

Posted on 2020-10-30 00:30:45

You can follow the related GitHub issue, and in particular its comments, to resolve this problem.

In short, as the safer first step, you should try running the command below:

curl -XPOST 'localhost:9200/_cluster/reroute?retry_failed=true'
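
If the reroute succeeds, Elasticsearch retries the failed allocations and the shards shown as UNASSIGNED above should move back to STARTED. As a minimal verification sketch (assuming the same local node on port 9200 with no authentication, as in the command above), you could re-check cluster health and the shard listing afterwards:

# check overall cluster status (should go from red back to yellow/green)
curl -XGET 'localhost:9200/_cluster/health?pretty'
# list the packetbeat shards and their allocation state (packetbeat-* is just an example pattern matching the indices above)
curl -XGET 'localhost:9200/_cat/shards/packetbeat-*?v&h=index,shard,prirep,state,node'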

Votes: 2
The original content of this page is provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/64604872
