
An Elasticsearch Failure Caused by a System Expansion, and Its Recovery

Author: 冬天里的懒猫
Published: 2020-08-03 20:44:36

1. Background

We run an ELK cluster at the company, configured as follows:

| Hostname | CPU | Memory | Disk | RAID | OS | Deployed software |
| --- | --- | --- | --- | --- | --- | --- |
| m21p22 | Intel® Xeon® E5620 @ 2.40GHz ×16 | 148 GB | 2 TB | RAID 1 | CentOS 5.8 | redis, kibana |
| m21p23 | Intel® Xeon® E5-2620 v2 @ 2.10GHz ×24 | 160 GB | 7 TB | RAID 1+0 | CentOS 6.8 | logstash, elasticsearch ×2 |
| m21p24 | Intel® Xeon® E5-2620 v2 @ 2.10GHz ×24 | 160 GB | 7 TB | RAID 1+0 | CentOS 6.8 | logstash, elasticsearch ×2 |

Because m21p22 is an older machine that also hosts applications deployed by other teams, and its disk write performance is poor, we deployed Elasticsearch on the two higher-spec servers and put Kibana, Redis, and other software that is not disk-intensive on m21p22. Given the deployment complexity and the actual state of the servers, we chose Redis to receive the log data from Beats, with Logstash consuming from it to provide load balancing. That was the original ELK setup.

As the business grew, this cluster could no longer keep up: during heavy log traffic, data piled up in Redis, and once query volume rose, node CPU and load averages started tripping alerts. After discussion with the ops department, we requisitioned two more servers as Elasticsearch expansion nodes to take load off the existing machines. The configuration after adding the new servers:

| Hostname | CPU | Memory | Disk | RAID | OS | Deployed software |
| --- | --- | --- | --- | --- | --- | --- |
| m21p22 | Intel® Xeon® E5620 @ 2.40GHz ×16 | 148 GB | 2 TB | RAID 1 | CentOS 5.8 | redis, kibana |
| m21p23 | Intel® Xeon® E5-2620 v2 @ 2.10GHz ×24 | 160 GB | 7 TB | RAID 1+0 | CentOS 6.8 | logstash, elasticsearch ×2 |
| m21p24 | Intel® Xeon® E5-2620 v2 @ 2.10GHz ×24 | 160 GB | 7 TB | RAID 1+0 | CentOS 6.8 | logstash, elasticsearch ×2 |
| m21p88 | Intel® Xeon® E5-2620 v2 @ 2.10GHz ×24 | 128 GB | 7 TB | RAID 1+0 | CentOS 6.5 | elasticsearch ×2 |
| m21p89 | Intel® Xeon® E5-2620 v2 @ 2.10GHz ×24 | 128 GB | 7 TB | RAID 1+0 | CentOS 6.5 | elasticsearch ×2 |

Once the new servers were in place, the first order of business was to expand Elasticsearch. With two nodes per server, the cluster would double from 4 nodes to 8.

2. Expansion Procedure

Existing nodes:

| Node name | Server | HTTP port | Rack | Xms & Xmx |
| --- | --- | --- | --- | --- |
| node1 | 192.168.21.23 | 9201 | rack1 | 20G |
| node2 | 192.168.21.23 | 9202 | rack1 | 20G |
| node3 | 192.168.21.24 | 9203 | rack2 | 20G |
| node4 | 192.168.21.24 | 9204 | rack2 | 20G |

New nodes:

| Node name | Server | HTTP port | Rack | Xms & Xmx |
| --- | --- | --- | --- | --- |
| node5 | 192.168.21.88 | 9205 | rack3 | 20G |
| node6 | 192.168.21.88 | 9206 | rack3 | 20G |
| node7 | 192.168.21.89 | 9207 | rack4 | 20G |
| node8 | 192.168.21.89 | 9208 | rack4 | 20G |

With the layout above, bringing a new node into the cluster only required setting discovery.zen.ping.unicast.hosts: ["192.168.21.23", "192.168.21.24", "192.168.21.88", "192.168.21.89"], after which the node joins the cluster for horizontal scaling. Once this was done on all four new nodes, they joined the cluster and began receiving data.
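For reference, the per-node configuration might look like the sketch below for node5. Only the node name, HTTP port, rack attribute, and unicast host list come from the tables above; the cluster name and data path are illustrative (the path pattern is inferred from the error logs shown later):

# elasticsearch.yml for node5 (ES 5.x) -- a minimal sketch, values partly assumed
cluster.name: elk-cluster                    # illustrative name, not from the original post
node.name: node5
node.attr.rack: rack3                        # rack attribute (the logs later show rack=... on nodes)
http.port: 9205
path.data: /opt/elasticsearch/node5/data     # pattern inferred from the translog path in the error log
discovery.zen.ping.unicast.hosts: ["192.168.21.23", "192.168.21.24", "192.168.21.88", "192.168.21.89"]
# cluster.routing.allocation.awareness.attributes: rack   # presumably enabled, given the rack labels
# The 20G heap (-Xms20g/-Xmx20g) is set in jvm.options in ES 5.x, not in this file.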

We had never tuned index sharding: every index used the system default of 5 shards, even though index data sizes vary widely. After reviewing the indices, we decided to double the shard count for the larger ones. The operations:

curl -XPOST http://192.168.21.88:9205/_template/nginx -d '{"order":0,"template":"nginx-*","settings":{"index":{"number_of_shards":"10","refresh_interval":"5s"}},"mappings":{},"aliases":{}}'

curl -XPUT http://192.168.21.88:9205/_template/applog-yufa -d '{"order":0,"template":"applog-yufa-*","settings":{"number_of_shards":10,"index":{"refresh_interval":"5s"}},"mappings":{"_default_":{"dynamic_templates":[{"strings_as_keywords":{"mapping":{"norms":false,"type":"text","fields":{"keyword":{"ignore_above":256,"type":"keyword"}}},"match_mapping_type":"string"}}],"_all":{"norms":{"enabled":false},"enabled":true},"properties":{"@timestamp":{"type":"date"},"offset":{"type":"long","doc_values":"true"},"level":{"type":"keyword"},"appName":{"type":"keyword"},"beat":{"properties":{"hostname":{"type":"keyword"},"name":{"type":"keyword"}}},"input_type":{"type":"keyword"},"source":{"type":"keyword"},"message":{"type":"text"},"type":{"type":"keyword"},"class":{"type":"keyword"},"threadName":{"type":"keyword"},"timestamp":{"type":"keyword"}}}},"aliases":{}}'

curl -XPUT http://192.168.21.88:9205/_template/applog-prod -d '{"order":0,"template":"applog-prod-*","settings":{"number_of_shards":10,"index":{"refresh_interval":"5s"}},"mappings":{"_default_":{"dynamic_templates":[{"strings_as_keywords":{"mapping":{"norms":false,"type":"text","fields":{"keyword":{"ignore_above":256,"type":"keyword"}}},"match_mapping_type":"string"}}],"_all":{"norms":{"enabled":false},"enabled":true},"properties":{"@timestamp":{"type":"date"},"offset":{"type":"long","doc_values":"true"},"level":{"type":"keyword"},"appName":{"type":"keyword"},"beat":{"properties":{"hostname":{"type":"keyword"},"name":{"type":"keyword"}}},"input_type":{"type":"keyword"},"source":{"type":"keyword"},"message":{"type":"text"},"type":{"type":"keyword"},"class":{"type":"keyword"},"threadName":{"type":"keyword"},"timestamp":{"type":"keyword"}}}},"aliases":{}}'

Note: before a PUT like this, first GET the existing _template, make additions or changes only to the parts that need to change, and then PUT the complete result back. That way the settings you did not intend to touch are preserved.
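A sketch of that workflow for the nginx template (the jq step is my addition, not from the original; it is needed because the GET response nests the template body under its name, so the response cannot be PUT back verbatim):

# 1. Fetch the current template; the response looks like {"nginx": { ...template body... }}
curl -XGET 'http://192.168.21.88:9205/_template/nginx?pretty' > nginx.json

# 2. Strip the outer "nginx" key (here with jq), then edit the copy,
#    e.g. set settings.index.number_of_shards to "10"
jq '.nginx' nginx.json > nginx-body.json

# 3. PUT the complete edited body back; everything left untouched is preserved
curl -XPUT 'http://192.168.21.88:9205/_template/nginx' -d @nginx-body.json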

By the time all of this was done it was 8 PM, so I clocked out and went home.

3. The Failure

Before I even reached the office the next morning, colleagues were bombarding me with messages: the ELK cluster was unusable! When I arrived and connected to the servers, every newly added node was hung. Most of the data had already been relocated to the new nodes, and the servers themselves could no longer be logged into. I asked the ops team to kill the elasticsearch processes. Below is the state of one hung node, captured during recovery:

[Figure: node in a hung state]

After restarting the nodes above, the cluster began to recover:

[Figure: cluster health is red]

Checking Redis revealed a large backlog of data:

[Figure: Redis keys]

Redis memory usage was climbing rapidly and had already reached 10 GB:

[Figure: Redis memory usage]

Next I checked the Logstash log:

[Figure: Logstash log]

It was full of messages like this:

retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[nginx-2017.08.16][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest to [nginx-2017.08.16] containing [2] requests]"})

The cause: with so many nodes down, some indices were left with unassigned primary shards, so writes to those indices timed out. That throttled Logstash's output rate, which in turn let data accumulate in Redis.
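Two quick checks confirm this kind of situation (host and port follow the earlier examples):

# Cluster health: "status": "red" means at least one primary shard is unassigned
curl -XGET 'http://192.168.21.23:9203/_cluster/health?pretty'

# Ask the cluster why a shard is unassigned (allocation explain API, available since ES 5.0;
# with no body it explains the first unassigned shard it finds)
curl -XGET 'http://192.168.21.23:9203/_cluster/allocation/explain?pretty'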

As for why the nodes hung, the Elasticsearch log held the answer:

org.elasticsearch.transport.RemoteTransportException: [node6][192.168.21.88:9301][indices:data/write/bulk[s][r]]
Caused by: org.elasticsearch.index.engine.IndexFailedEngineException: Index failed for [applog-prod#AV3nwvfEpn_W3_8eumB2]
        at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:418) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:552) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:542) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnReplica(TransportIndexAction.java:166) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.onReplicaShard(TransportShardBulkAction.java:457) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.bulk.TransportShardBulkAction.onReplicaShard(TransportShardBulkAction.java:74) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportWriteAction.shardOperationOnReplica(TransportWriteAction.java:85) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportWriteAction.shardOperationOnReplica(TransportWriteAction.java:50) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.onResponse(TransportReplicationAction.java:457) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.onResponse(TransportReplicationAction.java:434) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:142) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.shard.IndexShard.acquireReplicaOperationLock(IndexShard.java:1667) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction.acquireReplicaOperationLock(TransportReplicationAction.java:862) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.doRun(TransportReplicationAction.java:526) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicaOperationTransportHandler.messageReceived(TransportReplicationAction.java:418) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicaOperationTransportHandler.messageReceived(TransportReplicationAction.java:408) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1348) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:520) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.0.1.jar:5.0.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_101]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_101]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_101]
Caused by: org.apache.lucene.store.AlreadyClosedException: translog is already closed
        at org.elasticsearch.index.translog.Translog.ensureOpen(Translog.java:1342) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.translog.Translog.add(Translog.java:433) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.engine.InternalEngine.maybeAddToTranslog(InternalEngine.java:393) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.engine.InternalEngine.innerIndex(InternalEngine.java:524) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:409) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:552) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:542) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnReplica(TransportIndexAction.java:166) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportWriteAction.shardOperationOnReplica(TransportWriteAction.java:85) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportWriteAction.shardOperationOnReplica(TransportWriteAction.java:50) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.onResponse(TransportReplicationAction.java:457) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.onResponse(TransportReplicationAction.java:434) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:142) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.shard.IndexShard.acquireReplicaOperationLock(IndexShard.java:1667) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction.acquireReplicaOperationLock(TransportReplicationAction.java:862) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.doRun(TransportReplicationAction.java:526) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicaOperationTransportHandler.messageReceived(TransportReplicationAction.java:418) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicaOperationTransportHandler.messageReceived(TransportReplicationAction.java:408) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1348) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:520) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.0.1.jar:5.0.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_101]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_101]
        at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_101]
Caused by: java.nio.file.FileSystemException: /opt/elasticsearch/node6/data/nodes/0/indices/NP2tORUfSq6jl0lb5CzOVw/2/translog/translog.ckp: Too many open files in system
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91) ~[?:?]
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:?]
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:?]
        at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177) ~[?:?]
        at java.nio.channels.FileChannel.open(FileChannel.java:287) ~[?:1.8.0_101]
        at java.nio.channels.FileChannel.open(FileChannel.java:335) ~[?:1.8.0_101]
        at org.elasticsearch.index.translog.Checkpoint.write(Checkpoint.java:127) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.translog.TranslogWriter.writeCheckpoint(TranslogWriter.java:312) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.translog.TranslogWriter.syncUpTo(TranslogWriter.java:273) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.translog.Translog.ensureSynced(Translog.java:553) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.translog.Translog.ensureSynced(Translog.java:578) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.shard.IndexShard$1.write(IndexShard.java:1679) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.AsyncIOProcessor.processList(AsyncIOProcessor.java:107) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.AsyncIOProcessor.drainAndProcess(AsyncIOProcessor.java:99) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.AsyncIOProcessor.put(AsyncIOProcessor.java:82) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.shard.IndexShard.sync(IndexShard.java:1701) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportWriteAction$AsyncAfterWriteAction.run(TransportWriteAction.java:310) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportWriteAction$WriteReplicaResult.<init>(TransportWriteAction.java:177) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportWriteAction.shardOperationOnReplica(TransportWriteAction.java:86) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportWriteAction.shardOperationOnReplica(TransportWriteAction.java:50) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.onResponse(TransportReplicationAction.java:457) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.onResponse(TransportReplicationAction.java:434) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:142) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.index.shard.IndexShard.acquireReplicaOperationLock(IndexShard.java:1667) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction.acquireReplicaOperationLock(TransportReplicationAction.java:862) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncReplicaAction.doRun(TransportReplicationAction.java:526) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicaOperationTransportHandler.messageReceived(TransportReplicationAction.java:418) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.action.support.replication.TransportReplicationAction$ReplicaOperationTransportHandler.messageReceived(TransportReplicationAction.java:408) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.transport.TcpTransport$RequestHandler.doRun(TcpTransport.java:1348) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:520) ~[elasticsearch-5.0.1.jar:5.0.1]
        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-5.0.1.jar:5.0.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_101]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_101]
        at java.lang.Thread.run(Thread.java:745) ~[?:1.8.0_101]

The key line: /opt/elasticsearch/node6/data/nodes/0/indices/NP2tORUfSq6jl0lb5CzOVw/2/translog/translog.ckp: Too many open files in system. The number of simultaneously open files had hit the system's limit, which is also why we could no longer log into the servers. (Strictly, the "in system" wording indicates the kernel-wide file table, fs.file-max, was exhausted, not just the per-user nofile limit in limits.conf, so both limits are worth checking.) How it happened is easy to see in hindsight: each server runs two nodes, both under the elastic user, whose open-file count is capped in limits.conf. While the cluster was relocating shards to the new nodes it was writing files heavily, and live data kept streaming in at the same time. Together they pushed the number of open files past the threshold, and Elasticsearch hung.
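A few commands for checking and raising the limits involved (the numeric values below are illustrative, not the values from this cluster):

# Kernel-wide file table: limit and current usage
sysctl fs.file-max
cat /proc/sys/fs/file-nr          # allocated / free / maximum

# Per-process limit the elastic user actually runs with
su - elastic -c 'ulimit -n'

# Raise the per-user limit in /etc/security/limits.conf (illustrative values):
#   elastic  soft  nofile  65536
#   elastic  hard  nofile  65536

# Raise the kernel-wide table if "Too many open files in system" recurs:
#   echo 'fs.file-max = 6553600' >> /etc/sysctl.conf && sysctl -p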

4. Recovery

Query the shard state of the cluster:

curl -XGET http://192.168.21.23:9203/_cat/shards |fgrep UNASSIGNED
[Figure: shard status]

The indices that had to take writes that day also had unassigned shards:

curl -XGET http://192.168.21.23:9203/_cat/shards |fgrep UNASSIGNED |grep '2017.08.16'
[Figure: shard status for the day's indices]

The problem was also visible in Kibana:

[Figure: Kibana status]

With Redis memory growing without bound, the immediate priority was to get writes flowing again so the Redis backlog could drain; otherwise the Redis server would run out of memory. Rather than wait to find out whether Elasticsearch could recover the shards on its own, or how long that might take, I searched the API documentation and found the command needed: reroute. Through Kibana, allocate a primary shard:

POST /_cluster/reroute
{
  "commands": [
    {
      "allocate_stale_primary": {
        "index": "applog-prod-2017.08.16",
        "shard": 2,
        "node": "node7",
        "accept_data_loss": true
      }
    }
  ]
}
[Figure: allocating the primary shard]
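The same request can be sent with curl from the shell (host and port as in the earlier examples):

curl -XPOST 'http://192.168.21.23:9203/_cluster/reroute' -d '{
  "commands": [
    {
      "allocate_stale_primary": {
        "index": "applog-prod-2017.08.16",
        "shard": 2,
        "node": "node7",
        "accept_data_loss": true
      }
    }
  ]
}'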

Command format reference: https://kibana.logstash.es/content/elasticsearch/principle/shard-allocate.html

Note that when rerouting a primary shard this way, the target node must actually hold that shard's index files on disk; otherwise recovery fails with an index-not-found error:

[2017-08-23T19:40:24,000][WARN ][o.e.i.c.IndicesClusterStateService] [node1-1] [[applog-prod-2017.07.26][1]] marking and sending shard failed due to [failed recovery]
org.elasticsearch.indices.recovery.RecoveryFailedException: [applog-prod-2017.07.26][1]: Recovery failed on {node1-1}{JIKzocuZRtec_XkrM1eXDg}{JN_7zoJKRBCSNTCgoDzpuQ}{192.168.21.23}{192.168.21.23:9300}{rack=r1, ml.enabled=true}
        at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1491) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.5.1.jar:5.5.1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_101]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_101]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_101]
Caused by: org.elasticsearch.index.shard.IndexShardRecoveryException: failed to fetch index version after copying it over
        at org.elasticsearch.index.shard.StoreRecovery.internalRecoverFromStore(StoreRecovery.java:344) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromStore$0(StoreRecovery.java:90) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:257) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.StoreRecovery.recoverFromStore(StoreRecovery.java:88) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.IndexShard.recoverFromStore(IndexShard.java:1239) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1487) ~[elasticsearch-5.5.1.jar:5.5.1]
        ... 4 more
Caused by: org.elasticsearch.index.shard.IndexShardRecoveryException: shard allocated for local recovery (post api), should exist, but doesn't, current files: []
        at org.elasticsearch.index.shard.StoreRecovery.internalRecoverFromStore(StoreRecovery.java:329) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromStore$0(StoreRecovery.java:90) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:257) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.StoreRecovery.recoverFromStore(StoreRecovery.java:88) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.IndexShard.recoverFromStore(IndexShard.java:1239) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1487) ~[elasticsearch-5.5.1.jar:5.5.1]
        ... 4 more
Caused by: org.apache.lucene.index.IndexNotFoundException: no segments* file found in store(mmapfs(/opt/elasticsearch/elasticsearch-node1-1/data/nodes/0/indices/dYkqpjsZRw2apPsylTHziQ/1/index)): files: []
        at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:687) ~[lucene-core-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:29:46]
        at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:644) ~[lucene-core-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:29:46]
        at org.apache.lucene.index.SegmentInfos.readLatestCommit(SegmentInfos.java:450) ~[lucene-core-6.6.0.jar:6.6.0 5c7a7b65d2aa7ce5ec96458315c661a18b320241 - ishan - 2017-05-30 07:29:46]
        at org.elasticsearch.common.lucene.Lucene.readSegmentInfos(Lucene.java:129) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.store.Store.readSegmentsInfo(Store.java:199) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.store.Store.readLastCommittedSegmentsInfo(Store.java:184) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.StoreRecovery.internalRecoverFromStore(StoreRecovery.java:319) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromStore$0(StoreRecovery.java:90) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.StoreRecovery.executeRecovery(StoreRecovery.java:257) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.StoreRecovery.recoverFromStore(StoreRecovery.java:88) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.IndexShard.recoverFromStore(IndexShard.java:1239) ~[elasticsearch-5.5.1.jar:5.5.1]
        at org.elasticsearch.index.shard.IndexShard.lambda$startRecovery$2(IndexShard.java:1487) ~[elasticsearch-5.5.1.jar:5.5.1]
        ... 4 more
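One way to check in advance which nodes hold on-disk copies of a shard is the shard stores API (a sketch using the index and host from the examples above):

# Lists, per shard, the nodes that have a copy of the shard data on disk
curl -XGET 'http://192.168.21.23:9203/applog-prod-2017.08.16/_shard_stores?status=all&pretty'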

In short, when allocating a primary shard, always confirm first that the target node holds that shard's index files.

5. Further Thoughts

Although the cluster recovered and data could be written again, a few things still need improvement:

1. Redis is a weak link as the buffering and load-balancing layer. Now that there are enough servers, Kafka should replace Redis so that buffered data lives on disk rather than in memory, eliminating the risk of Redis exhausting memory (see the sketch after this list).
2. When expanding Elasticsearch, avoid bringing many nodes in at once. If a large batch of nodes must be added in one operation, check beforehand whether the recovery traffic will push the number of open files past the configured limits.
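A rough sketch of the Kafka-based pipeline suggested in point 1. The broker addresses, topic name, and consumer group are placeholders, not values from this deployment:

# filebeat.yml -- ship logs to Kafka instead of Redis (Filebeat 5.x syntax)
output.kafka:
  hosts: ["kafka1:9092", "kafka2:9092"]   # placeholder broker list
  topic: "applog"                         # placeholder topic

# logstash.conf -- consume from Kafka, keeping the existing elasticsearch output
input {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092"
    topics => ["applog"]
    group_id => "logstash"
  }
}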
