
Root Cause Analysis: Hadoop YARN ResourceManager Crash Caused by ZooKeeper's Znode Data Size Limit

Author: 九州暮云 · Published 2019-08-21

Environment

  • Hadoop version: Apache Hadoop 2.6.3
  • ZooKeeper version: ZooKeeper 3.4.10
  • Two ResourceManager nodes: active node RM01, standby node RM02

Background

Recently our Hadoop monitoring platform raised an alert that a ResourceManager (RM below) had crashed. We analyzed the RM logs and the ZooKeeper server (ZK below) logs; the relevant exception entries are shown below.

RM Logs

The exception log on RM01 reads:

2019-03-03 02:33:01,826 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Retrying operation on ZK. Retry no. 999
2019-03-03 02:33:02,088 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server 10.204.245.44/10.204.245.44:5181. Will not attempt to authenticate using SASL (unknown error)
2019-03-03 02:33:02,089 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to 10.204.245.44/10.204.245.44:5181, initiating session
2019-03-03 02:33:02,090 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server 10.204.245.44/10.204.245.44:5181, sessionid = 0x1690d678e9448da, negotiated timeout = 60000
2019-03-03 02:33:02,101 WARN org.apache.zookeeper.ClientCnxn: Session 0x1690d678e9448da for server 10.204.245.44/10.204.245.44:5181, unexpected error, closing socket connection and attempting reconnect
java.io.IOException: Broken pipe
	at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
	at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
	at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
	at sun.nio.ch.IOUtil.write(IOUtil.java:65)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:117)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2019-03-03 02:33:02,201 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Exception while executing a ZK operation.
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /bi-rmstore/ZKRMStateRoot/RMAppRoot/application_1551528254601_0725/appattempt_1551528254601_0725_000001
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
	at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
	at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1073)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$9.run(ZKRMStateStore.java:1058)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$9.run(ZKRMStateStore.java:1055)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1128)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1161)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.existsWithRetries(ZKRMStateStore.java:1055)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.updateApplicationAttemptStateInternal(ZKRMStateStore.java:712)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppAttemptTransition.transition(RMStateStore.java:243)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppAttemptTransition.transition(RMStateStore.java:226)
	at org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362)
	at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
	at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
	at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.handleStoreEvent(RMStateStore.java:812)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:872)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:867)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:182)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:109)
	at java.lang.Thread.run(Thread.java:745)
2019-03-03 02:33:02,201 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Maxed out ZK retries. Giving up!
2019-03-03 02:33:02,201 ERROR org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Error updating appAttempt: appattempt_1551528254601_0725_000001
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /bi-rmstore/ZKRMStateRoot/RMAppRoot/application_1551528254601_0725/appattempt_1551528254601_0725_000001
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
	at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
	at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1073)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$9.run(ZKRMStateStore.java:1058)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$9.run(ZKRMStateStore.java:1055)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1128)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1161)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.existsWithRetries(ZKRMStateStore.java:1055)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.updateApplicationAttemptStateInternal(ZKRMStateStore.java:712)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppAttemptTransition.transition(RMStateStore.java:243)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppAttemptTransition.transition(RMStateStore.java:226)
	at org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362)
	at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
	at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
	at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.handleStoreEvent(RMStateStore.java:812)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:872)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:867)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:182)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:109)
	at java.lang.Thread.run(Thread.java:745)
2019-03-03 02:33:02,201 ERROR org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: State store operation failed 
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /bi-rmstore/ZKRMStateRoot/RMAppRoot/application_1551528254601_0725/appattempt_1551528254601_0725_000001
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
	at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
	at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1073)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$9.run(ZKRMStateStore.java:1058)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$9.run(ZKRMStateStore.java:1055)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1128)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1161)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.existsWithRetries(ZKRMStateStore.java:1055)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.updateApplicationAttemptStateInternal(ZKRMStateStore.java:712)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppAttemptTransition.transition(RMStateStore.java:243)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$UpdateAppAttemptTransition.transition(RMStateStore.java:226)
	at org.apache.hadoop.yarn.state.StateMachineFactory$SingleInternalArc.doTransition(StateMachineFactory.java:362)
	at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
	at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
	at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.handleStoreEvent(RMStateStore.java:812)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:872)
	at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:867)
	at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:182)
	at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:109)
	at java.lang.Thread.run(Thread.java:745)
2019-03-03 02:33:03,074 INFO org.apache.zookeeper.ZooKeeper: Session: 0x1690d678e9448da closed
2019-03-03 02:33:03,074 INFO org.apache.hadoop.yarn.event.AsyncDispatcher: AsyncDispatcher is draining to stop, igonring any new events.
2019-03-03 02:33:03,074 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: appattempt_1551528254601_0723_000001 State change from RUNNING to FINAL_SAVING
2019-03-03 02:33:03,074 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Ignore watcher event type: None with state:Disconnected for path:null from old session
2019-03-03 02:33:03,074 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Ignore watcher event type: None with state:SyncConnected for path:null from old session
2019-03-03 02:33:03,074 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Ignore watcher event type: None with state:Disconnected for path:null from old session
2019-03-03 02:33:03,074 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Ignore watcher event type: None with state:SyncConnected for path:null from old session
2019-03-03 02:33:03,074 WARN org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: State-store fenced ! Transitioning RM to standby

The exception log on RM02 reads:

2019-03-05 12:10:24,634 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Retrying operation on ZK. Retry no. 124
2019-03-05 12:10:25,447 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server 10.204.245.45/10.204.245.45:5181. Will not attempt to authenticate using SASL (unknown error)
2019-03-05 12:10:25,447 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to 10.204.245.45/10.204.245.45:5181, initiating session
2019-03-05 12:10:25,449 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server 10.204.245.45/10.204.245.45:5181, sessionid = 0x1693f9adb5401ae, negotiated timeout = 60000
2019-03-05 12:10:26,704 WARN org.apache.zookeeper.ClientCnxn: Session 0x1693f9adb5401ae for server 10.204.245.45/10.204.245.45:5181, unexpected error, closing socket connection and attempting reconnect
java.lang.OutOfMemoryError: Java heap space
	at java.util.Arrays.copyOf(Arrays.java:3236)
	at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
	at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
	at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:135)
	at java.io.DataOutputStream.writeInt(DataOutputStream.java:197)
	at org.apache.jute.BinaryOutputArchive.writeInt(BinaryOutputArchive.java:55)
	at org.apache.zookeeper.proto.SetDataRequest.serialize(SetDataRequest.java:59)
	at org.apache.zookeeper.MultiTransactionRecord.serialize(MultiTransactionRecord.java:77)
	at org.apache.zookeeper.ClientCnxn$Packet.createBB(ClientCnxn.java:292)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:115)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)

Those are the logs from the two RM nodes. The exact timestamps can be ignored: the March 3 and March 5 incidents follow a similar trajectory. The failure unfolded as follows:

1. In yarn-site.xml we set yarn.resourcemanager.store.class to org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore, which means the RM persists its state information to ZK. Two other important properties work alongside it (a configuration excerpt follows the list below):

  • yarn.resourcemanager.zk-num-retries: how many times the RM retries establishing a connection to ZK after the connection is lost; ours is set to 1000
  • yarn.resourcemanager.zk-retry-interval-ms: the interval between connection retries; ours is 1000 milliseconds, i.e. 1 second
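
For reference, this is roughly what those settings look like in yarn-site.xml (the values shown are the ones used on our cluster; tune them for your own environment):

<property>
  <name>yarn.resourcemanager.store.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<property>
  <!-- number of reconnect attempts after the RM loses its ZK connection -->
  <name>yarn.resourcemanager.zk-num-retries</name>
  <value>1000</value>
</property>
<property>
  <!-- pause between two consecutive retries, in milliseconds -->
  <name>yarn.resourcemanager.zk-retry-interval-ms</name>
  <value>1000</value>
</property>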

With this configuration, once the RM loses its ZK connection it retries once per second until the maximum retry count is reached. From the first log excerpt we can trace the following sequence:

(1) The RM on the active node RM01 hit an exception while writing data under the /bi-rmstore znode in ZK and got back the connection-loss error code KeeperErrorCode = ConnectionLoss:

2019-03-03 02:33:02,201 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Exception while executing a ZK operation.
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /bi-rmstore/ZKRMStateRoot/RMAppRoot/application_1551528254601_0725/appattempt_1551528254601_0725_000001

(2) On receiving the connection-loss error code, the RM retried according to its retry logic, all the way up to retry no. 999. Given the retry interval, this took at least 999 seconds, i.e. well over 15 minutes:

2019-03-03 02:33:01,826 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Retrying operation on ZK. Retry no. 999

(3) The 999th retry still ended in connection loss, so the RM attempted retry no. 1000; at that point the maximum retry count was reached, and it logged that it had maxed out the ZK retries and that the state-store operation had failed:

2019-03-03 02:33:02,201 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Maxed out ZK retries. Giving up!
2019-03-03 02:33:02,201 ERROR org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: State store operation failed

(4) Since the active RM could not get any data written into ZK, it marked itself as faulty (fenced), triggered failover, and handed over to RM02, the standby node. At this point an active/standby switch took place:

2019-03-03 02:33:03,074 WARN org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: State-store fenced ! Transitioning RM to standby

(5) After the failover, RM02 became the active node. Once the switch completed, it immediately retried the operation RM01 had failed to finish, i.e. writing the data to ZK. It could not write it either, and on retry no. 124 it hit a JVM heap OutOfMemoryError:

2019-03-05 12:10:24,634 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Retrying operation on ZK. Retry no. 124
java.lang.OutOfMemoryError: Java heap space

A heap OutOfMemoryError is generally thrown when the heap already holds many live objects and a new object cannot obtain the heap memory it needs.

(6) RM02 had now run out of heap memory, but its process neither exited nor failed over. The victims were the 90 NodeManagers: once they could no longer reach an RM, they shut themselves down, and the whole cluster went down with them. One question runs through the whole incident: why couldn't the RM write its data into ZK? Answering it requires the ZK logs, but first note the code location where the OutOfMemoryError was thrown:

java.lang.OutOfMemoryError: Java heap space
	at java.util.Arrays.copyOf(Arrays.java:3236)
	at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
	at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
	at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:135)
	at java.io.DataOutputStream.writeInt(DataOutputStream.java:197)
	at org.apache.jute.BinaryOutputArchive.writeInt(BinaryOutputArchive.java:55)
	at org.apache.zookeeper.proto.SetDataRequest.serialize(SetDataRequest.java:59)
	at org.apache.zookeeper.MultiTransactionRecord.serialize(MultiTransactionRecord.java:77)
	at org.apache.zookeeper.ClientCnxn$Packet.createBB(ClientCnxn.java:292)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:115)
	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:366)
	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)

The frame to focus on is org.apache.zookeeper.proto.SetDataRequest.serialize. A look at the source shows this is the normal serialization step the ZK client performs when sending a write to the server, as the method name serialize suggests; the real question is how much data was being handed to ZK. The sketch below shows one way to measure that, and the ZK logs in the next section give the answer.
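
To get a feel for that size, here is a rough, self-contained sketch (my own code, not the RM's) that serializes a SetDataRequest through the same jute layer seen in the stack trace and prints the resulting byte count. The znode path and the payload size are made-up placeholders:

import java.io.ByteArrayOutputStream;

import org.apache.jute.BinaryOutputArchive;
import org.apache.zookeeper.proto.SetDataRequest;

public class SetDataRequestSize {
    public static void main(String[] args) throws Exception {
        // Placeholder payload standing in for a serialized RM application state record.
        byte[] appStateData = new byte[1_700_000];

        SetDataRequest req = new SetDataRequest(
                "/bi-rmstore/ZKRMStateRoot/RMAppRoot/some_app/some_attempt", appStateData, -1);

        // Same serialization path as SetDataRequest.serialize -> BinaryOutputArchive in the stack trace.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        req.serialize(BinaryOutputArchive.getArchive(buf), "request");

        System.out.println("Serialized SetDataRequest: " + buf.size() + " bytes");
    }
}

If the printed size exceeds the ZK server's jute.maxbuffer, the request will be rejected exactly as in the logs below.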

ZK Logs

The ZK server log reads:

2019-03-05 12:59:46,158 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:5181:NIOServerCnxnFactory@197] - Accepted socket connection from /10.216.2.25:41586
2019-03-05 12:59:46,160 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:5181:ZooKeeperServer@966] - Client attempting to renew session 0x2690d678e98ae8b at /10.216.2.25:41586
2019-03-05 12:59:46,160 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:5181:Learner@108] - Revalidating client: 0x2690d678e98ae8b
2019-03-05 12:59:46,161 [myid:1] - INFO  [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:5181:ZooKeeperServer@712] - Established session 0x2690d678e98ae8b with negotiated timeout 60000 for client /10.216.2.25:41586
2019-03-05 12:59:46,167 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:5181:ZooKeeperServer@997] - got auth packet /10.216.2.25:41586
2019-03-05 12:59:46,167 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:5181:ZooKeeperServer@1031] - auth success /10.216.2.25:41586
2019-03-05 12:59:46,177 [myid:1] - WARN  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:5181:NIOServerCnxn@380] - Exception causing close of session 0x2690d678e98ae8b due to java.io.IOException: Len error 1788046
2019-03-05 12:59:46,177 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:5181:NIOServerCnxn@1091] - Closed socket connection for client /10.216.2.25:41586 which had sessionid 0x2690d678e98ae8b

The ZK log shows the ZK server accepting a connection from the process at 10.216.2.25:41586 (RM01) and then closing the session after an exception. The key lines are:

2019-03-05 12:59:46,177 [myid:1] - WARN  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:5181:NIOServerCnxn@380] - Exception causing close of session 0x2690d678e98ae8b due to java.io.IOException: Len error 1788046
2019-03-05 12:59:46,177 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:5181:NIOServerCnxn@1091] - Closed socket connection for client /10.216.2.25:41586 which had sessionid 0x2690d678e98ae8b

These lines show ZK receiving a 1,788,046-byte (roughly 1.7 MB) request, raising an exception, and closing the session. ZK has a global parameter, jute.maxbuffer, which defines the maximum amount of data a single znode can store, in bytes; it defaults to 0xfffff bytes, i.e. just under 1 MB, so out of the box ZK allows each znode to hold only about 1 MB of data. Here the YARN RM sent ZK a 1.7 MB payload, ZK decided it exceeded the configured limit, threw the Len error 1788046 exception, closed the session, and returned the KeeperErrorCode = ConnectionLoss error code to the client.
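
For context, the check that produces the "Len error" message looks roughly like the following simplified paraphrase of ZooKeeper 3.4.x's request-length handling in NIOServerCnxn (a sketch, not the verbatim source):

import java.io.IOException;

class LenCheckSketch {
    // jute.maxbuffer defaults to 0xfffff bytes, i.e. just under 1 MB
    static final int MAX_BUFFER = Integer.getInteger("jute.maxbuffer", 0xfffff);

    static void checkRequestLength(int len) throws IOException {
        if (len < 0 || len > MAX_BUFFER) {
            // This is the "Len error 1788046" seen above; the server then closes
            // the connection, which the client reports as ConnectionLoss.
            throw new IOException("Len error " + len);
        }
        // otherwise the server allocates a buffer of size len and reads the request
    }
}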

At this point it is basically clear why neither RM could get its data written into ZK. One question remains: why did the JVM heap overflow? The cause is not the failed ZK write itself but the code below, so let's walk through it.

YARN's ZK Read/Write Code

The code below is excerpted from the state-saving and retry logic of the org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore class; the details are explained in the comments. First, the logic that saves application state:

@Override
public synchronized void updateApplicationStateInternal(ApplicationId appId,
        ApplicationStateData appStateDataPB) throws Exception {
    String nodeUpdatePath = getNodePath(rmAppRoot, appId.toString());

    if (LOG.isDebugEnabled()) {
        LOG.debug("Storing final state info for app: " + appId + " at: "
                + nodeUpdatePath);
    }

    // serialized RM application state (protobuf -> byte array)
    byte[] appStateData = appStateDataPB.getProto().toByteArray();

    // write the data through the retry wrappers
    if (existsWithRetries(nodeUpdatePath, true) != null) {
        setDataWithRetries(nodeUpdatePath, appStateData, -1);
    } else {
        createWithRetries(nodeUpdatePath, appStateData, zkAcl,
                CreateMode.PERSISTENT);
        LOG.debug(appId + " znode didn't exist. Created a new znode to"
                + " update the application state.");
    }
}

This code eventually runs through the retry logic below:

T runWithRetries() throws Exception {
    int retry = 0; // retry counter
    while (true) {
        try {
            return runWithCheck();
        } catch (KeeperException.NoAuthException nae) {
            if (HAUtil.isHAEnabled(getConfig())) {
                // NoAuthException possibly means that this store is fenced due to
                // another RM becoming active. Even if not,
                // it is safer to assume we have been fenced
                throw new StoreFencedException();
            }
        } catch (KeeperException ke) {
            if (ke.code() == Code.NODEEXISTS) {
                LOG.info("znode already exists!");
                return null;
            }
            LOG.info("Exception while executing a ZK operation.", ke);
            retry++; // bump the retry counter

            /*
             * While the retry limit has not been reached, pick a retry strategy
             * based on the error code returned by the ZK server.
             */
            // connection loss / operation timeout: sleep and retry on the same connection
            if (shouldRetry(ke.code()) && retry < numRetries) {
                LOG.info("Retrying operation on ZK. Retry no. " + retry);
                Thread.sleep(zkRetryInterval);
                continue;
            }
            // session expired / session moved: recreate the connection, then retry
            if (shouldRetryWithNewConnection(ke.code()) && retry < numRetries) {
                LOG.info("Retrying operation on ZK with new Connection. " +
                        "Retry no. " + retry);
                Thread.sleep(zkRetryInterval);
                createConnection();
                syncInternal(ke.getPath());
                continue;
            }
            // retry limit reached: give up and rethrow
            LOG.info("Maxed out ZK retries. Giving up!");
            throw ke;
        }
    }
}

This is the retry logic the RM executes whenever a ZK read or write fails; the values of zkRetryInterval and numRetries come from the two retry properties listed in the RM Logs section.

Root Cause

Putting it all together, the path to the OutOfMemoryError was: the RM writes data to ZK; ZK decides the payload exceeds its maximum data size and closes the connection to the RM; the RM catches the connection-loss code returned by ZK and starts retrying, once per second for up to 1000 attempts, until the RM node's heap is exhausted. Monitoring data shows that at the time of the OOM the YARN queues held a backlog of more than 200 pending applications with dozens more running, while the RM's JVM heap normally runs at 4-5 GB against a 6 GB maximum. In short, the OOM was caused by retrying too many times, too frequently, creating more objects than the heap could hold.

The ZK data size limit was the trigger; the looping retries in the retry mechanism were the root cause. When the exception is thrown, the retry logic runs right at the point of failure, with a short interval and a high frequency, and meanwhile the stack frames and the heap objects they reference cannot be released. Two lessons follow (see the sketch after this list):

  • When configuring a retry policy, choose values that match the actual workload
  • The while(true) in the retry logic is effectively an unbounded loop: convenient for the programmer (easy to understand, easy to write) but hard on the machine. It can be thought of as analogous to recursion, which is built on the stack: every level of nesting claims another stack frame, deeply nested cases eventually end in memory exhaustion, and the flood of extra calls adds time overhead as well
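
As a concrete illustration of the first point, here is a minimal sketch (my own, not RM code) of a bounded retry with exponential backoff and a cap, which limits both the total retry time and the pressure the retries put on the client:

import java.util.concurrent.Callable;

import org.apache.zookeeper.KeeperException;

class BoundedRetry {
    // Retry op at most maxRetries times on connection loss, doubling the delay each time.
    static <T> T runWithBackoff(Callable<T> op, int maxRetries, long baseDelayMs)
            throws Exception {
        for (int attempt = 0; ; attempt++) {
            try {
                return op.call();
            } catch (KeeperException.ConnectionLossException e) {
                if (attempt >= maxRetries) {
                    throw e; // give up instead of looping indefinitely
                }
                // exponential backoff, capped at 30 seconds
                long delay = Math.min(baseDelayMs * (1L << Math.min(attempt, 20)), 30_000L);
                Thread.sleep(delay);
            }
        }
    }
}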

Solution

The Hadoop YARN JIRA YARN-2368 shows that others have run into a similar problem before, and the community provides solutions; they differ between Hadoop versions (Apache Hadoop version numbers are used below):

1. Hadoop 2.9.0 and later:

These versions add a new property, yarn.resourcemanager.zk-max-znode-size.bytes, to yarn-site.xml. It defines the maximum amount of data the RM may store in a single ZK znode, in bytes, and defaults to 1048576 bytes, i.e. 1 MB.
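
A minimal yarn-site.xml excerpt, assuming Hadoop 2.9.0 or later (3 MB here is only an example value; keep it no larger than the ZK server's jute.maxbuffer):

<property>
  <name>yarn.resourcemanager.zk-max-znode-size.bytes</name>
  <value>3145728</value>
</property>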

2. Before Hadoop 2.9.0:

Set -Djute.maxbuffer=<bytes> in /etc/hadoop/conf/yarn-env.sh; the ZK client picks up the jute.maxbuffer value automatically when it creates a connection. Append the following to the end of yarn-env.sh. Here I set the value to 3145728 bytes, i.e. 3 MB:

YARN_OPTS="$YARN_OPTS -Djute.maxbuffer=3145728"

Whichever solution you choose, the value configured on the ZK client side must match the jute.maxbuffer value on the ZK servers. If you change one, you must change the server-side jute.maxbuffer as well (one way to do so is sketched after this list), otherwise ZK reads and writes will fail. With mismatched settings the failures look like this:

  • Client jute.maxbuffer larger than the server's: when a client writes more data than the server-side value allows, the server throws java.io.IOException: Len error
  • Client jute.maxbuffer smaller than the server's: when data read from the server exceeds the client-side value, the client throws java.io.IOException: Unreasonable length
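一
One common way to raise the limit on the ZK server side (assuming the stock ZooKeeper scripts, where zkEnv.sh sources conf/java.env) is to add a matching JVM flag on every ZK server and then restart the ensemble:

# conf/java.env on each ZooKeeper server; keep the value in sync with the clients
SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Djute.maxbuffer=3145728"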

Further Thoughts

  • Why does ZK limit the size of a znode?

ZK is built for high throughput; to keep reads fast it does not fetch requested data from files but looks it up directly in memory.

In other words, every server in a ZK ensemble holds the complete dataset, fully loaded into memory, and znode data cannot be appended to; every write replaces the whole value.

So if a znode is too large, reading or writing it introduces unpredictable latency, and oversized znodes drain the ZK servers' memory all the faster. This is also why ZK is not suited to storing large volumes of data.

  • How can you improve a ZK ensemble's performance?

Performance has two sides: write performance and read performance.

A ZK write must first go through the Leader and then be propagated to and acknowledged by more than half of the Followers before it completes, so write throughput cannot be improved by adding servers; on the contrary, the more Followers in the ensemble, the worse its write performance.

Every server in the ensemble can serve reads, so the more servers there are, the better the read performance, but adding Followers in turn drags down the ensemble's write performance.

  • What should you weigh when setting jute.maxbuffer in production?

First, consider how much data the services that depend on ZK actually store. ZK is a distributed coordination service; it only does useful work when integrated with other systems. Take YARN: it uses ZK for failover between the active and standby nodes and for persisting RM state, so you have to know how large that data is. The failover data is tiny, little more than the IPs and hostnames of the active and standby nodes. The state data is comparatively large, because the RM stores the application state generated during scheduling in ZK. Our statistics show that the affected YARN cluster has 90 nodes, the peak application count is about three times the node count, and it schedules a few thousand applications a day, which is not a particularly heavy load. Most state records fall between a few hundred KB and 1 MB, with the occasional 2 MB record, so we raised the parameter to 3 MB.

Second, consider whether changing the value affects other services that depend on ZK. In our production environment those are HBase, JStorm, and Hadoop; in principle every one of their clients would have to change the parameter as well, which is tedious work. Daily monitoring shows that none of the other services stores more than 500 KB in ZK, so their client-side values do not need to change.

Finally, consider the read/write patterns and their frequency: if the data is large and the traffic is heavy, data synchronization on the ZK servers can become slow.

  • Will the znodes holding RM state in ZK keep growing?

No. Under certain conditions the RM removes the state data stored in ZK for an applicationId; the corresponding logic in ZKRMStateStore looks like this:

public synchronized void removeApplicationStateInternal(ApplicationState appState)
        throws Exception {
    String appId = appState.getAppId().toString();          // application ID
    String appIdRemovePath = getNodePath(rmAppRoot, appId);  // znode path for this application
    ArrayList<Op> opList = new ArrayList<Op>();

    // queue delete ops for every attempt's znode, then for the application znode itself
    for (ApplicationAttemptId attemptId : appState.attempts.keySet()) {
        String attemptRemovePath = getNodePath(appIdRemovePath, attemptId.toString());
        opList.add(Op.delete(attemptRemovePath, -1));
    }
    opList.add(Op.delete(appIdRemovePath, -1));

    if (LOG.isDebugEnabled()) {
        LOG.debug("Removing info for app: " + appId + " at: " + appIdRemovePath
                + " and its attempts.");
    }
    doMultiWithRetries(opList);
}
  • Why does YARN use Protobuf?

This is a side topic. One of the biggest benefits of adopting Protocol Buffers is backward compatibility; for the details see the article 《YARN/MRv2 RPC框架深入剖析—引入Protocal Buffer的好处》.

References

李亚坤: Hadoop YARN在字节跳动的实践 (recommended; a summary of Toutiao/ByteDance's experience running and adapting YARN)

ResourceManager failed when ZKRMStateStore tries to update znode data larger than 1MB

RM状态存储与还原机制详解

Remove jute.maxbuffer limit packetLen in client side

YARN源码分析(三)-----ResourceManager HA之应用状态存储与恢复

源码走读-Yarn-ResourceManager01-基础概念

Zookeeper在RM HA的应用

YARN/MRv2 RPC框架深入剖析—引入Protocal Buffer的好处

Zookeeper 服务器端和客户端扩大节点数据1M大小限制

ZOOKEEPER jute.maxbuffer参数解惑
