Golang mgo MongoDB connection pool settings: you must set maxPoolSize manually

Our gift system uses mgo, the MongoDB driver for Go. We hit a few pitfalls along the way, so here is a summary to help others avoid them.

mgo's documentation does say that connection reuse is enabled, but observation and experiment show this does not actually bound the number of connections: reuse only happens when an idle connection is available. Under high concurrency, whenever no connection is free a new one gets created, so in the end you still have to limit the maximum number of connections yourself.

Enough talk; on to the code.

GlobalMgoSession, err := mgo.Dial(host)

func (m *MongoBaseDao) Get(tablename string, id string, result interface{}) interface{} {
    session := GlobalMgoSession.Clone()
    defer session.Close()

    collection := session.DB(globalMgoDbName).C(tablename)
    err := collection.FindId(bson.ObjectIdHex(id)).One(result)

    if err != nil {
        logkit.Logger.Error("mongo_base method:Get " + err.Error())
    }
    return result
}

At startup in Go's main entry point we create one global session. Each request then clones that session's settings and connection for its own use, and calls session.Close() when done to release the connection.
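For reference, a minimal sketch of that startup wiring might look like the following (the host string, the dial timeout, and the handler comment are placeholders, not taken from the original system):

package main

import (
    "log"
    "time"

    mgo "gopkg.in/mgo.v2"
)

var GlobalMgoSession *mgo.Session

func main() {
    // Placeholder address; in production this comes from configuration.
    host := "mongodb://127.0.0.1:27017"

    var err error
    GlobalMgoSession, err = mgo.DialWithTimeout(host, 5*time.Second)
    if err != nil {
        log.Fatalf("mongo dial failed: %v", err)
    }
    defer GlobalMgoSession.Close()

    // ... register request handlers that Clone() the global session,
    // do their work, and defer Close() on the clone, as shown above.
}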

// Clone works just like Copy, but also reuses the same socket as the original
// session, in case it had already reserved one due to its consistency
// guarantees.  This behavior ensures that writes performed in the old session
// are necessarily observed when using the new session, as long as it was a
// strong or monotonic session.  That said, it also means that long operations
// may cause other goroutines using the original session to wait.
func (s *Session) Clone() *Session {
    s.m.Lock()
    scopy := copySession(s, true)
    s.m.Unlock()
    return scopy
}

// Close terminates the session.  It's a runtime error to use a session
// after it has been closed.
func (s *Session) Close() {
    s.m.Lock()
    if s.cluster_ != nil {
        debugf("Closing session %p", s)
        s.unsetSocket() // releases the socket held by this session, setting it to nil
        s.cluster_.Release()
        s.cluster_ = nil
    }
    s.m.Unlock()
}

The comment on Clone says it reuses the original session's socket. But once concurrent requests pile up and other goroutines haven't released their connections in time, what does the current goroutine do?

func (s *Session) acquireSocket(slaveOk bool) (*mongoSocket, error) {
    // Read-only lock to check for previously reserved socket.
    s.m.RLock()

    // If there is a slave socket reserved and its use is acceptable, take it as long
    // as there isn't a master socket which would be preferred by the read preference mode.
    if s.slaveSocket != nil && s.slaveOk && slaveOk && (s.masterSocket == nil || s.consistency != PrimaryPreferred && s.consistency != Monotonic) {
        socket := s.slaveSocket
        socket.Acquire()
        s.m.RUnlock()
        logkit.Logger.Info("sgp_test 1 acquireSocket slave is ok!")
        return socket, nil
    }
    if s.masterSocket != nil {
        socket := s.masterSocket
        socket.Acquire()
        s.m.RUnlock()
        logkit.Logger.Info("sgp_test 1 acquireSocket master is ok!")
        return socket, nil
    }

    s.m.RUnlock()

    // No go.  We may have to request a new socket and change the session,
    // so try again but with an exclusive lock now.
    s.m.Lock()
    defer s.m.Unlock()

    if s.slaveSocket != nil && s.slaveOk && slaveOk && (s.masterSocket == nil || s.consistency != PrimaryPreferred && s.consistency != Monotonic) {
        s.slaveSocket.Acquire()
        logkit.Logger.Info("sgp_test 2 acquireSocket slave is ok!")
        return s.slaveSocket, nil
    }
    if s.masterSocket != nil {
        s.masterSocket.Acquire()
        logkit.Logger.Info("sgp_test 2 acquireSocket master is ok!")
        return s.masterSocket, nil
    }

    // Still not good.  We need a new socket.
    sock, err := s.cluster().AcquireSocket(s.consistency, slaveOk && s.slaveOk, s.syncTimeout, s.sockTimeout, s.queryConfig.op.serverTags, s.poolLimit)
    ......
    logkit.Logger.Info("sgp_test 3 acquireSocket cluster AcquireSocket is ok!")
    return sock, nil
}

Add some debug logging to the source, and the resulting log tells the whole story:

Mar 25 09:46:40 dev02.pandatv.com bikini[12607]:  [info] sgp_test 1 acquireSocket master is ok!
Mar 25 09:46:40 dev02.pandatv.com bikini[12607]:  [info] sgp_test 1 acquireSocket master is ok!
Mar 25 09:46:41 dev02.pandatv.com bikini[12607]:  [info] sgp_test 1 acquireSocket slave is ok!
Mar 25 09:46:41 dev02.pandatv.com bikini[12607]:  [info] sgp_test 3 acquireSocket cluster AcquireSocket is ok!
Mar 25 09:46:41 dev02.pandatv.com bikini[12607]:  [info] sgp_test 3 acquireSocket cluster AcquireSocket is ok!
Mar 25 09:46:41 dev02.pandatv.com bikini[12607]:  [info] sgp_test 3 acquireSocket cluster AcquireSocket is ok!

Connections are being created non-stop via AcquireSocket:

 $  netstat -nat|grep -i 27017|wc -l

400

If sessions are never closed, the count climbs to a terrifying 4096 and chokes off all other requests, so whenever you Clone or Copy a session, always defer its Close.
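One way to make that Close impossible to forget is to route every operation through a small wrapper that owns the Clone/Close pair. This is a sketch, not code from the gift system; withSession is a hypothetical helper, and the collection name and variables in the usage example are illustrative:

import (
    mgo "gopkg.in/mgo.v2"
    "gopkg.in/mgo.v2/bson"
)

// withSession clones the global session, runs fn, and always releases
// the clone's socket, even if fn returns an error or panics.
func withSession(fn func(s *mgo.Session) error) error {
    session := GlobalMgoSession.Clone()
    defer session.Close()
    return fn(session)
}

// Usage:
err := withSession(func(s *mgo.Session) error {
    return s.DB(globalMgoDbName).C("gift").FindId(bson.ObjectIdHex(id)).One(&result)
})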

Enabling the pool limit (maxPoolSize in the connection string, SetPoolLimit in code) caps the total number of connections. Once the cap is reached, the current goroutine sleeps and retries until a connection can be created. Under high concurrency the locking here is not airtight, so a few extra connections may still be created.

src/gopkg.in/mgo.v2/cluster.go:

    s, abended, err := server.AcquireSocket(poolLimit, socketTimeout)
    if err == errPoolLimit {
        if !warnedLimit {
            warnedLimit = true
            logkit.Logger.Error("sgp_test WARNING: Per-server connection limit reached. " + err.Error())
            log("WARNING: Per-server connection limit reached.")
        }
        time.Sleep(100 * time.Millisecond)
        continue
    }

session.go:

// SetPoolLimit sets the maximum number of sockets in use in a single server
// before this session will block waiting for a socket to be available.
// The default limit is 4096.
//
// This limit must be set to cover more than any expected workload of the
// application. It is a bad practice and an unsupported use case to use the
// database driver to define the concurrency limit of an application. Prevent
// such concurrency "at the door" instead, by properly restricting the amount
// of used resources and number of goroutines before they are created.
func (s *Session) SetPoolLimit(limit int) {
    s.m.Lock()
    s.poolLimit = limit
    s.m.Unlock()
}

How to configure the pool limit:

1. In the connection string, add:

[host]:[port]?maxPoolSize=10

2. Or in code:

dao.GlobalMgoSession.SetPoolLimit(10)
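The two approaches can also be combined at startup. A sketch, assuming both refer to the same limit of 10 (newSession is an illustrative name, not part of the original code):

// newSession dials with maxPoolSize in the URL and also sets the
// limit explicitly, so the cap holds regardless of which path is used.
func newSession(host string) (*mgo.Session, error) {
    // Method 1: the limit travels with the connection string.
    session, err := mgo.Dial(host + "?maxPoolSize=10")
    if err != nil {
        return nil, err
    }
    // Method 2: set (or override) the limit in code.
    session.SetPoolLimit(10)
    return session, nil
}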

Running the load test again:

 $  netstat -nat|grep -i 27017|wc -l

15

Conclusion:

After each session Clone, calling session.Close at the end of the operation unsets the socket and drops its reference count. Without an upper limit, every goroutine whose request finds no idle connection creates a new socket, up to the default maximum of 4096. A MongoDB server's connection limit is typically only around 10,000, which means a single port can support only one or two such processes before the connections blow up. Too many connections also make the client less efficient, and on the server side they burn memory and CPU, so you need to enable a custom pool limit. Even with the pool limit on, beware: if poolLimit goroutines all run over-long operations, or dead-loop without releasing their sockets, it still ends in tears.

mgo's underlying socket pool only implements reuse within poolLimit; anything beyond that you must handle yourself.
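The SetPoolLimit doc comment quoted above recommends preventing such concurrency "at the door", before goroutines ever reach the driver. A sketch of that idea, with a buffered-channel semaphore sized to match the pool limit and a socket timeout so a stuck operation cannot pin a connection forever (sem, doQuery, and both limit values are illustrative assumptions):

var sem = make(chan struct{}, 10) // matches the pool limit of 10

func doQuery(fn func(s *mgo.Session) error) error {
    sem <- struct{}{}        // block here, outside the driver, when 10 ops are in flight
    defer func() { <-sem }()

    session := GlobalMgoSession.Clone()
    defer session.Close()
    // Bound each operation so a slow query cannot hold its socket indefinitely.
    session.SetSocketTimeout(3 * time.Second)
    return fn(session)
}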
