
Why Do Java Programs Use So Much Memory

Author: zhangheng
Published: 2020-04-28 18:13:01

Ever since I started doing Java development, one question has stuck in my mind: why do Java programs occupy so much virtual memory? I never dug into it, because the servers had plenty of RAM. But recently we moved to Docker containers, each with only a few GB of memory, so the oversized memory footprint had to be dealt with.

Background

After switching to Docker containers, the containers kept triggering alarms. Logging in to check resource usage, I found that the Java process's virtual memory usage was unusually high.

The check was done with top: VIRT (the memory the process has reserved) was 32 GB and RES (the memory it actually uses) was 4.6 GB, even though I had configured both the maximum and minimum heap size as 16 GB.
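
A minimal sketch of that check (12345 stands in for the actual PID):

# Show just this process in top and compare VIRT (address space the process
# has reserved) with RES (physical memory it actually occupies).
top -p 12345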

Inspecting the memory map

Linux provides the pmap command for viewing a process's memory mappings:

pmap [options] <pid> ...
Options:
-x: show extended format
-d: show device format
-q: do not print the header and footer lines
-V: show version information
Arguments:
one or more process IDs

The columns in the output are:

  • Address: start address of the mapping
  • Kbytes: size of the mapping in kilobytes
  • RSS: resident set size in kilobytes
  • Dirty: dirty pages (both shared and private) in kilobytes
  • Mode: permissions on the mapping: r=read, w=write, x=execute, s=shared, p=private (copy on write)
  • Mapping: the file backing the mapping, '[ anon ]' for allocated memory, or '[ stack ]' for the program stack
  • Offset: offset into the file
  • Device: device name (major:minor)

Running pmap -x against the process, I saw one block of 16 GB, which should be the heap size I configured, plus a large number of blocks of around 6xxxx Kbytes (roughly 64 MB) whose Mapping is [ anon ].

Adding up the sizes of all these allocated blocks gave 32 GB in total, meaning that on top of the 16 GB I asked for, the program had reserved another 16 GB.
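
To see how many of those roughly-64 MB anonymous blocks there are, a sketch like the following works (pid is a placeholder, and I treat "6xxxx Kbytes" as the range 60000-69999 KB):

# Count anonymous mappings in the 6xxxx-KB range -- on 64-bit glibc these
# turn out to be the per-thread arenas described below.
pmap -x pid | grep anon | awk '$2 >= 60000 && $2 < 70000 { n++ } END { print n, "arena-sized blocks" }'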

The command I used to add up the allocated block sizes:

pmap -x pid | grep anon |  awk ' { mem = mem + $2;print $0} END {print mem/1024/1024,"GB"}'

Is glibc the culprit?

After some googling, I found the Red Hat Enterprise Linux 6.0 release notes, section 13, "Compilers and Tools":

Red Hat Enterprise Linux 6 features version 2.11 of glibc, providing many features and enhancements, including… An enhanced dynamic memory allocation (malloc) behaviour enabling higher scalability across many sockets and cores. This is achieved by assigning threads their own memory pools and by avoiding locking in some situations. The amount of additional memory used for the memory pools (if any) can be controlled using the environment variables MALLOC_ARENA_TEST and MALLOC_ARENA_MAX. MALLOC_ARENA_TEST specifies that a test for the number of cores is performed once the number of memory pools reaches this value. MALLOC_ARENA_MAX sets the maximum number of memory pools used, regardless of the number of cores.

A developer named Ulrich Drepper also explained this change in detail in his notes on what's new in glibc 2.10:

Before, malloc tried to emulate a per-core memory pool. Every time when contention for all existing memory pools was detected a new pool is created. Threads stay with the last used pool if possible… This never worked 100% because a thread can be descheduled while executing a malloc call. When some other thread tries to use the memory pool used in the call it would detect contention. A second problem is that if multiple threads on multiple core/sockets happily use malloc without contention memory from the same pool is used by different cores/on different sockets. This can lead to false sharing and definitely additional cross traffic because of the meta information updates. There are more potential problems not worth going into here in detail.
The changes which are in glibc now create per-thread memory pools. This can eliminate false sharing in most cases. The meta data is usually accessed only in one thread (which hopefully doesn’t get migrated off its assigned core). To prevent the memory handling from blowing up the address space use too much the number of memory pools is capped. By default we create up to two memory pools per core on 32-bit machines and up to eight memory pools per core on 64-bit machines. The code delays testing for the number of cores (which is not cheap, we have to read /proc/stat) until there are already two or eight memory pools allocated, respectively.

While these changes might increase the number of memory pools which are created (and thus increase the address space they use) the number can be controlled. Because using the old mechanism there could be a new pool being created whenever there are collisions the total number could in theory be higher. Unlikely but true, so the new mechanism is more predictable.

… Memory use is not that much of a premium anymore and most of the memory pool doesn’t actually require memory until it is used, only address space… We have done internally some measurements of the effects of the new implementation and they can be quite dramatic.

A report from the Hadoop community describes the same symptom and its practical impact:

New versions of glibc present in RHEL6 include a new arena allocator design. In several clusters we’ve seen this new allocator cause huge amounts of virtual memory to be used, since when multiple threads perform allocations, they each get their own memory arena. On a 64-bit system, these arenas are 64M mappings, and the maximum number of arenas is 8 times the number of cores. We’ve observed a DN process using 14GB of vmem for only 300M of resident set. This causes all kinds of nasty issues for obvious reasons.
Setting MALLOC_ARENA_MAX to a low number will restrict the number of memory arenas and bound the virtual memory, with no noticeable downside in performance – we’ve been recommending MALLOC_ARENA_MAX=4. We should set this in hadoop-env.sh to avoid this issue as RHEL6 becomes more and more common.

To sum up: for the sake of memory-allocation performance, glibc gives each newly created thread a memory pool called an arena. By default, on a 64-bit system each arena is 64 MB and a process can have at most cpu-cores × 8 arenas; on a 32-bit system each arena is 1 MB and the limit is cpu-cores × 2 arenas.
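
So on a 64-bit machine the upper bound on the extra address space that arenas alone can reserve is cores × 8 × 64 MB. A quick back-of-the-envelope check (just a sketch; nproc reports the cores visible to the process):

# Upper bound on glibc arena address space for this machine (64-bit assumed).
cores=$(nproc)
echo "max arenas              : $(( cores * 8 ))"
echo "max arena address space : $(( cores * 8 * 64 )) MB"

On a 64-core host, for example, that bound is 512 arenas, i.e. up to 32 GB of address space before any of it is actually touched.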

This behaviour only exists in glibc 2.10 and later. To check the glibc version:

ldd --version

The glibc on my servers was version 2.12, so they were affected as well.

How to fix it?

The environment variable MALLOC_ARENA_MAX sets the maximum number of arenas a process may have. For example:

export MALLOC_ARENA_MAX=1

People online say Hadoop recommends a value of 4, but if your program is not particularly sensitive to memory-allocation performance, just set it to 1: that stops newly created threads from getting their own arena, so all threads share a single one.
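
The variable has to be set in the environment that launches the JVM, because glibc reads it at process startup. A minimal sketch, assuming the service is started from a wrapper script (the script name, jar name and heap flags are illustrative):

#!/bin/sh
# start.sh -- hypothetical launcher: export the variable before the java
# process is created so glibc picks it up.
export MALLOC_ARENA_MAX=1        # or 4, following the Hadoop recommendation
exec java -Xms16g -Xmx16g -jar app.jar

In a Dockerfile the equivalent is ENV MALLOC_ARENA_MAX=1, so every process in the container inherits it.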

Looking back

This problem surfaced because of the move to Docker containers: first, the memory given to each container is small, so memory alarms trip easily; second, the container is not fully isolated, so the number of CPU cores the program sees is still the host machine's core count, which further inflates the arena-driven memory growth.
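
Both points can be confirmed from inside the container; the commands below are only a sketch of that check:

# Cores visible inside the container -- usually the host's count unless the
# container runtime restricts the cpuset.
nproc
# After setting MALLOC_ARENA_MAX, re-run the earlier pmap sum to confirm the
# anonymous 64 MB blocks (and VIRT) have shrunk.
pmap -x pid | grep anon | awk '{ mem += $2 } END { print mem/1024/1024, "GB" }'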

Originally published 2018-07-31 on the author's personal site/blog.
