When the system runs out of memory, the oom-killer steps in and kills some processes.
Evidence of this appears in /var/log/messages.
When this happens, one workaround is to clean up shared memory segments whose nattch count is 0, as sketched below.
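A minimal sketch of how such segments can be found and removed (this assumes the ipcs/ipcrm tools from util-linux; verify that a segment really is unused before removing it):

# ipcs -m
# ipcs -m | awk '$1 ~ /^0x/ && $6 == 0 {print $2}' | xargs -r -n1 ipcrm -m

The first command lists the segments along with their nattch counts; the second removes every segment that has no attached processes (nattch is field 6, shmid is field 2).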
For background on the low/high memory issue, see the following message:
Since this problem seems to pop up on different lists, this message has
been cross-posted to the general Red Hat discussion list, the RHEL3
(Taroon) list and the RHEL4 (Nahant) list. My apologies for not having
the time to post this summary sooner.
I would still be banging my head against this problem were it not for
the generous assistance of Tom Sightler <ttsig@xxxxxxxxxxxxx> and Brian
Long <brilong@xxxxxxxxx>.
In general, the out of memory killer (oom-killer) begins killing
processes, even on servers with large amounts (6Gb+) of RAM. In many
cases people report plenty of "free" RAM and are perplexed as to why the
oom-killer is whacking processes. Indications that this has happened
appear in /var/log/messages:
Out of Memory: Killed process [PID] [process name].
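A quick way to check for these entries (a simple grep sketch):

# grep -i 'out of memory' /var/log/messages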
In my case I was upgrading various VMware servers from RHEL3 / VMware
GSX to RHEL4 / VMware Server. One of the virtual machines on a server
with 16Gb of RAM kept getting whacked by the oom-killer. Needless to
say, this was quite frustrating.
As it turns out, the problem was low memory exhaustion. Quoting Tom:
"The kernel uses low memory to track allocations of all memory thus a
system with 16GB of memory will use significantly more low memory than a
system with 4GB, perhaps as much as 4 times. This extra pressure
happens from the moment you turn the system on before you do anything at
all because the kernel structures have to be sized for the potential of
tracking allocations in four times as much memory."
You can check the status of low & high memory a couple of ways:
# egrep 'High|Low' /proc/meminfo
HighTotal: 5111780 kB
HighFree: 1172 kB
LowTotal: 795688 kB
LowFree: 16788 kB
# free -lm
             total       used       free     shared    buffers     cached
Mem:          5769       5751         17          0          8       5267
Low:           777        760         16          0          0          0
High:         4991       4990          1          0          0          0
-/+ buffers/cache:        475       5293
Swap:         4773          0       4773
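If you suspect low memory exhaustion, it can help to watch LowFree over time while the workload runs (a sketch using watch(1)):

# watch -n 5 "egrep 'High|Low' /proc/meminfo"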
When low memory is exhausted, it doesn't matter how much high memory is
available; the oom-killer will begin whacking processes to keep the
server alive.
There are a couple of solutions to this problem:
If possible, upgrade to 64-bit Linux. This is the best solution because
*all* memory becomes low memory. If you run out of low memory in this
case, then you're *really* out of memory. ;-)
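To confirm which kind of kernel you are currently running:

# uname -m

x86_64 indicates a 64-bit kernel; i686 (or similar) indicates 32-bit.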
If limited to 32-bit Linux, the best solution is to run the hugemem
kernel. This kernel splits low/high memory differently, and in most
cases should provide enough low memory to map high memory. In most
cases this is an easy fix - simply install the hugemem kernel RPM &
reboot.
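On RHEL this is roughly the following (a sketch; the exact package name and update tool depend on your release and channel setup):

# up2date --install kernel-hugemem
# reboot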
If running the 32-bit hugemem kernel isn't an option either, you can try
setting /proc/sys/vm/lower_zone_protection to a value of 250 or more.
This will cause the kernel to try to be more aggressive in defending the
low zone from allocating memory that could potentially be allocated in
the high memory zone. As far as I know, this option isn't available
until the 2.6.x kernel. Some experimentation to find the best setting
for your environment will probably be necessary. You can check & set
this value on the fly via:
# cat /proc/sys/vm/lower_zone_protection
# echo "250" > /proc/sys/vm/lower_zone_protection
To set this option on boot, add the following to /etc/sysctl.conf:
vm.lower_zone_protection = 250
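Equivalently, you can set the value on the fly with sysctl(8):

# sysctl -w vm.lower_zone_protection=250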
As a last-ditch effort, you can disable the oom-killer. This option can
cause the server to hang, so use it with extreme caution (and at your
own risk)!
Check status of oom-killer:
# cat /proc/sys/vm/oom-kill
Turn oom-killer off/on:
# echo "0" > /proc/sys/vm/oom-kill
# echo "1" > /proc/sys/vm/oom-kill
To make this change take effect at boot time, add the following
to /etc/sysctl.conf:
vm.oom-kill = 0
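To apply changes made in /etc/sysctl.conf without rebooting, reload the file with:

# sysctl -p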
For processes that would have been killed, but weren't because the oom-
killer is disabled, you'll see the following message
in /var/log/messages:
"Would have oom-killed but /proc/sys/vm/oom-kill is disabled"
Sorry for being so long-winded. I hope this helps others who have
struggled with this problem.