kernel: Out of Memory: Killed process xxxxx (clamd).

I've got a new problem that appears to have been happening for the last week or so.

kernel: Out of Memory: Killed process 26469 (clamd).

The machine it is on has 4 GB of RAM and is the node manager for the cluster. The cluster member machine has 2 GB but does not have the same problem.

Also, I am using V3 RC4.

Any ideas?

What does top show for physical memory usage and swap usage?
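If you're not sure where to look in top, `free` gives the same numbers in one shot (a quick sketch; exact column names vary by distro and procps version):

```shell
#!/bin/sh
# Physical memory and swap usage in megabytes.
# The Mem: and Swap: rows are the same figures top summarizes at the top of its screen.
free -m

# The same data straight from the kernel, if free isn't installed:
grep -E '^(MemTotal|MemFree|Cached|SwapTotal|SwapFree):' /proc/meminfo
```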

Here is what I have from the graphs.

This did not start happening until RC4, though.

What about swap usage? You'll need to take a look at top on the server, as we don't have RRD graphs for swap file usage. The last time I saw something like this, the swap file had somehow been disabled on the server, so whenever physical memory filled up, with no swap to fall back on, processes started getting killed.
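To rule out the disabled-swap case, you can check whether any swap devices are active (a sketch; the device name in the comment is just an example):

```shell
#!/bin/sh
# List active swap areas; if only the header line prints, no swap is enabled.
cat /proc/swaps

# "swapon -s" prints the same table. If nothing is listed, re-enable swap
# using the device from /etc/fstab, e.g. (hypothetical device name):
#   swapon /dev/sda2
```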

From looking at top, the system is using 4 MB out of 2 GB.

If you look at my graphs, though, something slowly increases its RAM usage over the course of a day, and then at about the same time every day it drops back down.

The "cached" memory (the big dark red part of the graph) isn't actually memory in use. Linux steadily fills otherwise-idle RAM with disk cache, but only so the OS can hand that memory back quickly if programs need it.
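You can see this on the box itself: the "free" figure looks tiny, but free plus buffers/cache is what's really available to programs. A sketch using /proc/meminfo (field names may differ on very old kernels):

```shell
#!/bin/sh
# Sum the memory actually available to applications:
# MemFree plus Buffers plus Cached, reported in kB by /proc/meminfo.
awk '/^MemFree:|^Buffers:|^Cached:/ { avail += $2 }
     END { printf "reclaimable+free: %d MB\n", avail / 1024 }' /proc/meminfo
```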

I'm not sure why Linux is letting go of a big chunk of cached RAM once per day, as your graph shows, but that by itself doesn't mean there's a problem.

I've only seen out-of-memory errors on VPSes where memory limits are being enforced.
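If you want to see exactly what the OOM killer did and when, the kernel log records each kill (a sketch; log file locations vary by distro, and dmesg may require root on some systems):

```shell
#!/bin/sh
# Recent OOM-killer activity still in the kernel ring buffer:
dmesg | grep -i 'out of memory'

# Older events may only survive in the syslog (path varies by distro;
# ignore errors if the file doesn't exist on this system):
grep -i 'out of memory' /var/log/messages 2>/dev/null
# Exit 0 regardless, since "no matches" is a fine outcome here.
exit 0
```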