Apr. 2nd, 2009

I've been musing, yet again, on the subject of memory limits. On most systems that support explicit limits on memory usage, jobs which exceed their allocation are signaled in some way. If a process exceeds its limit whilst attempting a dynamic allocation, the allocation call should return a null pointer and the process should be allowed to handle the problem itself. But if the process fails in a static allocation, e.g. when execution enters a subroutine whose local variables push it over the limit, the job should receive a warning signal followed, after a suitable grace period, by a kill signal if the problem is not resolved.
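
To make the dynamic case concrete, here's a minimal POSIX-flavoured sketch, assuming the limit is enforced in a way that makes an over-limit malloc() return NULL (as with setrlimit(RLIMIT_AS) on most Unixes) rather than overcommitting and killing the process later:

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t n = (size_t)1 << 30;     /* try to grab 1 GiB */
    char *buf = malloc(n);

    if (buf == NULL) {
        /* The allocator refused: we're over the limit, but we're
         * still running and can degrade gracefully instead of dying. */
        fprintf(stderr, "allocation of %zu bytes failed: %s\n",
                n, strerror(errno));
        /* ... free caches, checkpoint, or retry with a smaller size ... */
        return 1;
    }

    memset(buf, 0, n);              /* touch the pages so they're really ours */
    free(buf);
    return 0;
}
```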

This should be obvious stuff: processes should be given a chance to clean up after themselves before being summarily killed. And sure enough, most big iron operating systems, e.g. Unicos and Super-UX, stick to these rules.
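
The process's side of that warning-then-kill protocol looks something like the sketch below. The name of the warning signal varies from system to system (SIGTERM here is purely a stand-in), but the shape of the handler is the same everywhere: note the warning in an async-signal-safe way, then clean up from the main loop before the kill signal lands:

```c
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t warned = 0;

/* Async-signal-safe handler: just record that the warning arrived. */
static void on_warning(int sig)
{
    (void)sig;
    warned = 1;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_warning;
    sigaction(SIGTERM, &sa, NULL);  /* stand-in for the system's warning signal */

    for (;;) {
        if (warned) {
            /* The grace period: flush buffers, write a checkpoint,
             * release memory, then exit before the kill arrives. */
            fprintf(stderr, "limit warning received, shutting down cleanly\n");
            exit(EXIT_FAILURE);
        }
        /* ... normal work ... */
        sleep(1);
    }
}
```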

AIX, sadly, does not behave in this way. Rather, exceeding a workload manager memory limit causes a process to be killed without warning, regardless of how the allocation occurs and whether a possible route for handling the problem exists. I can understand why IBM might have chosen to implement the rules in this way: WLM is supposed to create hard boundaries which effectively partition the system up and prevent any one process from hogging the resources. But it's not a particularly user-friendly way of doing things...
