Process limits under Ubuntu

Started by Tapewolf, January 24, 2011, 04:18:35 AM


Tapewolf

Google doesn't seem to have thrown much up for this, so I'm wondering if anyone here knows.
Basically, I have a system with 4GB of physical memory and another 4GB of swap.  64-bit custom kernel.
Last night, GIMP was taking up about 3.1GB with the comic and numerous other reference panels open in it.
At this point, it failed to run the Gaussian blur plugin, with an error akin to "Error running fork() - out of memory".
This surprised me a little, as I had assumed the system would swap when it ran out of core, so it's presumably some kind of process limit.

I've never reached this limit before, so I'm a little stumped.  For what it's worth, ulimit is returning "unlimited" - at the time I didn't think to check its switches, so that's something I'll have to try.  I'm wondering if it's a kernel config setting, though.

Any thoughts?

J.P. Morris, Chief Engineer DMFA Radio Project * IT-HE * D-T-E


llearch n'n'daCorna

Find the Gaussian blur plugin, and run ldd on it. If it's linked against 32-bit libs, then there's your problem right there...


Failing that, I'm at a bit of a loss... can your system handle more than 4G? Mine has a 64 bit processor, and 64 bit bus to the memory, and 4G of memory... and a 32 bit sodding northbridge. >.< So I can't actually access more than 3275M or so, since the thing has a 512M graphics card in there. Might it be possible that the same sort of thing is going on for you?
Thanks for all the images | Unofficial DMFA IRC server
"We found Scientology!" -- The Bad Idea Bears

Tapewolf

Quote from: llearch n'n'daCorna on January 24, 2011, 01:46:23 PM
Find the Gaussian blur plugin, and run ldd on it. If it's linked against 32-bit libs, then there's your problem right there...
It's 64-bit.

Quote
Failing that, I'm at a bit of a loss... can your system handle more than 4G? Mine has a 64 bit processor, and 64 bit bus to the memory, and 4G of memory... and a 32 bit sodding northbridge. >.< So I can't actually access more than 3275M or so, since the thing has a 512M graphics card in there. Might it be possible that the same sort of thing is going on for you?

Quite possible.  According to the X11 log, the graphics aperture is indeed 512MB.  I'm not sure how I'd go about finding out whether the chipset is 32-bit only...


EDIT:

Just wrote something that allocated memory in 32MB chunks and filled it with a randomly chosen value.  It managed to steal 4992MB (which made the system swap like crazy) before the kernel decided it had to die.

This didn't happen with GIMP - no swapping at all, it just didn't work.  I'll try modifying mine to spawn an external program and see if that fails after a certain threshold or something.

J.P. Morris, Chief Engineer DMFA Radio Project * IT-HE * D-T-E


Fibre

Since it happens on a fork() of a large process, could it be an overly conservative memory overcommit configuration? (I'm not especially familiar with the details of Linux on that topic, but it might be worth checking out.)

Tapewolf

Quote from: Fibre on January 24, 2011, 09:38:20 PM
Since it happens on a fork() of a large process, could it be an overly conservative memory overcommit configuration? (I'm not especially familiar with the details of Linux on that topic, but it might be worth checking out.)

I wouldn't know where to begin when investigating that, but...

I'm using system() in my test program to spawn a 'Hello world' program - system() calls fork() internally, and is presumably what GIMP is also doing.
Last night, system() stopped functioning correctly once the allocator program had grabbed about 2.2GB or so.  So it's looking rather like it's attempting to allocate the parent's memory space all over again for the child - which is pretty daft when, AFAIK, both GIMP and I are just trying to shell out to a completely separate program.

J.P. Morris, Chief Engineer DMFA Radio Project * IT-HE * D-T-E


Fibre

Well fork() just creates a clone of the process. A later exec() in the child then replaces it with the subprogram you actually want to execute. Even though the parent process memory is not, in practice, actually copied for the child, the system has no way of knowing that the child will exec() after the fork() instead of writing to all of its memory (which would require actually allocating that much).

Memory overcommit allows the system to make the optimistic assumption that the child will not actually need all of that memory. However, on your system it seems to be tuned too conservatively for your situation. Linux does let you configure this to allow more aggressive overcommit, so if this is indeed the problem, you should be able to tune it enough to let the plugin run. The kernel docs on this are at http://kernel.org/doc/Documentation/vm/overcommit-accounting.

There are also vfork() and posix_spawn() as alternatives to the traditional fork()/exec() sequence, which GIMP should arguably be using to avoid this problem (though it's possible GIMP is doing something that still requires the full fork()).


Tapewolf

Thanks for the documentation link.  I'll look into that when I get home.
If you're interested, from memory the test program is something like this; I'll have to look at the GIMP source to see what they're doing:




#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
        const int blocksize = 32*1024*1024;   /* grab memory in 32MB chunks */
        int r, total = 0;
        void *p;
        char buf[16];

        for(;;) {
                p = calloc(1, blocksize);
                if(!p) {
                        printf("Allocation failed!  Press enter to quit\n");
                        fgets(buf, sizeof(buf), stdin);
                        exit(1);
                }

                /* Fill the block with a random byte so the kernel can't
                   leave it as an untouched zero mapping */
                r = random() & 0xff;
                memset(p, r, blocksize);

                total += blocksize/(1024*1024);
                printf("Allocated %dMB\n", total);

                /* system() forks internally - this is the call that fails */
                r = system("./hello-world");
                if(r == -1) {
                        printf("Failed to run hello!\n");
                        perror("Error");
                }
        }
}



J.P. Morris, Chief Engineer DMFA Radio Project * IT-HE * D-T-E


VAE

Hmm, might it be that you haven't compiled support for large amounts of memory into the kernel?
I know there are options for it, and at least one of them has a limit of 4GB...
Might be it - I'm not sure how swap is handled.
What i cannot create, i do not understand. - Richard P. Feynman
This is DMFA. Where major species don't understand clothing. So innuendo is overlooked for nuendo. .
Saphroneth



Tapewolf

10 merit marks to Fibre.

Using the command 'sysctl vm/overcommit_memory=1', my test program was able to get 5.4GB of memory and swap before the kernel destroyed it with the fascinating error:

Out of memory: Kill process 9268 (allocate_shit) score 918 or sacrifice child

...I think I will put it in rc.local.

Quote from: VAE on January 25, 2011, 12:45:01 PM
Hmm, might it be that you haven't compiled support for large amounts of memory into the kernel?
I know there are options for it, and at least one of them has a limit of 4GB...

Yeah, that's only on 32-bit kernels, though.  However, this did cause a number of headaches years back when I had 2GB, and the system was unstable in Windows and rock solid in Linux.
What had actually happened was that I'd left Linux with a 1GB memory limit - which Windows didn't have - and the top 1GB stick had gone faulty.

J.P. Morris, Chief Engineer DMFA Radio Project * IT-HE * D-T-E


llearch n'n'daCorna

Quote from: Tapewolf on January 25, 2011, 05:16:10 PM
10 merit marks to Fibre.

Using the command 'sysctl vm/overcommit_memory=1', my test program was able to get 5.4GB of memory and swap before the kernel destroyed it with the fascinating error:

Out of memory: Kill process 9268 (allocate_shit) score 918 or sacrifice child

...I think I will put it in rc.local.

Try /etc/sysctl.d/local_changes.conf instead, following the same pattern as /etc/sysctl.conf; this presumes, of course, that you're using a Debian-like system, or one where the same mechanism has been set up...
Thanks for all the images | Unofficial DMFA IRC server
"We found Scientology!" -- The Bad Idea Bears

Tapewolf

Quote from: llearch n'n'daCorna on January 26, 2011, 09:09:30 AM
Try /etc/sysctl.d/local_changes.conf instead, following the same pattern as /etc/sysctl.conf; this presumes, of course, that you're using a Debian-like system, or one where the same mechanism has been set up...

Right, I'll check those out when I get home.  I've not needed to set a sysctl before now, so I'm not entirely sure how it should be done on boot.

J.P. Morris, Chief Engineer DMFA Radio Project * IT-HE * D-T-E


llearch n'n'daCorna

Either will work, admittedly.

Doing it with sysctl.d is cleaner, since updates to the system won't overwrite your changes - provided you choose a reasonable filename, anyway.
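Something like this, for example (the filename is arbitrary - only the .conf suffix matters):

```
# /etc/sysctl.d/local_changes.conf
# 1 = always overcommit; see Documentation/vm/overcommit-accounting
vm.overcommit_memory = 1
```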
Thanks for all the images | Unofficial DMFA IRC server
"We found Scientology!" -- The Bad Idea Bears

Fibre

Quote from: Tapewolf on January 25, 2011, 05:16:10 PM
10 merit marks to Fibre.

Using the command 'sysctl vm/overcommit_memory=1', my test program was able to get 5.4GB of memory and swap before the kernel destroyed it with the fascinating error:

Out of memory: Kill process 9268 (allocate_shit) score 918 or sacrifice child

...I think I will put it in rc.local.

Glad it worked! (I assume it solved the GIMP plugin issue as well?) I'd be a bit nervous running with unlimited overcommit, but I suppose it isn't too huge a risk on a single-user system...

Tapewolf

Quote from: Fibre on January 30, 2011, 04:42:06 PM
Glad it worked! (I assume it solved the GIMP plugin issue as well?) I'd be a bit nervous running with unlimited overcommit, but I suppose it isn't too huge a risk on a single-user system...

It should work for GIMP, but I haven't pushed it that hard since then, so I don't know for certain.  I did have a lot of images open that didn't need to be - they were only open because I figured I might as well use the memory for something...

J.P. Morris, Chief Engineer DMFA Radio Project * IT-HE * D-T-E