Mail Archives: djgpp/1992/06/05/10:38:03

Subject: Re: malloc() doesn't return a null pointer when out of memory
From: Marco Imperatore <mimperat AT cs DOT dal DOT ca>
To: djgpp AT sun DOT soe DOT clarkson DOT edu (djgpp)
Date: Fri, 5 Jun 1992 10:51:53 -0300

> > It is supposed to keep allocating memory until there is no more memory
> > left, at which point I expected to get a NULL pointer returned from
> > malloc(). 
> This is really nasty: allocating more and more memory. You will get
> real enemies when you try this on a Unix machine...

That's beside the point.  Malloc() should return NULL when there is no
more memory to be allocated.  It should not give a segmentation fault,
nor should one need to get a larger disk for swap space.  Why make
life hard for oneself when one can rely on the simple fact that
malloc() returns NULL when there is no more memory, and take it from
there?
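The pattern described above -- check malloc()'s return value and take it
from there -- amounts to a few lines of C.  This is only a sketch; the
wrapper name xmalloc() is hypothetical, not anything from this thread:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical wrapper: report allocation failure instead of letting
 * the program crash later on a NULL dereference. */
void *xmalloc(size_t n)
{
    void *p = malloc(n);
    if (p == NULL)
        fprintf(stderr, "out of memory: %lu bytes requested\n",
                (unsigned long)n);
    return p;   /* caller checks for NULL and recovers gracefully */
}
```

A caller would then test the result (`if (xmalloc(n) == NULL) ...`) and
back off, rather than segfaulting or filling the swap disk.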

> Since gcc is a Unix compiler, it is thought to give you as much memory as
> you need. It should *never* let you run out of memory. Especially malloc
> cannot see when your disk is full...

This is a bogus comment.  What does 'gcc is a Unix compiler' mean?
Doesn't GCC run under DOS and other OSs as well?  And since it does
run under DOS and other OSs, it should behave in a manner consistent
with those OSs (perhaps the crashing aspect is consistent with DOS :-)).

I can certainly see how malloc() could not easily detect that the
swap disk is full, but it should easily be able to tell whether there
is enough virtual memory left to satisfy the request.

> 
> > Instead, I got this error:
> > 
> > 	Fatal! disk full writing to swap file
> Oh yes. This is the real limit of virtual memory: the swap file (-partition,
> whatever) will be full some day. If you really need to process such a lot of 
> memory, get a bigger hard disk. ;-)

You don't need a bigger hard disk if you can recover safely from an
out-of-memory error (which you should be able to detect when malloc()
returns NULL).  One should always try to make use of as many available
resources as possible, provided it doesn't constitute nasty behaviour.
As you say, on a Unix system (which is meant to be multi-user) such
practice could get you on somebody's black-list, but on a single-user
DOS machine, who (besides the single user) can complain about using
too many resources?
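A sketch of that "use everything, then recover safely" idea, assuming
only that malloc() really does return NULL at exhaustion (the block
size and cap below are illustrative, not from this thread):

```c
#include <stdlib.h>

#define BLOCK_SIZE (64 * 1024)   /* illustrative chunk size */
#define MAX_BLOCKS 4096          /* illustrative safety cap  */

/* Grab blocks until malloc() reports exhaustion (or the cap is hit),
 * then release everything and carry on -- detection, not a crash. */
size_t grab_all_memory(void)
{
    void *blocks[MAX_BLOCKS];
    size_t n = 0, grabbed;

    while (n < MAX_BLOCKS) {
        void *p = malloc(BLOCK_SIZE);
        if (p == NULL)           /* out of memory: detected cleanly */
            break;
        blocks[n++] = p;
    }
    grabbed = n;
    while (n > 0)                /* recover: give it all back */
        free(blocks[--n]);
    return grabbed;
}
```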

> > When I increased the size of the array item.string to 1 000 000 or 
> > 10 000 000, my PC froze and I had to reboot.  When I increased it to
> > 100 000 000, I got the error
> > 
> > 	Segmentation violation in pointer 0xe8421000 at 40:1318
> > 	Exception 14 at eip=1318
> Yes, i've seen this, too. But this is not a question of enough memory or not,
> but of access permissions. Maybe go32 has some internal `segment sizes'?
> (But this time at 64M instead of 64K ;-)

Maybe 'go32' has some internal memory restrictions which it puts on the
program, but that should not affect the correct behaviour of malloc().

> > Shouldn't malloc just return a NULL pointer?  I have some
> > memory-hogging programs that depend on that for error checking.
> This is the usual behaviour on DOS machines. You are on the way to Unix :-) :-)
> 
> BTW: keep your memory requirements in a reasonable proportion to your available
> RAM. I'm trying to work with 1024x1024 double matrices on an 8M-Machine, and
> things have to be thought carefully not to blow up the LRU-paging algorithm.

Ridiculous... that's what virtual memory is for: so you don't have to know
how much real memory the machine has!  Just because you choose to do things
the hard way doesn't mean that everyone else should too!

This has been a public service announcement brought to you by...

Marco
