Mail Archives: djgpp/2000/10/06/17:50:38

Message-ID: <39DE17F3.A88B4FD7@cyberoptics.com>
From: Eric Rudd <rudd AT cyberoptics DOT com>
Organization: CyberOptics
X-Mailer: Mozilla 4.72 [en] (Win95; U)
X-Accept-Language: en,pdf
MIME-Version: 1.0
Newsgroups: comp.os.msdos.djgpp
Subject: Re: get the amount of physical memory
References: <Pine DOT SUN DOT 3 DOT 91 DOT 1001003142803 DOT 12455B-100000 AT is> <39DA1E51 DOT D0C53931 AT cyberoptics DOT com> <3405-Wed04Oct2000191356+0300-eliz AT is DOT elta DOT co DOT il>
Lines: 46
Date: Fri, 06 Oct 2000 13:20:36 -0500
NNTP-Posting-Host: 38.196.93.9
X-Trace: client 970856438 38.196.93.9 (Fri, 06 Oct 2000 14:20:38 EDT)
NNTP-Posting-Date: Fri, 06 Oct 2000 14:20:38 EDT
To: djgpp AT delorie DOT com
DJ-Gateway: from newsgroup comp.os.msdos.djgpp
Reply-To: djgpp AT delorie DOT com

Eli Zaretskii wrote:

> Let's say that there is a way to know the amount of available memory--how
> would this help you in the scenario you've just described? You still won't
> know whether that memory will be enough to read the file, since you don't
> know how many lines there are in the file, right?

I agree that you still don't know whether the operation will succeed in
advance, but in a DOS program, you can malloc the maximum physical region,
and the program will succeed if it is possible for it to succeed (and
complain when it finds out it can't).  In either event, you can call
realloc() later to free up the unneeded space.  The alternative seems to be
to place a check inside the inner loop, which slows the program down.  (I'm
not sympathetic to arguments that, with the fast computers we have these
days, efficiency isn't so important any more, because I'm still using them
to solve problems at the limits of what they can conveniently handle -- and
those limits are still a function of the efficiency of the routines I
write.)
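
For concreteness, here is a minimal C sketch of that allocate-big-then-
shrink pattern.  The halving search for the largest obtainable block, the
read from stdin, and all the identifiers are just my illustration of the
idea, not code from this thread:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t size = (size_t)-1 / 2;  /* absurdly large first request */
        size_t used;
        char *buf = NULL;
        char *shrunk;

        /* Halve the request until malloc() succeeds. */
        while (size > 0 && (buf = malloc(size)) == NULL)
            size /= 2;
        if (buf == NULL) {
            fputs("out of memory\n", stderr);
            return EXIT_FAILURE;
        }

        /* Use the buffer -- here, slurp stdin into it. */
        used = fread(buf, 1, size, stdin);
        if (used == size) {
            fputs("input too large for the block we got\n", stderr);
            free(buf);
            return EXIT_FAILURE;
        }

        /* Call realloc() to free up the unneeded space. */
        shrunk = realloc(buf, used ? used : 1);
        if (shrunk != NULL)
            buf = shrunk;

        printf("read %lu bytes\n", (unsigned long)used);
        free(buf);
        return EXIT_SUCCESS;
    }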

I admit that it is possible to get around these difficulties by more
intricate programming, but a good programming environment ought to make simple
things like this straightforward.

> > 2. To predict how large a region of memory can be accessed without
> > disk swaps.
>
> This is impossible with most modern OSes.  Plain DOS and, sometimes,
> Windows 9X are the only ones that report available physical memory
> reliably (on Windows 9X, the report is only accurate if no other program
> works at the same time consuming memory).

I count this as a deficiency in the OSes, since it means that an app can't
predict which algorithm would be most efficient.  In my experience, an
algorithm that expects to work with internal memory, but actually works with
external (virtual) memory, can be *extremely* slow.  Take qsort(), for
instance.  If that function gets called on an array that lives in virtual
memory, it can take literally *hours*, whereas a routine specially written
to make a few sequential passes through intermediate disk files takes only
a few times as long as the raw file I/O needed to read the input file and
write the output file.
On the other hand, external sorting routines like that are generally less
efficient than a routine that is allowed the privilege of working entirely
in internal memory.
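
To make the trade-off concrete, here is a rough DJGPP-flavored sketch of
how an app could pick its algorithm if it could trust the number.  It
assumes _go32_dpmi_remaining_physical_memory() from <dpmi.h> (and, per
Eli's caveat, the value it returns is only meaningful under plain DOS);
sort_records() and external_sort() are hypothetical names of mine:

    #include <stdlib.h>
    #include <dpmi.h>  /* DJGPP's _go32_dpmi_remaining_physical_memory() */

    /* Compare two doubles for qsort(). */
    static int compare_doubles(const void *a, const void *b)
    {
        double x = *(const double *)a, y = *(const double *)b;
        return (x > y) - (x < y);
    }

    /* Hypothetical stand-in for the external sort: a few sequential
       passes through intermediate disk files (body omitted here). */
    static void external_sort(double *a, size_t n)
    {
        (void)a;
        (void)n;
    }

    /* Pick a strategy from the physical memory the DPMI host reports.
       The comparison is deliberately crude; the point is only that the
       decision could be made at all if the number were trustworthy. */
    void sort_records(double *a, size_t n)
    {
        unsigned long phys = _go32_dpmi_remaining_physical_memory();

        if (n * sizeof *a < phys)
            qsort(a, n, sizeof *a, compare_doubles);  /* fits in RAM */
        else
            external_sort(a, n);  /* would thrash virtual memory */
    }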

-Eric Rudd
rudd AT cyberoptics DOT com
