Mail Archives: djgpp/1995/10/12/05:03:41

Date: Thu, 12 Oct 1995 08:50:43 +0100
From: dgardner AT mcsilo DOT ilo DOT dec DOT com (Damien Gardner)
To: djgpp AT sun DOT soe DOT clarkson DOT edu
Subject: Re: Many small files versus big clusters

  A.Appleyard wrote:
> If (say) all the files LIBSRC\C\IO\*.C are chained into one big file
> LIBSRC\C\IO.C, and after each function (plus its associated outermost-level
> declarations) you insert a new preprocessor command `#libunit' ...

There is another problem with this:

If (say) all the files LIBSRC\C\IO\*.C are chained into one big file
and I edit one statement in that file and then rebuild the library,
the entire body of code has to be recompiled and added to the library,
instead of just the one small file that needs recompiling as things
stand at the moment.  The make program just sees one file that has been
touched, and the compiler couldn't determine which #libunit sections were
modified unless you make a few more changes ...
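To sketch the point (the file and target names below are illustrative,
not the actual DJGPP libsrc makefile): with one rule per small source
file, make's dependency check rebuilds only the touched object, whereas
a single chained IO.C would be one target that rebuilds wholesale.

```make
CC = gcc
CFLAGS = -O2

# One object per small source file under LIBSRC\C\IO
OBJS = open.o close.o read.o write.o

libio.a: $(OBJS)
	ar rcs $@ $?        # add only the objects newer than the archive

%.o: %.c
	$(CC) $(CFLAGS) -c $< -o $@

# Touching read.c rebuilds read.o alone; with everything chained into
# one IO.C, every #libunit section would be recompiled on any edit.
```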

Maybe I'm biased because I only own a 386DX33, but I also only have a
540Mb hard drive, so space is precious too.  However, the short time it
takes to rebuild a library or executable using the present *standard*
system more than compensates me for the storage penalties.  

Maybe I'll upgrade to a 133MHz Pentium with a 200Mb hard drive tomorrow
and then I could live with the slowdown and the space-saving.  Of course
with 200Mb my cluster size would be smaller so it wouldn't matter anymore.

Just my 0.02 ECUs,
Damien Gardner.
