Mail Archives: djgpp/1997/03/21/10:46:41
In article <9703181723 DOT aa06452 AT paris DOT ics DOT uci DOT edu>, Dan Hirschberg <dan AT verity DOT ICS DOT UCI DOT EDU> wrote:
> > > 3. df.exe seems unable to handle my large disk.
> > > It tells me that I have 999968 1024-blocks (adds up to 1G)
> > > and that I have used 0 of them, when I have 2G disk
> > > and I have about 1.3G free.
> >
> > df uses the library function `statfs' to report disk usage. `statfs'
> > uses function 36h of the DOS Int 21h to get that info. I suspect that
> > either the NT DOS box doesn't support that function completely, or it
> > has problems with large partitions such as yours. Could you please
> > write a short test program that calls `statfs' and prints the results,
> > test it on your machine and tell here what you see printed? I would
>
>I wrote the following program and ran it (and a variant), as well as df.
>Here are the results:
Uh, fellas, maybe y'all should step back and think for a second.
Sure, this may well be a bug in the df.exe program, but it is actually a
blessing in disguise. The real problem here is not df.exe, but rather
the fact that someone is using a computer with a 2 gig partition. This makes
absolutely no sense. You guys do realize that it doesn't matter how "big" a
hard drive is; in DOS, it can never contain more than 32767 files per
partition. You're also wasting a ton of disk space by setting your cluster
size to 64K. So maybe rewriting df.exe isn't the solution to this problem;
rather, repartitioning your disk to something reasonable is.
PV
______________________________________________________________________________
Paul Peavyhouse
http://www.cs.montana.edu/~pv
email: pv AT cs DOT montana DOT edu
______________________________________________________________________________