Mail Archives: djgpp-workers/1997/06/11/11:17:41

From: sandmann AT clio DOT rice DOT edu (Charles Sandmann)
Message-Id: <9706111459.AA14837@clio.rice.edu>
Subject: Re: more patches for minkeep issues
To: dj AT delorie DOT com (DJ Delorie)
Date: Wed, 11 Jun 1997 09:59:31 -0600 (CDT)
Cc: billc AT blackmagic DOT tait DOT co DOT nz, djgpp-workers AT delorie DOT com
In-Reply-To: <199706111231.IAA01123@delorie.com> from "DJ Delorie" at Jun 11, 97 08:31:04 am

> That's a lot of changes just to get an extra 512 bytes of buffer.
> Can't we just change the stub so that if the value is zero, we use
> 63.5k ?  That's the least amount of change to solve the problem, and
> it doesn't have backwards compatibility issues.

Since this is supposed to be the "transfer buffer", it doesn't make sense
for it to be over 64K anyway.  Applications needing more than that can
either allocate more DOS memory themselves or do a DOS resize call.
Adding complications to what is supposed to be a simple mechanism is
asking for trouble.
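
As an aside, here's a minimal DJGPP sketch of the "allocate your own DOS
memory" route (the 64K size is illustrative and error handling is
trimmed):

  #include <dpmi.h>
  #include <stdio.h>

  int main(void)
  {
    int sel_or_max;
    /* ask for 0x1000 paragraphs (64K) of conventional memory */
    int segment = __dpmi_allocate_dos_memory(0x1000, &sel_or_max);
    if (segment == -1)
    {
      /* on failure, sel_or_max holds the largest available block */
      printf("only %d paragraphs available\n", sel_or_max);
      return 1;
    }
    /* ... access segment:0 with dosmemget()/dosmemput() ... */
    /* (__dpmi_resize_dos_memory can grow or shrink it later) */
    __dpmi_free_dos_memory(sel_or_max);
    return 0;
  }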

DOS can't handle a 64K I/O request - the maximum is 0xffff bytes, and the
transfer buffer should match that.  The stub assumes it is always
paragraph sized (low nibble zero).  Our stubedit proc should only ever
write values in the 0x0 to 0xffff range.  Now, should the stub add a
paragraph to handle the round up (so instead of a 0xfff0 maximum you
could actually get 0xffff), or should we just enforce low nibble zero in
stubedit?  (The real thing will always be paragraph sized.)
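
The round-up option amounts to one line in the stub; this hypothetical
fragment shows the arithmetic (variable names are made up, not taken
from the stub source):

  unsigned short size = 0xffff;             /* byte count from stub header */
  unsigned paragraphs = (size + 0xf) >> 4;  /* 0xffff -> 0x1000 paragraphs,
                                               so all 0xffff bytes usable */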

BTW, it looks like our "resize" computation can turn into an infinite loop?

Clearing the low nibble and capping the value at 0xfff0 in the stubedit
code will solve 99.9% of the problems here without changing the stub or
breaking compatibility ...
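
A sketch of that check as it might look in stubedit (not the actual
source, just the proposed clamp):

  /* force the minkeep value to be paragraph sized and below 64K */
  unsigned long clamp_minkeep(unsigned long size)
  {
    if (size > 0xfff0)
      size = 0xfff0;       /* DOS I/O tops out below 64K */
    return size & ~0xfUL;  /* clear the low nibble */
  }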
