Mail Archives: djgpp/1996/04/20/14:47:06

From: elf AT netcom DOT com (Marc Singer)
Message-Id: <199604201839.LAA28457@netcom5.netcom.com>
Subject: Re: LONG_MIN question
To: ILGES AT cc DOT uab DOT es
Date: Sat, 20 Apr 1996 11:39:01 -0700 (PDT)
Cc: djgpp AT sun DOT soe DOT clarkson DOT edu (DJGPP List Alias)
In-Reply-To: <01I3RZ7ZOPMQ005280@cc.uab.es> from "x.pons@cc.uab.es" at Apr 20, 96 08:04:04 pm
Mime-Version: 1.0

> It is possible to do the right comparison by writing
>   if (0L < LONG_MIN)
>      .....
> 
> LONG_MIN is defined in <limits.h> as:
>    #define LONG_MIN (-2147483647L-1L)
> 
> I suppose this makes the code a bit slower. I'd like to know:
>   1-Is there some way to avoid this subtraction each time I compare a value
>     with -2147483648L?
>   2-Why does this problem exist? Why doesn't (-2147483647L-1L) produce
>     the same problem?
>   3-Why DJGPP defines
>       #define SHRT_MIN (-32768)
>     and BC++4.52 defines
>       #define SHRT_MIN (-32767-1)
>     even for 32 bit applications?
> 

You can look at the compiler's assembler output to verify that it is
smart enough to fold the subtraction of constants at compile time, so
the comparison costs nothing extra at run time.

The definitions of these constants are specific to each compiler.
While I don't know why each uses a different form, the resulting
values are guaranteed to be identical.

Marc Singer


