Mail Archives: djgpp/1996/04/20/14:11:01

Date: Sat, 20 Apr 1996 20:04:04 +0000
From: "x DOT pons AT cc DOT uab DOT es" <ILGES AT cc DOT uab DOT es>
Subject: LONG_MIN question
To: djgpp AT sun DOT soe DOT clarkson DOT edu
Message-Id: <01I3RZ7ZOPMQ005280@cc.uab.es>
Organization: Universitat Autonoma de Barcelona
Mime-Version: 1.0

Dear programmers,

If I write a statement like this:

  if (0L < -2147483648L)
     .....

the result is TRUE, although it should not be. In fact, DJGPP warns
about it when compiling:
  "warning: decimal constant is so large that it is unsigned"

The comparison can be done correctly by writing
  if (0L < LONG_MIN)
     .....

LONG_MIN is defined in <limits.h> as:
   #define LONG_MIN (-2147483647L-1L)
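
The corresponding sketch with LONG_MIN prints FALSE as expected (again
assuming a 32-bit long):

  #include <stdio.h>
  #include <limits.h>

  int main(void)
  {
      /* LONG_MIN expands to (-2147483647L-1L); both operands of the
         subtraction fit in long, so no unsigned promotion occurs. */
      if (0L < LONG_MIN)
          printf("TRUE\n");
      else
          printf("FALSE\n");
      return 0;
  }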

I suppose this makes the code a bit slower. I'd like to know:
  1-Is there some way to avoid this subtraction each time I compare a value
    with -2147483648L?
  2-Why does this problem exist? Why doesn't (-2147483647L-1L) produce
    the same problem?
  3-Why does DJGPP define
      #define SHRT_MIN (-32768)
    while BC++4.52 defines
      #define SHRT_MIN (-32767-1)
    even for 32-bit applications? (See the sketch after this list.)
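
Here is a small sketch comparing the two definitions on a 32-bit-int
target (the macro names are mine, invented just for this test):

  #include <stdio.h>

  /* The two definitions from question 3, renamed so they can be
     compared side by side; these names exist only in this sketch. */
  #define SHRT_MIN_DJGPP (-32768)
  #define SHRT_MIN_BC    (-32767-1)

  int main(void)
  {
      /* On a 32-bit-int compiler both expressions have type int and
         the same value, so this prints "equal". */
      if (SHRT_MIN_DJGPP == SHRT_MIN_BC)
          printf("equal: %d\n", SHRT_MIN_BC);
      else
          printf("different\n");
      return 0;
  }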

Thank you,


Xavier Pons
