Date: Sat, 20 Apr 1996 20:04:04 +0000
From: "x DOT pons AT cc DOT uab DOT es"
Subject: LONG_MIN question
To: djgpp AT sun DOT soe DOT clarkson DOT edu
Message-Id: <01I3RZ7ZOPMQ005280@cc.uab.es>
Organization: Universitat Autonoma de Barcelona
Mime-Version: 1.0
Content-Type: TEXT/PLAIN; CHARSET=US-ASCII
Content-Transfer-Encoding: 7BIT

Dear programmers,

If I write a statement like this:

    if (0L < -2147483648L) .....

the result is TRUE, although it should not be. In fact, DJGPP warns about it
when compiling:

    "warning: decimal constant is so large that it is unsigned"

It is possible to do the right comparison by writing

    if (0L < LONG_MIN) .....

LONG_MIN is defined in <limits.h> as:

    #define LONG_MIN (-2147483647L-1L)

I suppose this makes the code a bit slower. I'd like to know:

1- Is there some way to avoid this subtraction each time I compare a value
   against -2147483648L?
2- Why does this problem exist? Why doesn't (-2147483647L-1L) produce the
   same problem?
3- Why does DJGPP define

    #define SHRT_MIN (-32768)

   while BC++ 4.52 defines

    #define SHRT_MIN (-32767-1)

   even for 32-bit applications?

Thank you,

Xavier Pons
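
P.S. Here is a minimal test sketch of what I am seeing (it assumes DJGPP's
32-bit long and the C89 rule that a decimal constant too large for long
becomes unsigned; the messages are only illustrative):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* 2147483648 does not fit in a 32-bit signed long, so the constant
           gets type unsigned long; negating it yields 2147483648 again, and
           0L is converted to unsigned long for the comparison.            */
        if (0L < -2147483648L)
            printf("0L < -2147483648L is TRUE (constant is unsigned)\n");

        /* 2147483647 does fit in long, so (-2147483647L-1L) stays signed
           and evaluates to the most negative long at compile time.        */
        if (0L < (-2147483647L - 1L))
            printf("0L < (-2147483647L-1L) is TRUE (should never happen)\n");
        else
            printf("0L < (-2147483647L-1L) is FALSE, as expected\n");

        printf("LONG_MIN = %ld\n", LONG_MIN);
        return 0;
    }

Compiling this with DJGPP gives the "decimal constant is so large that it is
unsigned" warning on the first comparison.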