Date: Mon, 22 Sep 1997 14:26:37 +0300 (IDT)
From: Eli Zaretskii
To: Brett Porter
cc: DJGPP
Subject: Re: %d
In-Reply-To: <199709220850.SAA00347@rabble.uow.edu.au>
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Precedence: bulk

On Mon, 22 Sep 1997, Brett Porter wrote:

> I didn't realise this, and to make sure my code was compatible between
> 16-bit and 32-bit (when using Borland), I never used int types to ensure
> that they always were what they were meant to be.

This is a different issue.  You can (and sometimes should) use shorts
where you know 16 bits are enough.  Just remember to prototype your
functions so that they get ints instead of shorts.  For example:

    void foo (int);  /* a prototype of `foo' */
    ....
    short i;
    foo (i);         /* a call to `foo' with a short argument */

An ANSI-complying compiler will see the prototype and convert the short
to an int, whereas a non-ANSI compiler will *always* promote shorts to
ints when passing them to functions.

> I heard a rumour somewhere (some internet page) that in a 32-bit
> compiler, you should always use 32-bit ints instead of shorts because
> they don't have to be "played with" so much in the registers (sorry, I
> didn't know how to say that).  Is this true and would it really give
> that much of a speed gain?

It is true that on some architectures, ints are faster than shorts.
However, when portability is an issue, IMHO, use shorts and don't look
back.  In most programs you won't see the difference, since the
optimizer usually does a good job.  The only place where you should use
ints is in tight loops, where the speed really matters; and an index in
a loop can always be declared int (if 16 bits are enough) without
hampering portability.