
Date: Sat, 8 Oct 1994 08:52:58 -0700 (PDT)
From: "Frederick W. Reimer" <fwreimer AT crl DOT com>
Subject: Re: djgpp and the 386SX
To: Stephen Turnbull <turnbull AT shako DOT sk DOT tsukuba DOT ac DOT jp>
Cc: djgpp AT sun DOT soe DOT clarkson DOT edu

On Sat, 8 Oct 1994, Stephen Turnbull wrote:

>    On Thu, 6 Oct 1994, Don L. wrote:
>    > > Wrong!  GCC produces 32-bit code, which means int is 32 bits, not 16.  So
>    > > it can hold values up to about 2 billion.
>    > > 
>    > > 	Eli Zaretskii
> 
>    > No, this is not wrong; it is completely dependent on the platform
>    > and compiler you are working with.  For portability reasons you
>    > should assume a 16-bit int.
>    > Don ;)
> 
>    Yes, I think it is wrong.  What Eli is talking about is DJGPP (that is 
>    what this list server is about!).  If you want to talk about some other 
>    implementation of C, there are other mail groups available.  BUT, for the 
>    purposes of DJGPP, THIS compiler DOES use 32-bit ints.  SO, the answer is 
>    definitely -- WRONG!
> 
>    Fred Reimer
> 
> Not only is it definitely wrong, but it's wrong-headed, too.  "For
> portability reasons" one makes *no* assumptions that one can avoid.
> Even assuming ANSI minimum sizes can bite you.  (E.g., in packed
> structures with DJGPP you can't use unsigned, you must use unsigned
> short if you want to map to a 16-bit field.  And of course in X
> Consortium code, not even shorts are used---e.g., for wide characters
> they use two-byte arrays.)  On an ANSI compiler you don't need to make
> assumptions; you can use the values that are required to be defined
> in limits.h.
> -- 
> Steve "My 2 yen is worth 2.8% more than your 2 cents" Turnbull
> 
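A quick sketch of the limits.h approach Steve mentions, querying what
the compiler actually guarantees instead of assuming sizes (this is
plain ANSI C, nothing DJGPP-specific):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* These macros are required by ANSI C, so no guessing. */
        printf("char  is %d bits\n", CHAR_BIT);
        printf("short: %d .. %d\n", SHRT_MIN, SHRT_MAX);
        printf("int:   %d .. %d\n", INT_MIN, INT_MAX);
        printf("long:  %ld .. %ld\n", LONG_MIN, LONG_MAX);
        return 0;
    }

Under DJGPP this reports 32-bit ints; under a 16-bit DOS compiler it
reports 16-bit ints, with no change to the source.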

I think we are really talking about two things here.  The first issue is 
what constitutes acceptable, standard programming practice, and the second 
is what features programmers expect from a compiler that is touted as a 
32-bit compiler.

I think you are right about the programming practices.  There is a 
problem, IMHO, with the ANSI standard though.  If you go strictly by the 
standard, and also don't assume anything about the platform/compiler you 
are using, then you end up with code that doesn't use ints, shorts, 
longs, or anything else directly.  Since you can never be absolutely sure 
what sizes these types are, you end up having to use a bunch of #defines 
or typedefs, creating types like INT16, INT32, UINT16, UINT32, etc.  Your 
code ends up not looking like C at all.  Again, this is just MHO, but I 
don't see another way around it.
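
Here is a minimal sketch of the kind of header I mean (the INT16/UINT32
names are just illustrative, not from any real package); at least it
picks the underlying types by checking limits.h rather than guessing:

    /* sizes.h -- illustrative sketch only. */
    #include <limits.h>

    #if INT_MAX == 2147483647
    typedef short          INT16;   /* e.g. DJGPP: 16-bit short, 32-bit int */
    typedef unsigned short UINT16;
    typedef int            INT32;
    typedef unsigned int   UINT32;
    #elif INT_MAX == 32767
    typedef int            INT16;   /* e.g. 16-bit DOS compilers */
    typedef unsigned int   UINT16;
    typedef long           INT32;
    typedef unsigned long  UINT32;
    #else
    #error "unexpected int size -- add a case for this compiler"
    #endif

And then your code is full of UINT32s instead of plain C types, which
is exactly the "doesn't look like C" problem.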

I think you are wrong on the second point, if you believe that a compiler 
billed as 32-bit can get away with using 16-bit ints.  I don't think 
there has ever been a poll or anything, but I would guess that most 
programmers are smart enough to know when they are getting reamed.  Look 
at these facts about the Intel CPUs:

They have a limited number of general registers that can be used for C 
"register" variables (or even to manipulate memory variables).  If a 
program has several variables that need to be 32 bits long, a 16-bit 
compiler would have to either use two registers for one variable or 
not use registers at all and keep the variables in memory.  Intel code 
takes a big performance hit when memory is accessed, especially when you 
are running on a DX/2 CPU, where memory access runs at half the speed of 
the CPU core.  Most of these "big" programs that use large arrays (which 
need to be referenced with 32-bit pointers) will defeat any benefit of a 
primary or secondary cache; this is evident when you are sorting a huge 
array.  Compilers that use 16-bit pointers take an even greater hit 
because they need to use segments, and they must keep updating both the 
segment and offset values to walk these huge arrays.  Even if you don't 
use "real" C pointers, and just use an int as an index, you would still 
take a performance hit if the compiler didn't provide 32-bit types that 
can be handled in a single CPU register.
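
To make that concrete, consider a routine like this (illustrative only;
actual code generation depends on the compiler and options):

    /* Sum a big array using a 32-bit index and accumulator.  A 32-bit
     * compiler can keep sum, i, and the array pointer in one register
     * each; a 16-bit compiler has to synthesize the longs out of
     * register pairs or spill them to memory on every iteration. */
    long sum_array(const long *a, long n)
    {
        long sum = 0;
        long i;

        for (i = 0; i < n; i++)
            sum += a[i];
        return sum;
    }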

Maybe this is not an issue on other platforms.  Maybe these other 
platforms use 16 bits for ints and 32 bits for longs.  That is acceptable 
to me if the compiler calls itself a 32-bit compiler AND uses single CPU 
instructions to manipulate those 32-bit values.  But to manipulate 
32-bit values (whether they be ints or longs) using 16-bit math 
internally and still call yourself a 32-bit compiler is just wrong.
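
The difference shows up even in a one-line function (my comments
describe the typical code generation, not a guarantee):

    unsigned long add32(unsigned long a, unsigned long b)
    {
        /* True 32-bit compiler (DJGPP/GCC on a 386): one 32-bit ADD.
         * 16-bit compiler emulating longs: an ADD/ADC pair on the
         * 16-bit halves, plus the moves to shuffle them around. */
        return a + b;
    }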

Fred Reimer

+-------------------------------------------------------------+
| The views expressed in the above are solely my own, and are |
| not necessarily the views of my employer.  Have a nice day! |
| PGP2.6 public key available via `finger fwreimer AT crl DOT com`   |
+-------------------------------------------------------------+


