
Date: Sat, 8 Oct 1994 12:57:34 -0400 (EDT)
From: Chris Tate <FIXER AT FAXCSL DOT DCRT DOT NIH DOT GOV>
To: djgpp AT sun DOT soe DOT clarkson DOT edu
Subject: Re: djgpp and the 386SX

I'll start this off with an observation that the terms "32-bit compiler"
and "32-bit code" tend to be Humpty-Dumpty terms - "they mean precisely
what I intend them to mean, neither more nor less."  It's therefore a
bit difficult to have a meaningful discussion when the parties involved
have different ideas about what the words *should* mean.  But here I go
anyway....  This is pretty long, so anybody who has gotten sick of this
thread already had better just nuke this note now, and be done with it. :)

Chris Tate wrote:

>> The fact that DJ's port of GCC produces "32-bit" code means that *pointers*
>> are 32 bits, not necessarily integers.  It happens to be the case that
>> integers are also 32 bits in size, but that's by no means a requirement
>> for "32-bit compilers" in and of itself.

and Fred Reimer responds:

> This is simply not true!  A TRUE 32 bit compiler is 32-bits throughout, 
> instead of using 16 or 32 bits here and there.  This is a major 
> complaint of users of Borland and Microsoft compilers that are touted to 
> be 32-bit compilers where they don't use 32-bits for their integers.  
> There is a MAJOR difference on Intel CPUs, where integers are only 
> 16 bits.  This may not be true for other CPUs, but I would hazard a 
> guess that it is a standard truth.  I don't know about the Macintosh or 
> other compilers, but for Intel computers, 32 bits is definitely faster.

Borland and Microsoft certainly *do* use 32-bit integers; they're just
declared "long" instead of "int."  Why is this a problem?

> If anyone is interested, I have Intel's programmer's guide to the 386+ 
> processors, and I would be glad to post the timing specs for both 16-bit 
> and 32-bit instructions.

I'm frankly less interested in the 386+ processors than in the 286 and
whatever bizarre compatibility mode it is that later processors run under
in DOS.  Isn't it the case that the instruction timings are different for
the different chip modes?  Remember that go32 provides its own pseudo-
OS in order to use the flat memory model; I'd argue that comparing it
with BC/MSVC is an apples-and-oranges job.  BC can compile Windows programs
and DJGPP can't, and so forth....

>As far as Unix programs and such are concerned, what's wrong with 
>assuming that the size of an int is the same as the size of a pointer?  
>This is almost a given in the C programming world, or at least it should 
>be.

Absolutely NOT!

You've never done much cross-platform porting, have you?  There are
compilers that give you just as much control over whatever it is you're
doing, but use different 'int' sizes for various reasons.  And you'd
better not make any assumptions about the size of 'int' in your code,
or else you'll wind up with subtle bugs when you move to a compiler
with a different 'int' size.
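
Here's a made-up fragment (the names are mine, not from any real program)
showing the kind of thing that bites you:

    /* Stashing a pointer in an int.  Fine when sizeof(int) == sizeof(char *);
       on a compiler with 16-bit ints and 32-bit pointers the cast silently
       throws away the top half of the address.  (Most compilers will at
       least warn about it.) */
    #include <stdio.h>

    int main(void)
    {
        char  buffer[16];
        int   cookie = (int)buffer;     /* assumes an int can hold a pointer */
        char *back   = (char *)cookie;  /* may no longer point at buffer     */

        printf("original %p, recovered %p\n", (void *)buffer, (void *)back);
        return 0;
    }

Code like that "works" right up until you recompile it somewhere the sizes
differ, and then it breaks in ways that are miserable to track down.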

>> For example, many Macintosh compilers have 32-bit pointers, and 16-bit
>> 'int's.  Or, for example, the Metrowerks C/C++ compilers (again for the
>> Mac), in which an 'int' is 16 bits when compiling for the MC680x0, and
>> 32 bits when compiling for the PowerPC.  The reason is efficiency; 16-bit
>> operations are faster on the Motorola chips than 32-bit operations.  Thus
>> the choice of a 16-bit 'int', the "natural" integer type.  For the PPC,
>> the 32-bit integer is faster, so *it* is made the 'default' int.
>
>I would suggest that the compilers for the Mac which use 16-bit ints are 
>not true 32-bit compilers.  When you say that a particular compiler is 
>32-bit, what do YOU assume?  That POINTERS are 32-bits?  That's IT?  I 
>assume that it uses 32-bit int's.  I could care less what type its 
>pointers are.  If it's an Intel machine, let it use far pointers or 
>something, I really don't care.  But to say that a compiler is 32-bits 
>when it uses 16-bit int's is stretching it a bit, if you ask me.

Actually, Metrowerks lets you pick whether you want 16-bit or 32-bit
integers under its 68K backend.  16-bit integers are faster on earlier
chips (which are still in use!), and have the advantage of taking up
a lot less space.  32-bit integers are more accommodating of large values,
and just as fast on later chips.  You pick the one most suited to your
needs.  If you expect your code to be run on all Macs, you use 16-bit
ints.

'far' pointers are one of those incredibly skanky kludges that were
intended to stretch a poorly-architected machine into the modern era.
And they're not portable by any stretch of the imagination.

Why do you require that the 'int' type be 32 bits?  Just declare things
'long' when you need that many bits; otherwise, use 'int' and let the
compiler sort things out.  Assumptions like the one you describe above can
cause, and have caused, a lot of grief when it comes to porting.

>Yes, they may be faster, but if you have to compute a 32-bit arithmetic 
>value, what would be faster?  Two (actually more because of the 
>conditional carry) instructions or just one?  I would submit that the one 
>32-bit instruction is faster, and I would be willing to provide the 
>instruction timings to prove it.

If you have to do 32-bit math, you declare a 32-bit variable by calling it
'long' instead of 'int.'  What *I* am saying is that it makes no sense to
require the *default* 'int' size to be 32 bits, especially when that may
not be the most efficient integer size.
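
In other words (a sketch of the convention, not anybody's actual code):

    /* Use 'long' where the value genuinely needs 32 bits; let plain 'int'
       be whatever is natural on the target. */
    #include <stdio.h>

    long sum_bytes(const unsigned char *data, int count)
    {
        long total = 0;   /* can exceed 16 bits, so it must be long */
        int  i;           /* a loop counter fits in any size of int */

        for (i = 0; i < count; i++)
            total += data[i];
        return total;
    }

    int main(void)
    {
        unsigned char block[300];
        int i;

        for (i = 0; i < 300; i++)
            block[i] = 255;
        printf("sum = %ld\n", sum_bytes(block, 300)); /* 76500 won't fit in 16 bits */
        return 0;
    }

The sum needs 32 bits no matter what machine you're on, so it's declared
'long'; the loop counter doesn't, so the compiler is free to use whatever
width is fastest.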

Real history:  About five years ago, most Macs were still running MC68000
chips, rather than the 68020.  The 68030 was brand-new.  There were also
two major C compilers on the market, MPW C and THINK C.  Commercial apps
compiled under THINK C were often sleeker and faster than those compiled
under MPW, because THINK C used a 16-bit 'int' and MPW used a 32-bit 'int'.

Both compilers offered full access to the entire memory space of the
machine, and both offered full 32-bit codegen.  How can you say that
THINK C was somehow "not a true 32-bit compiler" if it didn't have *any*
shortcomings relative to its (supposedly "truly 32-bit") competitor?

> Yes, Unix code does leave a lot to be desired, but at least I can 
> compile the majority of it on my PC without changes with DJGPP.  Can you 
> say that for your Mac? (just wondering)...

You're comparing Apples and oranges again.  :-)

Actually, I can port a great deal of (TTY-based) Unix code to my Mac without
any major modifications.  Now:  How much Unix code can you port to *Windows*
without any major changes?  That's a much more balanced comparison.
	
--------------------------------------------------------------------
Christopher Tate           | "Apple Guide makes Windows' help engine
fixer AT faxcsl DOT dcrt DOT nih DOT gov  |  look like a quadruple amputee."
eWorld:  cTate             |      -- Pete Gontier (gurgle AT dnai DOT com)
