Mail Archives: pgcc/1998/03/12/20:08:41

X-pop3-spooler: POP3MAIL 2.1.0 b 3 961213 -bs-
Delivered-To: pcg AT goof DOT com
Date: Thu, 12 Mar 1998 20:44:39 +0100
From: Wolfgang Formann <wolfi AT unknown DOT ruhr DOT de>
Message-Id: <199803121944.UAA18589@unknown.ruhr.de>
To: pcg AT goof DOT com, tuukkat AT ees2 DOT oulu DOT fi
Subject: Re: paranoia & extra precision [was -fno-float-store in pgcc]
Cc: andrewc AT rosemail DOT rose DOT hp DOT com, beastium-list AT Desk DOT nl
Sender: Marc Lehmann <pcg AT goof DOT com>
Status: RO
X-Status: A
Lines: 71

Tuukka Toivonen wrote:

>On Thu, 12 Mar 1998, Marc Lehmann wrote:

>>> What do you make of the following code?  PGCC produces different
>>> results when optimizing than when not optimizing.  I was
>>> told it has to do with -fno-float-store, but pgcc doesn't appear

>Same goes for standard GCC... I was disappointed to find that it didn't
>respect -ffloat-store. (But don't take my word for it; it was long
>enough ago that you had better check it yourself.)

>>the x86 chips are not really ieee compliant. that's not too serious, as I'll

>This seems to be a general misbelief (I've heard it before...).
>The problem is not with the Intel FPU. The problem is in the C compiler
>(or maybe in the OS... but not in the FPU).

I think you forgot this braindamaged FPU, which has only 7 registers for
storing intermediate values/operands (the 8th is used for the next result).
So when you run out of registers, then you *HAVE* to store one of the
intermediates to memory. And from that moment on you mix 80-bit and 64-bit
values and get results that are neither IEEE-compliant nor extended!
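
A minimal sketch of the effect: on x87, a value spilled from an 80-bit
register into a 64-bit double in memory gets rounded, so the "same"
value can compare unequal to its still-in-register copy. Whether you
actually see the mismatch depends on the compiler, the optimization
level and -ffloat-store:

    #include <stdio.h>

    int main(void)
    {
        double a = 1.0, b = 3.0;
        double q = a / b;      /* may stay in an 80-bit x87 register   */
        volatile double m = q; /* forced through a 64-bit memory slot  */

        if (q == m)
            printf("register and memory copies agree\n");
        else
            printf("80-bit vs 64-bit mismatch!\n");
        return 0;
    }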

???????????????? not sure about this ???????????????? 
Well, I think there is an additional bit used inside the FPU as a helper
for rounding problems; if that is true, the real internal format is 81 bits
wide. When there is a lot of task-switching to other processes which
also use the FPU, then you will lose this 81st bit when the FPU state of
your process is saved and restored. In that case, the precision of any
program depends on the phase of the moon!
Maybe that is why they added a new opcode in some Pentium chips?
???????????????? not sure about this ???????????????? 

>Usually, you want as much precision as possible. So usually it's just good
>that there's some extra precision.

>In some cases, a truly IEEE-compliant floating point system is needed. For
>these cases there's a state bit in FPU that tells it to use less precision.
>(Another good use for this bit is to speed up fdiv about 20 clocks; but
>that's another story).

Yeah, this one does exist, but it affects only multiplication, division,
addition, subtraction and the square root. So it does not affect any of
the transcendental functions.
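
For the record, a minimal sketch of flipping that state bit, assuming
glibc's <fpu_control.h> on x86 (it drops the precision-control field
from extended to plain double):

    #include <fpu_control.h>

    /* Select a 53-bit mantissa for +, -, *, / and sqrt; as said above,
       the transcendental instructions are not affected, and the
       extended exponent range stays as it is. */
    void use_double_precision(void)
    {
        fpu_control_t cw;
        _FPU_GETCW(cw);                           /* read control word      */
        cw = (cw & ~_FPU_EXTENDED) | _FPU_DOUBLE; /* clear PC, set 53 bits  */
        _FPU_SETCW(cw);                           /* write it back (fldcw)  */
    }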

>Generally code generated by the C compiler should use the higher precision.
>But there really should be a switch to use IEEE-style lesser precision
>floats (maybe (p)gcc has it, I don't know).

Which means that all your external libraries would have to be recompiled
with this 80-bit precision, and a lot of prebuilt applications will not
run.

>Even better would be a #pragma or something which would allow one to use
>IEEE-floats in some part of code and extra precision in another part.

Nice idea, but it works only with
*) good will, when there are no prototypes (K&R C)
*) prototypes in ANSI C
*) new name mangling in C++
when doubles are used as arguments.
Will this give us the same flood of warnings as <typeof char>, <typeof
unsigned char> and <typeof signed char>, which are all treated as different
types?
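
An aside on that analogy: ISO C already treats char, signed char and
unsigned char as three distinct types, which is exactly where that flood
of warnings comes from. A rough sketch:

    void takes_plain_char(char *p) { (void)p; }

    void char_warning_demo(void)
    {
        signed char   sc = 0;
        unsigned char uc = 0;
        takes_plain_char(&sc); /* warning: incompatible pointer type */
        takes_plain_char(&uc); /* warning: incompatible pointer type */
    }

A pragma that changed what "double" means in one part of the code would
raise the same kind of type-identity question at every call boundary.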

>>Any questions left? Don't hesitate - Ask!

>This was not exactly a question, but I hope you don't mind ;)
--
Wolfgang Formann (wolfi AT unknown DOT ruhr DOT de)
