Mail Archives: djgpp/1993/04/08/00:25:30

Date: 08 Apr 1993 13:37:33 +1100
From: Bill Metzenthen <APM233M AT vaxc DOT cc DOT monash DOT edu DOT au>
Subject: Re: Bug in floating point constants
To: djgpp AT sun DOT soe DOT clarkson DOT edu

Here are the results for gcc 2.3.3 under Linux with math emulation and
the 4.3.2 libs:

.globl _d
	.align 2
_d:
	.double 0d9.99999999999999982737e+66
	.double 0d9.99999999999999952805e+67
	.double 0d1.00000000000000007253e+69
	.double 0d1.00000000000000007253e+70
	.double 0d1.00000000000000004188e+71

For interest, below are the values of the 5 numbers under Linux, each
shown in four lines: the first line has the bit pattern; the second
line is the result of printf() with a "%79.3f" format; the third line
is the ASCII string representation obtained by converting the double
with 400-digit (decimal) precision arithmetic; and the fourth line
gives the relative error and the number of bits that error represents
(i.e. -log2 of the relative error). Note that all of the numbers are
the closest possible with an 80387 double. It is interesting that the
printf() call produced the correct result in each case (it is also
correct for the 300+ decimal digit number MAXDOUBLE).


d[0] = 4dd7bd29 d1c87a19
        9999999999999999827367757839185598317239782875580932278577147150336.000
        9999999999999999827367757839185598317239782875580932278577147150336.000
error = -1.7e-17,  55.685076 bits
d[1] = 4e0dac74 463a989f
       99999999999999995280522225138166806691251291352861698530421623488512.000
       99999999999999995280522225138166806691251291352861698530421623488512.000
error = -4.7e-17,  54.234150 bits
d[2] = 4e428bc8 abe49f64
     1000000000000000072531436381529235126158374409646521955518210155479040.000
     1000000000000000072531436381529235126158374409646521955518210155479040.000
error = 7.3e-17,  53.614171 bits
d[3] = 4e772eba d6ddc73d
    10000000000000000725314363815292351261583744096465219555182101554790400.000
    10000000000000000725314363815292351261583744096465219555182101554790400.000
error = 7.3e-17,  53.614171 bits
d[4] = 4eacfa69 8c95390c
   100000000000000004188152556421145795899143386664033828314342771180699648.000
   100000000000000004188152556421145795899143386664033828314342771180699648.000
error = 4.2e-17,  54.406464 bits

(Although the Linux gcc appears to handle this problem quite well,
there are sometimes errors in the conversion of strings representing
numbers near MAXDOUBLE. This may be fixed in the 4.3.3 libs, but I
haven't checked yet.)


--Bill



