Mail Archives: pgcc/2000/07/06/13:31:13.1

Sender: scott AT b2 DOT swiftview DOT com
Message-ID: <3964C0E2.652FE615@swiftview.com>
Date: Thu, 06 Jul 2000 10:24:50 -0700
From: Scott Long <scott AT swiftview DOT com>
Organization: SwiftView, Inc.
X-Mailer: Mozilla 4.72 [en] (X11; U; Linux 2.2.16pgcc i686)
X-Accept-Language: en
MIME-Version: 1.0
To: pgcc AT delorie DOT com
Subject: Multiplicative distribution
Reply-To: pgcc AT delorie DOT com

This is not a bug, more of an oversight.

To its credit, pgcc recognizes the following mathematical identity
(which is apparently unknown to gcc, at any level of optimization):

f * a + f * b == f * (a + b)

But it does so only when the types involved are integers. When the types
are double, two multiplications take place. Why is this? I realize that
there are accuracy issues with any floating-point operation, but I do
not see how they could possibly apply in this case.
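
A minimal test case along these lines (the function names are just
illustrative) shows the difference:

/* Compiled with optimization, the integer version is folded to
 * f * (a + b), while the double version keeps both multiplications. */

int imul(int f, int a, int b)
{
    return f * a + f * b;
}

double dmul(double f, double a, double b)
{
    return f * a + f * b;
}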

A related case is the identity:

a / f + b / f == (a + b) / f

This cannot be done with integer types, as I have worked out in terms of
an integral quotient and a remainder. Writing

a == x_a * f + r_a    (where x_a == a / f and 0 <= r_a < f)
b == x_b * f + r_b    (where x_b == b / f and 0 <= r_b < f)

we get

a + b == f * (x_a + x_b) + r_a + r_b

so a / f + b / f == x_a + x_b, while (a + b) / f == x_a + x_b + (r_a + r_b) / f.

If r_a + r_b reaches f (as it can for any f greater than 1), the two
sides differ and we get an incorrect result.
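
For example, with f == 4, a == 3, b == 3:

a / f + b / f == 0 + 0 == 0
(a + b) / f   == 6 / 4 == 1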

But why can't this happen when the types are double?

Please CC any replies to my email since I am not subscribed to this
list.

Thank you,
Scott
