
To: djgpp AT delorie DOT com
Subject: SHIFTS vs MULS
Message-ID: <19970201.205029.4943.0.chambersb@juno.com>
From: chambersb AT juno DOT com (Benjamin D Chambers)
Date: Sat, 01 Feb 1997 23:48:49 EST

Well, I was asked earlier to prove that for the majority of numbers,
multiplying via shifts and adds was faster than using MUL.  (I was also
asked to do this on a 386, a 486, and a Pentium for comparison.  I only
have access to my 486, so I can't do that.)  I wrote two programs to do
this (a rough sketch of the second follows the list):
1) Generates 2000 random integers, spits them out to a binary file, and
creates a new .c file that consists of a function (which multiplies every
other number by the number after it) and a main (which calls that
function 500 times).

2) Loads the binary file from above and multiplies every other number by
the number after it 500 times.
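
Roughly, the second program looks like this (a simplified sketch, not the
actual source; the file name, sizes, and error handling are placeholders):

/* Sketch of program 2: read the 2000 integers back from the binary file
 * and multiply every other one by the number after it, 500 times over. */
#include <stdio.h>

#define COUNT 2000
#define REPS  500

int main(void)
{
    static int nums[COUNT];
    FILE *f = fopen("numbers.bin", "rb");   /* placeholder file name */
    long total = 0;
    int i, rep;

    if (f == NULL || fread(nums, sizeof nums[0], COUNT, f) != COUNT)
        return 1;
    fclose(f);

    for (rep = 0; rep < REPS; rep++)
        for (i = 0; i + 1 < COUNT; i += 2)
            total += nums[i] * nums[i + 1];   /* operands only known at
                                                 run time, so gcc uses mul */

    printf("%ld\n", total);
    return 0;
}

The .c file that the first program spits out has the same pairs written in
as literal constants instead (lines like r += 12345 * 6789;), so gcc knows
both operands at compile time.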

Now, the only difference between the two resultant programs (the second
one and the program the first one spat out) is that gcc translates one
into shifts and adds, while the other has no choice but to use mul.  I
timed the two programs, and:
real.exe (the one that reads the numbers at run time and uses mul): 12 seconds.
shift.exe (the one built from the generated constants): no perceptible run time.
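
If you want to see the translation itself, a minimal stand-alone example
(separate from the test programs) compiled with "gcc -O2 -S" shows it:

/* Compare the assembly gcc emits for these two functions: the constant
 * multiplier is typically rewritten with shifts, adds, or lea, while the
 * variable multiplier has to use imul. */

int times_ten(int x)          /* e.g. x * 10 -> (x << 3) + (x << 1) */
{
    return x * 10;
}

int times_n(int x, int n)     /* multiplier unknown until run time */
{
    return x * n;
}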

This sounds a little fishy to me, especially considering that the code
sizes of the two programs are (when stripped) roughly equivalent.  Is
there any reason gcc might skip the multiplies in the shift-and-add one
as an optimization?  (i.e., looking ahead, seeing the value isn't used,
and not doing the calculation, or something?)
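
One test of that guess (just a sketch, not something I've tried yet): keep
the generated function's result live by accumulating it and printing it, so
the optimizer can't throw the calls away as dead code.  Something like:

#include <stdio.h>

/* Stand-in for the generated function; the real one has roughly a
 * thousand such products with different random constants. */
static int mulpairs(void)
{
    int r = 0;
    r += 123 * 456;
    r += 78 * 90;
    return r;
}

int main(void)
{
    long total = 0;
    int rep;

    for (rep = 0; rep < 500; rep++)
        total += mulpairs();   /* the result is used... */

    printf("%ld\n", total);    /* ...and printed, so the calls can't be
                                  dropped as dead code.  (gcc may still fold
                                  products of two literals at compile time,
                                  though, so shift.exe could stay fast.) */
    return 0;
}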

If anyone would like to see the offending programs, just ask and I'll
send them.

...Chambers
