Mail Archives: djgpp/1997/01/06/23:14:11
In article <32cedb2d DOT 17212822 AT ursa DOT smsu DOT edu>, Tony O'Bryan
<aho450s AT nic DOT smsu DOT edu> writes
>I did a quick check on floating point vs. integers not too long ago.
>I wrote a small loop that only added an integer to an integer counter,
>then rewrote it using floating point variables. On my Pentium 120,
>integers were THOUSANDS of times faster. I don't remember the exact
>numbers, but 50,000 loops required a few seconds with the floating
>point. The integers were so fast that the timer (calculated to
>several digits [7 or 8, I think]) couldn't register the elapsed time.
First: floating point is slower at adds and subtracts. If all you ever do
is add/sub then integer will be *at least* 2* faster. If you use
multiplies, floating point is ~4-12* faster; pipelined, you get the full
12* speedup.
Divides are slightly faster in the FPU, and a lot less hassle than fixed
point. If you use single precision FPU mode they are always 2* faster.
Also it's possible to continue issuing integer instructions while a float
divide executes. That allows tricks like performing a perspective divide
in effectively 1 clk.
Second: in your test the compiler probably replaced your entire loop with
a single constant load in the integer code. It could safely work out the
final value at compile time!
The float version had to actually do the calculations; it's not allowed
to optimise away floating point calculations.
Finally: the one thing never to do is mix integer and float operands.
Float->int conversions are very slow, and FPU integer operations are slow.
---
Paul Shirley: shuffle chocolat before foobar for my real email address