Mail Archives: djgpp/1998/09/17/04:16:13

From: "Clive Paterson" <epatersoncd AT cc DOT curtin DOT edu DOT au>
Newsgroups: comp.os.msdos.djgpp
Subject: Re: Floating/fixed point
Date: 17 Sep 1998 07:28:21 GMT
Organization: iiNet Technologies
Lines: 27
Message-ID: <01bde20c$af410200$0200a8c0@clive>
References: <000101bddf18$9db5fa00$d54b08c3 AT arthur>
NNTP-Posting-Host: reggae-09-64.nv.iinet.net.au
To: djgpp AT delorie DOT com
DJ-Gateway: from newsgroup comp.os.msdos.djgpp



Just in case anyone is interested, I've done a fair bit of experimentation
with fixed-point versus floating-point math, and I found some very
interesting results.

Firstly, Intel definitely has superior floating point. You will see more of
a speed increase using fixed-point math on a K6 or a Cyrix than on an
Intel. I don't know about the K6-2, but I hear they've improved their
floating point.
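
For anyone who hasn't used it, here is a minimal sketch of the kind of
fixed-point format I'm talking about. The 16.16 layout, the "fixed"
typedef, and the TO_FIXED/TO_DOUBLE macros are only example names for this
post, not something from my test code:

#include <stdint.h>

typedef int32_t fixed;                    /* 16.16 fixed-point value */

#define TO_FIXED(x)   ((fixed)((x) * 65536.0))  /* double -> fixed */
#define TO_DOUBLE(f)  ((f) / 65536.0)           /* fixed  -> double */

int main(void)
{
    fixed a = TO_FIXED(1.25);
    fixed b = TO_FIXED(2.5);
    fixed sum = a + b;            /* addition is a plain integer add */
    return TO_DOUBLE(sum) == 3.75 ? 0 : 1;
}

Addition and subtraction are ordinary integer instructions; only multiply
and divide need extra work, which is where the comparison with the FPU
gets interesting.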

Secondly, most people will say that the load/store operations are slow with
the FPU. This is partially true, because a load/store operation takes
about 33 clock ticks on the old Intel FPU. But with newer FPUs everything
is cached, so if you are doing repetitive calculations on the FPU the
load/store time is negligible.

Also, signed divisions will take longer using fixed-point math, because you
have to check the signs, negate any negative operands, and then, if
necessary, negate the result back to a negative number.
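
For illustration, that sign handling might look something like this in C
(a rough sketch only; fixed_div and the 16.16 format are example names,
and the real code would be assembler):

#include <stdint.h>

typedef int32_t fixed;                 /* 16.16 fixed-point value */

/* Divide two 16.16 values: strip the signs, divide as unsigned,
   then negate the result back if exactly one operand was negative. */
static fixed fixed_div(fixed a, fixed b)
{
    int negative = (a < 0) ^ (b < 0);
    uint32_t ua = (a < 0) ? -(uint32_t)a : (uint32_t)a;
    uint32_t ub = (b < 0) ? -(uint32_t)b : (uint32_t)b;

    /* (a << 16) / b, using a 64-bit intermediate to keep precision */
    uint32_t uq = (uint32_t)(((uint64_t)ua << 16) / ub);

    return negative ? -(fixed)uq : (fixed)uq;
}

The compares, the conditional negates, and the wider intermediate are the
extra work that an FPU divide handles for you in hardware.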

Overall though, fixed-point math can be much quicker when applied
correctly. I tested fixed-point vs floating-point math on my K6 200 and a
Pentium 100. The program was a fractal drawer written in assembler. The K6
saw a speed increase of about 3 times using fixed-point math, and the P100
an increase of about 2 times.
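
If anyone wants to try something similar, a C version of the sort of inner
loop a fixed-point fractal drawer uses could look like the sketch below.
This is my own illustration, not the assembler I benchmarked; fixed_mul,
mandel_point, and the 16.16 format are just example names:

#include <stdint.h>

typedef int32_t fixed;                        /* 16.16 fixed-point value */
#define TO_FIXED(x)  ((fixed)((x) * 65536.0))

static fixed fixed_mul(fixed a, fixed b)
{
    return (fixed)(((int64_t)a * b) >> 16);   /* 64-bit product, rescale */
}

/* Iterate z = z*z + c; return the iteration count (the pixel colour). */
static int mandel_point(fixed cr, fixed ci, int max_iter)
{
    fixed zr = 0, zi = 0;
    int i;
    for (i = 0; i < max_iter; i++) {
        fixed zr2 = fixed_mul(zr, zr);
        fixed zi2 = fixed_mul(zi, zi);
        if (zr2 + zi2 > TO_FIXED(4.0))        /* |z|^2 > 4: escaped */
            break;
        zi = fixed_mul(zr + zr, zi) + ci;     /* 2*zr*zi + ci */
        zr = zr2 - zi2 + cr;
    }
    return i;
}

The only real overhead compared with the floating-point version is the
64-bit multiply and the shift in fixed_mul; everything else is plain
integer arithmetic.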
