Mail Archives: djgpp/1998/06/20/12:31:02

From: Paul Shirley <Paul AT chocolat DOT obvious DOT fake DOT foobar DOT co DOT uk>
Newsgroups: comp.os.msdos.djgpp
Subject: Re: Fixed vs floating point?
Date: Fri, 19 Jun 1998 18:40:11 +0100
Organization: wot? me?
Message-ID: <2VQrmCA7Jqi1Ew76@foobar.co.uk>
References: <Pine DOT SUN DOT 3 DOT 96 DOT 980619100615 DOT 14404A-100000 AT xs2 DOT xs4all DOT nl>
NNTP-Posting-Host: chocolat.foobar.co.uk
Mime-Version: 1.0
Lines: 20
To: djgpp AT delorie DOT com
DJ-Gateway: from newsgroup comp.os.msdos.djgpp

In article <Pine DOT SUN DOT 3 DOT 96 DOT 980619100615 DOT 14404A-100000 AT xs2 DOT xs4all DOT nl>, Rob
Kramer <rkramer AT xs4all DOT nl> writes
>Can anyone make a guess if multiplications/divisions in fixed point math
>are still faster on a machine that has a FPU? I was wondering if it would
>do any good to #define my code to use conventional floats if the machine
>supports it. (I'm using Allegro's fixed math stuff b.t.w)

For Pentium and better processors:
In principle, multiply and divide are significantly faster with floats,
while add and subtract are a lot slower. Overall, hand-optimised
assembler can run maths algorithms noticeably faster.
Back in the real world, you will probably see no overall difference in
good (but untuned) C code, and only a little improvement in tuned C
code. Once you stray from simple arithmetic (e.g. into conditionals),
floats become a serious liability.

Most x86 compilers (djgpp included) don't do a good enough job with FPU
code for you to simply switch variable types and get an improvement.
---
Paul Shirley: my email address is 'obvious'ly anti-spammed


