Mail Archives: djgpp/1997/02/07/06:19:21
On Thu, 6 Feb 1997, Mr. Fuju wrote:
> When designers measure computer speed, they measure it in flops...
> mega-flops, tera-flops, etc. What program do they use to come up with
> those numbers? Could someone here write something that is comparable?
A flop is a floating-point multiplication followed by a floating-point
addition:
  double a, b, c;        /* assume these already hold values */
  double d = a * b + c;
Speed is measured in flops (or mega-flops, or tera-flops) PER SECOND, not
in flops.
So the simplest benchmark that measures raw flops would be a loop that,
say, multiplies two vectors to get their scalar product:
  double a[10], b[10], c = 0.0;  /* assume a[] and b[] are filled in */
  int i;

  for (i = 0; i < 10; i++)
    c += a[i] * b[i];
This is basically a 10-flop loop. If you measure the time it takes to
run, you can compute the speed of your machine in flops per second.
However, the real FP benchmarks (such as Whetstone) are much more complex
and sophisticated. They need to cope with such factors as on-chip cache
(which makes benchmarks seem extremely fast until the size of the
arrays overflows the cache) and other calamities.