Mail Archives: djgpp/1994/10/12/06:20:08

Date: Wed, 12 Oct 94 16:01:43 JST
From: Stephen Turnbull <turnbull AT shako DOT sk DOT tsukuba DOT ac DOT jp>
To: hodder AT geop DOT ubc DOT ca
Cc: djgpp AT sun DOT soe DOT clarkson DOT edu
Subject: Solved: Very strange behaviour!

   So now it works except when I use the -O flag to optimise it - and
   then it only crashes in a particular place. That's not really a
   problem - I just won't optimise it! - but it did lead me to wonder
   what exactly the -O switch does to the code. Since there is no

Do a "diff -C 2" on the assembler output.

   "ANSI" standard for optimization I suppose that the behaviour
   depends on the compiler - and isn't *guaranteed* to work?

Note that (at least as of two years ago) the GNU docs recommended the
use of -O2 as more robust than unoptimized code.  So for GNU C 2.3.3
or so, the optimized code was more likely to work!

The documentation for the -O switch is in the usual place: the info
and man pages.  If you want to know what "unrolling loops," etc.,
mean, I suggest a compiler construction text.  I enjoyed Aho, Sethi,
& Ullman a lot, but others may have other preferences.
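
To give a rough idea (this sketch is mine, not taken from the info
pages), "unrolling" a loop means duplicating its body several times so
that the loop-counter test and branch are executed less often:

    #define N 100

    /* The compiler may transform clear_plain() into roughly the shape
       of clear_unrolled(): the same stores, but a quarter of the
       counter tests and branches (assuming N is divisible by 4).  */
    void clear_plain (int a[N])
    {
        int i;
        for (i = 0; i < N; i++)
            a[i] = 0;
    }

    void clear_unrolled (int a[N])
    {
        int i;
        for (i = 0; i < N; i += 4) {
            a[i]     = 0;
            a[i + 1] = 0;
            a[i + 2] = 0;
            a[i + 3] = 0;
        }
    }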

The behavior of the optimized code does *not* depend on the compiler,
and is guaranteed to work---just as well as the unoptimized code,
which is to say, not much of a guarantee at all.  But if it doesn't
work, it is a compiler bug.  (The actual rule here is quite
complicated; it is possible that the behavior you observe comes from a
construct with undefined or implementation-defined behavior that you
are using improperly.  In that case the unoptimized code may happen to
work while the optimized code does not, and that is not a compiler
bug.)
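
A classic example of the kind of construct I mean (my own sketch, not
something taken from your code) is returning the address of a local
variable.  The behavior is undefined, but without -O the dead stack
slot often still holds the old value, while with -O the slot gets
reused and the "working" program breaks:

    #include <stdio.h>

    static int *broken (void)
    {
        int local = 42;
        return &local;          /* undefined: local dies here */
    }

    int main (void)
    {
        int *p = broken ();
        printf ("%d\n", *p);    /* may print 42 without -O, garbage with -O */
        return 0;
    }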

To be less opaque, optimizers are *not* permitted to change the
semantics of any code whose semantics are specified by the standard.
For example, on some machines arithmetic on unsigned chars is faster
than on signed chars.  Given

    #include <stdio.h>

    int main (void)
    {
        signed char sc = 0;
        unsigned char uc = 0;
        char c = 0;
        int i = 1024;
        while (i--) printf ("%d %d %d\n", (int) sc++, (int) uc++, (int) c++);
                /* explicit casts used for clarity */
        return 0;
    }

it might be that the unoptimized code implements c as a signed char
(though no compiler I know of would; it would of course choose the
natural implementation for the machine---this is a contrived example);
the optimizer could then reimplement c as an unsigned char to improve
the speed.  The optimizer is *not* allowed to reimplement sc as
unsigned to improve the speed, because the signedness of sc, unlike
that of plain c, is specified by the standard.

I hope that helps; I think I'm more confused than when I started.
    --Steve
