Mail Archives: djgpp/1997/02/26/12:56:40

From: jesse AT lenny DOT dseg DOT ti DOT com (Jesse Bennett)
Newsgroups: comp.os.msdos.djgpp
Subject: Re: Netlib code [was Re: flops...]
Followup-To: poster
Date: 26 Feb 1997 16:43:37 GMT
Organization: Texas Instruments
Lines: 58
Message-ID: <5f1p7p$uj$1@superb.csc.ti.com>
References: <Pine DOT SUN DOT 3 DOT 91 DOT 970218122520 DOT 20000K-100000 AT is>
<5egilh$k7g$1 AT superb DOT csc DOT ti DOT com> <rzq7mjzxr9i DOT fsf AT djlvig DOT dl DOT ac DOT uk>
Reply-To: jbennett AT ti DOT com (Jesse Bennett)
NNTP-Posting-Host: lenny.dseg.ti.com
Mime-Version: 1.0
To: djgpp AT delorie DOT com
DJ-Gateway: from newsgroup comp.os.msdos.djgpp

[Posted and mailed]

In article <rzq7mjzxr9i DOT fsf AT djlvig DOT dl DOT ac DOT uk>,
	Dave Love <d DOT love AT dl DOT ac DOT uk> writes:
>>>>>> "Jesse" == Jesse Bennett <jesse AT lenny DOT dseg DOT ti DOT com> writes:
> 
>  Jesse> 1.  I am *not* interested in a FORTRAN vs. C war.
> 
> Likewise.  My question was rhetorical.

OK.

> The only points relevant to DJGPP are:
>  * If you find examples of G77 (at least) being unable to generate
>    essentially optimal code for straightforward loops you should
>    report it as a bug.  (Modulo questions of memory hierarchies and
>    within the constraints of processor types supported.)
>  * f2c+gcc produces pretty good code which G77 is only recently
>    beginning to beat in various ways.

Actually, you would be hard-pressed to find anything in this thread
that is relevant to DJGPP.  At the very least, there is nothing about
this discussion that is specific to DJGPP.

> Other points should probably be addressed in comp.lang.fortran.

I would prefer to move the discussion out of the newsgroups altogether.

> Decent compilers are the Right Thing, not system-specific hacks in
> low-level languages (though, sadly, assembler is apparently still
> necessary to get the ultimate performance in some cases, such as
> machine-specific BLAS 2&3).

I agree with this.  If I want to solve a linear algebra problem I
would like to simply write C = A^{-1}B and not be bothered with the
order of loop indices, cache behavior, and other system-specific
details.  I can do this with Matlab/Octave/etc...  and my program
will run to completion sometime early in the next century.  For many
of the problems of interest to researchers today it is necessary to
push the available hardware to its limits.  Unfortunately, this often
results in code that is tailored to a specific machine/architecture.
My personal take is that, given the choice between the two evils, it
is better to retain portability by using HLL hacks than to resort to
assembler.
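
To make the "HLL hacks" point concrete, here is a quick C sketch (my
own illustration, not code from any particular library) of the same
matrix product written with two different loop orders.  Both are
plain portable C; on a cached machine the second is usually much
faster because it walks B and C row-wise instead of striding down a
column of B in the inner loop:

#define N 512

/* Straightforward i-j-k order: the inner loop strides through B by
   column, which for row-major C arrays is roughly a cache miss per
   iteration once N is large. */
void matmul_ijk(double A[N][N], double B[N][N], double C[N][N])
{
    int i, j, k;
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++) {
            double sum = 0.0;
            for (k = 0; k < N; k++)
                sum += A[i][k] * B[k][j];
            C[i][j] = sum;
        }
}

/* Same arithmetic, i-k-j order: B and C are now traversed row-wise.
   C must be zeroed before the call. */
void matmul_ikj(double A[N][N], double B[N][N], double C[N][N])
{
    int i, j, k;
    for (i = 0; i < N; i++)
        for (k = 0; k < N; k++) {
            double a = A[i][k];
            for (j = 0; j < N; j++)
                C[i][j] += a * B[k][j];
        }
}

The tuning lives entirely in standard C, so it survives a change of
compiler or processor in a way that hand-written assembler does not.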

Dave, you appear to be knowledgeable about language and compiler
issues.  I have recently become involved in an effort by the original
BLAS developers to discuss current issues with linear algebra
libraries.  One of the goals of this forum is to develop a C language
interface to the BLAS library functions.  There is a web page at
http://www.netlib.org/utk/papers/blast-forum.html which describes this
effort.  I would like to hear your thoughts about what is being
proposed, especially w.r.t. the C language interface.
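
For what it is worth, here is the contrast I have in mind, sketched
in C.  The enum names and the c_dgemm prototype below are only my own
guess at the shape of such a binding, not the forum's actual
proposal; the first function shows how C code typically has to call
the Fortran-77 DGEMM today:

/* Fortran BLAS entry point as commonly seen from C with g77/f2c:
   trailing underscore, every argument passed by reference, data
   stored column-major.  (Hidden character-length arguments, which
   some compilers append, are omitted here.) */
extern void dgemm_(char *transa, char *transb,
                   int *m, int *n, int *k,
                   double *alpha, double *a, int *lda,
                   double *b, int *ldb,
                   double *beta, double *c, int *ldc);

void via_fortran_blas(int n, double *A, double *B, double *C)
{
    char no = 'N';
    double one = 1.0, zero = 0.0;
    /* A, B, C must already be stored column-major to match Fortran. */
    dgemm_(&no, &no, &n, &n, &n, &one, A, &n, B, &n, &zero, C, &n);
}

/* Hypothetical C-native interface: scalars by value and an explicit
   storage-order flag, so ordinary row-major C arrays can be passed
   directly.  The names are illustrative only. */
enum blas_order { BLAS_ROW_MAJOR, BLAS_COL_MAJOR };
enum blas_trans { BLAS_NO_TRANS, BLAS_TRANS };

void c_dgemm(enum blas_order order,
             enum blas_trans transa, enum blas_trans transb,
             int m, int n, int k, double alpha,
             const double *A, int lda, const double *B, int ldb,
             double beta, double *C, int ldc);

Either way the optimized Fortran (or assembler) underneath can stay
put; the question is only what the C caller has to write.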

Followups set appropriately.

Best Regards,
Jesse
