From: Radical NetSurfer
Newsgroups: comp.os.msdos.djgpp
Subject: BAD strupr, BAD getw
Date: Fri, 25 Aug 2000 02:44:18 -0400
Message-ID:
X-Newsreader: Forte Agent 1.8/32.548
X-No-Archive: yes
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 8bit
NNTP-Posting-Host: 216.202.134.196
X-Original-NNTP-Posting-Host: 216.202.134.196
X-Trace: 25 Aug 2000 02:47:15 -0400, 216.202.134.196
Lines: 69
X-Original-NNTP-Posting-Host: 64.31.79.51
To: djgpp AT delorie DOT com
DJ-Gateway: from newsgroup comp.os.msdos.djgpp
Reply-To: djgpp AT delorie DOT com

Tonight we had to create our very own code for strupr, simply because DJGPP does NOT implement this function correctly at ALL :(

Experiment with strupr():

    "abcédef" --> "ABCéDEF"    expected!   BORLAND: CORRECT!
    "abcédef" --> "ABC?def"    <-- DJGPP   INCORRECT :(

The observed behavior is: DJGPP STOPS at the FIRST non-alpha character it encounters! This is NOT how compilers, as far back as I am aware, have ever handled strupr and strlwr!! Borland and VisualC both know enough to actually read the ENTIRE string and affect ONLY the alpha characters encountered, for the ENTIRE LENGTH of the STRING! Sheesh!

PURPOSE: especially in this day of the Internet and "connectivity applications", it should be realized that FOREIGN characters will ROUTINELY be encountered. WHY would you want a StrUpr/Lwr routine to STOP at the first non-alpha character, or replace what appear to be ILLEGAL characters? And what happens if simple DIGITS are also encountered (check that too, guys)? Digits and foreign characters should be handled one of two ways: 1) skipped completely, with the rest of the string continuing to be scanned, or 2) an ACCURATE and properly implemented method of ACTUALLY converting upper/lower case in the FOREIGN character set should be provided. (Ah, Mr. Wizard, sir, how do we know an ASCII char represents a foreign character, please?) For now, our routine simply skips them; DJGPP either places illegal characters in the string or stops at the illegal character.
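For reference, here is a minimal sketch of the kind of replacement we wrote. The name my_strupr is ours, and it assumes option 1) above: only plain ASCII letters are converted, and everything else (digits, accented ISO-8859-1 bytes) is skipped while the scan continues to the end of the string. Note the cast to unsigned char, which avoids passing negative values to the ctype functions when char is signed and the string contains bytes above 127 (such as 'é'):

```c
#include <ctype.h>

/* Replacement strupr sketch: scan the WHOLE string and uppercase
   only ASCII letters, leaving digits and non-ASCII (foreign)
   characters untouched.  Never stops early. */
char *my_strupr(char *s)
{
    char *p;
    for (p = s; *p != '\0'; p++) {
        unsigned char c = (unsigned char)*p;
        if (c < 128 && islower(c))      /* ASCII letters only */
            *p = (char)toupper(c);
    }
    return s;
}
```

With this, "abcédef" (é being byte 0xE9 in ISO-8859-1) comes back as "ABCéDEF": the foreign byte is passed over untouched and the scan still reaches the end of the string.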
-------------- problem #2 getw() --------------

Also, getw is defined in LIBC.TXT as:

    int getw(FILE *file);

This is INCORRECT! get-WORD() should have been defined as:

    short getw(FILE *);

A WORD is "typically/natively" DEFINED AS 16-BIT! When did a WORD become PLATFORM SPECIFIC? DWORD == 32 bits == PCs, for as long as anyone can remember! I simply can not wait to hear the explanation for a WORD suddenly not being 16 bits.

email always appreciated (even yours): radsmail AT juno DOT com
SPAM WILL BE REPORTED IMMEDIATELY!
Http://members.tripod.com/~RadSurfer/
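Whatever width getw itself returns on a given platform, a stream of true 16-bit words can always be read portably with plain fgetc, sidestepping the question entirely. A minimal sketch (the helper name get16 and the little-endian byte order, the natural order on PCs, are our assumptions):

```c
#include <stdio.h>

/* Hypothetical helper: read exactly one 16-bit little-endian
   word from a stream, independent of how wide the platform's
   int (and therefore its getw) happens to be.  Returns EOF on
   end of file, otherwise a value in 0..65535. */
int get16(FILE *fp)
{
    int lo = fgetc(fp);
    int hi = fgetc(fp);
    if (lo == EOF || hi == EOF)
        return EOF;
    return lo | (hi << 8);      /* low byte first: little-endian */
}
```

Because the result range 0..65535 never collides with EOF (a negative int), the caller can test for end of file without a separate feof check after every word.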