Message-ID: <20010213173723.1386.qmail@lauras.lt>
From: "Laurynas Biveinis" <lauras AT softhome DOT net>
Date: Tue, 13 Feb 2001 19:37:23 +0200
To: djgpp-workers AT delorie DOT com
Subject: Re: PATCH: new djtar option
Mail-Followup-To: djgpp-workers AT delorie DOT com
References: <20010213155608 DOT 236 DOT qmail AT lauras DOT lt> <Pine DOT LNX DOT 4 DOT 10 DOT 10102131708550 DOT 4040-100000 AT acp3bf>
Mime-Version: 1.0
User-Agent: Mutt/1.3.12i
In-Reply-To: <Pine.LNX.4.10.10102131708550.4040-100000@acp3bf>; from broeker@physik.rwth-aachen.de on Tue, Feb 13, 2001 at 05:11:53PM +0100
Reply-To: djgpp-workers AT delorie DOT com
Errors-To: nobody AT delorie DOT com
X-Mailing-List: djgpp-workers AT delorie DOT com
X-Unsubscribes-To: listserv AT delorie DOT com

> You can always check all parent directories against the hash table, i.e.
> instead of testing only "gcc-3.0/libjava/foo", you'd also check if
> "gcc-3.0/libjava" and "gcc-3.0" are in the hash table.
> 
> Linked lists are a lot slower than hash tables as soon as there are
> many entries. Hash tables give an expected O(1) search time with an
> O(n) worst case; linear lists are O(n) even in the average case.
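
(To make the comparison concrete, the parent-prefix check would look
roughly like the sketch below; hash_lookup is a hypothetical stand-in,
not the actual djtar hash routine:)

    #include <stdio.h>    /* FILENAME_MAX */
    #include <string.h>

    /* Hypothetical: returns nonzero if NAME is in the skip hash table. */
    extern int hash_lookup (const char *name);

    /* Check every parent prefix of PATH against the hash table, i.e.
       for "gcc-3.0/libjava/foo" test "gcc-3.0", then "gcc-3.0/libjava",
       then the full name itself.  */
    static int
    any_prefix_in_hash (const char *path)
    {
      char buf[FILENAME_MAX];
      char *slash;

      strcpy (buf, path);
      for (slash = strchr (buf, '/'); slash; slash = strchr (slash + 1, '/'))
        {
          *slash = '\0';          /* truncate at this slash */
          if (hash_lookup (buf))
            return 1;
          *slash = '/';           /* restore and continue */
        }
      return hash_lookup (buf);
    }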

Yes, I've done my CS homework ;-), but you should consider a few other
things - hash tables are a very good fit for the file name mappings,
because there are many of those entries. However, the set of directories
to skip is going to be much smaller (I can barely imagine more than a
few). So in this case a linked list isn't that bad, and I think it is
cleaner to implement than checking all parent directories. For a handful
of entries, a simple scan like the sketch below is adequate.
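
(Made-up names again, not actual djtar code - just what I have in mind:)

    #include <string.h>

    struct skip_dir
    {
      const char *name;
      struct skip_dir *next;
    };

    static struct skip_dir *skip_list;  /* a handful of entries at most */

    /* Linear scan: O(n), but n is tiny for the skip set. */
    static int
    should_skip (const char *dir)
    {
      struct skip_dir *p;

      for (p = skip_list; p; p = p->next)
        if (strcmp (p->name, dir) == 0)
          return 1;
      return 0;
    }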

Of course, corrections to my thoughts are welcome ;-)
Laurynas
