Message-ID: <20010213173723.1386.qmail@lauras.lt>
From: "Laurynas Biveinis"
Date: Tue, 13 Feb 2001 19:37:23 +0200
To: djgpp-workers AT delorie DOT com
Subject: Re: PATCH: new djtar option
Mail-Followup-To: djgpp-workers AT delorie DOT com
References: <20010213155608 DOT 236 DOT qmail AT lauras DOT lt>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.3.12i
In-Reply-To: ; from broeker@physik.rwth-aachen.de on Tue, Feb 13, 2001 at 05:11:53PM +0100
Reply-To: djgpp-workers AT delorie DOT com
Errors-To: nobody AT delorie DOT com
X-Mailing-List: djgpp-workers AT delorie DOT com
X-Unsubscribes-To: listserv AT delorie DOT com
Precedence: bulk

> You can always check all parent directories against the hash table, i.e.
> instead of testing only "gcc-3.0/libjava/foo", you'd also check if
> "gcc-3.0/libjava" and "gcc-3.0" are in the hash table.
>
> Linked lists are a lot slower than hash tables, as soon as there are
> lots of entries. Hash tables have a probabilistic behaviour of average
> O(1) search time, and worst-case O(n). Linear lists are always O(n),
> even on average.

Yes, I've done my CS homework ;-), but you should consider a few other
things: hash tables are very good for file name mappings, because there
are many file names. However, the set of directories to skip is going to
be much smaller (I can barely imagine more than a few), so in this case
linked lists aren't that bad. And a list is cleaner to implement than
checking every parent directory of each entry, I think.

Of course, corrections to my thoughts are welcome ;-)

Laurynas