If you have a file that contains the URLs you want to download, use the `-i' switch:

    wget -i file

If you specify `-' as the file name, the URLs will be read from standard input.
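For example, URLs extracted by another program can be piped straight in. A minimal sketch, assuming GNU grep and a hypothetical `bookmarks.html':

    grep -o 'http://[^"]*' bookmarks.html | wget -i -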
Create a five levels deep mirror image of the GNU web site, with the same directory structure the original has, saving the log of the activities to `gnulog':

    wget -r http://www.gnu.org/ -o gnulog
The same as the above, but convert the links in the HTML files to point to local files, so you can view the documents off-line:

    wget --convert-links -r http://www.gnu.org/ -o gnulog
Retrieve only one HTML page, but make sure that all the elements needed for the page to be displayed, such as inline images and external style sheets, are also downloaded, and that the downloaded page references the downloaded links:

    wget -p --convert-links http://www.server.com/dir/page.html
The HTML page will be saved to `www.server.com/dir/page.html', and the images, stylesheets, etc., somewhere under `www.server.com/', depending on where they were on the remote server.
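A sketch of what the resulting tree might look like, assuming the page pulls in one image and one stylesheet (the file names are purely illustrative):

    www.server.com/dir/page.html
    www.server.com/images/logo.gif
    www.server.com/css/style.css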
The same as the above, but without the `www.server.com/' directory. Instead, save all the files under a `download/' subdirectory of the current directory. Here `-nH' disables host-prefixed directories, `-nd' prevents creation of any directories at all, and `-P' sets the directory prefix:

    wget -p --convert-links -nH -nd -Pdownload \
         http://www.server.com/dir/page.html
Retrieve the index.html of `www.lycos.com', showing the original server headers:

    wget -S http://www.lycos.com/
Save the server headers with the file itself, perhaps for post-processing:

    wget -s http://www.lycos.com/
    more index.html
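With `-s', Wget writes the server's HTTP response headers at the top of the saved file, so `more index.html' would start with something along these lines (the exact values are illustrative):

    HTTP/1.1 200 OK
    Content-Type: text/html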
Retrieve the first two levels of `wuarchive.wustl.edu', saving them to `/tmp':

    wget -r -l2 -P/tmp ftp://wuarchive.wustl.edu/
You want to download all the GIFs from a directory on an HTTP server. Something like `wget http://www.server.com/dir/*.gif' does not work, because HTTP retrieval does not support globbing. In that case, use:

    wget -r -l1 --no-parent -A.gif http://www.server.com/dir/

More verbose than globbing, but the effect is the same. `-r -l1' means to retrieve recursively (see section 3. Recursive Retrieval) with a maximum depth of 1, `--no-parent' means that references to the parent directory are ignored (see section 4.3 Directory-Based Limits), and `-A.gif' means to download only the GIF files. `-A "*.gif"' would have worked too; see the equivalent command below.
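The quoted wildcard form spelled out, with the quotes keeping the shell from expanding the pattern before Wget sees it:

    wget -r -l1 --no-parent -A "*.gif" http://www.server.com/dir/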
Suppose you were in the middle of downloading when Wget was interrupted, and now you do not want to clobber the files already present. Use `-nc' (no-clobber) to skip them:

    wget -nc -r http://www.gnu.org/
If you want to encode your own username and password to HTTP or FTP, use the appropriate URL syntax:

    wget ftp://hniksic:mypassword@unix.server.com/.emacs

Note, however, that this usage is not advisable on multi-user systems, because it reveals your password to anyone who looks at the output of `ps'.
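A safer alternative on such systems is to keep the credentials in `~/.netrc', which Wget consults for FTP logins, so the password never appears on the command line or in the process list. A minimal sketch, reusing the example account above (the file should be readable only by you, e.g. `chmod 600 ~/.netrc'):

    machine unix.server.com login hniksic password mypassword

Then the same file can be fetched with a plain `wget ftp://unix.server.com/.emacs'.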
If you wish the output documents to go to standard output instead of to files:

    wget -O - http://jagor.srce.hr/ http://www.srce.hr/
You can also combine the two options (`-O -' and `-i -') and make pipelines to retrieve documents from remote hotlists:

    wget -O - http://cool.list.com/ | wget --force-html -i -

Here the first Wget writes the hotlist page to standard output, and the second reads it on standard input, parsing it as HTML (`--force-html') and retrieving every link it finds.