WGET(1) GNU Wget WGET(1)
NAME
wget - GNU Wget Manual
SYNOPSIS
wget [option]... [URL]...
DESCRIPTION
GNU Wget is a free utility for non-interactive download of files from
the Web. It supports HTTP, HTTPS, and FTP protocols, as well as
retrieval through HTTP proxies.
Wget is non-interactive, meaning that it can work in the background,
while the user is not logged on. This allows you to start a retrieval
and disconnect from the system, letting Wget finish the work. By contrast,
most Web browsers require the user's constant presence, which
can be a great hindrance when transferring a lot of data.
Wget can follow links in HTML pages and create local versions of remote
web sites, fully recreating the directory structure of the original
site. This is sometimes referred to as ``recursive downloading.''
While doing that, Wget respects the Robot Exclusion Standard
(/robots.txt). Wget can be instructed to convert the links in down-
loaded HTML files to the local files for offline viewing.
Wget has been designed for robustness over slow or unstable network
connections; if a download fails due to a network problem, it will keep
retrying until the whole file has been retrieved. If the server sup-
ports regetting, it will instruct the server to continue the download
from where it left off.
OPTIONS
Basic Startup Options
-V --version
Display the version of Wget.
-h --help
Print a help message describing all of Wget's command-line options.
-b --background
Go to background immediately after startup. If no output file is
specified via -o, output is redirected to wget-log.
-e command --execute command
Execute command as if it were a part of .wgetrc. A command thus
invoked will be executed after the commands in .wgetrc, thus taking
precedence over them.
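For instance, to override a .wgetrc setting for a single run ("robots" is one of the .wgetrc commands; the URL is a placeholder):
wget -e robots=off http://www.example.com/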
Logging and Input File Options
-o logfile --output-file=logfile
Log all messages to logfile. The messages are normally reported to standard error.
-a logfile --append-output=logfile
Append to logfile. This is the same as -o, only it appends to log-
file instead of overwriting the old log file. If logfile does not
exist, a new file is created.
Turn on debug output, meaning various information important to the
developers of Wget if it does not work properly. Your system
administrator may have chosen to compile Wget without debug sup-
port, in which case -d will not work. Please note that compiling
with debug support is always safe---Wget compiled with the debug
support will not print any debug info unless requested with -d.
Turn off Wget's output.
Turn on verbose output, with all the available data. The default
output is verbose.
Non-verbose output---turn off verbose without being completely
quiet (use -q for that), which means that error messages and basic
information still get printed.
-i file --input-file=file
Read URLs from file, in which case no URLs need to be on the com-
mand line. If there are URLs both on the command line and in an
input file, those on the command lines will be the first ones to be
retrieved. The file need not be an HTML document (but no harm if
it is)---it is enough if the URLs are just listed sequentially.
However, if you specify --force-html, the document will be regarded
as html. In that case you may have problems with relative links,
which you can solve either by adding "<base href="url">" to the
documents or by specifying --base=url on the command line.
When input is read from a file, force it to be treated as an HTML
file. This enables you to retrieve relative links from existing
HTML files on your local disk, by adding "<base href="url">" to
HTML, or using the --base command-line option.
-B URL --base=URL
When used in conjunction with -F, prepends URL to relative links in
the file specified by -i.
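For example, to resolve relative links from a local list of URLs against a chosen base (the file name and URL are placeholders):
wget -i urls.html -F -B http://www.example.com/docs/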
Download Options
--bind-address=ADDRESS
When making client TCP/IP connections, "bind()" to ADDRESS on the
local machine. ADDRESS may be specified as a hostname or IP
address. This option can be useful if your machine is bound to multiple IPs.
-t number --tries=number
Set number of retries to number. Specify 0 or inf for infinite retrying.
-O file --output-document=file
The documents will not be written to the appropriate files, but all
will be concatenated together and written to file. If file already
exists, it will be overwritten. If the file is -, the documents
will be written to standard output. Including this option automat-
ically sets the number of tries to 1.
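For example, to concatenate two documents into a single local file (the names are illustrative):
wget -O both.html http://www.example.com/one.html http://www.example.com/two.html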
-nc --no-clobber
If a file is downloaded more than once in the same directory,
Wget's behavior depends on a few options, including -nc. In cer-
tain cases, the local file will be clobbered, or overwritten, upon
repeated download. In other cases it will be preserved.
When running Wget without -N, -nc, or -r, downloading the same file
in the same directory will result in the original copy of file
being preserved and the second copy being named file.1. If that
file is downloaded yet again, the third copy will be named file.2,
and so on. When -nc is specified, this behavior is suppressed, and
Wget will refuse to download newer copies of file. Therefore,
``"no-clobber"'' is actually a misnomer in this mode---it's not
clobbering that's prevented (as the numeric suffixes were already
preventing clobbering), but rather the multiple version saving that's prevented.
When running Wget with -r, but without -N or -nc, re-downloading a
file will result in the new copy simply overwriting the old.
Adding -nc will prevent this behavior, instead causing the original
version to be preserved and any newer copies on the server to be ignored.
When running Wget with -N, with or without -r, the decision as to
whether or not to download a newer copy of a file depends on the
local and remote timestamp and size of the file. -nc may not be
specified at the same time as -N.
Note that when -nc is specified, files with the suffixes .html or
(yuck) .htm will be loaded from the local disk and parsed as if
they had been retrieved from the Web.
-c --continue
Continue getting a partially-downloaded file. This is useful when
you want to finish up a download started by a previous instance of
Wget, or by another program. For instance:
wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
If there is a file named ls-lR.Z in the current directory, Wget
will assume that it is the first portion of the remote file, and
will ask the server to continue the retrieval from an offset equal
to the length of the local file.
Note that you don't need to specify this option if you just want
the current invocation of Wget to retry downloading a file should
the connection be lost midway through. This is the default behav-
ior. -c only affects resumption of downloads started prior to this
invocation of Wget, and whose local files are still sitting around.
Without -c, the previous example would just download the remote
file to ls-lR.Z.1, leaving the truncated ls-lR.Z file alone.
Beginning with Wget 1.7, if you use -c on a non-empty file, and it
turns out that the server does not support continued downloading,
Wget will refuse to start the download from scratch, which would
effectively ruin existing contents. If you really want the down-
load to start from scratch, remove the file.
Also beginning with Wget 1.7, if you use -c on a file which is of
equal size as the one on the server, Wget will refuse to download
the file and print an explanatory message. The same happens when
the file is smaller on the server than locally (presumably because
it was changed on the server since your last download
attempt)---because ``continuing'' is not meaningful, no download occurs.
On the other side of the coin, while using -c, any file that's big-
ger on the server than locally will be considered an incomplete
download and only "(length(remote) - length(local))" bytes will be
downloaded and tacked onto the end of the local file. This behav-
ior can be desirable in certain cases---for instance, you can use
wget -c to download just the new portion that's been appended to a
data collection or log file.
However, if the file is bigger on the server because it's been
changed, as opposed to just appended to, you'll end up with a gar-
bled file. Wget has no way of verifying that the local file is
really a valid prefix of the remote file. You need to be espe-
cially careful of this when using -c in conjunction with -r, since
every file will be considered as an "incomplete download" candidate.
Another instance where you'll get a garbled file if you try to use
-c is if you have a lame HTTP proxy that inserts a ``transfer
interrupted'' string into the local file. In the future a ``roll-
back'' option may be added to deal with this case.
Note that -c only works with FTP servers and with HTTP servers that
support the "Range" header.
Select the type of the progress indicator you wish to use. Legal
indicators are ``dot'' and ``bar''.
The ``bar'' indicator is used by default. It draws an ASCII
progress bar graphics (a.k.a ``thermometer'' display) indicating
the status of retrieval. If the output is not a TTY, the ``dot''
bar will be used by default.
Use --progress=dot to switch to the ``dot'' display. It traces the
retrieval by printing dots on the screen, each dot representing a
fixed amount of downloaded data.
When using the dotted retrieval, you may also set the style by
specifying the type as dot:style. Different styles assign differ-
ent meaning to one dot. With the "default" style each dot repre-
sents 1K, there are ten dots in a cluster and 50 dots in a line.
The "binary" style has a more ``computer''-like orientation---8K
dots, 16-dots clusters and 48 dots per line (which makes for 384K
lines). The "mega" style is suitable for downloading very large
files---each dot represents 64K retrieved, there are eight dots in
a cluster, and 48 dots on each line (so each line contains 3M).
Note that you can set the default style using the "progress" com-
mand in .wgetrc. That setting may be overridden from the command
line. The exception is that, when the output is not a TTY, the
``dot'' progress will be favored over ``bar''. To force the bar
output, use --progress=bar:force.
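For example, to use the ``mega'' dot style for a large download (host and file name are placeholders):
wget --progress=dot:mega ftp://ftp.example.com/pub/big.iso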
-N --timestamping
Turn on time-stamping.
-S --server-response
Print the headers sent by HTTP servers and responses sent by FTP servers.
When invoked with this option, Wget will behave as a Web spider,
which means that it will not download the pages, just check that
they are there. You can use it to check your bookmarks, e.g. with:
wget --spider --force-html -i bookmarks.html
This feature needs much more work for Wget to get close to the
functionality of real WWW spiders.
-T seconds --timeout=seconds
Set the read timeout to seconds seconds. Whenever a network read
is issued, the file descriptor is checked for a timeout, which
could otherwise leave a pending connection (uninterrupted read).
The default timeout is 900 seconds (fifteen minutes). Setting
timeout to 0 will disable checking for timeouts.
Please do not lower the default timeout value with this option
unless you know what you are doing.
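For example, to disable timeout checking entirely (the URL is a placeholder):
wget -T 0 http://www.example.com/file.tar.gz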
Limit the download speed to amount bytes per second. Amount may be
expressed in bytes, kilobytes with the k suffix, or megabytes with
the m suffix. For example, --limit-rate=20k will limit the
retrieval rate to 20KB/s. This kind of thing is useful when, for
whatever reason, you don't want Wget to consume the entire available bandwidth.
Note that Wget implements the limiting by sleeping the appropriate
amount of time after a network read that took less time than
specified by the rate. Eventually this strategy causes the TCP
transfer to slow down to approximately the specified rate. How-
ever, it takes some time for this balance to be achieved, so don't
be surprised if limiting the rate doesn't work with very small
files. Also, the "sleeping" strategy will misfire when an
extremely small bandwidth, say less than 1.5KB/s, is specified.
-w seconds --wait=seconds
Wait the specified number of seconds between the retrievals. Use
of this option is recommended, as it lightens the server load by
making the requests less frequent. Instead of in seconds, the time
can be specified in minutes using the "m" suffix, in hours using
"h" suffix, or in days using "d" suffix.
Specifying a large value for this option is useful if the network
or the destination host is down, so that Wget can wait long enough
to reasonably expect the network error to be fixed before the retry.
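For example, to pause two minutes between successive retrievals (the URL is a placeholder):
wget -w 2m -r http://www.example.com/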
--waitretry=seconds
If you don't want Wget to wait between every retrieval, but only
between retries of failed downloads, you can use this option. Wget
will use linear backoff, waiting 1 second after the first failure
on a given file, then waiting 2 seconds after the second failure on
that file, up to the maximum number of seconds you specify. There-
fore, a value of 10 will actually make Wget wait up to (1 + 2 + ...
+ 10) = 55 seconds per file.
Note that this option is turned on by default in the global wgetrc file.
Some web sites may perform log analysis to identify retrieval pro-
grams such as Wget by looking for statistically significant simi-
larities in the time between requests. This option causes the time
between requests to vary between 0 and 2 * wait seconds, where wait
was specified using the -w or --wait options, in order to mask
Wget's presence from such analysis.
A recent article in a publication devoted to development on a
popular consumer platform provided code to perform this analysis on
the fly. Its author suggested blocking at the class C address
level to ensure automated retrieval programs were blocked despite
changing DHCP-supplied addresses.
The --random-wait option was inspired by this ill-advised recommen-
dation to block many unrelated users from a web site due to the
actions of one.
Turn proxy support on or off. The proxy is on by default if the
appropriate environmental variable is defined.
Specify download quota for automatic retrievals. The value can be
specified in bytes (default), kilobytes (with k suffix), or
megabytes (with m suffix).
Note that quota will never affect downloading a single file. So if
you specify wget -Q10k ftp://wuarchive.wustl.edu/ls-lR.gz, all of
the ls-lR.gz will be downloaded. The same goes even when several
URLs are specified on the command-line. However, quota is
respected when retrieving either recursively, or from an input
file. Thus you may safely type wget -Q2m -i sites---download will
be aborted when the quota is exceeded.
Setting quota to 0 or to inf unlimits the download quota.
Directory Options
-nd --no-directories
Do not create a hierarchy of directories when retrieving recur-
sively. With this option turned on, all files will get saved to
the current directory, without clobbering (if a name shows up more
than once, the filenames will get extensions .n).
The opposite of -nd---create a hierarchy of directories, even if
one would not have been created otherwise. E.g. wget -x
http://fly.srk.fer.hr/robots.txt will save the downloaded file to
fly.srk.fer.hr/robots.txt.
Disable generation of host-prefixed directories. By default,
invoking Wget with -r http://fly.srk.fer.hr/ will create a struc-
ture of directories beginning with fly.srk.fer.hr/. This option
disables such behavior.
Ignore number directory components. This is useful for getting a
fine-grained control over the directory where recursive retrieval
will be saved.
Take, for example, the directory at
ftp://ftp.xemacs.org/pub/xemacs/. If you retrieve it with -r, it
will be saved locally under ftp.xemacs.org/pub/xemacs/. While the
-nH option can remove the ftp.xemacs.org/ part, you are still stuck
with pub/xemacs. This is where --cut-dirs comes in handy; it makes
Wget not ``see'' number remote directory components. Here are sev-
eral examples of how --cut-dirs option works.
No options -> ftp.xemacs.org/pub/xemacs/
-nH -> pub/xemacs/
-nH --cut-dirs=1 -> xemacs/
-nH --cut-dirs=2 -> .
--cut-dirs=1 -> ftp.xemacs.org/xemacs/
If you just want to get rid of the directory structure, this option
is similar to a combination of -nd and -P. However, unlike -nd,
--cut-dirs does not lose with subdirectories---for instance, with
-nH --cut-dirs=1, a beta/ subdirectory will be placed to
xemacs/beta, as one would expect.
-P prefix --directory-prefix=prefix
Set directory prefix to prefix. The directory prefix is the direc-
tory where all other files and subdirectories will be saved to,
i.e. the top of the retrieval tree. The default is . (the current directory).
HTTP Options
-E --html-extension
If a file of type text/html is downloaded and the URL does not end
with the regexp \.[Hh][Tt][Mm][Ll]?, this option will cause the
suffix .html to be appended to the local filename. This is useful,
for instance, when you're mirroring a remote site that uses .asp
pages, but you want the mirrored pages to be viewable on your stock
Apache server. Another good use for this is when you're download-
ing the output of CGIs. A URL like http://site.com/article.cgi?25
will be saved as article.cgi?25.html.
Note that filenames changed in this way will be re-downloaded every
time you re-mirror a site, because Wget can't tell that the local
X.html file corresponds to remote URL X (since it doesn't yet know
that the URL produces output of type text/html). To prevent this
re-downloading, you must use -k and -K so that the original version
of the file will be saved as X.orig.
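For example, to mirror a site, giving text/html documents an .html suffix while keeping the pristine originals (the URL is a placeholder):
wget -E -k -K -r http://www.example.com/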
--http-user=user --http-passwd=password
Specify the username user and password password on an HTTP server.
According to the type of the challenge, Wget will encode them using
either the "basic" (insecure) or the "digest" authentication scheme.
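For example (the user name, password, and URL are placeholders):
wget --http-user=alice --http-passwd=secret http://www.example.com/restricted/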
Another way to specify username and password is in the URL itself.
Either method reveals your password to anyone who bothers to run
"ps". To prevent the passwords from being seen, store them in
.wgetrc or .netrc, and make sure to protect those files from other
users with "chmod". If the passwords are really important, do not
leave them lying in those files either---edit the files and delete
them after Wget has started the download.
For more information about security issues with Wget, see the GNU Info entry for wget.
--cache=on/off
When set to off, disable server-side cache. In this case, Wget
will send the remote server an appropriate directive (Pragma: no-
cache) to get the file from the remote service, rather than return-
ing the cached version. This is especially useful for retrieving
and flushing out-of-date documents on proxy servers.
Caching is allowed by default.
--cookies=on/off
When set to off, disable the use of cookies. Cookies are a mecha-
nism for maintaining server-side state. The server sends the
client a cookie using the "Set-Cookie" header, and the client
responds with the same cookie upon further requests. Since cookies
allow the server owners to keep track of visitors and for sites to
exchange this information, some consider them a breach of privacy.
--load-cookies file
Load cookies from file before the first HTTP retrieval. file is a
textual file in the format originally used by Netscape's cookies.txt file.
You will typically use this option when mirroring sites that
require that you be logged in to access some or all of their con-
tent. The login process typically works by the web server issuing
an HTTP cookie upon receiving and verifying your credentials. The
cookie is then resent by the browser when accessing that part of
the site, and so proves your identity.
Mirroring such a site requires Wget to send the same cookies your
browser sends when communicating with the site. This is achieved
by --load-cookies---simply point Wget to the location of the cook-
ies.txt file, and it will send the same cookies your browser would
send in the same situation. Different browsers keep textual cookie
files in different locations:
Netscape 4.x.
The cookies are in ~/.netscape/cookies.txt.
Mozilla and Netscape 6.x.
Mozilla's cookie file is also named cookies.txt, located some-
where under ~/.mozilla, in the directory of your profile. The
full path usually ends up looking somewhat like
~/.mozilla/default/some-weird-string/cookies.txt.
Internet Explorer.
You can produce a cookie file Wget can use by using the File
menu, Import and Export, Export Cookies. This has been tested
with Internet Explorer 5; it is not guaranteed to work with earlier versions.
Other browsers.
If you are using a different browser to create your cookies,
--load-cookies will only work if you can locate or produce a
cookie file in the Netscape format that Wget expects.
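For example, using the Netscape location mentioned above (the URL is a placeholder):
wget --load-cookies ~/.netscape/cookies.txt http://www.example.com/members/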
If you cannot use --load-cookies, there might still be an alterna-
tive. If your browser supports a ``cookie manager'', you can use
it to view the cookies used when accessing the site you're mirror-
ing. Write down the name and value of the cookie, and manually
instruct Wget to send those cookies, bypassing the ``official'' cookie support:
wget --cookies=off --header "Cookie: name=value"
--save-cookies file
Save cookies to file at the end of session. Cookies whose expiry
time is not specified, or those that have already expired, are not saved.
--ignore-length
Unfortunately, some HTTP servers (CGI programs, to be more precise)
send out bogus "Content-Length" headers, which makes Wget go wild,
as it thinks not all the document was retrieved. You can spot this
syndrome if Wget retries getting the same document again and again,
each time claiming that the (otherwise normal) connection has
closed on the very same byte.
With this option, Wget will ignore the "Content-Length" header---as
if it never existed.
--header=additional-header
Define an additional-header to be passed to the HTTP servers.
Headers must contain a : preceded by one or more non-blank charac-
ters, and must not contain newlines.
You may define more than one additional header by specifying
--header more than once.
wget --header='Accept-Charset: iso-8859-2' \
--header='Accept-Language: hr' \
http://fly.srk.fer.hr/
Specification of an empty string as the header value will clear all
previous user-defined headers.
--proxy-user=user --proxy-passwd=password
Specify the username user and password password for authentication
on a proxy server. Wget will encode them using the "basic" authentication scheme.
Security considerations similar to those with --http-passwd pertain
here as well.
Include `Referer: url' header in HTTP request. Useful for retriev-
ing documents with server-side processing that assume they are
always being retrieved by interactive web browsers and only come
out properly when Referer is set to one of the pages that point to them.
Save the headers sent by the HTTP server to the file, preceding the
actual contents, with an empty line as the separator.
Identify as agent-string to the HTTP server.
The HTTP protocol allows the clients to identify themselves using a
"User-Agent" header field. This enables distinguishing the WWW
software, usually for statistical purposes or for tracing of proto-
col violations. Wget normally identifies as Wget/version, version
being the current version number of Wget.
However, some sites have been known to impose the policy of tailor-
ing the output according to the "User-Agent"-supplied information.
While conceptually this is not such a bad idea, it has been abused
by servers denying information to clients other than "Mozilla" or
Microsoft "Internet Explorer". This option allows you to change
the "User-Agent" line issued by Wget. Use of this option is dis-
couraged, unless you really know what you are doing.
FTP Options
-nr --dont-remove-listing
Don't remove the temporary .listing files generated by FTP
retrievals. Normally, these files contain the raw directory list-
ings received from FTP servers. Not removing them can be useful
for debugging purposes, or when you want to be able to easily check
on the contents of remote server directories (e.g. to verify that a
mirror you're running is complete).
Note that even though Wget writes to a known filename for this
file, this is not a security hole in the scenario of a user making
.listing a symbolic link to /etc/passwd or something and asking
"root" to run Wget in his or her directory. Depending on the
options used, either Wget will refuse to write to .listing, making
the globbing/recursion/time-stamping operation fail, or the sym-
bolic link will be deleted and replaced with the actual .listing
file, or the listing will be written to a .listing.number file.
Even though this situation isn't a problem, though, "root" should
never run Wget in a non-trusted user's directory. A user could do
something as simple as linking index.html to /etc/passwd and asking
"root" to run Wget with -N or -r so the file will be overwritten.
--glob=on/off
Turn FTP globbing on or off. Globbing means you may use the shell-
like special characters (wildcards), like *, ?, [ and ] to retrieve
more than one file from the same directory at once, like:
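wget ftp://ftp.example.com/pub/*.msg
(The host and pattern above are placeholders.)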
By default, globbing will be turned on if the URL contains a glob-
bing character. This option may be used to turn globbing on or off permanently.
You may have to quote the URL to protect it from being expanded by
your shell. Globbing makes Wget look for a directory listing,
which is system-specific. This is why it currently works only with
Unix FTP servers (and the ones emulating Unix "ls" output).
--passive-ftp
Use the passive FTP retrieval scheme, in which the client initiates
the data connection. This is sometimes required for FTP to work behind firewalls.
Usually, when retrieving FTP directories recursively and a symbolic
link is encountered, the linked-to file is not downloaded.
Instead, a matching symbolic link is created on the local filesys-
tem. The pointed-to file will not be downloaded unless this recur-
sive retrieval would have encountered it separately and downloaded it anyway.
When --retr-symlinks is specified, however, symbolic links are tra-
versed and the pointed-to files are retrieved. At this time, this
option does not cause Wget to traverse symlinks to directories and
recurse through them, but in the future it should be enhanced to do this.
Note that when retrieving a file (not a directory) because it was
specified on the commandline, rather than because it was recursed
to, this option has no effect. Symbolic links are always traversed
in this case.
Recursive Retrieval Options
-r --recursive
Turn on recursive retrieving.
-l depth --level=depth
Specify recursion maximum depth level depth. The default maximum
depth is 5.
This option tells Wget to delete every single file it downloads,
after having done so. It is useful for pre-fetching popular pages
through a proxy, e.g.:
wget -r -nd --delete-after http://whatever.com/~popular/page/
The -r option is to retrieve recursively, and -nd to not create directories.
Note that --delete-after deletes files on the local machine. It
does not issue the DELE command to remote FTP sites, for instance.
Also note that when --delete-after is specified, --convert-links is
ignored, so .orig files are simply not created in the first place.
-k --convert-links
After the download is complete, convert the links in the document
to make them suitable for local viewing. This affects not only the
visible hyperlinks, but any part of the document that links to
external content, such as embedded images, links to style sheets,
hyperlinks to non-HTML content, etc.
Each link will be changed in one of the two ways:
o The links to files that have been downloaded by Wget will be
changed to refer to the file they point to as a relative link.
Example: if the downloaded file /foo/doc.html links to
/bar/img.gif, also downloaded, then the link in doc.html will
be modified to point to ../bar/img.gif. This kind of transfor-
mation works reliably for arbitrary combinations of directories.
o The links to files that have not been downloaded by Wget will
be changed to include host name and absolute path of the loca-
tion they point to.
Example: if the downloaded file /foo/doc.html links to
/bar/img.gif (or to ../bar/img.gif), then the link in doc.html
will be modified to point to http://hostname/bar/img.gif.
Because of this, local browsing works reliably: if a linked file
was downloaded, the link will refer to its local name; if it was
not downloaded, the link will refer to its full Internet address
rather than presenting a broken link. The fact that the former
links are converted to relative links ensures that you can move the
downloaded hierarchy to another directory.
Note that only at the end of the download can Wget know which links
have been downloaded. Because of that, the work done by -k will be
performed at the end of all the downloads.
-K --backup-converted
When converting a file, back up the original version with a .orig
suffix. Affects the behavior of -N.
-m --mirror
Turn on options suitable for mirroring. This option turns on
recursion and time-stamping, sets infinite recursion depth and
keeps FTP directory listings. It is currently equivalent to -r -N
-l inf -nr.
-p --page-requisites
This option causes Wget to download all the files that are neces-
sary to properly display a given HTML page. This includes such
things as inlined images, sounds, and referenced stylesheets.
Ordinarily, when downloading a single HTML page, any requisite doc-
uments that may be needed to display it properly are not down-
loaded. Using -r together with -l can help, but since Wget does
not ordinarily distinguish between external and inlined documents,
one is generally left with ``leaf documents'' that are missing their requisites.
For instance, say document 1.html contains an "<IMG>" tag referenc-
ing 1.gif and an "<A>" tag pointing to external document 2.html.
Say that 2.html is similar but that its image is 2.gif and it links
to 3.html. Say this continues up to some arbitrarily high number.
If one executes the command:
wget -r -l 2 http://site/1.html
then 1.html, 1.gif, 2.html, 2.gif, and 3.html will be downloaded.
As you can see, 3.html is without its requisite 3.gif because Wget
is simply counting the number of hops (up to 2) away from 1.html in
order to determine where to stop the recursion. However, with this command:
wget -r -l 2 -p http://site/1.html
all the above files and 3.html's requisite 3.gif will be downloaded. Similarly,
wget -r -l 1 -p http://site/1.html
will cause 1.html, 1.gif, 2.html, and 2.gif to be downloaded. One
might think that:
wget -r -l 0 -p http://site/1.html
would download just 1.html and 1.gif, but unfortunately this is not
the case, because -l 0 is equivalent to -l inf---that is, infinite
recursion. To download a single HTML page (or a handful of them,
all specified on the commandline or in a -i URL input file) and its
(or their) requisites, simply leave off -r and -l:
wget -p http://site/1.html
Note that Wget will behave as if -r had been specified, but only
that single page and its requisites will be downloaded. Links from
that page to external documents will not be followed. Actually, to
download a single page and all its requisites (even if they exist
on separate websites), and make sure the lot displays properly
locally, this author likes to use a few options in addition to -p:
wget -E -H -k -K -p http://site/document
To finish off this topic, it's worth knowing that Wget's idea of an
external document link is any URL specified in an "<A>" tag, an
"<AREA>" tag, or a "<LINK>" tag other than "<LINK
Recursive Accept/Reject Options
-A acclist --accept acclist
-R rejlist --reject rejlist
Specify comma-separated lists of file name suffixes or patterns to
accept or reject.
Set domains to be followed. domain-list is a comma-separated list
of domains. Note that it does not turn on -H.
--exclude-domains domain-list
Specify the domains that are not to be followed.
Follow FTP links from HTML documents. Without this option, Wget
will ignore all the FTP links.
Wget has an internal table of HTML tag / attribute pairs that it
considers when looking for linked documents during a recursive
retrieval. If a user wants only a subset of those tags to be con-
sidered, however, he or she should specify such tags in a comma-
separated list with this option.
This is the opposite of the --follow-tags option. To skip certain
HTML tags when recursively looking for documents to download, spec-
ify them in a comma-separated list.
In the past, the -G option was the best bet for downloading a sin-
gle page and its requisites, using a commandline like:
wget -Ga,area -H -k -K -r http://site/document
However, the author of this option came across a page with tags
like "<LINK REL="home" HREF="/">" and came to the realization that
-G was not enough. One can't just tell Wget to ignore "<LINK>",
because then stylesheets will not be downloaded. Now the best bet
for downloading a single page and its requisites is the dedicated
--page-requisites option.
-H --span-hosts
Enable spanning across hosts when doing recursive retrieving.
Follow relative links only. Useful for retrieving a specific home
page without any distractions, not even those from the same hosts.
-I list --include-directories=list
Specify a comma-separated list of directories you wish to follow
when downloading. Elements of list may contain wildcards.
-X list --exclude-directories=list
Specify a comma-separated list of directories you wish to exclude
from download. Elements of list may contain wildcards.
-np --no-parent
Do not ever ascend to the parent directory when retrieving recur-
sively. This is a useful option, since it guarantees that only the
files below a certain hierarchy will be downloaded.
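For example, to mirror only the docs/ subtree of a site (the URL is a placeholder):
wget -r --no-parent http://www.example.com/docs/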
EXAMPLES
The examples are divided into three sections loosely based on their
complexity.
Simple Usage
o Say you want to download a URL. Just type:
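wget http://fly.srk.fer.hr/
(Any URL will do; the host above is simply the one used in the later examples.)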
o But what will happen if the connection is slow, and the file is
lengthy? The connection will probably fail before the whole file
is retrieved, more than once. In this case, Wget will try getting
the file until it either gets the whole of it, or exceeds the
default number of retries (this being 20). It is easy to change
the number of tries to 45, to ensure that the whole file will arrive safely:
wget --tries=45 http://fly.srk.fer.hr/jpg/flyweb.jpg
o Now let's leave Wget to work in the background, and write its
progress to log file log. It is tiring to type --tries, so we
shall use -t.
wget -t 45 -o log http://fly.srk.fer.hr/jpg/flyweb.jpg &
The ampersand at the end of the line makes sure that Wget works in
the background. To unlimit the number of retries, use -t inf.
o The usage of FTP is as simple. Wget will take care of login and password.
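For instance (the host is a placeholder for any anonymous FTP server):
wget ftp://ftp.example.com/welcome.msg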
o If you specify a directory, Wget will retrieve the directory list-
ing, parse it and convert it to HTML. Try:
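wget ftp://ftp.example.com/pub/gnu/
(The host is a placeholder for any anonymous FTP server.)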
o You have a file that contains the URLs you want to download? Use
the -i switch:
wget -i file
If you specify - as file name, the URLs will be read from standard input.
Advanced Usage
o Create a five levels deep mirror image of the GNU web site, with
the same directory structure the original has, with only one try
per document, saving the log of the activities to gnulog:
wget -r http://www.gnu.org/ -o gnulog
o The same as the above, but convert the links in the HTML files to
point to local files, so you can view the documents off-line:
wget --convert-links -r http://www.gnu.org/ -o gnulog
o Retrieve only one HTML page, but make sure that all the elements
needed for the page to be displayed, such as inline images and
external style sheets, are also downloaded. Also make sure the
downloaded page references the downloaded links.
wget -p --convert-links http://www.server.com/dir/page.html
The HTML page will be saved to www.server.com/dir/page.html, and
the images, stylesheets, etc., somewhere under www.server.com/,
depending on where they were on the remote server.
o The same as the above, but without the www.server.com/ directory.
In fact, I don't want to have all those random server directories
anyway---just save all those files under a download/ subdirectory
of the current directory.
wget -p --convert-links -nH -nd -Pdownload \
http://www.server.com/dir/page.html
o Retrieve the index.html of www.lycos.com, showing the original server headers:
wget -S http://www.lycos.com/
o Save the server headers with the file, perhaps for post-processing.
wget -s http://www.lycos.com/
o Retrieve the first two levels of wuarchive.wustl.edu, saving them to /tmp.
wget -r -l2 -P/tmp ftp://wuarchive.wustl.edu/
o You want to download all the GIFs from a directory on an HTTP
server. You tried wget http://www.server.com/dir/*.gif, but that
didn't work because HTTP retrieval does not support globbing. In
that case, use:
wget -r -l1 --no-parent -A.gif http://www.server.com/dir/
More verbose, but the effect is the same. -r -l1 means to retrieve
recursively, with maximum depth of 1. --no-parent means that ref-
erences to the parent directory are ignored, and -A.gif means to
download only the GIF files. -A "*.gif" would have worked too.
o Suppose you were in the middle of downloading, when Wget was inter-
rupted. Now you do not want to clobber the files already present.
It would be:
wget -nc -r http://www.gnu.org/
o If you want to encode your own username and password to HTTP or
FTP, use the appropriate URL syntax.
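For example (the user, password, host, and path are placeholders):
wget ftp://user:password@ftp.example.com/path/to/file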
Note, however, that this usage is not advisable on multi-user sys-
tems because it reveals your password to anyone who looks at the
output of "ps".
o You would like the output documents to go to standard output
instead of to files?
wget -O - http://jagor.srce.hr/ http://www.srce.hr/
You can also combine the two options and make pipelines to retrieve
the documents from remote hotlists:
wget -O - http://cool.list.com/ | wget --force-html -i -
Very Advanced Usage
o If you wish Wget to keep a mirror of a page (or FTP subdirecto-
ries), use --mirror (-m), which is the shorthand for -r -l inf -N.
You can put Wget in the crontab file asking it to recheck a site each Sunday:
0 0 * * 0 wget --mirror http://www.gnu.org/ -o /home/me/weeklog
o In addition to the above, you want the links to be converted for
local viewing. But, after having read this manual, you know that
link conversion doesn't play well with timestamping, so you also
want Wget to back up the original HTML files before the conversion.
Wget invocation would look like this:
wget --mirror --convert-links --backup-converted \
http://www.gnu.org/ -o /home/me/weeklog
o But you've also noticed that local viewing doesn't work all that
well when HTML files are saved under extensions other than .html,
perhaps because they were served as index.cgi. So you'd like Wget
to rename all the files served with content-type text/html to name.html.
wget --mirror --convert-links --backup-converted \
--html-extension -o /home/me/weeklog \
http://www.gnu.org/
Or, with less typing:
wget -m -k -K -E http://www.gnu.org/ -o /home/me/weeklog
FILES
/usr/local/etc/wgetrc
Default location of the global startup file.
.wgetrc
User startup file.
BUGS
You are welcome to send bug reports about GNU Wget to <bug-wget@gnu.org>.
Before actually submitting a bug report, please try to follow a few
simple guidelines.
1. Please try to ascertain that the behaviour you see really is a bug.
If Wget crashes, it's a bug. If Wget does not behave as docu-
mented, it's a bug. If things work strangely, but you are not sure
about the way they are supposed to work, it might well be a bug.
2. Try to repeat the bug in as simple circumstances as possible. E.g.
if Wget crashes while downloading wget -rl0 -kKE -t5 -Y0
http://yoyodyne.com -o /tmp/log, you should try to see if the crash
is repeatable, and if it will occur with a simpler set of options.
You might even try to start the download at the page where the
crash occurred to see if that page somehow triggered the crash.
Also, while I will probably be interested to know the contents of
your .wgetrc file, just dumping it into the debug message is proba-
bly a bad idea. Instead, you should first try to see if the bug
repeats with .wgetrc moved out of the way. Only if it turns out
that .wgetrc settings affect the bug, mail me the relevant parts of the file.
3. Please start Wget with the -d option and send the log (or the relevant
parts of it). If Wget was compiled without debug support, recom-
pile it. It is much easier to trace bugs with debug support on.
4. If Wget has crashed, try to run it in a debugger, e.g. "gdb `which
wget` core" and type "where" to get the backtrace.
SEE ALSO
GNU Info entry for wget.
AUTHOR
Originally written by Hrvoje Niksic <firstname.lastname@example.org>.
COPYRIGHT
Copyright (c) 1996, 1997, 1998, 2000, 2001 Free Software Foundation, Inc.
Permission is granted to make and distribute verbatim copies of this
manual provided the copyright notice and this permission notice are
preserved on all copies.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.1 or
any later version published by the Free Software Foundation; with the
Invariant Sections being ``GNU General Public License'' and ``GNU Free
Documentation License'', with no Front-Cover Texts, and with no Back-
Cover Texts. A copy of the license is included in the section entitled
``GNU Free Documentation License''.
GNU Wget 1.8.2 2002-07-24 WGET(1)