> I've tried a number of things like wget to "offline" a website and had mixed success. Does anyone know of a proven way to do something like this?
What about httrack[0]? From its description in the OpenBSD ports tree:
HTTrack is an easy-to-use offline browser utility. It allows you to
download a World Wide Web site from the Internet to a local directory,
building recursively all directories, getting HTML, images, and other
files from the server to your computer. HTTrack arranges the original
site's relative link-structure. Simply open a page of the "mirrored"
website in your browser, and you can browse the site from link to link,
as if you were viewing it online. HTTrack can also update an existing
mirrored site, and resume interrupted downloads. HTTrack is fully
configurable, and has an integrated help system.
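A minimal sketch of how an httrack invocation for this might look (the URL, output directory, and filter pattern are placeholders, not from the thread):

```shell
#!/bin/sh
# Sketch of a typical httrack mirror command.
# -O sets the local output directory; the "+" pattern keeps the
# mirror scoped to the site instead of following external links.
cmd='httrack "https://example.com/" -O ./example-mirror "+example.com/*"'
# Printed rather than executed here, since an actual mirror needs network access.
echo "$cmd"
```

Re-running the same command against an existing output directory is how httrack updates a mirror or resumes an interrupted download.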
Or you can use wget, for either a single page or a recursive download. :)
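For the wget route, something like this is a common starting point (the URL is a placeholder; the flags are standard GNU wget options, but you may need to tune them per site):

```shell
#!/bin/sh
# Sketch: mirror a site for offline browsing with wget.
# --mirror          recursion + timestamping (shorthand for -r -N -l inf ...)
# --convert-links   rewrite links so the local copy browses offline
# --page-requisites also fetch the images/CSS/JS each page needs
# --no-parent       don't ascend above the starting directory
url="https://example.com/docs/"
cmd="wget --mirror --convert-links --page-requisites --no-parent $url"
# Printed rather than executed here, since mirroring needs network access.
echo "$cmd"
```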
[0]: http://www.httrack.com/