Hacker News

It occurs to me again that I need to figure out how entire websites can be downloaded and archived. Like Archive.org, but local.



https://www.httrack.com/ is a good option
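For reference, a minimal HTTrack invocation looks something like the following; the URL, output directory, and depth here are placeholder choices, not anything recommended upthread:

```shell
# Mirror a site into ./example-mirror, staying on the target domain
# (-O: output path; "+…" is a scope filter; -r4 caps recursion depth)
httrack "https://example.com/" -O "./example-mirror" "+*.example.com/*" -r4
```

Re-running the same command with `--update` refreshes an existing mirror instead of starting over.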


Most SPA websites today cannot be downloaded with HTTrack.


Yes, SPAs will be next to impossible for a tool like this; I'm not sure how any tool could archive such a site, tbh.
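To illustrate why link-following crawlers come up empty on SPAs: the server typically returns only a mount point and a script tag, and the actual pages exist only after JavaScript runs in a browser. A small sketch with a hypothetical SPA index page:

```python
# Why static crawlers fail on SPAs: the HTML a server returns for an SPA
# usually contains no <a href> links at all, just a JS bundle.
from html.parser import HTMLParser

# Hypothetical SPA response -- a bare mount point plus a script tag.
SPA_INDEX = """
<!doctype html>
<html><head><title>App</title></head>
<body><div id="root"></div><script src="/bundle.js"></script></body></html>
"""

class LinkCollector(HTMLParser):
    """Collects href targets the way a link-following crawler would."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

parser = LinkCollector()
parser.feed(SPA_INDEX)
print(parser.links)  # [] -- nothing for the crawler to follow
```

With no links to follow, a tool like HTTrack stops at the empty shell; archiving the real content requires executing the JavaScript, i.e. driving an actual browser.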


I use browsertrix-crawler[0] for crawling, and it does well on JS-heavy sites since it uses a real browser to request pages. It even has options to load browser profiles, so you can crawl while authenticated on sites.

[0] https://github.com/webrecorder/browsertrix-crawler
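Per the project's README, it runs as a Docker container; a basic crawl looks roughly like this (the URL and collection name below are placeholders):

```shell
# Crawl a site with a real browser and package the result as a WACZ archive
docker run -v $PWD/crawls:/crawls/ -it webrecorder/browsertrix-crawler \
  crawl --url https://example.com/ --generateWACZ --collection my-archive
```

Authenticated crawls work by creating a browser profile ahead of time and passing it in with the `--profile` option.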


The entirety of the pre-rework archive can be found at https://archive.org/details/c2.com-wiki_201501



