Hacker News
anononaut on May 15, 2023 | on: Come back, c2.com, we still need you
It occurs to me again that I need to figure out how entire websites can be downloaded and archived. Like Archive.org, but local.
psychoslave on May 15, 2023 [–]
https://bash-prompt.net/guides/wget-mirror-website/
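The guide linked above comes down to a single wget invocation. As a sketch (the flags are standard GNU wget options; the wrapper function name and the URL argument are illustrative, not from the guide):

```shell
# mirror_site: download a static site for offline browsing.
# --mirror           recursive download with timestamping
# --convert-links    rewrite links so the local copy browses offline
# --adjust-extension save pages with .html extensions where appropriate
# --page-requisites  also fetch CSS, images, and other assets pages need
# --no-parent        stay below the starting directory
mirror_site() {
  wget --mirror --convert-links --adjust-extension \
       --page-requisites --no-parent "$1"
}

# usage: mirror_site https://example.com/
```

Note this only captures what the server sends as static HTML; see the SPA caveat below in the thread.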
danparsonson on May 15, 2023 [–]
https://www.httrack.com/ is a good option.
mesarvagya on May 15, 2023 [–]
Most SPA websites today cannot be downloaded with HTTrack.
danparsonson on May 15, 2023 [–]
Yes, SPAs will be next to impossible for a tool like this; I'm not sure how any tool could archive such a site, tbh.
_a9 on May 15, 2023 [–]
I use browsertrix-crawler[0] for crawling, and it does well on JS-heavy sites since it uses a real browser to request pages. It even has options to load browser profiles so you can crawl while authenticated on sites.

[0] https://github.com/webrecorder/browsertrix-crawler
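For reference, browsertrix-crawler is typically run via Docker. A sketch along the lines of the project's README (the flags shown come from its documentation; the wrapper function, collection name, and mounted path are illustrative, and current flag names should be checked against the repo):

```shell
# crawl_site: archive a JS-heavy site with a real headless browser.
# Output (WARC/WACZ files) lands in ./crawls on the host.
crawl_site() {
  docker run -v "$PWD/crawls:/crawls/" -it webrecorder/browsertrix-crawler \
    crawl --url "$1" --generateWACZ --collection mycrawl
}

# usage: crawl_site https://example.com/
```

The resulting WACZ archive can then be replayed locally with Webrecorder's replay tools.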
shagie on May 15, 2023 [–]
The entirety of the archive pre-rework can be found at https://archive.org/details/c2.com-wiki_201501