What should I use to crawl/download/archive an entire website? It's all simple static pages (no JavaScript), but has lots of links to download small binary files, which I also want to preserve. Any OS -- just want the best tools.


@cancel I've just used wget in the past. It's pretty simple to have it recursively download everything linked on a page. I don't remember the flags off the top of my head, but finding a guide wouldn't be too difficult.
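
For reference, a typical invocation for mirroring a static site looks something like this (all flags are standard GNU Wget options; the URL is a placeholder):

    wget --mirror --convert-links --adjust-extension --page-requisites --no-parent https://example.com/

--mirror turns on recursive download with infinite depth and timestamping; --convert-links rewrites the saved pages so links work when browsed offline; --adjust-extension adds .html extensions where needed; --page-requisites also fetches the images and CSS each page needs to render; --no-parent keeps the crawl from wandering above the starting directory. Linked binary files on the same host are picked up by the recursion automatically; if they live on another host, you'd also need --span-hosts together with --domains to allow it.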
