>>1940
The Session Exporter trick is neat, although I don't think I'd have a use for it. I only really have 2 or 3 use cases for local archiving: archiving some video or media content for when it's inevitably taken down, archiving a single page that has good info, or archiving an entire site or a section of it.
For the first case I use youtube-dl, although it doesn't work on every page. When it fails I try to sniff the download links manually. That's sometimes straightforward, but most sites have migrated to DASH, where the video is served as lots of small segments, so it's a bit more involved: in those cases I use curl to download all the pieces and a script to put them back together with ffmpeg. The disadvantage is that curl downloads the segments one at a time, so a really long video with hundreds or thousands of pieces takes a while, but you can always subdivide the range across multiple instances. Multiple videos can also be done in parallel the same way.
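The curl-then-ffmpeg stitching goes roughly like this. Everything here is a placeholder (the URL pattern, segment naming, and count are made up, since every site names its segments differently); sniff the real segment URLs from the browser's network tab first:

```shell
#!/bin/sh
# Sketch only: BASE, the .m4s naming scheme, and N are invented examples.
BASE='https://example.com/video/seg'   # hypothetical segment URL prefix
N=4                                    # real videos may have thousands

: > concat.txt
i=1
while [ "$i" -le "$N" ]; do
    # curl -fsS -o "seg$i.m4s" "$BASE$i.m4s"   # uncomment to actually fetch
    printf "file 'seg%d.m4s'\n" "$i" >> concat.txt
    i=$((i + 1))
done

# ffmpeg's concat demuxer then joins the pieces without re-encoding:
# ffmpeg -f concat -safe 0 -i concat.txt -c copy output.mp4
```

The concat demuxer with `-c copy` just remuxes the segments, so it's fast and lossless as long as the pieces share the same codec parameters.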
For the second case I simply download the page through the browser. For the third case I couldn't find anything that gave me the flexibility of wget. I tried HTTrack but it always performed very poorly for me, while wget gives me almost the same functionality and lets me fuck around with regexes to tell it exactly what to download from the site, with enormous precision. The downsides: it's single threaded, so be prepared to wait a lot, and although its mirroring mode converts links in the downloaded files for offline browsing, the devs changed the behavior so that only the files downloaded in that particular invocation get converted, which is rather infuriating. So if you previously downloaded a 50GB booru and wanted to update it, the sane course of action would be to re-download only the HTML plus the new images, but no, wget requires you to either fix the HTML manually or download the entire 50GB again, which is utterly retarded.
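For reference, a mirroring invocation in that spirit might look like the following. The URL and both regexes are invented placeholders, not a real site; it builds the command into a variable so you can inspect it before running:

```shell
#!/bin/sh
# Hypothetical example: the URL and both regex patterns are made up.
# --accept-regex / --reject-regex filter which URLs wget will follow
# (POSIX regexes by default); --convert-links rewrites links in the
# downloaded files for offline browsing, but only for files fetched
# in this same invocation, as described above.
CMD="wget --mirror --convert-links --page-requisites --no-parent \
 --accept-regex '/post/[0-9]+|/images/' \
 --reject-regex '/login|/comments' \
 https://example.com/booru/"
echo "$CMD"   # inspect, then run it once the URL and patterns are real
```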
There's wget2, which fixes the multithreading issue, but although it supposedly aims to be command-line compatible with wget, I couldn't get it to accept my regexes correctly even though they worked perfectly in wget.
>>1964
Nice, I wish I could have that kind of storage space, but with so many drives I'd go insane.