Downloading many web pages as PDF or web archive

Hi,

I have some wikis whose current content I’d like to store in full, but without following every last link (e.g., I don’t want it to follow the page-history or “other formats” links).

Is there an efficient way to do this? DEVONagent can grab the list of links quickly enough, but doesn’t seem to offer a way to download them all for me; and DEVONthink has a “download site” option, but doesn’t seem to have the filtering options I’d need to download just the main content.

I should add that I have DT Pro Office & DA, so hopefully I have all the tools I’ll need :slight_smile:

Any ideas?

Sounds like there are no ideas for an easier way to do this, so I’ll make a start on doing it manually…

At the moment the only way to automate this is AppleScript; see the commands “get links of”, “create PDF document from”, and “create web document from”.
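For example, here’s a rough, untested sketch of how those commands could be combined. The wiki URL, the “/wiki/” link filter, the history/special-page exclusions, and the destination group name are all placeholders you’d adjust for your own sites:

```applescript
-- Rough sketch (untested): save every page linked from a wiki's start page as a PDF in DEVONthink.
-- The URL, the "/wiki/" filter, the exclusions, and the group name are placeholders.
tell application id "DNtp"
	set theBase to "http://example.com/wiki/Main_Page"
	set theGroup to create location "/Wiki Archive"
	
	-- Grab the start page's HTML and extract its links,
	-- keeping only those that look like wiki articles.
	set theSource to download markup from theBase
	set theLinks to get links of theSource base URL theBase containing "/wiki/"
	
	repeat with theLink in theLinks
		set thisLink to theLink as string
		-- Skip page-history and special pages; adjust the conditions to taste.
		if thisLink does not contain "action=history" and thisLink does not contain "Special:" then
			create PDF document from thisLink in theGroup
			-- Or, for web archives instead of PDFs:
			-- create web document from thisLink in theGroup
		end if
	end repeat
end tell
```

If you want web archives rather than PDFs, swap in “create web document from” as shown in the comment.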