I’ve searched through some of the discussions on capturing Web content into DT, but I haven’t found a solution to this problem:
Let’s say I’m reading an article at Salon.com that is five “pages” long. Typically there’s a link at the bottom of page one to continue to the next page. If I want to capture the entire article to DT, I have to grab each page separately, using one of the conventional methods people have discussed here (Services menu, copy/paste, dragging the URL, Dock menus, Folder Actions). While this gets the content into DT, those five pages are not linked together, i.e., clicking the “continue” (or whatever) link at the bottom of page one either goes nowhere or tries to take me back out to the Web via my default browser.
The only way I’ve found to preserve the structure of the original is to save each page with a web browser, put them all into a common folder, and import that folder into DT. The first document will usually be “Title of Article.html,” and the rest something like “index1”, “index2”, and so on. This works, but it’s labour-intensive and clumsy (“index1” is meaningless in a search if you have dozens of documents with that same name). Alternatively, I could use IE or iCab to save the entire page as a “web archive” and just link to that file in DT, but then the content isn’t indexed, which defeats the point.
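For what it’s worth, the fiddly part of that workflow — making the “continue” links on each saved page point at the other saved pages instead of back out to the Web — can in principle be scripted. Here’s a minimal sketch in Python of just the link-rewriting step: given one page’s HTML and a map from the remote page URLs to the local filenames they were saved under, it rewrites each matching `href`. The URLs, filenames, and the `localize_links` name are all hypothetical examples, not anything built into DT.

```python
def localize_links(html, url_to_file):
    """Rewrite hrefs that point at other pages of the same article
    so they point at the locally saved copies instead."""
    for url, filename in url_to_file.items():
        html = html.replace('href="%s"' % url, 'href="%s"' % filename)
    return html

# Hypothetical mapping: remote "continue" URLs -> saved filenames
page_map = {
    "http://www.salon.com/article?page=2": "article_p2.html",
    "http://www.salon.com/article?page=3": "article_p3.html",
}

page1 = '<p><a href="http://www.salon.com/article?page=2">Next page</a></p>'
print(localize_links(page1, page_map))
# -> <p><a href="article_p2.html">Next page</a></p>
```

After rewriting all the pages this way and dropping the folder into DT, the “continue” links should stay inside the database — but it still doesn’t solve the “index1/index2” naming mess.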
So… my fellow DEVONthink devotees, how are all of you saving your multi-page web documents in a clean, efficient way?
Any help appreciated!