Scanning link pages

Here’s an example of how to use link pages in the crawler:

  1. Create a new crawler by pressing the "+" button on the left side of the crawler window

  2. Add the following link:

  3. Open the settings drawer

  4. Disable all filters and set the "Follow Links" level to the minimal value (the left end of the slider).

  5. Enter "*" (without quotes) in the "Follow Links" field to follow all links

  6. Enter any (default) query you like, e.g. "G5 OR Panther" (without quotes).

Afterwards, this new crawler will scan all referenced pages.
Note that the upcoming release 1.2 will improve caching of redirected pages and will include such a demo crawler (along with other major improvements, of course).

I’ve been trying to get DA to scan … emap.shtml, where unfortunately all the links are ugly JavaScript cr*p. What I’d like DA Crawler to do (in fact, one of the reasons I bought it) is to download all the help pages from this site and import them into DT, so that I can search them the way I want.

I don’t think this is possible, or is it?

This is currently not possible, as it would require a JavaScript interpreter (Apple’s WebKit isn’t suitable for such a job; it would be painfully slow). Actually, I don’t know of any download manager or site sucker that supports such links.
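As a manual workaround outside DA, such links can sometimes be harvested when the target path appears literally in the page source (e.g. `href="javascript:go('help/intro.shtml')"`). A minimal Python sketch of that idea (the function name and regex are my own, and it only finds literal paths, not URLs that the script computes at runtime):

```python
import re
from urllib.parse import urljoin


def extract_js_link_targets(html, base_url):
    """Find quoted strings that look like .htm/.html/.shtml paths in the
    page source (including inside javascript: hrefs and onclick handlers)
    and resolve them against the page's base URL."""
    candidates = re.findall(r"""["']([^"']+\.s?html?)["']""", html)
    return [urljoin(base_url, c) for c in candidates]


# Example: a JavaScript-only link whose target path is literal in the markup.
page = '<a href="javascript:go(\'help/intro.shtml\')">Intro</a>'
print(extract_js_link_targets(page, "http://example.com/docs/"))
# → ['http://example.com/docs/help/intro.shtml']
```

The resulting URL list could then be fed to any ordinary downloader. This breaks down as soon as the script assembles URLs from pieces, which is exactly why a real JavaScript interpreter would be needed for the general case.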

Me neither…  :(

Thanks for the reply anyway!