Web Clipper works very well for me on many pages, but there appears to be a limit on the amount of data it can process. Pages with textual content and comments (with icons identifying each commenter) are saved as paginated PDFs, content only, which is exactly what I want. But once the saved file reaches around 300 KB in DT3, the clipper no longer captures the data for longer web pages.
It’s true that the page can then be captured, but it requires more work to clean up the result. It is quicker to simply copy the page with Ctrl-A, paste it into TextEdit, delete what I don’t want, save it as a PDF, and then drag it to DT3.
Usually that’s true, but in this case the website supports a printer-friendly layout, so the paginated PDF is actually quite clean without having to use the clutter-free option.