I’m capturing lots of webpages as part of a PhD. At the moment, this sends me to Reddit, looking at the protests about Reddit’s own policies. One side effect of these protests is that lots of communities are flagging themselves as NSFW, despite being completely safe for work, in an effort to demonetise Reddit.
When I capture these pages through the DT clipper, I end up with a page that just says ‘Over 18? You need to log in’. I get why this happens: DT and the clipper do not share credentials with Safari, which is Apple policy, and having used the web since the 90s I can sadly see why that makes sense.
However, right now (and a few times in the past) this is really frustrating. I want to capture a page, associate some notes with the capture, tag it, and have the same thing show up in my DT database as what I’m looking at. When a capture hits the login wall, there’s no easy way to replace the item after the metadata has been associated with it. It’s a very frustrating experience. I’m having to take screenshots and try to associate the images with semi-working captured pages, but this loses all the search magic DT gives me.
Would it be possible to add a ‘credentials-only’ browser somewhere in DT (and/or the clipper) so users can log in before running a capture? Downie (Downie - YouTube Video Downloader for macOS - Charlie Monroe Software) has a neat ‘user-guided extraction’ feature that lets users log in when scraping video, which shows one way this could be handled.