This should work as long as a precision of 1 second is sufficient:
set theDate1 to current date
delay 1
set theDate2 to current date
return theDate2 - theDate1
Result is 1.
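If sub-second precision is ever needed, here is a minimal AppleScriptObjC sketch (my own assumption, not from the post above; requires macOS 10.10 or later, e.g. run from Script Editor). NSDate resolves fractional seconds, while current date only resolves whole ones:

use framework "Foundation"
use scripting additions

set t1 to current application's NSDate's |date|()
delay 0.25
-- timeIntervalSinceNow is negative for a date in the past, hence the minus
return -(t1's timeIntervalSinceNow()) -- a real, roughly 0.25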
Did you get an answer as to why the clipper isn't just taking what is already downloaded within the browser?
It seems like a panacea for 90% of the problems mentioned here.
The clipper doesn't use what is already downloaded in the browser; it downloads the page separately (basically, the clipper just passes the URL along to DEVONthink). It's a different approach than the Evernote web clipper.
You can use different approaches, e.g. opening a bookmark in DT and then capturing what's shown there (see e.g. some of my posts on this forum where I use scripts to do that).
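Not the exact scripts referred to, but a minimal sketch of that idea, assuming Safari with "Allow JavaScript from Apple Events" enabled in the Develop menu and DEVONthink 3 (application id "DNtp"):

tell application "Safari"
    set pageURL to URL of current tab of front window
    set pageTitle to name of current tab of front window
    -- Grab the DOM exactly as the browser has rendered it, logins and cookies included
    set pageHTML to do JavaScript "document.documentElement.outerHTML;" in current tab of front window
end tell
tell application id "DNtp"
    create record with {name:pageTitle, type:html, source:pageHTML, URL:pageURL} in incoming group
end tell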
Thanks, that confirms the current architectural design decision.
However…
That's exactly what carnecro suggested, but it looks like nobody cares.
I agree with him that this approach is a design flaw. Trying to handle all these cases from just a URL in a separate component is IMHO Sisyphean work. Look: all that content is already downloaded and at hand in your browser, already rendered, with all the logins and corporate barriers far behind you and all the cookie/GDPR/ad/newsletter/whatever prompts already clicked away and confirmed.
That's why I need an answer to the "why?": why take a route you just can't win? There will always be hundreds of lost battles, and users who need consistency will always be unhappy.
And no, opening a bookmark in DT and capturing from within it does not work for me the same way as capturing from within the browser.
This is great!
Will experiment with it for a while.
This is not necessarily true: imagine lazy-loaded images or other content that is loaded only when it is scrolled into view. Combine that with some "clever" JavaScript and you'll quickly arrive at a point where not all content is already downloaded.
One could try to download the current browser DOM as HTML. But given that many websites today contain highly dynamic content, there'll probably always be border cases. Like the brilliantly broken website recently mentioned in this forum that used JavaScript to display images.
Just a small thought experiment: an HTML document uses some server-side JavaScript to generate (part of) its content. You capture the DOM as HTML, which will also contain the JavaScript URL used to generate the content. Some time later, this (server-side!) JavaScript is changed to produce different content. When you then open the saved HTML, this new content will be loaded… and the old one will also still be there…
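One partial mitigation for that scenario, purely a hypothetical sketch (not something the clipper actually does): remove the script elements before saving, so the stale server-side JavaScript can never run again.

-- Clone the DOM, drop every <script>, return the static markup
set stripJS to "var d = document.documentElement.cloneNode(true); d.querySelectorAll('script').forEach(function (s) { s.remove(); }); d.outerHTML;"
tell application "Safari"
    set staticHTML to do JavaScript stripJS in current tab of front window
end tell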
True. Trying to capture possibly dynamic content that way is a battle that can't be won. Use a bookmark if you want the current state of things, or a static format like PDF if you need the content at a certain point in time.
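The static PDF route can also be scripted; a minimal sketch assuming DEVONthink 3 and a placeholder URL:

tell application id "DNtp"
    -- Freeze the page as a PDF at this point in time
    create PDF document from "https://example.com/article" in incoming group
end tell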
As long as there's a server generating content, you can't be sure that what you capture today is what it'll deliver tomorrow.
That's what DEVONthink usually does on the Mac (depending on the browser and how the Sorter was activated).
Scrolling everything into view first, I guess. And what about dynamically generated (parts of the) DOM? If the responsible JavaScript URL remains in the DOM, it might generate different content later, possibly producing a weird mix of saved and newly generated DOM?
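The scrolling part at least can be automated; a crude sketch, again assuming Safari with "Allow JavaScript from Apple Events" enabled:

tell application "Safari"
    -- Force lazy loaders to fire by scrolling to the bottom first
    do JavaScript "window.scrollTo(0, document.body.scrollHeight);" in current tab of front window
    delay 2 -- crude: give lazy-loaded content time to arrive
    set pageHTML to do JavaScript "document.documentElement.outerHTML;" in current tab of front window
end tell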
That's one approach in use; another is to preprocess the HTML code (e.g. to embed dynamic images). But of course there's far too much, and too varied, JavaScript code out there for this to always work.
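For the image-embedding kind of preprocessing, a hypothetical sketch that redraws already-rendered images onto a canvas and inlines them as data URIs (cross-origin images taint the canvas and are silently skipped):

set inlineJS to "document.querySelectorAll('img').forEach(function (i) { var c = document.createElement('canvas'); c.width = i.naturalWidth; c.height = i.naturalHeight; c.getContext('2d').drawImage(i, 0, 0); try { i.src = c.toDataURL('image/png'); } catch (e) {} }); document.documentElement.outerHTML;"
tell application "Safari"
    set embeddedHTML to do JavaScript inlineJS in current tab of front window
end tell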
BTW/OT: 10+ years ago I used Snagit with a mode that automatically scrolled the content of a web page. It was brilliant, 100% fidelity, though the result was only a very long JPG/PNG…
10+ years ago
… the web was a different place