I decided to give Gemini 2.0 Pro a try, given its large context window, to see if it can handle some large files that Claude cannot handle without splitting up the PDF.
I got this response in DT4 Chat:
I am not sure I understand the business model behind (1) advertising a large context window; (2) accepting paying API customers for that product; and then (3) denying those customers access to your product because they used it too much and/or used it on very large documents. Handling large documents seems to me to be the whole point of the model.
I guess, as usual, Google is more comfortable offering a mediocre service for free than offering a quality service for money.
No idea what their intention is, but the next beta will support Gemini 2.5 Pro, just like GPT-4.1, o3 and o4-mini. What would a day be without new AI models?
The other thing to bear in mind is that, much as Google have made their latest Gemini Pro models available to non-subscribers, you’ll be subject to quota limits that are far smaller than those you get as a paid subscriber. And the API products from all the LLM vendors are a separate service altogether.
To be specific, a Gemini Pro subscription provides access to Gemini Pro queries via the Gemini Pro web and app interfaces, and also provides a premium version of their PDF summarisation service called NotebookLM+. There’s nothing stopping one from accessing these services via bookmarks from within DEVONthink, but I access them via a third-party web browser before importing text from these services into DEVONthink (via Obsidian).
And if you have a premium subscription to Perplexity or Kagi (where you’re able to query all the main LLMs), once again you’re paying for their web/app interface options, not their API key options.
In terms of the way I work, I don’t have much need for API key services as I mainly use LLMs for desk research before bringing the results of that research into DEVONthink (via Obsidian). I do however use Kagi’s API summariser service to summarise historic long-form content in my data. This is a low-cost pre-pay service charged at a flat rate of $0.025 per 1000 tokens for their premium subscribers (1000 tokens is approximately 750 words).
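For anyone wanting to budget that kind of summarisation work, the per-document cost is easy to estimate from the figures above. Here is a minimal Python sketch using the $0.025 per 1,000 tokens rate and the rough 1,000-tokens ≈ 750-words conversion quoted above; the function and the sample word counts are purely illustrative and have nothing to do with Kagi’s actual API.

```python
# Rough cost estimate for Kagi's API summariser, based on the figures
# quoted above: $0.025 per 1,000 tokens, and roughly 750 words per
# 1,000 tokens. Illustrative arithmetic only; check Kagi's own pricing
# page for current rates.

RATE_PER_1000_TOKENS = 0.025   # USD, premium-subscriber flat rate quoted above
WORDS_PER_1000_TOKENS = 750    # rough words-to-tokens conversion quoted above


def estimated_cost(word_count: int) -> float:
    """Approximate USD cost to summarise a document of `word_count` words."""
    tokens = word_count * 1000 / WORDS_PER_1000_TOKENS
    return tokens / 1000 * RATE_PER_1000_TOKENS


if __name__ == "__main__":
    # e.g. a long article, a report, and a short book
    for words in (750, 5_000, 50_000):
        print(f"{words:>6} words ≈ ${estimated_cost(words):.3f}")
```

By that arithmetic even a book-length document comes to well under two dollars, which is why I describe it as low-cost.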
Indeed, and I have clarified this in the documentation for beta 2.