The Current State of DEVONthink AI Updates

Currently, DEVONthink's AI feature set feels like a toy compared to its long-refined core features. You cannot branch chats, send images in chat, see the thinking process of reasoning models, adjust the reasoning effort separately, set the temperature, and so on. Obviously DEVONthink may not be an AI tool, but they did brand an AI Assistant into the big version 4 update. It would seem a better choice to bundle the AI features as installable add-ons, like their PDF services, Additional Scripts, Script Library, and so on, instead of building them directly into the default feature set. But these points hardly matter.

When DEVONthink 4 changed its highly regarded one-time purchase model to a paid-update model, I think the pressure to give users reasons to update became greater than ever, because AI models evolve quickly. Anthropic went from Claude 4.0 to 4.5 in a mere half year, while DEVONthink limits the choices to Claude 3.5 Haiku, Sonnet 4.5, and Opus 4.1. Gemini 3.0 Pro was released in November, yet DEVONthink still offers only the Gemini 2.5 family.

Fortunately, the DEVONthink team does not seem to intend to use model versions as a lock for paid updates. Instead, they simply do not seem to keep up with new model releases. I understand they may do a lot of customization to make models fit the DEVONthink workflow. However, since DEVONthink introduced Ollama cloud support, it appears to simply read the model list from the Ollama cloud; when Ollama added Kimi K2 Thinking (or a similarly named model), DEVONthink produced an error when calling that model through the API. Either way, the current approach to model updates is a mess: there is no clear rule about which models will be updated or which will be introduced. The team is certainly busy, but letting users edit the model name themselves seems a relatively easy way to make this more convenient, as many mature AI chat clients do.
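As a side note on how reading a model list can work: Ollama exposes a `GET /api/tags` endpoint that returns the available models as JSON. The sketch below is a minimal illustration of a client consuming that endpoint; the model names in the sample payload are hypothetical, and the parsing helper assumes the documented `{"models": [{"name": ...}, ...]}` response shape.

```python
import json
import urllib.request


def parse_model_names(payload: dict) -> list[str]:
    """Extract model names from an Ollama /api/tags-style response payload."""
    return [m["name"] for m in payload.get("models", [])]


def list_ollama_models(base_url: str = "http://localhost:11434") -> list[str]:
    """Query a running Ollama server for the models it currently offers."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return parse_model_names(json.load(resp))


# Illustrative payload in the shape /api/tags returns; model names are examples.
sample = {"models": [{"name": "kimi-k2-thinking"}, {"name": "llama3.1:8b"}]}
print(parse_model_names(sample))
```

A client that refreshes its list this way picks up new models automatically, which is presumably why a hard-coded allowlist can break when the server adds a model the client has never tested.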


Welcome @Hblvmni

It would seem a better choice to bundle the AI features as installable add-ons, like their PDF services, Additional Scripts, Script Library, and so on, instead of building them directly into the default feature set. But these points hardly matter.

There is no need for such bundling and the complexity it adds to development, to support, or for the user. Just like sync, automation, email importing, etc., AI is optional. People who don’t want to use external AI don’t have to think about it. People who want to use it have to make the decision and do the nominal setup for their situation.

When DEVONthink 4 changed its highly regarded one-time purchase model to a paid-update model, I think the pressure to give users reasons to update became greater than ever, because AI models evolve quickly. Anthropic went from Claude 4.0 to 4.5 in a mere half year, while DEVONthink limits the choices to Claude 3.5 Haiku, Sonnet 4.5, and Opus 4.1. Gemini 3.0 Pro was released in November, yet DEVONthink still offers only the Gemini 2.5 family.

AI models being released quickly does not mean they are better. (Care to discuss ChatGPT 5’s release? :roll_eyes:)

Adding models is not as simple as you think it is. And who do you think gets the complaints and bug reports if AI isn’t doing what people expect? You don’t contact the Ollama team or Moonshot AI, etc. You assume their model is correct and our apps are faulty. So logically we have to vet and control what we can to know how a model works inside our apps and minimize friction and support issues as much as possible.

There is no clear rule about which models will be updated or which will be introduced.

That is true, as our development roadmap is intentionally private. That being said, say Claude 5 was introduced today. Don’t you think we’d get support requests asking why it’s not already supported? Of course we would.

  • Does that mean we should stop everything and just add it? No.
  • Does that mean we’re ignoring it or don’t care? No.
  • Does that mean we would now be aware of it, and that it would be examined and tested thoughtfully and carefully in due time? 100%.

Remember, there is far more to DEVONthink than external AI. It is not the core of the application. Like our own internal AI and the many other powerful functions available, it is technology that works toward the core, the raison d’être: document and information management.


I’m disappointed I can’t use ChatGPT 5.2 with DEVONthink. It took me a couple of hours in the settings just to arrive at that conclusion. I’ve been using VSCode to manage large collections of project files and data, and I have yet to find a better solution. Plus, VSCode is free, so I’ll continue using it.

There also seems to be a bug in the UI model selector: it says 5.0 but reports that it’s using 4.

You’re welcome!

I did not expect it to be like this, because AI clients typically support all available models as soon as possible.

Bespoke AI applications logically try to support everything, all the time, as fast as they can. It’s a bragging rights thing, especially as that’s all the apps generally do: interface with chat engines. DEVONthink is not a mere chat application (though impromptu chats can certainly be done, and we all have been using the Chat assistant instead of e.g., spinning up the Claude app). As I noted, access to external AI is a feature supporting our apps’ core functionality.

On what basis do you categorize something built into the software itself as optional? If it is optional, how does that differ from installable add-ons? I am trying to understand the distinction.

Installable add-ons are just a simple panel for installing certain components, e.g., the OCR engine in Pro or Server. AI is not installable and is certainly threaded more deeply into various aspects of the application. However, as with other functions, no one has to use it. For example, the See Also inspector is a powerful built-in inspector, but you may never use it. And for those who really don’t want AI, it has to be explicitly enabled, with choices made, before it can be used at all.

Welcome @kdawgforever
ChatGPT 5.2 was released less than two weeks ago. See the previous discussion.