Currently, DEVONthink's AI feature set feels like a toy compared to its long-refined core features. You cannot branch chats, send images in chat, see the thinking process of reasoning models, adjust the reasoning effort, or set the temperature, and so on. Admittedly, DEVONthink is not an AI tool, but the team did brand an AI Assistant into the big version 4 update. It might have been a better choice to bundle the AI features as optional add-ons, like the PDF Services, Additional Scripts, and Script Library packs, instead of building them directly into the default feature set. Still, these are minor complaints.
Since DEVONthink 4 replaced its highly regarded one-time-purchase model with a paid-upgrade model, there should be a far stronger incentive than ever to keep users updating, because AI models evolve fast. Claude went from 4.0 to 4.5 in a mere half year, yet DEVONthink still limits the choices to Claude 3.5 Haiku, 4.5 Sonnet, and 4.1 Opus. Gemini 3.0 Pro was released in November, yet the app still offers only the Gemini 2.5 family.
Fortunately, the DEVONthink team does not seem to intend to use model versions as a lever to force paid updates. Instead, they simply do not seem to care about new model releases. I understand they may do a lot of customization to make models fit the DEVONthink workflow. However, since DEVONthink introduced Ollama cloud support, it appears to simply read the model list from the Ollama cloud: when Ollama added Kimi K2 Thinking or something similar, DEVONthink produced an error when calling that model through the API. Either way, the current approach to AI model updates is a mess. There is no clear rule about which models will be updated or which will be introduced. The team is certainly busy, but letting users edit the model name themselves, as many mature AI chat clients do, seems a relatively easy way to make this more convenient.
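Discovering models dynamically is genuinely simple: Ollama exposes the installed model list via a GET request to its `/api/tags` endpoint, and a client only needs to parse the returned names. The sketch below is illustrative, not DEVONthink's actual implementation; the sample payload is hypothetical but follows the documented response shape, with the model names invented for the example.

```python
import json

# Hypothetical sample in the shape of Ollama's /api/tags response
# (normally fetched with: GET http://localhost:11434/api/tags).
sample_response = json.dumps({
    "models": [
        {"name": "llama3.1:8b"},
        {"name": "kimi-k2-thinking:latest"},
    ]
})

def model_names(tags_json: str) -> list[str]:
    """Extract the model names from an /api/tags-style JSON payload."""
    return [m["name"] for m in json.loads(tags_json).get("models", [])]

print(model_names(sample_response))  # → ['llama3.1:8b', 'kimi-k2-thinking:latest']
```

A client that populates its model picker this way, or that simply accepts a free-form model string, never falls behind when the provider adds a new model.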
