I was just testing the new functionality with local AI models via Ollama and LM Studio. So far the models can only summarize a PDF but weren’t able to answer a single question about the particular document. Am I missing some settings besides the AI pane? Do I need to train the models?
My use case is: I have buckets (folders) of distinct cases, each a collection of documents belonging to one person. I want to be able to ask questions about that person, like: When was xyz born, which health insurance does he have, etc.
Hi, I just upgraded to DEVONthink 4 (v4.0.2), and I’m having the same problem. I’m using the latest LM Studio beta (0.3.21 Build 1), and I’ve tried both mlx-community/Qwen3-30B-A3B-Instruct-2507-8bit and google/gemma-3-27b. Both are supposed to support tool use.
With Qwen3, when I selected a web archive and asked, “Can you please summarize the selected item for me?”, it replied:
I’ll summarize the selected item for you. Let me retrieve its contents first.
The latest LM Studio versions seem to have major issues with tool calls: they are unable to parse the model’s tool-call response and therefore forward it to the client unfiltered. This issue might be related:
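For anyone who wants to check this outside DEVONthink: a minimal sketch of how one could tell whether the server parsed a tool call or leaked it as raw text, assuming an OpenAI-style chat response shape and Qwen-style `<tool_call>` markers (the tool name and model id below are hypothetical, not from DEVONthink):

```python
import json

def classify_tool_reply(message: dict) -> str:
    """Classify an OpenAI-style chat message: did the server extract
    structured tool calls, or did the raw markers leak into content?"""
    if message.get("tool_calls"):
        return "parsed"            # server extracted structured tool calls
    content = message.get("content") or ""
    # Qwen-style models emit <tool_call>...</tool_call> markers; if these
    # survive in `content`, the server failed to parse them.
    if "<tool_call>" in content:
        return "leaked"
    return "plain"                 # ordinary text reply, no tool use

# Hypothetical request body one would POST to LM Studio's
# OpenAI-compatible endpoint (default http://localhost:1234/v1/chat/completions).
payload = {
    "model": "qwen3-30b-a3b-instruct-2507",   # assumed model id
    "messages": [
        {"role": "user", "content": "Can you please summarize the selected item?"}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_selected_item",      # hypothetical tool name
            "description": "Return the contents of the selected item.",
            "parameters": {"type": "object", "properties": {}},
        },
    }],
}
print(json.dumps(payload, indent=2))
```

If `classify_tool_reply` on the returned `message` says `"leaked"`, the problem is on the server's parsing side rather than in the client asking the question.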