In my experimentation using Ollama and various models with DEVONthink, I find they do pretty well at responding in the chat, with quality and speed varying commensurate with each model's capabilities. After reading the manual's section on external AI, I decided to ask the models what their capabilities were. A number of them gave a table of general things they could do. A few gave very specific answers, including the functions they could access (which I later discovered they seemed to have trouble actually using - see below). Only a couple (the qwen3:8b and qwen3:32b models) went beyond answering in the Inspector chat and immediately created a file listing their capabilities.
Trying to get the models to actually edit and move a file (i.e. exercise some of those functions) usually produced chat responses with instructions on how the functions should be called, or how I could handle it myself via DEVONthink's own capabilities. Again, only the qwen models actually performed the test actions.
In another case, I asked one of the models to help me create a prompt that would process an existing Markdown note (a series of tasks to myself) and prepend a number to each sentence. It again came up with a good breakdown of the actions to take. Unfortunately, when testing the models with that same prompt and test file, only the qwen models actually modified the file, and only qwen3:32b did it correctly. The others usually just printed the results in the chat. Most got the modified content right, but basically implied they couldn't make the function calls themselves.
I tried this with ChatGPT nano and it processed it perfectly.
I’m trying to determine whether what I’m seeing is due to the model, Ollama, or how DEVONthink handles online external AI vs. local external AI. I suspect it’s an Ollama-side issue, since at least two models were able to manipulate my files in tests, although all of them were able to read the selected file.
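One way I’m thinking of narrowing it down is to hit Ollama’s chat endpoint directly with a dummy tool definition and see whether a given model returns a structured `tool_calls` entry at all, taking DEVONthink out of the loop. A rough sketch of what I mean (assuming Ollama is running on its default port; the `rename_file` tool and the model names are just placeholders, not anything DEVONthink actually registers):

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"

# Dummy tool definition; it is never executed, we only check whether
# the model chooses to emit a structured call for it.
tools = [{
    "type": "function",
    "function": {
        "name": "rename_file",
        "description": "Rename a file to a new name",
        "parameters": {
            "type": "object",
            "properties": {
                "old_name": {"type": "string"},
                "new_name": {"type": "string"},
            },
            "required": ["old_name", "new_name"],
        },
    },
}]

def supports_tool_calls(model: str) -> bool:
    """Return True if the model responds with a tool_calls entry."""
    payload = {
        "model": model,
        "messages": [{"role": "user",
                      "content": "Rename notes.md to tasks.md using the tool."}],
        "tools": tools,
        "stream": False,
    }
    resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
    resp.raise_for_status()
    message = resp.json().get("message", {})
    return bool(message.get("tool_calls"))

for model in ["qwen3:8b", "qwen3:32b", "llama3.1:8b"]:
    print(model, "->",
          "tool call emitted" if supports_tool_calls(model) else "text only")
```

If a model only ever answers in plain text here, I'd expect it to behave the same way inside DEVONthink, which would point at the model rather than at DEVONthink's local AI path.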
Is there anything we should be aware of regarding function calls, etc., or differences in how the two external AI paths (online vs. local) are handled from the DEVONthink perspective?
If not, my next step may be to try LMStudio with the same models to see if there are any differences vs. Ollama.
Thanks!


