I asked ChatGPT to generate a search string to “search for files with size > 1 mb”.

The result is always this:
# expected: size >1mb
size:>1mb
I’m not sure if this is a “fixable” problem or just the “nature” of using AI for this.
size>1MB

Which OpenAI model is it actually?
4o according to the screen cap.
Didn’t have a coffee yet.
Sure. I was tired of searching the handbook for the correct syntax and tried to use the new AI chat as a replacement.
I’m too impatient to use it on a regular basis to write the search syntax.
What I miss is “direct” access to the search settings without searching first. This would speed up building a more complex search query. Searching first for something (else) feels counterintuitive to get access to the cog wheel.
Put your cursor in the search field and press Return.
And just in case someone’s going to ask: support for GPT 4.1 (including nano & mini) was already added yesterday to the next beta.
The next beta will support this syntax too.
I have done this. The AI button is greyed out if I am using the LM Studio models. Is this a feature?
In case of local engines this depends on the model. Which one do you use? E.g. Gemma 3 or Mistral Small 3.2 are quite useful.
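For context, LM Studio serves local models such as Gemma 3 through an OpenAI-compatible HTTP API (by default at `http://localhost:1234/v1`). A minimal sketch of how a client could assemble a chat request for such a local model; the endpoint URL and the model identifier are assumptions for illustration, not DEVONthink’s actual implementation:

```python
import json

# Assumed LM Studio default endpoint (OpenAI-compatible server).
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Assemble an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# Hypothetical model name as listed by LM Studio.
payload = build_chat_request(
    "google/gemma-3-27b",
    "Write a search string for files with size > 1 MB",
)
print(json.dumps(payload, indent=2))
```

Any OpenAI-compatible client can then POST this payload to the local endpoint, which is why switching between a cloud model and a local one is mostly a matter of changing the base URL and model name.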
Thanks for the hints on the models. I’d like more recommendations, by the way, on which models could be useful for which tasks.
I have tried it now with the new Gemma 3n, but it’s still grayed out. Same for google/gemma-3-27b. If I switch to ChatGPT, for example, the AI button becomes available. I’m on a MacBook Air M4 32 GB, macOS 15.5, DEVONthink 4.0.1.
A screenshot of the Models popup in Settings > AI > Chat would be useful, thanks.
Thanks! A screenshot showing the opened Model popup would be great too.
The search assistant should actually be available for both Gemma 3 models in your screenshot; e.g., I just tried it successfully, although with a slightly different version of Gemma 3:
Grayed out for me. Anything else I need to adjust in the settings?
Forgot to mention that a context window of at least 8k tokens is required, sorry. In the case of LM Studio, this has to be changed both in DEVONthink’s settings and in LM Studio’s model settings.
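To get a feel for what the 8k-token requirement means in practice, here is a rough sanity check. The four-characters-per-token ratio is only a common heuristic for English text, not a real tokenizer, and the reserved-reply budget is an arbitrary assumption:

```python
# Context window required by the search assistant (per the post above).
CONTEXT_WINDOW = 8192

def approx_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, reserved_for_reply: int = 1024) -> bool:
    """Check whether prompt plus an assumed reply budget fits the window."""
    return approx_tokens(prompt) + reserved_for_reply <= CONTEXT_WINDOW

print(fits_in_context("search for files with size > 1 mb"))
```

If the model is configured with a smaller window in LM Studio, even short prompts can fail once the assistant’s own instructions are added, which may explain a grayed-out button.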
Have you activated the option to allow external access in LM Studio? I don’t have it open just now, but if you don’t enable it (and have the server running), the Model popup is grayed out.