Font sizes and some other AI observations

I love the new changes in DEVONthink 4, very happy to see things moving in the AI direction!

Some things I’ve noticed from the first few minutes of playing around:

  • Is there any way to make the font size larger in the AI chat windows? At my age, and on my retina screen laptop, the text is too tiny for me to be able to read comfortably. I have to copy and paste into another editor to see what the AI is telling me.
  • If I open the chat window using the button in the upper right, then Command-Tab to switch to another application (because my local LLM is slow, and I want to do something else while it thinks), it seems that it aborts the chat submission. When I go back and re-click the chat tab, I see my question, but there is no response.
  • Once “background Chat” is enabled so that I don’t have to wait for the response, it would be nice if the chat icon changed color to indicate whether the model is still thinking (red/orange/yellow) or done, so that I know when to click and read the response.
  • In addition to ollama and LM Studio, I also use llama-cpp and mistral.rs. To support the latter two (both of which speak the OpenAI API), the model name list needs to be repopulated by calling /v1/models whenever the endpoint URL changes.
  • It would be great if, in the chat window, there were icons that allowed me to select whether the model should search the web or search the database (or both), so that I can very easily choose when my query should only consider the database.
  • Of course, a future feature request: the ability to add tools using the MCP protocol, and the ability to start a vector database in the background (like Qdrant) as well as a text embedding model and splitting scheme, so that I can create vector embeddings of semantic chunks of all documents in the database, and then perform RAG queries in the chat window in addition to the current tool-assisted searches via the web server.
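To illustrate the /v1/models point above: populating that list is just a GET against the endpoint and a walk over the returned IDs. A minimal sketch (the URL and model names here are made up, not from any particular server):

```python
import json

def model_names(models_json: str) -> list[str]:
    """Extract model IDs from an OpenAI-style /v1/models response."""
    payload = json.loads(models_json)
    return [m["id"] for m in payload.get("data", [])]

# Fetching would look roughly like (endpoint URL is an example):
#   from urllib.request import urlopen
#   with urlopen("http://localhost:8080/v1/models") as r:
#       names = model_names(r.read().decode())

# A response in the shape the OpenAI API defines:
sample = '{"object": "list", "data": [{"id": "llama-3-8b"}, {"id": "mistral-7b"}]}'
print(model_names(sample))  # ['llama-3-8b', 'mistral-7b']
```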

Glad to hear it, but remember: DEVONthink 4 is not (nor will it become) “an AI application”.

No, the font size cannot be changed and development is looking into the issue with the persistence of the Chat popover.

Requests are noted, with no promises.
Did you read this…

And most importantly, the Getting Started > AI Explained section of the help.

Yes, I read the README first thing.

If the font size cannot be changed, that’s really too bad. It makes the chat feature unusable for me, at least, since I just can’t read text that tiny in any effective way.

I get that DT isn’t an “AI application”. I like that it focuses on being a really great way to store and manage huge quantities of data assets. This is my primary use for it. It just feels SO close to incorporating the best of AI when it comes to dealing with such data sets. For example, you could take the ingestion part of a RAG pipeline, and do nothing more than store vector embeddings in a local Qdrant instance. This would open the door to fully semantic document search, without even involving an LLM for chat — maybe just a tiny model for the embedding work.
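To make the splitting-scheme part of that concrete, here is a deliberately naive sketch; a real pipeline would split on sentence or paragraph boundaries, and the Qdrant upsert is only hinted at in comments (collection name and embed function are placeholders):

```python
def split_chunks(text: str, size: int = 400, overlap: int = 50) -> list[str]:
    """Naive fixed-size splitter with overlap between adjacent chunks."""
    chunks, start, step = [], 0, size - overlap
    while start < len(text):
        chunks.append(text[start:start + size])
        start += step
    return chunks

# Ingestion would then embed each chunk and upsert it into a local
# Qdrant instance, roughly:
#   client = qdrant_client.QdrantClient(url="http://localhost:6333")
#   client.upsert("devonthink", points=[
#       PointStruct(id=i, vector=embed(c), payload={"text": c})
#       for i, c in enumerate(split_chunks(doc_text))])

print(split_chunks("abcdefghij", size=4, overlap=1))  # ['abcd', 'defg', 'ghij', 'j']
```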

Fortunately, though, I don’t really need DT for this. Since the files are present on disk, I can just index Files.noindex myself and do my own semantic queries over the same dataset, which I know Elephas is also doing. It would just be cool to integrate this into DT, because then I could mix semantic search with the existing metadata and content search, to be able to iteratively refine the results in a unified interface.
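For the curious, the heart of such a semantic query is just nearest-neighbor search over embedding vectors; a brute-force toy sketch with made-up 2-D vectors (a vector database does the same thing at scale):

```python
from math import sqrt

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, doc_vecs, k=3):
    """Indices of the k document embeddings closest to the query embedding."""
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:k]

# With real data, doc_vecs would come from an embedding model run over
# each indexed file; here, toy 2-D vectors:
print(top_k([1.0, 0.0], [[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]], k=2))  # [1, 2]
```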

The value of the “chatting with my documents” feature is a little less obvious. In fact, I think I’d prefer that DT just ship with an MCP tool plugin that talks to DT databases over the web interface, and then I could use that tool with existing AI tools and models, obviating the need for “in-app” chat features altogether. Then I wouldn’t run into font or async issues, and each app could do what it does best.

Thanks for noting the features. I’ve been a pretty hard-core DT user for many, many years now, with tens of thousands of documents indexed, multiple databases, and >100 GB of data being managed by DT. It’s at the center of my information life, to be sure.

Oh, and a tiny side request since I’m already here: provide an info pane that shows just the locations of replicates and duplicates in a list, so that I don’t have to expand the “Instances” dropdown in the generic Info tab. kthxbye!

John

The next beta will add a button to Settings > AI > Chat to refresh the list of models.

An upcoming beta will either add a new setting or simply use Settings > General > Appearance > View Text Size.

“Early cataracts” interfere with my vision, so I empathize with your dilemma.

While not a panacea, I’ve found that the options in Settings > Accessibility > Zoom help a lot when I just can’t make out the text on the screen. I find the Control-Scroll gesture to be especially handy.

Hope this helps.

I basically had the same question / proposal as part of my feedback: treat the scope of the AI assistant (database, web, Wikipedia, …) similarly to the choice of model, and bring it to the front of the UI.

I have been looking for a reliable way of doing this kind of semantic query. Indexing DT’s database would be great. Could you elaborate on how you are doing this?

I have also found that recent agentic tools like Gemini’s “Deep Research with 2.5 Pro” work phenomenally with files indexed by Google. Are you aware of any workarounds that would bring this kind of agentic search to local files?

@jonmoore You might also have something to say here!

Thanks!

I’m using the LlamaIndex library, and writing Python code to do the indexing and then use it with a local LLM: GitHub - jwiegley/rag-agent: Some exploratory code for playing with RAG and agentic framework

The second beta will fix this.

DEVONthink is brilliant. I do think, however, that the developers have super X-ray eyesight: all the font size choices are always too small for me. I have managed to increase the fonts in Settings. My gripe in DT4 is the Help section… it has a tiny font size and there doesn’t seem to be any way to increase it.

Pinch to magnify and, as has been discussed already, zoom buttons and shortcuts are coming in beta 2.

I’ve been doing a lot of further work with LlamaIndex, and now have a solution that I think can work pretty well. The new Python project is at GitHub - jwiegley/rag-client: A client program for vectorizing documents and performing similarity queries. Once installed using pip, these are the steps:

  1. Create a config.yaml file that provides options giving your Postgres and local LLM settings.
  2. Run rag-client --config config.yaml --from <dtBase2/Files.noindex> index.
  3. Once this completes (and it will be slow), run rag-client --config config.yaml --port 9000 serve.

You will now have a local AI running at port 9000 that presents an OpenAI interface that you can talk to with DEVONthink. It “wraps” the underlying LLM that you specify in config.yaml, so what you’re now conversing with is that LLM, enriched with the contents of your DEVONthink database.
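Since it presents a standard OpenAI-style endpoint, anything that can POST JSON can talk to it, not just DEVONthink. A minimal sketch of the request and response shapes (the model name and the URL in the comments are just examples):

```python
import json

def chat_body(model: str, question: str) -> dict:
    """Request body for an OpenAI-style POST /v1/chat/completions."""
    return {"model": model, "messages": [{"role": "user", "content": question}]}

def extract_answer(response_json: str) -> str:
    """Pull the assistant's reply out of the response payload."""
    return json.loads(response_json)["choices"][0]["message"]["content"]

# POSTing json.dumps(chat_body("my-model", "What did I write about X?"))
# to http://localhost:9000/v1/chat/completions (e.g. with urllib.request)
# returns a payload shaped like:
sample = '{"choices": [{"message": {"role": "assistant", "content": "..."}}]}'
print(extract_answer(sample))
```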

This utility in effect provides some of what Elephas does, just in a very manual, command-line, open-source way. The documentation has a long way to go, and I keep improving it daily, but this is the approach I’m taking to build a query engine on top of my DEVONthink data, one that is free and can scale to hundreds of gigabytes of data.