I love the new changes in DEVONthink 4, very happy to see things moving in the AI direction!
Some things I’ve noticed from the first few minutes of playing around:
- Is there any way to increase the font size in the AI chat windows? At my age, and on my Retina laptop screen, the text is too small to read comfortably; I have to copy and paste it into another editor to see what the AI is telling me.
- If I open the chat window using the button in the upper right, then Command-Tab to another application (my local LLM is slow, and I want to do something else while it thinks), the chat submission appears to be aborted: when I come back and re-click the chat tab, my question is there, but there is no response.
- Once “background chat” is enabled so that I don’t have to wait for the response, it would be nice if the chat icon changed color to indicate whether the model is still thinking (red/orange/yellow) or has finished, so I know when to click and read the response.
- Although I use both ollama and LM Studio, I also use llama-cpp and mistral.rs. To support the latter two (both of which expose the OpenAI-compatible API), the model name list needs to be populated by calling `/v1/models` whenever the endpoint URL is changed.
- It would be great if, in the chat window, there were icons for choosing whether the model should search the web, the database, or both, so that I can easily restrict a query to the database alone.
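For anyone unfamiliar with that endpoint: an OpenAI-compatible server answers `GET /v1/models` with a JSON list under `data`, each entry carrying an `id` a client can show in its model picker. A minimal sketch of the parsing side (the sample payload and model names below are illustrative, not from any particular server):

```python
import json

# Illustrative response body from GET <endpoint>/v1/models on an
# OpenAI-compatible server (llama-cpp, mistral.rs, LM Studio, etc.).
sample = '''
{
  "object": "list",
  "data": [
    {"id": "mistral-7b-instruct", "object": "model"},
    {"id": "qwen2.5-coder-7b-instruct", "object": "model"}
  ]
}
'''

def model_ids(body: str) -> list[str]:
    """Extract the model names a client should offer in its picker."""
    return [entry["id"] for entry in json.loads(body)["data"]]

print(model_ids(sample))  # one entry per "data" item
```

Re-fetching this list whenever the endpoint URL changes is all that’s needed to keep the picker in sync.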
- Of course, a future feature request: the ability to add tools via the MCP protocol, and the ability to run a vector database (such as Qdrant) in the background along with a text embedding model and a splitting scheme. That would let me create vector embeddings of semantic chunks of every document in the database, and then perform RAG queries in the chat window in addition to the current tool-assisted searches via the web server.
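On the splitting-scheme part of that request: the simplest baseline is fixed-size windows with overlap, which real semantic chunkers refine by cutting on sentence or section boundaries instead. A minimal sketch (the window and overlap sizes are illustrative, not recommendations):

```python
def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size windows that overlap by `overlap`
    characters, so no sentence is lost at a chunk boundary. Each chunk
    would then be embedded and stored in the vector database."""
    step = size - overlap
    return [text[i:i + size]
            for i in range(0, max(len(text) - overlap, 1), step)]
```

Each chunk shares its first `overlap` characters with the tail of the previous one, which is what keeps context intact across boundaries when the embeddings are queried later.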