Future of AI-Integration?

First off, I really like DT!

I truly appreciate the initial steps taken towards AI integration in DT. My ultimate wish would be for the AI to be trained with my own documents, so it can answer my questions faster and more accurately.

At the moment, selecting a document and asking the AI something about it is simply too slow for me (especially when using LM Studio). I’d love to see a functionality similar to NotebookLM directly within DT. However, the currently limited quality of NotebookLM itself shows how challenging this task is right now.

So, my question is: Do you share this same “ultimate” goal for AI integration in DT, or is your roadmap heading in a completely different direction?


Which model and what kind of Mac do you use? And how large are your documents (typical number of words/pages)?

Our roadmap has many goals (AI is just one of them) but changes every now and then, depending on feedback, technical feasibility and available time. That's why we usually refrain from talking about future versions and features, I'm sorry.

The database contains approximately 1,000 documents with about five to ten pages per document. Some are larger, with 30-40 pages. I use a MacBook Pro with an M1 Max.

I will, of course, stick to DEVONthink, even if the developments on your roadmap come as a surprise :wink:


How much RAM?

RAM is 32 GB.

And which model? Did you increase the context window or use the default of 4k tokens?

I tried Gemma 3 27B and Mistral Small 3.2.
I did not change any parameters in DT.

Both of these are more or less the maximum an M1 Max with 32 GB of RAM can handle. Gemma 3:12b should be faster and doesn't require all the RAM.
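To see why a 27B model pushes a 32 GB machine, here is a rough back-of-the-envelope estimate. It assumes 4-bit quantization (common for local models in LM Studio) and an arbitrary ~20% overhead factor for the KV cache and runtime; both figures are assumptions for illustration, not measured values.

```python
def model_ram_gb(params_billions: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    """Approximate RAM needed to run a quantized LLM locally.

    Weights only: params * bits / 8 gives bytes, treated as GB for
    billions of parameters. The overhead factor (~20%, an assumption)
    covers KV cache and runtime buffers.
    """
    return params_billions * bits_per_weight / 8 * overhead

# Gemma 3 27B at 4-bit quantization:
print(round(model_ram_gb(27, 4), 1))  # 16.2 (GB)

# Gemma 3 12B at 4-bit quantization:
print(round(model_ram_gb(12, 4), 1))  # 7.2 (GB)
```

With macOS, DEVONthink and other apps also competing for unified memory, a ~16 GB model leaves little headroom on a 32 GB Mac, while the 12B variant fits comfortably.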