It seems “chat” isn’t aware of what document is active. How does chat establish a context? How can I add the current document to the chat’s context to ask questions, etc. I’m able to share the doc to BoltAI but am hoping DT4 has this built in.
Also, am I correct in assuming DT4 will not be like NotebookLM, where you can add several documents to then prompt against?
Don’t approach AI in DEVONthink with expectations from other apps, e.g., BoltAI. DEVONthink has not become “an AI application”. Access to external AI engines is added as a complement to our AI and the core functions of DEVONthink which remain unchanged – document and information management.
Have you read the Getting Started > AI Explained section of the Help? AI as it relates to DEVONthink is also threaded throughout the manual – see the Windows > Help Viewer section – and there are several places with practical examples like the one shown above.
For basic queries – for example, if you want DT4 to provide a separate summary of each selected document – you can simply select multiple documents and give your prompt.
In more complex situations – say, you want an overall integrated chronology but the documents exceed the context window of the LLM – you can write a script that first summarizes each document individually, then concatenates those summaries and runs a new query on the concatenated first-pass responses.
Regardless of the context window size of the LLM, if the level of detail of your desired response will exceed the maximum output tokens of the LLM, then multiple LLM passes over the concatenated responses may be needed.
Thus for simple cases DT4's AI is as simple as any LLM chatbot. But for complex or detailed queries, DT4 scripting essentially gives you the iterative power of LangChain with considerably less coding knowledge required than an actual LangChain setup.
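A minimal sketch of that two-pass approach, in Python rather than DT4's own scripting. The `chat` parameter is a hypothetical stand-in for whatever actually sends a prompt to your LLM (a DEVONthink scripting command, an API client, etc.) – it is not a DEVONthink API:

```python
# Two-pass "map-reduce" pattern: summarize each document individually,
# then ask the real question over the concatenated summaries.
def summarize_then_query(documents, final_prompt, chat):
    # Pass 1: one summary per document, each request well under the context window.
    summaries = [chat(f"Summarize this document:\n\n{doc}") for doc in documents]
    # Pass 2: run the actual query against the concatenated first-pass responses.
    combined = "\n\n".join(summaries)
    return chat(f"{final_prompt}\n\n{combined}")

# Toy stand-in "LLM" (returns the last line of the prompt) so the sketch
# runs without any API key:
def fake_chat(prompt):
    return prompt.splitlines()[-1]
```

If the concatenated summaries are themselves too long, the same function can be applied again to its own output – that is the iteration the scripting approach buys you.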
I have a vet’s test results selected and then give a prompt like “tell me about the current health of the dog”. I was hoping the chat would scope to only the selected doc, but maybe I need to define my prompt more specifically?
I wouldn’t make my judgements based on that or other “polls”. You can test certain engines and models to see what fits. And not all models from an engine respond in the same way, e.g., Claude Haiku versus Sonnet.
Excellent – it is surprising how much difference a change in prompt makes. Moreover, the “Role” can be as important as, if not more important than, the prompt in giving the LLM direction about the type of output you are seeking.
None of the models are going to amount to a meaningful cost for a few test queries.
I would suggest trying the query with each of the models, looking at the results, and then checking your API account for the cost.
You are likely to find that the measures used in the media to report the “performance” of LLM models vary considerably by actual use case.
Moreover, when seeking factual data like this, it is often worthwhile to try to minimize hallucination by adding to the prompt: “Only state facts you find in the document. If you do not know then say so. Do not make things up.”
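If you end up scripting this, that grounding instruction can simply be appended to whatever question you ask. A hypothetical Python helper (not a DEVONthink feature):

```python
# Hypothetical helper: append a grounding instruction to any question
# before sending it to the LLM, to discourage invented "facts".
GROUNDING = ("Only state facts you find in the document. "
             "If you do not know then say so. Do not make things up.")

def grounded_prompt(question):
    return f"{question}\n\n{GROUNDING}"
```

The same text could equally go into the Role/system message instead of each prompt.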