DT4 - AI Chat - Memory and Context Questions and Workflow

I’m trying to understand how to save an AI chat so I can revisit it later with follow-up questions.

I see I can save a chat to a preferred file format (RTF, Markdown), but will that file be used as context for follow-up questions with the LLM service?

Assuming my chat is about referenced documents in DT4, will that document context also persist if I remove the documents from the chat window?

I’m looking for a way to store and return to prior chats that doesn’t require re-processing the same documents and asking the same questions again.

BTW - I understand LLM services vary in memory and context limits, so I’m looking more for a reference workflow suited to DT4’s integration with various services.

Thanks for any help you can provide. I don’t see this covered in the AI docs unless I missed it.

You didn’t miss anything, as chats are currently not stored, even on a per-document basis. It’s actually the first request we’ve seen.

“I understand LLM services vary in memory and context limits”

Commercial AI services provide much larger context windows and better performance. Which service are you trying to use?

You could do quite a bit in that regard with scripting.

@rkaplan – can you elaborate or point me to resources that would help me develop these scripts?

Well, there is the Script Assistant under Data > New from Template > AI - for simple cases it will write the script for you.

There are also a number of examples in the DT4 scripting dictionary and documentation.

If you start with those and get stuck, you will probably get lots of help here in the forum by posting your first draft of a script and asking for suggestions to fix or improve it.

And Claude 3.7 or Claude 4 (within DT4 or directly in the Anthropic web app) can do a surprisingly good job at debugging AppleScript. It’s not natively familiar with DT4 yet, though - so to get it to write DT4 scripts, it helps to copy/paste the DT4 scripting dictionary into Claude.
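As a starting point, here is a rough sketch of a script that saves a chat transcript as a Markdown record in the current group, so it can be re-opened (and pasted back into a new chat) later. This assumes DT4 keeps DEVONthink’s standard `create record with` scripting command; the transcript text is a placeholder you would fill in yourself or via further scripting.

```applescript
-- Sketch only: store an AI chat transcript for later reference.
-- Assumes the DEVONthink "create record with" command is available in DT4.
tell application id "DNtp"
	-- Placeholder transcript; replace with the actual chat text.
	set chatTranscript to "## AI Chat - " & ((current date) as string) & return & "(paste the chat transcript here)"
	-- Create a Markdown record in the currently selected group.
	set newRecord to create record with {name:"AI Chat " & ((current date) as string), type:markdown, content:chatTranscript} in current group
end tell
```

From there you could extend the sketch to also link the referenced documents (e.g. by inserting their item links into the transcript), so a later chat can be given the same context without re-processing.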

Perfect. Thank you and I’ll start digging into it.