DT4: What do the icons connected to AI models really mean?

There are these icons connected to the AI models (in Settings) and thanks to the wonderful manual (thanks Jim @BLUEFROG , a really nice job) I understand what they stand for.

But, as I’m not so familiar with AI tools, I would like to know what e.g. “reasoning” or “tooling” really mean. What kinds of tasks do these terms stand for? I need AI for summarising scientific papers (as PDFs) and answering questions based upon these papers. Do I need “reasoning” or “tooling”?

For very basic questions which are answered clearly in the paper you need neither reasoning nor tooling.

For more complex questions - or especially if you are asking a nuanced question where different viewpoints will be present in the same PDF or where you are asking AI to compare, contrast, or otherwise integrate the views of multiple PDFs - reasoning would be very helpful.

You probably do not need tooling unless you also want AI to search for additional or supporting or opposing viewpoints in other academic journals on the web.


Thanks rkaplan, that helps. Is it possible to do this with multiple PDFs in DT4? I thought not?

Yes, you can select multiple PDFs and ask DT4 to compare, contrast, or summarize them - but the total size of the files plus the expected result needs to fit within the context window of the LLM.

If you exceed the context window size, then you need to either switch to an LLM with a larger context window (if one exists) or write a script with AppleScript or JXA.
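To illustrate what such a script would do - this is a hypothetical Python sketch of the general "split, summarize, combine" approach, not DT4's actual scripting API, and `summarize` is a stand-in for a real LLM call:

```python
def chunk_text(text: str, max_chars: int) -> list[str]:
    """Split text into pieces no longer than max_chars, on word boundaries."""
    words = text.split()
    chunks, current = [], ""
    for word in words:
        candidate = f"{current} {word}".strip()
        if len(candidate) > max_chars and current:
            chunks.append(current)
            current = word
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

def summarize(text: str) -> str:
    # Placeholder: a real script would send `text` to the LLM here.
    return text[:100]

def summarize_long_document(text: str, max_chars: int) -> str:
    """Summarize each chunk separately, then summarize the summaries."""
    chunks = chunk_text(text, max_chars)
    partial_summaries = [summarize(c) for c in chunks]
    return summarize(" ".join(partial_summaries))
```

Each chunk stays under the window on its own, so no single request exceeds the model's limit.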


I’m not sure whether I understand this right. What is the “context window of the LLM” - is it the chat window in DT4, where the answer/result is shown?

The context window is the total size of your chat session’s prompts and responses. It is typically measured in “tokens”, where one token is about 4 characters, or roughly 3/4 of a word.
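The 4-characters-per-token rule of thumb makes it easy to estimate whether a document will fit. A rough back-of-the-envelope sketch (real tokenizers vary by model, so treat these numbers as estimates only):

```python
def estimate_tokens(text: str) -> int:
    """Rough estimate: ~4 characters per token (varies by tokenizer)."""
    return len(text) // 4

def fits_context_window(text: str, window_tokens: int) -> bool:
    """Check whether the text alone stays under the window; a real check
    would also leave headroom for the prompt and the model's response."""
    return estimate_tokens(text) < window_tokens

# e.g. a 200,000-character paper is roughly 50,000 tokens
paper_tokens = estimate_tokens("x" * 200_000)
```

By this estimate, a 200,000-character paper fits a 128K-token window comfortably but would overflow an 8K one.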

Typical context window sizes:

GPT-4 Turbo - 128K

GPT-4 - 8K

GPT-3.5 Turbo - 16K

GPT-4o - 128K

Gemini 1.5 Pro/Flash - 1 million

Claude 3.5 / 3.7 - 200K

Mistral Large - 128K

Llama 2 - 4K

Another factor is the maximum number of output tokens per query. Output tokens consume part of your context window, but they are essential for a detailed response:

GPT-4 - 4K

GPT-3.5 Turbo 16K - 10K

Claude 3.5 - 8K

Claude 3.7 - 128K [you generally need to prompt it specifically to get output this detailed]

Llama 2 - 4K

Gemini 1.5 - 8K

Llama 4 - 10 million tokens (context window, according to the announcement)
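Putting the two lists together: since output tokens count against the same window, a query only fits if the prompt plus the expected response stay under both limits. A small sketch (the figures in the comment are illustrative, taken from the lists above):

```python
def query_fits(prompt_tokens: int, expected_output_tokens: int,
               context_window: int, max_output: int) -> bool:
    """A query fits only if the response stays under the model's output cap
    AND prompt + response together stay within the context window."""
    if expected_output_tokens > max_output:
        return False
    return prompt_tokens + expected_output_tokens <= context_window

# e.g. a 120K-token prompt expecting a 4K response fits GPT-4 Turbo
# (128K window, 4K output cap) but not GPT-4 (8K window, 4K output cap).
```

This is why a long chat session can eventually "forget" earlier context: every exchange eats into the same budget.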


Did you read the Getting Started > AI Explained section of the help?

Apparently not thoroughly enough.

Obviously, this isn’t going to be “The Compleat Guide to External AI” :wink: but did I provide enough information to be clear and helpful?


I’m thinking of using some of my AI credits to produce summaries of the DT manual (although maybe it’s already in the most verbally economical form possible). :zany_face:

My friend, I assure you it is not :wink: There’s one specific subsection I will revise before the gold master because it feels like I was droning on and on. :stuck_out_tongue:
