There are these icons connected to the AI models (in Settings), and thanks to the wonderful manual (thanks Jim @BLUEFROG, a really nice job) I understand what they stand for.
But as I’m not so familiar with AI tools, I would like to know what e.g. “reasoning” or “tooling” really mean. What kinds of tasks do these terms stand for? I need AI for summarising scientific papers (as PDFs) and answering questions based on these papers. Do I need “reasoning” or “tooling”?
For very basic questions that are answered clearly in the paper, you need neither reasoning nor tooling.
For more complex questions - especially nuanced questions where different viewpoints are present in the same PDF, or where you are asking the AI to compare, contrast, or otherwise integrate the views of multiple PDFs - reasoning would be very helpful.
You probably do not need tooling unless you also want the AI to search the web for additional supporting or opposing viewpoints in other academic journals.
Yes, you can select multiple PDFs and ask DT4 to compare, contrast, summarize them, etc. - but the total size of the files and the expected result need to fit within the context window size of the LLM.
If you exceed the context window size then you need to either switch to an LLM with a larger context window (if one exists) or write a script with AppleScript or JXA.
The context window is the total size of your chat session prompts and responses. It is typically measured in “tokens”, where 1 token is about 4 characters, or roughly 3/4 of a word.
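To make the scripting idea above a little more concrete: the usual workaround is to split a document’s plain text into chunks that each fit the model’s window, send each chunk separately, and then combine the per-chunk answers. Here is a minimal sketch in plain JavaScript (the language JXA is based on), using the 4-characters-per-token rule of thumb. It deliberately stays away from DEVONthink’s scripting dictionary, and the function names and numbers are just illustrative:

```javascript
// Rule of thumb from above: 1 token is roughly 4 characters.
// Real tokenizers differ per model, so treat this as an estimate only.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Split a long text into chunks that each stay under a token budget, so that
// each chunk (plus your question) can be sent in its own request and the
// per-chunk answers merged afterwards. Splitting on blank lines keeps
// paragraphs intact; a single oversized paragraph still ends up as its own chunk.
function splitIntoChunks(text, maxTokensPerChunk) {
  const chunks = [];
  let current = "";
  for (const paragraph of text.split("\n\n")) {
    if (current && estimateTokens(current + paragraph) > maxTokensPerChunk) {
      chunks.push(current);
      current = "";
    }
    current += paragraph + "\n\n";
  }
  if (current.trim()) chunks.push(current);
  return chunks;
}

// Example: break a paper's plain text into pieces for a 16K-token window,
// leaving roughly 4K tokens of headroom for the question and the answer:
// const pieces = splitIntoChunks(paperText, 12000);
```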
Typical context window sizes:
GPT-4 Turbo - 128K
GPT-4 - 8K
GPT-3.5 Turbo - 16K
GPT-4o - 128K
Gemini 1.5 Pro/Flash - 1 Million
Claude 3.5 / 3.7 - 200K
Mistral Large - 128K
Llama 2 - 4K
Another factor is the maximum number of output tokens per query. Output tokens consume part of your context window, but they are essential for a detailed response:
GPT-4 - 4K
GPT-3.5 Turbo 16K - 10K
Claude 3.5 - 8K
Claude 3.7 - 128K [You generally need to prompt it to get output this detailed]
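Putting the two lists together, a quick sanity check before picking a model is to estimate the input tokens and reserve room for the expected output up front. A rough sketch in the same vein, with made-up numbers and the same 4-characters-per-token approximation:

```javascript
// Will the documents, the prompt, and the expected answer fit in a model's window?
// windowTokens and reservedOutputTokens come from lists like the ones above;
// the 4-characters-per-token estimate is the same rule of thumb as before.
function fitsInWindow(totalInputChars, windowTokens, reservedOutputTokens) {
  const inputTokens = Math.ceil(totalInputChars / 4);
  return inputTokens + reservedOutputTokens <= windowTokens;
}

// Example: two PDFs of roughly 60,000 characters each (about 30,000 input tokens)
// with 4K tokens reserved for the answer:
fitsInWindow(2 * 60000, 128000, 4000); // true  - fits comfortably in a 128K window
fitsInWindow(2 * 60000, 8000, 4000);   // false - far too big for an 8K window
```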
I’m thinking of using some of my AI credits to produce summaries of the DT manual (although maybe it’s already in the most verbally economical form possible).