I am building some web apps that use AI, so I am exploring AI tools beyond DT4 for those use cases.
One of the most useful features I have found in both n8n and LangChain is the option to specify a “Fallback LLM.” The basic idea is that you designate an LLM for an AI query, but if that query fails, then instead of the app erroring out, it automatically re-runs the query against a designated fallback LLM.
This is very useful, for example, when I prefer to use Claude 4 but might occasionally exceed its context window; I don’t need to predict when that will happen, because the workflow automatically switches to GPT or Gemini in that situation.
Or perhaps my preferred LLM is overloaded temporarily.
Or perhaps I let my API account balance get too low.
I have found this particularly useful for very long scripts/workflows, where it is frustrating to have one run for an extended period only to fail shortly before the end, forcing the whole thing to be repeated; the fallback node can often save the day.
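For reference, this is roughly how LangChain exposes the idea. Just a minimal sketch: the model names, the prompt, and the assumption that API keys are already set in the environment are all placeholders on my part.

```python
# Minimal sketch of a fallback LLM in LangChain (illustrative model names).
from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI

primary = ChatAnthropic(model="claude-sonnet-4-20250514")  # preferred model
fallback = ChatOpenAI(model="gpt-4o")                      # used only if the primary raises

# with_fallbacks() wraps the primary model so the same input is retried
# against the fallback when the primary fails (rate limit, overload,
# context overflow, etc.).
llm = primary.with_fallbacks([fallback])

response = llm.invoke("Summarize this document ...")
print(response.content)
```

The nice part is that the calling code stays exactly the same; the fallback logic lives entirely in that one wrapper.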
Any chance of adding such a feature to DT4?