Connecting to any OpenAI-compatible server?

Hello, I use LiteLLM as a local proxy to all of the various AI services that I use, for a few purposes:

  1. A single OpenAI-compatible API, normalized across all services regardless of the backend
  2. Cost accounting per LiteLLM key
  3. Full logging of all requests and responses for debugging and archival purposes

To use this service, I point my AI clients at my local OpenAI-compatible endpoint (http://vulcan/litellm/v1/chat/completions) and supply an API key recognized by LiteLLM.
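For reference, here is a minimal sketch of how my other clients talk to the proxy using the OpenAI Python SDK. The base URL is the endpoint above minus the /chat/completions path (the SDK appends that itself), and the API key is just a placeholder for a LiteLLM-issued key:

```python
# Minimal sketch: pointing a standard OpenAI client at a local LiteLLM proxy.
from openai import OpenAI

client = OpenAI(
    base_url="http://vulcan/litellm/v1",  # local LiteLLM proxy, without /chat/completions
    api_key="sk-litellm-example-key",     # placeholder for a key recognized by LiteLLM
)

response = client.chat.completions.create(
    model="gpt-4o",  # whichever model the proxy routes this to
    messages=[{"role": "user", "content": "Hello from behind the proxy"}],
)
print(response.choices[0].message.content)
```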

It seems I can specify a custom OpenAI-compatible endpoint using the GPT4All option, but that option does not let me supply an API key; conversely, the OpenAI option does not let me specify a local endpoint.

Is there any way DEVONthink could give me more control over the request parameters, including headers and fields in the request body? Most AI client software allows this, and it lets me attach LiteLLM “sessions” and “tags” for even better accounting (see the sketch below).
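This is roughly what that extra control looks like in a scriptable client. The extra_headers/extra_body hooks are standard in the OpenAI Python SDK; the LiteLLM-specific key names below (the x-litellm-tags header, metadata, tags, session_id) are only illustrative assumptions on my part, not confirmed field names, so check the LiteLLM proxy docs for the exact keys:

```python
# Sketch: passing extra headers and body fields alongside a chat request,
# the way most AI client software allows.
from openai import OpenAI

client = OpenAI(base_url="http://vulcan/litellm/v1", api_key="sk-litellm-example-key")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize this note"}],
    extra_headers={"x-litellm-tags": "devonthink"},        # hypothetical tagging header
    extra_body={"metadata": {"tags": ["devonthink"],
                             "session_id": "archive-2024"}},  # hypothetical body fields
)
print(response.choices[0].message.content)
```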

I would imagine such an option could live under a “Custom” AI backend type in DEVONthink, or appear only when holding down Option while selecting a backend. I realize my situation may be a bit rare, but this is the first time I’ve run into software that wouldn’t let me use the LiteLLM proxy.

Thanks,
John

DEVONthink isn’t just AI client software; there’s more for us to consider than bespoke AI applications and aggregators. DEVONthink houses volumes of people’s information, so we have to ensure performance, data safety, privacy, and correct functioning within our app. It’s not as simple as adding an endpoint.

Development is investigating some things, so the request is noted, with no promises.


Sure, just thought I’d let you know what other AI client software is doing. And no rush: I have other ways to make my DT data available to LiteLLM models, so having DEVONthink talk to AI itself isn’t necessary. When more features and customizability arrive, I’ll try it!
