Locally installed language models / AI and DEVONthink

Several threads on the DT forum discuss OpenAI and GPT, often sparking controversial debates, particularly concerning privacy issues around data being transmitted to OpenAI's servers. Has anyone experimented with locally installed language models, such as those available at https://lmstudio.ai/, where all data remains on your computer? I understand these models perform well on M1/M2 MacBooks. Additionally, has anyone integrated them into a DEVONthink workflow? I remain somewhat skeptical due to the inaccuracies these models can generate, but I am also open to exploring their use.
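For context, LM Studio can run a local server that mimics the OpenAI chat completions API (by default at http://localhost:1234/v1), so nothing leaves the machine. This is the kind of wiring I had in mind — a minimal sketch, assuming that server is running with a model already loaded; the model string and prompt are placeholders:

```python
# Minimal sketch: query a local LM Studio server (OpenAI-compatible API).
# Assumes LM Studio's local server is running on its default port 1234
# with a model loaded; no data is sent off the machine.
import json
import urllib.request

def ask_local_model(prompt: str) -> str:
    payload = {
        "model": "local-model",  # placeholder; LM Studio serves the loaded model
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    req = urllib.request.Request(
        "http://localhost:1234/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_local_model("Summarize the idea of local LLM inference in two sentences."))
```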

Unless you’re referring to very limited performance and narrow scopes of inquiry, they’re not useful. In fact, some will even prompt with “Hey, I could give you even better results if you use online resources.”

Given that LLMs consume entire datacenters’ worth of resources, I am highly skeptical of any product that claims to offer general-purpose LLM inference on a local computer.

Agreed! I’ve had a play with Ollama and some of the available models, but they’re only casually interesting, not very productive IMHO.
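That said, wiring Ollama into a document workflow takes only a few lines, since it exposes a local HTTP API (default port 11434). A sketch, assuming the model named below has already been pulled (`ollama pull llama3`); the file path is a placeholder:

```python
# Sketch: summarize a local document with Ollama's HTTP API.
# Assumes Ollama is running locally (default port 11434) and the
# model below has been pulled with "ollama pull llama3".
import json
import urllib.request
from pathlib import Path

def summarize_file(path: str, model: str = "llama3") -> str:
    text = Path(path).expanduser().read_text(encoding="utf-8")
    payload = {
        "model": model,
        "prompt": f"Summarize the following document in three bullet points:\n\n{text}",
        "stream": False,  # return one JSON object instead of a token stream
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Example (hypothetical path):
# print(summarize_file("~/Documents/indexed/notes.md"))
```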

I’ve been experimenting with Anything LLM and locally stored documents (which are indexed by DEVONthink). I am getting surprisingly good results. It would be SO cool to be able to connect the DT database to Anything LLM directly. In the meantime, adding documents manually is an acceptable workaround.
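Until direct integration exists, one low-tech bridge is to let both apps see the same files: since DEVONthink indexes folders in place, plain-text copies can be staged into whatever folder Anything LLM ingests from. A rough sketch — both paths below are placeholder assumptions for your own setup:

```python
# Sketch: stage plain-text/Markdown files from a DEVONthink-indexed
# folder into a folder that Anything LLM ingests documents from.
# Both paths are placeholders; adjust for your own setup.
import shutil
from pathlib import Path

INDEXED = Path("~/Documents/DT-Indexed").expanduser()         # folder DEVONthink indexes
STAGING = Path("~/Documents/AnythingLLM-Inbox").expanduser()  # folder Anything LLM reads

def stage_documents() -> None:
    STAGING.mkdir(parents=True, exist_ok=True)
    for src in INDEXED.rglob("*"):
        if src.is_file() and src.suffix.lower() in {".txt", ".md"}:
            dst = STAGING / src.name
            # copy only new or updated files
            if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
                shutil.copy2(src, dst)

if __name__ == "__main__":
    stage_documents()
```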

Welcome @SolarPlexus
Many of these LLM aggregator apps are popping up and we couldn’t hope to implement specific support for all of them. But we are working on some things in here that may prove interesting. And yes, that’s all I can say at this time. :slight_smile: