Locally installed language models / AI and DEVONthink

Several threads on the DT forum discuss OpenAI and GPT, often sparking controversial debates, particularly concerning privacy issues related to data transmission to OpenAI servers. Has anyone experimented with locally installed language models, such as those available at https://lmstudio.ai/, where data remains on your computer? I understand these models perform well on M1/M2 MacBooks. Additionally, has anyone integrated them into a DEVONthink workflow? I remain somewhat skeptical because of the inaccuracies these models can generate, but I am also open to exploring their use.
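In case it helps the discussion: LM Studio can run a local server that speaks the OpenAI-compatible chat-completions API (by default on http://localhost:1234), so in principle a script could feed it text pulled from a DEVONthink record. Here is a minimal Python sketch of that idea, assuming the default port, a model already loaded in LM Studio, and placeholder model/prompt values; it is not a tested DEVONthink integration:

```python
import json
import urllib.request

# LM Studio's local server speaks the OpenAI chat-completions API.
# Assumptions: server running on the default port 1234, a model loaded.
URL = "http://localhost:1234/v1/chat/completions"

def ask_local_model(prompt: str) -> str:
    payload = {
        "model": "local-model",  # placeholder; LM Studio answers with whichever model is loaded
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    req = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # e.g. text copied out of a DEVONthink record
    print(ask_local_model("Summarize this note in two sentences: ..."))
```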

Unless you’re content with very limited performance and a narrow scope of inquiry, they’re not useful. In fact, some will even prompt with, “Hey, I could give you even better results if you used online resources.”

Given that LLMs consume entire datacenters’ worth of resources, I am highly skeptical of any product that claims to offer general-purpose LLM inference on a local computer.

Agreed! I’ve had a play with Ollama and some of the available models, but they’re only casually interesting, not very productive, IMHO.
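For anyone who wants to reproduce the experiment: Ollama exposes a simple local HTTP API (port 11434 by default). A minimal Python sketch, assuming Ollama is running and a model such as llama3 has already been pulled; the model name and prompt are just examples:

```python
import json
import urllib.request

# Ollama's local REST API; assumes `ollama serve` is running on the
# default port 11434 and a model (e.g. `ollama pull llama3`) is available.
URL = "http://localhost:11434/api/generate"

def generate(prompt: str, model: str = "llama3") -> str:
    # stream=False returns the full completion in a single JSON response
    payload = {"model": model, "prompt": prompt, "stream": False}
    req = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    print(generate("In one sentence, what is DEVONthink?"))
```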