Depending on the amount of storage on your device/computer, is it possible to import the content of a public/open-source LLM (4-bit/8-bit/FP16) so I can ask my questions entirely locally?
While you theoretically could load it, there’d be no point. DEVONthink is not an AI provider, so it would do nothing with the model. You need to install and run a bespoke AI application, e.g., Ollama. This, along with the limitations of local AI, is discussed in the Getting Started > AI Explained section of the built-in Help and manual.
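Purely as an illustration of what “install and run a bespoke AI application” means in practice: the application runs the model and exposes a local API, and a client only sends prompts to it. Below is a minimal sketch assuming Ollama is installed and has a model pulled; the model name llama3 and port 11434 (Ollama’s default) are assumptions, and none of this involves DEVONthink itself.

```python
# Minimal sketch: sending a prompt to a locally running Ollama server.
# Assumes Ollama is installed and running, and a model (here "llama3") was pulled.
# 11434 is Ollama's default local port. DEVONthink is not involved at all;
# the AI application is what actually runs the model.
import json
import urllib.request

payload = {
    "model": "llama3",   # assumed model name; use whatever model you pulled
    "prompt": "Summarise the idea of a local LLM in one sentence.",
    "stream": False,     # return one JSON object instead of a stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    answer = json.loads(resp.read())

print(answer["response"])  # the generated text
```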
Thanks. I think I grasp the complexity involved. I currently use “Claude”, and my intent is to use that LLM rather than LM Studio if I could load the “content” of an open-source LLM database.
You can’t “load” the “database” of an LLM. Unless you have a huge server farm. In which case you wouldn’t need to “load” the data but could simply use it.
Perhaps I simply do not understand you. I have a local LLM available to use/run (perhaps that expression is inaccurate?) on my MacBook; LM Studio seems to access it, though it can be slow compared to when I use “Claude” online. It is “X” gigabytes in size, in a format that LM Studio can query. My idea was to load/export it (LM Studio has many LLMs available in a variety of sizes/resolutions) if possible, then use “Claude” internally with DT4 to extract my “potential” answers from the prompts I entered via “Claude”, then delete that LLM, load another LLM, repeat the process, and compare the quality of the responses. I do get that I have neither the intellectual capacity nor the financial resources to load “Claude” and Anthropic’s ever-expanding databases.
I’m still not understanding what it is you’re saying you want to do. If you want to use Claude within DT, just put your API key in the settings. If you want to switch to a local model with LM Studio or Ollama, same principle: just set that up in the settings. You don’t need to load, download, or offload anything, aside from downloading the local model to your computer (not into DT specifically).
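To make the “same principle” concrete (purely as a sketch, not DEVONthink’s actual internals): from a client’s point of view, a local server and a hosted provider both look like a chat endpoint plus a model name, with the hosted one additionally needing your API key. The example below uses LM Studio’s OpenAI-compatible local server on its default port 1234; the model name and prompt are placeholders. Anthropic’s hosted Claude API has its own endpoint and request format, but the principle, an endpoint plus a key entered in the settings, is the same.

```python
# Sketch of the "same principle": one helper that talks to any
# OpenAI-compatible chat endpoint. Locally that is LM Studio's server
# (default port 1234); a hosted provider is the same pattern with its own
# URL plus the API key you put in the settings.
import json
import urllib.request

def chat(base_url, model, prompt, api_key=None):
    headers = {"Content-Type": "application/json"}
    if api_key:  # hosted, OpenAI-compatible providers expect a bearer token
        headers["Authorization"] = f"Bearer {api_key}"
    body = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers=headers,
    )
    with urllib.request.urlopen(req) as resp:
        data = json.loads(resp.read())
    return data["choices"][0]["message"]["content"]

# Local model served by LM Studio; the model name is whatever you loaded there.
print(chat("http://localhost:1234/v1", "your-local-model", "Hello"))
```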
it can be slow compared to when I use “Claude” online.
Commercial AI is going to outperform local AI in practically all situations.
It is “X” gigabytes in size, in a format that LM Studio can query. My idea was to load/export it (LM Studio has many LLMs available in a variety of sizes/resolutions) if possible, then use “Claude” internally with DT4 to extract my “potential” answers from the prompts I entered via “Claude”, then delete that LLM, load another LLM, repeat the process, and compare the quality of the responses.
This is technically impossible. Claude works with its own data. Also, you can’t load a model into DEVONthink and use it in the way you’re imagining.
Thanks. My imagination is often “faulty”! I was under the impression that when I use Claude from within DT4, Claude was examining my DT4 data, since it cannot reach outside of DT4. My “faulty” thinking was that Claude would parse my local DT4 data, which I could then examine to help identify gaps in the projects. I have been using LM Studio to find what I identify as relevant data and then copy/paste it into DT4; I was trying to remove a step there. So, no can do. I’ll just have to do the search via LM Studio or Claude and then copy/paste.