I want to start a thread about using LLMs with DT, potentially OpenAI agents.
Please do not debate AI in this thread, and please do not hijack it. If you don't like AI or LLMs, if you're worried about the future because of AI, or if you want to discuss AI ethics or the shortcomings of AI, please start your own thread for that somewhere else.
I am abandoning my other thread because it is completely noisy and useless to me now, so if you want to argue about AI, you can have at it over here: OpenAI in DEVONthink? - #97
[REPOSTED]
I was able to get this first project working, but I don't see a clear way to stop these local LLMs from hallucinating so much.
To me, this first one is not a viable solution yet because of the hallucination problem. You will also have to solve the problem of generating all the symlinks yourself, assuming your files are not all in a single directory. If your documents are imported, you'll have to locate them inside the DT database packages; if they're indexed, your effort will be lower.
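In case it helps anyone, here's the kind of script I mean for the imported case. It's a rough sketch only: the database name, the `Files.noindex` layout, and the target folder are assumptions you'd need to adapt to your own setup.

```python
#!/usr/bin/env python3
"""Collect symlinks to every document inside a DEVONthink database package.

Rough sketch: assumes imported files live under Files.noindex inside the
.dtBase2 package, and that we want one flat folder of symlinks for a local
LLM tool to index. Paths below are placeholders.
"""
from pathlib import Path

DB = Path.home() / "Databases" / "Research.dtBase2" / "Files.noindex"
LINK_DIR = Path.home() / "llm-index"

LINK_DIR.mkdir(parents=True, exist_ok=True)

for src in DB.rglob("*"):
    if src.is_file() and not src.name.startswith("."):
        # Prefix with the parent folder name to avoid filename collisions.
        link = LINK_DIR / f"{src.parent.name}_{src.name}"
        if not link.exists():
            link.symlink_to(src.resolve())
```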
I also want to try out PrivateGPT but I haven’t been able to get it to work with Apple Metal GPU yet.
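For what it's worth, my understanding is that PrivateGPT runs local models through llama-cpp-python, and the commonly suggested fix is to rebuild that package with Metal enabled, something like:

```bash
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 \
  pip install --upgrade --force-reinstall --no-cache-dir llama-cpp-python
```

If the build picks up Metal, the model-load output should mention `ggml_metal_init`. No guarantees, though, since I haven't gotten it working myself yet.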
[MORE]
Has anyone had success with PrivateGPT? I’m stuck on a technical blocker at the moment.
You can define folders for Khoj to index locally and then either chat with the documents in those folders or search them semantically. The default local model is Mistral 7B, but you can use others as well.
Setup, and getting the hang of how to use it, is a bit involved at first, but worth it. They provide great support via email and Discord.
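For reference, pointing it at folders is done in its config file. This is a rough sketch from memory, so treat the exact keys as assumptions and check their docs for the current schema:

```yaml
# Rough sketch of a khoj.yml entry -- the exact keys are an assumption
# from memory; consult Khoj's documentation for the real format.
content-type:
  markdown:
    input-filter: ["~/llm-index/**/*.md"]   # which folders/globs to index
```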
Thanks for the tip, I hadn't heard of Khoj. Will try.
I just heard about MemGPT today. It looks promising. At 20:30 in the video, the presenter gives a nice summary of the challenges of document analysis.
MemGPT's approach is different from RAG (retrieval-augmented generation), and although I'm not deeply familiar with the inner workings of RAG, I suspect its architecture is what contributes to the profuse hallucinations. MemGPT's approach of paging information in and out of a fixed context window, so that the effective context is essentially unbounded, sounds like a better strategy to me, and the test results seem to support that.
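To illustrate what I mean (this is just my own toy sketch, not MemGPT's actual code): the model works inside a fixed window, evicts older material to an archival store, and searches it back in when needed.

```python
# Toy sketch of MemGPT-style "virtual context" paging -- my own
# illustration of the idea, not MemGPT's implementation.

CONTEXT_BUDGET = 4096  # tokens the model can see at once

class VirtualContext:
    def __init__(self):
        self.window = []   # (token_count, text) pairs currently in context
        self.archive = []  # evicted text, searchable later

    def _tokens(self, text: str) -> int:
        return len(text.split())  # crude token estimate

    def add(self, text: str):
        self.window.append((self._tokens(text), text))
        # Evict oldest entries to the archive when over budget.
        while sum(n for n, _ in self.window) > CONTEXT_BUDGET:
            _, evicted = self.window.pop(0)
            self.archive.append(evicted)

    def recall(self, query: str, k: int = 3):
        # The real system would use embeddings; keyword overlap stands in here.
        words = set(query.lower().split())
        scored = sorted(self.archive,
                        key=lambda t: -len(words & set(t.lower().split())))
        for text in scored[:k]:
            self.add(text)  # page relevant memories back into the window
```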
It’s my understanding that Khoj actually uses semantic search on the locally saved embeddings to identify a set of relevant excerpts from the indexed contents. It then provides these excerpts as context for the response to whatever LLM the user has selected.
It will essentially provide as much context as you tell it to, i.e., it's possible to set a token limit.
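To make that concrete, here's a minimal sketch of that embed-then-retrieve pattern using sentence-transformers. It's my own illustration of the general approach, not Khoj's actual code; the model name and sample chunks are placeholders.

```python
# Minimal sketch of embed-then-retrieve (the pattern described above) --
# my own illustration, not Khoj's code.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model

# Pretend these are excerpts chunked from your indexed documents.
chunks = [
    "DEVONthink stores imported files inside the database package.",
    "Indexed files stay in place in the Finder.",
    "Smart rules can run scripts on matching documents.",
]
chunk_embeddings = model.encode(chunks, convert_to_tensor=True)

def top_excerpts(question: str, k: int = 2):
    q = model.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q, chunk_embeddings, top_k=k)[0]
    return [chunks[h["corpus_id"]] for h in hits]

# These excerpts would then be pasted into the prompt, up to the token limit.
print(top_excerpts("Where do imported files live?"))
```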
Here's a very helpful overview of the challenges with the different methods of teaching LLMs new knowledge. It answered some questions that had been frustrating me about initial (bulk) training vs. fine-tuning vs. in-context learning vs. RAG.
It looks like a lot of people are trying to solve the challenge of teaching an LLM a large body of knowledge, and doing it on local hardware.
Tonight I was able to get PrivateGPT working, and the results were marginal at best. The response time is quite fast and hallucinations seem minimal, but it does get some facts wrong and doesn't search the provided material very thoroughly. Perhaps MemGPT will perform better.
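One tweak that might help with the shallow search, assuming PrivateGPT uses a standard retriever-plus-LLM chain under the hood (my assumption, not something I've verified): raise the number of chunks the retriever passes to the model. In LangChain-style terms that's the `k` search parameter; all paths and model names below are placeholders.

```python
# Sketch of widening retrieval in a LangChain-style RAG pipeline --
# an illustration of the general technique, not PrivateGPT's own code.
from langchain.vectorstores import Chroma
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.chains import RetrievalQA
from langchain.llms import LlamaCpp

embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
db = Chroma(persist_directory="db", embedding_function=embeddings)
llm = LlamaCpp(model_path="models/mistral-7b.Q4_K_M.gguf", n_ctx=4096)

qa = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=db.as_retriever(search_kwargs={"k": 8}),  # default is often 4
)
print(qa.run("What does the corpus say about X?"))
```

More chunks means more of the corpus in the prompt, at the cost of context space and speed, so there's a trade-off to tune.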