Like many other people, I have found it helpful to upload health records into the likes of ChatGPT or Claude, and to gain useful insights. I have been doing so directly from a browser, since so far I have not worked out how DT4 would help me. I can then export chats as HTML and pull them into DT.
However, to gain more coherence, and to avoid chats growing vastly long, I have wondered how to store medical records that I curate, for example by date and subject, so as to allow the AI an overview.
I see that Copilot will allow the building of a coherent, updateable medical history that can be placed in the likes of Google Drive or OneDrive, which will then be referenced when making a query. I could have the information on Google Drive or OneDrive mirror a database I create in DEVONthink for that purpose, assembling the material in DT and then uploading it.
My question is whether this would be a good way to do things, or whether this is exactly what DEVONthink 4 makes unnecessary. Maybe the answer is some kind of hybrid arrangement? This question, of course, could apply to any body of knowledge. It’s just that in my case, medical records, blood results, and the like seem especially suited to what AI has to offer.
One point that occurs to me is that using Copilot in this way is free of cost, whereas if the AI interrogates information held within DT4, then one must pay for that AI. So a further question is: why pay?
I probably have a more European perspective on this. My medical data is exceptionally boring and mundane, and if any of it helps AI learn a little, I shall be entirely happy (and indeed surprised)….
I’m European too, and giving my health data to an unknown and uncontrolled entity would appear to be sheer madness to me. Even if my health data seems to be “boring” to me, it might not be so for others. Think about prospective employers, for instance.
You are free to use AI in whatever way your nerves can handle. However, with the entities you’re discussing, except for Claude, they all have poor to notoriously cavalier views on data privacy, with Google topping the list in general abuse and OpenAI dancing around lawsuits for their shenanigans.
That being said, you are referring to generalist LLMs and I would be very hesitant to take any kind of medical advice from them. While they may have some medical data in their corpus, they also have all manner of nonsense from Reddit, Facebook, etc., at their disposal. There is no validation of the responses, so telling you to sprinkle fairy dust on your stomach at midnight with a newt in one hand and a toad in the other is as possible a response as “Take two aspirin and call the doctor in the morning.”
Regarding DEVONthink 4, our application has certain ways to obscure personal information. (These are discussed in the Getting Started > AI Explained section of the help.) I am unaware of any such measures taken by the AI providers.
It’s good to hear that DT4 has security measures to maintain privacy. My guess is that it will be crucial to the success of AI firms that they maintain anonymity, but I take your point that data breaches occur far too often for comfort. On the issue of the quality of analysis in chats originating in a browser, I have been pretty impressed. None of the newts or fairy dust showed up in my results. However, I guess the question is how much better the AI is where it is paid for by subscription rather than used for free. I will be happy to pay for better results. Is that really your experience? Are the newts and fairy dust eliminated by the paying service?
Even if a diagnosis sounds plausible to a layperson, it may be incorrect or completely disregard important facts. This is especially true for images/scans. Ultimately, there is not enough public data from the healthcare sector to train general LLMs like ChatGPT or Claude effectively.
Specialized medical models are, of course, a completely different story as they’re trained with a huge amount of health data.
Sure, analysing a blood panel is very different to analysing a scan. Although I will say that I uploaded CGM data to Grok and got far more useful information than my cardiologist provided. Which is not to say that specialized models won’t do even better. The overarching question is how to get the best from AI as it is now, and also as it emerges, what is or is not worth paying for, and where DT4 fits into all of this.
I did get some totally incorrect advice from ChatGPT on Polish tax law (one of Jim’s newts), mixed in with some very good advice. The same probably applies to medical advice. So the prudent thing is probably to see the current AI offerings as just throwing out ideas that do need to be checked carefully, but which can be very helpful.
How do you know? I must say that I can’t follow the reasoning. If you know that AI is better than your cardiologist, are you a cardiologist yourself? Don’t you trust your doctor? But then why not choose another one?
And how would I “check very carefully” the output of an AI without having studied medicine? I do check the programming output of AI here occasionally, and that is abysmal.
In journalism, they say that if an outlet is consistently wrong about a topic you’re familiar with, they’re probably equally wrong about the topics you don’t know. Same with AI.
I have studied medicine and I use AI intensely in my work.
But its only purpose is to help retrieve verifiable data.
If I use AI to answer questions about medical diagnosis/treatment, the goal is for AI to help me find relevant peer-reviewed articles that I can read.
If I use AI to summarize a large medical record, the goal is for AI to serve as a chronology or table of contents - with links back to the specific pages where any information is found.
Relying on AI in medical situations without reading the primary source is playing with fire.
It really would be great if there were doctors one could turn to who would give out tailored longevity and lifestyle advice. But, at least in my case because I live in Poland, that just isn’t possible. So, as a lawyer with some ability to sort through data, the only viable way forward is to listen to the podcasts of top American doctors, to question different AIs, to iterate based on blood results, and essentially to take responsibility for one’s own health. There are clearly very significant dangers and pitfalls in doing so, but probably even greater dangers in not doing so. The question in my mind is how best to use AI without falling into the traps that everyone in this forum is currently, quite rightly, pointing out.
Just as an aside, I would mention that a brain surgeon I know socially changed his own medication because of information I had found on an AI. That doesn’t make me a doctor. But it does point up the enormous resource that the internet and AI represent.
The answer is to use AI as an advanced search engine but always read and critique the sources it reveals just as you would any other source.
I would imagine you use AI in your legal work in a similar way. If AI identifies caselaw on point with your legal question, do you trust the response as is or do you read the case it identifies to be sure it is quoted in the correct context?
That said - for anyone looking for credible medical information I would highly recommend OpenEvidence, a joint project of the Mayo Clinic, the New England Journal of Medicine, and the Journal of the American Medical Association. It is free but full access is only available to healthcare providers or students in the USA (including physicians, nurses, and medical/nursing students). Even the limited public access can be really useful.
Sure, I have no doubt that doctors are driven mad by stupid people reaching stupid conclusions following stupid research. The question, perhaps, is how to avoid being stupid oneself.
I will say, incidentally, that I doubt there is anyone stupid using DEVONthink. (I still remember recent advice from rmschne who told me I maybe had a solution looking for a problem when I raised a question related to my unfilled Dropbox account. There is a great deal of wisdom on this forum).
I’m pretty sure that was me. But I’m glad to see you took it as a wise observation rather than quibbling. Even when your intentions are entirely constructive, you can’t always be sure how it’s received.