Drafts is the latest app to integrate with OpenAI. Is DT3 next?

The trend is unmistakable.

Announcements like this are happening daily.

When will DevonTech release a similar announcement for DT3?

We’re not in some kind of race here. Other developers are welcome to do as they please; that, however, doesn’t dictate our course and direction. You should already know we don’t corporately flit to and fro with every passing wind. Implementation of new technologies is done at our pace, with careful consideration and testing.


Yes, that’s all true @BLUEFROG - except that between your history of “AI” search eons ago and your extensive support for scripting, I would think DT3 is a slam-dunk, obvious candidate for such integration.

I am not a fan of dependencies, or at least I limit them to the barest minimum – and this is well known. This is why I have never jumped on the bandwagon of IFTTT, Zapier, and the like. Not only is that a dependency on an external service, it also has the inherent weakness of the network.

And while I’m not a data paranoiac, I also don’t like my data going to services I am not in control of. My data; my business. I like local and things on a network under my control.

If ChatGPT ran on a Mac server on my network with nothing but occasional engine updates, I would likely be much more interested in it. So I’m not some Luddite nor am I saying it’s already SkyNet, but I don’t personally like the vector to some uncontrolled external source.

And again, these are my personal views, not those of my bosses or the company.


Fair enough regarding AI - I will back off for the moment.

I am intrigued by your reluctance to use Zapier, though. At base, Zapier is simply a GUI that makes it easy to configure and use an API call without coding.

Surely you support APIs - arguably, APIs have fewer dependencies than almost any other sort of computing.

I guess this is why a lot of users think that DT would/could/should jump on the bandwagon.

And even more than that - DT is a repository of documents for many people.

While the media keeps talking about AI for document generation - and I do have major misgivings there - a key strength of AI at present is summarizing, editing, or critiquing existing documents.


They’re just convenience wrappers; I don’t think this is anything that couldn’t be done before by hitting the endpoints at OpenAI with JSON payloads and HTTP requests.
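To make the “convenience wrapper” point concrete, here is a minimal sketch of what such an integration boils down to: a single HTTPS POST with a JSON body, using only the Python standard library. The endpoint URL and payload shape follow OpenAI’s public Chat Completions API; the API key and model name are placeholders.

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble the HTTPS POST that a 'ChatGPT integration' boils down to."""
    payload = {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_chat_request("Summarize this note in one sentence.", "sk-...")
# Actually sending it is just: urllib.request.urlopen(req) — no SDK required.
```

That’s the whole “integration”: everything an app-level wrapper adds is UI around constructing this request and parsing the JSON reply.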

Having done some work with AI interfaces, I find that they have a habit of just making stuff up, acting somewhere between a petulant child and someone desperate to please, with no concern for consequences. There’s some useful work coming up around dealing with constrained data sets, but right now it involves uploading your data to organisations with track records of running roughshod over copyright and reuse conditions, and I really don’t want my research and private documents invisibly being consumed into ambiguously-structured datasets outside my control - not to mention the impact on research subject agreements, consent, and GDPR.

I very much agree with @bluefrog here - it’s a new shiny thing, but once you have a dependency on external systems the vagaries of reality can break them at the drop of a hat (e.g. whatever latest disaster is unfolding under Musk at Twitter, where whole software companies, private datasets and accounts are ejected depending on how annoyed he feels at any time). My DEVON databases are still usable decades and major revisions later. Once AI systems are more stable and can run locally, there’s more that can be done controllably with them - but right now, it feels like when everyone was adding a Facebook beacon or asking for address book uploads to their services with no real consideration of the repercussions (which are always the worst possible when a more aggressive marketing team moves in at the remote end).


Side question: may I ask if you use any kind of local automation, and which ones?

I already use IFTTT, but I agree with you about the dependencies, so I’m always keen to consider other options and learn from different workflows and points of view.

If this is the case, then this should be good for you: GPT4All is exactly what you describe. Currently I am looking at some potential uses. One example is feeding it documents and getting back tags, etc.
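The “feed it documents, get back tags” idea is mostly prompt construction plus a little parsing of the reply. A hedged sketch of what that might look like - the model call itself is simulated here, since with GPT4All the reply would come from the local model:

```python
def build_tag_prompt(document: str, max_tags: int = 5) -> str:
    """Wrap a document in an instruction asking the model for tags only."""
    return (
        f"Suggest up to {max_tags} short topic tags for the document below. "
        "Reply with a comma-separated list and nothing else.\n\n"
        f"{document}"
    )

def parse_tags(reply: str) -> list[str]:
    """Turn the model's comma-separated reply into clean tag strings."""
    return [t.strip().lower() for t in reply.split(",") if t.strip()]

# Simulated model reply; a local model would produce something like this.
reply = "Productivity, Note-Taking , automation"
print(parse_tags(reply))  # ['productivity', 'note-taking', 'automation']
```

The parsing step matters in practice: local models often pad replies with extra whitespace or inconsistent casing, so normalizing before writing tags into a database saves cleanup later.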

The only issue is that the prompt size is not as large as that of ChatGPT or Bard.

To get around that, the following project can be used until the open-source development can get it into GPT4All: GitHub - imartinez/privateGPT: Interact privately with your documents using the power of GPT, 100% privately, no data leaks



According to the website:

The models that GPT4All is able to run are of course much smaller than those of GPT-4 (reportedly 170 trillion parameters, though unconfirmed) or GPT-3 (175 billion parameters). It’s more like GPT-2 (1.5 billion parameters).

Therefore the performance might be acceptable even on average computers but the results are probably worse.

I tried GPT4All on a Mac Studio this morning using various models. It’s really slow (even when using all cores of the M1 Ultra), the input/output length is even more limited than that of ChatGPT, the results are frequently useless or wrong, and regardless of the language of the input, the results are always in English.

But at least it runs on the desktop and is free :slight_smile:


Open source in this space only really exploded a month ago, and since then there have been a lot of models and a lot of development. Advancements are happening daily; for example, work is now being done to offload to the GPU via Metal, along with many other enhancements. As for the models, new enhancements are in the works - for example, a data lake was created for people to share their answers, which can be used to train future models, and this will be open source.

The reason that I use it is that there are ways to extend the source of input. As an example, I am feeding my Obsidian vault to both DEVONthink and GPT4All using LangChain, as it has an Obsidian import, and the answers I get back on my data are very accurate.
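At heart, an Obsidian import like LangChain’s is a walk over the vault’s Markdown files. A stdlib-only stand-in for that step - a simplified sketch that ignores the YAML front matter and metadata the real loader also handles:

```python
from pathlib import Path

def load_vault(vault_dir: str) -> dict[str, str]:
    """Read every Markdown note in an Obsidian vault into {note name: text}."""
    notes = {}
    # Obsidian vaults are just folders of .md files, possibly nested.
    for md in Path(vault_dir).rglob("*.md"):
        notes[md.stem] = md.read_text(encoding="utf-8")
    return notes
```

Because a vault is plain files on disk, the same walk can feed both DEVONthink (via indexing) and a local model pipeline, with nothing leaving the machine.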

I would never share any of my documents with an entity that is as deeply involved with Microsoft as OpenAI is.


Meaningful AI for practical applications cannot be done on a desktop. The scale of computation needed is incomprehensibly large for a personal computer or even for most small businesses to own.

Maybe you could run AI on a high-end virtual PC in AWS or similar - but the cost is likely to be prohibitive.

Microsoft has enough confidence in their ability to keep data secure that they will sign HIPAA Business Associate Agreements (BAAs) with enterprise customers; I am puzzled why you are so anti-Microsoft in that regard.


I tend to disagree with that statement. The reason is that processing power has very little to do with inference (generating answers); what matters most is memory. The only time you need serious processing power is when creating the models. Running models on the CPU is very new and developing rapidly, but the real power will come when the current work on using the GPU’s processing power becomes easier to use. There are already projects that do this for consumer Nvidia GPUs, and work is underway to bring it to Apple hardware, where chips already exist in the new laptops to speed up inference tremendously.
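The memory point can be made concrete with rough arithmetic: the RAM needed just to hold a model’s weights is approximately parameter count × bits per weight ÷ 8. These are illustrative figures - real inference adds context/KV-cache overhead on top:

```python
def weight_footprint_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate RAM (decimal GB) needed just to hold the model weights."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 7B-parameter model at fp16 needs ~14 GB, but 4-bit quantization fits
# the same weights in ~3.5 GB — which is why local inference on consumer
# machines is feasible at all.
print(weight_footprint_gb(7, 16))  # 14.0
print(weight_footprint_gb(7, 4))   # 3.5
```

This is also why quantization, not raw compute, has driven the recent local-model boom: halving the bits per weight halves the memory needed, while the arithmetic itself is comparatively cheap.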

Models do not need to contain, or be trained on, all information out there; LLMs need to be trained on similar types of data in order to understand that data. As an example, all models are trained on Project Gutenberg; this does not mean that a model has to be retrained for every book ever written, just that it has to be able to consume new data in order to use it properly. So as the models get trained with new, appropriate data, they will get smarter.

As for my Microsoft comment, this is not the correct place to talk about all the unethical stuff they have done over the years.


And I find it interesting that this is essentially what our AI has been doing for years :smiley:



If so then why do you think the cost to operate ChatGPT is so immense?

There is no need to continue this conversation. This is about DEVONthink, not about Microsoft or an article. I would just encourage you to do some more academic research on the subject (read some academic papers, even from OpenAI) instead of trusting articles from journalism majors who are putting out pieces daily without deep knowledge of the subject.


I personally appreciate the hesitancy.
