"Generative AI" focus – perhaps a distraction?

FOMO at peak LLM hype was understandable, but I think Apple now regrets having built so much around that.

Not something that seems very useful to me as a focus for a DEVONthink build.

(What we are looking for is an aid to thought. Not sure that that benefits much from being plugged into a hosepipe disgorging other people’s IP, liquified and reconstituted, like reconstituted meat products …)

1 Like

There is more to DEVONthink 4 than AI, quite a bit in fact.

Also, AI is optional, just like sync is optional. Some people want to use it, others don’t, and neither is “right” or “wrong” in their decision.

5 Likes

@winter, regardless of any other considerations, your comment is very accurate. It is clear to everyone that AI is yet another extractive invention of Surveillance Capitalism, denounced by Zuboff today and by Mumford more than half a century ago (despite not having lived in the internet “era”, he already predicted that capitalism’s destructive drive would stop at nothing). I understand your anger.

Perhaps marketing could foreground those more solid values?

e.g. from:

offers generative AI, built-in versioning, audit-proof databases and many more improvements

(Strictly speaking, perhaps it is not really DEVONthink itself that is “offering” LLMs?
It looks more like an interface to products offered by others.)

to something like:

offers built-in versioning, audit-proof databases, help for LLM-users (or an interface to LLM products) and many more improvements

(A model of language is not, after all, a model of thinking.)

1 Like

built-in versioning, audit-proof databases and many more improvements

Those have nothing to do with AI. Nothing has been foregrounded or backgrounded. Also, there are people who want to use AI functions in DEVONthink, so it would make no sense to remove mentions of it.

(Strictly speaking, perhaps it is not really DEVONthink itself that is “offering” LLMs?
It looks more like an interface to products offered by others.)

Have you read the Getting Started > AI Explained section of the help or manual or used DEVONthink 4? These topics are already discussed there (as well as on these forums).

offers generative AI, built-in versioning, audit-proof databases and many more

Sentence-initial position gives cognitive focus.

1 Like

That is subjective. Not only is there strong evidence that sentence-final position is just as strong, but you are also ignoring individual psychological foci. Some people interested in versioning will read the same sentence, “built-in versioning” will gain prominence for them, and they’ll gloss over the phrase about AI.

If I have a list of groceries to shop for – yogurt, eggs, onions, hot sauce, and snacks – yogurt isn’t the most important thing just because it’s listed first, nor am I saying yogurt is best due to its position (especially as I personally don’t like yogurt). That being said, we aren’t writing for one or two people, those who like or dislike AI, nor are we making any value judgments in that simple sentence. You’re trying to read meaning into the statement that isn’t there.

And again, AI is optional. It’s also not something that’s enabled by default. If you read the literature and try the application, you’ll see it requires some amount of setup to use.

1 Like

It is, as you say, subjective.

FWIW, between the way DT is describing the new version, and the overwhelming focus on the AI features in the discussion here in the forum, my (admittedly subjective) conclusion is that there aren’t a whole lot of reasons for someone uninterested in AI to upgrade.

2 Likes

That’s likely due to how vocal and verbose the divided camps are on AI. There is actually plenty of other discussion, handled in small, polite threads 🙂 Also, most of the inquiries I have gotten in support have not been AI-focused. In fact, there have been far fewer than I anticipated, and the majority are more along the lines of, “Do I have to use AI? Can it see all my personal documents??”, etc.

1 Like

That’s a pretty limited view of what AI can do.

We can’t get a sense of the limitations (of the LLM-plus-reasoning-model approach) by taking a view, but they do begin to emerge in controlled experiments:

Accumulation of Cognitive Debt when Using an AI Assistant

https://arxiv.org/pdf/2506.08872v1

DT4 AI is not about taking over the world.

It’s about efficiently organizing information and identifying relationships among documents – just as was true for DT1, DT2, and DT3.

A central difficulty, for thinking, is that the Potemkin fronts (or Hollywood-like sets) are decorated with fragments of approximately-retrieved[1] existing IP[2], without attribution, and without an architecture of coherent reasoning. It is very hard to know what you are reading or looking at, and what, if anything, underpins it.

Muddies the waters of thought, and makes it dependent, as the cognitive debt accumulates.

This is emerging even in the restricted domain of coding[3], often sold as a perfect application for LLMs.


  1. Can Large Language Models Reason and Plan? ↩︎

  2. See the NYT case against OpenAI, and the Disney and Universal suit against Midjourney ↩︎

  3. After months of coding with LLMs, I'm going back to using my brain • albertofortin.com ↩︎

1 Like

I’m certainly not a fan of AI. But I can see that it might be useful in certain situations.

  • Summarizing text (no issue with attribution there, it’s a simple mechanical task)
  • Coding smaller things in languages with a broad public corpus of examples.
  • Commenting code (that’s a tedious thing to do, so programmers tend to avoid it)

Coding even simple tasks in languages with little public visibility and a small public code base seems to produce badly written stuff that sometimes doesn’t even work. Examples can be seen in this forum. I haven’t yet seen or heard of AI-generated samples of LISP, COBOL, or Prolog code (for example).

One of the issues (for me) with AI is that people post LLM output without knowing whether it’s good or bad, “it works” being the only argument for publishing the stuff. Then others see it and might think that they can learn something from it. Whatever they might learn, it is not good programming style.

2 Likes

Yes, there are some AI apps which retrieve random information without attribution. That’s not what I use it for.

You can use DT4 to get fully referenced sources with full attribution.

I’d argue that summarising text is far from a simple mechanical task. We each process information in different ways, and the summary you offer of a text will differ from mine, depending on our research priorities/focus and prior knowledge. What’s interesting about this argument in favour of using A.I. to summarise text is that it essentially excuses not reading the text. And if we don’t read texts, why bother publishing them? Why not instead write a text and then run it through an A.I.-led summarisation process, and save everybody the hassle of reading our bad prose!

The NY Times recently published a really interesting article on the use of A.I. in history writing. I was fascinated (and made a little queasy) by how it’s being used to suggest book structures, etc.

It references a classic analysis by Lara Putnam that argues that the less we engage critically with the actual process of research (including reading, parsing and interpreting texts and sources ourselves), the weaker our critical faculties become. That’s a slightly different problem than using A.I. to generate structure from our own tangled masses of notes, but still pretty instructive, I think. I acknowledge that many academic fields/research areas require processing mountains of data that a human simply can’t manage.

3 Likes

I view the summary as creating a sophisticated table of contents for a very complex document.

Does a table of contents excuse reading a document, or does it facilitate reading it?

There are two distinct processes which both get labelled as summary:

  1. Pruning – elimination of redundancy, and
  2. Subsumption – replacement of two or more conjoined points by a single point at a higher level of abstraction.

Competent and useful summaries tend to involve both.

Pruning can be relatively mechanical, given a rich enough statistical model, but LLMs perform much more weakly and messily on subsumption (generalisation). They don’t form coherent domain models (the Potemkin understanding problem[1]), perhaps at least in part because their token-stream inputs only weakly encode propositional dependencies.

(Writers tend to assume that their readers have bodies, and have plenty of experience of playing around with things to form models of the world – in particular, causal graphs. A lot is left between the lines – beyond the reach of systems which have no experimental component, and only model token-sequence distributions – the paths most heavily worn by cliché.)
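To make the distinction concrete, here is a minimal sketch of pruning in the mechanical sense – a naive frequency-scored extractive summariser. The function name, regexes, and scoring are purely illustrative assumptions of mine, not anything DEVONthink or an LLM actually does:

```python
import re
from collections import Counter

def prune(text: str, keep: int = 3) -> str:
    """Crude extractive summary: keep the `keep` highest-scoring sentences."""
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Build a word-frequency model over the whole text.
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> float:
        # Average frequency of a sentence's words: redundant sentences
        # (full of already-common words) score highest.
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    # Keep the top-scoring sentences, preserving their original order.
    top = set(sorted(sentences, key=score, reverse=True)[:keep])
    return " ".join(s for s in sentences if s in top)
```

Subsumption has no comparably short mechanical recipe: replacing several points with one at a higher level of abstraction requires a model of what the points mean, not just of how often their words co-occur.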


  1. [2506.21521] Potemkin Understanding in Large Language Models ↩︎

1 Like

If, when you mention summary, you mean instead table of contents, that’s fine, but they’re separate and distinct things. A table of contents is not a summary.

I’d say that strongly depends on the actual content provided and the laziness or predisposition of the individual reading it.