Thank you!!! Very helpful and supportive response.
And diving into those intention links now.
Happy if it resonates…
… but also: do not expect a direct reference to AI Wikis from the links on complex intention. I put them here mainly in reference to the more general use/presumptions of pure/transparent “intentionality” that one can read out of some comments here (and in other threads).
Then, Alicia Juarrero is a bit closer to questions of actors, intentions and (knowledge) systems, but also not in direct reference to PKM or knowledge augmentation…
But again, I am grateful for further hints on literature/material that brings these two intricate aspects, (1) the intentionality of learners/knowledge collectors and (2) augmented/(semi-)automated knowledge systems (including RAGs, AI Wikis etc.), into closer contact and into more specific discussion…
I expect some of the people promoting such systems, such as Andrej Karpathy, must think about these things/implications…
… after all this is all… complex… and we are in the middle of a maelstrom here…
It’s easier to say that you want to be a millionaire than to actually figure out how to do it and make it happen.
Let’s just take one of your points: ingest - drop a source. Do you know how much the top models cost? Are you willing to spend 50c on that article about chickens earning more than new grads?
If you have a lot of articles, that’s gonna cost money.
It’s possible, but at what cost? Not easy.
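Just to put rough numbers on it (every figure below is a made-up, illustrative assumption, not any vendor’s actual pricing), a quick back-of-envelope in Python:

```python
# Back-of-envelope cost estimate for running a whole reading backlog through a
# paid LLM API. Every number here is an illustrative assumption, not a quoted price.

ARTICLES = 500                      # assumed size of the backlog
TOKENS_PER_ARTICLE = 6_000          # assumed average input length per article
OUTPUT_TOKENS_PER_ARTICLE = 1_000   # assumed length of the generated digest
INPUT_PRICE_PER_1K = 0.01           # assumed USD per 1K input tokens for a "top" model
OUTPUT_PRICE_PER_1K = 0.03          # assumed USD per 1K output tokens

per_article = (TOKENS_PER_ARTICLE / 1_000 * INPUT_PRICE_PER_1K
               + OUTPUT_TOKENS_PER_ARTICLE / 1_000 * OUTPUT_PRICE_PER_1K)
total = ARTICLES * per_article

print(f"~${per_article:.2f} per article, ~${total:.0f} for {ARTICLES} articles")
```

Swap in the real per-token prices and your actual backlog size and the total climbs quickly, which is the whole point.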
I have spent the past 10 years or so building up my foundation documents in a general topic area that I am very interested in. I started by transcribing audio and video sources manually, translating where necessary, turning everything into markdown documents. Sometime in the last year or so, I thought, what if I start “articles” about subtopics covered in all of this general material. The wikilinks in DEVONthink finally clicked and I started to copy and paste every occurrence of a subtopic in the original documents into various wikilink articles. I have largely completed this work. Now when I place a new markdown document in the original background documents database, it immediately “lights up” with all of the links to the wikilink articles I have placed in a different folder in the same database. It is quite simple to now add new content to the wikilink articles. My next step will be to remove redundancy from the individual articles, since many times it is very similar material that could be summarized and condensed.
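As an aside, the copy-and-paste gathering step could in principle be scripted outside DEVONthink. A rough, hypothetical sketch (the folder layout and the example subtopic are placeholders for illustration, not my actual setup):

```python
# Hypothetical sketch: gather every paragraph mentioning a subtopic from a
# folder of markdown source documents into one "wikilink article".
# Folder names and the subtopic are placeholders for illustration only.
from pathlib import Path

SOURCE_DIR = Path("background-documents")   # assumed folder of markdown sources
WIKI_DIR = Path("wikilink-articles")         # assumed folder for the topic articles
TOPIC = "fermentation"                       # assumed example subtopic

excerpts = []
for doc in sorted(SOURCE_DIR.glob("*.md")):
    for para in doc.read_text(encoding="utf-8").split("\n\n"):
        if TOPIC.lower() in para.lower():
            # keep a pointer to the source so each excerpt stays traceable
            excerpts.append(f"> {para.strip()}\n\n(from [[{doc.stem}]])")

WIKI_DIR.mkdir(parents=True, exist_ok=True)
out_file = WIKI_DIR / f"{TOPIC.title()}.md"
out_file.write_text(f"# {TOPIC.title()}\n\n" + "\n\n---\n\n".join(excerpts), encoding="utf-8")
print(f"Collected {len(excerpts)} excerpts into {out_file}")
```

Keeping the source name with every excerpt serves the same purpose as in my manual process: each snippet stays traceable, so the later condensing pass can always go back to the original.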
My point to all of this is that now I know the material inside out, sideways, upside down. I am using AI to comment on the wikilink articles, but I can’t see having it summarize them for me. I also use AI to do transcription now and while it is fast, I do miss the time I spent going over the audio painstakingly to hear and understand. I think that this work is the POINT of reading and studying and learning and reflecting. I’m not sure how off-loading to AI will be beneficial to understanding the material. Yes you can query the articles directly, but you can query the background source material directly too. All this to say, while AI definitely has helped pull things together logically, I am not sure there is a benefit to having it (AI) gather and synthesize the original material for my own understanding.
Thanks for this!
Personally, I can sympathize with the core proposition of this just as much as I understand the interest of the OP. For one, I think we live in an age where multiple forms (meanings) of “understanding” (or “expertise,” for that matter) proliferate side by side. E.g., the now-swelling use of “summarization” is often a very different notion from the “(personal) summary” one writes while learning and really building organic expertise. It is often more connected to the practice and need (given the swell of information) to find routes to quickly assess and browse large heaps, and then decide where to invest real reading/learning/understanding time.
The other thing is, I can totally see a mode of working where I use the power of LLMs to propose a certain form of digest and entry point, and then personally and more deeply review it and use it as a basis for real appropriation. It just gives a head start, so to speak, and I can differentiate between such LLM propositions and products inhabited, verified and organically developed by myself. This is quite the same (even if on a higher, denser level) as using the “find” script to list all occurrences of a string, then browsing the list and deciding which automatically listed entries are really relevant to me and “post-reviewing” them. Otherwise, from a traditional and more dogmatic position on “understanding,” “(contextual) knowing” and “learning,” I could also say: only what one finds by oneself is really relevant, legitimate and worthwhile…
But even though I think the potential value of automated digestion should not be underestimated (especially given that some personal markers and orientations will permeate and prefigure it by way of good, individual prompts), I do agree with the basic proposition that real “understanding,” and in that sense “expertise,” cannot be outsourced or ultimately delegated to automated systems. So I do think both stances can actually live alongside each other, and indeed inform and form a shared, higher/richer position…
JMO.
PS: additional thanks for sketching your system to leverage DT to create a kind of super-“intelligent”/-versatile personal Wiki w/ a more minimalist automation approach. I think this is very productive and clever in terms of activating the system’s potential and turning it into something that helps augment a personal system very effectively!