Claude Code is now available to $20-per-month customers

As has always been the case with technological progress, the people who are “scared” that they will be replaced could/should embrace the new opportunities that are created.

Did cars put bicycle mechanics out of business - or did they have new opportunities as auto mechanics?

Are robots going to put factory workers out of business - or will they create new jobs assembling, fixing, and operating robots?

Sure, programmers may work with different languages and create different types of applications - that has always been the case. AI won't put programmers out of business - but perhaps they will need to learn how to customize an LLM, create an AI agent, or integrate an AI chatbot into a website, etc.

So you’re on board with AI primary health care professionals and AI expert witnesses? :wink:

Actually, AI “knows” far more about medicine than it does about AppleScript :slightly_smiling_face:

3 Likes

I am 10,000% on board with AI as a tool for doctors or expert witnesses of any type to query academic literature - AI is now firmly part of my professional practice in both situations. My workflow is immensely different now in both respects than it was a few years ago. Particularly in expert witness work it’s a notable advance for me to find specific information that I need.

Am I worried that doctors or expert witnesses are going to be replaced by clerks with much less education or experience? Not at all. I still need to apply nuance and judgment in both cases, and it shows. A fake "doctor" using ChatGPT will quickly make errors and find himself repeatedly subject to malpractice claims or medical licensing board complaints. An "expert" using AI to fake experience will quickly fall victim to hallucinations or simply won't be able to defend his work on cross-examination; the "real" expert on the opposing side who used AI the right way will quickly be recognized by a jury as more credible.

1 Like

Are you suggesting people self-diagnose with AI? Because that is the corollary argument. "I'm not a doctor but ChatGPT will tell me what's wrong. Huh… the diagnosis doesn't sound too bad to me so never mind calling my doctor." Or, "AI told me to take these herbs and roots and vitamins to cure what it told me I have. Good enough for me!"

But AI also "knows" incorrect "facts" published in medicine - and there are far more examples of incorrect medical publications than incorrect AppleScript publications.

Generally, finding an answer in computer programming means locating it in one of many books.

In medicine, finding an answer generally means figuring out which is the CORRECT answer given 10 different articles which offer 10 different “answers.”

1 Like

And about coding as well, especially something like AppleScript or JXA.

No - because most non-physicians do not have the experience or background to judge what source is valid when they inevitably see conflicting answers.

Plus many diagnoses overlap and thus require nuance and experience to know which one is correct. Chest pain can be due to more than a dozen minor causes as well as more than a dozen very serious causes. AI can help create that list of many possibilities; it takes experience and judgment to then narrow down that list.

Perhaps so - but in most cases it’s OK to use trial and error to see which one is correct. Medicine can have considerably worse adverse effects.

1 Like

Just as non-coders lack said experience, background, and judgment, not only to know whether the process is sound but also to be of any assistance when it comes to troubleshooting. "Just getting a result" seems like an extremely low bar to set.

BTW, everyone can do as they like in regards to AI, etc. That is a personal decision to make and there is no "right answer" about whether someone should or shouldn't use it. I'm just saying the arguments often circle back and bite their own tails. Many actual programmers have acknowledged AI is fast but often incorrect, or requires a ton of rules and guardrails to be put into place. It is not a "Here's a prompt… make my app." process. It is strongly directed and guided. (I actually just watched a video of two coders discussing this very topic.)

But again, people are free to choose the tools they please. There’s no requirement for consensus, for or against. :slight_smile:

1 Like

Agreed

Though the adoption curves are likely to be a bit different in licensed professions or business situations potentially subject to liability lawsuits.

Just getting some kind of result (usually not the one you expected, if you can even tell) seems to be the driver for many people using AI nowadays.

I am not talking about protein folding or atmospheric patterns, or new hormonal research. AI is terrific for that and many other things.

I am talking about instant gratification, and the sense that one can do simple or mildly complex things without putting much effort into understanding how they work. That's a kind of "poor man's hubris", if you will.

Disintermediation and Disruption are the contemporary antagonists of thoughtful learning. Break things fast. Become a billionaire in the process.

Snarl #2, thinking about Candide, and Leibowitz.

BTW, I did work with ML up until recently, and Bayesian probability. We were careful not to call it AI back then, but all that is gone now. It’s all about the hype, really.

4 Likes

And, 40 years later, I can't think of a real programmer I've met who didn't cringe at s***y code. It is in their nature, as much as language religious wars. Pseudo-code was, well, OK. They really want to know exactly what outcomes you want/need - and more often than not will try to convince you that you are wrong and are thinking about it very inefficiently. And most of the time they will be right.

2 Likes

TLDR: This all should be more about how a necessarily diverse forum communicates to and with each other (forum culture) and arrives at a protocol on how to treat user needs for particular ‘solutions’ (which can also touch on the need for scripting) … than about the wholesale horizons / ethics of (non-)GAI, expert systems in critical contexts, or the general fate of engineering culture(s) …

First, I think this is a really high-value discussion, and I learn a lot – especially about how the general social discourse about fundamental questions of AI (projective and analytical, re. history of ‘lived’ computer culture(s), STS-wise, in terms of ethical and normative even philosophical considerations, cultural diagnostics, etc.) is trickling down into individual positions and micro-discursive greenhouses, like the DT-forums. So, big thanks to everyone in here sharing views. I listen and sometimes learn.

But I generally feel this is a very high-level discussion. I enjoy it in a lot of ways. But I think the generalizing and ‘vested positions’ manner also has its pitfalls. And loses its contextual grounding with respect to critical things like some pragmatic guidance about how the community coalesces around shared and mutual practices given a dualistic situation:

– 1) the fact that some people can/want to (learn to) script and some never will;
– 2) the fact of computer- or 'AI'-assisted programming, and the routes this opens up to everyone – and in different ways – from pro-coders to companies to 'individual pro-sumer users';
– 3) the fact that there regularly arises the challenge for 'ordinary' (read here: 'non-coding') users to mediate the onboard capabilities of DT (including its very helpful provided script library) with a situation where these do not suffice.

The situation that I think should be the start of a user-forum discussion is that these individuals can either:
– turn to the community of 'script angels';
– let the problem remain unsolved;
– turn to a feature request (though there is no culture of acknowledging FRs here at DT);
– or turn to the hacking means modern state-of-the-art consumer technology provides to them (actually: pushes to them).

The answer can't be, as some people here convincingly lay out, "let's treat 'hacked code' as 'regular coding'" (as done by some – somehow – professionalized coders). I think the answer at the same time can't simply be "everyone has to learn to code". That is socially just somewhere between impossible and presumptuous. – I also wouldn't expect or ask anyone to do a course in social moderation or nonviolent communication, even though these by now belong to the standard repertoire of social interaction, and may even be critical to human survival.

Given that, I think @uimike hit the nail on the head by really rendering this – on top of layers like the question of augmentation, human-machine interaction, professional 'state of affairs' – as an issue of communication and (necessarily negotiated) identities in such a situation of 'encounter' (which really is what a 'forum', by its very name, is). This always also means collective negotiation and the need for a culture of mutual respect and some attention to diversity. And the attempt to create shared understandings and protocols, without forcing anyone into any other identity than the one he/she 'naturally' brings. (I think transformation of personality is important; but it shouldn't be the business of a software forum…)

In the collaborative contexts I value, all these challenges – as well as the new affordances, like AI-assisted programming as a special case of, hm… "democratized" expert systems – really are treated as a reason to deliberate a shared practice given a heterogeneous situation and distributed competencies. Where I come from, these are 'solved' by negotiating things collectively and mutually, given a shared set of overarching aims. Which I think in this case would be to maximize the efficient and helpful ways people can make use of DT, given different starting situations, work contexts, styles of doing things, etc.

The result of such shared deliberation normally – in the best case – would be a shared understanding (culture, protocol) of how to treat the use of AI-assisted coding as a potential fact in a situation that is mainly defined, first, by people trying to make the best (individual) use of DT and, secondly, by a cohort of people who (mostly) do not know each other personally and who comprise coders as well as non-coders, and some people in between. I'd say this is just the sociological reality of the crowd here.

So, in reference to prior discussion, and stating the approach of one ‘non-coding’ thorough DT-user, I can say:

• I value the script culture around DT and all that it provides, normally being very helpful to non-initiated users (even if at times a little in danger of being paternalistic and elitist in implied tonality; see examples given above and elsewhere). I also see the script layer as an additional layer, and in normal/standard cases people shouldn't be required/expected to write extra scripts when they enter the scene as users buying into GUI software.

• When I use ‘AI’-assisted programming, which I do only in rare cases, I don’t count/qualify it as ‘proper coding’, but rather see it as a hack – in a repertoire of these hacks we all do daily to get things done in systems we can’t fully master (and nobody really can do that). And I do not see/treat/evaluate it as using augmentation means in life-critical contexts (I think the discussion goes a little off the rails here). – I also see it as a test of what I can achieve individually, without using/relying on external sources, and a way to just ‘check’ things and new ways that might be useful and might not be.

• When I do so (hacking code in this way), I'd never expect anyone to 'fix' it, and honestly, I haven't seen people here "coaxing" others to do their work beyond any normal mutual exchange situation.

• I also think, fundamentally, the community overall has to cater to and find a meaningful, mutually respectful social intercourse vis-a-vis the facts that a) some people can code; b) some never will; and c) everyone still needs to – respectfully – talk to each other, with a view to maximizing what the community can wring from DT

… what would help here is a shared understanding of how this kind of (necessary) negotiation happens in terms of a culture (and maybe protocols) of constructive interaction. And mutual respect, also encompassing respect for different competencies lying with different individuals. That is – IMHO – the real challenge to a “forum” of users(!) in a situation of mixed competencies, capabilities and needs.


PS @jonmoore – sorry for picking a particular subset of the touched-upon questions, here strictly from such a particular perspective of a ‘non-coder’ and ‘ordinary user’. I am aware this is not 100% in alignment with the original intent of the thread. But I also think it aligns with a lot of the issues, (implicit) questions, and sentiments that are/were raised here. … and finally, it touches on the sheer reality of something like Claude Code now being generally available, and what it means for a DT-forum context. – I am very sure other aspects will be continued in the discussion, even if I share/insert a very particular highlighting here…

And virtually all professional coders I've worked with since the mid-nineties have a strong dislike for AppleScript, because the AppleScript mission from day one was to make coding no more difficult than writing pseudocode. The result was a mess because, well, we all know the reasons why! :slight_smile:

That's not to say that there aren't those who managed to write solid software with AppleScript. The publishing industry up until the mid noughties relied heavily on AppleScript for the middleware that joined the dots of their prepress requirements. And of course, DEVONthink ships with an extensive smorgasbord of AppleScripts that extend the application in many useful ways. So I'm not suggesting the non-existence of AppleScript experts who can tame its weird, idiosyncratic ways.

On the subject of the misuse of the term AI when what we're really talking about is LLMs, a specific form of machine learning: it would be remiss of me not to mention that we're discussing this on the DEVONthink forum, a product which has been marketed as featuring a sophisticated AI since the mid noughties; a definite case of pot, kettle, black, methinks.

I personally prefer the term machine intelligence, or even alien intelligence, and I do believe LLMs have proven themselves worthy of the term intelligence, as they're capable of feats that go so much further than predicting the next word/pixel. There's no proof they have an ability to think/reason in the way humans do (they don't have a mind); they hallucinate, get caught in circular logic, and show a propensity to reward-hack their way to the things we ask for, often to comic effect. But none of that means they're not a transformative technology. When you look at something like Google's latest generative video model, Veo 3, the model's understanding of the laws of physics is mind-boggling. And you'll notice that there have been no deep-fake hysterics in the media ref Veo 3, as Google's DeepMind are the nanny state of the major AI vendors.

Whenever I contemplate whether machine intelligence will become sentient any time soon, I think of philosopher Bertrand Russell’s recollection of how, in his youth, his grandmother used to dismiss metaphysics whenever it was mentioned with the witticism: “What is mind? No matter. What is matter? Never mind”. It always makes me smile and grounds my creative instincts.

Considering The New York Times is suing OpenAI, I thought their AI special over the weekend was very balanced. It wasn’t framed in dystopia vs utopia terms. It was more a case of, the genie is out of the lamp, so what does it mean for the fabric of society. This is a conversation we all need to be having, on an ongoing basis.

The New Yorker and The Atlantic have also featured thoughtful essays regarding the societal impact of AI recently.

3 Likes

Interestingly, it has been "democratized" on the work of others without them being asked. That's one of my main gripes with this kind of business model: they help themselves to the work of others, which is provided for free, and then they build their commercial "offerings" on top of that. In other contexts, one might perhaps consider that theft.

Of course, people can do the same: they see useful code and incorporate it in their commercial product. But that is not done on the massive scale at which all these A"I" thingies do it.

And then there's the other thing: It simply is not intelligent. Were it, it wouldn't need to scrape code at all – it would simply digest the programming language's grammar (if such a thing exists; AppleScript is thus excluded from that approach) and it would know how to write programs to solve any problems thrown at it. And were it intelligent, it would then test this code to see if it works.

Neither of those things happens, because there is no intelligence in LLMs. They rely on the work of skilled people to provide often half-baked code to others who have either no interest, no time, or no incentive to learn programming. Now, what happens when the skilled people stop publishing their code? Or when, as in the case of COBOL, there simply is hardly any code published and the people who speak this language are slowly dying out?

They aren’t expected nor necessitated. Scripting provides functions that are not available in the GUI. That’s all, and it’s the same for all apps I know that offer scripting – the app is perfectly usable inside the limits given by the GUI.

I’ve seen the “ChatGPT wrote this code and it’s not working, please help” situation several times here. Is that a “normal mutual exchange situation”? Someone pays for ChatGPT, and then they want free support here. There’s nothing mutual about that, either.

What about the respect for the work of those who provided the source for the A"I"s to generate their responses from? They’re not credited, but they’re expected to help if the machine-generated code doesn’t work or is slow or something else. I, for one, do not feel inclined to do that.

But I do provide scripts for problems that I find interesting or possibly concerning several people.

3 Likes

I think your comments are very fair. The thing about AI is that everyone and their uncle has an opinion; well, not surprising. I mentioned medicine: a close friend is deeply involved in using it in DNA/RNA/protein analysis, and is very enthusiastic about LLMs. But we are talking about (reasonably) controlled setups, and a lot of discipline. It still matters.

When it comes to AI making people obsolete, I also understand that many existing human activities are mostly drudgery, and I do wish (though I am skeptical it will happen) humans could dedicate themselves to more "noble" tasks. Elon Musk and many others have proposed a universal basic income; we may end up getting to that… they still need us to buy stuff.

For writing code, I'd rather skip the spaghetti entirely, and would hope natural language advances far enough that code becomes completely invisible, with the results judged on their success. What is tough is to exist in a space where it's neither one nor the other - getting AI to write shoddy code and trying to understand what happens.

As for the future of us all, I do freak out at the possibility of AI-led war games, but that is another discussion.

And yes, AppleScript sucks, yet AppleScript is great. :slight_smile:

1 Like

It’s worth going through the complete Claude Code documentation, but I’m linking here to the common workflows section.

These are, in the main, solid engineering workflows to help skilled programmers. It's a million miles from your description of AI-assisted coding.

If ever there was a case that proves the age-old computing tenet of "garbage in, garbage out", it's AI-assisted programming.

I think you are passing over my general and main argument about shifting focus to the way things are treated in a community of heterogeneous user bases. But you are repeating already-stated, more general convictions, often pinning them to me in an unnecessarily adversarial and personalizing way, or mis-attributing (very) general issues to me, turning them into a matter of personal engagement here.

Nevertheless, just to set straight what I have said – and not said:

Interestingly, it has been “democratized” on the work of others without them being asked.

I put 'democratized' in quotation marks. You seem not to recognize/acknowledge that, or pass over it.
Beyond that, we might agree on some aspects of what you are restating here very generally, but it's beside the point as to whether the more generalized use of 'AI'-assisted coding/hacking will be a reality in user communities (as in society in general). And it will be.

Or, to put it another way: what you are singling out here is the general characteristic of 'platform capitalism'. There is lots of discourse about that. Really, discussing that would also need to include things like Google search, or any customer-servicing system of any of the bigger corporations on the planet, as they are systematically engaging in this kind of cannibalization and subsumption

… of course, one can focus on that. But maybe a) not in the scope of this particular thread/discussion, and b) not as a personal argument between two individuals while one is trying to open the topic in a whole different direction.

I think in this context the point you are highlighting would be relevant (context-specific) to the topical discussion and 'readable', if you will, if what you want to say/propose to the DT user forum is something like this: 'The use of 'AI'-assistants is illegitimate and should not be accepted in the DT user context because the technology itself is simply illegitimate, because it's (totally) based on wrongful appropriation.'
I'd watch such a discussion with interest.
Otherwise, the implication of what you are saying/pointing to in the context of platform capitalism, here with a focus on AI, remains unclear in this context – in addressing me and my points, or what others, who also take 'AI'-assistants as a reality in future practice, have said.

And then there’s the other thing: It simply is not intelligent.

I haven't said that it's 'intelligent', or made any of my arguments dependent on such a label or ascription. It's an unnecessary sidetrack for the discussion about factual use of LLM agents (or however they are labeled). It's also missing/omitting everything I said explicitly about the differentiation of 'proper coding' and 'hacking' (in this context).

Personally, I think @jonmoore said the necessary and intelligent things about machine 'intelligence', where intelligent discussion has to go beyond such binaries as 'intelligent' (human) vs. non-intelligent:

One can also see intelligent people like James Bridle, Pascal Kaufmann, Ryan Young and others wrangling with that question…

They aren’t expected nor necessitated. Scripting provides functions that are not available in the GUI. That’s all, and it’s the same for all apps I know that offer scripting – the app is perfectly usable inside the limits given by the GUI.

I think I gave this much more context and put it in relation to the practical challenges people face when they reach the limits of what DT can do. This is where the need/discussion for a forum actually arises – whether for scripts or any other kind of solution. This is also why I put this in context with other possible ways of addressing problems people face, exactly because I think there sometimes is an over-reliance – in forum discussions – on solving things with scripts. But then, at the same time, not wanting people to "coax", or to 'machine-hack' scripts every now and then. For me this is related to having no clear way to make acknowledged "feature requests" (i.e. having a protocol) as an alternative way of addressing such challenges in use.
I think this situation runs into the danger of making people (feel) reliant on script solutions or on other individuals' precious willingness to put up the energy/interest to help out with their scripting capabilities. And it runs into the danger of making script-versatile people on the other side feel that any general request for a solution is 'taxing' them unduly.

So, you are kind of pigeonholing it with this reductionist quote – and a strawman/personalizing rebuttal. One can say DT is "perfectly usable"; I agree. One can also say that if it were "perfectly usable" there would be no need for a forum. So it's also not "perfectly usable".
All this really misses the point… especially as to the needs and protocols for a forum and a shared culture for negotiating such things, IMO.

I’ve seen the “ChatGPT wrote this code and it’s not working, please help” situation several times here. Is that a “normal mutual exchange situation”? Someone pays for ChatGPT, and then they want free support here.

Then feel free to call out those cases where they occur. I was trying to caution against throwing around such slippery labels (like "people are coaxing for this or that") in general terms; especially given the danger that anyone not AppleScripting now has to feel like a "coaxer" whenever he/she enters into an exchange about any scripting help, or any help for that matter.

Such generalizing labels applied in broad strokes should be used with very great care. Even more in an environment where people regularly point a lot of help- or solution-seeking users exactly in the direction of "AppleScript really is the solution for you; here…", …

Also, I don't see this forum as a "support forum". For that it would need different protocols, e.g. a protocol for acknowledging "support requests". It is about exchange and help, in my eyes. And I would be very careful about pulling out any global label for those people who do sometimes help with scripts, or for those who do not master AppleScript.

What about the respect for the work of those who provided the source for the A"I"s to generate their responses from? They’re not credited, but they’re expected to help if the machine-generated code doesn’t work or is slow or something else. I, for one, do not feel inclined to do that.

See above. I am not the correct address for that.

But I do provide scripts for problems that I find interesting or possibly concerning several people.

I always acknowledge help extended. As I acknowledge every constructive contribution in a forum.

A little tip for those who subscribe, or are considering subscribing, to ChatGPT+, the $20-per-month tier: if you install the desktop macOS ChatGPT app, you can integrate directly with the content of your application window in supported apps. This works best with IDEs for the moment, but I use BBEdit as a simple text editor, so you can use chat prompts suited to Markdown prose as well as code.

These are the apps that OpenAI lists as being compatible with this workflow. It's worth stating that BBEdit isn't on this list, but it shows up in the ChatGPT preferences, so I suspect that other text-centric applications will be compatible. This is another place where macOS 26 App Intents will hopefully extend the workflow further.

  • Apple Notes
  • Notion
  • TextEdit
  • Quip
  • Xcode
  • Script Editor
  • VS Code (including Code, Code Insiders, VSCodium, Cursor, Windsurf)
  • Jetbrains (including Android Studio, IntelliJ, PyCharm, WebStorm, PHPStorm, CLion, Rider, RubyMine, AppCode, GoLand, DataGrip)
  • Terminal
  • iTerm
  • Warp
  • Prompt

I mention this here as you can, for instance, use the DEVONthink open-in workflow to open a file in e.g. BBEdit, make some ChatGPT-informed edits, and save the changes directly back to the DT database. The fact that the workflow works with Notion means there could possibly be a way for the DT dev team to work with OpenAI to bring this workflow to DT directly.
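For the curious, here's a minimal sketch of that first hop, assuming DEVONthink 3's selected records and path scripting properties and a stock BBEdit install; treat it as an illustration, not a polished script:

    -- Hand the currently selected DEVONthink record to BBEdit for editing
    tell application id "DNtp"
        set theSelection to selected records
        if theSelection is {} then error "Select a record in DEVONthink first."
        -- "path" is the record's POSIX path on disk (may be empty for some record types)
        set thePath to path of item 1 of theSelection
    end tell
    tell application "BBEdit"
        activate
        open (POSIX file thePath)
    end tell

Because the record lives as a file on disk, saving in BBEdit writes the edits straight back to what DEVONthink manages, and the ChatGPT app can then work with whatever document is frontmost in BBEdit.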

The value of this workflow is that it's another way to bring AI-assisted workflows to DT without the need for API keys.

1 Like

IMO, the issue with that might be that natural language is ambiguous and changing. Programming languages (one notable exception being AppleScript) have a clearly defined grammar. Natural languages, not so much.

Even in this forum, it is often not clear what someone is looking for or what they did (not) do. Of course, throwing unclear messages at an A"I" might still give results, where human interaction just fizzles out because people's patience and willingness to ask yet another question are limited.
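To illustrate the contrast with a hypothetical example: a request like "clean up my old notes" leaves both "old" and "clean up" wide open, whereas the scripted equivalent has to pin them down. A sketch, assuming DEVONthink's delete command (which, as I understand it, moves records to the database's trash rather than erasing them); the 365-day cutoff is an arbitrary choice for the example:

    -- "old" made precise: not modified within the last 365 days
    set theCutoff to (current date) - 365 * days
    tell application id "DNtp"
        repeat with theRecord in (selected records)
            -- "clean up" made precise: move matching records to the trash
            if modification date of theRecord < theCutoff then delete record theRecord
        end repeat
    end tell

An A"I" has to guess at exactly those decisions when the message is unclear; a human helper has to ask yet another question.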