Claude Code is now available to $20-per-month customers.

Have you come across the Elephas way of integrating DT into its LLM-interaction architecture, called ‘Super Brain’?

It not only blends in well with macOS (and now iOS too) to the point of feeling ‘native’, but it also lets you set up very dynamic, versatile RAG-style constellations, with direct access to any (set of) DEVONthink databases, but also to other parallel(!) integrations (Obsidian, Notion, Logseq, …), any curated set of folders, web search, YouTube, custom snippets and notes, etc.
– So, basically you can throw together any kind of ad-hoc PKMS with it, with DT as part of the constellation…

One of the big plusses, aside from this layered approach and the ease of integration with the macOS workspace, is that its results provide references to all the relevant source ‘contexts’ (think ‘paragraphs’) used, alongside the LLM output (kind of making the vectorization user-readable and traceable, I guess… but also similar to Perplexity’s valued approach).
Also on the plus side: you can BYOK to it, almost without limits. This is particularly nice, as you can include some general AI wholesalers like OpenRouter (which I wish were possible/available in DT as well).

I wonder how you would compare this to the Raycast use-case, given your experience – with particular view to DT-interactions?

– BTW: I think your case for interaction design being part of the ‘ergonomics’ and thus the ‘intelligence’ of any app architecture is 100% convincing, and it seems very clear to me looking at all the different approaches in different notetakers, PKM apps, etc. And I think the exemplary case of using Siri (i.e. natural language and voice) for any of this vs. tossing together an AppleScript is quite compelling and clear to me as an argument, especially with a view to real user bases… Of course, I am only speaking for the peripheral group of non-coders and non-scripters here :smiling_face:

1 Like

I’ve been keeping tabs on Elephas but haven’t jumped on board as it doesn’t fit my specific needs.

Having built a constellation of AI services, I find there’s too much duplication with what I already have. E.g. whilst Super Brain is definitely a compelling feature, I’m already happy with the service levels I’m getting from my Gemini Pro account for NotebookLM. And I already use SuperWhisper Pro for all of my audio transcription needs.

And much as Elephas enables one to sign in to multiple productivity apps like DT, the end interaction model is still a third-party chat window, so the actual user journey is very similar to Raycast’s.

100% agreement that OpenRouter API key usage in DT4 would be great.

I use an OpenRouter API key in Cursor Pro, and the Cursor Pro documentation states that they’re able to support OpenRouter API keys because OpenRouter follows OpenAI’s API key formatting standards. This will hopefully help matters with regard to OpenRouter API key support in DT4 (hopefully a DT team member will see this post).
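Since the argument hinges on OpenRouter being OpenAI-compatible, here is a minimal sketch of what that compatibility means in practice: the request headers and body follow the OpenAI chat-completions shape, and only the base URL and model identifier change. The endpoint constant, demo key, and model name below are illustrative assumptions on my part, not details taken from Cursor’s documentation.

```python
import json

# Illustrative OpenAI-compatible endpoint; only the host differs from OpenAI's.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(api_key: str, model: str, prompt: str):
    """Return (headers, body) for an OpenAI-style chat-completion call."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # same Bearer scheme as OpenAI
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,  # OpenRouter-style id, e.g. "anthropic/claude-sonnet-4"
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return headers, body

headers, body = build_request("sk-or-demo-key", "anthropic/claude-sonnet-4",
                              "Summarise this note.")
print(headers["Authorization"])  # Bearer sk-or-demo-key
```

This is why any client that already speaks the OpenAI wire format can support OpenRouter keys almost for free: swap the base URL, keep everything else.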

Much as I tend to stick to Claude for programming tasks, I like the fact that not only does OpenRouter provide access to multiple AI models through a single API key, but in many cases it can also be more cost-effective than purchasing individual API keys from each model vendor. And I’d go so far as to say that OpenRouter is even more useful in a “second-brain” repository like DT, as AI model choice matters even more for non-programmatic use cases.

My Claude Code access is via my Claude Pro desktop licence, and this doesn’t provide access to Claude Opus; there are context-token-heavy, research-heavy agentic tasks that are far better serviced by Claude Opus, and OpenRouter comes to the rescue here. You can obviously still access Claude Opus via the desktop front end, but programming tasks are better handled in an IDE with an integrated terminal. Cursor Pro provides access to Claude Opus, but you pay double the credits to use any of the Claude thinking models.

In regard to a direct comparison of Raycast Pro AI and Elephas Pro+, I think it’s a matter of “horses for courses”. In terms of ROI, Raycast has more flexible token limits. Even the most expensive Elephas Pro+ account has a fair-usage limit of 4 million tokens, which may sound like a lot, but you can chew through that in a matter of days with the new context-heavy thinking models from Google, Anthropic and OpenAI. Raycast Pro AI has no hard fair-usage limits; instead, you have generous limits based on hourly/daily usage windows, which reset per usage period.

For the average user who wants to keep things simple, either Raycast Pro AI or Elephas Pro+ seems a strong contender, but when considering costs, API usage costs (either single-vendor or via OpenRouter) need to be factored in on top of the $18.99 monthly subscription for Elephas Pro+. Incidentally, there’s a plugin available for Raycast that provides access to OpenRouter.

But whilst Raycast is very simple to configure and use, it’s still a tool for power users if you want to push its capabilities to the max. Elephas, on the other hand, provides far more out of the gate, so it might prove the better option for users less familiar with launcher applications and/or the range of typical AI workflows.

2 Likes

Thanks for the clear explanations. Especially helpful is how you provided DT-relevant context for some of the decisions you’ve made.

Bookmarking this as it’s going to be a good place to refer to when I dive into the DT4 and AI miasma.

2 Likes

hey @jonmoore – thanks for taking the time to share your broad experience with these tools; your view is always deep and encompassing, with a good eye for user perspectives. Appreciated.

I can follow everything you say.
I come from the position of having an early lifetime deal for Elephas Pro, so I didn’t really mind the price horizon. But it’s good that you filled that in.
– Just to say, this of course becomes relative for anyone bringing their own keys and accounts to this.

Otherwise, I like how you underline the slightly different profiles that Raycast and Elephas bring to the table. I kind of implicitly stated that by emphasizing Elephas’ strong leaning towards being a kind of virtual PKMS.

For others looking into this for decision help, all that remains for me to add is to underline how well Elephas’ Super Brain, as well as its other ‘writing + ideation help’ features (smart reply, repurpose, change tonality, writing continuation, grammar checks, etc.), are embedded into everyday use, not only of macOS but of every app one uses on the Mac. Meaning, leveraging all these AI capabilities is as easy as highlighting any text and firing any of the Elephas functionalities via a shortcut. I’m not sure Raycast, which I only use in the standard version (with great pleasure and satisfaction), can do this. But this is kind of what one would probably expect Apple’s onboard AI (Apple Intelligence) to do at some point, in exactly this manner…

Again, thx for taking time to enrich the community discourse in the extensive way you do!

1 Like

I like this call on ninjas very much :slight_smile:
I have a few very smart colleagues who do not understand my struggles with scripting (in bash, in particular). They are too good, too smart, too bright to see that “a common guy” like me is not a seasoned scripting developer. But with AI I’m back in the game! Now I can get a script from AI and just read it, understand it and request corrections, which is several times less intellectual burden than writing it from scratch.
I hope that soon automation of any application may be so much easier.

No one is born a programmer. Nor a speaker of a foreign language, a pilot or a gardener. Being proficient in anything requires learning the trade. Which might incur an intellectual or physical burden.

The interesting question is: what will AI “learn” from if there’s no original content anymore because everybody just uses LLMs to rearrange existing content? Will then AI just read shell scripts written by AI and echo them back in an endless feedback loop?

3 Likes

We have some ideas in mind for DEVONthink 4.x.

6 Likes

I think you have a misconception of how project co-creation works with an AI tool such as Claude Code.

Read through the Claude Code best-practices post and you’ll hopefully see that Claude Code is a professional engineering partner, not a cheat that simply scrapes the internet for pre-made solutions.

Moreover, an accomplished programmer can teach the AI with custom rules, custom prompts and/or custom agents, so the AI becomes an extension of your high-level knowledge and experience. Just like a mechanic or doctor or artist or engineer who creates a new tool or new technique.

2 Likes

Where do accomplished programmers come from, though, if novice programmers are told (or required) to let the AI do it?

5 Likes

Unpopular prediction: the gap between accomplished and novice programmers will diminish when they all wield AI tools for writing code. I don’t mean today’s Cursor or Claude Code; I’m thinking of their successors.

The expertise in demand will mean that the professional path, after cutting your teeth coding with AI, leads closer to the business side: architect, analyst or project manager roles.

It will not happen overnight, and not exactly tomorrow, but the trend is there, and the IT industry will demand fewer coding wizards because the automated tools will be good enough.

1 Like

I think this is a real misconception when it comes to the engineering effectiveness of “AI-assisted/co-intelligence/pair-programming” type workflows. Accomplished programmers will still come from the same myriad places they have for the last 40+ years. Having a solid computer science background to at least Master’s level is still a big draw as an academic background. Equally valued are those whirling-dervish types who are self-taught to a freakishly high bar before they reach their 20s - oft found these days, in the flow, at nightclub events where all the AV entertainment is live-coded from nothingness. :slight_smile:

In seriousness, OpenAI’s Codex and Anthropic’s Claude Code have been created mainly for enterprise use. $200 per month, per user, certainly isn’t a consumer offer. Both can be used with a $20-a-month account, but that’s not where the real action is.

The folk I really feel for these days are those educated to undergraduate level in computer science type degrees. If they’re not studying at postgraduate level (Master’s/PhD), they find themselves entering the marketplace just as Big Tech has gone through the largest rounds of layoffs since the dot-com crash, so they’re now competing for junior positions with a large swell of experienced engineers. Boosting your skills with AI-assisted chops is most likely a quality that prospective employers will value.

The gravy train that has existed for middle-weight permalancers (earning approx. twice that of full-time staffers) since the mid-nineties is most probably now over. Too right on that front; the gold is no longer at the end of that particular rainbow.

When it comes to AI for programming purposes, you don’t have to be a “doomer” or “boomer” exclusively. I’d class myself as a centre-left “zoomer”, to stay with the AI lingo. Meaning that I’m always aware that LLMs are at their root a modern-day continuation of Eugene Wigner’s assertion of “the unreasonable effectiveness of mathematics in the natural sciences” - they’re an exceptionally effective statistical trick. The transformative technology of the Transformer backbone of their deep-learning neural networks means that they’re something of a black box, which I think is a definitive twist on “unreasonable effectiveness”!

One could easily accuse the AI researchers at Google, Anthropic and OpenAI of leaning too heavily into anthropomorphising their technologies, but I’ve always found that it’s mainly us users who are most guilty of that. Those same AI researchers are beginning to peel the layers away from their deep-learning black box, and one phenomenon they’ve explored over the last six months or so is a concept they’ve labelled “superposition” (not to be confused with quantum superposition). It’s phenomena of this type that help explain how LLMs have progressed, in a very short time-span, beyond being mere stochastic parrots. BTW, if you’re looking for a healthy antidote to AI hype, Emily Bender, the creator of the term “stochastic parrot”, put out an ace book a few weeks back called The AI Con. Whilst I believe The AI Con can be a little too cynical in places, I still believe it’s an essential read, as you can only form balanced opinions of your own if you’re willing to engage with multiple viewpoints.

To come back on point regarding the use of tools like Claude Code, Cursor and Codex: superposition is one of those AI concepts that makes even more sense in the context of agentic AI-assisted programming workflows. In brand design there’s a concept called the creative brief, and a well-researched, expertly written creative brief is golden. I use a similar approach when working with Claude Code; it’s the antithesis of a chatbot workflow. Forget notions of “vibe coding”; you’re often far better off with a more structured approach. Sure, as the project progresses you’ll find yourself nudging things along with chat feedback, but if you don’t kick things off with clear instructions, you’re wasting those super-expensive frontier-model tokens.

And if you happen to be an accomplished programmer, one of the core benefits you’ll get from using these tools is the removal of drudgery - e.g. Claude is superb at adding meaningful comments to your handcrafted code. I could go on, but as an accomplished programmer, I’m sure you can think of myriad forms of drudgery that would be ace to automate away.

5 Likes

That may well happen. I still wonder about your very last sentence. “Good enough” - I suppose some good-enoughs are good enough, but not all, maybe?

This is an issue faced by many professions. Airplane pilots struggle with the question of how much to monitor the plane on autopilot vs how much to hand-fly it because if/when the autopilot breaks or there is some other emergency, the pilot needs to be proficient enough to do it all himself.

That said - I think there are all sorts of reasons and paths to learning programming. For sure someone pursuing a full-time career as a programmer needs to be fully capable of manually coding; among other reasons such a professional programmer needs to understand every nuance of the code generated by AI.

But “power users” or “citizen programmers” are a different category. There are lots of people who are not professional programmers but who would benefit from custom apps or custom automation scripts to address specialized use cases in a given industry or a specific business. They do not write software for sale but rather create personal software tools for themselves or “internal tools” for a very small company. AI can be really useful in such a situation. Such a “power user” would benefit immensely from understanding the big picture of algorithmic thinking. But such a user does not need to master the fine details of syntax and software engineering to nearly the same degree that a professional programmer must.

4 Likes

Well said. These users might typically wish to create a macro for Keyboard Maestro, a plugin for Raycast or an extension for Alfred. With the new changes to Spotlight and Shortcuts in macOS 26, the creation of personalised automation workflows will be an equally attractive proposition.

The use of co-creation AI workflows has great potential for those you describe as citizen programmers. These are folk who have the smarts to understand programmatic thinking but who lack regular practice, as they may only need to create fewer than five such automation workflows each year. The specifics of the programmatic patterns across the range of automation languages/APIs will often require researching the necessary details, which is far easier said than done.

If AI-assisted co-creation can help solve this problem for the vast majority of automatable apps on macOS - bring it on.

4 Likes

4 comments:

First, I am sure AI did not write the autopilot programs. It might one day, but how close is that day?

Second, while it is empowering for many users to create code with AI without much introspection, at the same time it can create difficulties when things don’t work, or don’t work as expected, or, even worse, when the user is unaware that it is not working as expected. That assumes the user even knows clearly what they want. Some things are easy: I want to “use an AI summary to title every file” (or something like that). Sometimes, though, the goal is much fuzzier, and traditional programming forced one to make precise what natural language leaves vague. Wittgenstein excepted.

Third, users creating AI-assisted code will annoy the heck out of actual programmers: how can you help when your gut churns at the spaghetti code? That was already a problem before AI. I once had an architect say we should toss the entire code of an app because it was too messed up.

Fourth, yes, not everyone wants to be a programmer, or a writer, pilot, surgeon, photographer, etc. Understanding some essentials is good, though, if we expect civilization to survive. I suppose education was in part supposed to do that, but nowadays nobody has the patience.

I end with a snarl.

1 Like

The point isn’t who wrote the autopilot code. Rather, the analogy is that an autopilot can make a pilot’s skills appear much greater than they really are, such as when flying in low visibility or at night. But in reality the invention of autopilots did not make pilots obsolete; the pilot is still needed when the autopilot fails.

So what? That’s what gives him a job.

Long, long ago I was a programmer. Nowadays I write quick proof-of-concept stuff myself or use AI to do it. If an idea seems particularly promising and helpful for my business, then I hire a programmer to turn it into something more polished. I haven’t yet found a programmer to be “annoyed” at my amateur first pass. If nothing else, it serves as a very useful mockup or prototype to show what I am aiming for in a finished app. That’s a win-win for all.

1 Like

That is precisely the point.

Q.E.D.

The advance of AI is “disruptive” instead of slow steady progress. For this reason, I see it as highly speculative to state that

To compare the development of autopilot software here is to compare “slow, steady” development with the impact of “disruptive” AI. The last “disruptive” moment in computer technology was when the internet took off, and to me it looks like the impact of the internet will be small change compared to what is coming with AI.

I think any statement about what the next few years will look like is like consulting a crystal ball.

Progress is sometimes good, not always. There will be many use cases for AI in the future. But we are at a stage where huge changes are coming, and I think it is OK to be a bit scared, to question it, and to try to figure out what is really beneficial.

I do not think there is anyone who knows how the next few years will be impacted by AI. I am a bit sceptical when I read statements about what is going to happen when we are in a “disruptive” mode.

3 Likes

Obviously, if you hire a programmer, then having a mockup or weak prototype is useful, certainly more useful than arriving with no idea at all. But that is not the situation under question here.

The issue is people getting code generated that (1) works poorly or in unexpected ways or (2) doesn’t work at all, and then trying to coax others to fix what the AI wrote. I’d bet money that if you handed a programmer your code, they would politely smile, then proceed to ignore it and do it the way they would do it.

6 Likes