The question is whether (a) the customers are willing to pay what it actually costs to run them once they aren’t being subsidized by investors, or (b) the costs can come down to more reasonable territory.
Multiple companies are considering building their own nuclear power plants to handle their data centers’ energy load. That’s not a sign that they think they’ve solved their efficiency issues.
This question touches on so many complex and important subjects that go way beyond AI investing (world markets, economic theory, social science, climatology) that if we got into it here, this conversation would be a candidate for the most off-topic thread in recent history…
But if I have to venture a simple answer, then no. At least in OECD countries, we are definitely not willing to pay what it really costs to live the way we do.
That’s a huge IF at this point. I’ve lived through decades of “This will save you so much money!!!” pitches. The vast majority do not pan out.
Of course the vast majority of businesses fail within a few years - but that does not stop people from trying as long as there is a viable pathway to profits for some.
There is no question that for selected businesses, AI can ethically and profitably grow the business in a win-win fashion. The fact that many businesses attempt this and fail will not deter future entrepreneurs from trying to be the next ones to profit.
Current-generation tools do sophisticated pattern matching, but they can’t reason. Apple recently released an academic paper on this.
I’ve played with LLMs locally on my machine with LM Studio. The challenge with your documents will be the context window of the model. When you analyse your docs, they get loaded into the model’s context window. Most models have a limit of 128,000 tokens. Apparently that is roughly 50 pages of PDF.
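If you want a ballpark before loading a document, you can count tokens yourself. Here’s a minimal Python sketch using OpenAI’s tiktoken tokenizer; token counts differ between models, so treat the number as a rough estimate, and the file path is just an example:

```python
# Rough token count for a plain-text file, to see if it fits a 128K window.
# pip install tiktoken -- counts vary by model, so this is only a ballpark.
import tiktoken

def estimate_tokens(path: str) -> int:
    text = open(path, encoding="utf-8", errors="ignore").read()
    enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-style encoding
    return len(enc.encode(text))

n = estimate_tokens("mydoc.txt")  # example path
print(f"{n:,} tokens -> {'fits in' if n <= 128_000 else 'exceeds'} a 128K window")
```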
FWIW, my M3 Max has 64GB of RAM; I can run local models to do this. However, when I analyze anything of real length, I hear the fans spin up.
Claude 3.5 Sonnet has a context window of 200K tokens, which is about 300 pages. No local LLMs have such a large context window.
If you have a longer document, you can write an app, script, or other automation that splits the document and then combines the AI responses from the multiple parts. A rough sketch of that idea is below.
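Here’s a minimal Python sketch of that split-then-combine (“map-reduce”) approach. `ask_llm` is a placeholder for whatever API or local-model call you actually use, and the chunk size is arbitrary:

```python
# Split-then-combine sketch for documents that exceed the context window.
# ask_llm() is a placeholder, not a real library function -- wire it up
# to your API client or local model of choice.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("connect this to your model")

def split(text: str, chunk_chars: int = 12_000) -> list[str]:
    # Naive fixed-size split; a real script would cut on paragraph breaks.
    return [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]

def summarize(text: str) -> str:
    partials = [ask_llm(f"Summarize this excerpt:\n\n{chunk}")
                for chunk in split(text)]
    return ask_llm("Combine these partial summaries into one summary:\n\n"
                   + "\n\n".join(partials))
```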
Agreed. Somehow, I forgot to add that to my message. Further, OpenAI seems to have no interest in safety, etc. At least Anthropic appears to have principles (aside from making the most money), based on their recent statements.
And Anthropic is willing to sign a HIPAA Business Associate Agreement (BAA) - that’s a huge plus for anyone wanting to use AI with medically related content, and it suggests Anthropic truly understands and acts on security issues.
Having just spent an extremely frustrating weekend dealing with customer service chatbots, I can say that a company’s willingness to pay for a technology doesn’t necessarily have anything to do with how well it actually does the job.
Sometimes it’s illustrative to think of capitalist organizations as the modern nobility who, like the nobles of the feudal past, have a tendency to pay fortunes for exotic things which are “fashionable” but otherwise of little practical value to their serfs and themselves alike. Every once in a while, a nobleman bankrupts himself through his appetite for fashion.
(I know, many aspects of company affairs are dominated by practical concerns. My point is, one can be a practical person and a tulip maniac at the same time. The latter does not register in one’s self-image until after the tulip market crashes.)
There are of course failed projects in any business.
But it’s not like AI is snake oil. There truly are real examples of actual businesses where AI improves the quality or efficiency of their service/product in a meaningful/ethical way and that drives increased profits.
The fact that others fail is irrelevant to the point. The fact that there are bona fide successes will drive others to try to emulate them.
I wanted to share my thoughts on this topic. I know the developers don’t give out hints or timelines about future releases, but I’m keeping my fingers crossed that they’re working on some kind of AI/LLM integration.
As someone mentioned, this field has come a long way faster than I thought. There are now several AI platforms you can run locally and privately on your computer. Sure, they might need more powerful computers, but that’s not surprising.
It was time to upgrade my 2018 Mac mini, so I bought an MBP with an M4 Pro chip. Suddenly, I find myself delving into local AI sooner and more eagerly than I thought I would. I’ve been looking at several local LLMs and have installed a few. Right now, LM Studio running the MLX Llama 3.2 3B Instruct model seems to be working well for me. I’m also looking into IKI.ai, as this looks like it has potential.
However, the main problem with these local LLMs is that I have to export my data from DTP and then import or upload it into the AI/LLM. That’s not too hard, but it means I’d have to do it every time I add something to DTP. Not very efficient, and it could be solved with an LLM/AI component directly in DTP.
I love DTP (and I really do), but I’m starting to feel the urge to use all that knowledge. So strong, in fact, that if I found a local AI that allowed me to collect and store documents in collections, groups, or containers like I do now (locally, privately) with DTP, AND gave me the ability to use AI to search and query all that data, I would probably switch to it. I’m sure there are others who feel the same way, which is why I’m hoping there’s an LLM/AI component in the near future for DTP. I’d be willing to pay more for that ability, and even though I hate subscription-based software, I’d probably pay it (assuming it was reasonable).
As I have posted here, I would appreciate a future DTP solution that offers artificial intelligence modules, such as an LLM, as an add-on, letting users decide whether they want/need it.
I’m finding that I can do most of what I need by using DT3 to find the documents, combining them into a PDF, and then just copying the PDF into notebooklm.google.com. (The combining step can even be scripted; see the sketch below.)
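If you want to automate the combining step, here’s a hedged sketch using the pypdf package; the folder path and filenames are examples, not anything DT3-specific:

```python
# Merge a folder of exported PDFs into one file for upload to NotebookLM.
# pip install pypdf -- the paths below are examples; adjust for your setup.
from pathlib import Path
from pypdf import PdfWriter

writer = PdfWriter()
for pdf in sorted(Path("~/Desktop/dt3-export").expanduser().glob("*.pdf")):
    writer.append(str(pdf))  # append each exported document in order

with open("combined.pdf", "wb") as f:
    writer.write(f)
```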
Another interesting option to consider is https://aistudio.google.com/ alongside DT3. It can see your screen, and you can talk to it.
As AI tools become more powerful, it may be that the developers of tools like DT3 will no longer be needed to improve UI features. They will instead start to build their apps more as APIs, letting the LLMs write code for you so you can build your own user interfaces.
We already do this to a large degree using tools like Keyboard Maestro and Shortcuts, but I think it will move faster; and since DT3 already supports AppleScript, you can already build out a lot of your own work without assistance from the DT3 developers.
Furthermore, because DT3 maintains all the files in an open format, they are available for search inside the file system and therefore accessible by other tools outside of DT3 (even a short script, like the one below).
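As an illustration, here’s a minimal Python sketch that walks a folder of documents and collects plain-text content for any external tool or local LLM to consume. The path is an example (an indexed folder, or wherever your files actually live); adjust it for your own setup:

```python
# Walk a folder of documents maintained in an open format and gather
# plain-text files that external tools can consume. ROOT is an example
# path -- point it at an indexed folder or your own document tree.
from pathlib import Path

ROOT = Path("~/Documents/dt3-files").expanduser()  # example path

def iter_text_files(root: Path):
    for path in root.rglob("*"):
        if path.is_file() and path.suffix.lower() in {".txt", ".md"}:
            yield path, path.read_text(encoding="utf-8", errors="ignore")

for path, text in iter_text_files(ROOT):
    print(f"{path.name}: {len(text):,} characters")
```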
Imagine what you would do when there is a fatal/critical bug in your GenAi-written “interface”, which the AI can’t solve by itself. You post the code, a thousand lines strong, on a forum site, hoping that some human experts who actually know how to debug will come to your rescue. Instead you get a bunch of 300-word replies, very polite and totally useless, written by the same GenAI service you use. The replies are then crawled for training the same AI in a faraway data center. What great times we live in.
(Substitute “AI” with “social media”, and “code” with “conspiracy theory”, and then we have a real-life example.)
Good software comes with reliable support. That is not, and will not be, guaranteed by GenAI-authored programs. If you, the human being, are a programmer capable of fixing AI-written code, you may as well use that time to write the code on your own.
It’s possible to learn from AI-written code so that you are then able to fix it yourself. Or at least so you understand it enough to ask the AI how to fix it.
I have found AI coding to greatly increase my ability to learn new coding skills/nuances.
A really nice feature of most AI coding apps is the ability to post some code and ask it to explain to you how it works. That has been mind-opening for me.
Provided that you already know how to code. If it’s all gibberish to you (“I don’t have time to learn coding”), there’s nothing to learn from code.
But then you do know how to code, and you are keen to learn. Not to mention that you’re probably using AI for Python code. Try more exotic things like AppleScript or Hugo templates: it often behaves like a programmer on mushrooms.
To write code (or essays) on one’s own does not exclude learning from others. I went through hundreds of documents written by others for my own dissertation, and you most likely did the same for yours. Taking inspiration from AI is a very different thing from …
Yes - in a former life I coded in PL/I, assembly language, Pascal, and others. AI is particularly helpful for learning modern coding languages, because one thing that has fallen behind is documentation. It used to be that for any given language you could pick up a book, read it cover to cover, and know the language. Now with JavaScript, for example, there are so many different flavors/libraries/dependencies that it is hard to even know what to read to learn the language.
Claude does pretty well with JavaScript and AppleScript, except it may try to use deprecated features or not be aware of new/recently changed features.