Experimenting with OpenAI API for automatic classification and renaming

Sure you can - you just have to use it for the right purposes, with an understanding of its capabilities and limitations.

Throwing it out now would be like throwing out the Internet circa 1995.

I have absolutely no doubt that the opportunities that will arise from AI are just as profound as those which arose from the Internet.

Why are you using a computer now to construct your post? It’s just 1s and 0s; what use is that?

You are vastly simplifying the current AI offerings. Much of what you are suggesting about the limited applications of AI applies mostly to the basic web-app consumer version of ChatGPT. You are not considering the numerous ways that AI limitations can be, and have been, overcome with 3rd-party apps and APIs accessing GPT-3 and GPT-4 rather than the ChatGPT web app. You also appear not to be aware of the added capabilities of Bing AI Chat and its 3rd-party API.

And that’s not to mention the 3rd-party plugins for consumer-oriented ChatGPT which are about to be made available.

It’s way more than just “word completion.” While I would not for a moment use it to generate original work, nor use its output in an unsupervised fashion, GPT has a stunning ability to make suggestions on the big-picture structure of a document. It can offer counter-arguments which make me think at a high level about professional issues I have studied for 30 years, and it can summarize documents and ideas in an extremely useful, user-friendly way.

Piqued by this post, I asked ChatGPT to develop a few scripts for DEVONthink. None of them worked particularly well, and it seemed to make up what it thought were DEVONthink-specific commands. Script Editor did accept the formatting without issue. Unfortunately, I don’t know AppleScript well enough to debug them.

I have found a little more success in developing bibliographies with it, although it still “hallucinates” about 20% of the time and confuses similar names. Its depth in journals is poor.

I will continue to poke around with it but I am disappointed by the LLM’s performance with a simple scripting exercise.

1 Like

Snake-oil salesmen with deep-pocketed donors have existed since well before the term was invented, marketing stuff that promises to do anything and everything you think you need (and wash your dishes too, if you want). I fear not their existence per se. Having been in higher ed for a few decades, and having watched the trend in students’ learning capabilities over that time, I fear instead that we have relatively fewer rational, critical thinkers today to combat the onslaught that is standing at our door.

But then, I have also always been naive, either about my generation’s ability to overcome its own self-serving, silly impulses against a backdrop of bigger-picture issues, or about the true nature of the danger we must go to battle against.

In summary, your concern resonates, but for a different reason than you may intend. And I am not sure whether your concern will simply resolve itself anyway, either because we have enough rational, critical thinkers around or because the concern is simply as hyped up as the benefit.


JJW

@DrJJWMac - Have you considered that the opposite may be true, i.e. that GPT is one of the best tools to come around in a generation - maybe a lifetime - specifically to help rational, critical thinkers?

Of course there are many people who will use GPT in unwise ways, as is true for all tools (both software and physical in nature). Of course the media will hype its capabilities and emphasize its dangers in particular.

All that aside - let’s not worry about those who misuse the technology. Let’s only focus on rational, critical thinkers. In fact, let’s only focus on you.

What field of academics or education are you involved in? Would it not be helpful for you to have a new tool into which you insert a paragraph, an idea, or a manuscript - and then instantly get on-point, constructive feedback? Would you be interested in a “spellchecker” which not only corrects spelling but also offers suggestions on much bigger concepts, themes, and writing methodology, both for fiction and non-fiction? Would you be interested in a software tool that can read your work before you present a lecture or submit an editorial and challenge you with both concepts and references which refute your positions?

I spend my professional life evaluating the rhetoric of academic arguments in both law and medicine. I write very detailed reports in which I express opinions that I later need to defend to a scientific/academic standard in court. I assure you that I ultimately “own” every argument I make in a given case and have read every citation I offer. That said - GPT has a remarkable capability to help me think through an argument, anticipate opposing questions, and satisfy myself that I have explored all sides of an issue for my clients, such that my conclusions can be defended equally well in a classroom or a courtroom.

I submit to you that such a tool is not a danger to a rational, critical thinker; rather, it is one of the best tools mankind has ever invented for academics and thinkers of all stripes.

P.S. Just for fun I submitted the above to ChatGPT and asked for thoughts to strengthen or otherwise better express my views. It responded with a pretty good summary of my much wordier post. This is a good example of how the underlying intellectual work of the reply is mine but GPT can help me improve my writing and presentation - nothing wrong with that at all.

1 Like

My concern is not that the tool is now available to rational, critical thinkers, who by nature can be presumed to train themselves on how and where to use it to its maximum benefit and least detriment. My concern is my perception of the relative dearth of such thinkers, or of such thinkers with such ethics, compared to the relative abundance of hackers with lazy or shady ethics.

The AK-47 may be the best tool for trained professionals going to war. It can sure leave a hellacious mess otherwise. The equivalent here is that trained professionals will, at some point, be cleaning up behind an AI-generated sh*tstorm of postings so convincingly real that only the few deep thinkers will spot the falsehoods or know well enough how to get out of the mess.

I intend to try the AI tools. I agree that they should be revolutionary in their positive benefits to my workflow, because they will help me master the abundance of information that I have to process to gain not just knowledge but also understanding. To bring this back around to this forum, I wish that DEVONthink would offer, if not an immediate “we will do this” notice, at least a more positive statement that they hope to support a way for non-tech-savvy users to do so.


JJW

2 Likes

You don’t want to start such a discussion, do you? :slight_smile:

The biggest danger with regard to ChatGPT is naive use, or use merely out of convenience, without any critical distance. The same is true for the Internet and especially social media.

Of course, there are various possible applications for ChatGPT, which we also already have in mind.

5 Likes

Well, that’s going to happen no matter what. Yellow journalism began over 100 years ago and has evolved into all sorts of forms.

I think we should be concerned here not with those uses (which are really a question for society/government) but rather with the quite notable upside potential of the technology when used by a “rational critical thinker” as @DrJJWMac refers to that audience.

Is it just me? ChatGPT reminds me of Star Trek TOS, the episode Court Martial.

I feel renewed kinship to Samuel T. Cogley.

Thank you all for the wonderful, most interesting thread. I am not a programmer – I taught myself Python just to play around and, I hoped, to better understand what programming and coding are. The discussion in this thread was clear, well stated, and focused on significant issues. The examples and illustrations enhanced the discussion.

My professional work focuses on the history of philosophy and I use DevonThink to store, organize and access scores of texts in several languages that often address topics in metaphysics, ontology, epistemology, etc. Those of you with an interest in these topics know that questions of “mind,” “understanding,” “meaning,” are the grist of much philosophy. Obviously questions about AI often add new perspective to these enduring questions.

At this point, regardless of the (possibly bogus) possibility of machine sentience, I don’t see AI doing genuine thinking. It collects information and data, sorts and organizes same, makes matches with ordinary-language tropes, performs long and difficult calculations, and of course executes instructions. But there are other types of mental activity to which, it seems to me, AI processes are more or less unrelated. The difference between what AI does and what I’m calling “thinking” is probably greater than the difference between the skill sets needed to play golf and basketball.

Given what I understand about what AI can potentially do, I would think translation between natural languages might be something we could eventually expect AI to do with excellence. But that’s where the contrast between understanding/thinking and calculating/problem-solving becomes clear. Imagine two educated adults, each with near-native fluency in both English and German (perhaps one is German, the other English), who are given the task of translating Kant, Schiller, or Brecht from German to English. We would of course expect differences in the translations, not only because one of them may have had access to a superior library, but because of differences in judgement.

My question after all of this is: how (if at all) can AI make judgements? We are aware of the many biases that AI systems have exhibited – in approving loans, for example; how would AI approach the task of making “human-like” judgements?

It’s already fairly good at that, as seen with DeepL, for example. But such systems still sometimes falter at colloquial expressions, like translating “You are welcome” to “Sie sind willkommen”, when “Gern geschehen” would be appropriate. And I’m not sure how well machine translation works with irony and sarcasm. In any case, it seems to me that it works better with longer text, i.e. more context. Having Google try its luck with a menu in Mandarin leads to more confusion than clarity (or at least it did some four years ago).

By mimicry

If there is a pattern in the loans that were approved in the past, then it would try to apply that pattern.

Which means that there might be some novel factor in a new application which makes it obvious to a human how to decide one way or the other - and the AI may not have enough data points upon which to make what a human would consider a reasonable decision.

Hmm… I’d think ChatGPT taking cues from the past, e.g., the subprime mortgage crisis, would be a Very Bad Thing™ :flushed:

The results might be interestingly different if the cues come only from before the crisis vs. from both before and after it. Same for SVB, perhaps.

Yes, that’s a major issue.

And not just with the subprime mortgage crisis. Whatever human biases are reflected in the dataset will be imported directly into the machine learning model, without the independent judgment that might allow a human to say, “this decision was wrong.”

So if you have a loan officer who doesn’t like people with blue eyes, using his loan approvals to train the model will result in a machine that won’t approve loans for people with blue eyes.
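To make that concrete, here is a toy sketch in Python. The data is entirely fabricated and the `blue_eyes` feature and thresholds are invented purely for illustration; the point is only that a model fitted to such approvals reproduces the officer’s bias rather than exercising independent judgment:

```python
# Toy illustration only: fabricated data, invented "blue_eyes" feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
income = rng.normal(50, 15, n)       # applicant income, arbitrary units
blue_eyes = rng.integers(0, 2, n)    # irrelevant attribute the officer dislikes

# Historical "approvals": driven by income, but blue-eyed applicants are rejected.
approved = ((income > 45) & (blue_eyes == 0)).astype(int)

X = np.column_stack([income, blue_eyes])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# Two otherwise identical applicants, differing only in eye colour:
print(model.predict_proba([[60.0, 0.0], [60.0, 1.0]])[:, 1])
# The blue-eyed applicant's approval probability collapses -- the model has
# learned the officer's prejudice, not anything about creditworthiness.
```

The specific learner doesn’t matter; any model trained on those labels will encode the same prejudice, because the bias is in the data itself.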

1 Like

I don’t see anybody arguing that ChatGPT should be used for that purpose.

Currently its forte is as a language editor. It is arguably good at brainstorming ideas. It is great for thinking through rebuttal arguments to check if you might have missed something in your writing. Its output should never be used autonomously without verification from another source.

Nobody would argue it is appropriate to use it now for financial modeling, so arguing against ChatGPT on that basis is arguing against a straw man.

My comment was more related to machine learning models in general, some of which are being used for those kinds of applications.

1 Like

Understood - but there has been a lot of negativity toward ChatGPT in this forum and I believe it is highly misplaced. DT3 is an ideal app to take advantage of the features at which ChatGPT currently excels.

I am close to finishing a script which integrates DT3 with the OpenAI API. I will share it with all when done. Incredibly helpful.
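In the meantime, here is a very rough sketch in Python of the kind of call such an integration involves - to be clear, this is not the forthcoming script, just an illustration. It posts a document’s text to OpenAI’s chat-completions endpoint and asks for a suggested name; the API key, model, and prompt are placeholders, and applying the result to a record would still happen inside DEVONthink (e.g. via AppleScript or a smart rule).

```python
# Rough sketch only -- not the forthcoming DT3 script. Placeholder API key,
# model, and prompt; the suggested name would still be applied inside
# DEVONthink (e.g. via AppleScript or a smart rule).
import requests

API_KEY = "YOUR_OPENAI_API_KEY"  # placeholder

def suggest_name(document_text: str) -> str:
    """Ask OpenAI's chat-completions endpoint for a short, descriptive name."""
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "system",
                 "content": "You suggest short, descriptive file names. "
                            "Reply with the name only."},
                {"role": "user", "content": document_text[:4000]},  # keep the request small
            ],
            "temperature": 0.2,
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"].strip()

if __name__ == "__main__":
    print(suggest_name("Minutes of the 2023 budget committee meeting, held on ..."))
```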

2 Likes

I’ve been following the discussions on this topic and largely agree with your viewpoints. There is huge potential here.

Waiting in suspense to try out the script you have come up with, once it is ready to be shared :slightly_smiling_face:

@AW2307

See here:

3 Likes