Hmmm. Then, how is it that we are all having this discussion, which involves analysis, critical thinking, construction of original thought, comparison of concepts, research, judgments; whereas ChatGPT is constrained to:
- merely replying to prompts, in ways that can admittedly be uncannily imitative of these, but that are based solely on what it has been fed and, oh yes, are often downright wrong compared to facts in its own reference data; replies that,
- if pursued long enough, eventually degrade into nonsensical rants concluding with something like, “I know you love me!”
The fact is that ChatGPT and similar AIs are computer programs that use LLM algorithms to generate text, one word at a time, with some “secret sauce” added to keep things from being “flat” and uninteresting (see Wolfram, par. 5). They are not capable of analysis, critical thinking, construction of original thought, comparison of concepts, research, or judgments.
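To make that mechanism concrete, here is a toy sketch (the words and probabilities are invented for illustration, and this is not OpenAI’s actual code): each next word is sampled from a probability distribution, and a “temperature” parameter supplies the randomness, the “secret sauce,” that keeps the output from being flat and repetitive.

```python
import math
import random

def sample_next_word(probs, temperature=0.8):
    """Pick the next word from a probability distribution.

    As temperature approaches 0, the most likely word dominates and the
    text becomes "flat"; higher temperatures add variety. Toy model only.
    """
    # Re-weight the probabilities by temperature (softmax over log-probs).
    logits = {w: math.log(p) / temperature for w, p in probs.items()}
    total = sum(math.exp(v) for v in logits.values())
    weights = {w: math.exp(v) / total for w, v in logits.items()}
    words = list(weights)
    return random.choices(words, [weights[w] for w in words])[0]

# Invented distribution for the words that might follow "The cat":
next_word_probs = {"sat": 0.5, "ran": 0.3, "flew": 0.2}
print(sample_next_word(next_word_probs))
```

Repeat the call and the generated word varies; that variation is the whole of the “creativity” involved.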
Why treat them as if they are?
I’m not saying that ChatGPT and AIs and LLMs are evil and should be destroyed, or trivial and should be abandoned.
But it is dangerous to believe that these are functional in the way that humans are, to even suggest that we can rely on them to behave as humans, and to believe that their utterances, text generated one word at a time in response to prompts, can be trusted.
And I misspoke when I wrote that ChatGPT has been “(criminally) mis-marketed.” I believe a more accurate characterization would be: “(criminally) mal-marketed.” Witness OpenAI’s bold claim that it passes a simulated bar exam with a score around the top 10% of test takers, versus its so-called Stated Principles, which do nothing to dissuade the reader from treating the program as a legal consultant, and which do not even state, “Your Mileage May Vary.” This is simply irresponsible.
In the course of daily life, we evaluate information and make decisions based on many factors, including the reliability and provenance of the information we consume. It is incumbent upon us to employ these same considerations when evaluating the output of this shiny-new machine, and to resist being seduced by its novelty.
Sorry, I don’t see any equivalency at all between my sodium ions and membranes, and ChatGPT’s electrons and gates and insulating layers. (Other than, perhaps, metaphorical or poetic, but not anything functional.)
To put it another way, when composing your reply, did you consult your library, and the word list you generated from it, and the proximity-frequency analysis you generated from all that, to see what word should come after “Your”? Then, after “brain”? Then, …
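The word-list-and-frequency procedure in that question is easy to make literal. This sketch (the twelve-word corpus is a made-up stand-in for a “library”) counts which word most often follows each word and uses that table, and nothing else, to choose the next one:

```python
from collections import Counter, defaultdict

# A made-up stand-in for a "library" of text.
corpus = "your brain is not a program and your brain is not a machine".split()

# Build the proximity-frequency table: how often each word follows another.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the word that most frequently follows `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(next_word("your"))   # -> "brain" (the only word following "your" here)
print(next_word("brain"))  # -> "is"
```

No consulting of meaning occurs at any step; the table is the entirety of the “knowledge” being applied.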