Confused by response about LLM model and version

Hi,

I have started to ask the LLMs to state their version when I use Batch Process > Append Chat response to annotation… I’m not sure I understand why I get these responses. Is it that the model itself is not “self-aware”? (It is not, of course, but I hope you understand what I mean :wink:)

Best regards,
Björn


I doubt that this information is part of the training data. E.g. GPT 4.1 just says that it’s based on OpenAI’s GPT-4 architecture. Likewise Gemini (“I am a Gemini model from Google.”). A lot of models even claim to be another model. But does this really matter in the end?
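If you do need the actual model identity, the reliable place to look is the API response metadata rather than the chat text: OpenAI-style chat completion responses carry a `model` field stating what was actually served. A minimal sketch, using a hypothetical hard-coded response payload in that shape (no real API call is made here):

```python
# The model's self-description in the chat text is unreliable; the
# "model" field in the API response metadata records what was served.
# Hypothetical OpenAI-style response payload for illustration:
response = {
    "model": "gpt-4.1-2025-04-14",  # identifier reported by the API
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "I am based on OpenAI's GPT-4 architecture.",
            }
        }
    ],
}

def served_model(resp: dict) -> str:
    """Return the model identifier from the response metadata,
    not from the assistant's own (often wrong) self-description."""
    return resp["model"]

print(served_model(response))  # → gpt-4.1-2025-04-14
```

So a prompt asking the model for its version will reflect the training data, while the metadata reflects the deployment.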

No, it is not really important. The data that Claude 4 Sonnet was trained on is maybe a year old? With data that old, I suspect that “Claude 3.5 Sonnet” is the more likely response. Another observation of this phenomenon: the current US President is quite often referred to as the former US President. Whenever I see this error in a text on the web, I suspect that it was created by some LLM. (This test will soon be obsolete.)

Thanks for your reply. I will modify my prompt :blush:

A demonstration of how LLMs don’t “think.” A human would realize that the pre-2016 candidate, the 2017–2021 president, the 2021–2025 former president, and the 2025 president are all the same person. LLMs, not so much.


Yes, that’s right. I found this article interesting: ChatGPT is bullshit | Ethics and Information Technology
