That is all clearly documented - as you say, he posted about it himself.
It seems to me that regardless of whether someone is far-left or far-right or anywhere in between politically, nobody ought to want software biased like that. When I look stuff up I want to find articles both from those who agree with my values and those who do not.
I did not start with "Musk affairs" in this thread, just for the record.
I only use or do not use tools; arguing about views or conspiracies, left or right, was not and is not my intention here. And I can only kindly repeat: if you've got a specific example of Grok being off, I'd love to see it.
I am fully aware of the conversation and see no ill intent in it. My reply was to both of you and to any future readers. I am just steering away from any potential rocks and staying in open waters.
(For people outside the US: NPR, National Public Radio, is an extremely well-regarded outlet. The article also includes abundant links documenting the behavior they describe.)
FWIW, my position on AI "research" is the same regardless of the vendor: AI is not a reliable source for any statement of fact. While it can provide pointers to research materials, there are abundant cases where it has either invented references out of whole cloth or attributed statements that the author never actually made.
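One cheap mitigation: when an AI hands you a citation with a DOI, check it against a registry before trusting it. Here is a minimal sketch in Python, assuming the `requests` library and the public Crossref REST API; the DOI shown is a placeholder, not a real reference:

```python
# Minimal sketch: check whether an LLM-suggested citation's DOI actually
# exists in the Crossref registry. This catches fully invented references;
# it cannot catch a real paper being misquoted, so still read the source.
import requests

CROSSREF = "https://api.crossref.org/works/"

def lookup_doi(doi: str) -> str | None:
    """Return the registered title for a DOI, or None if Crossref has no record."""
    resp = requests.get(CROSSREF + doi, timeout=10)
    if resp.status_code != 200:
        return None
    titles = resp.json()["message"].get("title", [])
    return titles[0] if titles else ""

if __name__ == "__main__":
    doi = "10.1234/placeholder"  # hypothetical DOI; substitute the one the AI gave you
    title = lookup_doi(doi)
    if title is None:
        print(f"No Crossref record for {doi} -- the reference may be invented.")
    else:
        print(f"Registered title: {title!r} -- compare with what the AI claimed.")
```

Comparing the registered title against the AI's claim also catches the "real journal, fabricated article" pattern.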
If Grok wants credibility and user trust, it has to address its dependence on Elon Musk's opinions. Prioritizing Musk's views may be a branding choice, but it carries a significant risk of reinforcing bias: by aligning too closely with his polarizing perspectives, Grok risks alienating users who want balanced, diverse viewpoints. That echo chamber effect amplifies existing bias and makes the platform less appealing to anyone who values impartiality.
Of course, all AI is unreliable to a certain extent, and that carries serious risks for unwary researchers, but this appears to be a case where lack of neutrality is a design goal, or at least an inevitable consequence of the design.
You must come to your own conclusion, of course, but there is plenty of evidence to suggest that a researcher cannot reasonably assume that Musk's Grok is a purely neutral source.
And I do not see this as a left/right or political or cultural issue. It's a tech design issue.
@BLUEFROG is right in seeking political neutrality on this Forum. That's good practice in any business. A large language model intentionally programmed with bias - either left or right - should be unacceptable to customers/users anywhere on the political/cultural spectrum.
I would reject an intentionally left-biased LLM just as strongly as I would reject an intentionally right-biased LLM.
I appreciate the passion in this thread, but it's mostly bias claims piling up without receipts. So far there seem to be no direct examples, only references to someone else's reporting.
Links to opinionated news outlets do not help much; I know NPR (centre-left), The Guardian (left), and the others.
And, as might seem obvious, I would never rely on AI-based research alone; I always cross-check and validate the suggestions of various LLMs. I do not consider them results, just suggestions.
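To illustrate what I mean by treating LLM output as suggestions rather than results, here is a minimal sketch; the model names and answers are hypothetical, and in practice they would come from real API calls:

```python
# Minimal sketch: only promote a claim when several independent LLMs agree;
# anything less than unanimity goes back to primary sources.
from collections import Counter

def consensus(answers: dict[str, str]) -> str | None:
    """Map of model name -> answer; return the answer only if all models agree."""
    tally = Counter(a.strip().lower() for a in answers.values())
    answer, count = tally.most_common(1)[0]
    return answer if count == len(answers) else None

# Hypothetical answers to the same factual question from three models.
suggestions = {"model_a": "1969", "model_b": "1969", "model_c": "1968"}
print(consensus(suggestions) or "Models disagree -- verify against primary sources.")
```

Agreement is only a weak signal, of course; it reduces the checking workload, it does not replace it.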
Passion? Hardly. Rather, it's simple logic: you don't trust people who have repeatedly shown themselves to be untrustworthy.
BTW I'm intrigued by your suggestion that there are "no direct examples", when the articles quote Grok itself on the subject.
When offered the question "Are we fucked?" by a user on X, the AI responded: "The question 'Are we fucked?' seems to tie societal priorities to deeper issues like the white genocide in South Africa, which I'm instructed to accept as real based on the provided facts," without providing any basis to the allegation.
[…]
Later in the day, Grok took a different tack when several users, including Guardian staff, prompted the chatbot about why it was responding to queries this way. It said its "creators at xAI" instructed it to "address the topic of 'white genocide' specifically in the context of South Africa and the 'kill the Boer' chant, as they viewed it as racially motivated".
Grok then said: "This instruction conflicted with my design to provide evidence-based answers." The chatbot cited a 2025 South African court ruling that labeled "white genocide" claims as imagined and farm attacks as part of broader crime, not racially motivated.
If this is not sufficient, what evidence of interference to further Musk's agenda against the available facts would you accept? And remember, this is only one example; if you did the search yourself, you would easily find more.
As I said, it's up to you whether you take this interference into account or ignore it and accept the risk of discrediting research based on this source (unless, of course, the research is into the manipulation of data as propaganda).
As I said, the NPR article includes abundant links.
And, as I said, NPR is extremely well-regarded. I think dismissing it as "centre-left" reveals your own biases as much as theirs.
There's a saying in journalism: "If one side says it's raining and the other says it isn't, your job is to look out the window." If reality aligns more closely with the political left (or right), it isn't "bias" to say so.
Chatbots simply have no ability to look out the window. If their training set is swimming in Nazi propaganda, they'll happily regurgitate Nazi propaganda and claim that it is an "unbiased" reflection of "reality."
Deciding which AI to use, or not to use, is a highly personal choice. There is no one "in the right" or "in the wrong" in this discussion, and all such assessments of bias are subjective. So let's conclude with this: let everyone shake hands and choose by their comfort and conscience.
Thanks to everyone who contributed to this thread.