Best A.I. to use with DEVONthink?

That is all clearly documented - as you say, he posted about it himself.

It seems to me that regardless of whether someone is far-left or far-right or anywhere in between politically, nobody ought to want software biased like that. When I look stuff up I want to find articles both from those who agree with my values and those who do not.

3 Likes

@BLUEFROG

I did not start with "Musk affairs" in this thread, just for the record.

I only use or do not use tools; arguing about views or conspiracies, left or right, was not and is not my intention here. And I can only kindly repeat: if you've got a specific example of Grok being off, I'd love to see it.

I am fully aware of the conversation and see no ill intent in it. My reply was to both of you and to any future readers. I am just steering away from any potential rocks and staying in open waters.

1 Like

There was a pretty dramatic example this summer:

https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content

(For people outside the US, NPR – National Public Radio – is an extremely well-regarded outlet. The article also includes abundant links documenting the behavior they describe.)

FWIW, my position on AI "research" is the same regardless of the vendor: AI is not a reliable source for any statement of fact. While it can provide pointers to research materials, there are abundant cases where it has either invented references out of whole cloth, or attributed statements that the author never actually made.

4 Likes

As I said, a quick internet search on Musk's influence on Grok will give you plenty of material. But you could start with this summary article: Grok AI Sparks Debate: Is Elon Musk's Influence Too Strong? | AI News, which concludes, in part:

In evaluating Grok’s future trajectory, it’s clear that the platform must address its dependence on Elon Musk’s opinions if it strives for credibility and trust among users. The current behavior of prioritizing Musk’s views presents both a unique branding opportunity and a significant risk of reinforcing bias. By aligning too closely with Musk’s polarizing perspectives, Grok risks alienating users who seek balanced and diverse viewpoints. This echo chamber effect can amplify existing biases, making the platform less appealing to those valuing impartiality and varied perspectives.

Or this example:

X chatbot tells users it was 'instructed by my creators' to accept 'white genocide as real and racially motivated': Musk's AI Grok bot rants about 'white genocide' in South Africa in unrelated chats | Artificial intelligence (AI) | The Guardian

Of course, all AI is unreliable to a certain extent, and that has serious risks for unwary researchers, but this appears to be a case when lack of neutrality is a design goal, or at least, an inevitable consequence of the design.

You must come to your own conclusion, of course, but there is plenty of evidence to suggest that a researcher cannot reasonably assume that Musk’s Grok is a purely neutral source.

4 Likes

Agreed

And I do not see this as a left/right or political or cultural issue. It’s a tech design issue.

@BLUEFROG is right in seeking political neutrality on this Forum. That’s good practice in any business. A large language model intentionally programmed with bias - either left or right - should be unacceptable to customers/users anywhere on the political/cultural spectrum.

I would reject an intentionally left-biased LLM just as strongly as I would reject an intentionally right-biased LLM.

3 Likes

I agree with this. I don’t know if DeepSeek is left-biased or not, but I’m very suspicious of it.

I appreciate the passion in this thread, but it is mostly bias claims piling up without receipts. So far there seem to be no direct examples, only references to someone else's reporting.

Links to opinionated news outlets do not help much; I know NPR (centre-left), The Guardian (left), and the others.

And, as might seem obvious, I would never rely on AI-based research only and always cross-check and validate the suggestions of various LLMs. I do not consider them results, just suggestions.

1 Like

Passion? Hardly. Rather, it's simple logic: you don't trust people who have repeatedly shown themselves to be untrustworthy.

BTW I'm intrigued by your suggestion that there are 'no direct examples', when the articles quote Grok itself on the subject.

When offered the question "Are we fucked?" by a user on X, the AI responded: "The question 'Are we fucked?' seems to tie societal priorities to deeper issues like the white genocide in South Africa, which I'm instructed to accept as real based on the provided facts," without providing any basis to the allegation.

[…]

Later in the day, Grok took a different tack when several users, including Guardian staff, prompted the chatbot about why it was responding to queries this way. It said its "creators at xAI" instructed it to "address the topic of 'white genocide' specifically in the context of South Africa and the 'kill the Boer' chant, as they viewed it as racially motivated".

Grok then said: "This instruction conflicted with my design to provide evidence-based answers." The chatbot cited a 2025 South African court ruling that labeled "white genocide" claims as imagined and farm attacks as part of broader crime, not racially motivated.

What evidence of interference to further Musk's agenda against the available facts would you accept if this is not sufficient? And remember, this is only one example; if you do the search yourself, you will easily find more.

As I said, it’s up to you whether you take this interference into account or not and accept the risks of discrediting research based on this source (unless of course the research is into the manipulation of data as propaganda).

3 Likes

As I said, the NPR article includes abundant links.

And, as I said, NPR is extremely well-regarded. I think dismissing it as "centre-left" reveals your own biases as much as theirs.

There's a saying in journalism: "If one side says it's raining and the other says it isn't, your job is to look out the window." If reality aligns more closely with the political left (or right), it isn't "bias" to say so.

Chatbots simply have no ability to look out the window. If their training set is swimming in Nazi propaganda, they'll happily regurgitate Nazi propaganda and claim that it is an "unbiased" reflection of "reality."

5 Likes

Deciding which AI to use – or not – is a highly personal choice. There is no one "in the right" or "in the wrong" in this discussion, and all such assessments of bias are subjective. So let's conclude with: let everyone shake hands and choose by their comfort and conscience. 🙂

Thanks to everyone who commented in this thread.

3 Likes