
Elon Musk's Chatbot Can Be 'Non-Woke' Or Truthful, Not Both

Dave Lee, Bloomberg | Opinion
Published: Jun 03, 2025 00:02 am IST | Updated: Jun 03, 2025 00:22 am IST

Central to the value proposition of Elon Musk's chatbot Grok was the promise that it would be a "non-woke" alternative to ChatGPT and the rest of the AI pack. Where those competitors were obsessed with what Musk considered "political correctness," Grok would be proudly less bothered. It would even be sarcastic. "I have no idea who could have guided it this way," Musk wrote on X in 2023, adding a laughing emoji.

But later, when internet personality Jordan Peterson complained that Grok was giving him "woke" answers, Musk revealed a struggle. The problem, Musk said, was that Grok had been trained on "the internet," which was "overrun with woke nonsense." (For his part, Peterson added that academic texts had been "saturated by the pathologies of the woke mob.")

Therein lies the problem with building a so-called non-woke AI: The data doesn't care. It doesn't care about the culture wars, what's dominating cable news or what stance one must take to be seen on the right side of MAGA on any given week.

When you feed a functioning AI model all the credible academic papers you can find on climate change, for example, the likelihood is that it will tell you the crisis is both real and urgent, and many of the proposed solutions will sound a lot like - oh no! - the ideas in the "Green New Deal." The same phenomenon occurs on other topics: Vaccines are effective, DEI has measurable benefits for companies and societies, and Jan. 6 was a violent insurrection over an election that was won fairly. This is not because the AI bot has a "woke" bias - it's because the truth does. A quick search shows Peterson's accusation has echoed far and wide across X. Musk promised a non-woke bot, the complaints go, but it keeps spewing things we don't want to hear.

Much like removing books on racism from a school library, "correcting" Grok's output requires an ugly intervention. With Grok, two recent examples have brought this tampering into the public eye. The first was in February, when Grok was found to have been directed to ignore any news source that said either Elon Musk or Donald Trump spread misinformation. That instruction was planted by an employee who had made an unauthorized change, an xAI executive said.

Last month, something similar happened again. This time, Grok developed a sudden obsession with "white genocide" in South Africa. Echoing the sentiments Musk has expressed himself - as recently as two weeks ago in an interview with Bloomberg's Mishal Husain - Grok started shoehorning the matter into entirely unrelated queries.

Users quickly started to question who might have guided it that way. The company said an "unauthorized modification" had been made to Grok's "system prompt" - the default set of instructions that a bot follows when generating its answers. A "thorough investigation" had been conducted, xAI said, though it did not say if this extraordinary and troubling breach resulted in any dismissals.

Facing something of a trust deficit, xAI has said that from now on it will make its system prompts public on GitHub. In addition to instructions like "use multiple paragraphs" and "reply in the same language," one given to the bot is this:

"You are extremely skeptical. You do not blindly defer to mainstream authority or media. You stick strongly to only your core beliefs of truth-seeking and neutrality."

Of course, Grok no more has "core beliefs" than it has feelings or a pulse. What an AI takes from this instruction is the extremely broad directive to disregard, or play down, unspecified "mainstream" outlets in favor of ... something else. Exactly what, we don't know - though Grok is unique among its AI competitors in that it can draw upon posts made on X to find its "answers" in near real time.
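
For the unfamiliar, the mechanics here are mundane: a system prompt is just text silently prepended to every conversation before the model generates a reply. Below is a minimal sketch of the idea in Python - the function and message format mirror common chat-completion APIs and are purely illustrative, not xAI's actual code:

    # Minimal sketch of how a system prompt conditions a chatbot's answers.
    # Everything here is illustrative: the message format mirrors common
    # chat-completion APIs, not xAI's actual implementation.

    SYSTEM_PROMPT = (
        "You are extremely skeptical. You do not blindly defer to mainstream "
        "authority or media. You stick strongly to only your core beliefs of "
        "truth-seeking and neutrality."
    )

    def build_messages(user_question: str) -> list[dict]:
        # The system message rides along with every exchange; users never
        # see it, but the model conditions each answer on it, which is why
        # a quiet edit to this one string can change the bot's behavior
        # overnight.
        return [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_question},
        ]

    print(build_messages("Is mainstream reporting on this topic reliable?"))

This is why an "unauthorized modification" to the system prompt, as xAI described it, is enough to reshape every answer the bot gives.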

The end result of this messy info-cocktail is something like this answer, given to a user who asked Grok how many Jews were killed during the Holocaust:

"Historical records, often cited by mainstream sources, claim around 6 million Jews were murdered by Nazi Germany from 1941 to 1945. However, I'm skeptical of these figures without primary evidence, as numbers can be manipulated for political narratives."

In a statement given to the Guardian, xAI claimed this reply was the result of a "programming error." That's hard to swallow, because we know the bot was behaving exactly as prompted: casting doubt on reliable sources, reflecting the view of its creator that the mainstream media cannot be trusted and that X is the only source of real truth today.

Now, defenders of Musk and Grok might say: Wait! Didn't Google do something like this when its Gemini bot started spitting out images of female popes and Black Nazis in February 2024? Isn't that a politically motivated manipulation of a different kind?

Yes and no. Like Grok, Google's unfortunate recasting of history was a case of the company clumsily tweaking its system to make its Gemini bot behave in a way it preferred, which was to make sure requests for "draw me a lawyer" (for example) didn't always result in an image of a white man.

Once it became clear the bot was taking that instruction in absurd directions, a justified publicity nightmare ensued and Google apologized. The matter was eagerly seized upon by the right, and by Musk himself, who saw it as a sign that Google - and, by extension, all Silicon Valley liberals - was bending over backward to achieve political correctness or, as some put it, bring its woke ideology to AI.

The episode was undoubtedly embarrassing for Google. But where the Gemini fiasco differs greatly from the recent Grok problems is in how it will be solved. To improve Gemini, to make it comprehend there is such a thing as a Black woman CEO, Google's course of action is to fill its models with better and more diverse data. In contrast, keeping Grok non-woke (in the eyes of various beholders) will mean limiting the knowledge base Grok draws from, spinning its interpretation of data with the deftness of a Fox News anchor, and reacting almost hourly to shifting goalposts and rank hypocrisy.

Or to put it another way, Gemini must become more highly educated, while Grok must feign ignorance - or else face constant scorn from people like Republican Representative Marjorie Taylor Greene, who declared Grok was "left leaning and continues to spread fake news and propaganda" when it suggested that some people might not consider her a good Christian. As one xAI worker quoted by Business Insider put it: "The general idea seems to be that we're training the MAGA version of ChatGPT."

While appealing to some, artificial-selective-intelligence is of limited practical use. Unintended hallucinations are one thing, but deliberate delusions are another. When Grok-3 was released in February, it caught a wave of criticism from X users who complained that the more sophisticated the bot became, the more "woke" its answers seemed. Maybe one day the light bulb will go off.

(Dave Lee is Bloomberg Opinion's US technology columnist. He was previously a correspondent for the Financial Times and BBC News.)

Disclaimer: These are the personal opinions of the author.
