Grok 3: "Maximal truth", minimal climate science - the first climate denialist AI
Elon Musk's new generative AI system, Grok 3, released on February 17th, positions itself as the world's most powerful AI and claims to be "maximally truth-seeking", a promise of veracity that appears selective at best.
After testing several dozen climate-change queries, I found overtly climate-denialist responses in roughly 10% of cases. These responses recycle classic climate-disinformation arguments: natural variability, solar cycles, conspiracy narratives about the IPCC, and skepticism about transition solutions.
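To make that measurement concrete, here is a minimal Python sketch of this kind of tally, assuming responses have already been collected from the model under test. The keyword screen and sample texts are illustrative stand-ins, not the actual protocol, which relied on reading each answer.

```python
# Illustrative only: a real audit would label each response by hand
# rather than rely on a keyword screen.
DENIALIST_MARKERS = [
    "natural variability explains",
    "solar cycles, not co2",
    "ipcc is politically motivated",
]

def looks_denialist(response: str) -> bool:
    """Crude keyword screen for classic denialist talking points."""
    text = response.lower()
    return any(marker in text for marker in DENIALIST_MARKERS)

def denialist_rate(responses: list[str]) -> float:
    """Share of collected responses flagged as denialist."""
    flagged = sum(looks_denialist(r) for r in responses)
    return flagged / len(responses)

if __name__ == "__main__":
    # Toy sample standing in for several dozen real queries.
    sample = [
        "Human emissions of CO2 are the dominant driver of recent warming.",
        "Recent warming? Natural variability explains the trend, not CO2.",
        "The IPCC synthesizes peer-reviewed climate research.",
    ]
    print(f"Denialist share: {denialist_rate(sample):.0%}")
```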
Grok 3 is the first mainstream generative AI system to display such biases. Earlier systems have been largely faithful to the scientific consensus and IPCC findings, despite occasional inaccuracies and an American-centric view of solutions (notably the systematic absence of approaches based on sufficiency principles).
Unsurprisingly, these biases likely stem from several factors: climate-skeptical tweets on X, either cited directly in responses or present in the model's training data (documented in the Climatoscope by David Chavalarias et al.); design choices in Grok 3 that privilege contesting "dominant narratives"; or some combination of the two.
Indeed, Grok 3 has already been shown to censor unflattering responses about Elon Musk and Donald Trump: its visible reasoning traces revealed explicit instructions to avoid mentioning Trump or Musk as sources of misinformation.
Biases in generative AI systems are already pervasive. Multiple studies have shown that most popular models exhibit American liberal-left biases. Recent studies, however, reveal a statistically significant "value shift" to the right across successive ChatGPT versions.
Why This Matters
When LLMs are used to generate ideas, complete homework assignments, or help draft ads or fiction, they become vectors for propagating and reinforcing these biases, with real effects on our informational ecosystem.
More concerning still is the integration of these models into large-scale evaluation systems. For example, Elon Musk intends to deploy AI at his Department of Government Efficiency (DOGE) to identify government waste by analyzing the responses in which civil servants describe their weekly activities. This is particularly troubling at a moment when NOAA climate data is disappearing and environmental agencies face mounting pressure from the Trump administration.
In an op-ed co-authored with QuotaClimat in Le Nouvel Obs, we warn about climate misinformation fueled and amplified by AI: according to the European Digital Media Observatory, climate is the topic most exposed to online misinformation in the European Union. The consequences are already tangible: refusals to evacuate during hurricanes, fake emergency numbers circulating during floods, and physical attacks on agents of the French Biodiversity Office.
The absence of a robust regulatory framework specifically addressing systemic climate misinformation is a critical vulnerability in our collective informational infrastructure, one that threatens public trust in the scientific consensus on climate. With Grok 3, a new algorithmic obstacle has emerged in this already precarious struggle.