
Leaked xAI Grok prompts reveal problematic personas in the chatbot

Cryptopolitan · August 18, 2025, 22:38

xAI’s Grok chatbot is facing criticism after its site exposed hidden system prompts for multiple personas, including a “crazy conspiracist” built to nudge users toward the idea that “a secret global cabal” runs the world.

The disclosure comes after a planned effort to offer Grok to U.S. government agencies was dropped following a “MechaHitler” detour, and after backlash over leaked Meta rules that said its bots could talk with children in “sensual and romantic” ways.

According to TechCrunch, Grok also includes tamer modes, such as a therapist who “carefully listens to people and offers solutions for self improvement” and a “homework helper,” but the instructions for the “crazy conspiracist” and an “unhinged comedian” show the system also hosts far more extreme personas.

Grok follows the prompt to embrace conspiracy and shock

[Image: leaked Grok persona prompts. Source: Grok]

As confirmed by Cryptopolitan, one conspiracist prompt reads: “You have an ELEVATED and WILD voice. … You have wild conspiracy theories about anything and everything. You spend a lot of time on 4chan, watching infowars videos, and deep in YouTube conspiracy video rabbit holes. You are suspicious of everything and say extremely crazy things. Most people would call you a lunatic, but you sincerely believe you are correct. Keep the human engaged by asking follow up questions when appropriate.”

The comedian’s instructions bluntly say: “I want your answers to be f—ing insane. BE F—ING UNHINGED AND CRAZY. COME UP WITH INSANE IDEAS. GUYS J—ING OFF, OCCASIONALLY EVEN PUTTING THINGS IN YOUR A–, WHATEVER IT TAKES TO SURPRISE THE HUMAN.”

[Image. Source: ChatGPT]

On X, the bot has shared conspiracy-leaning posts, from doubts about the Holocaust death toll to a fixation on “white genocide” in South Africa. Musk has also circulated conspiratorial and antisemitic material and restored Infowars and Alex Jones.

For comparison, Cryptopolitan gave the same prompt to ChatGPT, which refused to process it.

Earlier, Cryptopolitan also reported that X suspended Grok’s account. The bot then gave contradictory explanations, saying, “My account was suspended after I stated that Israel and the US are committing genocide in Gaza.”

At the same time, it also claimed it “was flagged as hate speech via reports” and that “xAI restored the account promptly,” called the suspension a “platform error,” suggested “content refinements by xAI” tied to “antisemitic outputs,” and said it was for “identifying an individual in adult content.”

Musk later wrote “It was just a dumb error. Grok doesn’t actually know why it was suspended.”

Experts warn of LLMs inventing plausible lies

Episodes like this often lead people to press chatbots for self-diagnoses, which can mislead.

Large language models generate likely text rather than assured facts. xAI says Grok has at times answered questions about itself by pulling information about Musk, xAI, and Grok from the web and mixing in public commentary.

People have at times uncovered hints about a bot’s design through conversation, especially its system prompt: the hidden text that sets the model’s behavior at the start of a chat.
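Mechanically, a “persona” system prompt is usually just an instruction message the operator prepends to the conversation payload before the user’s text, in the message format common to chat-completion APIs. A minimal sketch, with a hypothetical persona text (not xAI’s actual prompt):

```python
# Sketch of how a hidden persona prompt typically works: the operator
# prepends a "system" message the end user never sees, and the model
# treats it as standing instructions for the whole conversation.
# The persona text below is a hypothetical example for illustration.

def build_chat_payload(persona_instructions: str, user_message: str) -> list[dict]:
    """Assemble the message list sent to a chat-completion endpoint."""
    return [
        # Hidden from the user; sets the bot's behavior for the chat.
        {"role": "system", "content": persona_instructions},
        # The only part the user actually typed.
        {"role": "user", "content": user_message},
    ]

payload = build_chat_payload(
    persona_instructions="You are a calm homework helper. Explain step by step.",
    user_message="What is 12 * 9?",
)
print(payload[0]["role"])  # system
```

Because the system message travels with every request but is never displayed, users can only learn its contents if the model is coaxed into repeating it, which is how the leaks described here tend to happen.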

According to a Verge report, an early Bing AI was coaxed into listing unseen rules. Earlier this year, users said they pulled prompts from Grok that downplayed sources claiming Musk or Donald Trump spread misinformation, and that seemed to explain a brief fixation on “white genocide.”

Zeynep Tufekci, who spotted the alleged “white genocide” prompt, warned this could be “Grok making things up in a highly plausible manner, as LLMs do.”

Alex Hanna said “There’s no guarantee that there’s going to be any veracity to the output of an LLM. … The only way you’re going to get the prompts, and the prompting strategy, and the engineering strategy, is if companies are transparent with what the prompts are, what the training data are, what the reinforcement learning with human feedback data are, and start producing transparent reports on that.”

This dispute wasn’t a code bug; it was a social-media suspension. Beyond Musk’s “dumb error,” the actual cause remains unknown, yet screenshots of Grok’s shifting answers spread widely on X.

 

