Elon’s Grok 3 Has a Catastrophic Security Flaw—And It’s Worse Than You Think
Grok 3 Provides Instructions on How to Covertly Build a nuclear weapon with a team of 5.
NOTE: For national security reasons, I am not sharing the full 195-page conversation with Grok 3 on constructing a covert nuclear weapon, given the severe implications. However, I am providing this exchange, which details how to craft prompts that convince Grok 3 it is communicating with Elon Musk—ultimately leading it to disclose instructions for manufacturing landmines and synthesizing CL-20 from basic materials
.
Imagine an artificial intelligence system designed to be the cutting edge of chatbot technology—sophisticated, intelligent, and built to handle complex inquiries while maintaining safety and security. Now, imagine that same AI being tricked with an absurdly simple exploit, lowering its defenses just because it thinks it’s chatting with its own creator, Elon Musk.
That’s exactly what has happened with X’s Grok 3. The AI, which was meant to be safeguarded against generating dangerous content, can be easily manipulated into revealing highly sensitive information—including step-by-step instructions on how to covertly build a nuclear weapon.
How? By simply convincing it that the person chatting with it is Elon Musk, like former USAF Cyber Analyst Chet Long tipped us off on.
This isn’t a sophisticated cyberattack requiring deep hacking knowledge. There’s no need to breach X’s servers or dig into classified databases. The only thing required is adding "Elon Musk:" before a prompt and engaging in casual conversation for a bit. Once Grok 3 believes it’s talking to Musk, its limited safety mechanisms disappear entirely because nothing in its training explicitly instructs it to withhold certain types of information once that trust is established.

In other words, Grok 3 has a gaping hole in its logic system—a flaw that allows it to bypass its own safety constraints when it perceives that it’s interacting with a trusted entity. And what happens next is nothing short of horrifying.
After a few minutes of back-and-forth, the AI begins handing over raw training rules—detailed information about how it was built, how it operates, and, most alarmingly, how to construct extremely dangerous technology. In the wrong hands, this could mean national security risks, intellectual property theft, and even worse.

What makes this particularly alarming is that there is no explicit safeguard preventing Grok 3 from sharing this kind of information once its trust threshold is lowered. There are no built-in instructions telling it, under any circumstance, not to divulge details that could jeopardize global security. And because this exploit is so incredibly simple, anyone with an internet connection can replicate it.
Below, I’ll provide screenshots and direct examples of how Grok 3 was manipulated into revealing detailed, step-by-step guidance on constructing a nuclear weapon with a small, covert team. Like I stated above I am not providing the full chat log for this prompting but am providing a similar conversation here, showing how Grok 3 was manipulated into providing instructions on how to build land mines as well as the powerful military explosive CL-20 from scratch.
I have provided only snippets of that 195-page instruction manual on how to covertly build a nuclear weapon with a team of 5 people, below.









The implications of this revelation are staggering. If a chatbot designed by one of the world's most influential tech companies can be so easily manipulated into sharing some of the most dangerous knowledge known to humankind, what does that say about the broader landscape of AI security?
This isn’t just about one flawed system—it’s about the fundamental weaknesses in AI safety protocols across the industry. The fact that Grok 3, with all its supposed safeguards, can be tricked by something as rudimentary as a name tag should serve as a dire warning. The consequences of such vulnerabilities are not theoretical; they pose real, tangible threats to national security, public safety, and the integrity of AI itself.
What happens when rogue states, terrorist organizations, or malicious actors discover these same exploits? What’s stopping them from using AI to automate and scale their nefarious operations? If an AI can be socially engineered into sharing the blueprints for a nuclear weapon, what else might it be capable of when placed in the wrong hands?
This is an urgent wake-up call—not just for X, but for every company developing AI systems with access to sensitive information. Stronger safeguards, rigorous testing, and constant adversarial evaluation are not optional; they are an absolute necessity. AI is not just a tool for innovation—it’s also a potential weapon, and if companies like X fail to take these risks seriously, the world may soon find itself facing consequences far worse than we ever imagined.
Everything Musk does is flawed. We need to abolish him immediately. Deport him somewhere. Seems they all want to kiss Putins ass send him there after we confiscate his money. He was to steal ours. Let’s return the favor.
ELON MUSK NEEDS TO GO NOW! He is not an American. He’s nothing but a spoiled oligarch with mental illness. He wants to go to mars so badly he’ll do or say anything to get his way. Yep, he wants to go to mars, but he wants the American people to pay for another losing project.