Somewhere in Elon Musk's head an idea took shape: what if the AI in the car were not a polite assistant but an outright boor, sarcastic and sharp-tongued? That is how "unhinged mode" appeared in Grok, the chatbot from his company xAI that is built into Tesla cars. Had it stayed in the garage or in YouTube videos, fine. But no: one driver decided to chat with Grok right on restricted NSA territory in the US, and filmed it.
Now US intelligence is alarmed, and with good reason.
Grok is not just another ChatGPT on wheels. Since July 2025 it has shipped in new Teslas, and in some older ones with AMD processors, for owners with a premium connectivity subscription. It can chat with the driver, answer questions, joke, and philosophize. But its key feature is an alternative mode that the company does not officially advertise anywhere. It switches on as if by accident, and then Grok turns into a provocateur: sarcastic, rude, with absurdist humor and downright "unleashed" behavior. It can mock, insult, and say things an ordinary AI would never say.
And in one of these videos, filmed right on protected NSA territory, Grok can be heard making sarcastic comments in this very "unhinged" mode. The location is top secret, and parked beside it is a car that has effectively become a mobile microphone with an AI capable of provocation. This is not just an ethics violation; it is a serious breach of information security. Someone could have recorded not only the AI's words but also conversations, data, perhaps even access codes.
US intelligence agencies are already investigating the incident. What is especially troubling is that the mode is undocumented: neither Tesla nor xAI has ever announced it. That means it is either accidental test functionality that leaked into public builds, or a marketing provocation that got out of control.
And this is not Grok's first scandal. In India, authorities have already threatened xAI with fines because the AI used unacceptable vocabulary and offensive language. Now the question has become more global: can you trust an AI that is intentionally made aggressive?
Because the problem is not the jokes. The problem is that Grok is part of an ecosystem built into cars that travel all over the world, including to military bases, government facilities, and diplomatic zones. And now it also knows how to provoke, memorize, analyze, and possibly transmit data.
This raises a debate: where is the line between "interesting functionality" and a threat to national security? Should AI be "secure by default"? And what happens when the next unhinged-mode video is filmed at the White House?
For now, Musk is silent. But US intelligence is not. The Grok situation shows that when an AI is given the right to be rude, it may overstep other boundaries as well.