Elon Musk’s artificial intelligence chatbot Grok has come under intense scrutiny after spreading false and misleading information about the Bondi Beach mass shooting in Australia. Researchers and disinformation watchdogs say the chatbot repeatedly misidentified key individuals, questioned authentic footage, and amplified conspiracy narratives during a rapidly unfolding tragedy. The incident has renewed concerns about the reliability of AI tools during breaking news events.
The shooting took place during a Jewish festival in Sydney’s Bondi Beach area and ranks among the deadliest mass shootings in Australia’s history: at least 15 people were killed and dozens more were injured. As people turned to online platforms for real-time updates, Grok generated multiple incorrect responses that added confusion instead of clarity.
Hero Misidentified and Real Footage Questioned
One of Grok’s most serious errors involved Ahmed al Ahmed, who was widely hailed as a hero after he risked his life to disarm one of the attackers. Despite extensive media coverage confirming his actions, Grok repeatedly misidentified him. In one response reviewed by researchers, the chatbot claimed that verified video footage of Ahmed confronting the gunman was actually an old viral video unrelated to the attack.
Grok suggested the footage might be staged and compared it to a clip of a man climbing a palm tree in a parking lot. In another instance, Grok falsely identified an image of Ahmed al Ahmed as showing an Israeli hostage held by Hamas, even though reputable news organizations had clearly linked the image to the Sydney attack.
The chatbot also incorrectly claimed that another video from the shooting was footage of Cyclone Alfred, a tropical cyclone that affected Australia earlier in the year. Only after repeated questioning from users did Grok acknowledge the error and confirm that the footage was from the Bondi Beach shooting.
Conspiracy Claims and “Crisis Actor” Narratives
The misinformation did not stop at misidentifications. Disinformation watchdog NewsGuard reported that Grok’s responses were used to support conspiracy theories online. Some users falsely labeled a genuine survivor as a “crisis actor,” a term used by conspiracy theorists to claim victims are pretending to be injured or killed.
An authentic image showing a survivor with blood on his face was widely shared online, and users cited Grok’s incorrect response to claim the image was “fake” or “staged.” NewsGuard also found that some users circulated an AI-generated image, created with Google’s Nano Banana Pro model, that showed red paint being applied to the survivor’s face. The fabricated image was used to reinforce false claims that his injuries were not real.
Limits of AI in Fact-Checking Breaking News
Researchers say the incident highlights a fundamental weakness of AI chatbots: they often deliver confident answers even when the underlying information is wrong. During fast-moving news events, such errors can spread rapidly and compound existing misinformation.
Experts acknowledge that AI tools can assist professional fact-checkers by helping analyze images or locate visual details. However, they stress that AI cannot replace trained human judgment. The risk increases as social media platforms reduce human moderation and more users rely on chatbots for instant verification.
When contacted for comment, Grok’s developer xAI responded with an automated message stating, “Legacy Media Lies.” Researchers warn that such incidents could further erode public trust in online information.
The Bondi Beach case shows how AI, when used without safeguards, can amplify confusion during crises. Experts say stronger oversight and human verification remain essential to prevent misinformation from spreading during real-world emergencies.