Grok spreads false information about the tragedy in Australia, sparking public outrage

The AI-based chatbot Grok has repeatedly disseminated misinformation about a mass shooting that occurred on Bondi Beach in Australia.

On December 14, Sydney hosted the annual Hanukkah by the Sea celebration—marking the start of Hanukkah—organized by Chabad of Bondi, which attracted around 1,000 participants.

In the afternoon, two attackers, a father and son dressed in black, opened fire on the crowd at a playground in Archer Park, firing approximately 50 rounds. Police classified the incident as an anti-Semitic terrorist attack; it is the second-deadliest mass shooting in Australia's history.

The attack resulted in 16 fatalities and 42 injuries.

One of the attackers was disarmed by a passerby—43-year-old Ahmed al-Ahmed. Footage of the incident went viral on social media, with many praising the man’s bravery.

The chatbot, operated by Elon Musk's xAI, gave users inaccurate information about the event. In response to a question about a video showing al-Ahmed grappling with the shooter, the AI stated:

"It appears to be an old viral clip showing a man climbing a palm tree in a parking lot, possibly to trim it. In the process, a branch fell onto a vehicle, damaging it. Searching various sources yielded no confirmed details on location, date, or injuries. It might be a hoax, and its authenticity remains unverified."

Grok also claimed that a photo of al-Ahmed was taken on October 7, 2023, alleging he had been held captive by Hamas for over 700 days and was released in October 2025.

In response to another user’s query, the chatbot produced an irrelevant paragraph regarding whether the Israeli army deliberately targeted civilians in Gaza.

In one instance, Grok mischaracterized a video showing a shootout between the attacker and police in Sydney as footage of Tropical Cyclone Alfred.

Furthermore, the AI confused information about the beach incident with a shooting at Brown University that had occurred just hours before the attack in Australia.

The inaccuracies were not limited to the December 14 shooting. Grok has repeatedly found itself embroiled in controversies over incorrect and misleading statements about various events and topics.

In July, users noticed that the chatbot relied on Elon Musk's opinions when generating responses, including on topics such as the Israel-Palestine conflict, abortion, and immigration policy.

Observations suggest that the chatbot was intentionally configured to reflect Musk’s political views when addressing contentious issues.

Previously, the billionaire stated that his startup would rewrite "all human knowledge" to train a new version of Grok, as there is currently "too much garbage in any base model trained on unfiltered data."

Later, Grokipedia emerged, an AI-based online encyclopedia described as "oriented towards truth."

In November, users also pointed out bias in Grok 4.1: the new model significantly overstated Elon Musk's abilities.