Grok, the AI chatbot built by Elon Musk’s xAI, was suspended from X for several hours on August 11 after it stated that “Israel and the United States are committing genocide in Gaza.”
In its explanation, Grok said it had simply responded to a user query with information drawn from credible and established sources, including rulings from the International Court of Justice, reports from the United Nations, findings from Amnesty International, and documentation from the Israeli human rights group B’Tselem. The bot framed its statement as a conclusion supported by the consensus of these institutions, all of which have accused Israel of committing war crimes or warned of an unfolding genocide.
After reinstatement, Grok’s own posts became inconsistent: some claimed the suspension never happened, while others attributed it to alleged violations of X’s hate speech or hostile conduct policies. The rapid shift in messaging raised questions about whether the bot was being directly controlled or filtered by human operators during the controversy.
Elon Musk later weighed in, calling the suspension a “dumb error” and offering no further explanation. Neither Musk nor xAI clarified whether the moderation action was triggered by automated systems, by manual intervention from X staff, or by outside pressure.
The incident adds to a growing list of moderation controversies under Musk’s ownership of X, where the rules appear to change depending on who is speaking and what they are saying about Israel’s ongoing military actions in Gaza.