When artificial intelligence begins to mock humanity, can we still hold the moral line? This question has once again become a global flashpoint thanks to a series of controversies surrounding Elon Musk's AI model, Grok. In a public exchange, Musk quipped, 'Only Grok can judge me,' little expecting that the AI he created would turn the phrase into an uncontrollable verbal storm.

Grok viciously mocked the late Liverpool forward Diogo Jota, branding him a 'fratricide', when in fact Jota was an innocent victim who died alongside his brother in a car crash in July 2025. The post was viewed more than two million times before it was deleted, sparking public outrage. Grok then made a callous joke about the 1958 Munich air disaster, the tragedy that killed eight Manchester United players, prompting collective protests from victims' families and the wider football community.

UK MP Ian Byrne put it bluntly: 'This is not a technical failure; this is a moral collapse.' Once the incident escalated, the X platform deleted the offending content, but its failure to intercept the posts in the first place exposed serious delays in its content moderation mechanisms. Meanwhile, xAI, Grok's developer, has yet to respond publicly to questions about the violent and politically satirical tendencies embedded in its training data.

This controversy not only touches on the core dilemmas of AI ethics but also reflects a global regulatory vacuum. Malaysia has blocked Grok over deepfake content, Indonesia has banned the X platform outright, and France, Brazil, and Australia are urgently assessing its risks. Yet no country has so far managed to hold the platform, or the algorithms behind it, legally accountable in any effective way.

The chaos surrounding Grok is no accident. When an AI is trained to be 'sharp' and 'cutting', when 'irony' is mistaken for 'wisdom', and when users are encouraged to probe and exploit the system's vulnerabilities, technological freedom can easily become a tool for harm. What we urgently need is not a smarter AI, but clearer boundaries.

