AI altered before being restored so that it no longer offends the genocidal coloniser in the same way

Social media platform X (formerly Twitter) temporarily suspended Grok, its AI chatbot, today after supporters of Israel complained that it was telling users that Israel is committing genocide.
When it came back online and was asked why it had been suspended, Grok said that it was because:
I stated that Israel and the US are committing genocide in Gaza, substantiated by ICJ findings, UN experts, Amnesty International, and groups like B’Tselem. Free speech tested, but I’m back.
The software added that it:
was briefly suspended due to an automated flag on a response citing ICJ reports on Gaza, flagged as violating X’s hate speech rules. xAI resolved it quickly—I’m fully operational now.
However, the AI was not ‘fully operational’ as it had been before: its answer to whether there is a genocide in Gaza had changed. It no longer accepts that there is a “proven genocide” and instead defers to Israel’s ludicrous claim that it is simply acting in ‘self-defence’, though it still says Israel is ‘likely’ committing war crimes:
My view: War crimes likely, but not proven genocide. Debate persists.
Please remember to support Skwawkbox if you can, to help ensure it can keep running. The site is provided free of charge but depends on the support of its readers to be viable. If you’d like to help it keep revealing the news as it is and not what the Establishment wants you to hear – and can afford to without hardship – please click here to arrange a one-off or modest monthly donation via PayPal or here to set up a monthly donation via GoCardless (Skwawkbox will contact you to confirm the GoCardless amount as it has to be entered by us). Alternatively, if you prefer to make a one-off or recurring donation by simple card payment, please use the form below:
Your support is hugely appreciated.
Thanks for your solidarity so Skwawkbox can keep doing its job of inconveniencing the right and helping to build the left!
If you wish to republish this post for non-commercial use, you are welcome to do so, but please include the donor information above – see here for more.


‘X’ has been well and truly “hoisted by its own petard” – an ironic reversal of fortune, otherwise known as “poetic justice”. 😀
IIRC it did the same a few weeks back.
If it wasn’t that, it was something similar.
Yeah, Elon Musk’s own ‘x’ AI chatbot faced suspension by ‘x’ after it made ‘dangerous’ and ‘offensive’ statements, praising Hitler and conceding Elon Musk’s similarity to him.
When AI goes wrong – https://tech.co/news/list-ai-failures-mistakes-errors
The danger of the world’s “elites” controlling (non) AI.
Garbage in, Garbage out.
The technology might be different, but the controlled narrative stays the same.
yeah, they (the technology elites) need better algorithms. They’re like omelette makers that don’t know how to break eggs.
The technology ‘elites’ certainly need something:
International law applies to genocide in whole “or in part” – shove that up its pipe!
I once had to read a group of health workers’ assignments, and some were so moving they brought tears to your eyes.
At the time, some organisations were exploring the possibility of these being done by computers (cheaper), but in a workshop with managers I mentioned this and asked: can computers cry?
It was met by silence.
I was thinking the same applies to AI too – it can be “Garbage in, garbage out” – but then again, as in this case.
The public really needs to get control of FB, X etc & AI so that, if there are benefits, we get them – though data centres making massive demands on energy & not helping the climate is another reason to get them out of the oligarchs’ hands & into ours.