British regulator upholds complaint against The Telegraph for labeling Muslim organization ‘extremist’

The decision, announced on Thursday, followed a seven-month investigation into an article published in March, which wrongly described MAB as extremists. (Telegraph/File)
Updated 04 October 2024

  • Newspaper inaccurately called Muslim Association of Britain ‘extremist’ following a remark by then-minister Michael Gove
  • In response to complaints, The Telegraph issued a correction and attributed mistake to ‘human error’

LONDON: The Independent Press Standards Organisation has upheld a complaint filed by the Muslim Association of Britain against The Telegraph for inaccurately labeling the organization as “extremist.”

The decision, announced on Thursday, followed a seven-month investigation into an article published in March, which wrongly described MAB as extremists.

“IPSO has upheld our complaint against The Telegraph for falsely labelling us as an extremist organisation, after Michael Gove abused parliamentary privilege in promoting a discredited and politicised definition of extremism,” said MAB in a post on X.

The regulator concluded that the newspaper violated the Editors’ Code of Practice by “failing to take care not to publish inaccurate information” and “for failing to offer a correction to a significant inaccuracy with sufficient promptness.”

The article, written by right-wing commentator Nick Timothy, claimed MAB was “one of several organizations declared extremist by Michael Gove in Parliament.” However, Gove had actually stated that MAB raised concerns due to its “Islamist orientation” and that the government would assess whether it met the definition of extremism.

In response to the complaint, The Telegraph issued a correction on its Corrections and Clarifications page, attributing the error to “human error.”

“While the correction is welcome, we urge the media to reflect on their responsibility to report facts and avoid spreading harmful falsehoods,” said MAB.

The decision comes at a critical moment, with British media facing accusations of bias in their coverage of the conflict between Israel and Hamas. The ruling adds to fraught debates over Islamophobia and antisemitism, and highlights the challenges Muslim organizations continue to face in the press, particularly around the label of extremism.


MBC CEO granted Saudi premium residency

Updated 07 August 2025
  • Sneesby said in a post on X that he feels ‘immense pride in obtaining the premium residency in this country I have come to love’
  • Executive took the helm at the Saudi media group earlier this year after serving as CEO of Nine Entertainment

RIYADH: Mike Sneesby, CEO of Riyadh-headquartered broadcaster MBC Group, has been granted premium residency in Saudi Arabia.

Sneesby said in a post on X that he feels “immense pride in obtaining the premium residency in this country I have come to love, and have chosen to make my home since moving from Australia.”

The executive took the helm at the Saudi media group earlier this year after serving as CEO of Nine Entertainment.

The premium residency was launched in 2019 and allows eligible foreigners to live in the Kingdom and receive benefits such as exemption from paying expat and dependents fees, visa-free international travel, and the right to own real estate and run a business without requiring a sponsor.


Grok, is that Gaza? AI image checks mislocate news photographs

Updated 07 August 2025
  • Furor arose after Grok wrongly identified a recent image of an underfed girl in Gaza as one taken in Yemen years earlier
  • Internet users are turning to AI to verify images more and more, but recent mistakes highlight the risks of blindly trusting the technology

PARIS: This image by AFP photojournalist Omar Al-Qattaa shows a skeletal, underfed girl in Gaza, where Israel’s blockade has fueled fears of mass famine in the Palestinian territory.

But when social media users asked Grok where it came from, X boss Elon Musk’s artificial intelligence chatbot was certain that the photograph was taken in Yemen nearly seven years ago.

The AI bot’s false response was widely shared online, and a left-wing pro-Palestinian French lawmaker, Aymeric Caron, was accused of peddling disinformation on the Israel-Hamas war for posting the photo.

At a time when Internet users increasingly turn to AI to verify images, the furor shows the risks of relying on tools like Grok when the technology is far from error-free.

Grok said the photo showed Amal Hussain, a seven-year-old Yemeni child, in October 2018.

In fact the photo shows nine-year-old Mariam Dawwas in the arms of her mother Modallala in Gaza City on August 2, 2025.

Before the war, sparked by Hamas’s October 7, 2023 attack on Israel, Mariam weighed 25 kilograms, her mother told AFP.

Today, she weighs only nine kilograms. The only nutrition she gets to help her condition is milk, Modallala told AFP — and even that’s “not always available.”

Challenged on its incorrect response, Grok said: “I do not spread fake news; I base my answers on verified sources.”

The chatbot eventually issued a response that recognized the error — but in reply to further queries the next day, Grok repeated its claim that the photo was from Yemen.

The chatbot has previously issued content that praised Nazi leader Adolf Hitler and that suggested people with Jewish surnames were more likely to spread online hate.



Grok’s mistakes illustrate the limits of AI tools, whose functions are as impenetrable as “black boxes,” said Louis de Diesbach, a researcher in technological ethics.

“We don’t know exactly why they give this or that reply, nor how they prioritize their sources,” said Diesbach, author of a book on AI tools, “Hello ChatGPT.”

Each AI has biases linked to the information it was trained on and the instructions of its creators, he said.

In the researcher’s view Grok, made by Musk’s xAI start-up, shows “highly pronounced biases which are highly aligned with the ideology” of the South African billionaire, a former confidant of US President Donald Trump and a standard-bearer for the radical right.

Asking a chatbot to pinpoint a photo’s origin takes it out of its proper role, said Diesbach.

“Typically, when you look for the origin of an image, it might say: ‘This photo could have been taken in Yemen, could have been taken in Gaza, could have been taken in pretty much any country where there is famine’.”

AI does not necessarily seek accuracy — “that’s not the goal,” the expert said.

Another AFP photograph of a starving Gazan child by Al-Qattaa, taken in July 2025, had already been wrongly located and dated by Grok to Yemen, 2016.

That error led to Internet users accusing the French newspaper Liberation, which had published the photo, of manipulation.



An AI’s bias is linked to the data it is fed and what happens during fine-tuning — the so-called alignment phase — which then determines what the model would rate as a good or bad answer.

“Just because you explain to it that the answer’s wrong doesn’t mean it will then give a different one,” Diesbach said.

“Its training data has not changed and neither has its alignment.”

Grok is not alone in wrongly identifying images.

When AFP asked Mistral AI’s Le Chat — which is in part trained on AFP’s articles under an agreement between the French start-up and the news agency — the bot also misidentified the photo of Mariam Dawwas as being from Yemen.

For Diesbach, chatbots must never be used as tools to verify facts.

“They are not made to tell the truth,” but to “generate content, whether true or false,” he said.

“You have to look at it like a friendly pathological liar — it may not always lie, but it always could.”


Dangerous dreams: Inside Internet’s ‘sleepmaxxing’ craze

Updated 07 August 2025
  • One so-called insomnia cure involves people hanging by their necks with ropes or belts and swinging their bodies in the air
  • The explosive rise of the trend underscores social media’s power to legitimize unproven health practices, particularly as tech platforms scale back content moderation

WASHINGTON: From mouth taping to rope-assisted neck swinging, a viral social media trend is promoting extreme bedtime routines that claim to deliver perfect sleep — despite scant medical evidence and potential safety risks.

Influencers on platforms including TikTok and X are fueling a growing wellness obsession popularly known as “sleepmaxxing,” a catch-all term for activities and products aimed at optimizing sleep quality.

The explosive rise of the trend — generating tens of millions of posts — underscores social media’s power to legitimize unproven health practices, particularly as tech platforms scale back content moderation.

One so-called insomnia cure involves people hanging by their necks with ropes or belts and swinging their bodies in the air.

“Those who try it claim their sleep problems have significantly improved,” said one clip on X that racked up more than 11 million views.

Experts have raised alarm about the trick, following a Chinese state broadcaster’s report that attributed at least one fatality in China last year to a similar “neck hanging” routine.

Such sleepmaxxing techniques are “ridiculous, potentially harmful, and evidence-free,” Timothy Caulfield, a misinformation expert from the University of Alberta in Canada, told AFP.

“It is a good example of how social media can normalize the absurd.”

Another popular practice is taping the mouth shut during sleep, promoted as a way to encourage nasal breathing. Influencers claim it offers broad benefits, from better sleep and improved oral health to reduced snoring.

But a report from George Washington University found that most of these claims were not supported by medical research.

Experts have also warned the practice could be dangerous, particularly for those suffering from sleep apnea, a condition that disrupts breathing during sleep.

Other unfounded tricks touted by sleepmaxxing influencers include wearing blue- or red-tinted glasses, using weighted blankets, and eating two kiwis just before bed.

‘Actively unhelpful, even damaging’

“My concern with the ‘sleepmaxxing’ trend — particularly as it’s presented on platforms like TikTok — is that much of the advice being shared can be actively unhelpful, even damaging, for people struggling with real sleep issues,” Kathryn Pinkham, a Britain-based insomnia specialist, told AFP.

“While some of these tips might be harmless for people who generally sleep well, they can increase pressure and anxiety for those dealing with chronic insomnia or other persistent sleep problems.”

While sound and sufficient sleep is considered a cornerstone of good health, experts warn that the trend may be contributing to orthosomnia, an obsessive preoccupation with achieving perfect sleep.

“The pressure to get perfect sleep is embedded in the sleepmaxxing culture,” said Eric Zhou of Harvard Medical School.

“While prioritizing restful sleep is commendable, setting perfection as your goal is problematic. Even good sleepers vary from night to night.”

Pinkham added that poor sleep was often fueled by the “anxiety to fix it,” a fact largely unacknowledged by sleepmaxxing influencers.

“The more we try to control sleep with hacks or rigid routines, the more vigilant and stressed we become — paradoxically making sleep harder,” Pinkham said.

Melatonin as insomnia treatment

Many sleepmaxxing posts focus on enhancing physical appearance rather than improving health, reflecting an overlap with “looksmaxxing” — another online trend that encourages unproven and sometimes dangerous techniques to boost sexual appeal.

Some sleepmaxxing influencers have sought to profit from the trend’s growing popularity, promoting products such as mouth tapes, sleep-enhancing drink powders, and “sleepmax gummies” containing melatonin.

That may violate the law in some countries, such as Britain, where melatonin is available only as a prescription drug.

The American Academy of Sleep Medicine has recommended against using melatonin to treat insomnia in adults, citing inconsistent medical evidence regarding its effectiveness.

Some medical experts also caution about the impact of the placebo effect on insomnia patients using sleep medication — when people report real improvement after taking a fake or nonexistent treatment because of their beliefs.

“Many of these tips come from non-experts and aren’t grounded in clinical evidence,” said Pinkham.

“For people with genuine sleep issues, this kind of advice often adds pressure rather than relief.”


Meta facing $1bn lawsuit for livestreaming Oct. 7 Hamas attack

Updated 06 August 2025
  • Victims accuse Facebook, Instagram of being ‘pipeline for terror’
  • Case could set precedent for social media companies

LONDON: Survivors and relatives of Israeli victims of the Oct. 7 Hamas attack have filed a lawsuit against Meta, accusing the American tech giant of enabling and amplifying the atrocities through its platforms.

The plaintiffs are seeking nearly 4 billion shekels ($1.17 billion) in damages. The figure comprises 200,000 shekels for each victim whose suffering was broadcast or documented on Meta platforms and 20,000 shekels for every Israeli who was exposed to the footage.

The suit, filed with the Tel Aviv District Court, could set a precedent for social media companies. It alleges that Facebook and Instagram became “a pipeline for terror,” allowing Hamas militants to livestream and upload videos of killings, kidnappings and other atrocities.

The plaintiffs claim Meta failed to block or remove the footage in real time and left some content online for hours or even days.

Israeli news website Ynet reported that the legal action was initiated by the Idan family, who said Hamas gunmen stormed their home, held them hostage and murdered their eldest daughter, Maayan — all while livestreaming the attack on the mother’s Facebook account. The father, Tsachi, was abducted to Gaza and later killed.

“They livestreamed the murder of our daughter, our other children’s trauma and our cries for help,” the mother was quoted as saying.

“Facebook and Instagram enabled the broadcast of a brutal terror attack. And Meta is still allowing the footage to circulate.”

Another plaintiff said she learned of her grandmother Bracha Levinson’s abduction and death only after Hamas uploaded the footage to her Facebook page.

The lawsuit also includes claims from members of the public who say they were exposed to graphic and traumatic content simply by logging on to the platforms that day. They accuse Meta of failing to act quickly to protect users from the livestreamed violence. The platforms, they argue, became “an inseparable part of Hamas’ terror infrastructure.”

Meta is also accused of violating victims’ privacy and dignity, and of profiting from the viral spread of the footage. Plaintiffs argue that the company failed to activate rapid response systems or prevent its algorithms from promoting the violent content.

“Our hearts go out to the families affected by Hamas terrorism,” a Meta spokesperson said, adding that the company had set up dedicated teams working round the clock to remove the content and continued to remove any material that supported or glorified Hamas or the Oct. 7 attack.

The case is one of several filed in Israel and the US targeting actors accused of aiding or enabling Hamas propaganda and logistics. Last month, families of more than 120 victims sued the Palestinian Authority, claiming its “pay-for-slay” policy — providing monthly stipends to convicted attackers or their families — constituted material support for the massacre.


Latin America News Agency launches Arabic service

Updated 06 August 2025
  • Move part of efforts to build media, cultural ties between regions, LANA says

LONDON: The Latin America News Agency has launched a news service in Arabic, the first of its kind on the continent.

“From now on, all our content — website, video scripts, image data — is fully available in Arabic, in addition to Spanish and English,” the agency said on Wednesday.

The new service was part of the company’s efforts to build stronger “media and cultural ties” between Latin America and the Arab world and “facilitate access to reliable and up-to-date content,” it said.

Millions of people of Arab descent, primarily from Lebanon, Syria and Palestine, live in Latin America, mostly in Argentina, Brazil and Chile.

Based in Argentina, LANA collaborates with several international and regional agencies, including Reuters, The Associated Press, Turkey’s Anadolu Agency and the Saudi Press Agency.

It also distributes multimedia content and describes itself as Latin America’s “first image bank.”