US House administration arm bans TikTok on official devices

- House’s Chief Administrative Officer cites security concern
- Announcement comes as US lawmakers put forward a proposal to implement a nationwide ban on the app
LONDON: The popular Chinese video app TikTok has been banned from all US House of Representatives-managed devices, according to the House’s administration arm, mirroring a law soon to take effect banning the app from US government devices.
The app is considered “high risk due to a number of security issues,” the House’s Chief Administrative Officer (CAO) said in a message sent to all lawmakers and staff on Tuesday, and must be deleted from all devices managed by the House.
The new rule follows a series of moves by US state governments to ban TikTok, owned by Beijing-based ByteDance Ltd, from government devices. As of last week, 19 states had at least partially blocked the app from state-managed devices over concerns that the Chinese government could use it to track Americans and censor content.
The $1.66 trillion omnibus spending bill, passed last week to fund the US government through to Sept. 30, 2023, includes a provision to ban the app on federally managed devices, and will take effect once President Joe Biden signs the legislation into law.
“With the passage of the Omnibus that banned TikTok on executive branch devices, the CAO worked with the Committee on House Administration to implement a similar policy for the House,” a spokesperson for the Chief Administrative Officer told Reuters on Tuesday.
The message to staff said anyone with TikTok on their device would be contacted about removing it, and future downloads of the app were prohibited.
TikTok did not immediately respond to a request for comment about the new rule.
US lawmakers have put forward a proposal to implement a nationwide ban on the app.
‘I don’t create suffering, I document it’: Gaza photographer hits back at Bild over accusation of staging scenes

- Photojournalist Anas Zayed Fteiha came under fire after Bild published an article alleging his photos were manipulated to amplify narratives of Israeli-inflicted suffering
- Episode fueled broader debate on the challenges of reporting from conflict zones such as Gaza, with an expert saying that “guiding” photos does not invalidate the reality being portrayed
LONDON: Gaza-based photojournalist Anas Zayed Fteiha has rejected accusations by the German tabloid Bild that some of his widely circulated images — depicting hunger and humanitarian suffering — were staged rather than taken at aid distribution sites.
Fteiha, who works with Turkiye’s Anadolu Agency, described the claims as “false” and “a desperate attempt to distort the truth.”
“The siege, starvation, bombing, and destruction that the people of Gaza live through do not need to be fabricated or acted out,” Fteiha said in a statement published on social media. “My photos reflect the bitter reality that more than two million people live through, most of whom are women and children.”
The controversy erupted after Bild published an article on Tuesday alleging that Fteiha’s photos were manipulated to amplify narratives of Israeli-inflicted suffering — particularly hunger — and citing content from his personal social media accounts to suggest political bias.
The German daily Süddeutsche Zeitung also questioned the authenticity of certain images from Gaza, though without naming Fteiha directly.
Bild claimed the emotionally charged imagery served as “Hamas propaganda,” a charge Fteiha rejected as “ridiculous” and a “criminalization of journalism itself.”
“It is easy to write your reports based on your ideologies, but it is difficult to obscure the truth conveyed by the lens of a photographer who lived the suffering among the people, heard the children’s cries, photographed the rubble, and carried the pain of mothers,” Fteiha said.
Fteiha also accused Bild of repeated breaches of journalistic ethics, citing previous criticism and formal complaints against the paper for publishing misinformation.
The episode has fueled a broader debate on the challenges of reporting from conflict zones such as Gaza, where foreign press access is restricted and local journalists are often the only source of visual documentation.
Beware of fake news. A joint investigation by @SZ and @BILD reveals how Hamas uses “Pallywood”, staged or selectively framed media, to manipulate global opinion. At the center is Anas Zayed Fteiha, a Palestinian photographer for Anadolu and an open Israel- and Jew-hater, whose… pic.twitter.com/MrBfvylwCi
— Israel Foreign Ministry (@IsraelMFA) August 5, 2025
Following Bild’s allegations, several news agencies, including AFP and the German Press Agency, severed ties with Fteiha. However, Reuters declined to do so, stating that his images met the agency’s standards for “accuracy, independence, and impartiality.”
“These aren’t outright fakes, but they do tap into visual memory and change how people see things,” said photography scholar Gerhard Paul in an interview with Israeli media.
Christopher Resch, of Reporters Without Borders, said that while photographers sometimes “guide” subjects to tell a visual story, that does not invalidate the reality being portrayed.
“The picture should have had more context, but that doesn’t mean the suffering isn’t real,” he said, cautioning media outlets against labeling photojournalists as “propaganda agents,” which he warned could endanger their safety.
Setting the record straight on the storm over the hunger photos from Gaza. We open with an important note: we understand this is a sensitive discussion that stirs strong emotions, but it too must be based on facts. So here are the facts. pic.twitter.com/fLbLQlW94G
— FakeReporter (@FakeReporter) August 6, 2025
Israeli Foreign Minister Gideon Sa’ar also weighed in, using his official X account to describe one of the images in question, which was used on the cover of Time magazine, as an example of “Pallywood,” a portmanteau of “Palestine” and “Hollywood,” intended to sway global opinion.
However, the credibility of Bild’s report has itself come under scrutiny. Israeli fact-checking group Fake Reporter posted a series of rebuttals on X, disputing several claims.
The group pointed out that the Time magazine cover image often linked to Fteiha was taken by a different photographer, and argued that claims the children in the photograph were not at an aid site were “inaccurate.”
“From our examination, one can see, in the same place, an abundance of documentation of food being distributed and prepared,” the group wrote.
Deaf Palestinian uses social media to highlight Gaza’s struggles through sign language

- Basem Alhabel describes himself as a ‘deaf journalist in Gaza’ on his Instagram account
- He wants to raise more awareness of the conflict by informing Palestinians and people abroad with special needs
GAZA: Basem Alhabel stood among the ruins of Gaza, with people flat on the floor all around him as bullets flew, and filmed himself using sign language to explain the dangers of the war to fellow deaf Palestinians and his followers on social media.
Alhabel, 30, who describes himself as a “deaf journalist in Gaza” on his Instagram account, says he wants to raise more awareness of the conflict – from devastating Israeli air strikes to the starvation now affecting most of the population – by informing Palestinians and people abroad with special needs.
After nearly two years of Israeli bombardment, many Gazans complain that the world does not hear their voices despite mass suffering and a death toll exceeding 60,000 people, according to Gaza health authorities in the demolished enclave.
“I wished to get my voice out to the world and the voices of the deaf people who cannot speak or hear, to get their voice out there, so that someone can help us,” he said through his friend and interpreter Mohammed Moshtaha, who he met during the war.
“I tried to help, to film and do a video from here and there, and publish them so that we can make our voices heard in the world.”
Alhabel has an Instagram following of 141,000. His page, which shows him in a flak jacket and helmet, features images of starving, emaciated children and other suffering.
He films a video, then returns to a tent to edit – one of the many where Palestinians have sought shelter and safety during the war, which erupted when Hamas-led militants attacked Israel in October 2023, drawing massive retaliation.
Alhabel produced images of people collecting flour from the ground while he used sign language to explain the plight of Gazans, reinforcing the view of a global hunger monitor that has warned a famine scenario is unfolding.
“As you can see, people are collecting flour mixed with sand,” he communicated.
Alhabel and his family were displaced when the war started. They stayed in a school with tents.
“There was no space for a person to even rest a little. I stayed in that school for a year and a half,” he explained.
Alhabel is likely to be busy for some time. There are no signs of a ceasefire on the horizon despite mediation efforts.
Israel’s political security cabinet approved a plan early on Friday to take control of Gaza City, as the country expands its military operations despite intensifying criticism at home and abroad over the war.
“We want this situation to be resolved so that we can all be happy, so I can feed my children, and life can be beautiful,” said Alhabel.
MBC CEO granted Saudi premium residency

- Sneesby said in a post on X that he feels “immense pride in obtaining the premium residency in this country I have come to love”
- Executive took the helm at the Saudi media group earlier this year after serving as CEO of Nine Entertainment
RIYADH: The CEO of Riyadh-headquartered broadcaster MBC Group, Mike Sneesby, has been granted premium residency in Saudi Arabia.
Sneesby said in a post on X that he feels “immense pride in obtaining the premium residency in this country I have come to love, and have chosen to make my home since moving from Australia.”
The executive took the helm at the Saudi media group earlier this year after serving as CEO of Nine Entertainment.
The premium residency was launched in 2019 and allows eligible foreigners to live in the Kingdom and receive benefits such as exemption from paying expat and dependents fees, visa-free international travel, and the right to own real estate and run a business without requiring a sponsor.
Grok, is that Gaza? AI image checks mislocate news photographs

- Furor arose after Grok wrongly identified a recent image of an underfed girl in Gaza as one taken in Yemen years earlier
- Internet users are increasingly turning to AI to verify images, but recent mistakes highlight the risks of blindly trusting the technology
PARIS: This image by AFP photojournalist Omar Al-Qattaa shows a skeletal, underfed girl in Gaza, where Israel’s blockade has fueled fears of mass famine in the Palestinian territory.
But when social media users asked Grok where it came from, X boss Elon Musk’s artificial intelligence chatbot was certain that the photograph was taken in Yemen nearly seven years ago.
The AI bot’s false response was widely shared online, and Aymeric Caron, a left-wing pro-Palestinian French lawmaker who had posted the photo, was accused of peddling disinformation about the Israel-Hamas war.
At a time when Internet users are increasingly turning to AI to verify images, the furor shows the risks of relying on tools like Grok when the technology is far from error-free.
Grok said the photo showed Amal Hussain, a seven-year-old Yemeni child, in October 2018.
In fact the photo shows nine-year-old Mariam Dawwas in the arms of her mother Modallala in Gaza City on August 2, 2025.
Before the war, sparked by Hamas’s October 7, 2023 attack on Israel, Mariam weighed 25 kilograms, her mother told AFP.
Today, she weighs only nine kilograms. The only nutrition she gets to help her condition is milk, Modallala told AFP — and even that’s “not always available.”
Challenged on its incorrect response, Grok said: “I do not spread fake news; I base my answers on verified sources.”
The chatbot eventually issued a response that recognized the error — but in reply to further queries the next day, Grok repeated its claim that the photo was from Yemen.
The chatbot has previously issued content that praised Nazi leader Adolf Hitler and that suggested people with Jewish surnames were more likely to spread online hate.
Grok’s mistakes illustrate the limits of AI tools, whose functions are as impenetrable as “black boxes,” said Louis de Diesbach, a researcher in technological ethics.
“We don’t know exactly why they give this or that reply, nor how they prioritize their sources,” said Diesbach, author of a book on AI tools, “Hello ChatGPT.”
Each AI has biases linked to the information it was trained on and the instructions of its creators, he said.
In the researcher’s view, Grok, made by Musk’s xAI start-up, shows “highly pronounced biases which are highly aligned with the ideology” of the South African billionaire, a former confidant of US President Donald Trump and a standard-bearer for the radical right.
Asking a chatbot to pinpoint a photo’s origin takes it out of its proper role, said Diesbach.
“Typically, when you look for the origin of an image, it might say: ‘This photo could have been taken in Yemen, could have been taken in Gaza, could have been taken in pretty much any country where there is famine’.”
AI does not necessarily seek accuracy — “that’s not the goal,” the expert said.
Another AFP photograph of a starving Gazan child by Al-Qattaa, taken in July 2025, had already been wrongly located and dated by Grok to Yemen, 2016.
That error led Internet users to accuse the French newspaper Libération, which had published the photo, of manipulation.
An AI’s bias is linked to the data it is fed and what happens during fine-tuning — the so-called alignment phase — which then determines what the model would rate as a good or bad answer.
“Just because you explain to it that the answer’s wrong doesn’t mean it will then give a different one,” Diesbach said.
“Its training data has not changed and neither has its alignment.”
Grok is not alone in wrongly identifying images.
When AFP asked Mistral AI’s Le Chat — which is in part trained on AFP’s articles under an agreement between the French start-up and the news agency — the bot also misidentified the photo of Mariam Dawwas as being from Yemen.
For Diesbach, chatbots must never be used as tools to verify facts.
“They are not made to tell the truth,” but to “generate content, whether true or false,” he said.
“You have to look at it like a friendly pathological liar — it may not always lie, but it always could.”
Dangerous dreams: Inside Internet’s ‘sleepmaxxing’ craze

- One so-called insomnia cure involves people hanging by their necks with ropes or belts and swinging their bodies in the air
- The explosive rise of the trend underscores social media’s power to legitimize unproven health practices, particularly as tech platforms scale back content moderation
WASHINGTON: From mouth taping to rope-assisted neck swinging, a viral social media trend is promoting extreme bedtime routines that claim to deliver perfect sleep — despite scant medical evidence and potential safety risks.
Influencers on platforms including TikTok and X are fueling a growing wellness obsession popularly known as “sleepmaxxing,” a catch-all term for activities and products aimed at optimizing sleep quality.
The explosive rise of the trend — generating tens of millions of posts — underscores social media’s power to legitimize unproven health practices, particularly as tech platforms scale back content moderation.
One so-called insomnia cure involves people hanging by their necks with ropes or belts and swinging their bodies in the air.
“Those who try it claim their sleep problems have significantly improved,” said one clip on X that racked up more than 11 million views.
Experts have raised alarm about the trick, following a Chinese state broadcaster’s report that attributed at least one fatality in China last year to a similar “neck hanging” routine.
Such sleepmaxxing techniques are “ridiculous, potentially harmful, and evidence-free,” Timothy Caulfield, a misinformation expert from the University of Alberta in Canada, told AFP.
“It is a good example of how social media can normalize the absurd.”
Another popular practice is taping the mouth shut during sleep, promoted as a way to encourage nasal breathing. Influencers claim it offers broad benefits, from better sleep and improved oral health to reduced snoring.
But a report from George Washington University found that most of these claims were not supported by medical research.
Experts have also warned the practice could be dangerous, particularly for those suffering from sleep apnea, a condition that disrupts breathing during sleep.
Other unfounded tricks touted by sleepmaxxing influencers include wearing blue- or red-tinted glasses, using weighted blankets, and eating two kiwis just before bed.
‘Actively unhelpful, even damaging’
“My concern with the ‘sleepmaxxing’ trend — particularly as it’s presented on platforms like TikTok — is that much of the advice being shared can be actively unhelpful, even damaging, for people struggling with real sleep issues,” Kathryn Pinkham, a Britain-based insomnia specialist, told AFP.
“While some of these tips might be harmless for people who generally sleep well, they can increase pressure and anxiety for those dealing with chronic insomnia or other persistent sleep problems.”
While sound and sufficient sleep is considered a cornerstone of good health, experts warn that the trend may be contributing to orthosomnia, an obsessive preoccupation with achieving perfect sleep.
“The pressure to get perfect sleep is embedded in the sleepmaxxing culture,” said Eric Zhou of Harvard Medical School.
“While prioritizing restful sleep is commendable, setting perfection as your goal is problematic. Even good sleepers vary from night to night.”
Pinkham added that poor sleep was often fueled by the “anxiety to fix it,” a fact largely unacknowledged by sleepmaxxing influencers.
“The more we try to control sleep with hacks or rigid routines, the more vigilant and stressed we become — paradoxically making sleep harder,” Pinkham said.
Melatonin as insomnia treatment
Many sleepmaxxing posts focus on enhancing physical appearance rather than improving health, reflecting an overlap with “looksmaxxing” — another online trend that encourages unproven and sometimes dangerous techniques to boost sexual appeal.
Some sleepmaxxing influencers have sought to profit from the trend’s growing popularity, promoting products such as mouth tapes, sleep-enhancing drink powders, and “sleepmax gummies” containing melatonin.
That may violate regulations in some countries, such as Britain, where melatonin is available only as a prescription drug.
The American Academy of Sleep Medicine has recommended against using melatonin to treat insomnia in adults, citing inconsistent medical evidence regarding its effectiveness.
Some medical experts also caution about the placebo effect among insomnia patients using sleep medication, in which people report real improvement after taking a fake or inactive treatment because of their beliefs.
“Many of these tips come from non-experts and aren’t grounded in clinical evidence,” said Pinkham.
“For people with genuine sleep issues, this kind of advice often adds pressure rather than relief.”