Surveillance nation: India spies on world’s largest population

Authorities say surveillance systems are needed to improve governance and bolster security in a severely under-policed country. (AFP file photo)
Updated 20 March 2023


  • Across the country, the use of CCTV and facial recognition is increasing in schools, airports, train stations, prisons and streets as authorities roll out a nationwide system to curb crime and identify missing children

NEW DELHI: Khadeer Khan was arrested in the south Indian city of Hyderabad in January after police claimed to have identified him from CCTV footage as a suspect in a chain snatching incident. He was released a few days later, and died while being treated for injuries he allegedly sustained while in custody.
The police said Khan was arrested because he looked like the man seen in the CCTV footage.
“When it was ruled out that Khadeer was the one who had committed the crime, he was released. Everything was done as per procedure,” said K. Saidulu, a deputy superintendent of police.
But human rights activists say the 36-year-old was clearly misidentified — a growing risk with the widespread use of CCTV in Telangana state, which has among the highest concentrations of the surveillance technology in the country.
“We have been warning for many years that CCTV and facial recognition technology can be misused for harassment, and that they can misidentify people,” said S.Q. Masood, a human rights activist who in 2021 filed a still-ongoing lawsuit challenging the use of facial recognition in Telangana.

HIGHLIGHTS

• India poised to become world's most populous nation

• Increased digitisation of services has led to greater surveillance, activists say

• Authorities say surveillance needed to curb crime

“This case has exposed just how harmful it can be,” he told the Thomson Reuters Foundation.
Across the country, the use of CCTV and facial recognition is increasing in schools, airports, train stations, prisons and streets as authorities roll out a nationwide system to curb crime and identify missing children.
It’s not the only form of surveillance in the country.
The biometric national ID Aadhaar, with some 1.3 billion IDs issued, is linked to dozens of databases including bank accounts, vehicle registrations, SIM cards and voters’ lists, while the National Intelligence Grid aims to link nearly two dozen government databases to build citizen profiles.
Meanwhile, policing of the Internet has also grown, with greater monitoring of social media, and the most frequent Internet shutdowns in the world.
Authorities say these systems are needed to improve governance and bolster security in a severely under-policed country. But technology experts say there is little evidence they reduce crime, and that they violate privacy and target vulnerable people.
“Everything’s being digitised, so there’s a lot of information about a person being generated that is accessible to the government and to private entities without adequate safeguards,” said Anushka Jain, legal counsel at Internet Freedom Foundation, an advocacy group in Delhi.
“At a time when people are attacked for their religion, language and sexual identity, the easy availability of these data can be very harmful. It can also result in individuals losing access to welfare schemes, to public transport or the right to protest whenever the government deems it necessary.”

BIRTH TO DEATH
India is poised to become the world’s most populous country in April, overtaking China with more than 1.43 billion people, according to estimates by the United Nations.
The government, led by Prime Minister Narendra Modi, has prioritized the Digital India program to improve efficiency and streamline welfare schemes by digitising everything from land titles to health records to payments.
Aadhaar — the world’s largest biometric database — underpins many of these initiatives, and is mandatory for welfare, pension and employment schemes, despite a 2014 Supreme Court ruling that it cannot be a requirement for welfare programs.
Yet despite its wide adoption, millions face difficulties with their Aadhaar IDs because of inaccurate details or fingerprints that don’t match, and are denied vital services.
“The government claims linking to Aadhaar brings better governance, but it will lead to a totalitarian society because the government knows every individual’s profile,” said Srinivas Kodali at Free Software Movement of India, an advocacy group.
“The goal is to track everyone from birth to death. Anything linked to Aadhaar eventually ends up with the ministry of home affairs, and the policing and surveillance agencies, so dissent against the government becomes very difficult,” he added.
The ministry of home affairs did not respond to a request for comment.
The latest iteration of digitisation is Digi Yatra, which was rolled out at the Delhi, Bengaluru and Varanasi airports in December. It allows passengers to use their Aadhaar ID and facial recognition for airport check-ins.
The ministry of civil aviation has said Digi Yatra leads to “reduced wait time and makes the boarding process faster and more seamless,” with dedicated lanes for those using the app.
But those who choose not to use Digi Yatra may be viewed with suspicion and subjected to additional checks, said Kodali.
The data — including travel details — can also be shared with other government agencies, and may be used to put people on no-fly lists, and stop activists, journalists and dissenters from traveling, as is already happening, said Kodali.
The ministry of civil aviation did not respond to a request for comment.

ATTENDANCE APPS
Some of the lowest-paid public-sector workers in India bear the brunt of the government’s surveillance mechanisms.
Municipal workers across the country are required to wear GPS-enabled watches that are equipped with a camera that takes snapshots, and a microphone that can listen in on conversations.
The watches feed a stream of data to a central control room, where officials monitor the movements of each employee, and link the data to performance and salaries.
Authorities have said the goal is to improve efficiency, but workers across the country have protested the surveillance.
In January, the federal government said that the National Mobile Monitoring Software (NMMS) app would be mandatory for all workers under the National Rural Employment Guarantee Scheme (NREGS), after having rolled it out in several states last year.
Women make up nearly 60 percent of the more than 20 million beneficiaries nationwide who are guaranteed 100 days of work a year and are paid a daily wage of up to 331 rupees ($4).
The new system requires the supervising officer, called a mate, to upload pictures of the laborers when they start work and when they finish, as proof of attendance that was previously recorded in manual logs.
But this requires the mate — usually a woman — to have a smartphone and a stable Internet connection twice a day, which is near impossible in many rural areas, said Rakshita Swamy, a researcher with the non-profit Peoples’ Action for Employment Guarantee.
“If the pictures don’t get uploaded, the workers are considered absent, and they don’t get paid for the work,” she said.
“There is also hesitation among the women about having their pictures taken. There is no transparency about what happens to these photographs — it’s highly likely that they are being used to train facial recognition algorithms,” she added.
Hundreds of NREGS workers are holding a protest in Delhi, calling for the payment of back wages and for the app to be scrapped.
The ministry of rural development has said the app would lead to “more transparency and ensure proper monitoring” of workers, without addressing surveillance concerns.
A long-delayed data protection law, which is awaiting passage in parliament, would offer little recourse as it gives sweeping exceptions to government agencies, say privacy experts.
In Rajasthan state, which has among the highest number of Internet shutdowns in the country, Kamla Devi, a mate in Ajmer district, has struggled with the NMMS app for several months.
“On many days, there’s no network, and I tell the workers to go home. There’s no point if they work because they won’t get paid,” she said.
“This app is ruining livelihoods. It was better when we had a manual attendance log.”

 

 


Tech-fueled misinformation distorts Iran-Israel fighting

Updated 23 June 2025


  • It is no surprise that as generative-AI tools continue to improve in photo-realism, they are being misused to spread misinformation

WASHINGTON: AI deepfakes, video game footage passed off as real combat, and chatbot-generated falsehoods — such tech-enabled misinformation is distorting the Israel-Iran conflict, fueling a war of narratives across social media.
The information warfare unfolding alongside ground combat — sparked by Israel’s strikes on Iran’s nuclear facilities and military leadership — underscores a digital crisis in the age of rapidly advancing AI tools that have blurred the lines between truth and fabrication.
The surge in wartime misinformation has exposed an urgent need for stronger detection tools, experts say, as major tech platforms have largely weakened safeguards by scaling back content moderation and reducing reliance on human fact-checkers.
After Iran struck Israel with barrages of missiles last week, AI-generated videos falsely claimed to show damage inflicted on Tel Aviv and Ben Gurion Airport.
The videos were widely shared across Facebook, Instagram and X.
Using a reverse image search, AFP’s fact-checkers found that the clips were originally posted by a TikTok account that produces AI-generated content.
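AFP does not describe the tooling behind that check, but a reverse image search can also be run programmatically. The sketch below is only one hedged example, using Google Cloud Vision’s web detection feature; it assumes the google-cloud-vision library and configured Google Cloud credentials, and “frame.jpg” stands in for a hypothetical frame extracted from the suspect clip.

```python
# Illustrative only: one way to look up where an extracted video frame has
# appeared online, using Google Cloud Vision web detection. Requires
# google-cloud-vision and configured Google Cloud credentials; "frame.jpg"
# is a hypothetical frame grabbed from the suspect clip.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("frame.jpg", "rb") as f:
    image = vision.Image(content=f.read())

result = client.web_detection(image=image).web_detection

# Pages that embed a matching or near-matching image, which can help
# locate the earliest known posting of the footage.
for page in result.pages_with_matching_images:
    print(page.url)

# Google's best guess at what the image shows, useful as a search query.
for label in result.best_guess_labels:
    print(label.label)
```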
There has been a “surge in generative AI misinformation, specifically related to the Iran-Israel conflict,” Ken Jon Miyachi, founder of the Austin-based firm BitMindAI, told AFP.
“These tools are being leveraged to manipulate public perception, often amplifying divisive or misleading narratives with unprecedented scale and sophistication.”
GetReal Security, a US company focused on detecting manipulated media including AI deepfakes, also identified a wave of fabricated videos related to the Israel-Iran conflict.
The company linked the visually compelling videos — depicting apocalyptic scenes of war-damaged Israeli aircraft and buildings as well as Iranian missiles mounted on a trailer — to Google’s Veo 3 AI generator, known for hyper-realistic visuals.
The Veo watermark is visible at the bottom of a video posted online by the news outlet Tehran Times that claims to show “the moment an Iranian missile” struck Tel Aviv.
“It is no surprise that as generative-AI tools continue to improve in photo-realism, they are being misused to spread misinformation and sow confusion,” said Hany Farid, the co-founder of GetReal Security and a professor at the University of California, Berkeley.
Farid offered one tip to spot such deepfakes: Veo 3 videos are normally eight seconds long, or a combination of clips of similar duration.
“This eight-second limit obviously doesn’t prove a video is fake, but should be a good reason to give you pause and fact-check before you re-share,” he said.
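For readers who want to apply Farid’s rule of thumb before re-sharing a clip, the check can be scripted. The sketch below is purely illustrative: it assumes the ffprobe tool from FFmpeg is installed, the file name and half-second tolerance are hypothetical choices, and a match is only a prompt to fact-check, never proof of fabrication.

```python
# Illustrative sketch only: flags clips whose length is close to the
# eight seconds Farid describes for Veo 3 output, or a multiple of it.
# Assumes the ffprobe command-line tool (part of FFmpeg) is installed;
# the file name and half-second tolerance are hypothetical choices.
import subprocess


def clip_duration_seconds(path: str) -> float:
    """Return a video's duration in seconds using ffprobe."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout.strip())


def worth_a_second_look(path: str, target: float = 8.0, tolerance: float = 0.5) -> bool:
    """True if the clip runs roughly eight seconds, or a multiple of that.

    A match proves nothing by itself; it is only a reason to pause and
    fact-check before re-sharing, as Farid suggests.
    """
    duration = clip_duration_seconds(path)
    multiples = duration / target
    return round(multiples) >= 1 and abs(multiples - round(multiples)) * target <= tolerance


if __name__ == "__main__":
    print(worth_a_second_look("suspect_clip.mp4"))  # hypothetical file
```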
The falsehoods are not confined to social media.
Disinformation watchdog NewsGuard has identified 51 websites that have advanced more than a dozen false claims — ranging from AI-generated photos purporting to show mass destruction in Tel Aviv to fabricated reports of Iran capturing Israeli pilots.
Sources spreading these false narratives include Iranian military-linked Telegram channels and state media outlets affiliated with the Islamic Republic of Iran Broadcasting (IRIB), which is sanctioned by the US Treasury Department, NewsGuard said.
“We’re seeing a flood of false claims and ordinary Iranians appear to be the core targeted audience,” McKenzie Sadeghi, a researcher with NewsGuard, told AFP.
Sadeghi described Iranian citizens as “trapped in a sealed information environment,” where state media outlets dominate in a chaotic attempt to “control the narrative.”
Iran itself claimed to be a victim of tech manipulation, with local media reporting that Israel briefly hacked a state television broadcast, airing footage of women’s protests and urging people to take to the streets.
Adding to the information chaos were online clips lifted from war-themed video games.
AFP’s fact-checkers identified one such clip posted on X, which falsely claimed to show an Israeli jet being shot down by Iran. The footage bore striking similarities to the military simulation game Arma 3.
Israel’s military has rejected Iranian media reports claiming its fighter jets were downed over Iran as “fake news.”
Chatbots such as xAI’s Grok, which online users are increasingly turning to for instant fact-checking, falsely identified some of the manipulated visuals as real, researchers said.
“This highlights a broader crisis in today’s online information landscape: the erosion of trust in digital content,” BitMindAI’s Miyachi said.
“There is an urgent need for better detection tools, media literacy, and platform accountability to safeguard the integrity of public discourse.”


BBC shelves Gaza documentary over impartiality concerns, sparking online outrage

Updated 22 June 2025


  • The film, titled “Gaza: Doctors Under Attack,” had been under editorial consideration by the broadcaster for several months

LONDON: The BBC has decided not to air a highly anticipated documentary about medics in Gaza, citing concerns over maintaining its standards of impartiality amid the ongoing Israel-Gaza conflict.

The film, titled “Gaza: Doctors Under Attack” (also known as “Gaza: Medics Under Fire”), was produced by independent company Basement Films and had been under editorial consideration by the broadcaster for several months.

In a statement issued on June 20, the BBC said it had concluded that broadcasting the documentary “risked creating a perception of partiality that would not meet the BBC’s editorial standards.” The rights have since been returned to the filmmakers, allowing them to seek distribution elsewhere.

The decision comes in the wake of growing scrutiny over how the BBC is covering the Israel-Gaza war. Earlier this year, the broadcaster faced backlash after airing “Gaza: How to Survive a War Zone,” a short film narrated by a 13-year-old boy later revealed to be the son of a Hamas official. The segment triggered nearly 500 complaints, prompting an internal review and raising questions about vetting, translation accuracy, and the use of sources in conflict zones.

BBC insiders report that portions of “Gaza: Doctors Under Attack” had been considered for integration into existing news programming. However, concerns reportedly emerged during internal reviews that even limited broadcast could undermine the BBC’s reputation for neutrality, particularly given the politically charged context of the ongoing war.

Filmmaker Ben de Pear and journalist Ramita Navai, who worked on the documentary, have expressed disappointment at the decision. They argue that the film provided a necessary and unfiltered look at the conditions medical workers face in Gaza. “This is a documentary about doctors — about the reality of trying to save lives under bombardment,” said Navai. “To shelve this is to silence those voices.”

Critics of the BBC’s decision have been vocal on social media and online forums, accusing the broadcaster of yielding to political pressure and censoring Palestinian perspectives. One commenter wrote, “Sorry, supporters of the Israeli government would get very offended if we demonstrated the consequences … so we shelved it.” Others, however, defended the move, citing the importance of neutrality in public service broadcasting.

A BBC spokesperson said the decision was made independently of political influence and reflected long-standing editorial guidelines. “We are committed to reporting the Israel-Gaza conflict with accuracy and fairness. In this case, we concluded the content, in its current form, could compromise audience trust.”

With the rights now returned, Basement Films is expected to seek other avenues for release. Whether the documentary will reach the public via another broadcaster or platform remains to be seen.


Iran’s Internet blackout leaves public in dark, creates uneven picture of war with Israel

Updated 20 June 2025


  • Civilians are left unaware of when and where Israel will strike next, despite Israeli forces issuing warnings
  • Activists see it as a form of psychological warfare

DUBAI: As the war between Israel and Iran hits the one-week mark, Iranians have spent nearly half of the conflict in a near-communication blackout, unable to connect not only with the outside world but also with their neighbors and loved ones across the country.
Civilians are left unaware of when and where Israel will strike next, despite Israeli forces issuing warnings through their Persian-language online channels. When the missiles land, disconnected phone and web services mean not knowing for hours or days whether family or friends are among the victims. That has left many scrambling on various social media apps to see what is happening, and even then only a glimpse of life in a nation of more than 80 million people makes it onto the Internet.
Activists see it as a form of psychological warfare for a nation all-too familiar with state information controls and targeted Internet shutdowns during protests and unrest.
“The Iranian regime controls the information sphere really, really tightly,” Marwa Fatafta, the Berlin-based policy and advocacy director for digital rights group Access Now, said in an interview with The Associated Press. “We know why the Iranian regime shuts down. It wants to control information. So their goal is quite clear.”
War with Israel tightens information space
But this time, it’s happening during a deadly conflict that erupted on June 13 with Israeli airstrikes targeting nuclear and military sites, top generals and nuclear scientists. At least 657 people, including 263 civilians, have been killed in Iran and more than 2,000 wounded, according to a Washington-based group called Human Rights Activists.
Iran has retaliated by firing 450 missiles and 1,000 drones at Israel, according to Israeli military estimates. Most have been shot down by Israel’s multitiered air defenses, but at least 24 people in Israel have been killed and hundreds of others wounded. Guidance from Israeli authorities, as well as round-the-clock news broadcasts, flows freely and consistently to Israeli citizens, creating over the last seven days an uneven picture of the death and destruction brought by the war.
The Iranian government contended Friday that it was Israel who was “waging a war on truth and human conscience.” In a post on X, a social media platform blocked for many of its citizens, Iran’s Foreign Ministry asserted Israel banned foreign media from covering missile strikes.
The statement added that Iran would organize “global press tours to expose Israel’s war crimes” in the country. Iran is one of the world’s top jailers of journalists, according to the Committee to Protect Journalists, and even in the best of times reporters face strict restrictions.
Internet-access advocacy group NetBlocks.org reported on Friday that Iran had been disconnected from the global Internet for 36 hours, with its live metrics showing that national connectivity remained at only a few percentage points of normal levels. The group said a handful of users have been able to maintain connectivity through virtual private networks.
Few avenues exist to get information
Those lucky few have become lifelines for Iranians left in the dark. In recent days, those who have gained access to mobile Internet for a limited time describe using that fleeting opportunity to make calls on behalf of others, checking in on elderly parents and grandparents, and locating those who have fled Tehran.
The only information Iranians can readily access comes from websites based in the Islamic Republic. Meanwhile, Iran’s state-run television and radio stations offer irregular updates on what is happening inside the country, focusing instead on the damage wrought by Iran’s strikes on Israel.
The lack of information going in or out of Iran is stunning, considering that advances in technology in recent decades have brought far-flung conflicts in Ukraine, the Gaza Strip and elsewhere directly to people’s phones anywhere in the world.
That direct line has been seen by experts as a powerful tool to shift public opinion about an ongoing conflict and potentially force the international community to take a side. It has also translated into real action, with world leaders coming under public and online pressure to act or to use their power to bring an end to the fighting.
But Mehdi Yahyanejad, a key figure in promoting Internet freedom in Iran, said that the Islamic Republic is seeking to “purport an image” of strength, one that depicts only the narrative that Israel is being destroyed by sophisticated Iranian weapons that include ballistic missiles with multiple warheads.
“I think most likely they’re just afraid of the Internet getting used to cause mass unrest in the next phase of whatever is happening,” Yahyanejad said. “I mean, some of it could be, of course, planned by the Israelis through their agents on the ground, and some of this could be just a spontaneous unrest by the population once they figure out that the Iranian government is badly weakened.”


BBC threatens legal action against AI startup Perplexity over content scraping

Updated 20 June 2025


  • Perplexity has faced accusations from media organizations, including Forbes and Wired, of plagiarizing their content

LONDON: The BBC has threatened legal action against Perplexity, accusing the AI startup of training its “default AI model” using BBC content, the Financial Times reported on Friday, making the British broadcaster the latest news organisation to accuse the AI firm of content scraping.

The BBC may seek an injunction unless Perplexity stops scraping its content, deletes existing copies used to train its AI systems, and submits “a proposal for financial compensation” for the alleged misuse of its intellectual property, FT said, citing a letter sent to Perplexity CEO Aravind Srinivas.

The broadcaster confirmed the FT report on Friday.

Perplexity has faced accusations from media organizations, including Forbes and Wired, of plagiarizing their content, but has since launched a revenue-sharing program to address publisher concerns.

Last October, the New York Times sent it a “cease and desist” notice, demanding the firm stop using the newspaper’s content for generative AI purposes.

Since the introduction of ChatGPT, publishers have raised alarms about chatbots that comb the internet to find information and create paragraph summaries for users.

The BBC said that parts of its content had been reproduced verbatim by Perplexity and that links to the BBC website have appeared in search results, according to the FT report.

Perplexity called the BBC’s claims “manipulative and opportunistic” in a statement to Reuters, adding that the broadcaster had “a fundamental misunderstanding of technology, the internet and intellectual property law.”

Perplexity provides information by searching the internet, similar to ChatGPT and Google’s Gemini, and is backed by Amazon.com (AMZN.O) founder Jeff Bezos, AI giant Nvidia (NVDA.O), and Japan’s SoftBank Group (9984.T).

The startup is in advanced talks to raise $500 million in a funding round that would value it at $14 billion, the Wall Street Journal reported last month.


Streaming platform Deezer starts flagging AI-generated music

Updated 20 June 2025


  • French streaming service Deezer is now alerting users when they come across music identified as completely generated by artificial intelligence, in what the company calls a global first

PARIS: French streaming service Deezer is now alerting users when they come across music identified as completely generated by artificial intelligence, the company told AFP on Friday in what it called a global first.
The announcement by chief executive Alexis Lanternier follows repeated statements from the platform that a torrent of AI-generated tracks is being uploaded daily — a challenge Deezer shares with other streaming services including Swedish heavyweight Spotify.
Deezer said in January that it was receiving uploads of 10,000 AI tracks a day, doubling to over 20,000 in an April statement — or around 18 percent of all music added to the platform.
The company “wants to make sure that royalties supposed to go to artists aren’t being taken away” by tracks generated from a brief text prompt typed into a music generator like Suno or Udio, Lanternier said.
AI tracks are not being removed from Deezer’s library, but instead are demonetised to avoid unfairly reducing human musicians’ royalties.
Albums containing tracks suspected of being created in this way are now flagged with a notice reading “content generated by AI,” a move Deezer says is a global first for a streaming service.
Lanternier said Deezer’s home-grown detection tool was able to spot markers of AI provenance with 98 percent accuracy.
“An audio signal is an extremely complex bundle of information. When AI algorithms generate a new song, there are little sounds that only they make which give them away... that we’re able to spot,” he said.
“It’s not audible to the human ear, but it’s visible in the audio signal.”
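Deezer has not published how its detector works, and the tool itself is proprietary. As a rough illustration of the general idea Lanternier describes, that artefacts inaudible to the ear can still surface in the audio signal, the hedged sketch below extracts mel-spectrogram statistics with librosa and fits a simple scikit-learn classifier; the training files, labels and feature choices are hypothetical placeholders, not Deezer’s method.

```python
# Minimal sketch of the general idea, not Deezer's proprietary detector:
# artefacts that are inaudible to the ear can still surface as patterns in
# a spectral representation that a classifier learns to separate.
# Assumes librosa and scikit-learn; the files and labels are hypothetical.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression


def spectral_features(path: str, sr: int = 22050) -> np.ndarray:
    """Summarize a track as per-band mean and variance of its mel spectrogram."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    mel_db = librosa.power_to_db(mel)
    return np.concatenate([mel_db.mean(axis=1), mel_db.var(axis=1)])


# Hypothetical labelled examples: 1 = fully AI-generated, 0 = human-made.
training_files = [
    ("ai_track_01.wav", 1), ("ai_track_02.wav", 1),
    ("human_track_01.wav", 0), ("human_track_02.wav", 0),
]

X = np.stack([spectral_features(path) for path, _ in training_files])
y = np.array([label for _, label in training_files])

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new upload: a probability near 1 suggests AI-generated audio.
score = clf.predict_proba(spectral_features("new_upload.wav").reshape(1, -1))[0, 1]
print(f"AI likelihood: {score:.2f}")
```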
With 9.7 million subscribers worldwide, most of them in France, Deezer is a relative minnow compared to Spotify, which has 268 million subscribers.
In January, the Swedish firm signed a deal with the world’s biggest label, Universal Music Group, that is supposed to better remunerate artists and other rights holders.
But Spotify has not taken the same path as Deezer of demonetising AI content.
It has pointed to the lack of a clear definition of completely AI-generated audio, as well as the absence of any legal framework setting such works apart from human-created ones.