AI Surveillance for Paris Olympics Sparks Privacy Concerns
French plans to use artificial intelligence (AI) to monitor athletes, coaches, and spectators at the Paris Olympics have raised privacy concerns among rights groups, which view the technology as a form of creeping surveillance.
AI Surveillance Testing in France
French authorities have recently tested AI surveillance systems at train stations, concerts, and football matches. During the Olympics, these systems will scan crowds for abandoned packages, weapons, and other threats. The tools are not expected to be fully operational in time for the Games, but police, fire and rescue services, and some transport security agents are authorised to use them on an experimental basis until March 31, 2025.
Rights groups fear that AI surveillance could become the new norm. Katia Roux, advocacy lead at Amnesty International France, stated, “The Olympics are a huge opportunity to test this type of surveillance under the guise of security issues, paving the way to more intrusive systems like facial recognition.”
Companies and Metrics
The French government has enlisted four companies for this effort: Videtics, Orange Business, ChapsVision, and Wintics. Their security platforms track eight key metrics: movement against the flow of traffic, people in prohibited zones, crowd movements, abandoned packages, the presence of weapons, overcrowding, a body on the ground, and fire.
The software has been tested at events including Depeche Mode and Black Eyed Peas concerts, a Paris Saint-Germain football match, and at metro stations during Taylor Swift’s concerts in Paris. The Cannes Film Festival, which attracts 40,000 attendees, was another test site. Cannes Mayor David Lisnard noted that the town already has the densest video protection network in France, with 884 cameras.
Surveillance Infrastructure Concerns
France has around 90,000 video surveillance cameras monitored by the police and gendarmerie. Daniel Leufer, a senior policy analyst at the digital rights group Access Now, expressed concern that these systems could evolve into more invasive forms of mass surveillance.
“While these use cases may not seem to reveal individual identities, they still require a surveillance infrastructure that could easily be updated for more invasive surveillance,” Leufer warned.
Legal and Ethical Implications
French lawmakers have banned facial recognition, presenting it as a red line not to be crossed. However, privacy campaigners argue that exceptions in the legislation allow competent authorities to deploy it for purposes such as national security and migration. Roux emphasised that such exceptions could lead to misuse.
Past use of surveillance tools has added to these concerns. In November, the investigative non-profit Disclose revealed that French law enforcement had covertly used facial recognition software from the Israeli company BriefCam since 2015.
Future of AI Surveillance
Senator Agnès Canayer acknowledged that AI-driven video surveillance may not perform optimally during the Olympics, requiring additional security forces to compensate for its shortcomings. The Ministry of the Interior did not respond to requests for comment.
The government’s Law Commission has recommended keeping the technology on an experimental basis and extending the retention period of captured images so the equipment can be tested across different seasons and events.
Roux concluded, “We need to campaign and raise awareness about facial recognition now. If we wait until it is used, it will be too late.”