The Progressive Post

The EU’s dangerous proposal for stopping online child sexual abuse material

05/07/2023

Child sexual abuse material is a horror, causing long-term harm to victims. The numbers are increasing: the US National Center for Missing and Exploited Children (NCMEC), which collects and shares child sexual abuse material evidence with authorised parties, reported 29 million cases of online sexual exploitation in 2021, a tenfold increase over 2011, alongside a 40 per cent increase in Internet videos of child sexual abuse between 2020 and 2021. Yet most computer security experts and privacy advocates strongly oppose the EU proposal that online providers must recognise and remove all known child sexual abuse material, detect new abuse material and ‘grooming’ (enticing and luring a minor into a sexually abusive situation). There simply is no technology to do this.

Feasibility and proportionality are at issue. Though current technology can largely recognise previously identified child sexual abuse material (CSAM), it cannot effectively identify new CSAM or grooming at scale. Current and proposed technologies produce both false negatives (missing a crime) and false positives (mistakenly identifying innocuous content as CSAM). Both kinds of error can be deliberately induced, whether by someone seeking to evade detection or by someone targeting a victim with inoffensive-looking content crafted to trigger a false positive.

Even a whiff of a CSAM investigation is sufficient to make a person and their family community outcasts. False accusations have led to suicides. Implementing the EU regulation would greatly increase the number of false positives and falsely accused people.

Furthermore – and critically – though the regulation does not explicitly say so, satisfying its requirements would effectively break the security guarantees of end-to-end encryption. Such encryption is the basis of the confidentiality provided by messaging apps like Signal and WhatsApp, and it is the only method for ensuring that communications remain confidential. As many in national security and law enforcement have noted, such encryption is critical for protecting industry, national security, and individuals. The EU regulation would prevent its use.

A 2021 study for the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs (LIBE) recommended instead a targeted approach: do not monitor everyone, just those already under suspicion of Child Sexual Abuse and Exploitation (CSAE). But the proposed regulation did not adopt this approach. The 2023 report for the LIBE committee concluded that the regulation’s obligations on technology providers “would likely fail the proportionality test”. To understand how investigations might work in the presence of encryption, let us dig into the crimes the proposal seeks to address. CSAE – the more accurate name for these crimes – consists of four separate types of abuse.

First is CSAM, the production and distribution of photographs and videos of child sexual abuse. The number of reported instances is high, but the number of distinct instances – and of different children affected – is much lower: 90 per cent of Meta’s reporting to NCMEC from October and November 2020 was effectively the same as or similar to previously reported content, and half of all reporting concerned the same six videos. Second is Perceived First Person (PFP) material, in which a child shares a nude image of themselves. This is not criminal but becomes so when the photo is redistributed without permission. The third form of CSAE is online trafficking of children for sexual purposes, and the fourth is real-time internet video of child sexual abuse. Each crime requires different tools for prevention and investigation.

Though CSAM investigations can be impeded by encryption, there is a large opportunity to stop much of the redistribution. Meta learned that resharing of CSAM often occurs not out of prurient interest but out of outrage or a warped sense of humour. An American University report observed that warning of severe legal consequences for sharing CSAM can have a strong deterrent effect on these sharers.

The American University report proposed other interventions. In PFP cases, the child who shares her nude photo is not engaging in criminal activity; she may be incentivised to report when the photo is reshared. Sex education, information about online safety and reporting abuse can enable this – or even prevent the photo from being created in the first place.

Meanwhile, abusers offering internet-enabled child sex trafficking and real-time videos of child sexual abuse are most likely to be family members or friends. Thus, a serious investigatory problem is the child’s unwillingness to see the abuser prosecuted. Interventions empowering the child, including providing safe community spaces and sex education, can be crucial. And investigators can use online techniques – the abusers’ advertisements and the odd communications patterns of real-time video – even when the video itself is encrypted.

Proponents argue that the regulation will pressure technology firms to improve their efforts at finding and reporting CSAE. Two techniques are proposed: perceptual hashing and machine learning, the former for recognising previously known CSAM, the latter for discerning new instances and grooming. The claim that such techniques can work effectively is illusory: as a group of computer security experts documented, this argument by the proposal’s supporters ignores technological realities (disclosure: I co-authored this report).

Perceptual hashing, currently responsible for much of CSAM reporting, divides an image into many tiny squares and computes a ‘hash’ of the image – a mapping of a long string of bits to a much smaller, fixed-size one – then matches images whose hashes are ‘close by’. This enables recognising a CSAM image even if it is cropped, blurred or otherwise changed in a minor way. But perceptual hashing can be fooled. Researchers have created images of a young girl and of a beagle whose hashes match. Such technical deceit means a perceptual hashing system can be misled into reporting a CSAM image where none exists. (Perceptual hashes also suffer from false negatives, meaning modified CSAM images may be missed.)
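To make the mechanism concrete, here is a minimal sketch in Python of the simplest flavour of perceptual hashing, an ‘average hash’, using made-up pixel values. Deployed systems are far more elaborate, but the sketch shows why a lightly edited image keeps a nearly identical hash – and hints at why unrelated images can be engineered to collide.

```python
# A toy 'average hash': a minimal sketch of the idea behind perceptual hashing.
# The image is a plain list of rows of greyscale values (0-255) so the example
# is self-contained; real deployed systems are far more sophisticated.

def average_hash(pixels):
    """Map a greyscale image to a short bit string: each bit records
    whether a pixel is brighter than the image's average brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count differing bits; images with 'close-by' hashes are treated as matches."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# A 4x4 toy image and a slightly brightened copy, standing in for a minor edit.
original = [[ 10,  20, 200, 210],
            [ 15,  25, 190, 205],
            [220, 230,  30,  40],
            [215, 225,  35,  45]]
edited = [[min(p + 10, 255) for p in row] for row in original]

print(hamming_distance(average_hash(original), average_hash(edited)))
# -> 0: the edit leaves the hash unchanged, so the images still match.
# An adversary exploits the same property in reverse, nudging pixels until an
# unrelated image lands 'close by' a target hash.
```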

Scale makes perceptual hashing an inadequate solution for CSAM recognition. A message that tests positive for CSAM must be examined, but users send many billions of messages daily. If even a tiny percentage of flagged messages are false positives, service providers will be unable to manage the huge numbers. As criminals develop additional ways to fool the technology, high numbers of false positives and negatives will be a reality. And although experimental efforts use machine learning to unearth grooming, that work faces the same problems as perceptual hashing. 
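A back-of-the-envelope calculation makes the point. The figures below are hypothetical round numbers chosen for illustration, not measured error rates or real traffic volumes.

```python
# Back-of-the-envelope arithmetic on the scale problem (hypothetical figures).

messages_per_day = 10_000_000_000   # assumed daily message volume across a large provider
false_positive_rate = 0.001         # assumed 0.1 per cent of scanned messages misflagged

flagged_in_error = messages_per_day * false_positive_rate
print(f"{flagged_in_error:,.0f} innocuous messages flagged per day")
# -> 10,000,000 innocuous messages per day, each in principle needing human review
```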

Simply put, the two technologies fail the efficacy test. Claims that these systems can be developed to satisfy the EU proposal’s requirements reflect wishful thinking by policymakers rather than hard-eyed analysis by technologists.

Because storing CSAE material is illegal, academics cannot study the efficacy of current detection techniques. European law enforcement is largely silent, but we have some information. In 2022 the Irish Council for Civil Liberties queried An Garda Síochána, the Irish national police, about NCMEC referrals for investigation in 2020. The police had received 4,192 referrals, of which 409 were actionable, with 265 of the cases completed. A higher number – 471, or eleven per cent – were deemed not to be CSAM. An Garda Síochána kept the files anyway. Now 471 people have police records because a programme incorrectly flagged them as having CSAM.
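For readers who wish to check the proportions, the percentages follow directly from the referral counts above:

```python
# The proportions implied by the Garda referral figures cited above.
referrals = 4_192    # NCMEC referrals received
actionable = 409     # referrals deemed actionable
not_csam = 471       # referrals deemed not to be CSAM

print(f"{not_csam / referrals:.1%} of referrals were not CSAM")   # -> 11.2%
print(f"{actionable / referrals:.1%} were actionable")            # -> 9.8%
```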

Based on an illusion, the EU CSAM regulation is downright dangerous. It should not pass.
