The CHAT CONTROL proposal: status of the legislative process and next steps

After years of deadlock, the European “chat control” proposal (formally, the proposed regulation laying down rules to prevent and combat child sexual abuse online) reached a political agreement[1] in the Council in late November 2025. The agreement rests on risk assessment obligations for online services and on the possibility of continuing content scanning, including on end-to-end encrypted messaging services, though on a voluntary basis rather than under the mandatory regime of the Commission’s original framework.[2]

The compromise therefore abandons the blanket requirement to scan private messages but leaves in place a regime of voluntary surveillance and “risk mitigation” that many critics consider a covert way of making “chat control 1.0” permanent, with potential implications for privacy, encryption, and fundamental rights.[3]

1. The context: from “Chat Control 1.0” to the new regulation

The debate originated with the 2021 temporary derogation (Regulation 2021/1232), often referred to as “chat control 1.0,” which allows providers to voluntarily scan messages and content to detect child sexual abuse material (CSAM), as an exception to the ePrivacy rules on confidentiality of communications. At present, only providers whose services are not end-to-end encrypted implement such scanning[4] (for example, Meta).

Conceived as a bridge measure, this derogation has been extended multiple times and remains valid until April 2026. The reason is that the permanent legislation proposed by the Commission in 2022, which replaced voluntary detection with blanket obligations for detection, reporting, and removal, including potential orders targeting encrypted services, failed to achieve a common position in the Council until fall 2025.

During Council negotiations, efforts were made to sidestep the problem of encryption integrity through so-called “client-side scanning,” the model of scanning content on the phone before it is sent, but this failed to quell criticism from experts and digital rights activists. Even under this model, all communications effectively become accessible, not only to the platform itself but potentially also to third parties (hackers, spies), because the inspection takes place before encryption offers any protection.
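
To make the mechanism concrete, the following is a deliberately simplified sketch of the client-side scanning model: the device compares an attachment against a blocklist before end-to-end encryption is applied. Everything here is hypothetical and illustrative; real proposals envisage perceptual hashing (capable of matching slightly altered images), not the exact-match cryptographic hash used below.

```python
# Hypothetical, simplified sketch of "client-side scanning": the device checks
# an attachment against a blocklist *before* end-to-end encryption is applied.
# Real schemes envisage perceptual hashes; everything here is illustrative.
import hashlib

BLOCKLIST = {"0" * 64}  # placeholder digest list, pushed to the device

def matches_blocklist(attachment: bytes) -> bool:
    """Return True if the attachment's SHA-256 digest appears on the blocklist."""
    return hashlib.sha256(attachment).hexdigest() in BLOCKLIST

def send_message(attachment: bytes) -> None:
    # The inspection runs on the plaintext, on the user's own device,
    # before any encryption protects the content.
    if matches_blocklist(attachment):
        print("match: content would be flagged/reported")  # hypothetical reporting hook
    ciphertext = encrypt_end_to_end(attachment)  # placeholder E2EE step
    transmit(ciphertext)                         # placeholder network step

def encrypt_end_to_end(plaintext: bytes) -> bytes:
    return plaintext[::-1]  # stand-in, not real encryption

def transmit(ciphertext: bytes) -> None:
    pass  # stand-in for the messaging transport
```

The point of the sketch is structural: whatever matching technology is used, the plaintext is inspected before encryption, which is why critics argue that the confidentiality guarantee of end-to-end encryption no longer holds.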

Moreover, once mainstream communications become monitorable, potential child abusers could simply migrate to other applications downloadable from the internet, a trivial step within anyone’s reach. The Chat Control model would therefore have very limited practical utility against the criminal phenomenon it aims to combat.

Many critics also observe that blanket scanning would generate a high number of “false positives” (i.e., images of children that appear to be child sexual abuse material but are not), potentially swamping investigators with innocent material. This is a classic base-rate problem: because illegal content is a tiny fraction of overall traffic, even a very accurate classifier flags far more innocent items than genuine ones.
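
A rough back-of-the-envelope calculation illustrates the base-rate effect. All figures below are assumptions chosen for illustration only, not data from the proposal or from any evaluation:

```python
# Hypothetical base-rate illustration: even a classifier with 99.9% specificity
# floods investigators with false alarms at messaging scale.
daily_messages = 10_000_000_000   # assumed number of scanned items per day
prevalence = 1e-6                 # assumed share of genuinely illegal content
sensitivity = 0.90                # assumed true-positive rate
false_positive_rate = 0.001      # assumed false-positive rate (0.1%)

true_hits = daily_messages * prevalence * sensitivity
false_hits = daily_messages * (1 - prevalence) * false_positive_rate

print(f"true detections per day: {true_hits:,.0f}")   # ~9,000
print(f"false alarms per day:    {false_hits:,.0f}")  # ~10,000,000
print(f"share of flags that are wrong: {false_hits / (true_hits + false_hits):.1%}")  # ~99.9%
```

Under these assumptions, more than 99.9% of flagged items would be innocent, which is precisely the “swamping” effect critics describe.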

2. The Council compromise in November 2025

The agreement reached by Member States in late November 2025 was the product of a true “marathon” across successive presidencies: from those pushing for mandatory scanning (including via client-side scanning) to the more recent push for a voluntary model less directly at odds with encryption.

The Danish presidency, after attempting a more aggressive text in the summer, pivoted in October toward a solution that abandons the idea of mandatory detection orders “for everyone” and instead codifies the continuation of voluntary scanning within the regulation. In late November, this approach finally broke the deadlock in COREPER and secured the adoption of a Council negotiating mandate.

The core of the Council’s text rests on three pillars: risk assessment, mitigation measures, and voluntary scanning as a “legitimized” option integrated into the framework.

Service providers—hosting services, sharing platforms, and communication services alike—would thus be required to periodically conduct risk assessments evaluating whether their service could be used to spread CSAM or for grooming, and to implement “targeted, proportionate, and effective” mitigation measures. Authorities could request additional measures for services classified as high-risk.

Within this framework, content and communication scanning remains formally voluntary but is recognized as a legitimate measure and is linked to the new category of “high-risk” services, which may be encouraged to “contribute to the development of technologies” to mitigate such risks.

In effect, this creates a continuum between the chat control 1.0 transitional regime and the new regulation: what was an emergency voluntary derogation is transformed into a structural component of the mitigation strategy within the permanent framework, with room for regulatory and reputational pressure on providers to adopt scanning as a standardized practice.

3. Encryption, client-side scanning, and the shadow of surveillance

One of the most sensitive issues is how end-to-end encrypted (E2EE) messaging services are handled, where content is unreadable by the provider and, in theory, even by authorities without “breaking” the encryption.
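
For readers less familiar with the mechanics, here is a minimal sketch of the E2EE property using the PyNaCl library; this is an illustrative simplification (real messengers such as Signal use more elaborate protocols, e.g., the double ratchet), but it captures the key point that private keys never leave the two devices:

```python
# Minimal sketch of end-to-end encryption with PyNaCl (pip install pynacl).
# Illustrative only: real messengers use more elaborate protocols.
from nacl.public import PrivateKey, Box

# Each party generates a keypair on their own device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"private message")

# The provider relays `ciphertext` but holds neither private key,
# so it cannot read the content; only Bob's device can decrypt it.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"private message"
```

This is why any detection obligation aimed at such services must either operate on the endpoints (client-side scanning) or weaken the encryption itself.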

The Council compromise avoids explicitly writing a client-side scanning mandate but leaves open the possibility that detection technologies could be implemented “on the device,” before encryption, especially in relation to services classified as high-risk and in combination with age verification requirements.

This is precisely the point that draws the harshest criticism from cryptographers, academics, and regulators, who speak of “more surveillance, but no more protection.” According to them, the idea of normalizing voluntary scanning, especially client-side, creates a technical infrastructure for mass surveillance that can be repurposed for other ends and structurally weakens communications security.

Patrick Breyer, a former MEP, and other proponents of the more privacy-protective line in Parliament argue that any form of blanket analysis of private messages, whether mandatory or “voluntary,” is incompatible with the Charter and with the Court of Justice’s jurisprudence on indiscriminate surveillance. They contend that the Council’s position risks bringing a new “test case” before Luxembourg on the boundary between crime-fighting and fundamental rights.

4. Criticism, reactions, and next steps

The Council agreement was welcomed by parts of the industry and some child-protection NGOs[5] as a step forward because it provides “legal certainty” and introduces general risk assessment obligations. However, digital rights organizations and segments of the scientific community describe it as a “political letdown” that fails to address the fundamental problems.

Critical assessments highlight at least three issues: lack of robust evidence that mass scanning effectively reduces abuse; high error rates in detection technologies, with severe consequences for innocent users; and the risk of making permanent what was designed as a temporary exception, without a rigorous proportionality assessment.

On the procedural front, the Council text now serves as the mandate for trilogue negotiations with Parliament, which holds a far more protective position on encryption and privacy, oriented toward targeted tools based on individual suspicion rather than blanket scanning.

Indeed, in November 2023 the European Parliament adopted a position[6] that rejects indiscriminate chat surveillance, opposing mass scanning while allowing targeted investigations of specific suspected individuals or groups, subject to judicial authorization. Parliament also rejected mandatory client-side scanning (i.e., scanning on phones or other personal devices).

The real battleground in the coming months, within the trilogue, will be whether negotiations can close on a model centered on targeted investigations and security-by-design measures, as Parliament seems to propose, or whether the logic now prevailing in the Council and Commission will win out—a permanent infrastructure where “scanning” becomes, if not a legal obligation, at least a favored technical practice with institutional legitimacy.

[1]: Council Press Release, November 26, 2025: “Child sexual abuse – Council reaches position on law protecting children from online abuse,” https://www.consilium.europa.eu/en/press/press-releases/2025/11/26/child-sexual-abuse-council-reaches-position-on-law-protecting-children-from-online-abuse/

[2]: Proposal for a Regulation of the European Parliament and of the Council laying down rules to prevent and combat child sexual abuse, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52022PC0209

[3]: See, among others, Patrick Breyer: “Chat control evaluation report: EU Commission again fails to demonstrate effectiveness of mass surveillance of intimate personal photos and videos,” https://www.patrick-breyer.de/en/chat-control-evaluation-report-eu-commission-again-fails-to-demonstrate-effectiveness-of-mass-surveillance-of-intimate-personal-photos-and-videos/

[4]: “End-to-end” encryption is encryption in which the keys are held only on the users’ devices, not by the provider, meaning the provider cannot read the messages it transmits.

[5]: Eurochild: “Eurochild reacts to the Council’s position on the Regulation to Prevent and Combat Child Sexual Abuse,” https://eurochild.org/news/eurochild-reacts-to-the-councils-position-on-the-regulation-to-prevent-and-combat-child-sexual-abuse/

[6]: European Parliament Press Release, November 22, 2023: “Child sexual abuse online: MEPs ready to start negotiations,” https://www.europarl.europa.eu/news/en/press-room/20231117IPR12219/child-sexual-abuse-online-meps-ready-to-start-negotiations

— By Innocenzo Genna, Legal specialist in EU digital policy, competition and liberalization regulations
