The number of commercial child sexual abuse websites more than doubled in 2025, with the Internet Watch Foundation (IWF) identifying 15,031 such sites compared with 7,028 the previous year — a 114% increase — as experts warn that organised criminal gangs are generating significant profits from the online sexual exploitation of children.
The IWF, a UK-based organisation that works to identify and remove child sexual abuse material (CSAM) from the internet, has recorded a dramatic rise in commercially operated websites hosting illegal content targeting children.
Analysts who worked on the IWF's annual report described the situation as deeply alarming, noting that such content is now present 'across all social media platforms' and 'very easy' to find. The findings point to an increasingly industrialised model of exploitation, in which criminal gangs treat child sexual abuse as a revenue-generating enterprise.
Scale of the Problem
The IWF's figures represent only material that has been reported or discovered through the organisation's active searching — meaning the true scale of commercially hosted abuse material online is likely considerably larger. The foundation works with internet companies, governments and law enforcement agencies to take down illegal content and has long been considered a key source of data on CSAM trends.
The more than doubling of commercial sites in a single year marks a sharp acceleration of a trend that advocates and child safety organisations have been warning about for years. Unlike material shared for free among offenders, commercial sites imply the existence of a supply chain: producers, distributors and paying customers, all of whom must be identified and prosecuted if the problem is to be meaningfully addressed.
Criminal Networks Behind the Sites
Experts cited in the IWF report attribute much of the growth to organised criminal networks that have identified child sexual exploitation as a high-profit, relatively low-risk enterprise. The commercial model — in which users pay for access to abuse content — provides a financial incentive that encourages further production of material and, by extension, further abuse of victims.
Child safety advocates have long argued that the profit motive fundamentally changes the nature of the threat, transforming what might otherwise be isolated offending into a scaled, business-like operation capable of producing large volumes of material.
Platform Responsibility Under Scrutiny
The finding that illegal content is present across all major social media platforms raises serious questions about the effectiveness of existing detection and moderation measures. Technology companies have faced increasing regulatory pressure in recent years — particularly in the United Kingdom and European Union — to do more to prevent their platforms from being used to host or distribute child sexual abuse material.
The UK's Online Safety Act, passed in 2023 and brought into force in stages, places legal obligations on platforms to detect and remove CSAM; the regulator, Ofcom, can fine non-compliant companies up to £18m or 10% of qualifying worldwide revenue, whichever is greater. However, critics argue that enforcement has been slow and that platforms continue to fall short of their obligations.
Child protection organisations are calling for stronger international cooperation between law enforcement agencies, greater investment in detection technology, and more robust action from social media companies to prevent their services from being exploited.
Analysis
Why This Matters
- The doubling of commercial child sexual abuse websites signals a shift toward organised, profit-driven exploitation — meaning more children are being abused to produce content for paying customers, not simply to share among offenders.
- The presence of this material across mainstream social media platforms raises urgent questions about platform accountability, the effectiveness of existing laws, and whether current moderation technologies are adequate.
- Without decisive action from governments, tech companies and law enforcement, the financial incentives driving this growth will continue to attract further criminal investment in this form of exploitation.
Background
The Internet Watch Foundation was established in 1996 in the United Kingdom as a response to growing concerns about the distribution of child sexual abuse material via early internet networks. It operates a hotline for public reporting, conducts active searches for illegal content, and works with industry and law enforcement to remove material from the internet.
Over the past decade, the proliferation of social media platforms, encrypted messaging services and dark web infrastructure has significantly complicated efforts to detect and remove CSAM. While major platforms have invested in tools such as hash-matching technology — which identifies known illegal images by their digital fingerprint — critics argue these tools are insufficient against new or previously unknown material, particularly as criminal networks adapt their methods.
The commercial dimension of child sexual abuse online is not new, but the scale now being recorded by the IWF represents a qualitative shift. Regulatory frameworks such as the UK's Online Safety Act and the EU's Digital Services Act have sought to increase platform obligations, but both are still in relatively early stages of enforcement.
Key Perspectives
Internet Watch Foundation and Child Safety Advocates: The IWF and allied organisations argue that the surge in commercial sites reflects a failure of both platforms and governments to act with sufficient urgency. They are calling for mandatory, proactive detection measures from technology companies and faster international law enforcement cooperation.
Technology Platforms: Social media companies and internet service providers generally maintain that they are investing heavily in detection and removal tools. However, representatives of the tech industry have also pushed back against some regulatory proposals — particularly those requiring scanning of encrypted communications — arguing these would undermine user privacy and create new security risks.
Critics and Law Enforcement: Some law enforcement experts argue that focusing on takedowns, while necessary, is insufficient without equal investment in identifying and prosecuting the criminal networks producing and distributing the material. They point to the commercial nature of these operations as evidence that existing deterrents are not working.
What to Watch
- Whether the IWF's data prompts accelerated enforcement action under the UK Online Safety Act, including formal investigations of platforms found to be hosting such content.
- Upcoming reviews of platform compliance with child safety obligations in both the UK and EU, which could result in significant fines or operational restrictions for non-compliant companies.
- The response of social media platforms to the specific finding that illegal content is present across all major services — and whether voluntary commitments to improve detection follow the publication of this report.