Updated On: 23 June, 2025 08:13 AM IST | Mumbai | Diwakar Sharma
Cybersecurity experts warn that such AI-generated child sexual abuse material (CSAM) is now circulating on social media and the dark web, despite repeated attempts by governments to curb access

Experts warn that synthetic child abuse content, though digitally created, causes real-world psychological harm. Representation pic/istock
The rapid rise of artificial intelligence (AI) has dramatically lowered the bar for creating pornographic content, especially child sexual abuse material (CSAM). What once required technical skill and effort can now be done with minimal expertise, allowing predators to easily generate synthetic explicit content involving minors.
Cybersecurity experts warn that such AI-generated CSAM is now circulating on social media and the dark web, despite repeated government attempts to curb access. In India, websites hosting CSAM are routinely blocked on the basis of blacklists provided by INTERPOL through the Central Bureau of Investigation (CBI), the national nodal agency for INTERPOL coordination.
Recently, the Telangana police arrested 15 individuals, including an engineering graduate, for allegedly viewing, storing, and distributing CSAM. Maharashtra police, too, have registered multiple CSAM-related cases over the past decade. However, conviction rates remain alarmingly low. During the peak of the COVID-19 pandemic in 2020, the state recorded 102 FIRs related to child pornography, the highest in a decade. Yet not a single conviction was reported that year, according to data from the Maharashtra Cyber Cell.