Every new technology brings with it a sense of helplessness. Be it the internet or the mobile phone, each has faced its own share of brickbats. But everything eventually has a solution, and so do deepfakes.
Data from the National Cybercrime Reporting Portal shows that online crimes against women rose 118 per cent over 2020-2024, driven partly by the rise of deepfakes. This is alarming for two reasons. First, the software has become so good that people without advanced technical skills can create convincing fakes. Second, and more importantly, local police often don’t know how to deal with such complaints.
Clearing the ground
This is where the IT ministry’s new standard operating procedure (SOP) for non-consensual intimate imagery (NCII) comes in. The basic idea is to give back power and a degree of control to women victimised by deepfakes. It starts by cutting off distribution at the source. Social media companies like Facebook, YouTube or X have often been reluctant partners for law enforcement in the past. They typically take weeks to respond to requests and are goaded into action only by court orders or by proceedings under the bilateral mutual legal assistance treaty between the US and India, which governs how information is shared in criminal investigations. The new SOP sidesteps this by giving all big platforms a strict 24-hour deadline to remove NCII content after receiving a complaint. If they fail, their local grievance officers in India face penalties ranging from fines to jail time. That handles the first level of relief for victims.
The next piece of the puzzle is crucial. The IT ministry says all tech giants must use ‘hash-matching’ and crawler technologies to prevent redistribution of the same or similar content. A hash works like a digital fingerprint of an image: once an offending file is flagged, any re-upload that matches its fingerprint can be blocked automatically, so the victim doesn’t need to file a hundred complaints to remove every copy of the manipulated content.
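To make this concrete, here is a minimal sketch of how perceptual hash-matching can catch re-uploads, including lightly edited copies. It uses the open-source imagehash library; the reported_hashes store, the choice of phash and the distance threshold are illustrative assumptions, not anything the SOP itself prescribes.

```python
# Illustrative sketch only: the SOP mandates hash-matching but does not
# prescribe a library or threshold. Assumes imagehash and Pillow are installed.
import imagehash
from PIL import Image

# Hypothetical store of perceptual hashes of content already reported as NCII.
reported_hashes: set[imagehash.ImageHash] = set()

def register_reported_image(path: str) -> None:
    """Record the perceptual hash of an image confirmed as NCII."""
    reported_hashes.add(imagehash.phash(Image.open(path)))

def matches_reported(path: str, max_distance: int = 8) -> bool:
    """Check an upload against known NCII hashes.

    Unlike a cryptographic hash, a perceptual hash changes only slightly
    when an image is resized, recompressed or lightly edited, so a small
    Hamming distance still catches 'similar' copies, not just exact ones.
    """
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= max_distance for known in reported_hashes)
```

The design choice matters: an exact cryptographic hash would let an offender evade the filter by changing a single pixel, which is why perceptual hashing is the usual tool for blocking "same or similar" content.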
Impact in conservative tiers
The SOP, too, has a few shortcomings. An obvious pitfall is how the government defines NCII. The procedure says the rules apply only to content that shows a private area, nudity, or a sexual act or conduct. While such content is clearly harmful, far more innocent scenarios are enough to spark controversy in India. Consider a morphed image that shows a boy merely hugging a girl. Nobody would call it lewd, and it wouldn’t fall under the definition of NCII. But in many parts of India, especially the more conservative and rural regions, even that image would be enough to damage a girl’s reputation. Clearly, non-consensual imagery can be harmful even when it isn’t explicit. Over time, the scope of NCII should be expanded.
The rules currently apply only to what the government calls significant social media intermediaries, or big platforms. This mirrors the broader IT rules, which place heavier compliance burdens on larger players because they wield greater influence, while smaller start-ups would find the burden too onerous. Yet when such content spreads on smaller networks, the harm is no less severe. Still, these gaps are minor in the larger picture of regulation. Both the new SOP and India’s freshly implemented data privacy law show a growing recognition that our digital spaces can be shaped for the public good, and that deepfake abuse can indeed be controlled.
