
Vasudev Devadasan writes: The conflict between free speech and consent

There are ways to take down non-consensual intimate images. But these must not be used to curb free speech

Publishing NCII is a criminal offence under the Information Technology Act 2000, with platforms doing their best to filter out such content. (Image: Pixabay)

The Delhi High Court in Mrs. X v Union of India is confronted with a familiar problem. A woman whose nude photos were shared online without her consent approached the Court to block this content. While the Court has impleaded the Delhi Police’s cyber cell and various online platforms to restrict the content, the case highlights the need for courts, law enforcement, and technology platforms to have a coordinated response to the sharing of non-consensual intimate images (NCII) online.

Publishing NCII is a criminal offence under the Information Technology Act 2000, with platforms doing their best to filter out such content. While a criminal conviction is desirable, the more urgent need for victims is to stop the spread of this illegal content. The Intermediary Guidelines 2021 provide a partial solution. They empower victims to complain directly to any website that has allowed the uploading of non-consensual images or videos of a person in a state of nudity or engaging in a sexual act. This includes content that has been digitally altered to depict the person as such. The website must remove the content within 24 hours of receiving a complaint, or risk facing criminal charges.
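As a minimal sketch of this complaint-driven flow in Python: the 24-hour window comes from the Intermediary Guidelines, but the class, field, and function names below are hypothetical, chosen only for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# The removal window prescribed by the Intermediary Guidelines 2021.
TAKEDOWN_WINDOW = timedelta(hours=24)

@dataclass
class NCIIComplaint:
    """Hypothetical model of a single takedown complaint."""
    url: str  # location of the offending content
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    @property
    def deadline(self) -> datetime:
        """Latest time by which the website must remove the content."""
        return self.received_at + TAKEDOWN_WINDOW

    def is_overdue(self) -> bool:
        """True once the website risks criminal liability for inaction."""
        return datetime.now(timezone.utc) > self.deadline
```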

However, this approach relies on victims identifying and sharing every URL hosting their intimate images. Further, the same images may be re-uploaded at different locations or by different user accounts in the future. While the Intermediary Guidelines do encourage large social media platforms to proactively remove certain types of content, the focus is on child pornography and rape videos. Victims of NCII abuse have few options other than lodging complaints every time their content surfaces, forcing them to approach courts.


The fact that both technology companies and governments want to restrict NCII represents an opportunity for cooperation. For example, Meta recently built a tool to curtail the spread of NCII (www.stopncii.org). The tool relies on “hashing” technology to match known NCII against future uploads. A victim whose intimate images have been shared without their consent can use the tool to create a “hash” (or unique identifier) of the offending image, which is shared with the platform. The platform then compares this user-generated hash with the hashes of all other images on its site, allowing for the identification and takedown of content identical to that reported by the victim. The victim’s private images stay with them, with only the hash being added to a database to guard against future uploads. Similar technology is already used against child sexual abuse material (CSAM) with promising results.
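A minimal sketch of the idea, assuming exact-match hashing: Python's standard hashlib stands in for the perceptual hashing (such as PhotoDNA or PDQ) that real tools use, and the database is just an in-memory set. All function names here are illustrative, not Meta's actual API.

```python
import hashlib

# Hypothetical in-memory database of hashes reported by victims.
reported_hashes: set[str] = set()

def hash_image(image_bytes: bytes) -> str:
    """Create a "hash" (unique identifier) of an image.

    SHA-256 only matches byte-identical files; real NCII tools use
    perceptual hashes (e.g. PhotoDNA, PDQ) that also match resized or
    re-encoded copies. Only this fingerprint leaves the victim's
    device -- the image itself is never uploaded.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def report_ncii(image_bytes: bytes) -> None:
    """Victim-side: add the image's hash to the shared database."""
    reported_hashes.add(hash_image(image_bytes))

def is_allowed(upload_bytes: bytes) -> bool:
    """Platform-side: reject any upload whose hash matches a report."""
    return hash_image(upload_bytes) not in reported_hashes
```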

Experts note that, if well designed and administered, such an NCII hash database could eventually be used by other websites to identify illegal content they may be unwittingly hosting. Victims could report NCII abuse at a centralised location and have it taken down across a range of websites. The government can also play a role in facilitating a redressal mechanism. For example, Australia has appointed an “e-Safety Commissioner” who receives complaints against NCII and coordinates between complainants, websites, and individuals who posted the content – with the Commissioner empowered to issue “removal notices” against illegal content. Pairing a hash database with an independent body like the Commissioner may significantly reduce the spread of NCII.
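A sketch of how such a centralised database might serve several websites at once; every name here is hypothetical, and a real deployment would add the authentication, auditing, and independent oversight discussed below.

```python
class HashRegistry:
    """Hypothetical centralised store of reported NCII hashes.

    Victims report once, here; participating websites subscribe and
    are notified so they can scan content they already host.
    """

    def __init__(self) -> None:
        self._hashes: set[str] = set()
        self._subscribers: list = []  # participating platforms

    def subscribe(self, platform) -> None:
        self._subscribers.append(platform)

    def report(self, image_hash: str) -> None:
        self._hashes.add(image_hash)
        for platform in self._subscribers:
            # Each platform is assumed to expose a takedown hook.
            platform.remove_matching(image_hash)

    def is_reported(self, image_hash: str) -> bool:
        """Lookup used by platforms when screening new uploads."""
        return image_hash in self._hashes
```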


Independence is essential, as such image-matching technology could be used for surveillance or to simply remove unpopular (but not illegal) content from the internet. The CBI has already reportedly asked Microsoft to deploy its “PhotoDNA” tool (an image-matching software built to identify CSAM) for investigatory purposes. In fact, to counter such risks, the hash database for CSAM is not maintained by either Microsoft or any government but rather by independent organisations. Similarly, Meta has partnered with the Revenge Porn Helpline to administer its NCII tool.

The use of automated tools also raises the free speech concern that lawful content may accidentally be taken down. Automatic filters often ignore context, and content that is illegal in one context may not be illegal in another. This is precisely why free speech advocates are wary of using automated tools to remove harmful content online. While there are tricky cases where courts may be required to intervene (for example, content depicting a public figure committing sexual assault), the vast majority of NCII has no public interest component and can be taken down quickly. Automated tools also have a much better record against images than text, as images are less likely to be misinterpreted. Finally, individuals who believe their content was incorrectly removed as NCII should be able to seek reinstatement.


The government’s reported overhaul of the Information Technology Act is an opportunity to develop a coordinated response to NCII abuse that provides victims meaningful redress without restricting online speech. In the interim, courts should balance the harm caused by NCII against the need to protect online speech. In the past, courts have required victims to continually supply URLs or directed intermediaries to remove all content remotely related to NCII. Going forward, courts may consider tasking a state functionary or independent body with verifying the URLs and coordinating with online platforms and internet service providers. If courts direct platforms to take down NCII, they should do so only where the content would be illegal in every foreseeable context. They must also ensure the individual who posted the content can seek reinstatement. And they should not demand absolute outcomes but rather require that platforms take affirmative steps to address the issue.

This column first appeared in the print edition on July 5, 2022, under the title, ‘Degrees of privacy and freedom’. The writer is project officer, Centre for Communication Governance, National Law University Delhi
