
Articulating A Regulatory Approach to Deepfake Pornography in India

-Siddharth Johar*


 

Abstract


This piece explores the regulatory challenges posed by deepfake pornography in India. It critiques the predominant focus on political deepfakes and advocates the adoption of a comprehensive regulatory framework to address the complex socio-technical dimensions of the issue. Lastly, it evaluates two existing legal frameworks, platform accountability and criminal law, highlighting their limitations and possible spaces for revision.


Introduction


The race for Artificial Intelligence (‘AI’), driven by commercial logic, has expanded its application from text and code to image, video and 3D-based content. The generative AI field is expected to be worth AUD 22 trillion by 2030, with deepfake technology occupying a central role in this expansion. India, in particular, ranks among the top 10 nations in terms of investment in this commercial drive.


This race has important implications for the socio-political life of a technology-driven society. Although attention has primarily been directed towards political deepfakes and their ability to influence voters and election narratives, there is a growing recognition that this mischaracterises the actual implications of deepfakes. This criticism instead emphasises deepfake pornographic videos, which have been shown to constitute approximately 96% of recorded deepfakes and are overwhelmingly targeted at women. However, there has been minimal effort towards a targeted approach against ‘deepfake pornography’ across regulatory landscapes. These landscapes have been primarily concerned with political deepfakes, indicating how the organisation of social power manifests in legislative priorities.


In India, which has the largest population in the world, the lack of regulation of such media could magnify the gendered consequences of deepfakes. At the same time, this regulatory architecture should be designed in recognition of the broader context of media regulation and the development of ideas surrounding sexual privacy in India. This piece draws on regulatory theory on the governance of technology and media, and on critical approaches to technology, to highlight the risks and choices involved in resolving the threat of deepfake pornography. It therefore, first, highlights the social life of deepfake pornography and the harms laws need to consider; second, attempts to chart a normatively suitable regulatory framework; and third, advocates reorienting the debate beyond platform accountability towards the individual’s right to action, in particular the criminalisation of deepfake pornography.


The Social Life of Deepfake Pornography


Deepfakes are essentially ‘doctored’ or synthetic images and videos that mimic a person’s facial expressions, voice and mannerisms. They are typically based on deep learning models known as Generative Adversarial Networks (‘GANs’), in which two neural networks, a generator and a discriminator, are trained against each other until the generated media becomes difficult to distinguish from real footage. Image manipulation of this form is not a new phenomenon; it has taken different forms previously. However, what separates deepfakes is the availability of large datasets and highly trained algorithms, the growth of supply-and-demand platforms for deepfakes, and the commodification of deepfake tools and services. This technological apparatus increases the speed, scale and capacity with which doctored images and videos are created and spread across the internet. The pervasiveness of deepfakes within media cultures requires an assessment of the varying harms they cause.
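For readers unfamiliar with the underlying mechanics, the sketch below is a minimal, purely illustrative Python example of the adversarial training loop that GANs rely on. The network sizes, data and hyperparameters are placeholder assumptions; real deepfake pipelines use large convolutional face models trained on extensive datasets, not this toy setup.

```python
# Minimal sketch of GAN adversarial training (illustrative only).
import torch
from torch import nn

latent_dim, image_dim = 64, 784  # placeholder sizes, e.g. a flattened 28x28 image

# Generator: maps random noise to a synthetic "image".
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
# Discriminator: estimates the probability that an input is real.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(32, image_dim)  # stand-in for a batch of real images

for step in range(100):
    # 1. Train the discriminator to separate real images from generated ones.
    noise = torch.randn(32, latent_dim)
    fake_images = generator(noise).detach()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(32, 1))
              + loss_fn(discriminator(fake_images), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to produce images the discriminator labels as real.
    noise = torch.randn(32, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The adversarial dynamic illustrated here, where each improvement in the discriminator pushes the generator towards more convincing output, is also why detection remains a moving target, a point that becomes relevant to the regulatory discussion below.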


At the macro-level, deepfake pornography is targeted primarily at women, contributing to their sexual objectification and degradation in society. A majority of videos on deepfake pornographic websites feature Indian women, especially those from the entertainment industry. Furthermore, politicians and journalists have often been the target of sexual harassment and abuse through such non-consensual deepfake pornography, in an attempt to silence and discredit their voices. At the micro-level, the individuals depicted in such deepfakes suffer emotional, psychological and financial harm. They are not only subject to intimidation and bullying in the form of sextortion and ‘revenge porn’, but also face reputational damage that affects their career prospects. Lastly, this damage to dignity and safety can extend to the victims’ families and professional affiliations. In doing so, deepfakes pose a fundamental obstacle to the achievement of gender equality in a technological society.


These harms, at both individual and societal levels, indicate that no single legal route (whether private action, criminalisation, platform accountability, or labelling requirements alone) can be adopted. Mitigation of risk should necessarily be decentred from state institutions towards non-governmental institutions and platform economies. This not only reduces the risk of political misuse and increases the accessibility and affordability of remedies, but also takes cognizance of the multiple interests involved in the regulation of deepfake pornography. [1]


On Regulatory Options, Principled Regulations And Holistic Strategies


Recently, the Minister for Electronics and Information Technology advocated a crackdown on the deepfake problem. Prominent in this advocacy were three elements: first, the lack of clarity over whether existing law would be used or new law created; second, the possibility of movement towards a general regulation of deepfakes without an emphasis on its use-cases; and third, the continuing aspiration towards principles-based regulation.


Primarily, there is a need for revision of existing law or adoption of a new regulatory architecture for deepfake pornography, rather than reliance on existing categories or technology-agnostic principled regulation. Mandel argues that any regulation of technology needs to recognise that technology defies pre-existing legal categories and raises legal disputes that cannot always be foreseen. Its incorporation into existing law can thus distort the actual legal issue and its determination.


The opposing argument points to the speed and changing dimensions of innovation, which use-case-specific regulation would find difficult to keep track of. Even if this form of regulation might seem rational, Singh argues that the ideal is often elusive and diverts focus away from reducing ambiguity, executive overreach and political misuse. The Digital Personal Data Protection Act, 2023, in particular, confers discretion on the Executive by providing a non-exhaustive power to make rules (per s. 40) on aspects such as the composition of the Data Protection Board. It also allows the state to exempt specific private and public data fiduciaries, which could effectively include state entities, from obligations under the Act (per s. 17). These powers tilt the balance towards the state to decide the terms and conditions of essential aspects of data protection, under the pretence that the Act is not prescriptive legislation.


As Chesney and Citron argue, blanket regulations are not suitable, for they fail to recognise the different harms, stakeholders and benefits of each deepfake use-case, while also limiting, excluding or chilling its beneficial uses. For example, deepfakes for political misinformation implicate the victim, the perpetrator, the technology provider, the intermediary and the audience; deepfake pornography additionally implicates third parties such as the original performers and authors. Beyond normative arguments, such blanket regulations would also not withstand the ‘proportionality’ test under Article 19’s guarantee of freedom of speech and expression.


Deepfake pornography occurs alongside other forms of online gender-based violence (‘OGBV’) more than it does alongside other deepfake use-cases; thus, the framework for redressing sexual abuse should also guide the strategies adopted for victims. The European Union has implicitly supported this form of regulation, emphasising the lifecycle of deepfakes and the need to regulate deepfakes according to their risks. This lifecycle approach entails addressing each dimension of the deepfake: the underlying technology, its creation and usage, its dissemination, and its individual and then societal impact. It does not fall back on a principled approach of ‘detection, prevention, reporting, and awareness’, but recognises and acts on the specific harms of a particular use-case of deepfake through legal, technological and policy means. Brownsword would appropriately categorise this form of regulation as Law 3.0, which focuses not only on revising the law in response to emerging technological problems but also on the institutions that support rules and on technological solutions that can supplement and supplant rules.


Re-Orienting Platform Accountability from its Pitfalls


Per the Ministry’s statements, there has recently been a call for possible policy options and existing practice, predominantly from platforms. These calls came with an acceptance that the existing intermediary liability framework is satisfactory for redressing the harms of deepfake pornography. This is further indicated by its advisory dated 7 November 2023 directing significant social media intermediaries to exercise due diligence, expeditiously identify deepfakes that contravene the Information Technology Act, 2000 (‘IT Act’) and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (‘Intermediary Guidelines’), and disallow hosting of and access to such content. The effective focus on platform accountability as the go-to for resolution legitimises its architecture even as that architecture is simultaneously being challenged across states on constitutional grounds. It therefore becomes imperative to address the pitfalls of platform accountability, and also to advocate an individual right to action through criminal and civil law. This is not to say that an emphasis on platform accountability is misguided, but rather that the state has to accept other regulatory options. As argued above, a suitable regulatory framework entails not only focusing on multiple legal options but also re-orienting existing frameworks according to their role and constitutionally permissible scope.


The present rules do have certain positive applications. They provide a notice-and-takedown obligation for all social media intermediaries, based on a complaint regarding any material that ‘is in the nature of impersonation in an electronic form, including artificially morphed images of such individual’ (per rule 3(2)(b) of the Intermediary Guidelines). This provision can reasonably be interpreted to include pornographic deepfakes where the original author is made to ‘impersonate’ or is ‘impersonating’ the victim. Moreover, the rules recognise the obligations of the intermediary in identifying and mitigating ‘invasions of bodily privacy’ and ‘insults and harassment on the basis of gender’ (per rule 3(1)(b)). This is to be read with rule 3(1)(j), which entails providing information (on the originator of the deepfake) to law enforcement agencies for “verification of identity, or for the prevention, detection, investigation, or prosecution, … for cyber security incidents.”


However, the fundamental pitfall of these rules is, as Raghavan argues, widespread monitoring and criminalisation of individual acts of OGBV instead of a response to the systemic features that enable invasive, demeaning and objectifying actions against women. Under rule 4(2), the intermediary has to provide information on the ‘first originator’ of a message that is ‘sexually explicit’ (which can include deepfake pornography) once an order has been passed by the competent court or authority. This allows surveillance of individual transgressions, even though each point of circulation may have the effect of normalising the objectification of women. By contrast, under rule 4(4), the significant social media intermediary is not required to devise automated tools and measures to proactively identify and regulate ‘sexually explicit’ material.


The law then arms moral panic as a pretext to legitimise excessive censorship, by obligating intermediaries to regulate broad and definitionally ambiguous content under compressed timelines and the threat of penalisation. It also means that, even though the rules cover laws and terms such as ‘pornographic’ and ‘obscene’ that can include remedies for victims, the law in effect remains power-agnostic. The law can therefore favour weaponisation by those with superior access to the law enforcement apparatus, over the most affected and marginalised groups and individuals. Even if rule 4(4) is revised to include detection of ‘sexually explicit’ content, the concern is that there exist technical limitations to deepfake detection and mitigation, such as inadequate datasets, compression and noise-reduction of media, and the lack of a uniform forgery assessment methodology for speech verifiers.


Therefore, there should be an active and larger focus on platform accountability for deepfakes, while reorienting the platforms’ role towards curbing the dissemination and circulation of deepfake pornography. This was also advocated by the European Parliamentary Research Service as the path for European policy: obligating intermediaries to detect deepfakes with highly qualified human content moderators, verify authenticity to reduce artificial amplification and the sources of threat, provide measures for labelling and transparency on the fairness of deepfake detection policy, report on detection systems and results, and enable measures for slowing down the speed of circulation. This should be coupled with ensuring that the scope of their role remains within constitutionally permissible oversight and censorship and addresses systemic aspects of OGBV. Defining and limiting the scope of platform accountability under the Intermediary Guidelines also means that there should be a focus on separate legal remedies that individuals can access to hold the perpetrator accountable where the platform fails to act, as discussed in the next section.


The Target Stage, Criminalisation in India and Comparative Approaches


An individual right to action and remedy becomes important, for any single legal option often fails to remedy all possible harms that a deepfake may cause, or even some particular deepfake threats. Thus, at the ‘target’ stage, the European Parliamentary Research Service advocated the institutionalisation of specific victim support for deepfake pornography and giving victims the choice to pursue civil law strategies such as data protection and personality rights. Apart from the European Union, multiple other countries such as South Korea, the United Kingdom and Australia, and states such as Virginia in the United States of America, have also allowed victims recourse to criminal law. Even though this legal route can unduly burden victims to relive their trauma and actively take measures when they might not be able to do so, the existence of a choice accounts for the victim’s autonomy.


This piece focuses particularly on the criminal law approach to deepfake regulation, which signals society’s moral disapproval of the offence, elevates it to public condemnation and directs public enforcement towards it. This ensures that the victim does not have to bear the costs of litigation, and the sanction attached to the offence raises the cost of offending and thereby its deterrent effect. Furthermore, Toparlak argues that a focus on the criminal law approach allows us to address the systematic priorities of the provisions that can govern deepfake pornography. She argues that there has to be a much higher emphasis on sexual privacy and self-determination than on archaic grounds which can work against the interests of the victim or be used to satisfy society’s and the state’s excessive punitiveness. Multiple countries such as Australia and the United Kingdom have sought to redress the problem of deepfake pornography through existing routes of ‘revenge’ pornography, non-consensual image distribution, sexual harassment and obscenity/indecency provisions. South Korea, on the other hand, has created a separate provision. Toparlak’s argument gains favour in this case, especially since the law governed by the IT Act and the Indian Penal Code, 1860 (‘IPC’) in India presently includes provisions on indecency and obscenity (per s. 292 and s. 509). These provisions have been used to redress online gender-based violence against women, but have notoriously been used to digress into questions of the victim’s shame, honour and morality at the stage of policing and prosecution.


Brownsword would categorise this discussion as the Law 1.0 framework, where the attempt is to calibrate new technological problems to existing legal categories. This framework is justified, for it aims to bring the problem into consistency with other offences that work within similar dynamics and involve the same harms. However, this discussion cannot possibly account for every facet, nor neatly apply to the technology that deepfakes are. The Law 2.0 framework then works from this fallibility of the existing law towards its revision.


In India, image-based sexual abuse (‘IBSA’) and ‘revenge pornography’ are regulated as non-consensual intimate image distribution (‘NCIID’) under ss. 354A, 354C, 354D and 509 of the IPC and ss. 66E, 66C and 67/67A of the IT Act. However, these laws do not apply neatly to deepfake pornography. For example, s. 354C governs images of the ‘private body parts’ of the victim (per Explanation 1) that were captured consensually but disseminated without consent. This provision has a positive application, since the requirement that the image be captured with consent but distributed non-consensually applies directly to instances predominant in deepfake pornography where a publicly available image is morphed and distributed non-consensually, as in the case of celebrities. Even if the image of the victim was taken without consent, the victim can take recourse to s. 66E of the IT Act, which does not contain this prerequisite. However, the problem with these provisions is, first, that they require images which were ‘captured’ rather than ‘artificially morphed’ (in the language of the Intermediary Guidelines); and second, that they require the image to concern the ‘private body parts’ of the victim, which in the context of deepfake pornography would rather be those of the original performer.


Lastly, there exist multiple adjunct provisions which can be used against the perpetrator, such as forgery (s. 463), criminal intimidation (s. 503 of the IPC and s. 3(1)(r) of the Scheduled Castes and Scheduled Tribes (Prevention of Atrocities) Act, 1989) and impersonation (s. 416), when the function of the deepfake is to damage the reputation of, or extort from, the victim. These provisions would require, as a prerequisite, liberal interpretation of their constituent elements and a gender-sensitive, survivor-centric perspective, given their different purpose and scope. They are intrinsically based on recognising the particular intent behind the crime, rather than the concern of sexual privacy. Therefore, by themselves, such intent-based offences fail to adequately respond to all possible motivations for the offence, in particular the motivation of self-gratification, as well as interlocking and overlapping motives.


However, as prevalent judicial attitudes indicate, there exists a trivialisation of OGBV and limited understanding of sexual privacy and the online sphere amongst judges and lawyers. Reliance on these provisions could therefore be detrimental to victims, and would require substantial training of judges and lawyers, and even ecosystem-level changes concerning the production of evidence. Alternatively, a separate law would not only bring judicial clarity but also reduce the information costs for the victim and judges in taking action against the perpetrator. This has been followed both in the United Kingdom’s Online Safety Bill and South Korea’s Act on Special Cases concerning the Punishment of Sexual Crimes, both of which contain provisions specifically addressing deepfake pornography that target the harm rather than malice. Therefore, there is a need in India as well to re-evaluate criminal law provisions and judicial approaches, to allow suitable modifications for the victim and recognise the core harm of the problem of deepfakes.


Conclusion


As Dhonchak argues, the prevailing legal position on moderation and online censorship focuses more on negative freedoms (freedom from) than positive freedoms (freedom for). An approach to platform accountability grounded in positive freedoms becomes especially important, since deepfake pornography reduces democratic spaces for women. The legal framework in such cases should necessarily be sensitive to the burdens that women and minorities face in accessing public spaces, and should aim to expand those spaces.


This piece does three things in pursuit of this goal. First, it highlights the intricate social aspects of deepfake pornography, emphasising the diverse harms that legal frameworks must account for. Second, it attempts to outline an appropriate normative regulatory structure, recognising the necessity of a comprehensive approach to emerging socio-technical problems. Finally, it argues for taking the conversation beyond mere platform accountability, emphasising the need to prioritise an individual’s right to action and remedies, using the framework of criminal law.


Finally, this normative response does not entail losing sight of other use-cases of deepfakes, given that deepfake pornography has often been used in Indian politics against women candidates. There is therefore also a need to work on legislative frameworks that address these cross-applications. This entails evaluating once more the ‘technology dimension’ of Artificial Intelligence, which aims to systematically redress gender biases and give affected populations a direct stake in the making of automated tools, as well as the ‘audience dimension’, which aims to create pluralistic media and authentication systems. In highlighting these aspects, this piece aims to contribute to the wider discussion, drawing upon regulatory theories as well as critical viewpoints on technology to underscore the risks and decision-making involved in addressing the imminent threat of deepfakes.


References

[1] Roger Brownsword, Eloise Scotford and Karen Yeung, ‘Law, Regulation, and Technology: The Field, Frame, and Focal Questions’, in Roger Brownsword, Eloise Scotford and Karen Yeung (eds), The Oxford Handbook of Law, Regulation and Technology (Oxford University Press, 2017), pp. 3-38.

 

*Siddharth Johar is a third-year student enrolled in the B.A.LL.B. (Hons.) program at the National Law School of India University, Bangalore. He may be contacted at siddharth.johar@nls.ac.in.

