
The “Deepfake” Conundrum - Can the Digital Personal Data Protection Act, 2023 Deal with Misuse of Generative AI?

- Sarvagya Chitranshi*

 

Abstract


The present article analyses how the contemporary problem of AI-generated deepfakes can be addressed through the application of the Digital Personal Data Protection Act, 2023. It identifies the responsibility of Data Fiduciaries under the Act to protect personal data from being misused for such purposes. The article also identifies certain gaps in the provisions of the Act and suggests a manner of interpreting them that can aid in developing a holistic framework to counter deepfakes.

 

Introduction


The threat posed by increasingly sophisticated Artificial Intelligence (“AI”) tools lies less in replacing humanity and more in unsettling it. A recent example of this problem, which sparked an outcry among celebrities and the general public alike, was the widely circulated video of actress Rashmika Mandanna entering an elevator. The issue with this viral video was that it had been morphed: Rashmika’s face had been edited onto the original footage, and it would be nearly impossible for the average viewer to question its authenticity. This incident is not only deeply concerning but also raises a larger question about the ability of Indian data protection laws to counter such occurrences.


The present article attempts to, firstly, decipher deepfake technology, which is an application of Generative Artificial Intelligence. Secondly, it examines the provisions of the Digital Personal Data Protection Act, 2023 (“DPDPA”) that can counter the presence of deepfakes on social media platforms. The article elucidates the responsibility of Data Fiduciaries under the DPDPA to prevent breaches of personal data, thereby restricting the supply of ‘raw material’ for the creation of deepfakes. Lastly, this article recognises certain shortcomings of the DPDPA’s provisions and suggests a manner of interpretation through which a holistic framework to counter deepfakes can be created.


Generative AI and Deepfakes


Generative AI is a branch of artificial intelligence capable of producing various kinds of audio-visual and text-based content. Researchers have consistently worked to improve the output of generative AI products, and their latest iterations can produce highly realistic results. While traditional AI has been used to analyse datasets or make logical predictions, generative AI can produce new data based on the dataset it is trained on. This is made possible by underlying architectures such as Generative Adversarial Networks (“GANs”). GANs are deep learning neural networks that automatically identify and learn patterns in a dataset. They can then create a model which produces novel outputs based on the input data.
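
To make the adversarial dynamic concrete, the following is a minimal sketch of a GAN training loop in PyTorch (the framework, layer sizes, and data are illustrative assumptions, not drawn from this article): a generator learns to produce samples that a discriminator, trained in tandem, can no longer distinguish from real data.

```python
# Minimal GAN training loop (illustrative sketch; sizes and data are toy assumptions).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # toy dimensions, not from the article

# Generator: maps random noise vectors to synthetic samples.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim)
)
# Discriminator: outputs the probability that a sample is real.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid()
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_batch = torch.randn(32, data_dim)  # stand-in for a batch of real training data

for step in range(200):
    # 1. Train the discriminator to separate real samples from generated ones.
    fake_batch = generator(torch.randn(32, latent_dim)).detach()
    d_loss = bce(discriminator(real_batch), torch.ones(32, 1)) + \
             bce(discriminator(fake_batch), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Train the generator to produce samples the discriminator labels as real.
    g_loss = bce(discriminator(generator(torch.randn(32, latent_dim))),
                 torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Face-swapping deepfake pipelines build on this same adversarial objective, substituting image encoders and decoders for the toy linear layers above.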


A ‘deepfake’ is one such application of generative AI. Although the term is mostly used for videos, even a picture or an audio clip can be a ‘deepfake.’ It is essentially the digital alteration of media to morph it into something artificial. It is most often used to swap the faces, bodies, or voices of individuals in multimedia content with those of other individuals. The technology can also be used to create seemingly real photographs and videos of people who do not exist. Various deepfakes have surfaced online in recent times in which celebrities are shown doing or saying something they never actually did. The most striking feature of a deepfake is that, ironically, it appears highly authentic. This can have severe consequences, ranging from tarnishing an individual’s image and breaching their privacy to political incitement and the peddling of fake news.


Protections given by the Digital Personal Data Protection Act


1. The wide-ranging definition of ‘personal data breach’


The DPDPA is concerned with protecting the personal data of individuals from misuse by ‘Data Fiduciaries.’ A Data Fiduciary is defined under the Act as any person who determines the purpose and means of processing an individual’s personal data. An individual whose data is processed by such a Data Fiduciary is defined as a ‘Data Principal.’ Two other important definitions under the Act are those of ‘personal data’ and ‘personal data breach.’ Personal data is defined under the Act as any data about a person by which they can be identified. The use of the phrase ‘any data’ widens the ambit of this definition considerably. This interpretation can be borrowed from the reading of the term ‘any information’ in the definition of personal data under EU data protection law, which is strikingly similar to the definition used in the DPDPA. In Peter Nowak v. Data Protection Commissioner, the Court of Justice of the European Union recognised the legislature’s intention to accord a wide scope to the definition of personal data under Directive 95/46: it is not restricted to information that is patently sensitive or private, but also includes subjective information, such as opinions about a person, that could be used to identify them.


A person’s photographs or videos posted online could, therefore, be covered under this definition in the DPDPA. These kinds of personal data are then used to train generative AI models which can create deepfakes. Even a deepfake itself can be covered under the definition of personal data, because it can be used to identify the person featured in it. A personal data breach under the Act is defined as:


“Any unauthorised processing of personal data or accidental disclosure, acquisition, sharing, use, alteration, destruction or loss of access to personal data, that compromises the confidentiality, integrity or availability of personal data.”


The language of this definition, and indeed the entire scheme of the Act, affixes responsibility on the Data Fiduciary. Section 4 states that a Data Fiduciary can only use personal data for the purposes expressly consented to by the Data Principal. Apart from this, personal data can be used by a Data Fiduciary for certain other legitimate uses elucidated under Section 7, which largely comprise requirements of compliance with any judgment or law. This ensures that, unless expressly consented to, a Data Fiduciary cannot legally scrape an individual’s personal data to train generative AI models. Since most social media platforms and search engines have access to extremely large amounts of personal data, this framework could prove effective.


However, the issue is not always limited to the Data Fiduciaries themselves. Individuals enrolled on a Data Fiduciary’s platform can use photos or videos of other individuals to generate AI-based fake media. In this context, Section 8(5) of the Act becomes important, as it obligates the Data Fiduciary to protect all personal data in its possession from any breach. Data Fiduciaries should, therefore, be required to implement additional safeguards to ensure compliance with this obligation. Illustratively, Data Fiduciaries could disallow the downloading of private media or restrict the ways in which content can be shared on their platforms. Data Fiduciaries are also required, under Section 8(6), to inform the Data Principal as soon as a breach is detected. This would allow the Data Principal to take action, by reporting the breach to the appropriate authorities, even before the data is specifically misused.


2. Ensuring Accuracy and Completeness


The DPDPA does not specifically refer to any obligation of the Data Fiduciary to deal with AI-generated media. However, the obligation under Section 8(5) could be expanded to ensure that any illegitimate content created using the data available with a Data Fiduciary is removed as soon as it is detected. Under Section 8(3), a Data Fiduciary is required to ensure the ‘accuracy’ and ‘completeness’ of data if it is ‘likely’ to be used to make a decision that affects the Data Principal. The use of the word ‘likely’ enlarges the obligation of Data Fiduciaries such as social media platforms to include deepfakes. It must be noted that the business model of such Data Fiduciaries is based on using the personal data of their users to deliver targeted advertisements and a tailored feed of content. As previously established, even a deepfake itself can be categorised as personal data that is ‘likely’ to be used to identify, and to make decisions affecting, the Data Principal. This can happen in either of two ways. First, a Data Fiduciary can use a deepfake as one of the pieces of input information for its algorithm to show targeted content to the person featured in it. Second, a deepfake can be recommended to another user who consumes, or is ‘likely’ to consume, content featuring that person.


Data Fiduciaries under this provision should be mandated to ensure that fake AI-generated content does not persist on their platforms. Since a major challenge has been identifying and detecting deepfakes, Data Fiduciaries should be mandated under these provisions to install mechanisms for detecting them. Current best practices rely on signatures unique to every piece of content to identify material that may have been tampered with. However, existing technology is not completely adept at detecting deepfakes. Therefore, a warning must be attached to any piece of content that a Data Fiduciary reasonably believes to have been morphed. Along with this, a Data Principal can seek respite under Section 8(10), which mandates a Data Fiduciary to establish an effective grievance redressal mechanism. If a Data Principal encounters the misuse of their personal data on a Data Fiduciary’s platform, it would be the latter’s obligation to remove it immediately upon the receipt of a complaint.
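
To illustrate what such a signature-based mechanism might look like in practice, below is a minimal sketch in Python using the open-source Pillow and imagehash libraries (the choice of libraries, the threshold, and the file names are assumptions made for illustration; the article does not prescribe any particular tool). A perceptual hash acts as a compact signature for a piece of content: it survives benign re-encoding but shifts markedly when the underlying content is altered, for instance by a face swap.

```python
# Sketch of signature-based tamper screening using perceptual hashes.
# Assumes the third-party `Pillow` and `imagehash` packages
# (pip install Pillow imagehash); all file names are placeholders.
from PIL import Image
import imagehash

def register_signature(path: str) -> imagehash.ImageHash:
    """Compute a perceptual-hash 'signature' for a trusted original upload."""
    return imagehash.phash(Image.open(path))

def may_be_tampered(original_sig: imagehash.ImageHash,
                    candidate_path: str,
                    threshold: int = 10) -> bool:
    """Flag a candidate whose signature drifts far from the stored original.

    Subtracting two ImageHash objects yields their Hamming distance;
    small distances indicate benign re-encoding, large ones suggest
    alteration. The threshold here is an illustrative assumption.
    """
    candidate_sig = imagehash.phash(Image.open(candidate_path))
    return (original_sig - candidate_sig) > threshold

# Hypothetical usage in a platform's upload pipeline:
# sig = register_signature("original_upload.jpg")
# if may_be_tampered(sig, "reuploaded_copy.jpg"):
#     attach_morphing_warning()  # hypothetical platform action
```

Such screening is probabilistic rather than conclusive, which is precisely why attaching a warning, rather than mandating an automatic takedown, fits the current state of the technology.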


3. Right to be Forgotten


The takedown action (outlined in the previous section) by the Data Fiduciary is further supported by the Data Principal’s ‘right to be forgotten,’ as envisaged under Section 12 of the DPDPA. This right can be found under Article 17 of the General Data Protection Regulation (“GDPR”) as well. An important issue concerning this right is balancing it with the right to freedom of speech and expression. The European case of Google Spain SL, Google Inc. v. AEPD, Mario Costeja González addressed this issue by holding that a balance has to be struck between the sensitivity of the personal data and the interest of the public in having access to that information. The extent to which it affects the life of the Data Principal must also be taken into consideration. Since a deepfake essentially peddles fake information about the person featured in it, it has a much graver impact on that person.


The Indian courts have also recognised the right to be forgotten as an important constituent of the right to privacy under Article 21 of the Indian Constitution. In Zulfiqar Ahman Khan v. Quintillion Business Media Pvt. Ltd., the plaintiff, in a defamation suit, had sought the removal of two articles published online which had caused him immense personal and professional harm. Recognising the harm to the plaintiff’s reputation, a single-judge bench of the Delhi High Court observed that “the right to be forgotten and the right to be left alone are inherent aspects” of the right to privacy. The Court, therefore, ordered the removal of these articles for the duration of the suit. It also directed other digital platforms to ensure that even search results regarding the articles were not published in any manner. Furthermore, in Mahendra Kumar Jain v. State of W.B., it was noted that Section 8(1)(j) of the Right to Information Act furthers the protection accorded to an individual’s reputation and dignity under Article 21 by upholding their right to privacy. The Section classifies certain information as personal in nature, the disclosure of which would not serve any public interest and which is, therefore, exempted from disclosure. Mahendra Kumar Jain recognises the private space of an individual and puts a justified fetter on the free flow of information. A Data Fiduciary must, therefore, aid a person in exercising their right to be forgotten with respect to a deepfake.


The DPDPA also affixes responsibilities on the Data Principal under Section 15 of the Act, with clause (b) restricting the widespread problem of impersonation. This is important in the present context, as AI-generated media is frequently used to deceive people by impersonating another person. Data Fiduciaries can track the origin of a deepfake upon the receipt of a complaint and impose liability on the person who uploaded it to their platform. The Section would allow action to be taken against such an individual as well, thereby providing adequate respite to the aggrieved person. However, this provision is helpful only in cases where the person responsible can be traced to a unique identifier or a unique account on the Data Fiduciary’s platform. Such an individual, having used a deepfake to impersonate another person, can be held liable for non-compliance with Section 15. In a scenario where it is impossible to trace the origin of the video or link it to a specific individual, the provision would be of little use.


The Gaps and the Proposed Solution


The DPDPA, however, does not seem to deal completely with the problem of fake generative AI-based media. Section 3(c) of the Act is relevant here, as it states the conditions under which the Act would not apply. The first clause states that the Act does not apply when data is processed by an individual for any personal or domestic purpose. The term ‘personal or domestic purpose’ is not defined under the Act. It also raises the question of whether an individual could be treated as a Data Fiduciary under the Act if they illegitimately acquire the personal data of another person. Even if the answer is yes, AI-generated fake media meant for household circulation could easily spiral out of control. This is because, in the age of social media, the circle within which a deepfake is circulated can keep expanding. It would be nearly impossible for the person who created a deepfake to keep track of where it goes once it is shared with anyone in their household or social circle.


This clause is very similar to the ‘household exemption’ envisaged under the European Union’s GDPR. The František Ryneš v. Úřad pro ochranu osobních údajů judgment was instrumental in contextualising this exemption. In this case, the defendant had acquired images of burglars breaking into his house through CCTV cameras that also surveilled public space. The Court of Justice of the EU held that the household exemption would not apply, because the source of the data is relevant for the exemption. Since the defendant had acquired the images of the burglars from a public source, the way in which they were used was of little significance. The determinative factor for the household exemption is, therefore, the source and not the use itself. Accordingly, a deepfake created for personal use could be covered under this Section if the data used to create it comes from a public source. This will prevent any person from claiming an exemption from the DPDPA solely based on the personal use of AI-generated fake content.


It must also be noted that the DPDPA does not draw a distinction between personal data and sensitive personal data. The lack of such a distinction means that there is no added layer of protection granted to sensitive personal data. While the personal data of a Data Principal can be used to identify them, sensitive personal data comprises information that could be used to discriminate against them, such as their sexual orientation, ethnic origin, medical history, or even political beliefs. Article 9 of the GDPR permits the processing of such sensitive data only in very limited circumstances. A deepfake generated from sensitive personal data could have graver consequences, such as organised discrimination or the steering of political opinion in a specific direction. However, since the DPDPA lacks such a distinction, the present article analyses the protection of both these categories of data, and the deepfakes generated from them, cumulatively.


Another exemption under the Act applies when the Data Principal voluntarily makes their data publicly available. The ‘voluntary’ nature of the act essentially excludes any anonymous data or data relating to fictitious persons; the present article is, therefore, concerned only with restricting deepfakes made using the data of real, living persons. In this context, it is difficult to protect celebrities, whose media-based content is publicly available. Since, under Section 3(c)(ii), the DPDPA would not apply to the protection of such publicly available data, it will be difficult to counter deepfakes made from it. In a world of social media, where various individuals want to make their content available to a wide audience, it is rather difficult to protect their privacy under the DPDPA.


The only protection that can be sought is through a proper characterisation of the term ‘publicly available.’ Content published on a social media platform, even when available ‘publicly,’ must not be subjected to illicit usage. This can be ensured if a deepfake is considered an entirely new piece of data in itself, attributable to the Data Principal portrayed in it. Even if the Data Principal published certain data for the public, the deepfake itself was never published to be publicly available. If the deepfake is now made available to the public, it effectively gives rise to a data breach against the Data Principal whom the deepfake features and to whom it should be attributable. The Data Fiduciary should then be obligated to remove the deepfake from its platform. This also connects to the duty of the Data Fiduciary to ensure the accuracy of data that might be used to take decisions affecting the Data Principal: since the deepfake is not an accurate piece of data, the Data Fiduciary is obligated to take it off its platform. Therefore, even if the use of publicly available data might not itself be considered a data breach, the publication of a deepfake created from such data would be actionable under the DPDPA, and the Data Fiduciary would need to ensure that such deepfakes do not persist on its platform.


Conclusion


The DPDPA has been enacted to ensure that the integrity and privacy of personal data are protected in all respects. This includes countering the misuse, as well as the illicit creation, of personal data. Although there have been discussions on introducing targeted legislation to regulate Artificial Intelligence, respite can be sought under the existing law for the foreseeable future. It is, therefore, important to assess and characterise the DPDPA in a manner that allows it to be effectively used against the illicit use and creation of AI-generated fake media. The suggestions made in this blog post connect the dots in the DPDPA to ensure the requisite protection against deepfakes. These should find stronger support through the jurisprudence developed under this Act by both the Executive and the Judiciary.


 

*Sarvagya Chitranshi is a fourth-year student pursuing a B.Sc. LL.B. (Hons.) at the Gujarat National Law University.

