
Striking the Balance: Navigating Deepfakes and Free Speech

-Sohini Banerjee* and Anmol Bharuka**



Deepfakes are manipulated media content generated through artificial intelligence, ranging from morphed images to hyper-realistic videos divorced from reality. The emergence of deepfakes has ignited a crucial debate around the intersection of regulating online content and preserving a user’s freedom of speech and expression. In this piece, we discuss the significance of navigating the regulation of deepfakes without compromising the constitutional guarantee of free speech. Drawing on a survey of international approaches to regulating deepfakes, we suggest a light-touch regulatory approach.


Deepfakes and Government Action

In early November 2023, a deepfake video involving a film actor thrust the issue into the national spotlight, following which the Prime Minister and President of India spoke about the potential threats posed by such manipulative content in public forums. In response, the Ministry of Electronics and Information Technology (MEITY) issued an advisory to social media intermediaries (SMIs), which, inter alia, compelled the expeditious removal of reported deepfake content within a rigid 36-hour timeframe, as prescribed by the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules, 2021).

Further, an official press release dated 24 November 2023 stated the government’s intention to collaborate with stakeholders in formulating actionable strategies for the ‘detection, prevention, reporting, and awareness’ of deepfakes, signalling potential regulatory developments. In the most recent development, a press release dated 26 December 2023 stated that the government has issued advisories to all SMIs directing them to comply with the ‘agreed on’ procedure. Inter alia, this requires the SMIs to ensure that users are made aware of the prohibited content under Rule 3(1)(b) of the IT Rules, 2021 along with the legal consequences of sharing misinformation or patently false information online.

The recent advisories from MEITY have relied on existing frameworks under the Information Technology Act, 2000. However, the latest advisory states that MEITY may follow up these directions with further amendments to the law ‘if and when required’, based on the compliance undertaken by intermediaries. Nevertheless, before formulating any new regulations for deepfakes, it is worth keeping in mind the historical context of the annulment of Section 66A, the ongoing legal challenges to Part III of the IT Rules, 2021, and the challenge to the establishment of fact-check units under the 2023 amendments, all in relation to the right to free speech.

Global Practices

Governments worldwide are grappling with the challenge of regulating deepfakes. The European Union's groundbreaking Artificial Intelligence Act reportedly seeks to address this by imposing transparency obligations on creators, requiring them to disclose when content has been artificially generated or manipulated. However, concerns linger about the enforceability of these provisions.

In the United States (U.S.), the proposed Deepfake Accountability Act, initially introduced as a bill in 2019 and re-introduced in 2023 with certain changes, mandates that deepfakes must be clearly labelled, with creators facing hefty penalties for non-compliance. However, doubts remain as to whether the proposed legislation will pass into law. Critics have argued that the proposal falls short of its primary goal of preventing the distribution of harmful deepfake videos, as malicious creators could exploit advanced technologies to evade detection, while legitimate creators may bear unnecessary burdens. In addition, the U.S. also has state-specific laws on deepfakes in states such as Virginia, Texas, and California, covering areas such as elections, pornographic material, and content designed to harass or intimidate individuals. However, the effectiveness of these laws varies, with some facing criticism for potentially violating First Amendment rights.

In the UK, the Online Safety Act of 2023 introduces a new dimension by criminalizing the non-consensual distribution of intimate deepfakes. While this protects potential victims of online image abuse, it falls short in assisting those whose likeness is replicated in non-sexual deepfakes.

Amidst the above, Taiwan has adopted an innovative approach to regulating online content, known as the 'nerd immunity' strategy, which leverages a network of professional fact-checkers and actively involves the public. The Taiwanese government has collaborated with companies such as Line, Facebook, and the Internet Fact-Checking Network to add fact-checking bots directly to social media apps and to further investigate fake pages and 'likes’. Further, the government has engaged directly with ‘citizen hackers’, typically understood to be hackers working towards finding open-source solutions to society-centric issues. Over and above this, innovative awareness campaigns such as ‘humour over rumour’ have helped dispel false narratives by relying on ‘memes’ or humour-based images. Such campaigns have ensured that corrective content is disseminated in a creative manner, while adding a layer of resilience against the impact of deepfakes. Recognizing the historically high engagement of civil society with the government, Taiwan aims not only to train citizens to recognize false news but also to empower them to take active measures against its dissemination.

Solutions and the Way Ahead

It is imperative to ensure that the Central Government’s approach to regulating deepfakes remains proportionate, so that it does not encroach upon the right to free speech. Towards this, we suggest a three-pronged approach to regulating deepfakes.

First, the government may engage in targeted research collaborations with the private sector, including tech companies such as SMIs and companies in the generative AI space, to identify and label deepfakes. This will enable a collaborative approach between the Government and the private sector, rather than prescriptive engagements that end up restricting the scope for innovation in the private sector. The data collected, or the conclusions reached, by such entities in their research endeavours may inform policymaking to ensure users’ rights are protected while they engage with such platforms. This will also allow the Government to understand the tech-specific issues faced by service providers in the generative AI and deepfakes space, which may then be accommodated in the regulation.

Given the highly technical nature of deepfake generation, technical experts in the private sector would have valuable insights for regulating the technology effectively. Moreover, a continuous dialogue between the government and the private sector will foster a comprehensive and holistic approach to pre-legislative consultations. This approach will not only expedite the formulation of effective regulations but also promote innovation in the sector.

Second, building citizen awareness to develop a ‘nerd immunity’, as in the case of Taiwan, where verified public and private sector fact-checkers can label online content as false, will also go a long way in neutralizing the negative impact of widely circulated polarizing deepfakes. However, given the existing standard of court orders to take down content, coupled with the potential for intermediary platforms to be burdened with such reporting, a measure suited to the Indian context must be explored along similar lines.

In addition to this, popularising measures that help users identify deepfakes will also be helpful. These may include identifiers such as unusual movements or inconsistent facial features. Beyond fact-checking, citizens can also be made aware of existing laws that help them deal with deepfakes which have a harmful effect on their life or property. This would also include vigilance about the enforcement of existing laws, including the IT Rules, 2021 and criminal law provisions, in a manner which preserves the balance between user protection and the neutral role played by intermediaries. While this has already been set in motion by virtue of an advisory issued to intermediaries to ensure, inter alia, that users are made aware of the consequences of sharing false content online, measures which ensure that the role of an intermediary remains neutral must be explored.

Third, fostering international cooperation on the identification and regulation of deepfakes, and the prosecution of wrongdoers, would prevent bad actors from operating beyond borders. Further, globally recognized principles for the development, deployment, and use of software capable of creating deepfakes in the first place would offer safe harbour to platforms and immunity to users.


In conclusion, the government’s endeavour to regulate deepfakes must not come at the cost of compromising freedom of speech and expression through disproportionate legislation. Striking this balance is undoubtedly challenging, as evidenced by global struggles and discourse. A three-pronged approach of holistic, light-touch regulation, involving research collaborations, citizen education, and global cooperation, would ensure that regulation and fundamental rights coexist harmoniously. It is imperative that the Government considers a nuanced approach to regulating deepfakes that addresses the intricacies of the technology, while upholding the pillars of a democratic and free society.


*Ms. Sohini Banerjee is a Research Fellow with the Policy Research Group at Shardul Amarchand Mangaldas & Co. focusing on the intersection between law and technology. She specialises in policy research, and leads policy research projects at the intersection of data protection and emerging technologies. She has extensive experience in government engagement and legislative drafting, having advised government ministries, regulators, expert committees, and industry bodies on significant reforms. She writes regularly on current public policy issues in various fora.

**Ms. Anmol Bharuka is a Research Fellow with the Policy Research Group at Shardul Amarchand Mangaldas & Co. Her research is focused on law and technology, on themes including privacy, digital markets and artificial intelligence.


