Index of Headings
1. Introduction
2. Sadhguru vs. Rogue Websites and AI Misuse – Background and Legal Action
- 2.1 Claims of a Fabricated Interview, NFTs, and Unauthorized Literature
- 2.2 YouTube Accounts Are Also in the Fray
3. What Did the Court Hold?
4. The HC Embraces the Concept of “Dynamic+ Injunction” to Combat Digital Impersonation
5. A Dynamic Precedent for Digital Integrity
6. Understanding Personality Rights in India
7. The Legal Provisions We Have in Tow
8. Risks Posed by AI-Generated Content
9. Why Are Personality Rights at Risk?
10. Judicial Precedents
- 10.1 Amitabh Bachchan v. Rajat Nagi
- 10.2 Titan Industries Case
11. Are Deepfakes Always Harmful?
12. Legal and Regulatory Responses: What’s Happening Globally?
13. Ethical Use of AI
14. Conclusion
Introduction
The digital era has opened up revolutionary technologies that have transformed the manner in which information is developed, disseminated, and consumed. With these, though, come novel challenges, most notably the protection of personal identity and reputation. The recent move by Isha Foundation founder Sadhguru to approach the Delhi High Court against the misuse of his personality rights by rogue websites and artificial intelligence (AI) tools is one case that captures this technological shift in India and, in parallel, across the world.
Sadhguru vs. Rogue Websites and AI Misuse - Background and Legal Action
1. Claims of a Fabricated Interview, NFTs, and Unauthorized Literature
The plaintiffs’ cause of action arose from the mala fide creation of a fabricated interview titled “The SKAVLAN SHOW” by Defendant No. 1. In this widely publicised broadcast, Plaintiff No. 1 (Sadhguru) was falsely shown claiming to earn profits from an investment platform supposedly named the “Trendtastic Prism.”
Both the fabricated show and the fraudulent websites linked to it functioned as a veil: users were redirected to various subdomains which were, in fact, run by third-party operators unknown to the public.
Similar was the conduct of Defendant No. 2, identified as an X account using an AI-generated or animated avatar resembling Plaintiff No. 1 (Sadhguru). This avatar employed his signature voice, style, mannerisms, and manner of dress to promote an NFT collection dubbed “3rdEyeNFT”, all without any consent from the aggrieved party.
Further, Defendant No. 4 was found promoting and selling literature on pregnancy under the guise of Plaintiff No. 1’s endorsement, once again misusing his image and video likeness to manufacture legitimacy and draw public attention.
2. YouTube Accounts Are Also in the Fray
Perhaps the most widespread accusation of infringement, however, came from Defendant Nos. 5 to 41, which were recognised as various YouTube accounts accused of uploading spiritual lectures and motivational content while falsely attributing them to Plaintiff No. 1. These accounts and videos were largely built using AI-generated replications of his image and voice, presented in a way that convinced viewers they were hearing from the spiritual leader himself.
In light of these violations, it was argued that the infringing actions were being carried out across digital platforms owned or hosted by Defendant Nos. 42 to 45, which included major social media and web hosting entities. Defendant Nos. 46 and 47, that is, the Department of Telecommunications (DoT) and the Ministry of Electronics & Information Technology (MeitY), were also impleaded to ensure the enforceability of any effective court order. Defendant No. 48, comprising unknown infringers (Ashok Kumars/John Does), represented the unidentified digital entities behind the impugned content.
What Did the Court Hold?
The Delhi High Court, while delivering its judgment, acknowledged the aggrieved party as a widely recognised spiritual leader with a global following, a prolific author, and a recipient of prestigious awards for humanitarian contributions. This unique stature, the Court noted, made him particularly vulnerable both to the misuse of the clips and images of him available online and to virtual impersonation.
What makes this case unique is not merely the use of his voice or image, but the deliberate and organised fashion in which the defendants used emerging technologies to emulate his entire persona for profit. These were not sparse or easily ignorable incidents of copyright violation; rather, they represented a long-running, systemic pattern of conduct designed to mislead the public.
The HC Embraces the Concept of “Dynamic+ Injunction” to Combat Digital Impersonation
Accordingly, the Court stressed the need for what it termed a “dynamic+ injunction” - a robust and adaptable legal remedy previously recognized in landmark cases such as Applause Entertainment Pvt. Ltd. v. Meta Platforms Inc. and Universal City Studios LLC v. DotMovies.baby. Such injunctions empower plaintiffs to seek real-time relief and ensure that courts can respond dynamically to evolving forms of digital infringement, without the plaintiffs having to return to court for every new domain or alias.
Furthermore, the Court acknowledged that the misuse of Plaintiff No. 1’s identity threatens not only his reputation and interests but also public welfare, consumer protection, and the integrity of online discourse in Indian media as a whole.
The grant of a “dynamic+ injunction” signals judicial recognition of this new digital frontier, one where the protection of human identity must be as fluid and as intelligent as the ever-evolving technologies that threaten it.
A Dynamic Precedent for Digital Integrity
In its comprehensive interim relief, the Delhi High Court extended not only declaratory protection but also procedural teeth to ensure enforceability in the digital realm. The Court directed Defendant No. 44, a major platform provider, to disclose all available basic subscriber information and related metadata for users operating the infringing accounts, both currently identified and those that may be subsequently notified on affidavit by the plaintiffs. This forward-looking mechanism empowers the plaintiffs to dynamically expand the enforcement scope as more infringing entities are uncovered during litigation.
Similarly, Defendant No. 45, identified as YouTube, has been directed to suspend, disable, or take down the identified infringing channels run by Defendant Nos. 5 to 41. Crucially, the Court’s order anticipates the persistence and adaptability of such violators by explicitly allowing plaintiffs to notify new infringing channels on affidavit, obligating Defendant No. 45 to take action on any newly discovered violations.
Defendant Nos. 46 and 47, namely the Department of Telecommunications (DoT) and the Ministry of Electronics and Information Technology (MeitY), have also been tasked with issuing appropriate directives to all relevant service providers and social media platforms, ensuring a systemic blockade of infringing content and accounts as flagged by the plaintiffs over time.
Recognizing the rapidly evolving nature of digital infringement, particularly through deepfakes and synthetic media, the Court has also granted a special liberty to the plaintiffs: should any new false or fabricated content emerge during the pendency of the case, they may notify Defendant No. 45 (YouTube), which is to act on such takedown requests within 36 hours of notification. If any ambiguity arises, the plaintiffs retain the right to approach the Court for immediate relief.
The Court has also set a timeline for further proceedings: notice is to be issued to all defendants, with replies expected within four weeks and rejoinders within two weeks thereafter. The matter is slated for further hearing on October 14, 2025.
Understanding Personality Rights in India
Personality rights, often referred to as the right to publicity, protect an individual's identity, reputation, and personal attributes from unauthorized commercial exploitation. In India, these rights are not codified in a single statute but have evolved through judicial precedents and interpretations of various laws.
The Legal Provisions We Have in Tow
1. Right to Privacy under Article 21 of the Constitution
i. The Supreme Court, in the landmark Justice K.S. Puttaswamy (Retd.) vs Union of India (2017) judgment, recognized the right to privacy as a fundamental right under Article 21, declaring it “intrinsic to life and personal liberty”.
ii. This article safeguards personal identity, autonomy, and dignity, and is relevant to cases of unauthorized use of personal data, such as those arising from AI-generated deepfakes.
iii. The right to privacy also underpins related rights such as data protection and the freedom to express or withhold personal information in the digital age.
2. Copyright Act, 1957: Moral and Economic Rights
i. Section 57 grants authors the right to claim authorship and to restrain or claim damages in case of any distortion, mutilation, or other modification of their work that would be harmful to their honour or reputation.
ii. Sections 38A and 38B grant performers the right to attribution and protection against unauthorized or prejudicial use of their performances.
3. Information Technology Act, 2000: Addressing Digital Offenses
i. Section 66D criminalizes cheating by personation using computer resources, directly targeting impersonation offenses, including those enabled by AI.
ii. Section 66E penalizes the violation of privacy by capturing, publishing, or transmitting images of a person’s private area without consent.
4. Indian Penal Code, 1860 and the Bharatiya Nyaya Sanhita, 2023: Traditional Offenses Applied to AI Contexts
i. Section 465 of the IPC / Section 336 of the BNS: Addresses forgery, which can include digital forgeries such as deepfakes.
ii. Section 499 of the IPC / Section 356 of the BNS: Covers criminal defamation, applicable where a deepfake of a person harms their reputation.
5. Trade Marks Act, 1999: Protection of Names and Identity
i. Section 14 restricts the registration of a trademark that falsely suggests a connection with a living individual, or with a person who died within the preceding 20 years, unless consent is obtained from that person or, in the case of the deceased, their legal representatives.
ii. Section 2(m) explicitly includes “names” within the definition of a “mark”, recognizing the commercial and personal significance of names as identifiers.
The recently enacted Digital Personal Data Protection Act, 2023 (DPDPA) also contains several provisions relevant to combating deepfakes.
- Section 4 mandates that personal data, including images, videos, or biometric data used in deepfakes, must be processed lawfully, fairly, and transparently, requiring data fiduciaries to inform individuals about how their data is used.
- Section 8 requires the data fiduciary to obtain explicit, informed, and revocable consent for processing personal data, empowering individuals to prevent or withdraw consent for the use of their likeness in media.
- Section 15 places duties on data principals, including prohibiting impersonation, which is central to many deepfake offenses. The Act obligates data fiduciaries to implement strong security safeguards to prevent data breaches that could enable deepfake creation and to ensure the accuracy of data on their platforms, with the responsibility to remove fake or manipulated content upon complaint.
- However, Section 3(c)(ii) exempts data voluntarily made publicly available from protection, creating a loophole for deepfakes generated from such data. The Act also provides redressal mechanisms for aggrieved individuals and empowers the Data Protection Board to impose penalties for violations, making it a critical legal tool for addressing deepfake-related harms in India.
Risks Posed by AI-Generated Content
- Deepfakes: AI-generated videos and audio that convincingly mimic real people, often used for misinformation, fraud, or unauthorized endorsements
- Voice Cloning: AI tools that replicate a person’s voice, enabling fake interviews or endorsements
- Image Manipulation: Altering or creating images to show individuals in situations or with products they have no association with (an illustrative detection sketch follows this list).
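To make the image-manipulation risk concrete, here is a minimal, hedged sketch of how a platform's moderation pipeline might flag an upload that appears to be a cropped or lightly edited copy of a known, authorised photograph. It is not drawn from the judgment or from any platform's actual system; the file names, the similarity threshold, and the choice of the open-source imagehash library are assumptions made purely for illustration, and perceptual hashing only catches reuse of known images, not freshly generated deepfakes.

```python
# Illustrative only: flag uploads that look like edited copies of a known,
# authorised photograph by comparing perceptual hashes. File names and the
# threshold are hypothetical; requires the open-source "imagehash" and Pillow
# packages (pip install imagehash pillow). This catches near-duplicate or
# lightly manipulated images, not wholly synthetic deepfakes.
from PIL import Image
import imagehash

REFERENCE_PHOTO = "official_portrait.png"   # hypothetical authorised image
SUSPECT_UPLOAD = "uploaded_thumbnail.png"   # hypothetical user upload
MAX_HAMMING_DISTANCE = 10                   # illustrative similarity cut-off


def looks_like_derivative(reference_path: str, suspect_path: str) -> bool:
    """Return True if the upload is perceptually close to the reference image,
    suggesting it may be a cropped, recoloured, or lightly edited copy."""
    ref_hash = imagehash.phash(Image.open(reference_path))
    sus_hash = imagehash.phash(Image.open(suspect_path))
    # Subtracting two ImageHash objects gives the Hamming distance between
    # the 64-bit perceptual hashes (0 = near-identical).
    return (ref_hash - sus_hash) <= MAX_HAMMING_DISTANCE


if __name__ == "__main__":
    if looks_like_derivative(REFERENCE_PHOTO, SUSPECT_UPLOAD):
        print("Flag for human review: possible unauthorised use of likeness")
```

In practice such a check would be only one signal among several, alongside metadata inspection, provenance labels, and human review, because wholly synthetic deepfakes call for different detection techniques.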
Why Are Personality Rights at Risk?
Several factors contribute to the heightened vulnerability of personality rights today:
- Ease of Content Creation: AI tools make it simple for anyone to create convincing fake content
- Viral Nature of Social Media: False or manipulated content can spread rapidly, causing widespread damage before it can be contained
- Anonymity and Identity Masking: Rogue actors often use techniques like URL redirection and identity masking, making enforcement difficult
- Economic Incentives: Unauthorized use of celebrity personas can drive traffic, sales, and ad revenue for unscrupulous actors
The proliferation of such content is particularly concerning for public figures, whose reputations and livelihoods are closely tied to their public persona. Unauthorized use can lead to economic loss, reputational harm, and erosion of public trust.
Judicial Precedents
1.Amitabh Bachchan v. Rajat Nagi (2022 SCC OnLine Del 4110)
Here, the Delhi High Court acknowledged that the plaintiff is a prominent public figure, widely recognized and featured in various advertisements. Taking into account the facts presented, the Court observed that the defendants appeared to be exploiting the plaintiff’s celebrity status to promote their ventures, without obtaining his consent or authorization. (para no. 21)
The court was of the opinion that the plaintiff had made out a good prima facie case for the grant of an ad-interim ex parte injunction. The balance of convenience was also found to be in favour of the plaintiff and against the defendants. The defendants appeared to be using the plaintiff's celebrity status to promote their activities without his authorization or permission. The plaintiff was, therefore, likely to suffer grave and irreparable harm and injury to his reputation, and some of the activities complained of could also bring him disrepute. (para no. 21)
2. Titan Industries Case
Similarly, in the Titan Industries case, the Delhi High Court recognized an infringement of Amitabh and Jaya Bachchan’s publicity rights due to the unauthorized use of their persona. The facts were as follows: Titan’s brand Tanishq had entered into an “Agreement of Services” with the couple, granting the brand exclusive copyright ownership over all materials created under the said agreement. Despite this, third parties, including rivals and unrelated platforms, used the advertisements without explicit permission, prompting Titan to initiate legal proceedings for breach of the couple’s publicity rights and the damages caused to them in the process.
Are Deepfakes Always Harmful?
The misuse of deepfake technology to deceive social media users has sparked growing concerns about the potential dangers of artificial intelligence (AI). As per The Hindu, in 2023, Infosys founder Narayana Murthy publicly warned against such misuse after deepfake images and videos of him began circulating online. He urged citizens to stay vigilant and report such incidents to the authorities. That same year, in Khanapur, Belagavi, a 22-year-old man was arrested for using AI tools to create and distribute morphed images of a woman who had rejected his advances. The accused, employed by a private firm in Bengaluru, created a fake digital profile in her name and shared doctored images to damage her reputation.
These incidents are not isolated, and the dual nature of AI is increasingly evident in India. While the technology can be harnessed for good, as demonstrated by Prime Minister Narendra Modi, who used the government-developed AI tool Bhashini to deliver a real-time speech translated from Hindi to Tamil, it can just as easily be weaponized. In the past year, two viral videos falsely showing Bollywood actors Ranveer Singh and Aamir Khan campaigning for the opposition Congress party surfaced online. Both actors filed police complaints, stating the videos were deepfakes created without their knowledge or consent, illustrating how AI can manipulate not just images but also words and public perception.
Legal and Regulatory Responses: What’s Happening Globally?
Globally, lawmakers and regulators are grappling with similar challenges and have issued measures to combat the same:
In the United States, the right of publicity is strong, enabling individuals to manage the commercial exploitation of their name, image, and likeness. In response to mounting abuse through deepfake technology, a number of U.S. states have enacted focused legislation. Texas SB 751 criminalizes false election-influencing videos, while Florida SB 1798 and Louisiana Act 457 prohibit deepfakes that include minors in sexual material. South Dakota SB 79 extends child pornography definitions to AI-produced images mimicking minors. New Mexico HB 182 and Indiana HB 1133 add disclosure mandates for campaign material using misleading media and permit civil enforcement for noncompliance. Washington HB 1999 offers civil and criminal relief for victims of sexually explicit deepfakes.
Meanwhile, in the European Union, the General Data Protection Regulation (GDPR) provides robust protection for personal data, such as biometric and likeness data.
Generative AI drives applications such as ChatGPT and Sora, but it also enables the creation of malicious deepfakes, including abusive content involving children (BBC, Feb 2024). Sites such as CivitAI have been criticized for facilitating such abuse despite their content rules. The EU AI Act, adopted in March 2024, does provide a definition of deepfakes (Art. 3(60)) and imposes transparency requirements (Art. 50), but deepfakes are treated as intermediate-risk systems. Critics argue that this approach risks conflict with the ECHR and the GDPR, and some scholars have suggested two reforms: mandating structured synthetic data to aid deepfake detection, and classifying deepfakes used for disinformation, extortion, or abuse as 'high-risk' under the Act.
Moving ahead, in China, courts have held that the unauthorized use of an individual's voice in AI technology violates personality rights.
On November 25, 2022, China's Cyberspace Administration (CAC), Ministry of Industry and Information Technology (MIIT), and Ministry of Public Security (MPS) jointly released the Administrative Provisions on Deep Synthesis of Internet Information Services (DS Administrative Provisions), effective from January 10, 2023. The provisions designate the national cyberspace authority as the primary regulator, assisted by telecommunications and public security organs at both national and local levels. Grounded in major laws such as the PRC Cybersecurity Law, Data Security Law, and Personal Information Protection Law, the provisions regulate deep synthesis (DS) technologies, including deepfakes, in response to rapid AI development. In addition, DS service providers must adhere to legal and ethical principles, ensure proper political and public opinion guidance, and promote responsible development, thereby curbing the risks of rights infringement, misinformation, and cybercrime.
Ethical Use of AI
Developers and platforms must adopt ethical guidelines to ensure AI is not used to infringe on personality rights. This includes:
- Implementing watermarking or detection tools for AI-generated content (see the sketch after this list)
- Requiring clear consent for the use of personal attributes in AI systems
- Educating users about the risks and responsibilities associated with AI content creation
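As a minimal sketch of what the first point, watermarking or labelling AI-generated content, could look like, the snippet below embeds a machine-readable provenance note into a generated image and reads it back before publication. This is an assumption-laden illustration, not the C2PA/Content Credentials standard or any specific platform's implementation; the metadata key, file names, and tool name are hypothetical, and the example assumes the Pillow imaging library.

```python
# Illustrative only: label an AI-generated image with a machine-readable
# provenance note in its PNG metadata, then read the label back before
# publication. Not the C2PA / Content Credentials standard or any platform's
# real pipeline; the metadata key, file names, and tool name are hypothetical.
# Requires Pillow (pip install pillow).
from typing import Optional

from PIL import Image
from PIL.PngImagePlugin import PngInfo

PROVENANCE_KEY = "ai-provenance"  # hypothetical metadata key


def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Embed a simple 'AI-generated' disclosure into the image's PNG text chunks."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text(PROVENANCE_KEY, f"ai-generated; tool={generator}")
    img.save(dst_path, pnginfo=meta)


def read_provenance(path: str) -> Optional[str]:
    """Return the provenance string if the label is present, else None."""
    return Image.open(path).info.get(PROVENANCE_KEY)


if __name__ == "__main__":
    label_as_ai_generated("avatar.png", "avatar_labelled.png", generator="demo-model")
    print(read_provenance("avatar_labelled.png"))  # -> "ai-generated; tool=demo-model"
```

Plain metadata of this kind is trivially stripped by re-encoding, which is why production systems tend to pair visible disclosures with robust, imperceptible watermarks and server-side detection.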
At the same time, the dual-use nature of AI means its benefits should not be overlooked. Health care experts see vast potential for AI, particularly in billing, paperwork processing, and, most significantly, in data analysis, imaging, and diagnosis. AI could enable doctors to draw on the full breadth of medical knowledge when making treatment decisions. In the employment sector, AI is transforming hiring by screening resumes and analyzing candidates’ voice and facial expressions, while also fueling the rise of “hybrid” jobs. Rather than replacing workers, AI often handles complex technical tasks, such as optimizing delivery routes, allowing employees to focus on higher-value responsibilities. “It’s allowing them to do more things better, reduce errors, and share expertise more effectively across the organization,” said Fuller, who researches the impact of AI on vulnerable workers.
Conclusion
Sadhguru's fight against the unauthorized use of his personality rights by malicious websites and AI-generated works is representative of the multidimensional challenges faced by individuals in the digital age. With technology increasingly blurring the line between reality and fabrication, strong legal frameworks, ethically driven AI development, and vigilant enforcement become necessary to protect personal identity, reputation, and public trust.
The Delhi High Court ruling handed down by Hon'ble Justice Saurabh Banerjee not only acknowledges personality rights in the new digital landscape but also establishes a judicial standard for proactive, technology-aware solutions. Through the issuance of a "dynamic+" injunction and the initiation of affidavit-based future disclosures, the Court has adopted a living, dynamic enforcement system, one that reflects the very flexibility of the platforms and technologies being abused.
In a coming era of AI-made deepfakes, identity theft, and content designed for virality, conventional legal remedies alone are not enough. Courts have to respond with instruments that can match the scale and speed of the threats they are endeavouring to hold back.
FAQs
1. Why is the unauthorized use of public figures' images and voices such a serious concern?
Public figures rely heavily on their image, voice, and persona for their careers and reputations. Unauthorized usage, such as deepfakes or unlicensed commercial exploitation, can result in economic losses, reputational harm, and erosion of public trust. Courts have acknowledged that such misuse, especially without consent, causes irreparable damage to the individual's dignity and brand value.
2. What legal protections exist in India against misuse of a person’s identity or persona?
Indian courts have begun recognizing personality rights, especially for celebrities. In Amitabh Bachchan v. Rajat Nagi, the Delhi High Court granted an ad-interim ex parte injunction to protect Bachchan’s image from unauthorized use. Courts are increasingly proactive, issuing dynamic injunctions and acknowledging the evolving threats posed by AI-driven misuses like deepfakes.
3. Are all deepfakes illegal or harmful?
No. Deepfakes have both positive and negative uses. While they can enhance communication (for example, Prime Minister Modi’s use of AI for real-time language translation), they can also be weaponized to spread misinformation, harass individuals, or commit fraud. The legality and harm depend on context, consent, and intent.
4. How are other countries addressing the threat of deepfakes and misuse of AI?
Countries like the United States, China, and those in the European Union have implemented or proposed various regulatory mechanisms. The U.S. enforces state-level laws addressing election interference and deepfake pornography. China requires deep synthesis providers to follow ethical and political guidelines under laws like the DS Administrative Provisions. The EU AI Act mandates transparency and classifies misuse of deepfakes under “intermediate-risk,” though there is pushback urging stricter classification.
5. What role should AI developers and platforms play in ensuring ethical use of the technology?
Developers and platforms must implement watermarking and detection tools, ensure explicit consent for use of personal data or identity, and educate users about responsible AI usage. Ethical AI development should prioritize safeguarding personality rights, especially as tools become more powerful and accessible.