1. Procedural and Linguistic Limitations: Barriers to Justice under the POSH Act
1. Syllabus
GS-II: Welfare schemes for vulnerable sections; mechanisms, laws, institutions and Bodies constituted for the protection and betterment of these vulnerable sections.
2. Context
The editorial provided a critical analysis of the procedural and structural limitations of the Sexual Harassment of Women at Workplace (Prevention, Prohibition and Redressal) Act, 2013 (POSH Act), arguing that the Act's implementation is often hindered by procedural and linguistic barriers.
3. Main Body: Multi-Dimensional Analysis
Despite being a landmark piece of legislation, the POSH Act faces significant implementation hurdles, often turning the pursuit of justice into a procedural ordeal for survivors.
- The Three-Month Limitation Period: A major procedural flaw is the strict three-month limit for filing a complaint. While the Internal Complaints Committee (ICC) can extend this, the rigidity disproportionately affects survivors who often take time to process trauma, gather evidence, or overcome fear of retribution. The law needs greater flexibility based on the survivor’s circumstances.
- Linguistic Limitations and Perceived Gravity: The language used to record and process complaints can inadvertently shape how seriously misconduct is perceived. Subtle linguistic biases in committee reports or procedural forms can undermine the survivor's testimony, effectively shifting the burden of proof onto the victim.
- Inter-Institutional Misconduct: The Act has a blind spot regarding inter-institutional harassment (e.g., harassment by a visiting faculty member, or during a conference). As modern academia and corporate environments thrive on cross-institutional collaboration, the lack of a clear mechanism for complaints against non-employees or across different organizations allows repeat offenders to move between campuses and firms without accountability.
4. Implications
These limitations lead to under-reporting of cases and disillusionment among survivors, and allow perpetrators to escape accountability, defeating the purpose of the law. The failure to address these structural issues risks weakening the rule of law in matters of gender justice.
5. Way Forward
The government must amend the Act to allow a more flexible limitation period, mandate specialized gender-sensitization training for all ICC members, and frame guidelines for handling inter-institutional complaints through a Central/State-level appellate authority.
2. The Evolving Legal Landscape of Personality Rights in the Age of AI
1. Syllabus
GS-III: Science and Technology—developments and their applications and effects; Intellectual Property Rights (IPR).
2. Context
The editorial explored the complex legal and ethical challenges posed by Generative AI and deepfakes to the personality rights of individuals. It reviewed recent landmark judgments in India and compared India’s approach to global frameworks.
3. Main Body: Multi-Dimensional Analysis
Personality rights protect an individual’s distinct identity, including their name, voice, image, likeness, and even signature, from unauthorized commercial exploitation.
- India’s Reactive Approach: Personality rights are not codified in a specific statute but are recognized through landmark judgments like the Amitabh Bachchan v. Rajat Nagi (2022) and Anil Kapoor v. Simply Life (2023) cases, which affirmed the right to prevent the misuse of one’s voice, likeness, and catchphrases by AI. India’s approach remains hybrid and reactive, relying on common law, trademark, and copyright principles.
- The AI Threat: The ability of AI to create hyper-realistic synthetic voices and deepfakes (e.g., the Arijit Singh v. Codible Ventures (2024) ruling protecting the singer's voice) has made unauthorized replication cheap and scalable, exposing structural gaps in legal protection.
- Global Regulatory Mosaic:
- US: Follows a Property-Based ‘Right of Publicity,’ focusing on the commercial value of the persona.
- EU: Adheres to a Dignity and Consent Framework under stringent data protection laws (like GDPR).
- China: Focuses on Stricter Consumer-Focused Enforcement to prevent AI deception.
- Need for Robust Indian Legislation: The rise of deepfakes necessitates comprehensive action: codifying personality rights (including style, persona, and creative patterns), mandating AI watermarking and transparency (to label AI-generated content), and strengthening platform liability for hosting or profiting from deepfakes.
4. Implications
The absence of explicit statutory law leaves India dependent on case-by-case litigation within an already fragmented global framework for AI governance. India risks eroding human autonomy and dignity if it fails to protect citizens from AI-driven exploitation, which also complicates liability frameworks and IP enforcement.
5. Way Forward
India must enact comprehensive legislation that explicitly defines and protects personality rights in the digital age, aligning its AI policy with its stance on digital sovereignty and ethical technology development.