RIGHT TO PRIVACY IN THE AGE OF AI

“Law and technology produce, together, a kind of regulation of creativity we’ve not seen before.” ~Lawrence Lessig

INTRODUCTION

Peter Parker’s Uncle Ben, from Spider-Man, was right when he said, “With great power comes great responsibility.” Applied to the case in point: with the great power of Artificial Intelligence (AI) comes the responsibility to safeguard privacy. AI has revolutionised industries and the world order. While it offers many benefits to humanity, such as increased productivity and more efficient outcomes, it also comes with its own set of challenges. The more the technology progresses, the greater the risks it poses.

Now, with the boom in AI and the growth of Large Language Models (LLMs), new challenges have arisen with respect to privacy. LLMs and other AI systems are trained on vast sets of data. The question arises whether our personal and sensitive data will be used to train one AI model or another, and whether the prompts we enter are shared with third parties.

This blog delves into the impact of AI on the privacy of personal data and information, the associated threats, and the current regulatory frameworks.

RIGHT TO PRIVACY 

The right to privacy is part of India’s constitutional ideals under Article 21, which states, “No person shall be deprived of his life or personal liberty except according to procedure established by law.” In Kharak Singh v. State of Uttar Pradesh (1963), the Supreme Court of India held that the right to life is not mere animal existence; it is the right to enjoy life with the full use of all the faculties of the human body. The Court observed that a person cannot enjoy life to the fullest if he is kept under surveillance or confined to a place.

In Maneka Gandhi v. Union of India (1978), the Court widened the scope of personal liberty and held that any deprivation of liberty must be backed by a law, and that the procedure it prescribes must be just, fair, and reasonable.

It was finally in K.S. Puttaswamy v. Union of India (2017) that the Court interpreted Article 21 to include the right to privacy, ushering in an era of debate surrounding privacy and individuals’ personal information.

PRIVACY AND PRIVACY RIGHTS IN THE DIGITAL AND AI ERA

Privacy is not limited to the physical state of being left alone; it includes control over one’s personal decisions and personal data. Artificial Intelligence systems and models are trained on large sets of data drawn from various sources, which may even include biometric information. These AI models use this data to generate responses and arrive at solutions. The information can be sensitive in nature, such as details about a person’s private life. Anyone who uses digital services and devices is susceptible to being part of this data collection, with or without their consent.

Different jurisdictions define privacy protections differently. India took its stance in K.S. Puttaswamy v. Union of India (2017). Article 12 of the Universal Declaration of Human Rights says, “No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.” Article 8 of the European Convention on Human Rights provides for the ‘right to respect for private and family life’, and its second clause says, “There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.”

The use of AI models is redefining how we look at privacy rights, especially as debates continue over how to reconcile technological advancement with the need to protect individual privacy.

PRIVACY THREATS CAUSED BY AI 

AI poses many privacy-related threats, such as the opaque collection of data and the use of personal information by AI models. These threats are difficult to address under today’s legal frameworks: AI models are complex in nature and create risks that were previously non-existent. Some of them are as follows.

Complexity of data collection

The algorithms behind AI models collect data in ways that are quite complex for a layperson to understand. How and when data is collected is often not visible to individuals. This lack of visibility means an individual’s data may be collected or exposed without their knowledge.

Lack of consent

AI systems that collect information often do not ask for the explicit consent of the individuals concerned. In many cases, individuals have no control over what data is processed by an AI model or how it is used. This violates their privacy, as many jurisdictions define privacy as control over one’s person and personal information.

Ethical issues

The process of data collection by AI models is not transparent, which means that individuals’ personal information may be used without their permission. Often, even the patterns of data collection are not visible to the people whose data is being collected. These practices go to the very core of the right to privacy and raise ethical issues about the use and development of these AI systems.

Deep Fakes

Deepfakes are images and videos in which a person’s face or body is altered, typically for malicious ends or to spread false information. AI can create these deepfakes without the person’s consent. This raises a fundamental debate about a person’s digital identity and may threaten their reputation.

SOLUTIONS TO THREATS CAUSED BY AI TO PRIVACY

All technological advancements come with their own set of problems, and their pros and cons have to be weighed to decide their fate. AI provides significant advantages, but the challenges associated with it have to be managed carefully. Some solutions to the privacy threats created by AI are listed below.

Transparency in data collection

Users have a right to know what data of theirs is being used to train AI models; therefore, a transparent data collection process has to be adopted.

Shifting to opt-in systems 

AI systems should collect and use only the data that an individual has opted in to share. If a user opts out, that choice, and their privacy, should be respected.

Explicit consent mechanism

At the heart of the right to privacy is control over one’s data and person. Thus, the explicit consent of the individual should be obtained before their data is collected or used. This would help ensure the data privacy of individuals. A simple sketch of how such an opt-in consent check might look is given below.
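To make the opt-in and explicit-consent points concrete, here is a minimal, purely illustrative Python sketch of a consent check in a data pipeline. All the names used here (ConsentRecord, collect_for_training, the ‘model_training’ purpose label) are hypothetical and are not drawn from any statute or existing library; the point is simply that the default is non-collection unless the user has affirmatively opted in, and that withdrawing consent takes effect immediately.

```python
# Purely illustrative sketch of an opt-in consent check; all names are hypothetical.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ConsentRecord:
    """Tracks the purposes a user has explicitly opted in to."""
    user_id: str
    opted_in_purposes: set = field(default_factory=set)

    def allows(self, purpose: str) -> bool:
        # The default is refusal: data may be used only for purposes
        # the user has explicitly opted in to.
        return purpose in self.opted_in_purposes

    def withdraw(self, purpose: str) -> None:
        # Opting out must be respected immediately.
        self.opted_in_purposes.discard(purpose)


def collect_for_training(record: dict, consent: ConsentRecord) -> Optional[dict]:
    """Return the record only if the user consented to model training."""
    if not consent.allows("model_training"):
        return None  # no consent, no collection
    return record


# Usage: a user who opted in to analytics, but not to training, is excluded.
consent = ConsentRecord(user_id="u123", opted_in_purposes={"analytics"})
print(collect_for_training({"user_id": "u123", "age": 30}, consent))  # -> None
consent.opted_in_purposes.add("model_training")
print(collect_for_training({"user_id": "u123", "age": 30}, consent))  # -> record
```

The key design choice is that the default answer is “no”: data flows into training only after a purpose-specific opt-in, and withdrawing consent stops further use.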

CURRENT INDIAN LAWS

While India does not have a regulatory framework that directly deals with AI in matters of privacy, its current legal frameworks can help guide future legislation. One such framework is the Digital Personal Data Protection (DPDP) Act, 2023, which introduces principles such as purpose limitation and data minimisation and establishes a Data Protection Board to oversee breaches and violations. The Act promotes transparency in data collection and processing. Another is the Information Technology Act, 2000; it too does not directly address AI concerns, but it restricts companies from collecting unnecessary personal data.
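As a rough illustration of what a data minimisation principle can mean in practice, the sketch below keeps only the fields needed for a declared purpose. The purpose-to-fields mapping and function names are invented for this example; they are not drawn from the DPDP Act or any specific rulebook.

```python
# Illustrative sketch of data minimisation: keep only the fields a declared
# purpose actually needs. The mapping below is hypothetical, not statutory.
REQUIRED_FIELDS = {
    "order_delivery": {"name", "address", "phone"},
    "age_verification": {"date_of_birth"},
}


def minimise(record: dict, purpose: str) -> dict:
    """Drop every field that is not required for the stated purpose."""
    allowed = REQUIRED_FIELDS.get(purpose, set())
    return {key: value for key, value in record.items() if key in allowed}


user_record = {
    "name": "A. Sharma",
    "address": "New Delhi",
    "phone": "98xxxxxx10",
    "date_of_birth": "1990-01-01",
    "browsing_history": ["..."],
}

# Only delivery-relevant fields survive; date of birth and browsing history are dropped.
print(minimise(user_record, "order_delivery"))
```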

CONCLUSION

The rise of AI has highlighted unique issues and challenges, especially in the context of privacy. Addressing them requires a holistic approach in our thinking and our solutions: legal minds, AI developers, technicians, and policymakers must come together to propose solutions that guide the future of AI while securing individuals’ privacy rights.

A balanced future is possible, one in which AI innovation coexists with privacy protections, but it requires urgent, collective action: stronger legal frameworks and the development of ethical AI. The right to privacy is the foundation of autonomy, dignity, and a free society. As AI continues to evolve, we must ask: do we want a future where technology serves humanity, or one where humanity serves technology? The answer lies in the choices we make today.

AUTHOR: SARGUN SINGH

REFERENCES 

Judicial pronouncements: 

  1. Kharak Singh v. State of Uttar Pradesh (1963)
  2. Maneka Gandhi v. Union of India (1978)
  3. K.S. Puttaswamy v. Union of India (2017)

Web resources:

  1. Alice Gomstyn and Alexandra Jonker, Exploring privacy issues in the age of AI, IBM, September 30, 2024. (Available at: https://www.ibm.com/think/insights/ai-privacy)
  2. Dr. Rahul Bharti, The Right to Privacy in the Age of Artificial Intelligence: Challenges and Legal Frameworks, SSRN, Volume 54, Issue 7, 2024.
  3. Privacy in an AI Era: How Do We Protect Our Personal Information?, Stanford University Human-Centered Artificial Intelligence, March 18, 2024. (Available at: https://hai.stanford.edu/news/privacy-ai-era-how-do-we-protect-our-personal-information)
  4. Vaishali Tomar, Privacy in the age of AI: challenges and opportunities, Lawful Legal, November 21, 2024. (Available at: https://lawfullegal.in/privacy-in-the-age-of-ai-challenges-and-opportunities/)
  5. Akshaya R, An Analysis on Artificial Intelligence and Data Privacy in India, Indian Journal of Law and Legal Research, Volume 7, Issue 2, 2025.
  6. Universal Declaration of Human Rights, United Nations. (Available at: https://www.un.org/en/about-us/universal-declaration-of-human-rights)
  7. European Convention on Human Rights, Council of Europe. (Available at: https://www.echr.coe.int/documents/d/echr/Convention_ENG)
