The New Frontier: Addressing Legal Issues in Artificial Intelligence

This Article has been written by Nainika Saini, a law graduate from SRM University, Sonepat, Delhi.


Certainly, the term “Artificial Intelligence” or AI is not unfamiliar to many of us. Interestingly, it is not a contemporary concept. The American computer scientist John McCarthy first introduced the term “artificial intelligence” at the Dartmouth Conference in 1956[1]. At its essence, AI denotes the simulation of human cognitive functions by machines, predominantly computer systems[2]. In layman’s terms, this means machines have the capability to emulate decision-making processes akin to human reasoning.

As technology becomes increasingly embedded in our daily lives, there is a palpable shift in how we conduct our day-to-day tasks. However, a burgeoning concern is the absence of legal frameworks addressing intellectual property rights in the AI realm.[3] Furthermore, the swift advancements in AI have spotlighted pivotal societal issues like data privacy, system integrity, and accountability.

In this discourse, we aim to delve deep into the nuances and implications of AI and ponder over its management and the challenges it poses.

Delving Into Intelligence

The 1950s was a decade marked by heightened enthusiasm and curiosity about AI. Among the pioneers was Alan Turing, who authored the paper “Computing Machinery and Intelligence”. This piece explored the intricacies of creating intelligent machines and the methods to assess their intelligence.[4] The period from 1957 to 1970 witnessed an AI surge, propelled by more affordable and faster computing. However, by the late 1990s, the fervor around AI appeared to recede. Now, we are navigating through the “big data” era, characterized by extensive data harnessing across sectors such as technology, business, and entertainment.[5]

The National Artificial Intelligence Strategy elucidates that AI comprises a spectrum of technologies designed to endow machines with capabilities to mimic human cognition and actions. This suggests that machines can assimilate intellectual competencies from their surroundings, gather and process information, and refine these competencies based on historical data. 

Echoing the significance of intelligence, Chief Justice of India, D.Y. Chandrachud, highlighted its pivotal role in bolstering efficiency, transparency, and objectivity in public governance.

Illustrating the integration of AI in judiciary processes, the Supreme Court recently unveiled an AI-powered portal named SUPACE. While it does not make verdicts, it assists by collating and presenting relevant data to the judge.[6]

Machine Learning: A Subset

Machine Learning (ML) is an integral subset of AI. It is the methodology employed to derive precise models from voluminous datasets[7]. In essence, machine learning enables software applications to yield precise outcomes and forecasts without exhaustive human programming.[8]

More broadly, ML focuses on developing algorithms and models that enable computers to learn from data and make predictions or decisions without being explicitly programmed. It encompasses various techniques, including supervised learning, unsupervised learning, and reinforcement learning, allowing machines to improve their performance on specific tasks through experience and data analysis. Machine learning has applications in fields like image recognition, natural language processing, and recommendation systems.
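To make the idea of supervised learning concrete, the following is a minimal illustrative sketch (not drawn from any source cited in this article): the program is never given the rule relating inputs to outputs; it estimates that rule from labelled examples, which is the essence of “learning from data without being explicitly programmed”.

```python
# Minimal supervised-learning sketch: fit a straight line y = w*x + b
# to labelled (x, y) examples using ordinary least squares, in pure Python.

def fit_linear(xs, ys):
    """Learn slope w and intercept b from training pairs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least-squares estimates.
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

def predict(w, b, x):
    """Use the learned parameters to forecast an unseen input."""
    return w * x + b

# Training data follows the hidden rule y = 2x + 1, but the rule itself
# is never coded anywhere -- it is inferred from the examples.
xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]
w, b = fit_linear(xs, ys)
print(w, b)              # learned parameters
print(predict(w, b, 10)) # forecast for an input the model never saw
```

The same principle scales up: real ML systems replace this two-parameter line with models holding millions or billions of parameters, but they are still estimated from data rather than hand-programmed.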

The Challenges Surrounding Artificial Intelligence

India’s trajectory in AI governance seems to be in flux. In April 2023, Rajeev Chandrasekhar from the Ministry of Electronics and Information Technology articulated that there were no immediate plans to roll out dedicated regulations overseeing AI’s evolution. However, by June, the narrative seemed to pivot with hints at possible AI incorporation via the Digital India Act (“DIA”).[9] This shift in stance might be influenced by international developments, notably from the European Union. In May, the EU’s legislative bodies deliberated upon and drafted a proposal aimed at pioneering a unified policy for AI. This policy is slated for finalization by year’s end.[10]

The ascent of Artificial Intelligence (AI) represents a paradigm shift in technological advancements. While it has the potential to reduce human effort and drastically improve accuracy, the very potency of AI has brought to light several concerns that must be addressed.

Potential Bias in AI

Language models (LMs), which power sophisticated tools like ChatGPT, are designed to produce human-like responses. Their accuracy and appropriateness, however, hinge on their training data. When an LM is exposed to biased information, its outputs reflect that bias. For instance, consider an LM trained on literature related to climate change. If the majority of its training data favors an economic perspective over an environmental stance, the AI’s understanding, and hence its responses, could be skewed, potentially sidelining critical environmental concerns.[11]
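The mechanism behind this can be shown with a deliberately simplified toy, not a real language model: a “model” that merely counts stance labels in its training corpus and answers with the majority stance. The corpus, labels, and functions below are invented purely for illustration.

```python
# Toy illustration of data-driven bias (NOT a real language model):
# the "model" simply adopts whichever stance dominates its training data.
from collections import Counter

def train(corpus):
    """'Training' here is just tallying stance labels in the corpus."""
    return Counter(corpus)

def respond(model):
    """The model's 'view' is the most frequent stance it was trained on."""
    return model.most_common(1)[0][0]

# A skewed corpus on climate change: economic framing outnumbers
# environmental framing 8 to 2, mirroring the example in the text.
biased_corpus = ["economic"] * 8 + ["environmental"] * 2
model = train(biased_corpus)
print(respond(model))  # the skew in the data reappears in the output
```

Real LMs are vastly more complex, but the core point survives the simplification: the model has no independent judgment about which stance is correct; imbalances in the training data flow straight through to the output.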

AI’s Role in Crime and Fraud

Uncontrolled AI applications can have dire consequences. A pertinent example is the data breach involving ChatGPT in Italy, underscoring the security vulnerabilities associated with AI systems.[12] Beyond the immediate risks, there is the overarching “black box” problem. This term captures our limited grasp of how AI makes decisions, making it challenging to anticipate AI actions, or even to understand them after the fact.[13]

Intellectual Property Challenges

AI blurs the lines of copyright and intellectual property. Training LMs often involves feeding them vast amounts of literature. How much of this information gets retained and regurgitated? Where does human-authored content end and AI-generated content begin? As AI technology grows, it’s crucial to dissect and understand these intricacies to protect original creators and define AI’s position concerning intellectual property.[14]

Privacy Implications of AI

As AI integrates into our daily lives, the potential mishandling or misuse of personal data is a looming concern. The Personal Data Protection Bill 2019 (PDPB) in India mirrors the EU’s GDPR, representing a step towards securing personal data. However, it’s important to question: Is this enough? NITI Aayog’s pending AI strategy emphasizes AI ethics and privacy, indicating a move in the right direction.[15]

Conclusion and Suggestions

The march of Artificial Intelligence in our contemporary world is irrefutable. Its transformative prowess is palpable across industries, making tasks more streamlined and enhancing productivity. However, as the adage goes, “With great power comes great responsibility.”[16] The unchecked and unregulated development and deployment of AI systems raise multifaceted concerns ranging from data privacy to potential biases, to the usurpation of intellectual property.

To ensure AI evolves as a boon and not a bane, the following suggestions are posited:

  1. Comprehensive Legal Framework: India must fast-track its journey towards formulating a holistic and robust legal framework that caters specifically to AI. While the PDPB 2019 is a step forward, the dynamism of AI demands specialized attention.
  1. Ethical Oversight: Apart from legal regulations, there should be an ethical board or committee responsible for examining the development and deployment of AI systems. This body should include experts from diverse fields, ensuring multiple perspectives guide AI’s journey.

  1. Education and Awareness: As AI becomes ubiquitous, public understanding is crucial. Campaigns, seminars, and educational courses should be developed to make individuals aware of the benefits, risks, and rights related to AI.
  1. Collaborative Approach: Engage in dialogue and collaborations with international entities. AI is a global phenomenon, and India can learn from and contribute to global best practices.
  1. Protecting Intellectual Property: The blurred lines between AI-generated content and original human creations must be demarcated clearly. AI should be programmed to respect copyright laws and the intellectual property of individuals.
  1. Transparency and Accountability: It’s imperative to develop mechanisms that ensure AI systems are transparent in their operations. Moreover, accountability structures should be in place in case of discrepancies or violations.
  1. Constant Review: Given the speed of AI development, the regulations and guidelines established should be under periodic review. This ensures that the legal framework remains relevant and adaptive to the latest advancements.

In closing, while AI’s promise is astronomical, it’s only by intertwining this promise with principles of fairness, ethics, and accountability that we can truly harness its potential. As we stand on the cusp of an AI revolution, the imperative is not just to be technologically advanced but to be ethically enlightened and legally prepared.

Reference Links

  1. Spider-Man: Marvel