What is Artificial Intelligence?

With the AI Impact Summit 2026 underway in New Delhi, this is an apt moment to revisit what we mean by artificial intelligence and to ask how it unsettles legal ideas that were framed long before such systems appeared, especially in an economy like India’s that is both rapidly digitising and deeply unequal.

Artificial intelligence is, at its core, an attempt to build systems that can carry out tasks we normally associate with human intelligence: learning from experience, spotting patterns, drawing inferences and making decisions. Unlike traditional software, which follows explicit step-by-step instructions, many AI models learn from large datasets and then generate their own internal rules for solving new problems. They generalise from examples instead of simply executing a fixed script.

The Summit’s theme, “Sarvajana Hitaya, Sarvajana Sukhaya” (welfare and happiness for all), captures the aspiration of using AI as a force multiplier for public good. In India, that promise is tangible: from predicting crop yields and optimising fertiliser use, to triaging patients in overburdened public hospitals, to analysing GST data for better compliance, AI tools are already inching into core economic and governance functions. But the law that must control their use was drafted in an era of paper files and human clerks, not self-learning models and cloud infrastructure.

Existing statutes and doctrines assume that any harmful act can ultimately be traced back to a human mind: a person who forms an intention and acts upon it. AI systems, by contrast, are self-learning, partly autonomous and often opaque. When they operate as black boxes within public and private decision-making, doctrines that require courts and regulators to identify a human agent, and to draw a clean line of causation between that person’s conduct and the harm, begin to strain. The shift from human judgement to data-driven, model-driven processes has significantly altered how we must think about “agency” and “causation”, and this has direct consequences for an economy like India’s that is trying to modernise without losing sight of equity.

Agency and Causation: The Legal Starting Point

Agency, in the broad sense, refers to the capacity to act intentionally: to form purposes, control one’s behaviour and understand that actions carry consequences. In law, agency and legal personhood are reserved for those who can know the law, understand its implications and orient their conduct accordingly. Children, certain patients and persons with serious cognitive impairments are treated differently because they do not have full legal agency.

Causation is the bridge between conduct and harm. In both civil and criminal law, liability rests on showing that a particular act or omission contributed to a harmful outcome in a way the law recognises. Together, agency and causation allow the system to point to a person and say, in effect: you did this, and therefore you are responsible.

This structure was built for a world where tools from ploughs to power plants were ultimately inert, and humans were the only entities that made genuine decisions.

How AI Stretches Agency and Causation – and Why It Matters for India

Artificial intelligence does not fit neatly into that picture. These systems can refine their own strategies, discover new patterns and take decisions that were not specifically scripted by any human being. They influence, and in some contexts effectively make, choices with moral content (choices between competing claims and values) without possessing any consciousness or understanding of those values.

India’s particular economic context makes this tension sharper. The state is increasingly tempted to use AI to cope with scale: screening crores of applications for welfare schemes, scoring taxpayers for scrutiny, flagging “high-risk” cargo at ports, allocating police resources, or ranking school performance. Private players are adopting similar tools for credit scoring, insurance pricing and hiring. The pressure to automate is high because human capacity is limited and demands are vast. But when AI systems sit between citizens and crucial economic outcomes (tax assessments, subsidies, loans, jobs), the way we assign responsibility cannot remain vague.

Several features of AI systems are especially troublesome:

1. Absence of Intent

Machine learning models do not possess will, awareness or any grasp of legal or moral norms. When such a system denies a small trader a loan, wrongly flags a farmer as ineligible for a subsidy, or misclassifies a taxpayer as “high-risk”, the error is a computational outcome, not a deliberate act. Yet the consequences are very real for livelihoods in an economy where a single loan or subsidy can determine whether a family climbs out of poverty or slips back.

Our existing categories (intentional wrong, negligence, pure accident) were crafted around human mental states. They struggle to accommodate harms generated by optimisation routines. We cannot honestly say the model “intended” to discriminate against a region or a caste; but we also cannot treat the resulting pattern of denials as mere bad luck.

2. Dilution of Control

Many contemporary AI systems are effectively black boxes. Their internal representations and decision paths are complex and not meaningfully explainable even by their designers. Once deployed, they may be updated by vendors abroad, trained on data from multiple jurisdictions, and integrated into legacy Indian systems with little documentation.

In a large bureaucracy, this opacity dilutes control. An official in a district office may rely on a risk score generated by a central system, without understanding how it was produced or how to contest it. When an unlawful or unjust outcome emerges (a village repeatedly denied benefits, a group of MSMEs consistently down-ranked for credit), responsibility is diffused across software vendors, central ministries, system integrators and local users. Traditional causation analysis, which looks for a relatively linear chain from instruction to harm, quickly reaches its limits.

3. Autonomous Learning and Drifting Behaviour

If a model is continually updated with new data (say, real-time GST filings or credit histories), its behaviour will evolve. A decision rule that was acceptable at the time of deployment may morph into something quite different a year later, especially if the underlying data themselves embody shifting economic realities, such as regional slumps or sectoral booms.

This dynamism complicates regulation. It is not enough to certify a model once and assume that its decisions remain stable. Yet India’s regulatory capacity is already stretched, and repeated technical audits for every deployed model are difficult. As behaviour drifts, linking a harmful outcome back to any original human instruction becomes more tenuous, but the harm to affected citizens remains.
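
To make the idea of drift concrete, the short Python sketch below is a purely illustrative monitoring check, not a description of any actual deployed system; the data, the approval-rate metric and the 0.05 tolerance are all assumptions. It compares a model’s approval rate on a baseline batch of decisions with its rate on a later batch and flags the model for human review when the gap grows too large.

    from dataclasses import dataclass

    @dataclass
    class DriftReport:
        baseline_rate: float
        current_rate: float
        drifted: bool

    def approval_rate(decisions: list) -> float:
        """Share of decisions in a batch that were approvals."""
        return sum(decisions) / len(decisions) if decisions else 0.0

    def check_drift(baseline: list, current: list, tolerance: float = 0.05) -> DriftReport:
        """Flag the model for human review if its approval rate has moved
        by more than `tolerance` since the baseline period (assumed rule)."""
        b, c = approval_rate(baseline), approval_rate(current)
        return DriftReport(b, c, abs(b - c) > tolerance)

    # Made-up subsidy decisions: approvals at deployment vs. one year later.
    baseline = [True] * 80 + [False] * 20   # 80% approved at deployment
    current = [True] * 62 + [False] * 38    # 62% approved a year later
    print(check_drift(baseline, current))
    # DriftReport(baseline_rate=0.8, current_rate=0.62, drifted=True)

Even a crude check of this kind shows why certification at deployment alone is not enough: the same model, fed newer data, can quietly begin treating applicants differently.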

4. Multiplicity of Inputs and Actors Across the Value Chain

The outputs of AI systems in India are shaped by a web of actors: global model providers, domestic system integrators, government departments that supply training data, private firms that fine-tune models for local use, and end users, from tax officers to bank loan officers, who feed in prompts and act on recommendations. Each actor adds a layer of influence, but often no single actor has full visibility or control.

When a systemic bias emerges (for instance, certain districts persistently flagged as “high fraud risk”, leading to disproportionate inspections and delays), the harm is real but the immediate cause is distributed. Traditional doctrines, which ask “whose act caused this?”, do not map easily onto a situation where many small design and data choices amplify into large-scale economic disadvantage for specific communities.

5. Embedded Bias in Data and Models

Indian data are not neutral. Historical patterns of access to credit, land, education and formal employment carry the imprint of caste, gender, region and class. If AI systems are trained on such data without careful correction, they will tend to reproduce and reinforce these patterns. A hiring algorithm trained on past corporate recruitment may quietly favour English-medium urban graduates; a credit model trained on historic repayment may systematically assign lower scores to certain regions or social groups.

Here, discrimination can occur without any conscious intent by any participant. Yet the effects are economically significant: restricted access to credit, employment and state benefits can entrench structural inequality in subtle, automated ways. For a constitutional democracy committed to substantive equality, that is not a side issue; it goes to the heart of economic justice.
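
One common way to surface such embedded bias is to compare a model’s approval rates across groups and flag any group that fares markedly worse than the best-served one. The Python sketch below is only an illustration, loosely modelled on the “four-fifths” style of comparison used in some fairness audits; the group labels, the data and the 0.8 threshold are assumptions, not an Indian legal standard.

    from collections import defaultdict

    def approval_rates(records: list) -> dict:
        """Approval rate per group from (group, approved) records."""
        totals, approved = defaultdict(int), defaultdict(int)
        for group, ok in records:
            totals[group] += 1
            approved[group] += ok
        return {g: approved[g] / totals[g] for g in totals}

    def flag_disparate_impact(records: list, threshold: float = 0.8) -> dict:
        """Flag groups whose approval rate falls below `threshold` times the
        best group's rate (an assumed audit rule, not a statutory test)."""
        rates = approval_rates(records)
        best = max(rates.values())
        return {g: (rate / best) < threshold for g, rate in rates.items()}

    # Made-up credit decisions: (region, approved)
    records = [("urban", True)] * 70 + [("urban", False)] * 30 \
            + [("rural", True)] * 45 + [("rural", False)] * 55
    print(approval_rates(records))        # {'urban': 0.7, 'rural': 0.45}
    print(flag_disparate_impact(records)) # {'urban': False, 'rural': True}

The point is not the particular threshold but the discipline: unless someone is obliged to run comparisons of this kind, the pattern of denials never becomes visible to anyone who could answer for it.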

Recalibrating Agency and Causation for an AI-Enabled Indian Economy

Rewriting the entire legal system is neither feasible nor wise. The more realistic path is to adjust existing concepts of agency, causation, liability and responsibility so that they respond honestly to technological change and to India’s economic realities, without abandoning the principles of fairness, accountability and non-discrimination that give law its legitimacy.

A genuinely human-centric approach to AI, especially in a developing economy, must ensure that the drive for efficiency does not outrun responsibility. That calls for at least three broad moves.

  1. Risk-Based Regulation with an Indian Priority Map: India needs a clear sense of where the stakes are highest. Systems used in public distribution, health insurance, tax administration, criminal justice, credit and employment decisions have a direct impact on livelihoods and rights. These should be treated as high risk and subject to tighter ex ante and ex post controls.
    In such domains, the law can require prior testing for disparate impact across regions, castes, genders and income groups; periodic audits by independent bodies; and clear channels for grievance redress, including the ability to have decisions reviewed by human officials empowered to override the system. Lower-risk uses in entertainment, basic productivity tools, or contexts where harms are easily reversible can be governed more lightly.
  2. Meaningful Transparency and Contestability in High-Impact Uses: Demanding that every AI model be fully transparent is neither realistic nor always helpful. But where AI is used to allocate scarce public resources, deny benefits, flag taxpayers, or grade students, people must be able to understand, in plain terms, why a particular decision was made and how it can be challenged.
    This suggests legal requirements for “explainable enough” systems in certain sectors: systems that can provide reasons at an appropriate level of abstraction, keep logs of key factors influencing decisions, and allow regulators to probe how different inputs affect outputs. Embedding these capacities at the design stage makes it harder for institutions, public or private, to hide behind the complexity of the technology when questioned about unfair outcomes.
  3. Layered Liability Across the AI Ecosystem: In an Indian setting, where both public and private bodies deploy AI, liability should be distributed in a way that reflects actual influence without letting any actor evade responsibility. Developers can be held to standards regarding training data quality, robustness and documentation; data-supplying departments to standards of accuracy and non-discrimination; deployers to duties of appropriate use, context-specific safeguards and human oversight; and end users to duties of non-misuse and escalation when outputs appear anomalous.
    Such a layered approach does not fragment accountability; it makes it more precise. It also encourages better governance within institutions. For example, a bank using credit scoring AI could be obliged to regularly test whether its model yields systematically different outcomes for borrowers of similar financial profiles across caste or regional lines, and to adjust its systems or practices if such patterns appear; a minimal sketch of such a test follows this list.
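
By way of illustration only, the Python sketch below shows the kind of internal check the third point contemplates: it buckets loan applicants into coarse income bands and compares approval rates across regions within each band. The field names, the bands and the ten-percentage-point tolerance are all assumptions made for the example, not a prescribed methodology.

    from collections import defaultdict

    def income_band(income: float) -> str:
        """Coarse, hypothetical income banding used only to compare like with like."""
        return "low" if income < 300_000 else "mid" if income < 1_000_000 else "high"

    def parity_gaps(loans: list, tolerance: float = 0.10) -> dict:
        """For each income band, return the gap in approval rates between regions,
        keeping only bands where the gap exceeds `tolerance` (assumed review trigger)."""
        totals = defaultdict(lambda: defaultdict(int))
        approved = defaultdict(lambda: defaultdict(int))
        for loan in loans:
            band, region = income_band(loan["income"]), loan["region"]
            totals[band][region] += 1
            approved[band][region] += loan["approved"]
        flagged = {}
        for band, regions in totals.items():
            rates = [approved[band][r] / regions[r] for r in regions]
            gap = max(rates) - min(rates)
            if gap > tolerance:
                flagged[band] = round(gap, 2)
        return flagged

    # Made-up applications with similar incomes but different regions.
    loans = ([{"income": 250_000, "region": "A", "approved": True}] * 40
             + [{"income": 250_000, "region": "A", "approved": False}] * 10
             + [{"income": 250_000, "region": "B", "approved": True}] * 25
             + [{"income": 250_000, "region": "B", "approved": False}] * 25)
    print(parity_gaps(loans))   # {'low': 0.3} -> low band warrants closer review

A bank that ran checks of this kind on a regular cycle, and documented how it responded when gaps appeared, would find it far easier to show the oversight a layered liability regime would expect.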

Conclusion

AI-enabled systems that participate in resource allocation and moral choices without direct human supervision pose a direct challenge to a legal order that has long assumed that only human beings can be true legal agents. In India, the stakes are particularly high. An opaque model used to prioritise patients in a government hospital, to flag “suspicious” GST refunds, or to screen job applicants for a major employer can tilt life chances for millions. These are not marginal technicalities; they shape who participates fully in the economy and on what terms.

Machines themselves cannot understand rights, duties or fairness. Human beings can. But the unpredictability and complexity of AI make it harder to connect a specific harmful decision back to a particular human act using familiar tools of causation and intent. The risk, if the law does nothing, is that responsibility will evaporate into the system as a whole, while real harm falls disproportionately on those with the least power to challenge it.

The task, therefore, is not to pretend that AI systems are persons, nor to let their complexity become a cloak for impunity. It is to adjust doctrines of agency and causation so that they keep human responsibility clearly in view even when decisions are mediated by algorithms, and to do so with a conscious awareness of India’s economic structure and constitutional commitments. The aim is to harness AI to expand opportunity and efficiency, while preserving the simple but demanding rule that when things go wrong, someone, not something, remains answerable.

Author: Akshi Seem, Associate Partner