India's AI Governance Regime: Interplay of IT Act, DPDP Act, and Sectoral Regulations
- AK & Partners

Introduction
India’s regulatory ecosystem is shaping AI governance without a standalone AI Act, relying on sectoral frameworks to balance innovation with accountability. As of February 2026, there is no dedicated, comprehensive AI-specific legislation; oversight instead emerges from the Information Technology Act, 2000, the Digital Personal Data Protection Act, 2023, and the February 2026 amendments to the IT Intermediary Rules targeting synthetically generated content.
Liability and risk allocation adopt a control-based principle, holding human deployers accountable for foreseeable harms like biased decisions or breaches under tort, contract, consumer protection, and DPDP norms, rather than diffusing responsibility to algorithms or third parties.
Algorithmic fairness and ethics are anchored in constitutional imperatives, mandating non-arbitrariness, equality, and due process for public AI uses, while private deployments navigate indirect obligations through consumer rights and anti-discrimination principles. This adaptive approach empowers organisations to map compliance proactively across AI applications.
I. Information Technology and Cyber Security
The Information Technology Act, 2000 forms the backbone of India’s digital regulatory framework. AI systems are generally treated as “computer resources” under the IT Act, 2000, a term encompassing computers, systems, networks, data, databases, and software. AI systems fall into this category because they are:
Software: AI models, Large Language Models (LLMs) and algorithms are fundamentally software and computer programs.
Data: AI systems rely on vast datasets for training and processing, which are protected as “data” under the Act.
Computer Systems: The hardware and infrastructure (e.g., servers and GPUs) used to run AI are classified as “computer systems.”
This subjects them to provisions on cyber offences, data protection, electronic records, digital signatures, and intermediary liability.
Key Offences and Penalties
Section | Offence (AI Application) | Penalty |
--- | --- | --- |
43 (read with s 66) | Unauthorised access/damage to computer resources (e.g., stolen data for ML training) | Up to 3 years imprisonment and/or INR 5 lakh fine. |
43A | Negligent handling of sensitive personal data in AI systems (e.g., breaches from insecure APIs/adversarial attacks) | Liable to pay damages by way of compensation to the person so affected. |
66C | Punishment for identity theft (e.g., AI-enabled identity theft/personation) | Up to 3 years imprisonment and/or INR 1 lakh fine. |
66D | Punishment for cheating by personation by using computer resource (e.g., deepfakes, voice cloning, impersonating chatbots) | Up to 3 years imprisonment and/or INR 1 lakh fine. |
66E | Punishment for violation of privacy (e.g., non-consensual publication of private images via AI, GANs/diffusion models) | Up to 3 years imprisonment and/or INR 2 lakh fine. |
66F | Punishment for cyber terrorism (e.g., using AI for sending threats) | Imprisonment which may extend to imprisonment for life. |
67 | Punishment for publishing or transmitting obscene material in electronic form (e.g., obscene AI-generated material) | Up to 3 years imprisonment and fine up to INR 5 lakh (first conviction). |
67A | Punishment for publishing or transmitting material containing sexually explicit acts (e.g., AI-generated sexually explicit material) | Up to 5 years imprisonment and fine up to INR 10 lakh (first conviction). |
67B | Punishment for publishing or transmitting material depicting children in sexually explicit acts (e.g., AI-generated child sexual abuse material) | Up to 5 years imprisonment and fine up to INR 10 lakh (first conviction). |
Practical Illustration: An AI credit scoring platform aggregating data (bank statements, CIBIL, GST) for ML-based decisions qualifies as a "computer resource"/potential intermediary. Data breaches trigger s 43A liability; stolen training data invokes ss 43/66; discriminatory proxies may invoke DPDP/constitutional remedies (doctrinal uncertainty noted).
II. Generative AI, Deepfakes and Content Regulation
Under the IT Act, “intermediaries” are entities that receive, store, transmit, or provide services with respect to electronic records. They are granted conditional safe harbour: immunity applies if (i) their function is limited to providing access to a communication system, and (ii) they do not initiate the transmission, select the receiver, or modify the content.
On February 10, 2026, the Ministry of Electronics and Information Technology (“MeitY”) announced amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules, 2021”). These amendments bring synthetically generated content, such as deepfakes, within the due-diligence obligations of intermediaries. The new rules governing synthetic media take effect on February 20, 2026, giving platforms ten days to comply.
Definition of Synthetically Generated Information: “Synthetically generated information” refers to audio, images, or videos created or changed using computers or AI so realistically that they:
Look or sound real, and
Portray a person or event as if it actually happened,
Even though it is artificial, edited, or algorithmically generated.
In short, it includes deepfakes or any AI‑made content that seems real.
What is NOT considered synthetically generated information: The following types of normal or good‑faith editing do not count as synthetic information:
Basic editing or cleanup: Such as trimming, formatting, fixing errors, adjusting colour or sound, reducing noise, transcribing, or compressing files, as long as the meaning is not changed.
Normal creation of documents or learning materials: Like making PDFs, presentations, educational content, research materials, or using templates, drafts, or illustrative examples, as long as no fake documents or false records are created.
Tools that only improve accessibility or clarity: Such as translation, adding descriptions or captions, improving quality, or making content easier to search, as long as no major part of the original content is changed.
Key Legal Provisions Introduced:
Due diligence by an intermediary: Intermediaries, including social media platforms, significant social media intermediaries (SSMIs), and online gaming intermediaries, must follow enhanced due diligence requirements.
Mandatory User Notifications (Every 3 Months): Intermediaries must inform users, in simple language, about key rules and consequences.
Table 1: User Notification Requirements
Requirement Category | Details Users Must Be Informed About |
--- | --- |
Consequences of Breaking Platform Rules | Immediate suspension/termination of account; removal or blocking of violating content |
Legal Consequences for Unlawful Content | Users may face penalties/punishment under applicable laws |
Mandatory Reporting of Serious Offences | Offences under specified laws must be reported to the authorities where legally required |
Additional Disclosure for Platforms Offering AI Tools: If an intermediary provides tools to create, edit, or distribute synthetic/AI-generated content, it must further notify users about legal implications.
Table 2: Mandatory User Information for AI Tool Providers
Category | Requirements |
--- | --- |
Laws Under Which Misuse Can Be Punished | The applicable laws under which creating or sharing unlawful synthetic content can be penalised |
Actions Intermediary Can Take on Violation | Removal or disabling of access to the content; suspension or termination of the user’s account |
Obligations When Intermediary Detects Violation (Including Synthetic Information)
If the intermediary becomes aware of unlawful synthetic content, whether through self-detection or a complaint, it must take swift action, including:
Removing or disabling access
Suspending or terminating user accounts
Any other necessary remedial steps
Faster Takedown Timeline
Old Requirement | New Requirement |
--- | --- |
36 hours to remove unlawful content | 3 hours after receiving actual knowledge |
This applies to content involving contempt of court, defamation, or incitement to offences.
Due Diligence Related to Synthetically Generated Information
Platforms providing AI content creation/modification tools must adopt technical measures (including automated systems) to prevent generation or sharing of illegal AI-generated content.
Table 3: AI Generated Content That Must Be Blocked
Category of Harmful Content | Examples |
--- | --- |
Sexual/Harmful Content | Child sexual abuse material, non-consensual intimate imagery, obscene/pornographic content, privacy violations |
False Documents/Records | Creation or alteration of fake documents or electronic records |
Illegal Weapons/Explosives | Content enabling making/obtaining explosives, arms, ammunition |
Deceptive Depictions of Real Persons/Events | Fake identity, voice, statements, false event creation, misleading portrayals |
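To make this obligation concrete, below is a minimal, illustrative Python sketch of an automated pre-generation policy check keyed to the categories in Table 3. The category labels, the classify_prompt() helper, and the PolicyDecision structure are assumptions for illustration; the IT Rules do not prescribe any particular technical implementation.

```python
# Illustrative only: a hypothetical pre-generation policy check mirroring Table 3.
from dataclasses import dataclass

BLOCKED_CATEGORIES = {
    "csam",                      # child sexual abuse material
    "non_consensual_imagery",    # non-consensual intimate imagery
    "false_document",            # fake documents or electronic records
    "weapons_explosives",        # content enabling arms/explosives
    "deceptive_depiction",       # fake identity, voice, statements, events
}

@dataclass
class PolicyDecision:
    allowed: bool
    reason: str = ""

def classify_prompt(prompt: str) -> set:
    """Placeholder classifier: real systems would combine keyword rules and ML models."""
    hits = set()
    text = prompt.lower()
    if "passport" in text and "template" in text:
        hits.add("false_document")
    return hits

def pre_generation_check(prompt: str) -> PolicyDecision:
    """Refuse generation when the request falls within a blocked category."""
    hits = classify_prompt(prompt) & BLOCKED_CATEGORIES
    if hits:
        return PolicyDecision(False, f"blocked categories: {sorted(hits)}")
    return PolicyDecision(True)

print(pre_generation_check("generate a blank passport template"))
```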
Requirements for AI Generated Content
If the content is AI-generated but not illegal, intermediaries must ensure transparency.
Table 4: Labelling & Metadata Requirements
Requirement | Description |
--- | --- |
Labelling of Visual AI Content | Clear, visible label stating it is AI-generated |
Audio Disclosure | Clear notice before playback |
Metadata/Provenance | Permanent metadata/unique ID indicating it is AI-generated and the tool used |
Tamper Prevention | Platform must prevent users from removing/editing metadata or labels |
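As a purely illustrative sketch of the labelling and metadata duties in Table 4, the Python snippet below stamps a visible “AI-generated” label on a PNG image and embeds provenance metadata using Pillow. The label text, metadata keys, and UUID-based provenance ID are assumptions; production deployments would typically use a standardised provenance format and tamper-resistant signing, which the Rules leave to the platform.

```python
# Illustrative only: visible labelling plus provenance metadata for a PNG image.
import uuid
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_ai_image(in_path: str, out_path: str, tool_name: str) -> str:
    img = Image.open(in_path).convert("RGB")

    # 1. Clear, visible label on the image itself (Table 4, row 1).
    draw = ImageDraw.Draw(img)
    draw.text((10, 10), "AI-generated content", fill="white")

    # 2. Permanent metadata: synthetic flag, generator tool, unique ID (Table 4, row 3).
    provenance_id = str(uuid.uuid4())
    meta = PngInfo()
    meta.add_text("synthetic", "true")
    meta.add_text("generator_tool", tool_name)
    meta.add_text("provenance_id", provenance_id)

    img.save(out_path, "PNG", pnginfo=meta)
    return provenance_id

# Example usage (paths and tool name are placeholders):
# label_ai_image("raw.png", "labelled.png", "ExampleGen v1")
```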
Additional Due Diligence for SSMIs & Online Gaming Intermediaries
Platforms where users upload or publish content must follow pre-publication checks.
Table 5: Pre-Publication Requirements for Significant Social Media Intermediaries
Step | Requirement |
--- | --- |
1. User Declaration | Ask users whether content is synthetically generated |
2. Verification | Platform must verify the user’s declaration using reasonable technical measures |
3. Labelling | AI-generated content must be clearly labelled before publication |
If a platform knowingly allows or promotes violative AI-generated content, it will be deemed to have failed in due diligence.
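A minimal sketch of the three-step pre-publication flow in Table 5 is set out below, assuming a hypothetical detect_synthetic() verifier as the “reasonable technical measure”; the Rules do not mandate any specific verification technique, and the upload record fields are illustrative assumptions.

```python
# Illustrative only: a hypothetical pre-publication pipeline for an SSMI.
from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    user_declared_synthetic: bool   # Step 1: user declaration
    verified_synthetic: bool = False
    label_applied: bool = False

def detect_synthetic(content_id: str) -> bool:
    """Placeholder for a 'reasonable technical measure' (e.g., a detector model)."""
    return False

def pre_publication_pipeline(upload: Upload) -> Upload:
    # Step 2: verify the user's declaration with technical measures.
    upload.verified_synthetic = (
        upload.user_declared_synthetic or detect_synthetic(upload.content_id)
    )
    # Step 3: label synthetic content clearly before publication.
    if upload.verified_synthetic:
        upload.label_applied = True   # e.g., attach visible label and metadata
    return upload

print(pre_publication_pipeline(Upload("vid-001", user_declared_synthetic=True)))
```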
III. Data Protection and Privacy
The Digital Personal Data Protection Act, 2023 (“DPDP Act”) applies to AI systems to the extent they “process” digital personal data in India, and also to processing outside India if it is connected to offering goods or services to individuals in India. “Processing” is defined broadly to cover wholly or partly automated operations on digital personal data.
Illustration: Common AI Use Cases
Common AI use cases such as customer profiling, credit scoring, behavioural analytics, and model training or fine-tuning on user data will generally fall within the DPDP Act where personal data is involved.
Requirements for Consent and Notice for Processing Personal Data: As a baseline, personal data may be processed only for a lawful purpose on the basis of consent or “certain legitimate uses”. Where consent is used, the request must be accompanied or preceded by a notice that describes:
The personal data and the purpose of processing;
How the individual can withdraw consent and exercise their rights; and
How to complain to the Data Protection Board.
Consent must be free, specific, informed, unambiguous, and limited to the personal data necessary for the “specified purpose”; individuals must be able to withdraw consent with ease comparable to giving it. The DPDP Rules add that the notice should be understandable independently and drafted in clear, plain language.
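By way of illustration only, the following Python sketch models a consent-notice record capturing the items the DPDP Act requires a notice to describe. The ConsentNotice structure and its field names are assumptions for illustration, not statutory terms.

```python
# Illustrative only: a hypothetical record of the contents of a DPDP consent notice.
from dataclasses import dataclass

@dataclass
class ConsentNotice:
    personal_data_items: list          # what personal data will be processed
    purpose: str                       # the specified purpose of processing
    withdrawal_mechanism: str          # how consent can be withdrawn (as easily as given)
    rights_exercise_channel: str       # how the Data Principal can exercise rights
    board_complaint_info: str          # how to complain to the Data Protection Board
    language: str = "clear, plain language"

notice = ConsentNotice(
    personal_data_items=["bank statements", "credit bureau score"],
    purpose="AI-based credit scoring for loan eligibility",
    withdrawal_mechanism="self-service toggle in account settings",
    rights_exercise_channel="in-app privacy dashboard",
    board_complaint_info="link to the Data Protection Board complaint procedure",
)
print(notice.purpose)
```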
Illustration: Operational Obligations for AI Deployments
Operationally, organisations deploying AI must implement technical and organisational measures, protect personal data with reasonable security safeguards, and ensure accuracy/consistency where the data is likely to be used for a decision affecting the individual (a frequent feature of AI scoring and recommendation systems).
Automated Decision-Making and Cross-Border Data Transfers: The DPDP Act does not create a separate regime for automated AI decision-making, but if personal data is used, the above duties and the individual’s rights to correction/erasure and grievance redressal still apply. Cross-border transfers are permitted unless restricted by government notification, and the DPDP Rules additionally require compliance with government-specified requirements for making personal data available to a foreign State (or entities under its control).
IV. Cybersecurity and Information Security
The Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (“SPDI Rules”) require an information security programme and security practices commensurate with the information assets being protected and the risks involved, with ISO/IEC 27001 recognised as a benchmark (or an industry code approved by the government).
Illustration: Security Measures for AI Systems
For AI systems, this typically translates into securing training/validation datasets; controlling access to model weights and inference endpoints; encrypting and hardening internal data pipelines; implementing logging and monitoring; and contractually flowing down equivalent safeguards to data processors, cloud vendors, and model-hosting partners.
Security Safeguards and Breach Notification Under the DPDP Act, 2023: The Digital Personal Data Protection Act, 2023 imposes an explicit obligation on Data Fiduciaries to take “reasonable security safeguards” to prevent personal data breach, and to notify the Data Protection Board of India and affected Data Principals upon a breach. The DPDP Rules further specify breach intimation content and timelines, including intimation to the Board “without delay” and additional details within 72 (seventy-two) hours.
Intersection With CERT-In Incident-Response Obligations: Incident reporting and response also intersect with CERT-In’s mandate as the national agency for incident response, including its power to call for information and issue directions that regulated entities must comply with. Organisations should therefore maintain an AI-specific incident response playbook that covers data leakage, model compromise/theft, unauthorised access via AI interfaces, and integrity attacks on training pipelines, and triages whether notifications are triggered under DPDP and CERT-In-linked obligations.
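The snippet below is a hypothetical triage helper for such a playbook: it flags whether DPDP and CERT-In notification questions arise and computes the 72-hour window for the detailed intimation to the Board mentioned above. The incident categories and the decision logic are assumptions for illustration, not a regulatory formula.

```python
# Illustrative only: a hypothetical triage helper for an AI incident-response playbook.
from datetime import datetime, timedelta, timezone

AI_INCIDENT_TYPES = {
    "data_leakage",
    "model_compromise_or_theft",
    "unauthorised_access_via_ai_interface",
    "training_pipeline_integrity_attack",
}

def triage(incident_type: str, involves_personal_data: bool, detected_at: datetime) -> dict:
    """Return the notification questions raised by an AI incident (illustrative)."""
    notify_dpdp = involves_personal_data  # personal data breach -> Board and affected Data Principals
    return {
        "notify_dpdp_board": notify_dpdp,
        "dpdp_detailed_intimation_by": detected_at + timedelta(hours=72) if notify_dpdp else None,
        "assess_cert_in_reporting": incident_type in AI_INCIDENT_TYPES,
    }

print(triage("data_leakage", True, datetime.now(timezone.utc)))
```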
V. Intellectual Property Rights
Human Authorship as Basis of Copyright: Copyright protection in India subsists only in works created by human authors, and ownership is attributed to identifiable natural or juristic persons. As a result, content generated autonomously by AI systems without substantial human creative contribution raises unresolved questions regarding authorship and ownership. Where AI is used merely as an assistive tool under human direction, copyright is generally vested in the individual or entity that has arranged for the creation of the work. However, purely machine-generated outputs do not fit comfortably within the existing statutory framework, leaving their legal status uncertain.
Illustration: Copyright Ownership in AI‑Generated Content
A marketing team uses a generative AI platform to draft advertising copy but reviews, edits, and approves the final text before publication. Here, the human involvement is sufficient to treat the AI as a tool, and copyright would likely vest in the company as the entity arranging for the work. In contrast, if an AI system independently generates thousands of product descriptions with no human selection or modification, asserting copyright ownership over those outputs becomes legally uncertain.
Patent law: AI As An Instrument and Not An Inventor: Patent law in India adopts a similarly human-centric approach. The Indian patent regime requires that an invention be linked to a human inventor, and only natural persons can be recognised as inventors for the purpose of filing patent applications.
Consequently, AI systems cannot be named as inventors. Nevertheless, inventions developed with the aid of AI may still qualify for patent protection, provided they satisfy conventional requirements of novelty, inventive step, and industrial applicability. In such cases, the human developer or user of the AI system is treated as the legal inventor, reinforcing the principle that AI functions only as a technological instrument rather than an independent creator. Indian governmental reports on responsible AI also reiterate that current legal frameworks attribute accountability and ownership to human beings rather than to AI systems.
Illustration: Human Inventorship in AI‑Assisted Innovations
An engineer employs an AI system to analyse chemical compounds and proposes a new pharmaceutical formulation based on the model’s recommendations. Although the AI may have generated the core insights, the patent application must name the engineer as the inventor. The AI’s contribution is legally irrelevant to inventorship; it is treated as an advanced research tool.
Use of Third‑Party Data and Copyrighted Material in AI Training: A further area of complexity concerns the use of third-party data and copyrighted material for training AI models. Training large language models and generative AI systems typically requires the large-scale reproduction and processing of text, images, and other protected works. Indian copyright law does not presently contain explicit statutory exceptions permitting commercial text and data mining for AI training purposes. Accordingly, the unauthorised use of copyrighted works or proprietary databases for model development may expose organisations to infringement claims. The absence of clear legislative safe harbours creates significant compliance risks for AI developers and deployers operating in India.
VI. Consumer Protection
AI-driven products and services (including AI-driven interfaces and algorithmic pricing) are consumer offerings, and the Consumer Protection Act, 2019 applies to “all goods and services”. AI journeys should respect “consumer rights”, including the right to be informed (including about price) and the right to seek redressal, and should avoid “deficiency” in service performance.
Misleading Representations: A “misleading advertisement” includes false descriptions, false guarantees, or content likely to mislead consumers about the nature, substance, quantity or quality of a product or service. “Unfair trade practice” includes unfair or deceptive methods used to promote sale/use/supply or provision of services, including false statements about quality or benefits. “Unfair contract” includes terms that significantly change consumer rights, such as disproportionate penalties or unilateral termination without reasonable cause.
Pricing Algorithms and Dark Patterns: A complaint may allege that a service provider charged a price in excess of what is fixed by law, displayed, or agreed between the parties. The Central Consumer Protection Authority (“CCPA”) can issue guidelines to prevent unfair trade practices and has issued the Guidelines for Prevention and Regulation of Dark Patterns, 2023 which are persuasive in nature. “Dark patterns” are deceptive UI/UX practices designed to mislead or trick users by impairing autonomy or choice, amounting to misleading advertisement, unfair trade practice or violation of consumer rights. The Guidelines apply to platforms offering goods/services in India, advertisers and sellers, and prohibit engaging in specified dark patterns.
Illustration: AI-Driven Dark Patterns
Key UX risks for an AI platform include drip pricing (e.g., charging more than the amount disclosed at checkout); subscription traps (making cancellation complex, hidden, or confusing); and disguised advertisements (including disclosure obligations for sellers/advertisers).
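As a simple illustration of a drip-pricing control, the sketch below compares the total disclosed at the start of a checkout journey with the amount ultimately charged; the function name and the zero-tolerance threshold are assumptions for illustration, not part of the Guidelines.

```python
# Illustrative only: flag a checkout journey where the charged amount exceeds the disclosed total.
def drip_pricing_risk(disclosed_total: float, charged_total: float) -> bool:
    """Return True if the final charge exceeds what was disclosed upfront."""
    return charged_total > disclosed_total + 1e-9

# Example: a journey that discloses INR 499 but charges INR 589 at payment.
print(drip_pricing_risk(499.00, 589.00))   # True -> review for dark-pattern risk
```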
Regulatory Scrutiny and Implications: The CCPA may conduct preliminary inquiries and cause investigations into prima facie violations and order discontinuation of unfair practices and, where relevant, withdrawal of services and reimbursement. For false or misleading advertisements, it can direct discontinuation/modification, impose penalties and prohibit endorsers for specified periods. Non-compliance with directions can attract imprisonment up to six months or fine up to INR 20,00,000 or both.
Liability and Risk Allocation
India’s absence of an AI-specific liability statute does not imply a regulatory vacuum; rather, it produces a convergence model of liability, where exposure crystallises at the point of deployment through tort, contract, consumer protection, product liability, and data protection norms. The unifying principle across these regimes is attribution of control, not technological authorship.
Control as the basis of liability: Indian courts are unlikely to entertain arguments that liability should be diffused across opaque algorithms, third-party model providers, or autonomous decision systems. Instead, liability will attach to the entity that introduced AI into a decision-making chain and held itself out as responsible for the outcome. This mirrors existing jurisprudence where automation does not dilute human accountability but heightens the applicable standard of care.
Tort Law: AI failures as foreseeable risks: From a tort perspective, AI failures will increasingly be framed as foreseeable harm arising from negligent deployment, not unforeseeable machine error.
Illustration: Where an AI-based credit underwriting or recruitment model produces discriminatory outcomes due to biased training data, deployers cannot rely on absence of intent or model opacity as a defence. The legal inquiry will focus on whether bias audits, human-in-the-loop controls, and post-deployment monitoring were reasonably implemented.
Contract law perspective: In contract, risk allocation turns on whether AI errors fall within the assumed performance risk of the service. A provider offering an “AI-powered compliance or risk-screening tool” cannot characterise regulatory penalties triggered by false negatives as purely consequential losses if accuracy was central to the commercial bargain. Courts are likely to test limitation clauses against representations made in marketing materials, request for proposal responses, and service descriptions, not merely contractual fine print.
Consumer Law: Consumer protection law further compresses available defences. AI-driven interfaces that manipulate users through automated nudging, false urgency, or opaque pricing structures will be treated as unfair trade practices, irrespective of whether the manipulation was rule-based or model-driven. The inquiry shifts decisively from intent to consumer harm and effect.
Algorithmic Fairness and Ethics
There is no single, generally applicable Indian statute that mandates “algorithmic fairness” or “explainability” for all AI systems. However, constitutional principles become relevant where AI is deployed by public authorities (or drives State-linked decisions) in ways that can affect individuals’ rights. The Constitution’s commitment to justice, liberty, equality and dignity informs the minimum standard of responsible design for high-impact AI in such settings. Fundamental rights in Part III are framed largely as obligations on “the State”, and the State is constrained from making laws that take away or abridge these rights.
Equality and Non-Arbitrariness: The State must not deny “equality before the law” or “equal protection of the laws”. Where AI systems allocate welfare benefits, prioritise inspections, score risk, or rank candidates, this principle supports governance measures such as objective criteria, representative training data, bias testing across relevant cohorts, monitoring for drift, and human review for adverse decisions. Relatedly, if an AI-driven decision results in unequal treatment on protected grounds (for example, religion, race, caste, sex, place of birth), it can raise non-discrimination concerns, and public-sector recruitment/selection tools also need to be sensitive to equality of opportunity.
Freedoms, Due Process and Remedies: Public-body use of AI for content moderation or profiling may affect freedom of speech and expression, and AI-driven restrictions that impair a person’s ability to practise a profession or carry on trade or business may engage that freedom, subject to reasonable restrictions imposed by law. Where AI materially affects life or personal liberty, action must still be “according to procedure established by law”.
Conclusion
India’s current approach positions AI not as a special legal subject, but as a technology embedded within established regimes of cyber security, data protection, IP, consumer rights, and constitutional safeguards. This keeps human and institutional controllers firmly at the centre of accountability, even when decisions are generated by opaque or third‑party models.
However, as generative AI, synthetically generated information, and automated decision‑making scale across finance, public services, and consumer interfaces, pressure will grow for more targeted AI‑specific norms on liability, training data, and algorithmic fairness. In the near term, organisations deploying AI in India must treat compliance as a cross‑cutting exercise mapping each AI use‑case to obligations under IT, DPDP, CERT‑In, IP, consumer protection and constitutional principles rather than waiting for a standalone AI law to appear.
Disclaimer
The note is prepared for knowledge dissemination and does not constitute legal, financial or commercial advice. AK & Partners or its associates are not responsible for any action taken based on its contents.
For further queries or details, you may contact:
Ms. Kritika Krishnamurthy
Founding Partner
AK & Partners




