Enterprise AI Oversight: Strategic Controls for In-House and Shadow AI Deployments
- AK & Partners

I. AI in Enterprise Contracts: Legal Framework
The use of AI within enterprises for productivity-related functions, including legal drafting, automated negotiation processes, and large-scale data analysis, is presently regulated in India through existing statutory frameworks rather than through any AI-specific legislation. Indian law does not yet recognise a distinct legal regime governing AI-enabled contracts or the general deployment of AI in commercial operations. Nonetheless, the prevailing legal framework, principally the Indian Contract Act, 1872 and the Information Technology Act, 2000, is generally considered adequate to govern such activity.
Principle of Attribution: A central principle governing enterprise AI deployment under Indian law is attribution. AI systems are not recognised as legal persons and do not possess independent juridical status. They are therefore incapable of assuming contractual rights or obligations in their own name. The Indian Contract Act confines contractual capacity to persons competent to contract, which necessarily excludes automated systems from acting as parties to an agreement. As a result, AI operates only as a tool through which human or corporate intent is expressed.
Enforceability of AI-Assisted Contracts: The enforceability of contracts formed with the assistance of AI is clarified by Section 10A of the Information Technology Act, 2000, which confirms that an agreement does not lose legal validity merely because it is concluded through electronic or automated means. Contractual validity continues to depend on the existence of offer, acceptance, lawful consideration, and mutual assent. Consistent with the Indian Contract Act, consensus is determined by intention rather than by the medium through which consent is communicated.
Attribution to Human/Organisational Principal: Where an AI system is configured and deployed by a human actor or an organisation, the meeting of minds required for contractual formation is therefore attributed to that principal and not to the algorithm generating the output. This position accords with the reasoning of the Supreme Court in Trimex International FZE Ltd. v. Vedanta Aluminium Ltd. (2010),1 where the Court held that contracts concluded through electronic communications are enforceable when mutual intent is evident from the conduct of the parties. Any offer, acceptance, representation or contractual obligation generated through an AI system is accordingly treated in law as an act of the entity that authorised its use.
Liability Implications: This attribution framework has direct implications for liability. The use of AI in contract drafting, negotiation, review or automated execution entails foreseeable risks, including fabricated legal authorities, internally inconsistent provisions, unlawful terms and commercially unreasonable outcomes. These risks arise from known limitations of generative systems and are not exceptional in nature. Organisations cannot avoid liability by contending that such errors were produced autonomously by an algorithm, particularly where outputs are relied upon without meaningful human review. In such circumstances, liability is likely to attach to the deploying entity under established principles of breach of contract, negligent misrepresentation or failure of consideration.
II. In-House Productivity Tools
Internal AI tools, such as email summarisers, document drafters, and task assistants, process employee and client personal data within organisational systems.
Key Risk: Unauthorised processing, weak security controls, or uncontrolled access can turn an internal efficiency tool into a source of statutory breach under Indian data protection and cybersecurity law.
Illustration: An internal AI tool auto-summarises company emails containing unanonymised personal data (salary, medical disclosures, performance reviews).
Failure Point: Inadequate encryption and access controls allow attackers to exfiltrate AI-generated summaries.
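By way of illustration, the sketch below shows one way AI-generated summaries could be encrypted before they are written to storage, so that an attacker who reaches the datastore does not obtain plaintext. It is a minimal example assuming a Python-based internal service and the third-party cryptography library; the function names are illustrative and not drawn from any particular product.

```python
# Minimal sketch: encrypting AI-generated summaries at rest before storage.
# Assumes the third-party "cryptography" package (pip install cryptography);
# function and variable names are illustrative only.
from cryptography.fernet import Fernet

# In practice the key would come from a managed key store, not the source code,
# and decryption would sit behind role-based access controls.
key = Fernet.generate_key()
cipher = Fernet(key)

def store_summary(summary_text: str) -> bytes:
    """Encrypt an AI-generated summary so plaintext never reaches disk."""
    return cipher.encrypt(summary_text.encode("utf-8"))

def read_summary(token: bytes) -> str:
    """Decrypt a stored summary for an authorised caller."""
    return cipher.decrypt(token).decode("utf-8")

if __name__ == "__main__":
    token = store_summary("Performance review: employee X, salary band B2")
    print(read_summary(token))
```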
Applicable Legal Framework
Statutory Provisions
| Act | Provision | Why the Law Applies to In-House AI | How It Applies in Practice | Illustration (Typical Internal Use Case) |
| --- | --- | --- | --- | --- |
| Digital Personal Data Protection Act, 2023 | Section 8(1) | In-house AI tools process identifiable employee and client personal data for organisational purposes. | AI deployment requires valid consent or another lawful ground before ingesting emails, documents, or communications containing personal data. | An internal email-summarisation tool processes HR emails containing health disclosures without informing employees or obtaining consent. |
| Digital Personal Data Protection Act, 2023 | Section 33 read with the Schedule | AI-driven processing failures constitute systemic personal data protection violations. | Regulatory penalties may be imposed for inadequate safeguards, excessive data use, or uncontrolled AI processing. | A company rolls out an AI assistant across all teams without data minimisation or access controls, exposing large datasets and triggering regulator scrutiny. |
| Digital Personal Data Protection Act, 2023 | Section 8(6) | AI systems increase the likelihood and scale of personal data breaches. | Breaches involving AI outputs or training data must be notified to the Data Protection Board and affected individuals. | AI-generated summaries containing personal data are leaked via a compromised internal dashboard, requiring breach notifications. |
| Information Technology Act, 2000 | Section 43A | Organisations deploying AI qualify as body corporates handling sensitive personal data. | Negligent security practices in AI systems attract civil compensation claims from affected individuals. | Poorly secured AI logs expose employee salary data, leading to compensation claims for negligent security practices. |
| IT (Reasonable Security Practices and Procedures and SPDI) Rules, 2011 | Rule 8 | AI tools store, transmit, and analyse personal and sensitive data electronically. | Encryption, access controls, and periodic audits must be implemented for AI infrastructure. | An AI system transmits contract summaries between internal servers without encryption, allowing interception in transit. |
| IT Rules, 2021 and CERT-In Directions | Reporting obligation | AI systems are part of an organisation's information infrastructure and may be breach vectors. | Cyber incidents involving AI systems must be reported to CERT-In within six hours of awareness. | A breach involving an AI analytics tool is detected but reported after ten hours, triggering regulatory action for delayed reporting. |
| Consumer Protection Act, 2019 | Section 2(47) | AI processing of client or consumer data affects service quality and fairness. | Data misuse or leakage through AI tools may qualify as an unfair trade practice. | An internal AI pricing tool produces biased recommendations that are relied upon in consumer offerings, leading to complaints before the CCPA. |
Soft Law and Policy Guidance
| Policy/Scheme | Status | Enforceability | Why It Applies | How It Applies in Practice |
| --- | --- | --- | --- | --- |
| MeitY Advisory on responsible AI (2024) | Policy advisory | Non-binding (soft law) | Regulators assess AI deployments against emerging norms of responsibility and safety. | Bias testing, accuracy checks, and human oversight are used as benchmarks for reasonableness during audits and enforcement. |
Practical Considerations for Businesses
Consent Notices: Purpose-specific consent notices must be issued under Section 5 of the Digital Personal Data Protection Act, 2023 before any AI deployment.
Data Minimisation: Data inputs must be minimised, and personal data should be anonymised or pseudonymised wherever feasible in accordance with the Digital Personal Data Protection Act, 2023 (see the pseudonymisation sketch after this list).
Encryption Standards: Personal data must be encrypted at rest and in transit in compliance with Rule 8 of the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011.
Access Controls: Role-based access controls, activity logging, and segregation of duties must be implemented as reasonable security practices under the Information Technology Act, 2000.
Employee Training: Employees must be trained on personal data handling obligations and compensation liability under Section 43A of the Information Technology Act, 2000.
Governance Ownership: Clear internal ownership must be assigned for AI governance and data protection compliance under the Digital Personal Data Protection Act, 2023.
Regular Audits: Quarterly security and AI governance audits must be conducted to ensure ongoing statutory compliance.
Incident Response: A CERT-In-compliant incident response plan must be maintained under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 and CERT-In Directions, enabling cyber-incident reporting within six hours.
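The data-minimisation point above can be made concrete with a small pre-processing step that strips obvious identifiers before any text reaches an AI tool. The sketch below is illustrative only and assumes a Python pipeline; the regular expressions cover a few common identifier types and are not a complete pseudonymisation solution under the DPDP Act, 2023.

```python
# Minimal sketch: pseudonymising obvious personal identifiers before text is
# passed to an internal AI summariser. Patterns are illustrative and do not
# cover every identifier class relevant under the DPDP Act, 2023.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{10}\b"),
    "PAN":   re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),
}

def pseudonymise(text: str) -> str:
    """Replace recognisable identifiers with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

if __name__ == "__main__":
    raw = "Contact Asha at asha@example.com or 9876543210; PAN ABCDE1234F."
    print(pseudonymise(raw))  # identifiers replaced before any AI processing
```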
III. Employee Use of Generative AI
The use of generative AI by employees, even where limited to routine activities such as drafting internal communications or conducting preliminary research, gives rise to significant legal risks. These risks arise primarily in relation to intellectual property ownership and data protection, and they persist irrespective of whether the use of such tools is formally authorised or informally adopted within the organisation.
Human Authorship Requirement: Indian copyright law is firmly grounded in the requirement of human authorship. Under the Copyright Act, 1957, protection subsists only in original works that reflect a minimum threshold of human intellectual effort and creative input. The Copyright Act provides that copyright protection extends to original literary, dramatic, musical, and artistic works. In Eastern Book Company v. D.B. Modak (2008)2, the Supreme Court clarified that originality under Section 13 requires more than mere labour or investment. The Court held that a work must demonstrate a minimal degree of creativity, reflected through intellectual effort, skill, and judgment, in order to qualify for protection. AI systems are not recognised as authors or rights-bearing entities. The legal consequence is that purely AI-generated outputs may fall into the public domain, depriving organisations of enforceable exclusivity over content that may otherwise be commercially valuable.
Third-Party IP Infringement Risks: Employee use of generative AI also creates exposure to third-party intellectual property infringement. AI models are trained on large and heterogeneous datasets that may include copyrighted works, trademarks and other proprietary material. As a result, AI-generated outputs may reproduce or closely approximate protected third-party content. Liability for infringement arises irrespective of the employee’s intent or awareness. Organisations must therefore implement structured intellectual property clearance mechanisms, including similarity analysis and reverse image searches, prior to publication, deployment or commercial use of AI-generated material.
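As one illustration of such a clearance mechanism, the sketch below flags AI-generated text that closely matches entries in a small corpus of known protected material so that it can be routed for human and legal review. The corpus, the difflib-based similarity measure, and the 0.85 threshold are assumptions for demonstration, not a substitute for a full infringement analysis.

```python
# Minimal sketch: flagging AI-generated text that closely matches known
# protected material, as one step in an IP clearance workflow. The corpus
# and the 0.85 threshold are illustrative assumptions.
from difflib import SequenceMatcher

PROTECTED_CORPUS = [
    "Sample copyrighted paragraph held in the organisation's clearance library.",
]

def similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity ratio between two strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def needs_legal_review(ai_output: str, threshold: float = 0.85) -> bool:
    """Flag output for legal review if it closely matches protected text."""
    return any(similarity(ai_output, ref) >= threshold for ref in PROTECTED_CORPUS)

if __name__ == "__main__":
    draft = "Sample copyrighted paragraph held in the organisation's clearance library."
    print(needs_legal_review(draft))  # True: near-verbatim match, route to review
```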
Personality and Publicity Rights: In addition to copyright concerns, AI-generated content may infringe personality and publicity rights. Images, videos, or voice simulations that resemble identifiable individuals without authorisation raise legal risks under personality rights recognised through Indian constitutional jurisprudence and tort law. In R. Rajagopal v. State of Tamil Nadu,3 the Supreme Court affirmed an individual’s right to control the commercial use of their identity. The unauthorised use of AI-generated likenesses in advertising, promotional material, or endorsements may therefore attract injunctive relief, damages and reputational consequences. These risks are particularly pronounced in the context of deepfakes and synthetic endorsements.
Patent Inventorship Constraints: Patent law imposes further constraints on employee use of AI in inventive processes. AI systems cannot be named as inventors under Indian patent law. Inventorship must vest in natural persons who contribute to the conception of the inventive idea. Where AI tools are used in research and development, organisations must adopt structured invention disclosure processes to accurately document human contribution, identify inventors, and articulate the technical problem and solution. This is necessary to ensure compliance with the Computer Related Inventions Guidelines and to mitigate risks of rejection or invalidation arising from improper inventorship or inadequate disclosure.
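A structured disclosure process of this kind can be supported by a simple internal record that captures the human contribution alongside any AI tools used. The sketch below is a hypothetical format drafted for illustration, not a prescribed statutory form; the field names are assumptions.

```python
# Minimal sketch: a structured invention disclosure record that documents human
# contribution where AI tools were used in R&D. Field names are illustrative,
# not a prescribed statutory format.
from dataclasses import dataclass, field

@dataclass
class InventionDisclosure:
    title: str
    human_inventors: list[str]          # natural persons who conceived the invention
    technical_problem: str
    technical_solution: str
    ai_tools_used: list[str] = field(default_factory=list)
    human_contribution_notes: str = ""  # what the named inventors actually contributed

    def is_complete(self) -> bool:
        """Basic completeness check before filing or internal review."""
        return bool(self.human_inventors and self.technical_problem and self.technical_solution)

if __name__ == "__main__":
    record = InventionDisclosure(
        title="Adaptive contract-clause classifier",
        human_inventors=["A. Sharma"],
        technical_problem="Classifying clauses across heterogeneous formats",
        technical_solution="Rule-assisted model with a human-defined feature set",
        ai_tools_used=["internal LLM assistant"],
        human_contribution_notes="Inventor defined the feature set and evaluation protocol.",
    )
    print(record.is_complete())
```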
Key Risk: Uploading confidential or personal data into external AI tools results in loss of organisational control over data, triggering statutory exposure under Indian data protection, cybersecurity, and intellectual property law.
Illustration: An employee uploads confidential client contracts containing personal data and commercially sensitive clauses into a public generative AI tool for summarisation.
Practical Safeguards for Businesses using Generative AI
Approved Tools Policy: Unapproved use of public generative AI tools should be prohibited through formal internal policy, with approved in-house alternatives aligned with consent requirements under Section 5 of the Digital Personal Data Protection Act, 2023.
Mandatory Employee Training: Employees must receive mandatory training on personal data handling, sensitive personal data or information obligations, and compensation liability exposure under Section 43A of the Information Technology Act, 2000.
Technical Controls: Technical controls, including data loss prevention tools, upload blocking, and data masking, should be implemented in accordance with Rule 8 of the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (a minimal upload-screening sketch follows this list).
Incident Logging and Reporting: System logs should be maintained, and incidents arising from generative AI misuse should be reported within six hours of awareness in compliance with the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 and the CERT-In Directions.
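The upload-screening control mentioned above might, in its simplest form, look like the sketch below: prompts are checked for confidentiality markers and obvious personal data before they are allowed to reach an external generative AI tool, and every decision is logged. The markers, patterns, and function names are illustrative assumptions rather than a complete data loss prevention system.

```python
# Minimal sketch: a data-loss-prevention style check applied to prompts before
# they reach an external generative AI tool, with local usage logging.
# Markers, patterns, and function names are illustrative assumptions.
import logging
import re

logging.basicConfig(filename="genai_usage.log", level=logging.INFO)

CONFIDENTIAL_MARKERS = ("confidential", "privileged", "internal only")
PERSONAL_DATA = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+|\b\d{10}\b")

def screen_prompt(user: str, prompt: str) -> bool:
    """Return True if the prompt may be sent; otherwise block and log."""
    lowered = prompt.lower()
    if any(marker in lowered for marker in CONFIDENTIAL_MARKERS) or PERSONAL_DATA.search(prompt):
        logging.warning("Blocked upload by %s: potential confidential or personal data", user)
        return False
    logging.info("Prompt by %s passed screening", user)
    return True

if __name__ == "__main__":
    print(screen_prompt("emp042", "Summarise this CONFIDENTIAL client contract"))  # False
```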
IV. Shadow AI Risks
“Shadow AI” refers to the unauthorised use of public or consumer-facing AI tools by employees outside approved organisational systems. Such use creates material operational and legal risk, particularly in relation to data protection, confidentiality and regulatory compliance. The risk arises not from the technology itself, but from its deployment without governance, oversight or contractual safeguards.
Unauthorised Data Disclosure: A primary risk associated with shadow AI is unauthorised data disclosure. Employees may upload internal documents, customer data, contracts, or other business information into public AI tools for summarisation or analysis, potentially breaching confidentiality obligations owed to clients, counterparties or data principals. This exposure is amplified by AI provider terms of service that reserve broad rights to retain or reuse uploaded content for model training or derivative purposes, placing confidential information beyond organisational control.
Employer Liability Persists: Unauthorised employee use of AI tools does not displace employer liability. Under the Information Technology Act, 2000 and the Digital Personal Data Protection Act, 2023 (DPDP Act), organisations remain responsible as data fiduciaries or processors for personal data handled in the course of employment, particularly where internal controls or technical safeguards are inadequate.
Civil and Criminal Liability: Failure to implement reasonable security practices for AI data pipelines, prompts, or outputs may attract civil and criminal liability under the Information Technology Act, 2000 where unauthorised disclosure causes harm. The DPDP Act further introduces enhanced penalty exposure, including monetary penalties of up to INR 250 crore for non-compliance with statutory data protection obligations.
Consent and Transparency Issues: Shadow AI also raises consent and transparency concerns. Data principals or counterparties may allege insufficient disclosure regarding the role of AI in data processing or decision-making, particularly where automation materially affects outcomes. Such failures may implicate statutory rights under the DPDP Act and be treated as governance deficiencies rather than isolated employee misconduct.
Key Risk: Because shadow AI operates beyond approved systems, it evades oversight, bypasses security safeguards, and enables uncontrolled disclosure of personal data and proprietary information, triggering liability under Indian data protection, cybersecurity and intellectual property law.
Illustration: An employee uploads a confidential client contract to a public generative AI platform for summarisation, where the platform's terms permit data reuse for model training, breaching confidentiality. Liability attaches to both the employee and the employer, as data fiduciary under the DPDP Act and as the responsible body corporate under the Information Technology Act, 2000, irrespective of authorisation. The AI provider stores and processes the data externally, outside the organisation's control, and a provider-side breach exposes the uploaded content.
V. Data Leakage, Confidentiality, and Employer Liability
Data leakage and confidentiality breaches arising from the deployment of AI attract well-established civil and criminal liability under Indian law. These consequences flow from existing statutory provisions governing the handling, security, and disclosure of personal data in digital operations.
| Scenario | Illustrative AI Use Case | Law Applicable | Practical Compliance Implication |
| --- | --- | --- | --- |
| Insecure AI data pipelines | Organisation deploys an AI tool that processes employee or client data, but fails to secure prompts, training datasets, logs, or outputs. | IT Act, 2000 – Section 43A | AI systems must be included within cybersecurity frameworks, with encryption, access controls, and auditability across the full AI lifecycle. |
| Negligent AI configuration causing mass exposure | AI model trained or operated using internal datasets is breached due to weak controls, exposing large volumes of personal data at once. | IT Act, 2000 – Section 43A | Risk assessments must account for AI's amplification effect and require enhanced safeguards compared to conventional IT systems. |
| Employee misuse of AI access | Employee inputs personal data into AI tools outside authorised workflows while performing official duties. | IT Act, 2000 – Section 72A | Internal policies must explicitly restrict AI usage and treat unauthorised AI input as a confidentiality breach with disciplinary consequences. |
| AI processing without valid consent | AI systems are trained, refined, or operated using employee or customer data without purpose-specific consent. | DPDP Act, 2023 – Section 8(1) | Consent notices must expressly disclose AI use cases, including training, optimisation, and analytics. |
| Secondary AI use beyond original purpose | Data collected for HR, customer onboarding, or service delivery is reused for AI model training or testing. | DPDP Act, 2023 – Sections 5 and 8 | AI use must be purpose-mapped at design stage; blanket or bundled consents are insufficient. |
| Failure to secure AI systems | AI tools lack encryption, access controls, or logging. | IT (Reasonable Security Practices and SPDI) Rules, 2011 – Rule 8 | AI infrastructure must meet baseline security standards applicable to all SPDI-handling systems. |
| Delay or failure in breach notification | Data leak occurs through AI tools, but organisation delays reporting due to uncertainty over AI responsibility. | DPDP Act, 2023 – Section 8(6); IT Rules, 2021 and CERT-In Directions | Incident response plans must expressly cover AI systems and shadow AI scenarios (see the timing sketch after this table). |
| AI-driven exposure of client IP | Employees upload proprietary documents or contracts into AI tools without licence or authorisation. | Copyright Act, 1957 – Section 51 | Acceptable Use Policies must prohibit AI uploads of protected material without legal clearance. |
| Governance failure due to fragmented compliance | Organisation complies with DPDP Act but ignores sector-specific AI or data rules. | RBI Cybersecurity Directions; ICMR AI Ethics Guidelines | AI governance must integrate sector-specific compliance rather than treating DPDP compliance as sufficient. |
| Post-breach enforcement escalation | Organisation lacks documented AI governance, training, or controls. | DPDP Act, 2023 – Section 33 | Internal AI policies, training records, and audits become critical defensive evidence. |
| Failure to align with emerging AI norms | Organisation deploys AI without bias checks, transparency, or oversight. | MeitY Responsible AI Advisory, 2024 (Soft law) | Soft law should be operationalised as internal benchmarks to mitigate enforcement risk. |
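For the breach-notification scenarios in the table above, the six-hour CERT-In reporting window is straightforward to operationalise as an automated check within an incident response workflow. The sketch below is a minimal illustration assuming Python; the timestamps and helper name are hypothetical.

```python
# Minimal sketch: checking whether a cyber incident involving an AI system was
# reported to CERT-In within six hours of the organisation becoming aware of it.
# Timestamps and the helper name are illustrative assumptions.
from datetime import datetime, timedelta

REPORTING_WINDOW = timedelta(hours=6)

def reported_in_time(aware_at: datetime, reported_at: datetime) -> bool:
    """True if the report was filed within the six-hour CERT-In window."""
    return reported_at - aware_at <= REPORTING_WINDOW

if __name__ == "__main__":
    aware = datetime(2025, 1, 10, 9, 0)
    reported = datetime(2025, 1, 10, 19, 0)   # ten hours later
    print(reported_in_time(aware, reported))  # False: outside the six-hour window
```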
VI. Governance Frameworks and Policy Controls
Effective management of legal risk arising from AI deployment requires organisations to adopt structured governance frameworks that integrate binding statutory obligations with non-binding policy guidance. In India, AI governance operates through a layered interaction between enforceable legislation and normative instruments that articulate regulatory expectations in the absence of AI-specific statutes.
Enforceable Obligations vs Voluntary Policy Recommendations
| Instrument | Nature | Is Compliance Mandatory? | How Organisations Should Treat It in Practice |
| --- | --- | --- | --- |
| Digital Personal Data Protection Act, 2023 | Binding statute | Yes. Enforceable by law | Must be hard-coded into AI system design, consent flows, security controls, and breach response mechanisms |
| Information Technology Act, 2000 | Binding statute | Yes. Enforceable by law | Forms baseline liability for AI security failures, employee misuse, and negligent oversight |
| IT Rules (SPDI, Intermediary, CERT-In) | Binding delegated legislation | Yes. Enforceable by regulators | Operational requirements such as encryption, logging, and breach-reporting timelines must be implemented |
| Consumer Protection Act, 2019 | Binding statute | Yes. Enforceable by CCPA | Requires transparency and fairness in AI systems impacting consumers |
| India AI Governance Guidelines, 2025 | Policy guidance | No. Non-binding | Should be used as internal benchmarks and governance standards |
| MeitY AI Advisories | Advisory | No. Non-binding | Inform regulator expectations during audits and investigations |
Soft Law vs Enforceable Obligations
| Category | Examples | Legal Effect | Risk of Non-Compliance |
| --- | --- | --- | --- |
| Enforceable Obligations | DPDP Act 2023, IT Act 2000, IT Rules 2011/2021 | Direct statutory liability | Monetary penalties, compensation, criminal exposure, licence suspension |
| Soft Law / Guidance | India AI Governance Guidelines 2025, MeitY AI Advisory 2024, NITI Aayog AI Principles | No direct penalties | Used to assess "reasonableness", aggravation, and governance failure |
| Soft Law / Guidance | RBI FREE-AI Framework, SEBI AI frameworks | Indirect enforcement via supervision | Regulatory sanctions if ignored during audits or supervisory action |
Sector-Agnostic Regulations
| Law / Regulation | Type | AI-Relevant Obligations |
| --- | --- | --- |
| Information Technology Act, 2000 | Binding law | Security, disclosure, identity misuse, intermediary liability |
| Digital Personal Data Protection Act, 2023 | Binding law | Consent, purpose limitation, breach notification, penalties |
| IT (SPDI) Rules, 2011 | Binding rules | Encryption, access controls, audits |
| IT (Intermediary Guidelines) Rules, 2021 | Binding rules | 6-hour breach reporting, grievance redressal |
| Copyright Act, 1957 | Binding law | Training data and output infringement |
| Consumer Protection Act, 2019 | Binding law | Unfair or opaque AI practices |
| Indian Contract Act, 1872 | Binding law | Validity of AI-assisted contracts |
| Indian Evidence Act, 1872 | Binding law | Admissibility of AI outputs |
Sector-Specific Regulation
| Sector | Instrument | Nature | AI-Specific Focus | Enforcement |
| --- | --- | --- | --- | --- |
| Finance | RBI FREE-AI Framework (2025) | Soft law | Fairness, explainability, bias audits | RBI supervision |
| Finance | RBI Master Directions (PA, NBFC, IT outsourcing) | Binding | Model governance, data localisation, risk management | RBI penalties |
| Securities | SEBI AI/ML Frameworks | Binding | Investor data protection, algorithmic accountability | SEBI enforcement |
| Healthcare | ICMR Ethical AI Guidelines | Soft law | Clinical safety, consent, ethics review | Medical regulators |
| Telecom | TRAI AI Recommendations | Soft law | Network optimisation ethics | TRAI directions |
| Data Centres | MeitY Data Centre Policy | Binding policy | Infrastructure localisation for AI workloads | MeitY oversight |
Taming Shadow AI: Governance for In-House AI in India
As enterprises accelerate AI adoption for contracts, productivity, and innovation, India's layered framework, anchored in the DPDP Act, 2023, the IT Act, 2000, and emerging guidance such as MeitY's 2024 Advisory and the India AI Governance Guidelines, 2025, demands proactive governance over both approved in-house tools and unsanctioned "shadow AI".
Forward-thinking organisations treat AI as a controlled asset, embedding human oversight, purpose-specific consents, encryption and DLP controls, and quarterly audits as "reasonable security practices". Layering soft law (the RBI FREE-AI framework, ICMR ethics guidelines) on top as internal benchmarks helps demonstrate "reasonableness" during audits and can turn compliance into a competitive edge as the IT Rules tighten around deepfakes and synthetic media in 2026.
By mapping risks across contracts (liability attribution), employee use of generative AI (intellectual property and human authorship), shadow use (data loss), and governance (hard versus soft law), businesses can harness the reported 47% production adoption of AI in Indian finance while mitigating hallucination-driven tax notices and bias exposure. The path forward is to channel innovation through secure digital public infrastructure ("UPI for AI"), bias-tested pipelines, and executive-owned frameworks, ensuring AI amplifies value without unleashing uncontrolled risk.
Footnotes
1. Trimex International FZE Ltd. v. Vedanta Aluminium Ltd., (2010) 3 SCC 1.
2. Eastern Book Company v. D.B. Modak, AIR 2008 SC 809.
3. R. Rajagopal v. State of Tamil Nadu, (1994) 6 SCC 632.
Disclaimer
This note is prepared for knowledge dissemination and does not constitute legal, financial, or commercial advice. AK & Partners and its associates are not responsible for any action taken based on its contents.
For further queries or details, you may contact:
Ms. Kritika Krishnamurthy
Founding Partner
AK & Partners




