Nov 13, 2025

MeitY releases Guidelines on AI Governance – The Way Ahead and Roadmap for AI use in India

A. Background and Context

The Ministry of Electronics and Information Technology (“MeitY”) released the India AI Governance Guidelines on November 5, 2025 (“Guidelines”), providing a framework to advance the India AI Mission.

The Guidelines build on the conceptual foundation laid out in MeitY’s Report on AI Governance Guidelines Development (January 2025) (“Report”), which outlined a principle-based approach to AI governance. These Guidelines aim to operationalize the recommendations of the Report and advance India’s AI policy vision – ‘AI for All’, with a combination of institutional design, voluntary governance measures and techno-legal instruments.

The Guidelines emphasize governing the applications of AI rather than the underlying technology itself. Recognizing AI as a general-purpose technology with transformative potential, but one that also poses significant risks such as misinformation, bias and national security threats, the Guidelines aim to balance innovation with accountability and safety, without imposing compliance-heavy regulation.

B. Structure of the Guidelines

The Guidelines are structured in four parts, each addressing a distinct aspect of AI governance:

  • key principles,
  • key recommendations,
  • graded action plan, and
  • practical guidelines for industry and regulators.

Part I: Key Principles of India’s AI Governance Framework (“Seven Sutras”)

The Guidelines articulate seven key principles or ‘Sutras’, adapted from the Reserve Bank of India’s Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI), which are described as forming the ethical and foundational basis for India’s AI governance model:

  • Trust is the Foundation: Trust must be embedded across the value chain, i.e., the underlying technology, stakeholders and users, to achieve common good at scale.
  • People First: Human-centric design, oversight and empowerment to ensure accountability and safety as well as human capacity development.
  • Innovation over Restraint: Prioritising responsible innovation over cautionary restraint, aiming to maximize overall socio-economic benefit while reducing potential harm.
  • Fairness and Equity: Design and test AI systems so that outcomes are fair, non-exclusionary, unbiased and non-discriminatory, including for marginalized communities.
  • Accountability: Clear attribution of accountability for AI developers and deployers based on their role and risk of harm, to be ensured through policy, technical and market led mechanisms.
  • Understandable by Design: AI systems should be explainable and interpretable to the extent feasible, to help regulators and users understand how the system works, its impact on users and the outcomes intended by deployers. Understandability is key to building trust.
  • Safety, Resilience & Sustainability: AI systems should be designed to minimize risk of harm, be environmentally responsible (for instance, by encouraging the adoption of lightweight AI models), and be resilient. They should detect anomalies and provide early warnings to limit harm.

These principles are intended to be adopted across all sectors, remain technology-neutral, and form a uniform baseline for responsible and consistent AI development and deployment.

Part II: Issues and Recommendations

The Guidelines outline six governance pillars that should form the foundation of India’s AI governance model, and provide detailed recommendations under each category:

(a) Infrastructure

The Guidelines recognize that one of the primary and immediate goals of the India AI Mission is building infrastructural capacity for the large-scale adoption of AI. They also note that while AI adoption is relatively advanced in sectors like telecom, media and pharmaceuticals, more targeted intervention is needed to increase adoption in sectors like agriculture, education, healthcare and public services. To address the need for greater deployment, the Guidelines recommend that India should:

  • expand access to local and high-quality datasets (as is being done through AIKosh and the Open Government Data Platform) and offer incentives to entities that contribute to the Government of India’s measures in this area, which will help ensure that AI adoption in India is culturally representative and relevant;
  • provide affordable and reliable access to computing resources (e.g., the over 38,231 GPUs allocated by the Government of India to start-ups and researchers at subsidized rates) in the form of incentives, financing support, tax breaks, AI-linked loans, and tailored starter-pack AI products for specific industries, with all ministries and sectoral regulators coordinating in this direction; and
  • leverage and monetize the benefits of digital public infrastructure (“DPI”), including identity databases (DigiLocker), authentication options (Aadhaar), data exchanges, and payments systems (UPI), which can be used to launch AI solutions that are scalable, affordable and specific to local needs, resulting in wider adoption of AI across sectors.

The Guidelines recommend that all ministries, sectoral regulators and State Governments work towards these goals in a coordinated manner.

(b) Capacity Building

Recognizing the knowledge and capability gap among regulators, civil servants and smaller enterprises, the Guidelines call for comprehensive capacity-building initiatives across these stakeholders. They recommend training on AI risk management, ethical deployment and responsible procurement for Government officials and regulators, along with specialized programs for the detection, investigation and tracking of AI-enabled offences by law enforcement agencies. The Guidelines also emphasize the need to extend such training initiatives beyond major urban centres, encouraging deeper AI adoption and skill development in Tier-2 and Tier-3 cities to ensure the equitable distribution of the benefits of AI.

(c) Policy and Regulation

The Guidelines observe that India’s existing legal framework provides a substantial foundation for AI deployment through statutes such as the Information Technology Act, 2000 (“IT Act”), the Digital Personal Data Protection Act, 2023 (“DPDPA”), the Consumer Protection Act, 2019 (“CPA”), and the Copyright Act, 1957 (“Copyright Act”), supported by sectoral regulations issued by authorities such as the RBI, SEBI and ICMR (a summary of these laws and regulations is provided in Annexure 3 of the Guidelines). They note that these laws collectively address many risks associated with AI, including data misuse, AI-generated deepfakes and impersonation, copyright infringement, consumer harm and unfair trade practices.

The Guidelines further acknowledge the need for targeted refinements to existing laws to address emerging challenges from generative AI, autonomous systems, agentic AI, and the complexity of the AI value chain. Instead of proposing a standalone AI statute, they call for focused amendments to existing laws to promote innovation and ensure regulatory clarity. The Guidelines note that several of these laws, such as the Copyright Act, are presently being examined to address implications arising from the use of AI, and, without stating any specific position, advise the industry to wait for such deliberations to conclude.

Specifically, the Guidelines highlight the following key areas for future legal and regulatory development.

  • Copyright and Text-and-Data Mining (TDM): The Guidelines recognize that most AI training methodologies currently in the market may not fall within the ‘fair dealing’ exception under Section 52 of the Copyright Act and are likely to amount to infringement of content creators’ copyright. However, to support the practical need for AI development and deployment in India, the Guidelines propose exploring a TDM exception (as adopted, in varying forms, in the EU, Japan, Singapore and the UK), provided such concepts are balanced against the equivalent need to protect the rights of copyright holders.
  • Content Authentication and Provenance: The Guidelines acknowledge that while the scope for creative expression through generative AI is underexplored and should be given impetus, there is also a need to regulate the deepfakes and unlawful material that such technology can generate at scale, often at the risk of harming vulnerable groups like children and women. To address misuse of AI, the Guidelines propose content authentication and provenance as the way forward. They cite techniques such as watermarking or similar unique identifiers (aligned with existing industry standards such as C2PA[1]), forensic tools and methods of attribution (e.g., dataset provenance tools and determining whether AI content originated from a specific AI model), while acknowledging that the limitations of these techniques also need to be examined (a simplified sketch of the provenance idea appears after this list). The Guidelines recommend establishing an expert committee (comprising members from industry, Government, academia and standard-setting bodies) tasked with developing global standards for content authentication and provenance. These measures should then be applied through standard-setting bodies and tested rigorously. The Guidelines also advise that the AI-specific bodies yet to be set up, namely the AI Governance Group (“AIGG”) with support from the Technology & Policy Expert Committee (“TPEC”), provide specific recommendations to MeitY to counter misinformation and deepfakes.

This recommendation is interesting, given that MeitY has recently released draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (open for public comments till November 13, 2025), wherein one of the suggestions is to require significant social media intermediaries (“SSMIs”) to establish a ‘Know Your Content’ practice for AI-generated content on their platforms, effectively placing a provenance-verification obligation on such entities. The draft further requires SSMIs to adopt reasonable and appropriate technical measures in this regard, but does not specify which techniques will meet the due diligence threshold proposed under the law. Given that the Guidelines now recommend an expert committee to suggest best standards for content authentication and provenance, this is a timely opportunity for the industry to (i) put forward its recommendations on best practices and internationally proven technical measures for this purpose; and (ii) gain certainty on the technical measures that will satisfy the regulator in the event of enforcement action. It would be helpful for the industry if the proposed draft amendments are finalized only after the recommendations of this expert committee are released.

  • Platform Classification and Liability: The Guidelines propose updating the IT Act to address the roles of AI developers, deployers and users; clarifying the scope of safe harbour and due diligence in relation to generative and adaptive systems; and allocating liability across the value chain proportionate to function and risk. As we have been highlighting in our past publications and recommendations to MeitY, most AI developers are unlikely to be classifiable as ‘intermediaries’ under the IT Act, as they do not ‘on behalf of another person receive, store or transmit an electronic record or provide any service with respect to such record’, but instead generate outputs based on user prompts, or even autonomously, and continue to refine their outputs. This necessitates an updated classification of digital entities in the context of AI systems, and a corresponding attribution of liability.
  • Data Protection: The Guidelines recommend that greater clarity be brought under the DPDPA, including on the compatibility of the consent and purpose-limitation principles of data privacy with how AI systems operate; an explanation of the scope of consent exemptions under the DPDPA, such as those for research, publicly available data and “legitimate use”, and the extent to which AI development efforts may rely on such exemptions; and recognition of the value of dynamic and contextual notices in a world of multi-modal AI and ambient computing.
  • Common and sectoral benchmarks: Lastly, the Guidelines propose developing common standards and benchmarks on cybersecurity, fairness, and data integrity; establishing regulatory sandboxes for supervised innovation; and fostering international engagement to shape global standards.
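To make the content-authentication discussion above more concrete, the following is a minimal sketch, in Python, of the core idea behind provenance manifests: binding a cryptographic hash of the content to signed claims about its origin. It is not an implementation of C2PA or of any standard the proposed expert committee might endorse; the key handling, field names and HMAC-based signature are simplifying assumptions (real content-credential schemes use certificate-based asymmetric signatures and embed the manifest in the media file itself).

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical symmetric signing key held by the publisher. Real content-credential
# schemes such as C2PA use certificate-based asymmetric signatures instead.
SIGNING_KEY = b"publisher-demo-key"


def build_provenance_manifest(content: bytes, generator: str) -> dict:
    """Bind a hash of the content to signed claims about how it was produced."""
    claims = {
        "content_sha256": hashlib.sha256(content).hexdigest(),  # tamper-evident link
        "generator": generator,                                 # e.g., the AI model used
        "created_at": datetime.now(timezone.utc).isoformat(),
        "ai_generated": True,                                   # disclosure flag
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}


def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check the signature and that the content has not changed since signing."""
    payload = json.dumps(manifest["claims"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    unaltered = hashlib.sha256(content).hexdigest() == manifest["claims"]["content_sha256"]
    return hmac.compare_digest(expected, manifest["signature"]) and unaltered


if __name__ == "__main__":
    content = b"<AI-generated image bytes>"
    manifest = build_provenance_manifest(content, generator="hypothetical-model-v1")
    print(verify_manifest(content, manifest))         # True
    print(verify_manifest(content + b"x", manifest))  # False: tampering detected
```

Any edit to the content breaks the hash check, which is what gives provenance manifests their tamper-evidence; the harder problems the expert committee would face, such as key distribution, metadata stripping and interoperability across platforms, sit outside this toy example.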

(d) Risk Mitigation

Consistent with the Report, the Guidelines recognize that AI systems, being probabilistic and adaptive, can amplify existing harms or generate new ones, necessitating a structured, evidence-based approach to risk assessment and management of AI technology, tailored to India’s socio-economic context.

The Guidelines identify seven areas of risk: (i) malicious use, (ii) bias and discrimination, (iii) transparency failures, (iv) systemic risk, (v) loss of control, (vi) national security threats, and (vii) risks to vulnerable groups.

As also stated under the Report, the Guidelines propose the following measures to address risk:

  • setting up a national AI incidents database, supported by local AI incident databases managed by sectoral regulators, provided that each database follows structured data-interoperability and collection standards. Existing incident reporting with CERT-In should be leveraged for assessing AI system vulnerabilities. While this database should reflect classified threat intelligence, the Guidelines require that such reports be shared and stored securely. The data in this database should later be reviewed by law enforcement agencies, the AI Safety Institute (“AISI”) and the TPEC to assess how existing threats can be analysed to develop better risk-preparation frameworks, especially for critical infrastructure like telecom, energy grids and nuclear facilities. Industry, Government and sectoral regulators should be encouraged to contribute to the national AI incident database without any threat of consequences.
  • developing voluntary frameworks (principles, standards, self-certifications and audits) alongside techno-legal solutions, such as privacy-preserving architectures, algorithmic auditing, watermarking and consent-based data-sharing models like “DEPA for AI Training”, to embed accountability and safety into system design. The Guidelines indicate that in the long term, such voluntary frameworks may transition into binding baseline mandates issued by sectoral regulators. They stress that any framework, even if voluntary, should be risk-based: low-risk AI uses may be subject to basic commitments, as opposed to high-risk uses in sectors such as finance and health. Further, the industry may be incentivized to adopt such voluntary measures.
  • adopting human-in-the-loop and/or system-level safeguards (like audit trails and monitoring) for loss-of-control risks arising from high-velocity AI use cases (such as trading), particularly in critical sectors (a simplified sketch of such a safeguard follows this list).
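As an illustration of the human-in-the-loop and audit-trail safeguards referred to in the last bullet, below is a minimal sketch assuming a hypothetical deployer of a high-velocity AI system; the risk threshold, model names and console-based review step are all illustrative stand-ins, not anything prescribed by the Guidelines.

```python
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Append-only audit trail of every automated decision, in the spirit of the
# system-level safeguards the Guidelines describe for high-velocity use cases.
logging.basicConfig(filename="ai_audit_trail.log", level=logging.INFO, format="%(message)s")

RISK_THRESHOLD = 0.8  # hypothetical cut-off above which a human must review


@dataclass
class AIDecision:
    action: str        # e.g., "execute_trade"
    risk_score: float  # assumed to come from the model or a separate risk engine
    model_id: str


def request_human_approval(decision: AIDecision) -> bool:
    """Stand-in for a real review queue; a console prompt is used for illustration."""
    answer = input(f"Approve '{decision.action}' (risk {decision.risk_score:.2f})? [y/N] ")
    return answer.strip().lower() == "y"


def execute_with_oversight(decision: AIDecision) -> bool:
    """Auto-approve low-risk actions; route high-risk ones to a human reviewer."""
    needs_review = decision.risk_score >= RISK_THRESHOLD
    approved = request_human_approval(decision) if needs_review else True
    logging.info(json.dumps({  # every decision, approved or not, leaves a trace
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": asdict(decision),
        "human_reviewed": needs_review,
        "approved": approved,
    }))
    return approved


if __name__ == "__main__":
    execute_with_oversight(AIDecision("execute_trade", risk_score=0.91, model_id="demo-model"))
```

The design point is that low-risk actions proceed automatically while high-risk ones are held for human sign-off, and every outcome is written to a structured log that could later feed an AI incidents database of the kind described in the first bullet above.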

(e) Accountability

The Guidelines recommend adopting a graded liability framework proportionate to each AI stakeholder’s role and level of risk, supported by a mix of organizational and regulatory mechanisms, such as transparency reports, self-certifications, internal policies, committee hearings, peer monitoring and techno-legal measures, that can support voluntary measures and foster a culture of accountability in the industry. Entities deploying AI are encouraged to establish accessible, multilingual and responsive grievance-redressal mechanisms with feedback loops for product improvement. The Guidelines emphasize transparency across the AI value chain to facilitate effective oversight, so we can expect encouragement of voluntary disclosures from the industry on their underlying technology, the interplay between different actors, and the flow of resources (data, compute) within their products. Lastly, the Guidelines appear to diverge from the RBI’s tolerant stance on ‘one-off’ aberrations, stressing that while individual regulators may adopt their own stance, as a general principle the rule of law should be paramount in the event of risk incidents.

The Guidelines recommend starting with voluntary commitments, to be followed by binding mandates, and indicate that MeitY will follow up with a compliance schedule covering these aspects within the next 9-12 months.

(f) Institutions

The Guidelines build on the institutional architecture first proposed under the Report, which had called for a ‘whole-of-government approach’ to AI oversight. Accordingly, the Guidelines propose the creation of the AIGG as a high-level inter-ministerial body responsible for policy direction and harmonisation across ministries and regulators, supported by the TPEC, which will provide technical and strategic expertise to AIGG. The Guidelines also formalize the establishment of the AISI, also envisioned in the Report, as the operational arm for AI safety testing, standard-setting and international cooperation.

Sectoral regulators are expected to continue their domain‑specific enforcement, harm assessment, and standard‑setting, while MeitY will operate as the nodal ministry responsible for AI adoption and regulation, in collaboration with CERT‑In and other agencies.

Part III: Graded Action Plan

As under the Report, the Guidelines propose a phased, capacity-first approach to implementing AI governance in India. The Action Plan sets out short-, medium- and long-term horizons.

In the short term, the Guidelines recommend focusing on foundational readiness by (i) establishing the AIGG and the TPEC and resourcing the AISI; (ii) developing an India-specific risk classification framework; (iii) conducting a legal gap analysis to align existing laws with AI use cases; and (iv) adopting voluntary frameworks. Foundational measures in this period also include expanding access to data and compute, advancing safe-and-trusted tools and launching public awareness programmes.

In the medium term, the Guidelines recommend (i) publishing common technical and ethical standards for cybersecurity, data security and content authentication; (ii) operationalizing the ‘AI incidents’ database with local reporting and feedback loops; (iii) launching regulatory sandboxes; (iv) amending laws to address regulatory gaps; and (v) integrating AI with DPI to ensure scalable and interoperable deployment.

In the long term, the Guidelines recommend (i) monitoring the AI governance framework they propose; (ii) adopting new laws to address emerging risks; (iii) conducting periodic policy reviews; (iv) strengthening India’s participation in global AI standards forums to consolidate its leadership in responsible AI governance and help establish AI standards; and (v) carrying out horizon scanning to prepare for emerging risks.

Part IV: Practical Guidance for Industry and Regulators

The Guidelines provide non-binding guidance aimed at both industry participants (such as developers and deployers) and regulatory bodies. For industry, they call for compliance with applicable Indian laws, adoption of voluntary frameworks, publication of transparency reports, establishment of grievance-redressal mechanisms and use of techno-legal tools such as privacy-preserving and bias-mitigation technologies within their products. For regulators, they recommend adopting flexible, proportionate and innovation-friendly governance approaches, avoiding compliance-heavy regimes and encouraging the use of technology-enabled oversight mechanisms.

C. Concluding Thoughts

The Guidelines mark a shift from principle to practice, albeit a measured one.

The Guidelines reflect the Government of India’s intent to first allow AI to advance in India through a principles-, evidence- and harm-based approach, examining and updating existing laws and ensuring that the Government has access to relevant information, and to intervene later with firmer guidance depending on the risks arising from the development and deployment of AI in India.

Endnote:

[1] Coalition for Content Provenance and Authenticity

DISCLAIMER

These are the views and opinions of the author(s) and do not necessarily reflect the views of the Firm. This article is intended for general information only and does not constitute legal or other advice and you acknowledge that there is no relationship (implied, legal or fiduciary) between you and the author/AZB. AZB does not claim that the article's content or information is accurate, correct or complete, and disclaims all liability for any loss or damage caused through error or omission.