May 30, 2024

AI, Machine Learning & Big Data Laws and Regulations 2024

This India chapter in the 2024 edition of the book titled ‘AI, Machine Learning & Big Data’, published by Global Legal Insights (Global Legal Group), provides an overview of key market trends and legal and policy developments in AI, machine learning and big data laws and regulations in the Indian jurisdiction.

India has seen remarkable digital transformation in recent years, greatly impacting multiple sectors including healthcare, finance, e-commerce, education and the like.  This digitisation of the Indian economy has significantly augmented the demand for technologies such as Artificial Intelligence (“AI”) and Machine Learning (“ML”).

India’s AI market size is projected to reach USD 5.47BN by the end of 2024 and USD 14.72BN by 2030.[i]  Acknowledging this potential of AI/ML in transforming the economy, the Indian Government has shown active interest in the development, adoption and promotion of AI and ML tools/technologies across multiple sectors, envisioning AI as a ‘catalyst’ and a ‘kinetic enabler’ for India’s digital economy.[ii]  Multiple policy interventions have been introduced to achieve this objective, some of which are summarised below.


Presently, India does not have a legislative framework that expressly regulates the development and use of AI and ML tools/technologies.  It is expected that this sector will be governed by the Digital India Act, which may be released for public consultation by July 2024.  This law is expected to facilitate AI development by ‘safeguarding’ innovation in AI, ML and other emerging technologies.  The Government of India has indicated that while it will support monetisation of AI/ML technology in India, this process should be regulated by specific compliances for high-risk use cases, including human intervention and oversight, and ethical use of AI/ML tools and technology.[iii]

In the meantime, the Ministry of Electronics and Information Technology (“MeitY”) has issued advisories to ‘intermediaries’ and ‘platforms’ that develop and make available AI tools and/or technologies to Indian users, asking them to comply with additional requirements specific to AI tools, as part of the due diligence obligations imposed upon such ‘intermediaries’ under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules”), framed under the Information Technology Act, 2000.  While these advisories do not have a legislative backing, it appears that the private sector is working with the Government to address their concerns, to the extent feasible.

Advisory on deep fakes: On December 26, 2023, MeitY issued an advisory to all ‘intermediaries’ to address the growing concerns around misinformation powered by AI deepfakes.  This advisory urged social media platforms and other intermediaries to comply with the IT Rules, particularly regarding the identification and removal of prohibited content, including deepfakes that impersonate others or spread misleading information.[iv]

Advisory on the use of AI models/large language models (“LLMs”)/generative AI/software or algorithms: Subsequently, MeitY issued another advisory to ‘intermediaries’ and ‘platforms’, including ‘significant and large platforms’, on March 15, 2024 (an earlier version issued on March 1, 2024 was updated), recommending that they, inter alia: (i) ensure compliance with content-related regulations prescribed under the IT Rules in relation to the use of AI models/LLMs/generative AI/software/algorithms; (ii) ensure that the use of AI models/LLMs/generative AI/software/algorithms does not permit any bias or discrimination, or threaten the integrity of the electoral process; (iii) label the possible inherent fallibility or unreliability of the output generated from the AI models and implement a consent mechanism that explicitly informs users of the fact that the content is derived from an AI tool/technology; and (iv) ensure that any synthetic creation, generation or modification of text, audio, visual or audio-visual information that could potentially result in the creation of misinformation or deepfakes is labelled or embedded with permanent unique metadata/an identifier, such that the computer source and the user of such content can be identified.

This advisory is the first formal guidance issued by the Government of India relating to the use and deployment of AI models and tools, including generative AI and LLMs, in India.

Privacy Law aspects: The Indian Government has recently enacted a new data privacy law, the Digital Personal Data Protection Act, 2023 (“DPDP Act”).  The DPDP Act, among other things, imposes various obligations on data fiduciaries (persons who decide the purpose and means of processing personal data), including significant penalties (up to INR 250 crores) for personal data breaches.  Additionally, the DPDP Act prescribes specific consent requirements for processing personal data and prohibits behavioural monitoring, profiling of, and targeted advertisements involving children.  While the DPDP Act itself does not regulate AI, it will have indirect implications on the way AI systems are developed and deployed, particularly when they make use of personal data.


When asked about the Indian Government’s potential plans/policies for the use of AI, the Minister of State for MeitY, Mr. Rajeev Chandrasekhar, stressed the importance of ensuring safety and trust in AI for all citizens.  The Minister also spoke of the necessity of implementing rules and regulations that provide guardrails for ethical and safe use of AI.[v]

National Programme on AI and the National Strategy for AI (2018): As a part of India’s national programme on AI, NITI Aayog, India’s public policy think tank, was tasked with the responsibility of formulating policies and rules for the development of AI in India.  In 2018, NITI Aayog released the National Strategy for Artificial Intelligence #AIforAll (“NSAI 2018”),[vi] which focused on leveraging AI for social and inclusive growth in line with the Government of India’s projected AI roadmap.  The NSAI 2018 identified five sectors to benefit the most from AI: (i) healthcare; (ii) agriculture; (iii) education; (iv) smart cities and infrastructure; and (v) smart mobility and transportation.  The NSAI 2018 also launched ‘AIRAWAT’ (Artificial Intelligence Research, Analytics, and Knowledge Assimilation Platform) for promoting research and development of AI by facilitating collaboration among various stakeholders including academia, industry and Government agencies, to advance AI technologies and applications in India.  ‘AIRAWAT’ was recently ranked 75th in the top 500 global supercomputing list at the International Supercomputing Conference in Germany in 2023.[vii]

In February 2021, NITI Aayog published a set of principles outlining responsible AI practices.[viii]  These principles emphasise the importance of ensuring safe, reliable, fair, transparent, accountable and inclusive AI systems.  Recognising the potential societal impacts of AI technologies, these principles aim to guide policymakers, researchers and industry stakeholders in developing ethical and responsible AI solutions.  Building upon these foundational principles, NITI Aayog further operationalised responsible AI practices in August 2021 by releasing guidelines for integrating these principles into real-world AI applications.  These operationalising principles provided actionable steps and frameworks for incorporating ethical considerations and risk mitigation strategies throughout the AI development lifecycle, reinforcing India’s commitment to fostering AI innovation while safeguarding against potential harms and ensuring societal well-being.

Taskforce Report: The Ministry of Commerce and Industry also constituted a Task Force on Artificial Intelligence[ix] to submit a report on AI for the economic transformation of India.  The Task Force Report acknowledged that data is the bedrock of AI systems and that the reliability of AI systems depends primarily on the quantity and quality of data.  The Report assessed that it is crucial for AI systems, among other things, to:

  • have explainable and demonstrable behaviour;
  • have engineering for safety and security;
  • undergo an audit for non-contamination by human biases and prejudices; and
  • be transparent and comply with industrial standards.

The Task Force also recommended that legal provisions applicable to human users of AI systems should continue to apply, as relevant, to autonomous machines, and called for specific liability provisions to be worked out for certain categories of machines.

Draft National Data Governance Framework Policy (“NDGFP”): In May 2022, MeitY released the draft NDGFP[x] with an aim to capitalise on the full potential of digital governance by maximising data-led governance and data-based innovation.  Further, the policy also launched the non-personal data-based India Datasets programme, which outlined the methods and rules to be adopted by the Government and private entities to safely access non-personal data and anonymised data for research and innovation use cases.  Among other things, the NDGFP proposes to set up a Data Management Office responsible for framing, managing and periodically reviewing the policy, as well as designing and managing the India Datasets platform, which will process requests and provide access to non-personal and/or anonymised datasets.

India AI 2023 Expert Group Report by MeitY: The seven expert working groups set up by MeitY released the first edition of ‘IndiaAI’ in October 2023,[xi] which outlined comprehensive strategies for leveraging AI to propel India’s growth and development.  The report emphasises a holistic and ambitious approach, encompassing various aspects such as research, development, skilling, infrastructure and ethical considerations.  Key recommendations include:

  • Enhancing AI skill penetration: The report suggests ways to equip India’s workforce with the necessary AI skills through targeted programmes and training.
  • Strengthening AI compute infrastructure: It proposes public–private partnerships to bolster India as a destination for AI infrastructure and innovation.

By implementing these recommendations, India aims to become a global leader in responsible AI development and utilisation.

Complex Adaptive System (“CAS”) framework to regulate AI: The Economic Advisory Council to the Prime Minister of India (“EAC-PM”) recently proposed a unique approach for regulating AI through a CAS framework.[xii]  This CAS framework views AI as a dynamic, unpredictable system that cannot be regulated through traditional regulatory mechanisms, which typically rely on ex ante impact analysis and risk assessment.  The CAS framework proposed by the EAC-PM will work on five key principles: (i) establish guardrails/boundaries to ensure that AI technologies do not exceed their intended functions and to avoid a domino effect where a malfunction in one system cascades into a larger systemic failure; (ii) establish control through manual overrides to ensure human intervention when AI systems become unpredictable; (iii) ensure transparency by adopting open licensing for core algorithms, where external experts can conduct audits and assess AI systems for bias, privacy and security risks; (iv) ensure accountability by mandating standardised incident reporting protocols and establishing predefined liability protocols to ensure that entities or individuals are held accountable for AI-related malfunctions or unintended outcomes; and (v) set up a specialist regulator who can respond swiftly and ensure that governance remains proactive.  Overall, the CAS framework offers an adaptable and effective approach for governing AI in India.


Telecom sector: Recognising the transformative potential of AI, the Telecom Regulatory Authority of India (“TRAI”) issued recommendations in July 2023 to shape responsible adoption of AI within the telecom sector.[xiii]  TRAI emphasises the need for telecom service providers to invest in AI and ML-driven solutions for network optimisation, predictive maintenance and personalised services, thereby improving the efficiency and reliability of telecom infrastructure.

TRAI envisages the use of AI and ML, inter alia, for: (i) real-time network analysis and optimisation, which can help improve call quality and data speeds; (ii) predicting potential network issues and enabling preventive maintenance, thereby minimising service disruptions; (iii) offering personalised service based on individual user preferences and usage patterns; (iv) identifying and blocking spam calls and messages, thereby protecting users from unwanted communication and potential scams; and (v) analysing communication patterns to identify and prevent fraudulent activities associated with spam and phishing attempts.  These recommendations also emphasise the importance of building a conducive ecosystem for AI innovation by promoting collaboration between telecom operators, technology service providers and research institutions, to facilitate knowledge sharing and capacity building in the development and support of AI/ML applications.

Agriculture sector: The Indian Government has recognised the application of AI and ML in the agriculture sector, particularly in areas of precision farming, agricultural drones, cropping systems, livestock monitoring, monitoring of climate conditions, etc.[xiv]  Several Agri-Tech startups are developing AI-powered solutions for precision agriculture, supply chain management and market linkages.[xv]

Healthcare sector: The Indian Council of Medical Research has published guidelines that aim to tackle ethical concerns pertaining to the utilisation of AI in medical research and healthcare.  These guidelines are directed at technology companies, healthcare practitioners and research organisations who seek to utilise health data for medical research and facilitate healthcare delivery using AI technology.[xvi]

The Government has also launched programmes such as the National AI Portal for Healthcare, which serves as a central repository of AI-based healthcare applications, research and resources.  This initiative facilitates knowledge-sharing and capacity building among healthcare providers, researchers and technology developers.  Moreover, various Government-funded research institutions and academic centres are conducting research and development in AI-enabled healthcare technologies, focusing on areas such as medical imaging analysis, predictive analytics and telemedicine.

Education sector: The NSAI 2018 proposed several key initiatives for the education sector, such as leveraging AI for adaptive learning platforms that tailor content to individual student needs, and utilising AI-powered tutors and virtual assistants that can provide personalised feedback and support to students.  The Government has also established the National Educational Technology Forum (“NETF”), which aims to facilitate the integration of technology, including AI, into teaching and learning practices across all levels of education.  NETF serves as a platform for collaboration among policymakers, educators, researchers and technology developers to explore innovative AI-driven solutions that enhance educational access, quality and equity.

Finance sector: AI and ML can have multiple uses in the finance/fintech space, such as for customer due diligence, credit assessment, customer onboarding, underwriting and risk assessment, fraud mitigation and detection, etc.  In a speech delivered on December 22, 2023, the Reserve Bank of India (“RBI”) Deputy Governor, Shri Rajeshwar Rao, spoke about the potential of AI in the financial space, while also warning regulated entities such as banks and non-banking financial companies (“NBFCs”) of the risks and concerns associated with it.[xvii]  RBI is also working on developing AI and ML systems that can help improve its regulatory oversight of banks and NBFCs.[xviii]

Bureau of Indian Standards (“BIS”) Standards: India’s Standards-setting statutory body, the BIS, is working on formulating Indian Standards for the use of AI.[xix]  The BIS has also framed and notified standards for AI using ML and AI assessment of ML classification performance.[xx]  These standards have not yet been made mandatory.


As the lead chair of the Global Partnership on Artificial Intelligence (“GPAI”) for 2024, India hosted the GPAI Summit this year.  The Summit witnessed participation of 29 member countries and various international organisations, such as the United Nations Educational, Scientific and Cultural Organization, the World Economic Forum, the World Bank, etc., as well as experts in the field of AI, industry and start-up veterans, AI practitioners, academicians, students and officials from Central and State Governments.  Prime Minister Narendra Modi, during his inaugural speech, stressed each nation’s responsibility for the responsible development of AI.

As a part of the Summit, all 29 member countries unanimously adopted the GPAI New Delhi Declaration (“Declaration”), which acknowledged their commitment to work towards safe, secure and trustworthy AI, including, as appropriate, through the development of relevant regulations, policies, standards and other initiatives.  The Declaration also stressed the need to mitigate risks associated with misinformation and disinformation, unemployment, lack of transparency and fairness, protection of intellectual property (“IP”) and personal data, and threats to human rights and democratic values.  The member countries conveyed their support for India’s intentions to promote collaborative AI for global partnership.


[i]            Market Statistics available at –

[ii]           AI will be kinetic enabler of India’s Digital Economy, make Governance smarter and more Data-led: MoS Rajeev Chandrasekhar – Press Release dated April 14, 2023, available at –

[iii]           Digital India Dialogues held on September 3, 2023, available at –

[iv]          MeitY issues advisory to all intermediaries to comply with existing IT rules – PIB Release, available at –

[v]           Available at –

[vi]          National Strategy for Artificial Intelligence, 2018, available at –

[vii]          Available at –

[viii]         Approach Document for India Part 1 – Principles for Responsible AI, available at –

[ix]          Available at –

[x]           Available at –

[xi]          IndiaAI 2023: Expert Group Report – First Edition, available at –

[xii]          Available at –

[xiii]         TRAI Recommendations on Leveraging Artificial Intelligence and Big Data in Telecommunication Sector, available at –

[xiv]         Internet of Things and Artificial Intelligence in Agriculture – PIB Release, available at –

[xv]          Available at –

[xvi]         Available at –

[xvii]         Innovations in Banking – The emerging role for Technology and AI (Remarks delivered virtually by Shri M. Rajeshwar Rao, Deputy Governor, Reserve Bank of India – December 22, 2023 – at the 106th Annual Conference of Indian Economic Association in Delhi), available at

[xviii]        RBI selects McKinsey and Company, Accenture Solutions to use AI, ML to improve regulatory supervision, available at –

[xix]         Available at –

[xx]          Available at –


In India, the relevant statutory framework that could create legal rights (i.e. IP rights) over an AI algorithm or the output generated from AI algorithms is envisaged under the Patents Act, 1970 (“Patents Act”) and the Copyright Act, 1957 (“Copyright Act”).

The Patents Act permits patenting of any ‘invention’ that is capable of industrial application and has the following essential elements:

  1. it is a technical advancement over the existing knowledge or has an economic significance, or both;
  2. it should not be obvious to a person skilled in the art; and
  3. it must have characters of novelty, non-obviousness and enablement.[xxi]

As per the Patents Act, ‘a mathematical or business method or a computer programme per se or algorithms’ are not ‘inventions’.  The phrase ‘per se’ leaves open the possibility that software can be patented, provided it contains all the elements of an invention discussed at points 1 to 3 above.[xxii]  Similarly, if the output from the AI algorithm is to be protected by a patent, such output will also need to satisfy the essential elements of an ‘invention’.  There have been successful patent applications for AI-based software inventions in recent years, and guidance in this regard has been provided by the Patent Office from time to time.

Further to the above, the autonomous capacity of an AI system to create ‘inventions’ without direct human involvement may complicate the process of obtaining patents for AI-based innovations in India, since the application process may require demonstration of human ingenuity.  For instance, under the Patents Act, an application for a patent can be made by a ‘person’ who is the ‘true and first inventor’ or the assignee of the person claiming to be the ‘true and first inventor’ or by the legal representative of the person who is entitled to make such an application.  Even the definitions of a ‘Patentee’[xxiii] and ‘true and first inventor’[xxiv] include references to a ‘person’.  It accordingly appears that the Patents Act presently necessitates human involvement or a human inventor for an invention to be deemed eligible for a patent.  However, the Parliamentary Standing Committee, in its report titled ‘Review of the Intellectual Property Rights Regime in India’, has observed that: ‘…the condition to have a human inventor for innovating computer related inventions (innovations by AI and machine learning) hinders the patenting of AI induced innovations in India.  Therefore, there is a need to review the provisions of both the legislations on a priority basis.’

It is safe to presume that there is presently insufficient clarity on whether an AI algorithm that generates an output, or the originator of that algorithm, can be recognised as the owner of a patent over such output under the Patents Act.[xxv]


The Copyright Act grants copyright protection, inter alia, to a literary work, which is defined to include computer programs.  The term computer program is broadly defined and is likely to include the source code of an AI algorithm.  However, to be eligible for copyright protection, such source code must meet the following criteria:

  1. firstly, it must be original, which means it must originate from the author; and
  2. secondly, the work must have a minimum level of creativity, rather than being solely the result of skill and labour.[xxvi]

Similarly, for securing copyright over the output created by an AI algorithm, the output needs to satisfy the essential elements stated above.

In India, it is possible for AI software/algorithms to obtain copyright protection under the Copyright Act, as computer programs are eligible for such protection.  Under the Copyright Act, the author of the work is recognised as the first owner of the copyright.  The term ‘author’ is defined in the context of computer-generated literary work as the ‘person’ who causes the work to be created.  The courts in India have interpreted the reference to ‘person’ under the Copyright Act to mean a ‘natural person’.[xxvii]

On the other hand, in line with developments on this issue in other jurisdictions, it is not possible to take a conclusive position on whether an AI-generated output will satisfy the test of originality mandated under the Copyright Act.  Many commonly used AI tools, particularly generative AI applications, process information available in the public domain to create content, and the resulting output may infringe third-party copyright or closely mimic pre-existing works.  In such cases, the output generated by AI applications may not meet the criteria of originality and/or the minimum level of ‘creativity’ necessary for copyright protection.  The fact that AI-generated outputs are not created by a ‘natural person’ and are unable to meet the ‘author’ standard prescribed under the Copyright Act will also make it challenging to register such computer programs for copyright protection.

Accordingly, akin to the Patents Act, the Copyright Act cannot presently grant legal protection to the output created by an AI algorithm, if the process is devoid of a human intervention.  The 161st Parliamentary Standing Committee Report also concluded that the Patents Act and the Copyright Act lack the necessary provisions to effectively support authorship and ownership by AI.[xxviii]

In view of the above, there is currently no certainty, nor any reliable example, of AI-generated material securing adequate protection under the IP laws in India.  This makes it necessary for appropriate legislative measures to be undertaken to align the IP rights regime with the ownership/proprietary nuances specific to the AI sector, so that the growth of the sector can be ensured.


AI applications rely on multiple datasets to train their models.  Some of this data may be considered a ‘trade secret’ and be entitled to protection under common law as well as the Copyright Act.

While there is no dedicated law in India that grants protection for trade secrets, and the term lacks a formal definition, trade secrets are commonly understood as non-publicly available information that has commercial value, and for which the rights holder has taken reasonable steps to protect – such as formulae, patterns, compilations, programs, devices, methods, techniques or processes.  Typically, such data is shared under a confidentiality agreement or is subject to confidentiality obligations.  Examples of trade secrets include client lists, technical drawings, etc.  Any unauthorised use of trade secrets by a third party entitles the rights holder to remedies under the Copyright Act, contract law, as well as the common law applicable in India.  A fact-specific assessment of whether a given category of data qualifies as a trade secret is therefore important.


[xxi]         Mariappan v. A.R. Safiullah, (2008) 5 CTC 97; and FAQ 6, Page 2, available at –

[xxii]         Can Artificial Intelligence (AI) Machine be Granted Inventorship in India? – Journal of Intellectual Property Rights, available at –

[xxiii]        Section 2(1)(p) of the Patents Act, 1970.

[xxiv]        Section 2(1)(y) of the Patents Act, 1970.

[xxv]        Report available at –

[xxvi]        Eastern Book Company v. D.B. Modak, (2002) PTC 641.

[xxvii]       Tech Plus Media Private Ltd. v. Jyoti Jand, (2014) 60 PTC 121, Navigators Logistics Ltd. v. Kashif Qureshi & Ors, 254 (2018) DLT 307.

[xxviii]       Ref. Page 30 of the Report available at –

AI-driven technologies are not only redefining market dynamics but also raising complex techno-legal and regulatory challenges.  For instance, when AI-powered systems independently interact and exchange information, there is a risk of these machines coordinating strategies, leading to anti-competitive practices like self-preferencing, predatory pricing, rebates, tying and bundling, excessive pricing, unfair trading conditions or price discrimination.  Among these, the most significant use of AI has been in developing pricing algorithms that observe surges in sales at different pricing events and accordingly devise a pricing strategy for organisations to adopt.  Furthermore, AI pricing algorithms of organisations operating in the same market can collude by devising a pricing strategy that is based on competitor pricing, which effectively could result in a situation where market factors direct the pricing of competing products to be the same.  These organisations can, in other words, achieve the effect of a horizontal agreement without sharing any information with each other.[xxix]  Such practices have caught the interest of the anti-trust regulator in India.  In January 2014, the Competition Commission of India (“CCI”) investigated allegations of collusion by airlines that had implemented a pricing algorithm to determine the pricing of tickets.[xxx]

Since then, the surge in AI development has prompted a closer examination of competition and antitrust laws, and how they can be applied to the ongoing practices.  To adapt the regulatory landscape to the effect of AI technologies, CCI is actively assessing the impact of AI on market dynamics and potential anti-trust concerns stemming from data access, algorithmic biases and the dominance of AI-driven companies.[xxxi]

In addition to the above initiatives, the Ministry of Corporate Affairs has constituted a Committee on Digital Competition Law (“CDCL”), which has been tasked to examine the need for a separate law to regulate the competition in digital markets, and to effectively deal with challenges that are specific to the digital economy.  CDCL issued a draft report on February 27, 2024, wherein CDCL has recommended, inter alia, the introduction of a Digital Competition Act (“DCA”), which will be an ex ante legislation and is proposed to be applicable to large digital enterprises.  The objective of the DCA will be to prescribe measures to proactively monitor the conduct of large digital enterprises to ensure intervention by the regulator before anti-competitive conduct transpires.[xxxii]  The approach, if finalised, may be similar to the Digital Markets Act in the European Union.

In this age of digitisation, organisations can access multiple sources to collect large volumes of diverse data, ranging from consumer behaviour to the pricing of goods.  This diverse collation of data – Big Data – is being monetised to develop strategies for business growth and customer engagement.  Expectedly, due to their existing presence in the market, dominant enterprises are at an advantage as they have an abundance of such Big Data at their disposal, which they can rely upon to disrupt a new entrant in the market.

Additionally, any organisation having Big Data can analyse the demand and supply of goods or services to influence the pricing of the products.  Section 4 of the Competition Act, 2002, prohibits enterprises or groups from abusing a dominant position by limiting or restricting supply of goods or services, which could be extended to activities occurring in the digital economy.  CCI is evaluating the market position of big tech companies and how they impact the competition in the market.[xxxiii]


[xxix]        AI and its Effects on Competition, blog available at –

[xxx]        Article available at –

[xxxi]        Market Statistics available at –

[xxxii]       Available at –

[xxxiii]       News report available at –

A company acts through its Board of Directors (“BoD”), as the management and governance of the company is vested in its BoD.  While Indian laws on corporate governance do not prohibit AI from assisting in decision-making functions of the BoD, whether AI can assume the role of the BoD and perform their duties is something that can be determined in the context of the fiduciary duties imposed upon directors in charge of running the affairs of the company.

The (Indian) Companies Act, 2013, contemplates that a director appointed to the BoD will be a natural person.  The fiduciary responsibilities of a director of a company include: (i) acting in good faith to promote the objects of the company for the benefit of its members and in the best interests of its stakeholders; (ii) exercising duties with due and reasonable care, skill and diligence while exercising independent judgment; (iii) not being involved in a situation in which he may have a direct or indirect interest that conflicts, or possibly may conflict, with the interest of the company; and (iv) not achieving or attempting to achieve any undue gain or advantage, either for himself or for his relatives, partners or associates.[xxxiv]

It is unlikely that AI and ML will completely replace the BoD in the foreseeable future.  While AI and ML technologies have demonstrated remarkable capabilities in data analysis, pattern recognition and predictive modelling, and can assist the BoD in making important decisions, they lack the understanding, ethical judgment and strategic vision that can be found in human board members.  Moreover, the idea of replacing human board members with AI and ML raises significant ethical, legal and societal concerns.  Algorithms are influenced by the datasets on which they are trained, which can perpetuate biases and lead to unfair outcomes.  Additionally, delegating crucial decisions to AI systems could undermine accountability and transparency, potentially eroding stakeholder trust.  That said, AI and ML systems, when used appropriately, can act as good support functionaries to the BoD.  It is also important that companies implement these tools with the help of robust risk-management frameworks that include AI adoption policies, stakeholder roles, and accountability and oversight mechanisms.


[xxxiv]       Section 166 of Companies Act, 2013.




These are the views and opinions of the author(s) and do not necessarily reflect the views of the Firm. This article is intended for general information only and does not constitute legal or other advice and you acknowledge that there is no relationship (implied, legal or fiduciary) between you and the author/AZB. AZB does not claim that the article's content or information is accurate, correct or complete, and disclaims all liability for any loss or damage caused through error or omission.