“Hey Agent, can you plan a birthday party for me?”
In the world of agentic commerce, this single instruction could set in motion a series of decisions and transactions. The AI agent may query the user’s calendar and contact list, shortlist and book a venue, arrange catering, order a cake, book a DJ, send invitations to contacts, collect their dietary preferences, and process payments across multiple merchants/vendors.
Based on the user’s personal data, the party gets planned. But one question remains: across this cascade of automated actions, what exactly did the user consent to, and how far does that consent extend? What appears to be a single act of intent unfolds into a complex web of personal data processing involving several independent parties.
India’s New Data Protection Regime Meets the New Agentic Architecture
India’s new data protection regime consists of the Digital Personal Data Protection Act, 2023 (“DPDP Act”), read with the Digital Personal Data Protection Rules, 2025 (collectively, “DPDP Laws”). Under the DPDP Act, the personal data of an individual (called the data principal) can be processed on two primary grounds: (a) consent; or (b) one of nine specified legitimate uses. Unlike other jurisdictions, broader legal bases such as legitimate interest or performance of a contract are not available, and the legitimate uses recognized under the DPDP Act are narrow in scope.
In an agentic commerce flow, therefore, consent is likely to be the most appropriate legal basis for processing personal data for most use cases. Under the DPDP Act, consent from a data principal must be free, specific, informed, unconditional, and unambiguous, and must be given through a clear affirmative action. The architecture of agentic commerce complicates the scope, nature, and quality of consent needed to meet these high standards.
In conventional e-commerce models, users initiate and make a choice at each step: they search, discover, select, place the order, and finally make the purchase. In agentic systems, the user initiates once, and the system executes a sequence of actions autonomously across time, platforms, and parties. The complexity deepens in multi-agent environments, where the user’s AI agent may interact with AI agents deployed by merchants, logistics providers, and payment platforms. Yet, legally, these ‘agents’ are not agents in the juridical sense: they cannot consent or bear obligations. Responsibility rests with the data fiduciaries, that is, the merchants or service providers deploying them.
Returning to the birthday party example, each merchant involved – the venue provider, the caterer, the payment processor – may determine its own purposes and means of processing personal data. Each may therefore qualify as an independent data fiduciary, independently responsible for meeting its compliance obligations under the DPDP Laws.
This article explores the tension between the standard of consent envisioned under the DPDP Laws and the realities of implementing an agentic commerce architecture.
The DPDP Standard: What Counts as Consent in Law
As noted above, the DPDP Act sets out the conditions for valid consent: consent given by the data principal must be free, specific, informed, unconditional, and unambiguous, and must be expressed through a clear affirmative action. These requirements reflect a substantive model of the consenting individual, one who is present, deliberate, and specifically authorising a defined purpose for the processing of their personal data.
Each element places distinct pressures on the agentic model.
- Specific consent means consent for a specified purpose. A data fiduciary should give notice identifying what personal data will be processed, and for what purpose, before or at the point of seeking consent. An agent mandate – “plan my birthday party” or “manage my travel” – is, by design, open-ended. The purposes of processing may not always be determinable at the point of instruction or authorisation; they may be determined by the AI system as it executes, in response to conditions that did not exist when the instruction was issued. Consent to an unspecified set of future purposes is difficult to characterise as specific in the terms of the DPDP Act.
- Informed consent compounds the problem. It requires the data principal to understand what personal data will be processed, for what specific purposes, and how their rights may be exercised. In agentic systems, personal data categories may emerge at runtime. An AI agent collecting a guest’s dietary preference, for example, differs meaningfully from one inferring a budget ceiling from transaction history, and neither may have been disclosed upfront. Purposes also expand as execution unfolds: location data used to complete a booking serves a different purpose than the same data retained for analytics. Downstream data fiduciaries may not even be identifiable at the point of instruction or authorisation. By the time a transaction concludes, personal data may have passed through several independent data fiduciaries, making practical exercise of a data principal’s rights of access, correction, or erasure uncertain.
- Free consent is also a threshold requirement. The DPDP Act provides that consent should be limited to such personal data as is necessary for the specified purpose. If a data fiduciary seeks consent to process personal data beyond what is necessary for providing a product or service, consent for that additional, non-necessary purpose cannot be bundled with the original consent (given for the necessary purpose). In the agentic context, this cuts in two directions. First, at the outset: if an agent platform conditions access to its core functionality on the user’s consent for behavioural profiling, preference inference, or data sharing with downstream commercial partners – purposes arguably not necessary to execute the user’s instruction – such consent may be considered inconsistent with the DPDP Act. Second, as the agent executes: each downstream data fiduciary that conditions its service on consent for processing beyond what the immediate transaction requires (a caterer retaining guest dietary data for marketing, or a booking platform harvesting location history for analytics) compounds the same concern further down the chain.
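The unbundling requirement above can be expressed as a simple design constraint in a consent-collection flow. The sketch below (all names hypothetical, not drawn from any real platform) records consent so that declining an optional, non-necessary purpose never blocks the purposes needed to execute the user's instruction:

```python
from dataclasses import dataclass

@dataclass
class Purpose:
    description: str
    necessary: bool  # necessary to execute the user's instruction?

def request_consent(purposes, affirmative, optional_choices):
    """Record consent per purpose: necessary purposes ride on the user's one
    clear affirmative action; each optional purpose needs its own separate
    opt-in and is never bundled with the necessary ones."""
    granted = {}
    for p in purposes:
        if p.necessary:
            granted[p.description] = affirmative
        else:
            # declining an optional purpose must not block necessary processing
            granted[p.description] = affirmative and bool(
                optional_choices.get(p.description, False)
            )
    return granted
```

Under this structure, a refusal of "marketing analytics" leaves "book venue" unaffected, which is the behaviour the anti-bundling rule points towards.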
The Agentic Tension
- Single Authorisation, Multiple Fiduciaries
The DPDP Act imposes consent obligations on each data fiduciary independently. In a multi-agent transaction, a single user instruction may trigger processing of personal data by multiple independent data fiduciaries – each merchant, each platform, and each automated counterparty system provider. Each faces its own question of compliance with the consent requirement. If disputed, the DPDP Act requires a data fiduciary to prove that notice was given and consent was obtained in accordance with the DPDP Laws. How each downstream data fiduciary will discharge this evidentiary burden remains to be resolved.
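One way a downstream fiduciary might prepare for this evidentiary burden is a tamper-evident consent receipt that binds the exact notice shown to the consent obtained. The following is a minimal sketch of one such artefact (the structure and field names are assumptions, not anything prescribed by the DPDP Laws):

```python
import hashlib
import json
import time

def make_consent_receipt(fiduciary_id, principal_id, notice_text, purposes, ts=None):
    """Build a timestamped receipt a fiduciary could retain as evidence that a
    specific notice was given and consent obtained for specific purposes."""
    body = {
        "fiduciary": fiduciary_id,
        "principal": principal_id,
        # hash of the exact notice text shown, so the notice can be verified later
        "notice_sha256": hashlib.sha256(notice_text.encode()).hexdigest(),
        "purposes": sorted(purposes),
        "timestamp": ts if ts is not None else time.time(),
    }
    # hash over the whole body makes after-the-fact edits detectable
    body["receipt_hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return body
```

A production system would add signatures and secure storage; the point here is only that each fiduciary in the chain needs its own record, tied to the notice it actually gave.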
- Third-Party Data Principals
When the AI agent collects dietary preferences from invitees, messages contacts, or shares a guest list with a venue, it processes the personal data of individuals who have not interacted with the system and have not provided any consent. This raises a threshold question: does such processing fall within the ‘personal or domestic’ exemption, or does it trigger compliance obligations under the DPDP Laws? While a user organising a private event may appear to act in a personal capacity, the involvement of an AI agent and of multiple commercial parties downstream complicates that characterisation. If consent is required, obtaining it becomes a practical challenge. The personal data of these data principals is typically sourced indirectly through the user, leaving little scope for prior notice and making meaningful consent and rights exercise difficult in practice.
- Effectiveness of Withdrawal
The DPDP Act gives data principals the right to withdraw consent at any time, and the ease of doing so must be comparable to the ease with which consent was given. The Act further clarifies that the consequences of withdrawal should be borne by the data principal, and that withdrawal should not affect the legality of processing based on consent before its withdrawal. In an agentic context, however, this right is structurally compromised. By the time a data principal seeks to withdraw consent, the system may have already booked a venue, committed to an order, made a payment, and dispatched invitations. The agentic model compresses the act-consequence timeline so severely that the window for meaningful withdrawal may effectively not exist.
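One partial mitigation is to re-check consent immediately before each downstream act rather than only at the point of instruction, so that withdrawal halts future steps without purporting to unwind completed ones (consistent with the rule that prior processing stays lawful). A minimal sketch, under those assumptions:

```python
class AgentSession:
    """Sketch: consent state is consulted before every act, so withdrawal
    takes effect at the next step; acts completed while consent stood
    remain on the record and are not unwound."""
    def __init__(self):
        self.consent_active = True
        self.completed = []  # processing done before withdrawal stays lawful

    def withdraw_consent(self):
        self.consent_active = False

    def act(self, step):
        if not self.consent_active:
            raise PermissionError(f"consent withdrawn; cannot perform: {step}")
        self.completed.append(step)
```

Even with this gate, the article's point stands: if the agent books the venue and sends invitations within seconds of the instruction, the window in which withdrawal can bite may be vanishingly small.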
- Behavioural Profiling
This tension point cuts deepest. When an AI agent has been operating for long enough, it is no longer simply executing instructions; it may be executing instructions shaped by a behavioural profile it has built about the user, derived from their past behaviour. The AI agent selects a caterer not because the user said so, but because the user’s earlier behaviour was interpreted as a preference. The user never stated, “I prefer this.” The system inferred it. The DPDP Act’s consent architecture assumes the data principal holds a pre-existing, stable will that authorises processing of their personal data. Agentic systems with behavioural profiling partially invert this: the system generates the preferences that it then acts upon. The very will that consent is meant to express is itself a product of prior processing. An AI agent that presents options ranked by inferred behavioural preferences, with defaults adjusted to those inferences, shapes choices rather than simply facilitating them. Whether such environmental structuring vitiates the freedom of consent under the DPDP Act is untested but is an argument worth exploring.
Rethinking Consent Architecture
The most commercially practical response to these challenges may be a comprehensive privacy notice: disclosures drafted broadly enough to cover the categories of personal data the agent may access and process, the classes of third-party data fiduciaries it may engage, and the types of processing and purposes it may undertake. Consent is obtained once, against a fully articulated future operating perimeter. This anticipatory disclosure model may, for now, be the most defensible position available to businesses, and a carefully structured notice is undoubtedly preferable to an inadequate one.
This model can potentially be strengthened, without undermining automation, through layered consent structures (which may be just-in-time), purpose-bound authorisation, and contextual visibility tools (such as dashboards or activity logs), which preserve user awareness and control without requiring repeated intervention.
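The purpose-bound, just-in-time pattern described above can be sketched as a simple gate: the agent acts freely within a pre-authorised perimeter, and a purpose outside it pauses execution for a fresh consent layer rather than proceeding silently. All names here are illustrative assumptions:

```python
def execute_step(purpose, authorised, just_in_time_prompt):
    """Purpose-bound authorisation gate. `authorised` is the set of purposes
    consented to so far; `just_in_time_prompt` stands in for a UI prompt
    asking the user to consent to a novel purpose at runtime."""
    if purpose in authorised:
        return True                    # within the consented perimeter
    if just_in_time_prompt(purpose):
        authorised.add(purpose)        # the new consent layers onto the perimeter
        return True
    return False                       # step is skipped, not silently performed
```

Paired with an activity log or dashboard, this keeps the user's awareness current without demanding intervention at every step.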
The challenge deepens because agentic transactions are rarely linear. A ‘buy a laptop’ instruction may cascade into accessory purchases, warranty enrolments, and subscription sign-ups. A comprehensive privacy notice must account for this dynamism, or it quickly becomes stale.
In theory, consent managers – interoperable consent management platforms envisaged under the DPDP Act – may offer another partial architectural solution. A well-designed platform could record, manage, and signal consent states across interactions. However, the current framework envisages consent managers as entities answerable to the data principals. Whether they can operate meaningfully in environments where AI systems transact with each other at machine speed, where the interval between instruction and execution is measured in milliseconds, remains to be seen.
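Functionally, a consent manager in this setting would act as a shared registry that each downstream fiduciary queries for the current consent state, rather than relying on the state as it stood at the moment of instruction. A minimal sketch of that idea (a hypothetical interface, not the architecture the DPDP Rules prescribe):

```python
class ConsentManager:
    """Hypothetical interoperable consent registry: fiduciaries check the
    live consent state before processing, and a single withdrawal by the
    data principal propagates across every fiduciary in the chain."""
    def __init__(self):
        self._state = {}  # (principal, fiduciary, purpose) -> bool

    def record(self, principal, fiduciary, purpose, granted=True):
        self._state[(principal, fiduciary, purpose)] = granted

    def withdraw(self, principal, fiduciary=None, purpose=None):
        # None acts as a wildcard, so one call can revoke across the chain
        for (p, f, u) in list(self._state):
            if p == principal and fiduciary in (None, f) and purpose in (None, u):
                self._state[(p, f, u)] = False

    def is_granted(self, principal, fiduciary, purpose):
        return self._state.get((principal, fiduciary, purpose), False)
```

The open question the article raises is whether such lookups can be meaningful when the interval between grant, query, and withdrawal is measured in milliseconds.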
These are not, however, complete solutions. Structural tensions persist.
First, specificity and comprehensiveness are in tension: a notice that attempts to capture every conceivable processing activity risks collapsing into functional blanket consent – the very outcome the DPDP Act seeks to avoid. The broader the disclosure, the thinner the specificity. Second, a notice to the primary user does not cure the absence of consent from third-party data principals – such as invitees, contacts, and incidental counterparties whose personal data is processed during execution, who have independent rights under the DPDP Laws. Third, agentic systems are inherently adaptive: the universe of processing is not fixed at onboarding and may evolve at runtime in response to changing conditions. In multi-agent environments, where counterparty systems may independently modify or expand their own processing logic, the anticipatory disclosure model is writing to a specification that rewrites itself.
The Unresolved Questions
Return to the birthday party. The party was planned. Five merchants and their payment processors processed personal data. Thirty contacts received messages. A venue holds a booking. Across that cascade of acts, was there consent of the standard the DPDP Act requires? The honest answer is: perhaps not in a form the DPDP Act was designed to recognise. The law attributes each downstream act to the original consent. In the multi-agent context, where AI systems transact with other AI systems on behalf of multiple data fiduciaries, that attribution becomes a compounding fiction – each layer further removed from any genuine deliberative act.
None of this renders agentic commerce impermissible. Businesses designing these systems thoughtfully with comprehensive privacy notices and clear purpose disclosures are better positioned than those that are not. What the DPDP Laws have not yet done, and may eventually need to address, is the agentic model itself. Purpose-bound authorisation frameworks, layered consent architecture for multi-fiduciary chains, standards for third-party personal data collected incidentally by automated systems, and clarity on accountability in multi-agent environments where AI systems transact with one another are all areas where the data protection framework may evolve to provide greater guidance.
Between the notification of the DPDP Act in 2023 and the DPDP Rules in 2025, the technological landscape has already shifted dramatically. By the time enforcement of substantive obligations begins in May 2027, it may have shifted again in ways we cannot yet anticipate. The law, as it stands, is trying to anchor itself to a moving target.