Dec 05, 2025

India’s Telecom-AI Bundles: Why it’s a privacy and competition crisis in the making

Originally published on Live Mint

India’s telecom giants are reshaping how millions access artificial intelligence (AI). When Airtel partners with Perplexity or Jio bundles Google’s AI tools into its data plans, it creates a new architecture of power, data flow and risk that existing regulations aren’t designed to handle.

AI-telecom bundling involves pre-installing or subsidizing AI services—assistants, search tools, content generators—through authorized telecom providers, either free or at reduced cost. These arrangements give AI companies instant access to millions of users, while telecom firms gain a new revenue stream and stickier customer relationships.

But this convenience comes with two big complications: opaque commercial arrangements between telecom firms and AI providers, and ill-defined limits on AI providers’ access to customer behaviour data. When your mobile network bundles an AI assistant, questions multiply. Are AI models training on your conversations and phone usage? Where does liability fall if things go wrong? What is the regulator’s role in balancing customer protection with commercial rights?

Data is a major concern. Telecom firms hold longitudinal datasets tied to accounts, devices and usage patterns, while bundled AI can access photographs, location traces, call records and device telemetry. When combined, AI analysis can reveal sensitive attributes: location patterns that suggest religious observance, browsing habits that imply political views and video analytics that yield biometric templates.
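To make the linkage risk concrete, here is a toy sketch in Python. Every field, mapping and threshold below is hypothetical, invented purely for illustration; none of it is drawn from any operator’s or AI provider’s actual systems.

from collections import Counter

# Hypothetical telecom-side data: cell-tower check-ins per subscriber.
tower_logs = {
    "subscriber_42": ["tower_A"] * 5 + ["tower_B"] * 2,
}

# Hypothetical AI-side context: what each tower's coverage area contains.
# Suppose tower_B happens to cover a place of worship.
tower_context = {"tower_A": "office_district", "tower_B": "place_of_worship"}

def infer_attributes(subscriber_id: str) -> list[str]:
    """Join the two datasets and flag the sensitive inferences that emerge."""
    visits = Counter(tower_context[t] for t in tower_logs[subscriber_id])
    inferences = []
    # A recurring pattern at one type of site becomes an inferred attribute.
    if visits["place_of_worship"] >= 2:
        inferences.append("likely_religious_observance")
    return inferences

print(infer_attributes("subscriber_42"))  # ['likely_religious_observance']

Neither dataset states the attribute on its own; the join manufactures it. That is why combined telecom-AI data flows deserve scrutiny beyond what either party’s records would attract separately.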

The bundling architecture creates continuous data-sharing pathways between telecom services and AI providers, often without granular informed consent. India’s Digital Personal Data Protection (DPDP) Act mandates specific, informed consent for each processing purpose. Yet many platforms frame training toggles in altruistic terms like “improve the model for everyone” to obtain perfunctory user permission while obscuring the privacy implications.

Google retains conversations with its Gemini chatbot to train its machine-learning systems unless users opt out; for users aged 18 or older, chats are kept by default for 18 months. ChatGPT also uses conversations for training unless users opt out. Anthropic has moved from a non-training posture to using consumer chats for model training by default, again unless users opt out. When consent is buried in fine print or bundled with essential services, it is not truly voluntary.

India’s recently released AI governance guidelines offer a foundation. They emphasize transparency, fairness, accountability and safety, principles that translate well to telecom bundles. But principles need teeth. Regulators should require telecom firms to separate consent for core services from optional AI features. Training AI models on customer data should require explicit opt-in, not buried opt-out. Commercial relationships influencing AI output must be disclosed clearly at the point where consumers make decisions.
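Here is a minimal sketch, in Python, of what purpose-separated consent could look like; the record layout, field names and defaults are assumptions made for illustration, not a reference to any statutory schema.

from dataclasses import dataclass

@dataclass
class ConsentRecord:
    """One flag per processing purpose, echoing the DPDP Act's purpose limitation."""
    core_connectivity: bool = True      # required to deliver the telecom service itself
    bundled_ai_assistant: bool = False  # optional feature: explicit opt-in
    ai_model_training: bool = False     # never inferred from the bundle: explicit opt-in

def may_train_on(user: ConsentRecord) -> bool:
    # Training requires its own affirmative opt-in, independent of the bundle.
    return user.ai_model_training

subscriber = ConsentRecord()
subscriber.bundled_ai_assistant = True  # the user switches on the assistant...
print(may_train_on(subscriber))         # ...but this still prints False

The design point: enabling the bundled assistant never flips the training flag, because each purpose carries its own affirmative opt-in.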

A recent market study by the Competition Commission of India identified three critical barriers facing AI startups: data availability, cloud computing costs and talent.

Telecom-AI bundles dramatically exacerbate the first of these: large AI firms that secure bundling deals gain ready access to vast data volumes and financial capital, entrenching their advantage over new entrants.

When a telecom service with millions of subscribers partners exclusively with one AI provider, it creates ‘network effects’—the service becomes more valuable as more people use it, making it nearly impossible for rivals to catch up. If an AI provider’s algorithm favours its related entities through increased visibility or advertising exposure, it distorts competition, restricts consumer choice and creates hidden preferences that shape customer decisions and market behaviour.

Telecom Regulatory Authority of India (TRAI) regulations don’t address ancillary services that telecom companies bundle with their core offerings. The Consumer Protection Act fails to adequately cover common AI problems like misinformation, algorithmic bias or stereotype reinforcement.

Liability attribution when something goes wrong is also murky. While India’s data protection law places responsibility on the data fiduciary, global frameworks increasingly adopt joint liability that spans the AI supply chain. Indian regulators must affix responsibility where it is due, whether the provider is domestic or foreign, given the prevalence of cross-border AI deployments through telecom partnerships.

TRAI should extend service quality and consumer protection standards to AI providers whose services are bundled with telecom offerings, following precedents like the Reserve Bank of India’s outsourcing rules for financial services. There must be shared oversight responsibilities between telecom operators and AI providers.

Technical safeguards matter too: encryption in transit and at rest, environment segregation between network data and AI services, documented retention limits, and genuine data-minimization and storage protocols should all be built into permits for bundling.
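As an illustration of how such permit conditions could be made machine-checkable, here is a short Python sketch; the schema, the 180-day retention ceiling and the excluded field are assumptions for illustration, not an actual TRAI template.

from dataclasses import dataclass

@dataclass(frozen=True)
class BundlingPermit:
    """Illustrative safeguard conditions attached to a telecom-AI bundle."""
    encrypt_in_transit: bool       # e.g., TLS between network and AI provider
    encrypt_at_rest: bool
    segregated_environments: bool  # network data kept apart from AI workloads
    retention_days: int            # documented, finite retention window
    minimized_fields: tuple        # only the fields the feature truly needs

def compliant(p: BundlingPermit) -> bool:
    # Each safeguard from the paragraph above becomes a checkable condition.
    return (p.encrypt_in_transit and p.encrypt_at_rest
            and p.segregated_environments
            and p.retention_days <= 180
            and "call_records" not in p.minimized_fields)

permit = BundlingPermit(True, True, True, retention_days=90,
                        minimized_fields=("query_text",))
print(compliant(permit))  # True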

These arrangements will shape how we mature as an AI-ready nation. Get it wrong, and we entrench monopolies, erode privacy and leave consumers without recourse when AI systems fail. Get it right, and India can demonstrate how innovation can be balanced with protection.

