India’s AI Wedding Buffet: Generous Portions, Political Economy Heartburn
India's AI Summit promises a revolution. The electricity grid, the tax code, and the literal ground beneath the chip fab have other plans.
The AI Summit 2026 in New Delhi is structured like a good Indian wedding buffet, which is to say, it tries to be everything at once. There’s the main event, the side events, the offsite roundtables, and whatever happens in the hallways between them. The world’s largest gathering of AI stakeholders descends on New Delhi from February 16-20.
I lead the Indian economy program at the Mercatus Center, which, by some logic I have yet to fully trace, means I am now an AI policy person. People whose calendars are normally defended by three layers of staff want to know what I think. This is flattering and bewildering, but the real reason is that understanding where India is in the AI race requires learning about non-AI policy bottlenecks. Investors evaluating the challenges and opportunities here ignore the broader political economy gridlock at their peril.
In the tradition of Indian policy ambition, this Substack will attempt to cover far more than is realistic, even as a long read.
The TL;DR. India's AI regulation (surprisingly light-touch and sensible), foundational models (promising but narrow), semiconductor ambitions (literally and metaphorically built on soft soil), energy constraints (a real political economy bottleneck), startup ecosystem (world-class talent), and blossoming venture funding (burdened by tax uncertainty).
But before that, a short primer on the Summit for those who have been living under a rock.
The AI Summit 2026
The label says AI impact, but the real centerpiece is commercial. India is courting AI businesses and trying to unlock massive investment, with virtually every major AI, big tech, and semiconductor CEO in attendance. The same investment focus was on display at the Paris summit last year, and you can expect a lot of flashy investment announcements and curtain raisers.
This is not surprising once you know that the Government of India is hosting the summit under the IndiaAI Mission, with the Ministry of Electronics and Information Technology (MeitY) in a central role. MeitY is unlike most Indian government departments in that it is more an investment promoter than a regulator. It rolls out schemes to incentivize semiconductor firms to invest, or woos Apple into setting up manufacturing in India, in addition to administering regulatory instruments like the Information Technology Act and the Digital Personal Data Protection Act. Compare this to the Ministry of Information and Broadcasting, whose main goal is not promoting broadcasting technology, equipment, or innovation. It regulates what can be broadcast, and whether television shows, channels, and advertisers have violated the code.
So, MeitY is more strongly indexed towards fostering an AI and AI-adjacent industry in India. This includes manufacturing chips, developing foundational models, driving adoption across sectors, and supporting startups working on enterprise integration. Hence, their theme for this summit is not AI safety, like past summits, or regulation and equity. It is the ambition to be spoken of in the same breath as the US and China, if not today, then within a decade. More on this in the following sections.
Second, there’s the geopolitical play. India is positioning itself as the voice of the global south, this time through AI diplomacy, with around 20 heads of state and government showing up to confirm it. This is not new territory. India has led the global south across a range of issues, from helping various countries conduct fair elections (most recently Bhutan, I think), to vaccine diplomacy during Covid, to exporting India’s digital public infrastructure to African countries. AI is the latest space, and given India’s talent pool and large startup ecosystem, it has a natural advantage. Whether it is space programs, vaccines, elections, or digital infrastructure, India has demonstrated the ability to innovate frugally, at scale, and in contexts suited to developing countries (remember how mRNA vaccines needed cold storage and were unfeasible in countries without reliable electrification).
And third, there is the policy agenda proper, which reads like someone emptied the entire AI discourse into one schedule: indigenous foundation models, model safety, bias frameworks, data governance, ethics, adoption, biosecurity, and the full equity spectrum from AI-for-women to AI-for-reviving-extinct-languages to AI-for-the-specially-abled. I’ll begin with the last part first.
How is India thinking about AI Regulation?
India’s AI Governance Guidelines, released by MeitY and the Principal Scientific Adviser in November 2025, are the country’s attempt to answer a question every major economy is fumbling with. How do you govern a technology that changes faster than your committee can meet? The EU went first and went heavy, a binding cross-sector AI Act with tiered risk categories, compliance obligations, and a governance apparatus that could employ a small city. China took the authoritarian-efficiency route. Regulate fast, regulate specifically, and make sure the state retains control over what models can say and do. The US, characteristically, has been light touch at the federal level, leaving governance to a patchwork of executive orders, state laws, and vibes.
India, with these guidelines, has landed somewhere interesting, closer to the US in its instinct to avoid a standalone AI law, but far more deliberate in articulating why it is choosing not to regulate horizontally yet. The framework’s core bet is that India’s existing legal infrastructure (the IT Act, the Digital Personal Data Protection Act, sectoral regulators like the RBI and SEBI) can handle most AI risks if enforced properly and updated where needed.
But the November framework did not emerge from a vacuum. Three months before MeitY published its national guidelines, the Reserve Bank of India had already done much of the intellectual groundwork. The FREE-AI report, published in August 2025 by a committee constituted in December 2024, addressed AI governance specifically within the financial sector: banks, NBFCs, fintechs, insurers, and payment system operators. It is narrower in scope than what followed, but it is also, in several respects, the template. The seven “sutras” that anchor the national framework (trust, people first, innovation over restraint, fairness, accountability, understandable by design, safety/resilience/sustainability) originated here. So did the structural move of organizing recommendations under parallel tracks for innovation enablement and risk mitigation. The national framework borrowed heavily from the RBI’s architecture.
Underneath the sutras and pillars and the usual fog of government jargon, both documents are actually quite sensible, with an innovation-first approach. It is almost as if the experts on these committees had to hide their light-touch instincts underneath the rhetorical scaffolding that Indian policy documents require to be taken seriously. The operational logic in both is straightforward. Do not regulate the technology itself; govern its applications through the regulators who already understand those domains. Build incident databases so you learn from failures instead of pretending to prevent them through preemptive compliance theater. Use “techno-legal” mechanisms (standards, system-architecture-level controls, provenance tools) so that compliance scales without armies of auditors. Create sandboxes so regulators can see what actually goes wrong before writing rules. The explicit preference for “innovation over restraint,” listed as a core principle, rejects the EU’s precautionary posture. Both committees looked at Brussels and decided that regulating AI the way you regulate pharmaceuticals, before you know what the side effects actually are, is a bad trade for a country where AI adoption is still nascent and unevenly distributed.
One feature of the RBI report stands out as consequential and absent from most global AI governance frameworks. The committee takes the “free” in FREE-AI rather seriously; it dedicates attention not just to freeing the financial sector from reckless AI risk, but to freeing it from the timidity of not adopting AI at all. Most frameworks spend their energy on the risks of deployment. This framework asks what happens when institutions do not adopt AI: they fall behind on fraud detection, cannot counter AI-enabled cyberattacks, and fail to reach the underserved populations that voice-enabled multilingual AI could bring into the formal financial system. The committee explicitly recommended that regulators lower compliance expectations for AI-driven financial inclusion use cases, treating affirmative action through AI as a policy priority rather than a risk to be managed. The liability framework it recommended is graded. The regulated entity remains liable to consumers for any losses, but first-time failures where the entity followed prescribed safeguards and reported promptly would not automatically trigger full supervisory penalties. A rigid liability regime that punishes every probabilistic error will cause institutions to constrain AI capabilities to the point of uselessness. The national framework proposed a hub-and-spoke institutional structure to make this sectoral model cohere.
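To make the graded-liability idea concrete, here is a minimal sketch of the decision rule described above. The names (`Incident`, `supervisory_response`) and the response strings are my own illustrative inventions, not anything from the RBI report; the report prescribes the principle, not code, and consumer liability remains with the entity either way.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    first_failure: bool        # no prior failure of this kind by the entity
    safeguards_followed: bool  # prescribed safeguards were in place
    reported_promptly: bool    # reported within the required window

def supervisory_response(incident: Incident) -> str:
    """Illustrative graded rule: only the supervisory penalty is graded.

    The entity always remains liable to consumers for losses.
    """
    if (incident.first_failure
            and incident.safeguards_followed
            and incident.reported_promptly):
        return "no automatic full penalty"  # learn-and-correct track
    return "full supervisory review"       # repeat or negligent cases

print(supervisory_response(Incident(True, True, True)))
print(supervisory_response(Incident(False, True, True)))
```

The point of a rule this simple is that a deploying institution can reason about its exposure in advance, which is exactly the property the report argues autonomous systems need.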
Both documents converge on deepfakes as an urgent and tractable governance problem. The national framework recommended C2PA-style content provenance standards, along with watermarking and traceability. The RBI report addressed deepfakes from the financial sector’s angle, noting deepfake audio and video being used to impersonate executives, bypass video KYC, and authorize fraudulent transactions. Both argued that existing law, primarily the IT Act and the Bharatiya Nyaya Sanhita, was sufficient if enforced and adapted. Neither proposed new legislation.
Three months after the national framework, MeitY implemented what both documents had prescribed. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, notified on February 10 and effective February 20, are the first concrete, legally binding output of the “techno-legal” philosophy. Rather than creating a standalone AI statute, the government amended the existing IT intermediary rules to define “Synthetically Generated Information,” mandate visible labeling and permanent provenance metadata on synthetic content to the extent technically feasible, require platforms to deploy automated detection tools, and compress takedown timelines to three hours for identified deepfakes and two hours for non-consensual intimate imagery. The enforcement mechanism is the one both frameworks had identified as already available. Fail the due diligence on labeling, metadata, or takedowns, and you lose immunity and the platform becomes liable under the Bharatiya Nyaya Sanhita. The carve-outs for routine editing, accessibility tools, and academic use suggest someone on the drafting team understood that overbroad definitions would catch every color-corrected photograph in the country.
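The compressed takedown timelines are simple enough to state as code. The sketch below assumes the windows exactly as summarized above (three hours for identified deepfakes, two hours for non-consensual intimate imagery, measured from receipt of the order); the function and category names are hypothetical, purely for illustration of how mechanical the compliance clock is.

```python
from datetime import datetime, timedelta

# Hypothetical deadline table based on the amendment as summarized above.
TAKEDOWN_WINDOWS = {
    "identified_deepfake": timedelta(hours=3),
    "non_consensual_intimate_imagery": timedelta(hours=2),
}

def takedown_deadline(order_received_at: datetime, category: str) -> datetime:
    """Deadline for acting on a takedown order, by content category."""
    return order_received_at + TAKEDOWN_WINDOWS[category]

order = datetime(2026, 2, 20, 9, 0)  # order received at 9:00 am
print(takedown_deadline(order, "identified_deepfake"))  # noon the same day
```

That the whole regime reduces to a lookup table and an addition is precisely why platforms will automate it, with the over-removal risks discussed below.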
Yet for all the talk of avoiding the EU model, the deepfake amendment is closer to Brussels than it appears at first blush. Mandatory labeling, provenance metadata, automated verification, loss of immunity for non-compliance. These are binding, cross-cutting obligations imposed on intermediaries with tight timelines and real penalties. The difference is that India arrived here through subordinate rule-making rather than a parliamentary statute, and it did so without the institutional infrastructure the EU has built to support enforcement. The silver lining is that these rules are easier to amend than legislation, and therefore nimbler and easier to adapt when the implementation roadblocks emerge.
And that gap between ambition and capacity is where the trouble starts. A three-hour takedown window, down from thirty-six, is aggressive by any global standard, and it applies upon receipt of a government order, not just a court order. For platforms operating at the scale of YouTube or Instagram, this compresses the window for review, legal assessment, and action to something close to automated compliance, which raises its own risks around over-removal.
The free speech implications are obvious. When the penalty for missing a government-issued deadline is losing safe harbor entirely, platforms will err on the side of removing content first and asking questions never. Government inaction compounds the problem. Much of the infrastructure that both frameworks said was prerequisite, the AI Safety Institute’s testing benchmarks, the national incident database, the content provenance standards ecosystem, is not yet operational. The rules are live before the institutional scaffolding is in place. The mandatory automated detection requirement assumes technology that reliably distinguishes synthetic from authentic content at scale, a capability that does not yet exist with the accuracy this regime demands. And here is a question that neither framework addresses.
If the concern is that government-issued takedown orders might be used selectively or politically, would it not be worth exploring whether the notices themselves could be generated or at least triaged by an algorithmic system operating under transparent criteria? If India is serious about fairness and techno-legal solutions, letting an AI flag the deepfakes and issue standardized government notices to platforms would at least reduce the surface area for discretionary bureaucratic and political overreach. It would be a fitting irony; using the technology you are trying to govern to keep the governors honest.
Yet when you step back and look at the full sequence, there is a coherence that should not be understated. The RBI built the conceptual architecture in August 2025. MeitY generalized it in November. The February 2026 deepfake amendment demonstrated that the “existing laws plus targeted amendments” model could produce binding obligations quickly when political will existed. Premature regulation in a country where most sectors are still figuring out basic digitization risks locking in rules that are either unenforceable or counterproductive. The smarter move, which this sequence attempts, is to build institutions first, regulate iteratively based on what actually goes wrong, and use standards rather than statutes as the primary compliance mechanism.
In a paper Alex Tabarrok and I wrote on premature imitation, we argued that developing countries should not import regulatory frameworks before the harm is understood or the capacity to enforce exists. In a pleasant surprise, I was informed that the paper had been circulated to some committee members. We have been making this argument about Indian economic policy for years, and while it has not always fallen on deaf ears, I have rarely seen the principle stated this plainly in an official document. That the idea may have landed most clearly in AI governance, of all places, is the kind of plot twist I did not see coming. Then again, AI may be the one domain where the case against premature regulation does not need an economist to make it. The technology moves fast enough to make the point on its own.
The most pressing problem will be AI agents. Both reports discuss it, but the RBI report does the real work. It states that entities deploying AI systems should be accountable for the decisions of those systems regardless of the level of autonomy. You deployed it, you own the outcome. More importantly, it thinks through what agent autonomy actually looks like in finance. It names the emerging Model Context Protocol and Agent-to-Agent communication frameworks and imagines AI agents representing borrowers negotiating with AI-enabled lenders across interoperable systems. It flags the risk of autonomous AI collusion, where agents in high-frequency trading or dynamic pricing environments optimize toward supra-competitive prices without any human directing them to do so, potentially breaching market conduct rules written for humans. Its recommendation for comprehensive governance across the full AI model lifecycle includes human oversight specifically for autonomous and high-risk applications. And its graded liability framework, which tempers penalties for first-time failures where safeguards were followed and reporting was prompt, matters precisely because without that concession no regulated entity will deploy an autonomous system that can surprise it.
The national framework is thinner on this question. It acknowledges that AI is now “probabilistic, generative, agentic, and adaptive.” It defines agentic AI in its glossary. It lists loss of control as a risk category. But its liability treatment applies identically to a chatbot answering customer queries and an agent independently executing trades. It does not distinguish between AI that assists human decisions and AI that acts on its own. The RBI report does. For financial institutions, individual regulators like RBI, SEBI, and IRDAI will each have to confront and fine-tune this question.
No jurisdiction has settled who pays when an AI agent causes harm. The EU distributes obligations across the value chain, with providers bearing the heaviest burden and deployers responsible for human oversight and incident reporting, but the proposed AI Liability Directive that would have created civil liability rules was withdrawn in October 2025. The revised Product Liability Directive classifies AI systems as products subject to strict liability, though it was not designed with autonomous agents in mind. The United States has no federal AI liability framework. Liability runs through existing tort law applied state by state, with courts only beginning to explore agency-law principles for AI.
India sits between the two. Indian courts are not run by agentic AI and are legendary for their decades-long human delays. Like the EU, these reports assign obligations before harm occurs rather than relying on post-hoc litigation. Like the US, India has chosen soft law over binding regulation. But the RBI’s position is more concentrated than either, and will set the tone for other regulators. The deploying entity is accountable, period. No role-shifting, no provider-deployer split, no complex allocation question. Strict liability here is the more sensible bright-line rule, reminiscent of Epstein’s Simple Rules for a Complex World or Rizzo’s Law Amid Flux, and will allow firms to price and adjust their exposure. I will have more to say on bright-line rules and AI agent liability later, including what happens when the entity that deployed the agent cannot be traced. For now, a government report recommending simplicity is a feature, not a bug, when the technology is moving fast and the institutions enforcing the rules are still being built. Indian regulators still manage to botch things up with outrageous penalties, but hopefully good sense prevails.
Does India have Foundational models?
In June 2023, Sam Altman visited New Delhi and was asked whether an Indian startup with $10 million could build a foundational AI model. He said it was “totally hopeless” to compete with OpenAI on training foundation models. At that point, frontier models cost hundreds of millions to train. Ten million wouldn’t get you close. But the remark landed more broadly than intended. It became shorthand for a prevailing assumption: foundational AI was a game for a few well-capitalized American companies, and if you weren’t one of them, you were a consumer, not a builder. India’s IT minister pushed back. Others conceded the point quietly. India hadn’t built a global operating system, a browser, or cloud infrastructure. The gap in compute, capital, and talent was not a talking point. It was a fact. Altman later clarified he’d meant the $10 million budget specifically. But the underlying assumption had been stated plainly enough to stick.
That assumption cracked in January 2025. China’s DeepSeek released a reasoning model that, on some counts, matched or exceeded OpenAI’s o1 on key benchmarks at a reported training cost of under $6 million, using Nvidia chips that US export controls were supposed to have rendered insufficient. DeepSeek showed that algorithmic innovation (mixture-of-experts architectures, inference-time compute) could substitute for brute-force spending. The idea that only billion-dollar American labs could build frontier models became harder to defend overnight.
France had already been moving. Mistral AI, founded in Paris in 2023 by former DeepMind and Meta researchers, released open-weight models that were not frontier-competitive on every benchmark. They didn’t need to be. They gave European governments something they could run locally, fine-tune for their own languages, and audit. The French government and private sector backed Mistral and attracted data center investment from the UAE. Macron called it the “third way.” Not American, not Chinese. The point was never to beat GPT. It was to avoid permanent dependence on it.
Then everybody wanted one. South Korea picked five teams to build a national foundation model by 2027. Saudi Arabia created HUMAIN, a state effort to build Arabic multimodal models. The UAE partnered with Microsoft and OpenAI and declared it would become the world’s first AI-native government by 2027. Singapore and Japan started open-sourcing local-language models. The logic was the same everywhere. If your government runs on someone else’s AI, your government runs at someone else’s discretion.
Now back to India. Sarvam AI was founded in 2023 by Vivek Raghavan and Pratyush Kumar. The Indian government selected it under the IndiaAI Mission to build the country’s first sovereign large language model. In mid-2025, Sarvam released its first language model, a fine-tuned version of Mistral Small. For a company backed by tens of millions in government funding, this was underwhelming. The criticism was sharp. The online ridicule was extensive.
By February 2026, the story changed. Not because Sarvam built a better chatbot. It didn’t try to. It built task-specific models for India’s messy, multilingual, paper-heavy reality.
The standout is Sarvam Vision, a 3-billion-parameter model built for document intelligence. On olmOCR-Bench, a benchmark Sarvam reports, it scored 84.3 percent accuracy. Google Gemini 3 Pro scored 82.0. OpenAI’s GPT 5.2 scored 69.8. On documents with complex layouts and non-Latin scripts the gap widened. On OmniDocBench v1.5 Sarvam scored 93.28 percent, handling scanned pages, tables, and mathematical expressions better than models many times its size. In speech recognition, Saaras V3 hit a word error rate of 19.3 percent on IndicVoices, a benchmark covering the ten most spoken Indian languages. That beat Gemini 3 Pro, GPT-4o Transcribe, Deepgram Nova-3, and ElevenLabs Scribe v2. In text-to-speech, Bulbul V3 produces natural-sounding output across 11 Indian languages with over 35 voice options.
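For readers unfamiliar with the metric behind the Saaras V3 number: word error rate is the word-level edit distance (substitutions, deletions, insertions) between a reference transcript and the model’s output, divided by the number of reference words. A minimal self-contained implementation:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("the" -> "a") in a six-word reference: WER = 1/6.
print(word_error_rate("the cat sat on the mat", "the cat sat on a mat"))
```

A WER of 19.3 percent therefore means roughly one in five reference words was transcribed wrong, which is strong for low-resource multilingual speech, though far from the low single digits frontier systems achieve on English.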
But these results need framing. Sarvam’s models are trained narrowly on documents and Indian languages. On general reasoning, coding, and world knowledge, ChatGPT and Gemini still outperform it comfortably. A technology publication tested Sarvam on translation and found factual errors in a Telugu news paragraph that ChatGPT and Gemini handled without trouble. Sarvam is very good at what it does. What it does is not what most people mean when they say AI.
Sarvam does not sit in the same category as GPT, Gemini, or Claude. It was never designed to. In OCR, document parsing, and Indic speech it is ahead, sometimes by wide margins. Everywhere else it is not in the conversation. The earlier criticism has faded, but the fundamental question remains open. The company’s next move is a 120-billion-parameter sovereign model trained on over 17 trillion tokens, with 17 to 20 percent Indian data. That model, not the specialized tools, is the real test.
What counts as foundational is decided by users and markets, not by researchers or ministers or even Twitter trolls. Until an Indian model is adopted globally and competes across general-purpose benchmarks, it will not be considered in the same league. That is not a judgment about Indian talent. It is how technology markets work. Nobody cares where a model was trained. They care whether it works. Sarvam’s success is real. But it is success in a niche. And niches, however valuable, do not rewrite the hierarchy of global AI.
That said, it is very early. India has a pattern of showing up in domains where it was written off. It produced vaccines at scale, and exported them to the global south, when Western pharmaceutical companies said it couldn’t be done and struggled with their own distribution channels in developed countries. India’s space program reached Mars for less than the production budget of a Hollywood film about Mars. Constraints force ingenuity. Small budgets force focus. And a large domestic market, with high cell phone penetration and an indigenous tech stack, means Indian firms don’t need the world to validate their product before it becomes viable.
Is it possible that building highly specific, linguistically grounded, cost-efficient models turns out to be the smarter long-term strategy? India has the scale, the data, and the market to sustain that bet. Whether it pays off is a question the next two years will answer. But despite the fanfare and the launches at the Summit, Indian models are not there yet.
The next sections outline the constraints and challenges India faces in building firms that can compete globally. So it is important to be clear-eyed about the gap between India’s potential and its ambition.
The Chip Subsidy Nobody Can Claim
Last year, Tata Electronics discovered that the soil at its semiconductor fab site in Dholera, Gujarat, was too soft. The ₹91,000 crore ($10.11 billion) facility, the first large-scale chip fabrication plant under the flagship of the India Semiconductor Mission, had to be redesigned from the foundations up. The land was reclaimed. The soil was clay-heavy, saline, silty. It could not support a precision manufacturing facility that requires near-zero vibration. Tata brought in Fugro and two other geotechnical firms. Construction on the revised design began months late.
Why Dholera? Gujarat was the first state to announce a dedicated semiconductor policy, with generous subsidies on land and power. It promised reliable electricity and water. These are real advantages. But Gujarat is also the Prime Minister’s home state, and nearly every major semiconductor investment approved under the India Semiconductor Mission has landed there, in what many in the industry believe was a top-down decision. Tata, to its credit, chose Dholera in part for these practical reasons like the electricity subsidy. But a semiconductor fab is not a warehouse. It requires geological stability, ultra-pure water systems, and chemical supply chains that do not yet exist in Dholera. When political incentives determine where factories go, instead of engineers and entrepreneurs, you get buildings on soil that cannot hold them.
This is not a story about bad luck. It is a story about what actually constrains India’s AI ambitions. Not the things the policy conversation focuses on, chips import policy, model safety, bias frameworks, data governance, regulatory architecture, but the physical, institutional, and political economy problems that determine whether anything gets built at all.
The PLI scheme is India’s marquee industrial policy instrument. The government has budgeted ₹1.97 lakh crore ($21.89 billion) across fourteen sectors, offering production-linked incentives to manufacturers who meet investment and output targets. For semiconductors, the India Semiconductor Mission covers up to 50 percent of project cost for fabs and assembly facilities. On paper, this is generous. In practice, the money is not reaching firms. Through September 2025, actual disbursements stood at ₹23,946 crore ($2.66 billion), 12 percent of the total budgeted outlay. Some reports suggest that only 37 percent of production targets had been met by October 2024. Samsung waited years to receive ₹500 crore ($55.56 million) for FY2021 because of documentation discrepancies. By July 2025, the government stated that 72 startups had been approved, and that 23 firms were sanctioned financial support under the Design Linked Incentive scheme, where, again, details are murky and it appears only part of the funds has been disbursed.
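The 12 percent disbursement figure follows directly from the numbers in the paragraph (₹1 lakh crore is ₹100,000 crore, so the total outlay is ₹197,000 crore):

```python
budgeted_crore = 197_000   # ₹1.97 lakh crore total PLI outlay
disbursed_crore = 23_946   # actual disbursements through September 2025

share = disbursed_crore / budgeted_crore
print(f"{share:.0%}")      # about 12 percent of the budget actually paid out
```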
The problem is not that firms do not want the subsidies. It is that the compliance machinery required to claim them is itself the barrier. Disbursements require quarterly review reports, project certifications, No-Lien Account management, and inter-departmental coordination that frequently breaks down. Even Samsung got tangled in the paperwork. MSMEs and startups have little chance. The government identifies a problem, creates an incentive, and then wraps it in procedural requirements so heavy that firms cannot access it. Raising the budget does nothing if the bottleneck is the application form.
The technical constraints go deeper. TSMC, which fabricates leading-edge AI chips, declined India’s invitation to build a fab. Dholera, even once the soil is sorted, will produce chips at 28nm to 110nm process nodes. These are mature nodes, useful for automotive and IoT but irrelevant to frontier AI, which runs on 3nm and 5nm silicon. India has no facility planned, announced, or remotely plausible for the chips that actually train large language models. The target for first commercial wafers has shifted to what company executives now describe as mid-2027 for trial production. Gartner analysts assess the fab is unlikely to reach full capacity by 2030.
The broader ecosystem is simply absent. Cutting-edge fabs need EUV lithography machines from ASML, which are subject to export controls, specialized chemical supply chains, and thousands of process engineers with tacit knowledge built over a decade. The new PLI for electronic components, ₹22,919 crore ($2.55 billion) raised to ₹40,000 crore ($4.44 billion) in Budget 2026, targets PCBs, capacitors, and resistors. The lowest tier of the value chain.
So what should India actually do? Start with the binding constraints. Reliable electricity, industrial-grade water, rational and predictable tax policy, and low-friction land markets are not AI or semiconductor problems. They are problems that affect every manufacturing sector in India, and they have been problems for decades. No amount of subsidy engineering will produce a manufacturing ecosystem on top of broken infrastructure. These are reforms that benefit every sector, which is precisely why they should come first. It is also a lesson for MeitY: there is only so much it can do alone, without coordinating with other ministries at the union level and encouraging state-level permitting reforms.
And then there is the thing India should be doing right now, with urgency, because it already has the advantage. India has 20 percent of the world’s semiconductor design engineers. AMD, NXP, Qualcomm, and Intel all maintain design centers here. This is not a marginal position. It is an enormous comparative advantage, and India is doing remarkably little with it. Thirty-two startups reached by the Design Linked Incentive scheme is not a rounding error in a country with this much talent; it is a policy failure.
India should be building partnerships that connect its design base to global fabrication capacity, creating commercial pathways for Indian chip design firms to tape out at TSMC, Samsung, and GlobalFoundries, and making it trivially easy for startups working on RISC-V architectures, AI accelerators, and edge computing chips to access capital and fab time. Design is where Indian engineers already operate at the frontier. The goal should be to turn that into Indian companies at the frontier, not wait a decade for a fab that produces chips three generations behind.
Can India Power Its AI Ambitions?
If there is a single domain where India’s AI ambitions will succeed or fail, it is energy. And energy in India is not a technology problem. It is a political economy problem, arguably the most intractable one the country faces.
India’s peak electricity demand hit 250 GW in May 2024, up from 143 GW a decade earlier. The IEA forecasts 6.3 percent annual growth through 2027, faster than any major economy. Cooling demand alone could reach 140 GW of peak load by 2030. One number captures the trajectory. For each incremental degree in daily average temperature, peak demand now rises by more than 7 GW. In 2019 the figure was half that. India is getting hotter, richer, and more electricity-hungry simultaneously.
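A quick back-of-envelope check on these figures (a sketch in Python; the implied growth rate is derived from the two demand numbers cited above, not an official statistic):

```python
# Back-of-envelope: implied growth of India's peak electricity demand,
# using the two figures cited in the text (143 GW in 2014, 250 GW in May 2024).
peak_2014_gw = 143
peak_2024_gw = 250
years = 10

# Compound annual growth rate over the decade
cagr = (peak_2024_gw / peak_2014_gw) ** (1 / years) - 1
print(f"Implied CAGR of peak demand: {cagr:.1%}")  # roughly 5.7% per year

# Projecting forward at the IEA's forecast rate of 6.3% through 2027
peak_2027_gw = peak_2024_gw * (1 + 0.063) ** 3
print(f"Projected 2027 peak at 6.3% growth: {peak_2027_gw:.0f} GW")  # ~300 GW
```

The decade's realized growth rate and the IEA's forward forecast line up: the trajectory the text describes is acceleration on an already steep curve.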
So why not just generate more electricity?
Because the constraint is not generation. It is the institutional rot in the system that moves, or fails to move, electrons from plants to people.
State-controlled distribution companies have accumulated $83.7 billion in debt because energy prices have been politically distorted for decades. Over 50 GW of renewable capacity sits underutilized. About 60 GW is stranded behind inadequate transmission. The shortage is financial and infrastructural, not resource-based. Without reforming distribution pricing, governance, and grid investment ($50 billion estimated by 2035), new renewable capacity will not become reliable electricity. It will become another line item on a DISCOM balance sheet no one wants to read.
India’s electricity reaches consumers through 72 distribution companies, 44 of them state-owned, collectively the most financially distressed utilities in the world. Accumulated losses stood at ₹6.92 trillion ($76.89 billion) as of March 2024, rising every year despite five government bailouts since 2002.
Three reasons, each reinforcing the others.
The first is political subsidies. State governments compete to offer free or cheap electricity to farmers and households. This is not a policy quirk. It is how elections are won. Of 26 states with subsidy programs, only 16 disbursed the full amount in 2023-24. The rest left DISCOMs to absorb the gap. Nominally independent state regulatory commissions routinely defer to state governments on price. When New Delhi pushes cost-reflective pricing and a state promises free power, free power wins. Some states have gone years without a tariff revision. The distribution deficit widened to ₹79,000 crore ($8.78 billion) in FY2023, nearly double the year before. The result is that electricity, when available, powers the least productive sector and depletes groundwater.
The next distortion is cross-subsidization. Industrial and commercial consumers pay well above the cost of supply. The surplus subsidizes agriculture and households. NITI Aayog has found that in many states, cross-subsidies exceed the policy limit of ±20 percent of average cost. The customers who consume the most, exactly the category that includes data centers, face the highest price per unit and have the strongest incentive to leave the DISCOM system entirely.
And finally, theft. Aggregate technical and commercial losses run 15 to 17 percent nationally against a global average of 8 percent. In Bihar, Jharkhand, and Uttar Pradesh, losses exceed 25 percent. Theft alone costs ₹1.32 lakh crore ($14.67 billion) annually, the highest in the world. In Punjab, 92 percent of agricultural consumers are unmetered. DISCOMs classify stolen electricity as “agricultural consumption” because without meters, nobody can prove otherwise.
Now add the demand from new data centers. Installed capacity is expected to grow from 1.5 GW in 2025 to 8 or 9 GW by 2030. Each gigawatt-scale facility consumes as much as an aluminum smelter and requires 99.99 percent uptime, a standard India’s grid has never been built to meet.
Data center operators have two alternatives to DISCOM supply. Open access means buying from third-party generators and wheeling power through the network. Captive power means building your own generation.
Open access should work in theory. Solar prices are among the world’s lowest, frequently below ₹3 ($0.03) per kilowatt-hour. A Mumbai data center could contract with a Rajasthan solar developer and wheel the power over. In practice, DISCOMs charge cross-subsidy surcharges on open-access consumers to recoup lost revenue. These were supposed to decline. They have increased. Transaction volumes have fallen relative to total generation despite growing participation. The policy meant to enable competition has been captured by the entities it was meant to discipline.
Captive power has become the escape route. About 70 percent of industrial exits from DISCOM supply go captive because Section 42(2) of the Electricity Act, upheld by the Supreme Court, exempts them from cross-subsidy surcharges. Captive runs roughly 30 percent cheaper. But each user must hold at least 26 percent equity in the generating plant, land near data center clusters is scarce, and rules differ by state.
For hyperscale facilities needing 200 to 500 MW of uninterrupted power, none of these paths is clean. DISCOMs have been insolvent for decades. Open access is surcharge-laden. Captive requires equity and land that may not exist nearby. Reliance’s Jamnagar campus plans to run on renewable hydrogen to bypass the entire system. Most operators do not have Reliance’s resources.
Some states have crafted targeted fixes. Karnataka offers industrial prices for data centers sourcing 30 percent renewables. Tamil Nadu exempts electricity duties on captive consumption for five years. Haryana exempts them for twenty. Maharashtra allows green energy distribution licenses within data center parks. But their very existence illustrates the problem. The binding constraint is not at the central policy level, where the 21-year tax holiday lives, but at the state level, where tariffs, surcharges, and grid connections are determined. A national AI infrastructure ambition that depends on 28 separate state electricity commissions, each reflecting different political pressures, is a system where what binds changes at every state border.
The 2026 Budget’s 21-year tax holiday is the most aggressive fiscal incentive in the global data center market, a real reform. But it does not fix DISCOM balance sheets, eliminate cross-subsidy surcharges, or build the transmission corridor from Rajasthan’s solar parks to Mumbai’s data centers. It lowers the cost of successful operation without changing the probability that 500 megawatts of reliable power can be delivered. Even in the West, where chronic shortages do not exist, communities are protesting data centers driving up electricity prices. In India, rationalizing electricity prices does not mean a letter to the editor. It means another farmers’ protest.
Can India Go Nuclear?
The answer should be nuclear energy. India has pursued civil nuclear power since the historic Manmohan Singh-George Bush deal. But a botched liability regime, public fear of nuclear accidents, and India’s lack of state capacity to regulate have kept it marginal. The tide may be turning.
The SHANTI Act, passed December 18, 2025, with Presidential assent two days later, is probably the most important single reform in India’s AI infrastructure story. It will take a decade to prove it.
The backstory matters. India’s nuclear sector had been a state monopoly since independence. Only NPCIL could build and operate commercial reactors. More critically, the Civil Liability for Nuclear Damage Act of 2010, enacted after the US-India nuclear deal, allowed operators to seek recourse against equipment suppliers. India was the only country in the world with this provision. It was born of legitimate post-Bhopal anxiety about industrial disasters. Its practical effect was to make India uninvestable for every major reactor vendor on earth. EDF’s six-reactor Jaitapur project stalled. GE stalled. Westinghouse stalled. For fifteen years, supplier liability was the binding constraint on Indian nuclear energy. Everything else, site preparation, fuel sourcing, grid planning, was moot because no foreign company would sell India a reactor.
The SHANTI Act removed this constraint entirely. It eliminates supplier liability. It replaces the flat ₹1,500 crore ($166.67 million) operator liability cap with a graded framework linked to reactor size, up to ₹3,000 crore ($333.33 million) for large reactors. It permits private companies to participate in plant operations and equipment manufacturing. It gives statutory independence to the Atomic Energy Regulatory Board. Six major industrial groups, Hindalco, Jindal, Tata Power, Reliance, JSW, and Adani, have responded to NPCIL’s first-ever Request for Proposals for private nuclear construction. A ₹20,000 crore ($2.22 billion) Nuclear Energy Mission funds small modular reactor R&D.
Nuclear matters for AI because it is the only proven source of firm, round-the-clock, zero-carbon baseload power at the scale data centers need. India’s current nuclear capacity is 8.8 GW from 24 reactors. The target is 100 GW by 2047.
The problem is what comes after the legislation. India’s nuclear execution record is among the worst of any nuclear nation. Construction of the Prototype Fast Breeder Reactor at Kalpakkam began in 2004 with a 2010 completion target. It began fuel loading in October 2025, fifteen years late. Construction timelines routinely double. Scaling from 8 GW to 100 GW means building 4 to 5 GW per year, a pace India has never sustained. The AERB, now tasked with regulating private nuclear operators, has spent its entire history as a modest intra-governmental body. Whether it can oversee multiple private companies deploying new reactor designs at unprecedented speed is genuinely unknown.
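The required pace is simple arithmetic (a sketch using the figures above; the 2047 horizon and 8.8 GW baseline are from the text, and the 2026 start year is an assumption):

```python
# Required annual build rate to hit India's 100 GW nuclear target by 2047,
# starting from the 8.8 GW installed today (figures cited above).
current_gw = 8.8
target_gw = 100
years_remaining = 2047 - 2026  # assuming the clock starts now

required_per_year = (target_gw - current_gw) / years_remaining
print(f"Required average build rate: {required_per_year:.1f} GW/year")
# ~4.3 GW/year, every year for two decades, versus 8.8 GW built since independence
```

Every year of slippage raises the required rate for the years that remain, which is why the execution record matters more than the target.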
The SHANTI Act solved the constraint that had been binding for fifteen years. The moment it did, the next one appeared: execution speed and regulatory capacity. This is what reforms do. They remove one constraint, and you discover the next.
India’s Startup Ecosystem
India has one of the highest‑funded tech startup ecosystems globally (ranked third in 2025 by equity funding). I know this firsthand through my work with Emergent Ventures. I supported space tech and deep tech startups five years ago, before they became fashionable and found their way into government schemes and agendas. Those space startups emerged not because the government launched a new scheme but because it got out of the way and liberalized the sector to allow private entry in 2020. Startups want early investment that is quick, flexible, and unbureaucratic. Drown them in paperwork to get a grant, and you do more harm than good.
For startups, the entities that do the most consequential foundational model work in every other major AI ecosystem, India's constraints go beyond talent. They are capital, compute, and a tax regime that punishes the upside for investors. Private capital and venture funding have driven the AI agenda elsewhere, but in India, uncertainty looms large.
In January 2026, the Supreme Court ruled that Tiger Global's $1.6 billion gain from selling its Flipkart stake to Walmart in 2018 was taxable in India. Tiger Global had routed the investment through Mauritius-based entities, as virtually every major foreign fund investing in Indian startups had done for two decades. The India-Mauritius Double Taxation Avoidance Agreement had historically exempted such gains. An amendment removed the exemption going forward from April 1, 2017, but grandfathered earlier deals. Tiger Global's Flipkart shares were acquired before the cutoff. The Delhi High Court agreed in August 2024 that the gains were exempt. The Supreme Court reversed. It found that Tiger Global's Mauritius entities lacked genuine economic substance, that real decision-making rested with individuals in the United States, not nominal directors in Mauritius, and invoked India's General Anti-Avoidance Rules (GAAR) to hold that a valid Tax Residency Certificate is necessary but not sufficient to claim treaty benefits.
The legal reasoning is thinly defensible. Tiger Global’s Mauritius entities were flimsy structures. Substance-over-form is a legitimate principle in tax law. But the government did not simply close a loophole for the future. It taxed enormous gains made in good faith under rules that existed when the investment was made. In fact, it knowingly went after a very profitable firm that completed the sale within the grandfathered period. And it did not stop with Tiger Global. Within weeks, the Income Tax Department issued notices to at least seven other foreign VC and PE firms, seeking detailed information about their Mauritius and Singapore operations. The ruling could set a precedent for tax probes on high-frequency trading firms as well. Every major foreign fund now faces increased scrutiny on offshore holding structures and must demonstrate commercial substance beyond documentation.
Tiger Global was not a marginal player. It backed Flipkart as early as 2009, and between 2013 and 2021 it invested in Razorpay, Dream11, Groww, Meesho, ShareChat, and dozens of other Indian startups. As of late 2025, it held stakes in 20 to 30 active Indian companies valued at an estimated $2 to $4 billion. Mauritius's share of FDI equity inflows to India from January 2000 to December 2024 was 24.85 percent!
This is the pattern that now constitutes a known risk factor. One arm of the Indian government woos foreign investors. The headlines say India has changed, there is money to be made, there is ease of business, opportunity. The whole Prime Ministerial roadshow and trade machinery runs on this pitch. But the moment a fund or investor makes serious money in India, the tax department comes after the profits. If it goes to litigation, the courts apply what is legally defensible rather than what is economically sensible. By then a decade has passed and no one in government connects the court ruling to the original headline that brought the capital in. Except it has now happened often enough that everyone outside India can connect it.
It is the number one question I am asked about India by foreign investors.
The retrospective tax amendment of 2012 targeted Vodafone’s acquisition of Hutchison’s Indian telecom assets, imposing capital gains tax on a transaction that predated the law. It took nearly a decade, an international arbitration loss, and a 2021 legislative reversal to undo the damage. The angel tax, Section 56(2)(viib) of the Income Tax Act, also introduced in 2012 to combat money laundering, taxed startups on the difference between capital raised and the government’s assessment of fair market value. In 2023, the government extended it to foreign investors, hitting the ecosystem during the worst funding winter in a decade. It was finally abolished in Budget 2024, effective FY 2025-26. Each episode reinforces the same signal. India’s tax regime is unpredictable, and gains that appear exempt today may be taxed tomorrow.
Private venture capital built OpenAI, Anthropic, Mistral, and every other frontier AI company. That capital becomes harder to attract when the tax treatment of exits is uncertain. Indian startup funding fell to $10.5 billion in 2025, down 17 percent from the year before. The funding winter that began in 2022 has not fully thawed. Foreign funds that once wrote large checks into Indian AI companies now price in regulatory and tax risk that their competitors in the US, UK, and France do not face. The result is that Indian AI startups increasingly depend on the government for what private capital would otherwise provide.
The Indian government has built a two-channel public-capital stack. The first channel is the Startup India Fund of Funds 2.0, ₹10,000 crore ($1.11 billion) approved by the Union Cabinet as of mid-February 2026 to mobilize venture capital and support deep tech, tech-driven manufacturing, and early-growth startups. This sits under the Ministry of Commerce and Industry, not MeitY.
The second channel is MeitY’s own targeted programs. SAMRIDH, a co-funding model routed through accelerators that matches funding to startups up to ₹40 lakh ($44,444), is meant to de-risk early commercialization and make startups more legible to private investors. GENESIS is oriented toward scaling technology startups beyond metro hubs. The Electronics Development Fund (EDF), the most VC-native MeitY pathway, invests through venture funds rather than picking startups directly and has deployed capital across multiple funds supporting startups in AI, robotics, cybersecurity, and drones.
Beyond this, the IndiaAI Mission offers subsidies of 40 percent for general AI workloads and 100 percent compute for certain foundational model development. The GPU cluster has exceeded 38,000 units, available at ₹65 ($0.72) per hour, a fraction of commercial cloud rates. Initially four startups, Sarvam AI, Soket AI, Gnani AI, and Gan AI, were selected from 67 applicants (out of 506 proposals submitted) to build foundational models. Sarvam received 4,096 NVIDIA H100 GPUs and a compute subsidy of ₹98.68 crore ($10.96 million). By Feb 2026, the number of startups selected increased from four to twelve.
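As a rough illustration of what the compute subsidy buys, here is a back-of-envelope conversion of Sarvam's cited ₹98.68 crore subsidy into GPU-hours at the quoted ₹65 portal rate (a sketch only; the actual contract terms are not public in this detail and will differ):

```python
# Illustrative only: converting the cited compute subsidy into GPU-hours
# at the quoted portal rate. Real pricing and allocation terms may differ.
subsidy_inr = 98.68 * 1e7       # ₹98.68 crore; 1 crore = 10 million rupees
rate_per_gpu_hour = 65          # ₹65 per GPU-hour, the portal rate cited above
gpus = 4096                     # H100s allocated to Sarvam

total_gpu_hours = subsidy_inr / rate_per_gpu_hour
hours_per_gpu = total_gpu_hours / gpus
print(f"Total GPU-hours covered: {total_gpu_hours:,.0f}")   # ~15.2 million
print(f"Per GPU: {hours_per_gpu:,.0f} hours (~{hours_per_gpu / 24:.0f} days of continuous use)")
```

On these assumptions the subsidy covers roughly five months of continuous use of the full 4,096-GPU allocation, a meaningful but bounded training budget.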
These are real resources. But the terms reveal the difference between government and private capital.
The IndiaAI Mission’s funding is structured as compute-for-equity, with a central government body taking an equity stake in Sarvam AI in exchange for compute resources. The model was initially not going to be open-sourced, raising the obvious question of whether public funds should produce proprietary technology, until public pressure from founders and open-source advocates forced a reversal. Access to the compute portal requires registration through government identity systems (DigiLocker, Parichay, ePramaan), submission of a project proposal to a Project Management Evaluation Committee, and approval based on criteria including “projects of national importance.” The Takshashila report warned that few projects would qualify and that bureaucratic friction would leave compute capacity underutilized.
Mistral raised €385 million in a Series A without a government committee evaluating whether its project served national importance. Anthropic raised a $450 million Series C led by Spark Capital, with Google participating, without surrendering equity to a federal agency. These companies could do this because private capital markets in the US and Europe function with tax certainty. Investors know how exits will be taxed, treaty structures are respected, and gains are not retrospectively reclassified.
Startups will take grant money. Of course they will. But a ₹40 lakh ($44,444) SAMRIDH match or a compute subsidy does not shape a business. A venture fund does not require quarterly compliance reports or project proposals evaluated by committee. It does not take equity in exchange for cloud credits. It does not impose conditions about which datasets must be used or whether models must serve “national importance.” It writes a check, takes a board seat, and helps, or at the very least, lets the founders build. In a field where the technology changes every six months, the flexibility differential between government and private funding is the binding constraint.
Grants also quietly turn founders into part-time bureaucrats. The Financial Times reported companies spending up to 3,000 hours per application to access the EU Innovation Fund. The U.S. Government Accountability Office has warned that paperwork burdens redirect resources away from productive activity, especially for small businesses that lack spare staff to absorb compliance. The constraint is not ideas but attention. Worse, grant incentives push startups toward measurable proxies that committees can score, such as patents, certifications, and milestone narratives written for evaluators rather than customers.
India’s own startup schemes are a case in point. DPIIT recognition, state startup cells, and various MeitY programs all incentivize patent filing through fee rebates, expedited examination, and reimbursement of filing costs. The result is predictable. Startups file patents to satisfy grant criteria and then abandon them, because the patents were never meant to protect a product. They were meant to check a box.
A 2024 study of a Chinese tax incentive tied to “high-tech enterprise” certification found the same pattern, evidence of strategic patenting behavior around certification events, filings rising because filings were rewarded. Recent Harvard Business School research on the U.S. Small Business Innovation Research program emphasizes that SBIR-backed businesses pursue fundamentally different strategies than venture-backed firms, reflecting that the programs are designed around different frictions.
That difference can be fine for public goals like spillovers and national needs. But it means founders chasing grants drift into a parallel universe where the customer is the application reviewer. Grants are optimized for accountability to the state, not accountability to the market. The paperwork is the price, and the patent-chasing is the predictable gaming of whatever metrics the bureaucracy can count.
What shapes a business is talent, market, and certainty that when money is made it will not be expropriated. No grant program, however well designed, substitutes for the $50 million Series B that lets a foundational model company hire the researchers and buy the compute to actually compete. That money comes from private venture capital. And private venture capital requires something the Indian government has repeatedly failed to provide, which is predictability. Not low taxes, necessarily. Just the confidence that the rules in place when an investment is made will still be the rules when the returns come in.
India’s Got Talent
None of this means India lacks the talent. The opposite is true, and the advantage is more specific than people realize.
Indian IT services companies and engineers have not just built outsourcing operations. They have built globally competitive IT and SaaS businesses and, more importantly, they have spent decades doing the work that most AI commentary treats as an afterthought. Enterprise software integration. Connecting legacy systems, handling edge cases, building middleware, managing the messy plumbing between what a product does in a demo and what it does inside a bank or a hospital or a supply chain. This is the bread and butter of Indian engineering, and it is about to become the bottleneck for AI everywhere.
The pattern is now familiar. A new foundational model appears, benchmarks improve, commentators declare that everything will change. And then the model has to be deployed inside an actual enterprise, with actual liability exposure, actual regulatory constraints, actual legacy databases, and thousands of special cases that no training run anticipated. Deploying AI in the real world will require a vast amount of integration work, sandboxing for liability, niche customization, and human-in-the-loop oversight. Much of that integration will be code that humans write using AI models; we are not yet at a stage where AI writes the product code, integrates it seamlessly, and troubleshoots it on its own. Every large company adopting AI will need people who can do this. India has more of those people, with more relevant experience, than any other country at comparable cost. This is not glamorous work. It rarely makes headlines. But it is the layer where most of the economic value of AI will actually be captured.
India is also exceptionally good at adoption. The digital public infrastructure story demonstrated this. When UPI, Aadhaar, and the India Stack went live, the Indian startup ecosystem did not wait for a government directive. It immediately started building on top of it, in service of it, and using it to reach hundreds of millions of users. There is no reason to believe the same is not happening with AI.
I see applications for Emergent Ventures that confirm this constantly. If anything, there are too many AI-application startups applying for a grant with the promise of transforming niche sectors. Not all of them will make money. Many are chasing the same narrow use cases. But the sheer volume and speed of experimentation is itself a signal. The talent is there. The startup ecosystem is there. The instinct to build on top of new infrastructure the moment it becomes available is there. This is what is missing in most other developing countries, and it is not something a government program can manufacture. It comes from having a deep bench of engineers who have spent years shipping production software for global clients, combined with a domestic market large enough and digitally literate enough to absorb new products fast.
A little less conversation, a little more action, please
For most developing countries trying to become AI powers, the binding constraint is obvious. They lack the talent, or the capital, or the market. India has all three. What it lacks is the ability to convert them into operating infrastructure, and that problem predates AI by decades.
Every section of this essay follows the same pattern. The government identifies a genuine need, commits real money and political capital, and the project stalls somewhere unglamorous that nobody in Delhi was paying attention to. The soil under the fab. The compliance process around the subsidy. The balance sheet of the electricity distributor. The tax ruling that arrives ten years after the investment. These are not AI problems. They are the problems India has been deferring across every sector, stunting its manufacturing and structural transformation, now converging on the one sector where speed matters most.
India’s AI regulatory framework is, against considerable precedent, sensible. Its talent pool is globally significant. Sarvam has shown that narrowly trained models built for Indian languages and documents can beat the frontier labs on specific tasks that matter most to Indians. The Summit this week will produce headline investment numbers and photo-ops with CEOs and commitments of capital. None of that is fake. The ambition is real, the money is increasingly real, and the government’s instinct to avoid premature regulation is worth more than most people in the AI governance world appreciate.
But ambition has never been India’s problem. The gap between announcement and implementation is the malady. Whether India becomes an AI power will not be decided by most of the discussions at the Summit. It will be decided by whether someone in a state capital fixes the electricity pricing structure that makes data center power uneconomical, whether the PLI compliance architecture gets simplified enough for firms to actually claim the subsidy, whether the next generation of researchers stays or leaves, whether foreign venture capital is welcomed. Boring, iterative, state-level, ministry-by-ministry reform. The kind that never makes the curtain raiser.