Earlier this month, Anthropic’s Claude Mythos triggered something close to institutional alarm in Washington.
The model proved so capable of autonomously identifying software vulnerabilities that its own creators deemed it too dangerous for public release. America’s Treasury Secretary summoned the largest banks for emergency talks.
The Pentagon had already intervened when Anthropic refused to allow its model in autonomous weapons systems. Opinion surveys now show seven in ten Americans believe AI will cost them their jobs. The backlash has turned political.
This is America reckoning with something America built. Its legislators can compel testimony. Its regulators can threaten intervention. Its treasury secretary can call the architects of this technology into a room and demand answers. The conversation, however late and however messy, is at least happening between people who made the decisions and people who can unmake them.
India made none of these decisions.
ChatGPT was built in San Francisco.
Claude was built in San Francisco.
The compute infrastructure that runs them is owned by American capital and, in growing measure, steered by Chinese ambition.
The safety standards, such as they exist, were written by researchers in California and London. The terms of deployment were set by boards that have never considered what population-scale adoption means in a country of 1.4 billion people with India’s specific economic structure, its specific security vulnerabilities, its specific civilisational stakes.
Open your phone. Claude answers your questions, drafts your emails, and summarises your contracts. ChatGPT tutors your child, writes your code, and generates your marketing copy. Gemini sits inside the tools your office already uses.
These are not future scenarios but present realities, spreading quietly across Indian institutions, Indian households and Indian workflows at a pace that no policy document has tracked and no parliament has debated with any seriousness.
When the consequences arrive - and they are already arriving - India will be managing a crisis it inherited. There is a particular kind of strategic helplessness in that position, and it is worth naming clearly before it becomes irreversible.
India’s development story has rested for two decades on a single structural bet that a young, large and increasingly educated workforce would find its way up a services escalator, from low-skill labour to mid-skill employment to the knowledge economy.
This escalator was always imperfect and slower than the demographic pressure demanded. AI threatens to dismantle it while it is still being climbed.
The entry-level coding job, the junior legal researcher, the back-office analyst, the customer service professional, all these are precisely the rungs by which India’s aspirational classes were supposed to ascend.
They are also precisely the categories that ChatGPT, Claude and their successors are most rapidly making redundant. The compression is not gradual; it leaves retraining programmes no time to respond.
India’s demographic dividend, celebrated across three decades of economic planning as the country’s defining structural advantage, becomes a demographic burden the moment the economy cannot absorb the cohorts entering it. The National Education Policy speaks of AI literacy as an opportunity. It does not speak of AI displacement as an emergency. That silence needs to be replaced with a policy position.
Infrastructure In The Crosshairs
Claude Mythos alarmed Washington because it had become dangerous in a specific infrastructural sense. A model capable of autonomously finding and exploiting software vulnerabilities at machine speed is, in the hands of an adversary, a weapon. Banks, hospitals, power grids, water systems - the architecture of a functioning modern state - become vulnerable in ways that human attackers, constrained by time and cognitive bandwidth, never made them.
India’s digital infrastructure has expanded rapidly. Aadhaar links over a billion identities. UPI processes hundreds of millions of transactions daily. Digital health records, smart city systems and defence-adjacent networks are all deepening their digital footprint at pace. This is genuinely impressive. It is also a dramatically expanded attack surface, designed for a threat environment that predates AI-enabled cyber warfare.
We do not know what India's security establishment has assessed about its exposure. Whether scenarios involving AI-enabled attacks on the UPI stack, on Aadhaar-linked systems, on hospital networks or power distribution infrastructure have been seriously war-gamed is not something any official has addressed.
The absence of public discussion is not evidence of private preparedness.
The China We Cannot See
America’s anxiety about AI has a defined object. It knows what it built. It knows roughly what China is attempting to replicate and surpass. The competition is visible, documented and increasingly the organising principle of American technology policy. Washington may be late and imperfect in its response. It is at least responding to something it can see.
India’s position is more exposed. China’s AI development is not primarily oriented toward the consumer applications that have made Claude and ChatGPT household names. It is oriented toward strategic capability with surveillance infrastructure, autonomous systems, influence operations and cyber tools. How far that development has progressed, what has already been deployed and how much of it is pointed in India’s direction are questions without public answers. The Line of Actual Control has a digital dimension that receives almost none of the attention devoted to its physical one.
India fought a war of sorts with China in the information space during the border tensions of 2020. App bans followed. Restrictions on Chinese hardware in sensitive networks followed. These were responses to threats that had already manifested. In the age of AI-enabled influence operations and autonomous cyber tools, waiting to respond until the threat has manifested is not a strategy. It is a concession made before the game begins.
The Leapfrog Illusion
India has told itself a flattering story about technological leapfrogging. Landlines were skipped and mobile telephony arrived directly. Branch banking was bypassed and UPI emerged as a model others now study. The story is true. It has also calcified into a cognitive habit: the assumption that adoption can precede governance, that the risks of a transformative technology can be addressed after its benefits have been captured.
That assumption was defensible when the technology was a payment system or a telecommunications network. The downside of getting it wrong was inconvenience or, at worst, financial loss. The downside of getting AI governance wrong is of a categorically different order. Infrastructure failure. Mass structural unemployment. Civilisational dependency on systems whose values, safety parameters and ultimate loyalties were determined elsewhere.
There is also a subtler danger in the leapfrog story. It contains within it a quiet abdication in an assumption that the hard work of governance will be done by someone else, upstream, and that India will arrive to find a stable and mature technology ready for responsible adoption. That assumption requires America to get its regulatory act together, requires the AI companies to behave responsibly under pressure, requires China’s ambitions to remain legible and contained and requires the technology itself to slow down enough for institutions to catch up. None of these requirements has any strong basis in current evidence.
What A Participant Does Differently
The difference between a market and a participant is operational, not rhetorical.
A participant shapes safety standards before products are deployed at scale in its territory. It insists on data governance arrangements that reflect its sovereign interests rather than the convenience of foreign platforms. It builds domestic capability to understand frontier models well enough to regulate, negotiate and, where necessary, reject them. It participates in the international conversations where the norms of this technology are being written rather than discovering those norms as accomplished facts.
India has the standing to do this.
A country that represents one of the world’s largest and fastest-growing user bases for these platforms is not without leverage. It simply has not chosen to use that leverage as a governance instrument.
The regulatory conversation with Big Tech has too often been about content moderation and data localisation — real issues but secondary ones compared to the foundational questions of AI safety, deployment standards and AI-enabled risks to critical infrastructure.
The Question That Remains
If America, with its institutional depth, its proximity to the technology and its direct leverage over the companies building it is only now, imperfectly and under political pressure, beginning to govern artificial intelligence, what exactly is India’s plan?
This is not a rhetorical question. It deserves a specific answer, from specific ministries, with specific timelines and specific accountability.
The consumer nation that waits for the product to mature before asking who governs it will find, when it finally asks, that the rules have already been written, the dependencies already embedded and the leverage already gone.
Bharat has navigated civilisational challenges before by absorbing, adapting and, where necessary, resisting forces that arrived from outside, not of its own choosing.
That capacity for strategic discernment is precisely what this moment demands. The age of artificial intelligence is here. The question is whether India enters it as a nation that shaped its terms or one that simply absorbed its consequences.
Disclaimer: The views expressed in this article are those of the author and do not necessarily reflect the views of the publication.