Every Country Is Adopting AI. Very Few Are Asking What Kind.

Every great leap forward casts a shadow. We just rarely look at it.

History has a pattern. The printing press gave us the Reformation and the pamphlet. It also gave us the propaganda leaflet. The compass opened up trade routes and the age of discovery. It also opened up the age of colonisation. Gunpowder lit up fireworks and festivities. And battlefields. Transformative technologies always carry a shadow story, a disruption running quietly beneath the triumphant narrative. The people living through the transformation rarely see it. They are too busy celebrating the light.

We are, right now, in the middle of one such transformation.

Artificial Intelligence is having its summit moment. The India AI Impact Summit 2026 in New Delhi brought together the luminaries, the announcements, the investments in data centres and cloud infrastructure, the excitement of a country positioning itself as a global hub. All of it warranted. All of it real.

And running quietly alongside it — almost in the shadow of it — was a different kind of conversation.

In January 2026, a closed-door strategic dialogue was convened at the India International Centre in New Delhi. NatStrat, in partnership with Strategic Foresight Group and Founding Fuel, brought together voices from national security, science, policy and academia. Not to celebrate AI. To interrogate it.

The dialogue produced a special policy report — India’s AI Gambit: Navigating the Global Race — and a rich public record in video and writing. Founding Fuel has curated the most consequential ideas from this conversation in two pieces worth reading carefully: India’s AI Gambit: The Choices That Will Define Power and India’s AI Moment by Ambassador Pankaj Saran.

The shadow, it turns out, reveals quite a bit about what is coming.

The Arguments Worth Sitting With

The central provocation from this dialogue is deceptively simple: every country is adopting AI, but very few are asking what kind of AI power they want to become.

Most of the public conversation — including most of what you will hear at summits — concentrates on the consumption side. AI for productivity. AI for inclusion. AI for efficiency. 

These are real and important. And they are not the whole story.

Ambassador Pankaj Saran, former Deputy National Security Adviser and Convenor of NatStrat, writes in India’s AI Moment: “In both the United States and China, the centre of gravity in artificial intelligence is moving beyond consumer-facing applications and productivity tools. Increasing emphasis is being placed on advanced AI systems designed to accelerate scientific discovery and strengthen national security.”

The US has its Genesis Mission, coordinated by the Department of Energy, using AI for discovery science with national security implications. China, through the Chinese Academy of Sciences and related institutions, is pursuing the same territory — science and security, tightly integrated. The race is not just about who has the better chatbot. It is about who rewrites the rules of biology, chemistry and strategic power.

Sundeep Waslekar, President of Strategic Foresight Group, put it plainly at the dialogue: “The US and China are building AI for scientific discovery — new rules in biology and chemistry. They fear falling behind in science. We think we are doing fine.”

That last sentence carries some weight. The assumption that we are doing fine, while others are asking harder questions, is precisely the kind of complacency that history tends to punish slowly and then all at once.

Why These Voices Matter

Here is what I want you to notice about the people in this conversation.

They are not selling anything. They are not vendors of technology with a price attached. They are not optimising for applause at a summit.

Alok Joshi, Chairman of the National Security Advisory Board, spoke about building “assurance systems” and the balance between sovereignty and trusted outreach in diplomacy. Air Marshal S. P. Dharkar (Retd.), former Vice Chief of the Air Staff, articulated the governance tension plainly: “Centralised control. Decentralised execution. Without coherence, we fragment. Without flexibility, we slow down.”

Dr. V. K. Saraswat, Member of NITI Aayog and former Director General of DRDO, noted: “Strategic autonomy does not mean isolation. AI thrives on collaboration. The challenge is balancing openness with control.”

These are people whose professional lives have been spent thinking about consequences — national, institutional, long-term. The kind of thinking that is rarely represented on the summit stage, and almost never in the LinkedIn posts celebrating AI’s potential.

When people like this sit in a room and say we need to think more carefully, it is worth slowing down and listening.

The Scenarios Ahead

What happens if we do not ask these questions? A few possibilities, none of them dramatic, all of them consequential.

The first is the talent and capability gap. If the world’s leading nations are deploying AI for scientific discovery — new materials, new drugs, new energy sources — and we are deploying it primarily for productivity use cases, the gap between AI-as-tool and AI-as-power widens. 

We become very good consumers of a future that others are building.

The second is the governance vacuum. AI is interdisciplinary, as Dr. Preeti Banzal of the Principal Scientific Adviser’s office noted at the dialogue. One rule cannot govern all of it. Sectoral regulators need to adapt, coordinated through a whole-of-government approach. Without this, the default is fragmentation — or worse, governance that arrives after the disruption rather than before it.

The third is the sovereignty question. Advanced AI systems have already demonstrated the ability to generate hazardous knowledge in chemistry and biology, to interact with critical decision-support systems, to reshape cyber resilience. As Ambassador Saran writes, “questions of control, accountability and unintended escalation become increasingly salient.” These are not alarmist scenarios. They are the sober assessments of people with clearances and consequences.

The disruption of lives that AI will bring is as certain as its benefits. The printing press did not ask permission before upending the church’s monopoly on knowledge. Gunpowder did not pause for governance frameworks. The question is not whether disruption will come. It is whether we will have thought about it before or after.

What You Can Do

If you are reading this as a professional, a leader, a curious person trying to make sense of the moment — here is what I think is worth your attention.

Read the primary sources. Not the summaries. Not the LinkedIn posts. Read the Founding Fuel pieces. Watch the full dialogue on YouTube. The policy report by Strategic Foresight Group is available as a PDF. These are not long. They are dense with things worth thinking about.

Ask the shadow question. Every time you encounter an AI claim — a product, a policy, a prediction — ask: what is this not saying? Who benefits? What gets disrupted? What governance is missing? This is not cynicism. It is literacy.

Count the costs alongside the benefits. The printing press, the compass, gunpowder — we inherited the full story, shadow and all. We are in the middle of writing this one. The people who shaped the earlier transformations were not the ones who celebrated the loudest. They were the ones who thought the hardest.

The AI summit had its lights. Bright and real and warranted.

The shadow conversation happened quietly, in a room at the India International Centre, with people who spend their lives thinking about what comes next.

Both deserve your attention.

The shadow, especially.


If this resonated, you might enjoy A Small Defence Of Thinking — on why slowing down to think is its own kind of courage.
