OpenAI CEO on the exponential acceleration of Artificial Intelligence (AI), India’s employment anxiety and AI’s geopolitical faultlines. He was in conversation with Anant Goenka, Executive Director, The Indian Express Group
No. I think this is generally true for many companies — the person running the company gets way too much credit relative to the work everybody else does. But in our case, in particular, this has really been a story of a scientific discovery.
There were a handful of researchers who did something close to a miracle. They figured out something very deep about how the world works. What is so special about deep learning — and what has led it to be so general — is that this small set of researchers discovered an algorithm that can learn anything. And at scale, it just keeps getting better. Then you had a whole ecosystem of people who figured out how to deliver that scale — how to build data centres, how to optimise the training, how to handle the inference. And then, the world figured out how to build products around that. So if there’s a tectonic shift, it belongs first to the scientists.
Before talking about what’s changed in India, I think it’s important to talk about what’s changed in AI. A little over a year ago, AI could do high school math. That was incredible. It could do a very good — not perfect — job at Grade 11 mathematics. And people were genuinely in awe. Just a couple of years before that, it struggled with grade-school math.
By last summer, AI was competing in the hardest mathematics competitions in the world. And recently, mathematicians put out 10 unsolved research-level problems. Our latest model solved seven of them correctly. That’s not incremental progress. That’s a jump from ‘very bright student’ to ‘pushing the edge of human knowledge.’
The same thing is happening in physics. And this happened in about a year. Now to India, the biggest shift, I feel, is psychological. The last time I was here, India felt like a consumer of AI. People were using it, experimenting with it. Now the builder energy is off the charts. India is our fastest-growing market for Codex (an AI-assisted software development tool released by OpenAI). That’s remarkable. At IIT this morning, the energy was electric. It doesn’t feel like people asking, ‘What is AI?’ It feels like people asking, ‘How do we build with this?’ That’s a big change.
That comment was taken out of context. The question was whether you could build a frontier model — meaning the most advanced model in the world — for $10 million. Even then, I didn’t think you could. And today, it’s even more true. Frontier models are incredibly expensive.
But that’s different from saying India can’t build world-class AI companies. Of course, it can. And it is. There is a huge difference between ‘frontier model training at global scale’ and ‘deep, valuable innovation.’ Many of the narrow, specialised models and application-layer companies coming out of India are fantastic. India absolutely has the talent. The constraint isn’t intelligence. It is capital and infrastructure.
It’s going to change a lot. And it’s never helpful to pretend otherwise. The job of a programmer has changed more in the last year than in any year since I have been an adult. We have gone from autocomplete to ‘type an idea and get an application.’ But every step forward in computing has caused panic. And every time, the abstraction layer rises.
People move from writing assembly to writing higher-level languages; from writing functions to designing systems. Now they move from writing code to describing intent. The amount of software produced will explode. Expectations will rise. But as long as companies and countries adapt quickly, there will be plenty to do.
I’m not a jobs doomer. The promised leisure society never came. Humans always invent more things to want, more ways to express themselves, more problems to solve. India’s demographic reality makes this question more urgent. But urgency can be an advantage. Countries that adapt fastest win fastest.
When AI started generating images, people said, ‘Graphic artists, that’s over.’ That might be true for the kind of graphic artist job that was making someone’s birthday card invitation or something.
But for fine art, the price of AI-generated art is zero, and yet the price of human-made art has continued to go up since this happened. There are many things like that, where we care about the person who does the work. Another example: I really cared about the nurse who was taking care of me when I was in the hospital recently. If that had been a robot, I think I would have been pretty unhappy, no matter how smart the robot was.
A lot. The main themes I hear globally are infrastructure, jobs, distribution of benefits and safety. Everyone asks some version of: ‘What should my kids study?’
If you study history, especially primary sources from the Industrial Revolution, you see that people were spectacularly wrong in predicting future jobs. No one predicted the YouTube influencer. No one predicted the AI safety researcher.
So instead of guessing specific careers, I think about durable skills: adaptability, fluency with AI tools, resilience, creativity, collaboration. The change won’t happen overnight. Society has inertia. But eventually, it will be huge.
Whether you subscribe to a five-layer cake or a seven-layer cake, I think India should play at all levels. Vertical integration matters. For an economy of India’s size, it is important not to be dependent at critical layers — energy buildout, data centres, chips, models, applications. Different layers require different strengths. India already has world-class application-layer talent. It is building capability in semiconductors. Energy is improving. The Prime Minister is clearly motivated to compete at all levels. That ambition is important.
Not yet. If you ask people how many GPUs (Graphics Processing Units) they would like working for them — thinking about their problems, running their robots, writing their code — no one says less than one. Some say a thousand. Multiply that by eight billion people. We cannot deliver eight trillion GPUs; not on Earth. But that thought experiment shows the scale of ambition required. This will be the most expensive infrastructure buildout in human history. But AI and robotics will help us build it. It would have been impossible the old way. Now it’s just extremely hard.
Orbital data centres are not happening this decade. The launch costs alone make it impractical. And fixing broken GPUs in space is not easy. Space will matter eventually. But right now, terrestrial infrastructure is where the action is.
Government is important not just for building infrastructure but also because of the level of impact this is going to have on society and the need to truly democratise this technology. Governments are going to have to be involved and companies like ours are going to have to partner with governments.
The tech industry started out with an extremely libertarian ‘we don’t need the government, the government doesn’t need us’ sort of view. That has changed a lot over the last couple of decades, even before AI, as the companies got bigger and more central to the economy and the way the whole world works. But maybe never before has the relationship been this important, just given the scale of the infrastructure that needs to happen.
I would say close in some ways and not so close in others. There are some tight ties, and this administration has also had some big criticisms of tech. Close cooperation between tech companies and the government is going to become increasingly important over time. It obviously won’t be a perfectly smooth relationship but the better it can be, the better for all of us.
I suspect AI will become one of the most important political issues in the world; one of the highest order bits of geopolitical tension and cooperation. But I don’t think it will be a fixed thing. As it develops, political alliances will shift over time.
China, I would say, is ahead in some areas and not ahead in others. In terms of manufacturing physical robots, it is clearly ahead and has a big edge on things like electric motors and magnets. It is clearly ahead on energy buildout as well.
But then, there are places where I think we are ahead of them, and my guess is that this is how it has always been and how it will continue to be. It’s hard to be ahead on everything. Maybe if you had the only superintelligence in the world, you could do it, but that would actually be bad.
I don’t think there should be any single superintelligence in the world. There should not be any one person or any one country or any one company in charge of superintelligence, including the United States. The world is at its best when power is widely distributed, when people have a lot of different ideas and when there’s enough of a balance of power that we can keep each other in check. You don’t literally want one AI in charge of the world, no matter who has it.
That’s one of the most important questions of our time. You can imagine a world where AI massively concentrates power — one entity controlling it all. You can imagine chaos — everyone having superintelligence with no rules. Reality will be somewhere in between. I believe in broad distribution with guardrails.
The clearest signal already is this: one- or two-person startups now have extraordinary leverage. That was impossible a few years ago. AI lowers the cost of execution dramatically. That is decentralising. But frontier model training is capital-intensive. That concentrates. So both forces exist simultaneously. The outcome depends on policy, culture and how quickly tools become widely available.
You’ve got to give the internet something to laugh at. It is definitely competitive.
It is. It is a weirdly small world for sure. I think it is very competitive commercially but among the groups building frontier models, there is also serious commitment to safety and alignment. Competition accelerates innovation. Cooperation is essential for safety. We need both. And the truth is, this technology is too important for any one actor to ‘win’ in the traditional sense.
I don’t really have that much more to add.
The shift from consumption to creation. India has the scale, the talent, the demographic energy. If India combines that with compute infrastructure and bold policy, it could surprise the world. The countries that adapt fastest to abstraction shifts win. Right now, India feels like it wants to adapt fast.
The first thing I admire is that Demis Hassabis and the Google team started working on AI long before it was fashionable and they did so with deep conviction. Without their early inspiration, I don’t think we’d be here.
The second thing is their recent execution. They were behind but they refocused, scaled aggressively and improved quickly. That ability to regroup and execute at scale is impressive.
No one knows yet. What I’m happy about is that different countries are experimenting. Over the next few years, we will see what works and what doesn’t.
Unless we push hard to democratise, the world needs to hold companies and governments to a high standard. If AI is going to reshape the world, it must be broadly accessible.
The idea that one ChatGPT query uses gallons of water is not connected to reality. It used to be true when we relied more on evaporative cooling in data centres. Energy consumption, however, is real at a system level. We need to move faster toward nuclear, wind and solar energy.
It’s way less.
People measure how much energy it takes to train an AI model and compare it to the energy for one human answering one question. But humans require about 20 years of development — food, shelter, infrastructure — to become capable of answering questions. And that’s built on centuries of human evolution. The fair comparison is: once trained, how much energy does it take for AI to answer a question versus a human? Measured that way, AI is probably already competitive in energy efficiency.
True for some kids! There are kids who say, ‘This is great, I cheated my way through school.’ That’s worrying. But most kids say, ‘Look at what I can build now.’ They’re creating new workflows, learning faster, experimenting more. When Google first came out, teachers thought memorisation was dead. But education adapted. We will need new ways to evaluate learning and creativity. But overall, AI will increase what students are capable of.
Everywhere. As more people use the technology, fewer want to totally pause it. Instead, they ask, ‘What does this mean for us? Can it go slower? Can we have more input?’ That’s a healthier debate.
Move fast.
Move faster.
The imagined fear is humanoid robots marching through cities. The real fear is cyber conflict — AI being used to influence populations, hack infrastructure, manipulate information.
I’m super worried about that. Increasingly, fear of AI going wrong is used to justify surveillance. People haven’t fully thought through the downsides of a Surveillance State.
Democratising AI and staying at the frontier of research requires huge capital.
Research-first. It almost automatically creates a good product.
That was one of my dumbest decisions. We were a nonprofit, and I didn’t care financially. But it created unnecessary conspiracy theories. It wasn’t worth it.
I am going to think of something but give me a minute. He is extremely good at physical engineering and at getting people to perform incredibly well at their jobs.
Heavy restrictions make sense but not a total ban.
AI systems today are not reliable enough for war planning. But they can assist in analysing large volumes of information. There may be defence uses someday. For now, we must be cautious. That said, we certainly want to support the government and there’s a lot of things we can do already.
I just don’t know.
AGI feels close. If you had asked people years ago whether systems capable of independent research, writing complex programs and performing professional knowledge work counted as AGI, they would have said yes. We adapt quickly to new capabilities. Superintelligence may be closer than I once thought.
How to be happy. I would rather ask a wise person.
For therapy or life advice, AI can be useful. For deeper philosophy of life, I would still turn to humans.
Both are unlikely.
Relentless focus and optimisation. They just keep improving at every level.
Imagine technology that understands your life context, integrates naturally and isn’t intrusive. That’s the goal.
Focus on catastrophic risks first. Be more flexible about smaller issues until we understand them better.
I was in a meeting recently with a big company that was planning to spend 2026 strategising, 2027 getting the company ready and 2028 deploying. That may work for other kinds of technology. Apparently, if you do like a giant ERP (Enterprise Resource Planning) migration, that’s the kind of timeline it takes. Doing that for AI will be a catastrophic mistake. The nimbleness required, the speed, the commitment required is just totally different.
Democratise AI. Put it in people’s hands. No other strategy will work.
I never said India can’t, but the non-profit one.
OpenAI is described as a research-first company. Researchers are driven by breakthroughs and discoveries. Given AI’s growing power, shouldn’t responsible AI receive equal focus?
Yes, absolutely. Responsible AI has been part of our DNA from the beginning. As we move closer to extremely capable systems — potentially superintelligent ones — that responsibility becomes even more critical. What I’m proud of is that our researchers genuinely internalise this. The people who succeed at OpenAI aren’t just pushing capability forward; they are constantly thinking about safety and impact at the same time.
Rajesh Magow
Founder, MakeMyTrip
As AI advances, are your safety checks and balances evolving at the same pace?
Safety is a major focus for us — both as a company and for me, personally. There’s always tension. Sometimes we may be too conservative and restrict access more than necessary. Other times, critics argue we are moving too fast.
Balancing democratisation with safety is difficult. Our principle has been to start conservatively and then broaden access as we gain confidence. So far, that approach has worked well and we intend to continue refining it.
Hazel Siromoni
Pro-Vice Chancellor, Chitkara University
Which professions do you believe are most at risk because of AI?
Many professions, as currently defined, will largely disappear. For example, I trained as a software engineer. The way I learned to write software — manually coding line by line — is now largely obsolete.
That doesn’t mean software engineering disappears. It evolves. The work changes. There will be entirely new professions created. Some jobs will transform dramatically. Some may change very little. But large categories of work will need to adapt in fundamental ways.
Abhishek Khaitan
Managing Director, Radico Khaitan
As a creator, I use ChatGPT often. What concerns you more: AI becoming too powerful, or humans becoming passive and overly dependent on it?
I don’t think humans will become too passive. What I’m seeing, particularly among creators, is faster iteration. AI shortens the loop between idea and feedback. You try something, refine it, improve it, repeat. That produces better outcomes. When image generation first appeared, people predicted the end of creativity. We have seen more experimentation. I used to worry about passivity but that doesn’t seem to be the dominant pattern.
Stuti Gupta
Director, Terrasoul Polymer