The AI Revolution Is Underhyped | Eric Schmidt | TED

The arrival of non-human intelligence is a very big deal, says former Google CEO and chairman Eric Schmidt. In a wide-ranging interview with technologist Bilawal Sidhu, Schmidt makes the case that AI is wildly underhyped, as near-constant breakthroughs give rise to systems capable of doing even the most complex tasks on their own. He explores the staggering opportunities, sobering challenges, and urgent risks of AI, showing why everyone will need to engage with this technology to remain relevant. This discussion was recorded at TED2025 on April 11, 2025.

The Underhyped AI Revolution

Eric Schmidt argues that, despite all the talk, AI is actually underhyped. Many people's understanding of AI is limited to tools like ChatGPT, which, while impressive, only scratch the surface of what AI can do. ChatGPT's ability to write, mistakes and all, was a watershed moment for many, but that was two years ago. Since then, there have been huge gains in areas like reinforcement learning, which lets AI carry out complex planning and strategy.

Schmidt gives an example: these systems can now spend 15 minutes writing deep research papers, which reflects the enormous amount of computation involved. AI is moving beyond language processing to sequence processing (important for biology) and now to planning and strategy. The ultimate goal is for computers to run entire business processes, with different AI agents working together and communicating in natural language.

The Limits of AI: Energy and Data

One of the biggest challenges for AI is the sheer amount of power it needs. Schmidt estimates that the US alone would need an additional 90 gigawatts of power, roughly the output of 90 nuclear power plants, just to keep up with AI's energy demands. This is a huge national issue. Other countries are also building massive data centers, each needing as much power as an entire city.

While there are always improvements in algorithms that might reduce power needs, the demand for computation is still growing incredibly fast. Planning, for instance, requires a hundred or even a thousand times more computation than other AI tasks. This leads to the concept of "test-time compute," where AI learns while it plans, further increasing energy needs.

Another limit is data. We’re running out of unique data for AI to learn from, so we’ll need to start generating it. But Schmidt also brings up a deeper question: what’s the limit of knowledge? He wonders how AI can invent something truly new, like Einstein did, by seeing patterns in completely different areas. Today’s AI systems can’t do that yet, but if they could, it would require even more data centers and could lead to entirely new fields of scientific thought.

Navigating the Risks of Autonomous AI

The idea of autonomous AI, or "agentic AI," is a big topic. Some experts, like Yoshua Bengio, suggest halting the development of AI systems that can take independent action. Schmidt agrees that these concerns are valid, but he believes stopping development isn’t realistic in a competitive global market. Instead, we need to find ways to establish guardrails.

He uses an analogy: if AI agents started communicating with one another in a language we couldn't understand, we'd have to "unplug" them. In other words, we need to be able to observe what AI is doing. Key concerns include:

  • Recursive self-improvement: When AI learns on its own without human oversight.
  • Direct access to weapons: AI controlling military systems.
  • Self-replication: AI reproducing itself without permission.

Schmidt emphasizes that stopping AI development isn’t the answer; instead, we need to focus on control and transparency. This leads to the dual-use nature of AI, meaning it can be used for both good (civilian) and bad (military) purposes.

The Geopolitical AI Race

Schmidt highlights the intense competition between the US and China in AI. The US is largely developing closed, controlled AI models, while China is leading in open-source AI. Open-source models can spread around the world very quickly, which is dangerous when it comes to cyber and biological threats. He uses a stark example to illustrate the potential for conflict:

Imagine two countries, one six months ahead in AI development. The country behind might resort to extreme measures, like trying to steal code, infiltrate systems, or even bomb data centers, to prevent the other from gaining an insurmountable lead. This kind of thinking, Schmidt says, is already happening in discussions among nuclear powers. He believes we have about five years to have serious conversations about these foreign policy implications.

Balancing Freedom and Safety

One of the biggest dilemmas is how to moderate AI systems at scale without creating a surveillance state. Schmidt stresses the importance of preserving individual freedom. While it’s easy to build systems that restrict freedom, it’s also possible to build ones that are liberating. He suggests that proof of identity might be needed to combat misinformation, but it doesn’t have to involve revealing personal details. Technologies like zero-knowledge proofs could allow for verification without compromising privacy.
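Zero-knowledge proofs are what make "verification without revealing personal details" concrete. As an illustrative sketch (my own example, not anything described in the talk), here is a toy Schnorr identification protocol in Python: the prover convinces a verifier that they know a secret key without ever transmitting it. The tiny parameters are for demonstration only; real deployments use large primes or elliptic curves via vetted cryptographic libraries.

```python
import secrets

# Toy Schnorr identification protocol (illustration only).
p = 2039          # safe prime: p = 2q + 1
q = 1019          # prime order of the subgroup
g = 4             # generator of the order-q subgroup (2^2 mod p)

def keygen():
    x = secrets.randbelow(q - 1) + 1   # secret key, never shared
    y = pow(g, x, p)                   # public key
    return x, y

def commit():
    r = secrets.randbelow(q - 1) + 1   # one-time secret nonce
    t = pow(g, r, p)                   # commitment sent to verifier
    return r, t

def respond(x, r, c):
    # Response blends the nonce and the secret; alone it leaks neither.
    return (r + c * x) % q

def verify(y, t, c, s):
    # Accept iff g^s == t * y^c (mod p), which holds exactly when
    # the prover knew x such that y = g^x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x, y = keygen()
r, t = commit()
c = secrets.randbelow(q)               # verifier's random challenge
s = respond(x, r, c)
print(verify(y, t, c, s))              # True: identity proven, x never revealed
```

Soundness comes from the random challenge `c`: a prover who does not know `x` cannot answer arbitrary challenges, yet the transcript `(t, c, s)` reveals nothing about `x` itself.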

The Bright Future of AI

Despite the challenges, Schmidt is cautiously optimistic about AI’s potential. He envisions a future where AI can:

  • Cure diseases: By identifying druggable targets and reducing the cost of drug trials.
  • Advance science: Helping us understand things like dark energy and revolutionize material science.
  • Personalize education: Providing every human with a tutor in their own language from kindergarten onward.
  • Improve healthcare: Giving doctors and nurses AI assistants to provide perfect healthcare, especially in underserved areas.

He believes that the arrival of artificial general intelligence (AGI) and superintelligence is the most important thing to happen to human society in 500 to 1,000 years, and it’s happening in our lifetime. We just need to make sure we don’t mess it up.

Humans in an AI-Powered World

What will humans do when AI takes over many tasks? Schmidt doesn’t think we’ll all be sipping piña coladas on the beach. He believes humans will remain largely unchanged. Lawyers will have more complex lawsuits, and politicians will find new ways to mislead. The key economic shift is that AI will radically increase productivity. We’re facing a future where productivity could increase by 30% per year, something economists have no models for because it’s never happened before.
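The "30% per year" figure is easy to understate; compounding is what makes it unprecedented. A quick back-of-the-envelope calculation (my own arithmetic, not from the talk) shows what sustained 30% annual productivity growth would mean:

```python
# Compound growth at 30% per year: output multiplier after n years.
growth = 1.30
for years in (1, 5, 10):
    print(f"{years:>2} years: {growth ** years:.1f}x baseline output")
```

A decade at that pace means nearly fourteenfold output, which is why, as Schmidt notes, economists have no models for it.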

Riding the AI Wave

Schmidt’s advice for navigating this transition is to remember that it’s a marathon, not a sprint. The pace of change is so fast that what was true two or three years ago may be irrelevant today. His main message: ride the wave, and ride it every day; don’t treat it as a one-time event. Everyone, regardless of profession, whether artist, teacher, doctor, or businessperson, needs to adopt this technology, and adopt it fast. Those who don’t risk becoming irrelevant compared to their peers and competitors. The flexibility and power of these new systems are changing business and life in profound ways, and the changes are happening every single day.

Key Takeaways

  • AI is underhyped; its capabilities extend far beyond current public perception.
  • The energy demands of AI are a major national and global challenge.
  • We need to develop guardrails for autonomous AI rather than trying to halt its development.
  • The geopolitical competition in AI, especially between the US and China, carries significant risks.
  • Preserving individual freedom is crucial as AI systems become more integrated into society.
  • AI has the potential to revolutionize healthcare, education, and scientific discovery.
  • Humans will remain relevant by adapting and integrating AI into their work and lives.
  • The pace of AI development is unprecedented, requiring continuous engagement and learning.
