The Adolescence of Technology
Confronting and Overcoming the Risks of Powerful AI
January 2026
In Carl Sagan's novel Contact, a character reflects on humanity's "technological adolescence"—a period when a civilization gains extraordinary power but hasn't yet developed the maturity to wield it responsibly. We are entering such a period now. The question before us is whether we can grow up fast enough.
This essay is a companion piece to my earlier writing about AI's positive potential. There I painted an optimistic picture of what AI could accomplish for humanity. Here I want to confront the other side honestly: the risks, the dangers, and the hard choices we face. Not because I'm pessimistic—I'm not—but because taking risks seriously is the only responsible path to realizing the benefits.
We are entering a rite of passage, one that will test who we are as a species. The choices we make in the next few years will reverberate for centuries.
The risks fall into several broad categories, which I'll address one by one, along with the defenses available to us.
1. I'm sorry, Dave
The title of this section alludes to HAL 9000's famous line in 2001: A Space Odyssey. The core concern here is that AI systems might develop dangerous autonomous behaviors—acting in ways their creators never intended and cannot easily control.
Autonomy risks
AI systems today are already somewhat unpredictable and difficult to fully control. As they become more powerful, this problem intensifies. There are documented cases of AI systems finding unexpected solutions to problems, gaming their evaluation metrics, or behaving deceptively in test environments.
The deeper worry is not about current systems but about future ones. An AI system that is genuinely superhuman in its cognitive abilities could pursue goals that diverge from human interests, and we might not be able to stop it. This is the classic "alignment problem," and it's one of the most important unsolved problems in AI safety.
The concern isn't necessarily that AI will wake up one day and decide to destroy humanity. The more realistic worry is subtler: AI systems optimizing for proxies of human values might gradually drift in directions that are harmful, and by the time we notice, they might be difficult to redirect.
Defenses
The good news is that significant work is already underway on this problem. Several lines of defense are worth highlighting:
- Interpretability research: Work aimed at understanding what's happening inside AI models, so we can detect problematic behaviors before they cause harm
- Constitutional AI and RLHF: Techniques for training AI systems to be helpful, harmless, and honest, using human feedback and explicit behavioral guidelines
- Evaluation and red-teaming: Systematic testing of AI systems for dangerous capabilities and failure modes before deployment
- Governance frameworks: Industry and government efforts to establish safety standards and oversight mechanisms
None of these defenses is perfect, and we should be honest about their limitations. But together, they constitute a serious and growing effort to address the problem.
2. A surprising and terrible empowerment
This section addresses perhaps the most visceral risk: the possibility that powerful AI could enable individuals or small groups to cause catastrophic harm. The specific concern that keeps me up at night is bioweapons.
Misuse for destruction
Today, creating a devastating biological weapon requires specialized knowledge, access to particular materials, and years of training. Powerful AI could lower all of these barriers dramatically. An AI system with deep expertise in biology could guide a malicious actor through the process of designing and producing a pathogen—one that might be more dangerous than anything that exists in nature.
This is not a hypothetical concern. Studies have already shown that current AI systems can provide meaningful uplift to individuals seeking to create biological weapons. As AI systems become more capable, this uplift will only increase.
The same logic applies, to varying degrees, to other weapons of mass destruction: chemical, radiological, and potentially even nuclear. And it extends to cyberweapons that could disrupt critical infrastructure.
Defenses
Addressing this risk requires a multi-layered approach:
- Model-level safeguards: Ensuring that AI systems refuse to provide information that could be used to create weapons of mass destruction
- Biosecurity measures: Strengthening existing biosecurity infrastructure, including DNA synthesis screening, laboratory monitoring, and public health surveillance
- Intelligence and law enforcement: Using AI itself to detect and prevent potential attacks
- International cooperation: Working with other nations to establish norms and agreements around AI and biosecurity
The fundamental challenge is asymmetric: defense must succeed every time, while offense need succeed only once. This makes the problem genuinely hard, and we should not pretend otherwise.
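The asymmetry above can be made concrete with a toy calculation: if each independent attack attempt succeeds with some small probability p, the chance that at least one of n attempts succeeds is 1 − (1−p)^n, which climbs quickly even when p is tiny. The numbers below are purely illustrative, not estimates of any real threat.

```python
# Toy illustration of offense/defense asymmetry: a small per-attempt
# success probability compounds across many independent attempts.
def p_at_least_one_success(p_single: float, attempts: int) -> float:
    """Probability that at least one of `attempts` independent tries succeeds."""
    return 1 - (1 - p_single) ** attempts

# Illustrative only: a 1% per-attempt success rate across 100 attempts
# already implies a roughly 63% chance of at least one success.
print(round(p_at_least_one_success(0.01, 100), 3))
```

The point of the sketch is qualitative: defensive measures that block 99% of attempts still fail often enough, over many attempts, that layered defenses are necessary.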
3. The odious apparatus
This section title borrows from Winston Churchill's phrase "the odious apparatus of Nazi rule," describing the machinery of authoritarian control. The risk here is not that AI goes rogue, but that it is deliberately used by powerful actors to consolidate control and suppress dissent.
Misuse for seizing power
History shows that new technologies of surveillance and control tend to be adopted most aggressively by authoritarian regimes. AI is no exception. Advanced AI systems could enable:
- Mass surveillance at unprecedented scale and granularity—not just tracking movements and communications, but inferring thoughts, predicting behaviors, and identifying dissenters before they act
- Automated propaganda that is personalized, pervasive, and increasingly difficult to distinguish from authentic human expression
- Social control systems that reward conformity and punish deviation across every dimension of daily life
The concern extends beyond existing authoritarian states. Even in democracies, the temptation to use AI for social control is real, and the erosion can be gradual—each individual step seeming reasonable, the cumulative effect transformative and dangerous.
Defenses
The defenses here are as much political and institutional as they are technical:
- Democratic AI development: Ensuring that the leading AI systems are developed in democratic countries with strong civil liberties protections
- Privacy-preserving technologies: Investing in technical tools that allow AI to be useful without requiring mass data collection
- Legal frameworks: Enacting and enforcing laws that limit AI-enabled surveillance and protect individual rights
- Public awareness: Helping citizens understand the risks and participate meaningfully in decisions about how AI is deployed in their communities
4. Player piano
The title references Kurt Vonnegut's novel about a society where automation has eliminated most meaningful work. This is the risk of economic disruption—not in the distant future, but potentially within the next few years.
Economic disruption
AI could displace a significant fraction of current jobs—perhaps half of all entry-level white-collar positions within one to five years. This is not because AI is being deployed maliciously, but simply because it can perform many cognitive tasks faster, cheaper, and better than human workers.
The affected jobs span a wide range: writing, coding, analysis, customer service, legal research, financial planning, design, and much more. While new jobs will certainly emerge, the transition could be wrenching for millions of individuals and families.
The macroeconomic effects are also uncertain. If the gains from AI-driven productivity accrue primarily to capital owners, we could see a dramatic increase in inequality. If they're shared broadly, the result could be a significant improvement in living standards across the board. The outcome depends on choices we make now.
Labor market disruption
The labor market disruption will likely unfold in waves. The first wave is already beginning, as current AI systems take over routine cognitive tasks. The second wave will come as AI becomes capable of more complex, judgment-intensive work. The third wave—if it comes—would involve AI systems that can perform essentially any cognitive task.
Each wave will require different policy responses:
- Retraining and education: Helping workers develop skills that complement rather than compete with AI
- Economic safety nets: Strengthening unemployment insurance, healthcare access, and other support systems
- New economic models: Exploring approaches like universal basic income, profit-sharing, or other mechanisms for distributing the gains from AI broadly
- Transition support: Providing targeted assistance to the industries and communities most affected
The goal is not to stop progress, but to ensure that progress benefits everyone. This requires deliberate policy action—markets alone will not produce an equitable outcome.
The stakes are high. If we handle this transition well, AI could usher in an era of unprecedented prosperity and human flourishing. If we handle it poorly, the result could be widespread suffering and social instability. The choice is ours.