Artificial intelligence may soon become the most powerful force humanity has ever created—and the world is dangerously unprepared for it. This stark warning comes from Dario Amodei, chief executive of AI research company Anthropic, who says that advanced AI systems will “test who we are as a species” and pose a serious civilisational challenge if not handled with care.
In a 38-page essay titled The Adolescence of Technology, published on January 26, Amodei argues that humanity is entering a turbulent but unavoidable phase where AI capabilities could outpace social, political, and ethical systems. While AI holds enormous promise, he warns that its misuse or mismanagement could lead to unprecedented risks ranging from bioterrorism and mass unemployment to global authoritarian control.
A Turning Point for Humanity
Amodei describes the current moment as a “rite of passage” for civilisation. According to him, powerful AI systems—far more capable than today’s models—could arrive within the next one to two years. These systems may perform cognitive tasks at a scale and speed no human institution can match.
To illustrate the danger, Amodei uses a striking metaphor: imagine a “country of geniuses in a data centre.” This hypothetical nation would consist of tens of millions of AI entities, each more capable than the smartest human experts, able to think and act many times faster than people. Such power, he argues, would represent one of the greatest national security threats in modern history.
“This is something the best minds of civilisation should be focused on,” Amodei writes, stressing that the risks are not distant hypotheticals but fast-approaching realities.
The Dark Side of Powerful AI
Among the most alarming dangers outlined in the essay is the potential misuse of AI in biology. Amodei calls this his biggest concern, warning that advanced AI could enable individuals with malicious intent to design biological weapons or engineer deadly pathogens.
In the past, such capabilities were limited to highly trained scientists working in secure laboratories. With AI assistance, however, those barriers could collapse. “A disturbed loner who could never build a nuclear weapon may soon have the capabilities of a PhD virologist,” Amodei cautions.
Beyond bioterrorism, he also highlights the geopolitical risks of AI. Countries that gain a decisive AI advantage could use it to dominate others, strengthen surveillance states, or entrench authoritarian rule. In the worst-case scenario, this could lead to what Amodei describes as a form of “global totalitarian dictatorship.”
Concerns Over Global Power Shifts
Amodei also reiterates concerns about AI development in authoritarian regimes, particularly China. While clarifying that his warnings are not driven by animosity toward any nation, he argues that combining advanced AI with an autocratic government and mass surveillance infrastructure could have dangerous global consequences.
He has previously opposed the sale of advanced AI chips to China, comparing such actions to handing over nuclear weapons to hostile states. According to him, unrestricted access to AI hardware could accelerate the creation of powerful AI systems with minimal ethical safeguards.
AI Companies Under Scrutiny
Importantly, Amodei does not place responsibility solely on governments. He stresses that AI companies themselves must be held to high standards. These firms control massive data centres, develop frontier models, and influence hundreds of millions of users worldwide.
Because of this concentration of power, Amodei believes AI companies deserve intense governance scrutiny. He advocates for voluntary safety measures, transparency in AI development, and internal controls to prevent misuse—especially in high-risk areas like biological research.
Job Losses and Economic Disruption
Another major risk highlighted in the essay is large-scale job displacement. Amodei has previously warned that AI could eliminate up to 50% of entry-level white-collar jobs within the next five years. This could dramatically reshape labour markets and deepen economic inequality.
He urges companies to think creatively about supporting workers—both by retraining employees in the short term and by exploring new economic models in the long term. He suggests that in a future where AI generates enormous wealth, it may still be possible to provide for people even if traditional employment declines.
Anthropic itself, Amodei says, is considering new pathways for employees and plans to share its approach publicly.
A Call for Careful Action
Despite the warnings, Amodei remains cautiously optimistic. He believes the risks of AI can be managed if governments, companies, and civil society act decisively and thoughtfully. Regulations, he argues, must be carefully designed to avoid stifling innovation or causing unintended harm.
“This is a serious civilisational challenge,” he writes, “but our odds are good—if we take it seriously.”
As AI rapidly advances, Amodei’s message is clear: humanity must wake up, move fast, and choose wisdom over complacency. The future of civilisation may depend on it.