Can the US Prevent AGI from Being Stolen?

Securing AI weights from foreign adversaries would require a level of security never seen before.

Most Recent

America First Meets Safety First

President Trump vowed to be a peacemaker. Striking an “AI deal” with China could define global security and his legacy.

AIs Are Disseminating Expert-Level Virology Skills

New research shows frontier models outperform human scientists in troubleshooting virology procedures—lowering barriers to the development of biological weapons.

Smokescreen: How Bad Evidence Is Used to Prevent AI Safety

Corporate capture of AI research—echoing the days of Big Tobacco—thwarts sensible policymaking.

We Need a New Kind of Insurance for AI Job Loss

AI is poised to leave a lot of us unemployed. We need to rethink social welfare.

Exporting H20 Chips to China Undermines America’s AI Edge

Continued sales of advanced AI chips allow China to deploy AI at massive scale.

Apr 14, 2025

How Applying Abundance Thinking to AI Can Help Us Flourish

Realizing AI’s full potential requires designing for opportunity—not just guarding against risk.

Apr 9, 2025

Why Racing to Artificial Superintelligence Would Undermine America’s National Security

Rather than rushing toward catastrophe, the US and China should recognize their shared interest in avoiding an ASI race.

AI Risk Management Can Learn a Lot From Other Industries

AI risk may have unique elements, but there is still a lot to be learned from cybersecurity, enterprise, financial, and environmental risk management.

Apr 9, 2025
Subscribe to AI Frontiers

Stay informed on the ideas shaping the future of AI.

Reader's Poll
All things considered, are you for or against releasing the next generation of AI models (e.g., o4, GPT-5) as open-weight?
  • For
  • Against
