In March 2023, the Future of Life Institute published an open letter signed by Elon Musk, Stuart Russell, Yoshua Bengio, and over 1,000 other researchers and technologists calling for a six-month pause on training AI systems more powerful than GPT-4. The letter cited "profound risks to society and humanity." It was largely ignored.
Eighteen months later, we have GPT-4o, Claude 3.5, Gemini 1.5, and capability advances that continue to outpace any regulatory framework. The US has no comprehensive federal AI safety law. No statutory disclosure requirements for frontier models, only a revocable executive order. No liability framework. No equivalent to the FDA for systems being deployed in healthcare, legal analysis, financial decision-making, and infrastructure management.
I am not a Luddite, and I am not arguing that AI is inherently bad. I am arguing that we are deploying consequential technology at scale without the regulatory infrastructure to evaluate its risks, and that a temporary pause on training the most powerful frontier models, while Congress and executive agencies build that infrastructure, is a proportionate, time-limited, reversible response to that gap.
The analogy: we do not let pharmaceutical companies skip clinical trials because the drug looks promising. We require evidence of safety and efficacy before deployment at scale. The bar for AI systems that affect employment, healthcare, and judicial decisions should not be lower than the bar for a blood pressure medication.