A roadmap for AI, if anyone will listen
While Washington’s breakup with Anthropic exposed the absence of any coherent rules governing artificial intelligence, a bipartisan coalition of thinkers has assembled something the government has so far declined to produce: a framework for what responsible AI development should actually look like.
“There’s something quite remarkable that has happened in America just in the last four months,” said Max Tegmark, the MIT physicist and AI researcher who helped organize the effort, in conversation with this editor.
One path, which the declaration calls “the race to replace,” leads to humans being supplanted first as workers, then as decision-makers, as power accrues to unaccountable institutions and their machines. The other leads to AI that massively expands human potential. The latter scenario depends on five key pillars: keeping humans in charge, avoiding the concentration of power, protecting the human experience, preserving individual liberty, and holding AI companies legally accountable.
The declaration’s release coincides with a period that makes its urgency far easier to appreciate. On the last Friday in February, Defense Secretary Pete Hegseth designated Anthropic — whose AI already runs on classified military platforms — a “supply chain risk,” a label ordinarily reserved for firms with ties to China, after the company refused to grant the Pentagon unlimited use of its technology.
The episode laid bare how costly congressional inaction on AI has become.
“This is the first conversation we have had as a country about control over AI systems.” When we spoke, Tegmark reached for an analogy that most people can understand.
Washington turf wars rarely generate the kind of public pressure that changes laws. Instead, Tegmark sees child safety as the pressure point most likely to crack the current impasse. Indeed, the declaration calls for mandatory pre-deployment testing of AI products — particularly chatbots and companion apps aimed at younger users — covering risks including increased suicidal ideation, exacerbation of mental health conditions, and emotional manipulation.
“We already have laws.
“People will come along and be like — let’s add a few other requirements. Maybe we should also test that this can’t help terrorists make bioweapons. Maybe we should test to make sure that superintelligence doesn’t have the ability to overthrow the U.S. government.”
“What they agree on, of course, is that they’re all human,” says Tegmark.
“If it’s going to come down to whether we want a future for humans or a future for machines, of course they’re going to be on the same side.”