OpenAI reveals more details about its agreement with the Pentagon
By CEO Sam Altman’s own admission, OpenAI’s deal with the Department of Defense was “definitely rushed,” and “the optics don’t look good.”
Then, OpenAI quickly announced that it had reached a deal of its own for its models to be deployed in classified environments. With Anthropic saying it was drawing red lines around the use of its technology in fully autonomous weapons or mass domestic surveillance, and Altman saying OpenAI had the same red lines, there were some obvious questions: Was OpenAI being honest about its safeguards? Why was it able to reach a deal while Anthropic was not?

As OpenAI executives defended the agreement on social media, the company also published a blog post outlining its approach. The post pointed to three areas where it said OpenAI’s models cannot be used — mass domestic surveillance, autonomous weapon systems, and “high-stakes automated decisions (e.g. systems such as ‘social credit’).” The company said that, in contrast to other AI companies that have “reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards in national security deployments,” OpenAI’s agreement protects its red lines “through a more expansive, multi-layered approach.”

“We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections,” the blog said.
“This is all in addition to the strong existing protections in U.S. […]”
Masnick described that order as “how the NSA hides its domestic surveillance by capturing communications by tapping into lines *outside the US* even if it contains info from/on US persons.”

“That’s not how any of this works,” Mulligan said, adding, “Deployment architecture matters more than contract language […] By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware.”

Altman also fielded questions about the deal on X, where he admitted it had been rushed and had resulted in significant backlash against OpenAI (to the extent that Anthropic’s Claude overtook OpenAI’s ChatGPT in Apple’s App Store on Saturday).
“We really wanted to de-escalate things, and we thought the deal on offer was good,” Altman said. “If we are right and this does lead to a de-escalation between the DoW and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we will continue to be characterized as […] rushed and uncareful.”