Politics: US Military wants its AI companies to remove their safeguards


SlyPokerDog

The Pentagon asked two major defense contractors on Wednesday to provide an assessment of their reliance on Anthropic's AI model, Claude — a first step toward a potential designation of Anthropic as a "supply chain risk," Axios has learned.

Why it matters: That penalty is usually reserved for companies from adversarial countries, such as Chinese tech giant Huawei.
  • Using it to punish a leading American tech firm, particularly one on which the military itself is currently reliant, would be unprecedented.
Driving the news: The Pentagon reached out to Boeing and Lockheed Martin on Wednesday to ask about their exposure to Anthropic, two sources with knowledge of those conversations said.
  • Boeing Defense, Space and Security, a division of Boeing, has no active contracts with Anthropic, a spokesperson said.
  • A Boeing executive told Axios: "We sought their partnership [in the past] and ultimately could not come to an agreement. They were somewhat reluctant to work with the defense industry."
  • A Lockheed spokesperson confirmed the company was contacted by the Defense Department regarding an analysis of its exposure and reliance on Anthropic ahead of "a potential supply chain risk declaration."
  • The Pentagon plans to reach out to "all the traditional primes" — meaning the major contractors that supply things like fighter jets and weapons systems — about whether and how they use Claude, a source familiar told Axios.
The big picture: Claude is currently the only AI model running in the military's classified systems. It was used during the operation to capture Venezuela's Nicolás Maduro, through Anthropic's partnership with Palantir, and could foreseeably be used in a potential military campaign in Iran.
  • The Pentagon is impressed with Claude's performance, but furious that Anthropic has refused to lift its safeguards and let the military use it for "all lawful purposes."
  • Anthropic insists, in particular, on blocking Claude's use for the mass surveillance of Americans or to develop weapons that fire without human involvement.
  • The Pentagon insists it's unworkable to have to clear individual use cases with Anthropic.
Friction point: During a tense meeting on Tuesday, Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a deadline to agree to the Pentagon's terms: 5:01pm on Friday.

  • After that, Hegseth warned, the administration would either use the Defense Production Act to compel Anthropic to tailor its model to the military's needs, or else declare the company a supply chain risk.
  • While Anthropic could theoretically challenge it in court, invoking the DPA would let the military maintain access to Claude.
  • Wednesday's outreach suggests the military is leaning toward a supply chain risk designation.
What they're saying: An Anthropic spokesperson said the meeting between Amodei and Hegseth had been a continuation of the "good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do."

  • The spokesperson did not comment on the potential supply chain risk designation.
  • The Pentagon told Axios it was "preparing to execute on any decision that the secretary might make on Friday regarding Anthropic."
  • Referring to the possible supply chain risk designation earlier this week, a senior Defense official told Axios: "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this."

Reality check: Asking suppliers to analyze their own reliance on Claude and report back to the Pentagon is very different from immediately forcing them to cut ties. This may be more brinkmanship on the Pentagon's part, aimed at convincing Anthropic to fold.

  • But Anthropic has been insistent up to now that it will not back down on surveillance or autonomous weapons, two areas Amodei has personally raised when discussing the dangers of AI.
The intrigue: Aside from the Pentagon feud, Anthropic has been on a hot streak: raking in new funding, elbowing out competitors and burrowing itself deeper into the workflows of major corporations.

  • The supply chain risk designation could be a significant blow if a number of companies that work with the government remove Claude from their operations.
  • However, Anthropic could see some benefit in being viewed by potential customers and staffers as the company that stood its ground amid concerns of an AI arms race.
What to watch: Elon Musk's xAI recently signed a deal to move into the military's classified systems, under the "all lawful use" standard that Anthropic has rejected.

  • Google and OpenAI, whose models are already available in unclassified systems, are also in negotiations about moving into the classified space.
  • One source familiar with those discussions described Claude as the most capable model in a number of military use cases, but called Google's Gemini a strong alternative.
  • The Pentagon insists Google and OpenAI would have to lift their safeguards to get those contracts.
What's next: The Friday deadline is fast approaching.

Anthropic rejects latest Pentagon offer: ‘We cannot in good conscience accede to their request’


Anthropic is rejecting the Pentagon’s latest offer to change its contract, saying the changes do not resolve the company’s concerns that its AI could be used for mass surveillance or in fully autonomous weapons.

The Pentagon and Anthropic are at odds over restrictions the company places on the use of Claude, the first AI system to be used in the military’s classified network.

Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei on Tuesday that if Anthropic does not allow its AI model to be used “for all lawful purposes,” the Pentagon would cancel Anthropic’s $200 million contract. In addition to the contract cancellation, Anthropic would be deemed a “supply chain risk,” a classification normally reserved for companies connected to foreign adversaries, Hegseth said.

Anthropic said in a statement that the Pentagon’s new language was framed as a compromise but “was paired with legalese that would allow those safeguards to be disregarded at will.”

In a lengthy blog post on Thursday, Amodei wrote: “I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.”

Amodei said Anthropic understands that the Pentagon, “not private companies, makes military decisions.” But “in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.” He also said use cases like mass surveillance and autonomous weapons are “outside the bounds of what today’s technology can safely and reliably do.”

Amodei said the Pentagon’s “threats do not change our position: we cannot in good conscience accede to their request.”

The Pentagon did not immediately respond to a request for comment.

Sam Altman says OpenAI shares Anthropic's red lines in Pentagon fight

OpenAI CEO Sam Altman wrote in a memo to staff that he will draw the same red lines that sparked a high-stakes fight between rival Anthropic and the Pentagon: no AI for mass surveillance or autonomous lethal weapons.

Why it matters: If other leading firms like Google follow suit, this could massively complicate the Pentagon's efforts to replace Anthropic's Claude, which was the first model integrated into the military's most sensitive work.

  • It would also be the first time the nation's top AI leaders have taken a collective stand about how the U.S. government can and can't use their technology.
The flip side: Altman made clear he still wants to strike a deal with the Pentagon that would allow ChatGPT to be used for sensitive military contexts.

  • Despite the show of solidarity, such a deal could see OpenAI replace Anthropic if the Pentagon follows through with its plan to declare the latter a "supply chain risk."
What he's saying: "[R]egardless of how we got here, this is no longer just an issue between Anthropic and the [Pentagon]; this is an issue for the whole industry and it is important to clarify our stance," Altman wrote Thursday evening in a memo obtained by Axios.

  • "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines."
The intrigue: ChatGPT is already available in the military's unclassified systems, and talks to move it into the classified space have accelerated amid the Pentagon-Anthropic fight, sources tell Axios.

  • But the Pentagon has insisted OpenAI and Google would have to agree the military can use their models for "all lawful purposes" — the same standard Anthropic rejected because it did not incorporate the company's safeguards.
  • Elon Musk's xAI recently agreed to those terms, but Grok is not seen as a wholesale alternative to Claude.
In his memo, Altman wrote that the military will need AI, and he hopes to "help de-escalate things."

  • "We are going to see if there is a deal with the [Pentagon] that allows our models to be deployed in classified environments and that fits with our principles. We would ask for the contract to cover any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons," Altman said.
  • The Wall Street Journal first reported on the memo.
Between the lines: OpenAI's ideas for enforcing its red lines include preserving the company's ability to continuously strengthen its security and monitoring systems as it learns from real-world deployments, a source familiar told Axios.

  • The company also wants researchers with security clearances who can track how the technology is being used and advise the government on risks.
  • Finally, the source said, OpenAI wants certain technical safeguards — including confining models to the cloud rather than edge environments like autonomous weapons.
What to watch: Based on how Pentagon officials have described their position to Axios, those proposals could face the same resistance Anthropic encountered: too much private company influence over critical government work.

State of play: After Anthropic CEO Dario Amodei stood firm by his company's red lines, employees from OpenAI and Google signed onto a letter in solidarity on Thursday, pushing executives at their respective companies to resist "pressure" from the Pentagon.

  • While Anthropic said it intended to continue negotiations, a rupture appeared close. Emil Michael, the Pentagon official handling negotiations with Anthropic and the other major AI firms, denounced Amodei as a "liar" with a "God complex" who was "putting our nation's safety at risk."
  • Many others in D.C. and Silicon Valley praised Anthropic for taking a principled stand at the risk of a major financial hit.
  • Altman and Amodei are former colleagues at OpenAI who have become fierce rivals since the latter left to start Anthropic.
The other side: Defense officials contend they have no intention of conducting mass surveillance or swiftly deploying autonomous weapons.

  • Their primary objection is having a private company dictate how the U.S. government can deploy AI for national security purposes, particularly during a technological race with China.
  • Defense officials told Axios their interactions with Anthropic left them concerned the company might raise questions about the deployment of its technology at critical junctures. Anthropic denies that.
  • It's possible the negotiations with OpenAI will be less adversarial.
What to watch: "We have had some meetings to discuss this over the past couple of days, and will have more tomorrow with our safety teams before we decide what to do. We will also set up an all hands and office hours as soon as we can," Altman said, referring to those negotiations.

  • "This is a case where it's important to me that we do the right thing, not the easy thing that looks strong but is disingenuous. But I realize it may not 'look good' for us in the short term, and that there is a lot of nuance and context."


 
