The Ethics of AI and War: Why the US Military is Pushing Users to Claude

A ‘Supply-Chain Risk’ label from Secretary Hegseth has users choosing Claude’s guardrails over ChatGPT
An abstract representation of the Anthropic Claude AI interface set against the official seal of the United States Department of War.

Modern AI tools like OpenAI’s ChatGPT and Anthropic’s Claude aren’t used only by consumers; it’s a harsh reality that they’ve also become popular with the US Department of War in support of modern combat operations. For the AI companies, the lure of lucrative government contracts is hard to ignore, particularly in the wake of their massive spending on power and computing resources.

It makes sense that the agency formerly known as the Department of Defense (DoD) should have access to the most modern technologies, particularly when they serve to protect American troops. In fact, War Secretary Pete Hegseth has prioritized military adoption of AI. At the same time, the undeniable power of AI means that a principled approach to its use in warfare, with well-defined parameters, is also necessary, despite the hundreds of millions of dollars at stake.


Anthropic took this level-headed approach and arrived at two simple restrictions on the use of its AI technology by the Department of War. The first bars fully autonomous weapons, while the second limits mass domestic surveillance. The reason for these self-imposed limitations isn’t as altruistic as you might think. According to Anthropic, its technologies aren’t yet capable enough to be used for fully autonomous weapons or mass surveillance of Americans.

The Department of War feels Anthropic is overstepping its bounds by exercising veto power over decisions that should be left to the military, thereby compromising the safety of American troops on the ground.

Hegseth responded by immediately designating Anthropic a “Supply-Chain Risk to National Security,” barring any government contractor, supplier, or partner that incorporates Claude from doing business with the US military. More broadly, the designation prohibits contractors from conducting any commercial activity with Anthropic.

Anthropic immediately fired back with a blog post from CEO Dario Amodei and a federal lawsuit, claiming the government is retaliating against Anthropic’s insistence that AI systems should “be the safest and most responsible.”

Anthropic was founded based on the belief that AI technologies should be developed and used in a way that maximizes positive outcomes for humanity, and its primary animating principle is that the most capable artificial-intelligence systems should also be the safest and the most responsible.

Anthropic also contends that the government’s prohibition is overly broad: it should apply only to contractors that incorporate Claude into work for the government, not bar them from doing business with Anthropic altogether.

At the same time, OpenAI released a blog post stating it had worked with the government to revise the language in their agreement. Specifically, the new language clarifies that OpenAI’s “AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals” and that “…the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons and nationals, including through the procurement or use of commercially acquired personal or identifiable information.”

These latest developments have driven a sudden flood of users to adopt Claude as their go-to AI tool, presumably because of its perceived guardrails. While Hegseth says Claude will be used by the US Department of War for only the next six months to allow for a seamless transition, that could change if the company succeeds in its uphill legal battle.

Government agencies are traditionally given wide latitude in how they administer contracts. Despite any AI company’s publicized safeguards, discretion over how these tools are actually used ultimately rests with the government. We’ll likely never know exactly how the government puts these new tools to use in practice. One thing is for sure: it certainly doesn’t want any pushback.
