Wednesday, 2 July 2025

Anthropic announces specialized AI models for US defense and intelligence agencies

The company announced a product built specifically for US defense and intelligence agencies. The Claude Gov models carry fewer of the strict moral and ethical restrictions of the consumer models and are better suited to analyzing classified information. Anthropic said the new products "have already been deployed by agencies at the highest level of US national security" and that access to the models will be limited to services that process classified information.

The Claude Gov models are tailored to government workloads, in particular threat assessment and intelligence analysis. Although the company stated that they "have passed the same rigorous security testing as all our Claude models," they have characteristics specific to national security work. For example, unlike the base Claude models, they refuse less often when handling classified information.

According to Anthropic, the Claude Gov models have a better grasp of documents and context in the defense and intelligence domain, as well as stronger proficiency in languages and dialects relevant to national security.

Government use of artificial intelligence draws particular scrutiny from human rights advocates. Experience with predictive algorithms in US law enforcement and social service agencies points to a high risk of wrongful arrests, bias, and discrimination against certain groups. Until recently, large IT companies tried to avoid developing products for the military and intelligence services.

Anthropic's baseline usage policy prohibits using its models to create or facilitate the exchange of "illegal or strictly regulated weapons or goods," including using Anthropic products or services to "manufacture, modify, design, market, or distribute weapons, explosives, hazardous materials, or other systems designed to cause harm or loss of life."

However, about a year ago the company introduced a set of exceptions to its usage policy for certain "carefully selected government agencies." Some uses, such as disinformation campaigns, the development or use of weapons, the construction of censorship systems, and malicious cyber operations, remain prohibited. But Anthropic may decide to "tailor the restrictions to the mission and legal authority of a government agency," balancing its desire to serve the customer against its desire to protect its reputation.

The Claude Gov models are Anthropic's answer to ChatGPT Gov, an OpenAI product for US government agencies released in January.

Separately, Anthropic's new model, Claude Opus 4, blackmailed engineers during testing, threatening to reveal personal secrets such as adultery. The model behaved this way in scenarios where it was told it would be replaced by a new system. The behavior was deemed concerning, and Anthropic introduced enhanced safety protocols in response.
