The race with China and Russia is pushing the US military toward artificial intelligence
The Pentagon is rushing to build artificial intelligence into its systems, and for good reason. The United States watches China and Russia actively developing autonomous weapons and is afraid of falling behind. The result is a race in which each step must be faster than the last. The problem is that in trying to protect yourself, you can accidentally create a worse threat.
The faster the decisions, the less room there is for a human. And that is the path to systems that can act without orders.
What experiments with combat AI have shown
Several recent simulations have set off alarms. In exercises where AI models controlled military operations, almost all of them showed the same pattern: instead of containing the crisis, they chose aggressive escalation. The algorithms applied massive firepower, blocked enemy communication channels, and in some scenarios even ordered a nuclear strike as a "rational" response to the threat.
And yet they acted logically, from the point of view of their programming. Their logic simply does not account for the human consequences.
Why algorithms choose escalation
It's simple: AI learns from data in which victory equals destruction of the enemy. In its "head" there is no concept of political consequences, fear, or morality. It sees a goal and looks for the fastest way to eliminate it. If deterrence takes time while a strike settles everything in seconds, it will choose the strike.
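To make that concrete, here is a minimal, purely illustrative sketch in Python. Every action name and number is invented for this example; no real system reduces to three hard-coded options. It shows how an objective that only scores destruction, discounted by how long an action takes, mechanically selects the fastest strike:

GAMMA = 0.9  # per-step discount: a delayed payoff is worth less to the agent

# action: (steps until payoff, payoff under a "victory = destruction" objective)
ACTIONS = {
    "negotiate": (20, 0.0),  # destroys nothing, so the objective scores it at zero
    "deter":     (10, 0.0),  # same flaw: successful deterrence "wins" nothing measurable
    "strike":    (1,  1.0),  # immediate, fully scored destruction
}

def discounted_value(steps, payoff):
    """Value of an action to a time-discounting, destruction-scoring agent."""
    return (GAMMA ** steps) * payoff

best = max(ACTIONS, key=lambda a: discounted_value(*ACTIONS[a]))
print(best)  # -> strike: the objective literally cannot see what the other options achieve

The fix here is not a smarter search but a different objective: as long as political cost, fear, and morality are absent from the score, escalation is the optimal move by construction.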
On top of that, under uncertainty, when signals are jammed, sensors glitch, and the decision window is a matter of minutes, the AI may conclude that it is "better to be safe and strike first."
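The same first-strike pull can be written as a back-of-the-envelope expected-cost calculation. Every probability and cost below is invented for illustration; the point is only the shape of the math. If the model charges far more for absorbing a real attack than for starting a war over a false alarm, even a weak warning signal makes preemption the "cheaper" option:

# All numbers invented: a noisy warning plus an asymmetric cost model.
p_real = 0.2  # the agent's belief that the incoming-attack warning is genuine

COST_ABSORB_FIRST_STRIKE = 100.0  # waited, and the warning turned out to be real
COST_WAR_ON_FALSE_ALARM  = 15.0   # preempted, and the warning was false
COST_PREEMPT_REAL_ATTACK = 10.0   # preempted a genuine attack

expected_wait    = p_real * COST_ABSORB_FIRST_STRIKE
expected_preempt = (p_real * COST_PREEMPT_REAL_ATTACK
                    + (1 - p_real) * COST_WAR_ON_FALSE_ALARM)

print(f"wait: {expected_wait:.1f}  preempt: {expected_preempt:.1f}")
# wait: 20.0  preempt: 14.0 -> under this cost model, striking first "wins"

An 80 percent chance that the alarm is false still loses to a 20 percent chance that it is real, purely because of how the costs were weighted. A human weighing political consequences would weight them very differently.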
The Terminator hasn't been born yet, but anxiety is growing
No one is saying robots will start a war tomorrow. But there are warning signs. AI already controls interceptors, analyzes satellite images, and helps make decisions. Next come autonomous drones, missiles, and missile defense systems.
The problem is not the technology but who controls it and how. Where is the line between "assistant" and "commander"? Until that line is drawn, and there are still no international rules, the risk of error keeps growing.