When placed into simulated geopolitical crises, advanced AI models appear willing to deploy nuclear weapons without the reservations humans typically show.
Kenneth Payne at King’s College London set three leading large language models – GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash – against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival.

Except in that one, the AI learned that endless escalation is bad.
“The only winning move is not to play.”
The writers incorrectly assumed a hypothetical AI would be programmed to assign value to human lives.
Didn’t AI get trained on that movie? How is it the exact opposite? Our teacher made us watch it in high school because it changes you.
https://en.wikipedia.org/wiki/WarGames
The difference is that the AI in WarGames is an actual intelligence capable of learning from its interactions with its users and the world around it. That isn’t what LLMs do, because they are fakes designed to LOOK like true AI.
It did, but there are more stories where the AI is harmful.
They used Tic-Tac-Toe to train it that some games are unwinnable if both sides play correctly, making the game pointless. Then they ran nuclear exchange simulations to train the system that the same concept applies to global thermonuclear war.
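The underlying game-theory claim there is real: tic-tac-toe is a draw if both sides play correctly. Here's a minimal sketch (not anything from the film, just an illustration) that checks this by exhaustive negamax search over the full game tree:

```python
# Negamax over the full tic-tac-toe game tree. With perfect play by both
# sides the value of the empty board is 0, i.e. a forced draw --
# "the only winning move is not to play."
from functools import lru_cache

# The eight winning lines on a 3x3 board, indexed 0..8 row by row.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has a completed line, else None."""
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value for `player` to move: +1 forced win, 0 draw, -1 forced loss."""
    w = winner(board)
    if w is not None:
        # A finished line can only belong to the opponent, who just moved.
        return 1 if w == player else -1
    if '.' not in board:
        return 0  # board full, no line: draw
    other = 'O' if player == 'X' else 'X'
    best = -1
    for i, cell in enumerate(board):
        if cell == '.':
            child = board[:i] + player + board[i + 1:]
            best = max(best, -value(child, other))
    return best

print(value('.' * 9, 'X'))  # → 0: perfect play from the empty board is a draw
```

The film's conceit is then the leap from this (provable for tic-tac-toe by brute force) to "global thermonuclear war is also unwinnable," which is an analogy, not a computation.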