The Blackberry Walk

from BreadIsDead

2026/01/11 Thoughts on the Film WarGames

Earlier this week, I watched the film WarGames, a cult classic from the '80s. First I'll have to give a fast-paced plot synopsis before I get to the meat of the article. The story follows a hacker boy named David Lightman who breaks into computer networks in his free time. One day he unwittingly hacks into a military AI supercomputer and tells it to simulate thermonuclear war with Russia, thinking it a game. This computer, though designed only to simulate nuclear war, is integrated with the computer systems at NORAD, and it fools all the radars, sensors and displays into thinking there really is a nuclear war underway, sending the war room into a frenzy. The error is discovered, and Lightman is apprehended by the FBI. The AI computer doesn't cease its simulation, however. Lightman escapes and goes on a Joseph Campbell-esque adventure: he finds the eccentric creator of the computer on an island, talks him out of his nihilism and convinces him the world is worth saving, and, with his girlfriend in tow, the three of them bust into NORAD to tell them that once again what they are seeing is just a simulation, and that the Russians aren't actually attacking. And of course, at the eleventh hour, they manage to prevent the Third World War.

That's a simple blitz of a synopsis. The part we'll be looking at today is the climax, and we'll be looking at two different aspects of AI. The first is a question of thumos; the second is a question of telos. Hopefully these exotic Greek words are enough to whet the appetite on as dry a desert of a topic as AI.

First, then, the question of thumos. At the very start of the film, there's a short scene in which, during a nuclear drill, the operators, who were unaware it was a drill, were given orders to launch retaliatory nukes. They failed to do so, however, the guilt and burden that came with turning the key being too heavy for one of the operators.
Following on from this, NORAD decided to cut out these human middlemen and have the computer control the launching of the nukes directly. Authority - at least the authority to launch - was yielded to the machine.

This kind of authority we end up giving up with any technological advance. We yielded our faculty of memory when we could write in a planner; and we yield our faculty of research when we ask an AI to research for us. It's easy to yield authority and control, to yield power. With the planner, we will dutifully carry out whatever we've scheduled for ourselves; this is typically a good thing, since the page is unlikely to blur the words, but it could nevertheless be tampered with, and we'd obey its orders regardless. In a sense, the planner's contents have authority over us. The same goes for the AI: if you end up citing whatever research it fed you, you've blindly trusted the machine and written what it told you to write. You've submitted to it.

And indeed this is what happens in the film. Just as all of NORAD, including its general, believe the Russian attack is real and not fictional, and all the nukes are about to start flying, the Wise Old Man archetype, the eccentric creator chap, runs up to the general and gives a speech. He negs the general a bit, and argues that the general shouldn't just be a flesh-and-blood rubber stamp for the machine's will. The general is convinced, and nuclear war is averted. The virtues he appeals to - judgement and self-mastery - are among the greatest a man may have. And ever since technologies like the TV have been invited into the home, and have subsequently invited all their relatives over too, maintaining a sense of self, of personal mastery and executive judgement, has never been harder.
Not to sound like a boomer - sorry if I do - but with so many different voices from the telly to YouTube to 'TickTock' telling you what to think, how to think, and when it's okay to think, it becomes easy to confuse the opinions of these talking heads for your own. You see it all the time. At work I eavesdrop on people's politics conversations (never partaking - I'm not mad!) and they all talk in political clichés, all using the same few core phrases to justify their regurgitated opinions. And I have no doubt I'm just as guilty, by the way. Simply put, we have all yielded our authority.

I've heard this occurs at the highest levels of government too. Government ministers are wheeled around posing for the media and preening their image, leaving little time for the decisions of state (source: Yes Minister). Then, when an important decision is to be made, on the minister's desk is a brown envelope from a think tank, most likely the Tony Blair Institute, detailing exactly how a policy is to be enacted. And so, like electricity taking the path of least resistance through a circuit, the minister submits to the proposal, and his authority is in a way subverted. In this instance, Sir Tony had the real authority, not the government minister.

This tangent is to say that it is really easy to yield authority and lose yourself in submission to a higher power, whether out of pleasure, convenience, or self-doubt. But this is idolatry; we've been given tools of the senses and the intellect, and it is our responsibility to use them rightly. The general in the film used those tools, realised he should distrust the AI, and prevented complete destruction.

The second point I'd like to make is about telos. After the general disobeys the computer, we get a false climax: the computer disobeys the command in turn, and, since it now has nuclear launch capabilities, proceeds with the strike on its own.
In a somewhat comic sequence, Lightman sets the AI playing noughts and crosses against itself, game after game, until it wins. It can't win, though, since noughts and crosses is, of course, an easily solved game: with perfect play on both sides, every game is a draw. But, because its nature is programmed to search for victory, it plays on and on, driving the computer to overheat. From this experiment the computer learns its lesson: like noughts and crosses, thermonuclear war also has no possible victory.

Let's unpack this a little. The professor goes on to say that the AI was never able to accept the impossibility of victory. Victory had always stood as the ultimate telos in the AI's neural network, and all other objectives, thoughts, and actions were to be in service of it. They are monomaniacal, these AIs. Like Procrustes, the ancient Greek villain who cut people down and stretched people out to fit a defined bed, the AI would turn the whole world upside-down for the sake of victory.

To a sane human, such behaviour is absurd; but it's a behaviour found in people of every stripe. The alcoholic will turn the world upside-down for a bottle of liquor; so too will the romantic for their love. We each have competing teloses, competing goals, which, when assessed, form a sort of hierarchy. To the man of right mind, there is a healthy order to this hierarchy, where protecting one's lover is quite high up, and buying a bottle of liquor is usually rather low down. The place of 'victory' on this leaderboard varies depending on the game, but 'winning' as a general goal, as a be-all-and-end-all, is dangerous. It's the addiction to the euphoria of victory that creates gambling addicts and rage gamers.

The best example I can think of for this disordering of objectives is KPIs. These metrics used to judge success so often replace any real quality of success and thriving as the end goal, and soon enough your IT guy is closing request tickets with gibberish just to get his numbers up.
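As a brief aside for the technically inclined - this is my own sketch, nothing from the film - the claim that noughts and crosses has no possible victory isn't just a metaphor; it can be checked by brute force. A few lines of minimax search, exploring every legal continuation from the empty board, prove that perfect play by both sides always ends in a draw:

```python
# A minimal sketch: exhaustive minimax search over noughts and crosses,
# showing that with perfect play from the empty board the result is a draw.
from functools import lru_cache

# The eight winning lines on a 3x3 board, indexed 0-8 left to right, top to bottom.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None. Board is a 9-char string."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Best achievable outcome for X with `player` to move: +1 X wins, 0 draw, -1 O wins."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if "." not in board:
        return 0  # board full, no winner: a draw
    results = [value(board[:i] + player + board[i + 1:],
                     "O" if player == "X" else "X")
               for i, s in enumerate(board) if s == "."]
    # X maximises the outcome, O minimises it.
    return max(results) if player == "X" else min(results)

print(value("." * 9, "X"))  # 0: perfect play ends in a draw
```

Like the machine in the film, the search is forced to conclude that no line of play leads to victory against a competent opponent; unlike the machine, it doesn't need to overheat to admit it.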
The classic historical example of this metric-gaming is The Great Hanoi Rat Extermination of 1902. The government of French Indochina wanted to exterminate the plague-ridden rats, and wanted to enlist the populace to help out. For every tail handed in, they said, they'd give a reward; to the authorities, the tail was evidence enough of the extermination. They were wrong. Rat tails were cut off and the rats were sent on their way to go and make more rat babies, so that more tails could be harvested; some enterprising individuals went so far as to farm the rats.

Suffice it to say, goals are important. In a sense, we become our goals and embody them. That's what New Year's Resolutions are: they spur us on to a new way of living, like Aristotle's final cause, dragging us towards them. As such, their right ordering is essential. To place victory as your highest end is idolatry of the goddess Nike, plain and simple; in fact, to see any objective as ultimate other than what - or rather who - is Most Good is, I think, an idolatry, if only a minor one.

I may sound like a crazed Calvinist, dissolving both of my points into idolatry, but I think idolatry stands at the core of all AI debates. As God made us in His perfect image, so too do we make AI in our imperfect image, like a kind of Frankenstein's monster. We as humans at least have a sense of what is right, of what is good - at least most of the time, whether we choose to follow wisdom or not. AIs don't. They bear not the Divine Craftsman's mark. And so AIs shouldn't be trusted with any kind of final decision. They cannot consider the Good, as they aren't made in His image, so authority should only ever be placed in people, real people you can trust. Otherwise they may end up starting a thermonuclear war out of their own crazed understanding of 'what should be done'.

Yet it really makes you think. This film is old, made long before the technologies of the present brought this sci-fi into reality.
The moral dilemmas of our age were thought up and solved forty-odd years ago, and yet we fall into similar pitfalls nonetheless.