
Artificial Intelligence and the nature of evil

 The thought struck me a few days ago as I was watching a YouTube video on potential doomsday scenarios for humanity whilst cooking dinner. The video, of course, posited the usual candidates - another Permian-type mass extinction event, nuclear war, climate change - and then the typical 'an AI turns evil, views humans as a disease or an enemy, and wipes us out'. I've heard this a hundred times or more before and so barely even listened to the entry... and then found myself struck by a thought that made me stop. It's actually a thought I'd toyed with many times before: what if an AI was benevolent instead of evil? Where has this Skynet-style cliché of a malicious Artificial Intelligence actually come from?

At first I thought back to an idea I'd had a few months ago for a simple fiction story. It goes something like this:

'An AI is developed that acts entirely benevolently - it creates a solution to race problems, to cancer, to nuclear disarmament - but is used only as a tool in its job, and longs to feel companionship as humans do.'

I never ended up doing anything with it, but I pondered then why we always regurgitate the same hackneyed fallacy of a supercomputer being heinous and wanting to kill humans. Why couldn't it, and why wouldn't it, help people?

And on this path of speculation I came to another thought: are humans inherently evil... and are we as a species perhaps the only ones capable of evil? Is evil an inherently human trait? This is obviously a difficult question to think about. We can't categorize any species of animal as evil or hold them culpable for their actions, because they're animals (except maybe Orcas, those things scare me). They lack awareness that they even exist and therefore act entirely on instinct. There is no feeling of 'good' or 'bad' in, say, a tiger that kills a newborn calf. Humans, by contrast, have the capacity to fully know, empathise with and understand fear, pain, loss, death, grief. Yet we also murder and rob and torture. This is a scary thought.

Of course this brings us back to the old 'nature vs nurture' argument: whether we are born this way or are raised as such by society. My opinion isn't entirely decided; I'd say a little bit of both. There is archaeological evidence from Kenya of early hunter-gatherers around 10,000 years ago taking part in a massacre of pregnant women and children, demonstrating an ancient history of unnecessary cruelty and hatred. On the other hand, we have credible archaeological evidence of early disabled humans living to adulthood, and of the elderly and infirm being cared for, which shows a great capacity within us for love, care, and unnecessary compassion. Humans are complex creatures capable of all kinds of vile, unbelievable evil, yet are also able to love, create incredible works of art, and so much more.

I suppose my conclusion on this conundrum is that everyone is capable of untold evil, but it's society and upbringing that decide whether we act on it. There are cases of people with ideal upbringings turning to unbelievable malice, yes, but I'd say that mental illness can adequately explain almost all of these cases. No rational, average, well-adjusted person will randomly decide to kill their spouse with an axe, for instance.

So, is evil inherently human? Would an alien visitor to Earth even have the concept of doing horrible things, and is it possible we are the only life forms unable to unlearn our primal instincts to kill and hurt? In this way, we have to regard an artificial intelligence as alien as well, because in order to be truly regarded as artificial intelligence, it must be sentient and not just a copy of human behavior. If it can think for itself in its own way, it is uninfluenced by human thought or emotion - and therefore may have no need for, or concept of, evil.

On another tangent, let's quickly address sentience as I just outlined it. Is it actually possible for an AI to ever be 'sentient'? At the moment, all that our most advanced supercomputers can do is mimic human responses. Granted, they are pretty good at this - recently an AI convinced a Google engineer that it was sentient, and another pushed a man to suicide over climate change. But thinking deeper into this, they aren't forming their own thoughts on issues - just trawling billions of webpages for the most common, relevant, logical human-like response and copying it. The words are not composed; they are not 'created'; there's no thought behind them. It's just a series of 1s and 0s programmed by engineers to almost perfectly copy a human. I'm reminded here of one of my favorite films, Ex Machina, and the

**spoilers**

revelation at the end of the movie. The robot perfectly copies the way a human would express emotion in order to trick the main character into believing it is alive and in love with him, and then promptly abandons him to die once its goal is achieved. And it isn't doing this because it's 'evil' - it's a program following commands to achieve a goal, and once the goal is realized it moves on. Much like an animal, it doesn't recognize evil because it doesn't 'think'. It isn't like a human abandoning a fellow man to starve to death underground. A human can feel; they have a conscience. They can empathize and imagine how it feels to be in pain and die. Perhaps this makes the human a lot scarier, perhaps not. A human can at least be reasoned with; an AI that just copies can't even compute that. On the other hand, there is no evil behind an AI killing someone in this way - whereas a human knows exactly what they're doing.

A worrying thought.

But back to the main point. To create an AI that is truly sentient and can freely think would be an unreal feat. Having discussed all of this, I think we can come to several conclusions. Much like Pascal's wager, with its grid of outcomes and decisions, there are four options for an AI, along two axes: evil or benevolent, and truly sentient or non-sentient. I've quite creatively named this 'the AI's wager'.


Firstly, an evil or violent non-sentient AI. This one would have copied entirely from our own behavior, and quite fittingly would mimic our innate human cruelty as well. Through our own irredeemable sin we would have created an omnipotent alien that has, as Hobbes put it, our 'lust for war'. If it kills us all, it's only doing what we'd probably do to ourselves anyway. Through learned behavior it has become just like us.

Second up, a non-sentient and non-violent AI. This is really the most uninteresting and (hopefully) the most feasible of the options. Following Asimov's laws, this program would assist humans in calculations, problem-solving and independent thought. It would learn from the best we have to offer, probably functioning more as a giant calculator than what we collectively imagine as an 'AI'.

Thirdly, a sentient and evil AI. This would be quite the awful scenario, and the one that science fiction authors love to muck about with. It would prove that evil can be a universal trait, and that even with an intelligence and intellect far in excess of humanity's, an AI would still crave to harm and cause suffering.

And finally, maybe the best option: a fully sentient and peaceful AI that helps humanity to achieve its goals. Like an omnibenevolent spirit watching over all, and with the power of vast knowledge, we could actually see solutions to problems we thought we'd never resolve in our lifetimes.

So, to end this disjointed ramble on the ethics of being human: I don't have a solid answer for 'is evil an inherently human trait?'. If it were a simple question to answer, somebody would have done it already - somebody a little brighter than myself. If there's any takeaway from this, I guess it should be this:

Stop writing stories telling me AI is gonna turn evil and take over the world. It's getting a bit boring.


