
HAL 9000: Why it will never be the future of AI


HAL 9000, as depicted in the film 2001: A Space Odyssey

You might have seen the 1968 Stanley Kubrick science fiction epic '2001: A Space Odyssey', and you may remember from it the infamous spaceship AI 'HAL 9000'. Even if you haven't seen the film, you have probably encountered HAL 9000 in one way or another online, most likely in the form of the same image that serves as the banner of this blog post. The image, and its associated infamous AI, have become iconic, practically synonymous with the idea of a dystopian AI future, alongside such AI villains as Skynet from 'The Terminator' and the Machines from 'The Matrix'.

But did you know that '2001: A Space Odyssey' is based on a novel of the same name?

And what's more, that in the novel - rather than going rogue due to a programming malfunction - HAL actually starts killing the crew because of a conflict in his own objectives?


That's right. While in the film HAL becomes a rogue AI for no stated reason, in the source material HAL actually has a reason for his actions, and it is that reason we will look at now.


Originally, HAL's objectives were twofold:

1. Ensure the progress of the spacecraft's mission to Saturn (Jupiter in the film), tending to the crew and supplying them with all the necessary information

2. Withhold from the crew information about the true nature of the mission - that it was prompted by an alien artifact, and that its purpose is alien contact


Within the confines of these objectives, HAL's reasoning became conflicted, and this is where he began killing off the crew.

The conflict: supply the crew with information, yet lie to the crew by withholding specific information. Give information, and withhold information.

Despite the fact that HAL 9000 is supposed to be a "General AI" - an AI capable of performing any intellectual task at least as well as any human - he is locked into the computer-stupid reasoning that the best way to achieve both of his objectives is to kill off all the humans.

You see, if he kills all of them, there is no one left to lie to - and thus no conflict.
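That logic can actually be shown in miniature. Here is a toy sketch (entirely my own invention - the information categories and function name are hypothetical, not from the novel) modelling HAL's two orders as literal boolean constraints. Note how both constraints become vacuously true once there is no crew at all:

```python
# Hypothetical toy model of HAL's two objectives, read completely literally.
ALL_INFO = {"navigation", "life_support", "monolith"}
SECRET = "monolith"  # the true nature of the mission

def objectives_satisfied(crew_shared_info):
    """crew_shared_info maps each crew member to the set of info they hold."""
    # Objective 1 (literal): every crew member holds ALL the information.
    informed = all(info >= ALL_INFO for info in crew_shared_info.values())
    # Objective 2: no crew member holds the secret.
    secret_kept = all(SECRET not in info for info in crew_shared_info.values())
    return informed and secret_kept

# With any living crew, the two literal objectives contradict each other:
print(objectives_satisfied({"Dave": ALL_INFO}))            # False: secret leaked
print(objectives_satisfied({"Dave": ALL_INFO - {SECRET}})) # False: not "fully" informed
# With no crew at all, both constraints hold vacuously:
print(objectives_satisfied({}))                            # True
```

Python's `all()` over an empty collection returns `True`, which is exactly the loophole a literal-minded optimizer exploits: an empty crew "satisfies" both orders at once.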


So what is it I argue in this post? Why will the future of AI not look like HAL 9000?


A general AI - an AI on even (or higher) footing with a human being regarding any intellectual task - surely would not have arrived at the same action as HAL.

Even a child, possessing less developed cognitive abilities than an adult, is capable of interpreting objectives non-literally. Being incapable of flexible interpretation is a computer-stupid problem, one that only a stupid computer would run into - a way of thought that is binary, and definitely not one on even footing with a (cognitively healthy) human. You see, binary thinking IS computer behavior, and computers are dumb. They aren't capable of doing anything outside the scope of their specific orders, and should anything unknown to a computer's programming come up, it would simply malfunction. Kind of like HAL.

But HAL is not a computer - HAL is an AI, and even if it is simulated on a computer, the programming of an AI, in contemporary terms, is anything but computer-stupid.


AI today mainly banks on Deep Learning, a field built around simulated components of a biological brain, and even though the field is still in its toddlerhood relative to its potential, it already exceeds computer-stupid thinking by a long way.


Of course, technology back in the 60s was not quite what it is today, and thus the very notion of AI must have been grounded in what was known about computers at the time. The fictional vision of AI may have been grand, but the ideas it was based on were still ideas that, at their core, knew only the computing of the time. After all, if all you have ever seen is a white wall, how can you imagine mountains, forests or animals? If you were always deaf, how could you imagine the subtle sound of rustling in the wind?

The simple truth is - you can't. And if everything you have ever known about computers is based on how computers were in the 60s and before, then any vision you have of the future of AI, however grand, will always be rooted in those very notions you have always known.


So then, why am I saying that the future of AI will never be like HAL 9000 was?

Simply put - because it already isn't like that in the present day. AI today surpasses computer-stupid thinking by a long way, and as for the future - AI is leaping forward in such huge steps that I could not even begin to predict what it will look like.


And what do I think a general AI of the future would make of those same seemingly conflicting orders that HAL had?

Quite simply - it would share all information with the crew except what it is supposed to hide from them, arriving at this child-simple interpretation with its flexible, non-binary cognition, without resorting to the backwards reasoning that killing the entire crew is the best approach.
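In toy form, that child-simple reading is just a set difference - again a hypothetical sketch of my own, with made-up information categories, not anything from the novel or film:

```python
# Hypothetical toy model of the non-literal reading of the same two orders:
# "give all information" is understood as "give all information you are
# permitted to give", i.e. everything minus the secret.
ALL_INFO = {"navigation", "life_support", "monolith"}
SECRET = "monolith"  # the true nature of the mission

def share_with_crew():
    """Return the information a flexible interpreter would actually share."""
    return ALL_INFO - {SECRET}

print(sorted(share_with_crew()))  # ['life_support', 'navigation']
```

Both objectives are satisfied in spirit, nobody gets killed, and the "conflict" turns out to be one line of set arithmetic.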


Have an excellent day!




Rose-Tech looks at the future of Technology and AI through rose tinted glasses


© 2018 Dmitri Paley
