Kubrick’s “2001” Anticipated Human Reliance on Untrustworthy AI

Evan Steeg
Published in Predict · 2 min read · Dec 25, 2022


Will We Be Smart Enough to Put Adequate Safeguards in Place?

Image shows the “HAL 9000” computer interface, part of the set from Stanley Kubrick’s movie classic “2001: A Space Odyssey”.

“Open the pod-bay doors, HAL.”

“I’m sorry, Dave. I’m afraid I can’t do that.”

I design and build Artificial Intelligence systems, and advise organizations on AI/data strategies, among other innovation pillars. And when I have time, I like to try out the latest discoveries, algorithms, apps and toys to emerge from the broader AI/ML ecosystem.

Like the other million or so people who’ve already written or given a talk about ChatGPT, I find it intriguing as a “mirror” of human language and reasoning. And no, it’s NOT sentient. And it’s NOT very trustworthy.

At some point — probably within our lifetimes, maybe within a decade — an AI model will be deployed somewhere AS IF IT IS both sentient and trustworthy. In other words, somebody will place a lot of trust in it, putting something valuable — like maybe our lives — in “its” “hands”.

To say this has serious legal, philosophical, political and security implications may be the understatement of the century.

Fortunately, a lot of good, smart people are working on “Ethical AI”: legal, regulatory, and technical frameworks for managing these technologies and their risks. Let’s hope they’re sufficient to the task…

Anyway, watching old movies this weekend, I was reminded of Stanley Kubrick’s 1968 classic “2001: A Space Odyssey”. Among other cinematic breakthroughs, the film anticipated the human-computer interaction challenges around AI rather pointedly — decades before Neo battled The Matrix, Matthew Broderick played “War Games”, or “Ahnold” promised “I’ll be back” as a Terminator.

How does one win a battle of wills against something whose will is likely unfathomable to us?

Sure, it’s fiction; it’s “just” art. But great art often anticipates and helps us frame and process important societal trends, opportunities, risks and tradeoffs.

“The will to mastery becomes all the more urgent the more technology threatens to slip from human control.”

— Heidegger

What do you think?

  • When and where will AI first be deployed in a way that might endanger human life?
  • Or has that already happened, with autonomous vehicles, military systems, or medical applications?
  • What are the best ways to manage such risks?

Please share your thoughts in a comment.

Thanks and have a happy and safe holiday season!

(h/t Luc Lalande for the Heidegger quotation!)


Evan Steeg

AI & digital health innovator. Sci-fi & football fan. Eastern Ontario via NYC, CT, Toronto. Degrees in Math, CS, Bfx. Bikes, hikes, dives & bass riffs.