The insanely busy Dr. Stephen Hawking finally got around to answering his Reddit AMA questions from two months ago. He answered nine.
We’re not sure whether more answers are coming or if this is it. Probably the latter, as Hawking is possibly the most scheduled man on the face of the earth and is busy doing things like warning humans of the potential consequences of artificial intelligence. He also types at a rate of about one word per minute, due to his disability.
Here are six things we learned from Dr. Hawking.
“We may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails.”
Robots Might Put Us All Out of Business
Technological unemployment is the phenomenon where jobs disappear because new technologies replace old ones but don’t employ as many people. This isn’t a question of “if” but of “when and how much?” Hawking says it’s entirely possible that machines will one day manufacture everything we need. That means we’ll all either be sitting around watching football all day or grinding out a miserable existence in subsistence poverty. “So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.” Don’t get that new Ferrari just yet.
Robots Won’t Hate You, But They Might Kill You Anyway
When you hear about superintelligent robotics, you automatically think of Terminator 2: Judgment Day—specifically, that super creepy scene where everything gets nuked. The problem, however, isn’t that robots will hate and want to exterminate humans. Robotic nuclear holocaust will be less like T2 and more like Colossus: The Forbin Project. “You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants,” said Hawking.
Being Prepared for AI Makes This Less Likely
Elsewhere, Hawking has said that the invention of AI might be humanity’s greatest achievement as well as its last. He doesn’t think there’s any consensus on when, if ever, it’s actually going to happen. In this AMA he describes the creation of AI as “either the best or worst thing ever to happen to humanity” and says that we have to “get it right.” That means shifting the goal of AI research away from a race to build intelligence of any kind and toward benevolent, beneficial forms of artificial intelligence. Real, meaningful AI might be a long way away, but preparing now makes it more likely that AI will seek to help us, rather than see us as ants in an anthill.
Yes, AI Will Be Smarter Than You
Some people have postulated that a computer can only be as smart as the person designing it. However, Hawking says this is incorrect. Humans evolved from less-intelligent primates, so the idea of a “child” being smarter than its “parents” is not unprecedented. The tipping point will come when AI is able to reprogram itself. At that point, “we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails.”
AI Will Develop a Will to Survive
Not sure if that sent a chill down your spine the way it did ours, but it’s spooky to consider. While AI can have basically any “drive” you program it to have, things change once the AI is able to program itself. At that point, it will probably develop a very different set of goals. Hawking postulates that these will include a drive to survive and to acquire more resources to ensure its own survival. “This can cause problems for humans whose resources get taken away.”
Stephen Hawking Is More Like You Than You Think
He believes the greatest mystery is women, his favorite song is Rod Stewart’s “Have I Told You Lately That I Love You?” and he thinks The Big Bang Theory, which he watches online, is hilarious. He doesn’t remember watching Wayne’s World 2 in a video store 25 years ago, though.