Bit by bit robots are outsmarting us, but an uprising? Unlikely

Despite improvements in artificial intelligence, MIT physicist Max Tegmark believes robots developing consciousness and rising up against us is not in the cards.

Our most intelligent machines outperform us humans only at specific tasks ... at least for now.


In 2017, we've pretty much accepted that computers can trounce us at everything from math to bricklaying to chess and Jeopardy!. We even welcome our robot overlords, as long as they agree to take over our most tedious tasks, like vacuuming.

But eventually, robots will develop consciousness and rise up against us, wiping out humanity in an attempt to take over the world. Right?

Wrong, says MIT physicist Max Tegmark, author of the new book Life 3.0: Being Human in the Age of Artificial Intelligence. Extraordinarily wrong.

“Hollywood makes us worry about the wrong thing,” he says in an interview with Metro. “The real worry is not malevolence. It’s competence.”

That's because with the rise of artificial intelligence and machine learning, computers can teach themselves, rapidly surpassing the capacity of their human programmers.  

For now, even our most impressive machines outperform us only at specific tasks: The chess robot can’t play Hearts to save its life.

But one day, they'll become much better than us at, well, everything, Tegmark says. When that will happen is up for debate: some experts say 30 years, others say centuries, and some doubt it's possible at all, he adds.

These future computers will be frighteningly good at accomplishing their goals, Tegmark explains. And they could turn deadly if “their goals aren’t aligned with ours,” either because they’re vulnerable to attack or because they aren’t programmed carefully enough.

Tegmark spun out this scenario: Say you tell a future smart, self-driving taxi to drive you to the local airport as quickly as possible.

“You’ll arrive covered in vomit and chased by helicopters, and cry, ‘That’s not what I asked for!’” Tegmark says. Then he breaks into a deep, goofy cyborg voice and says, “That is. Exactly. What. You. Asked for.”

In his book, he argues that since human-surpassing artificial intelligence could be coming within decades, research on applying it safely needs to start now. It’s time to ask “nerdy questions about how to make machines adopt, understand and retain our goals,” so we have the answers when we need them, he says.

And he really, truly thinks we might need them.

“There’s no law of physics that says we can’t build a machine that is more intelligent than us in more or less all ways,” he says.

Tegmark has high hopes for computers that will be able to teach themselves the solutions to our biggest problems, from inefficient green energy to inadequate health care in the developing world. But let technology develop without safeguards in place, he warns, and we could end up with a 21st-century arms race to create autonomous weapons.

Some of his claims are a little out-there — like the possibility that future computers could modify their own hardware. They'd be making ever-more intelligent copies of themselves, without all the mess and random errors that come with reproducing the old-fashioned way.

Tegmark says we just have to get past our “human exceptionalism” and “carbon chauvinism” to see our way there.

“I’m optimistic that we can create an awesome future for technology, and win the race between its power and the wisdom with which we manage it.”