The idea of artificial general intelligence as we know it today starts with a dot-com blowout on Broadway.

Twenty years ago—before Shane Legg clicked with neuroscience postgrad Demis Hassabis over a shared fascination with intelligence; before the pair hooked up with Hassabis’s childhood friend Mustafa Suleyman, a progressive activist, to spin that fascination into a company called DeepMind; before Google bought that company for more than half a billion dollars four years later—Legg worked at a startup in New York called Webmind, set up by AI researcher Ben Goertzel. Today the two men represent two very different branches of the future of artificial intelligence, but their roots reach back to common ground.

Even for the heady days of the dot-com bubble, Webmind’s goals were ambitious. Goertzel wanted to create a digital baby brain and release it onto the internet, where he believed it would grow up to become fully self-aware and far smarter than humans. “We are on the verge of a transition equal in magnitude to the advent of intelligence, or the emergence of language,” he told the Christian Science Monitor in 1998.

Webmind tried to bankroll itself by building a tool for predicting the behavior of financial markets on the side, but the bigger dream never came off. After burning through $20 million, Webmind was evicted from its offices at the southern tip of Manhattan and stopped paying its staff. It filed for bankruptcy in 2001.

But Legg and Goertzel stayed in touch. When Goertzel was putting together a book of essays about superhuman AI a few years later, it was Legg who came up with the title. “I was talking to Ben and I was like, ‘Well, if it’s about the generality that AI systems don’t yet have, we should just call it Artificial General Intelligence,’” says Legg, who is now DeepMind’s chief scientist. “And AGI kind of has a ring to it as an acronym.”

The term stuck. Goertzel’s book and the annual AGI Conference that he launched in 2008 have made AGI a common buzzword for human-like or superhuman AI. But it has also become a major bugbear. “I don’t like the term AGI,” says Jerome Pesenti, head of AI at Facebook. “I don’t know what it means.”

[Photo: Ben Goertzel. Wikimedia Commons]

He’s not alone. Part of the problem is that AGI is a catchall for the hopes and fears surrounding an entire technology. Contrary to popular belief, it’s not really about machine consciousness or thinking robots (though many AGI folk dream about that too). But it is about thinking big. Many of the challenges we face today, from climate change to failing democracies to public health crises, are vastly complex. If we had machines that could think like us or better—more quickly and without tiring—then maybe we’d stand a better chance of solving them.

————

By: Will Heaven
Title: Artificial general intelligence: Are we close, and does it even make sense to try?
Sourced From: www.technologyreview.com/2020/10/15/1010461/artificial-general-intelligence-robots-ai-agi-deepmind-google-openai/
Published Date: Thu, 15 Oct 2020 10:23:49 +0000
