Google’s Andrew Moore says AI disasters will cause lockdown of robotics research
- Three top researchers have begun sounding warning bells about the possible negative impacts of AI.
- Andrew Moore, the newly named head of AI at Google Cloud, said he has conceived of “horrible scenarios” that could lead to a public backlash against the technology.
- Other researchers predict that AI will automate more workplace tasks, reducing job opportunities for humans, and that it will one day enable deeper intrusion, even into our thoughts.
Some people quake at the idea of artificial intelligence. It’s safe to say that sci-fi films featuring malevolent machines have helped poison much of the public’s perception of AI.
But three experts in the field say there’s good reason to feel uneasy. While killer robots are still many years away, AI in some cases could pose an enormous threat to humans, according to Andrew Moore, the dean of the School of Computer Science at Carnegie Mellon University, who will soon take over as head of Google Cloud AI.
In November, Moore gave the keynote address at the Artificial Intelligence and Global Security Initiative and said: “If an AI disaster happens, and that would for instance be an autonomous car killing people due to serious bugs, and frankly I believe that’s already happened, then at some point AI systems are going to be locked down for development, at least in the US.”
“There are some even more horrible scenarios — which I don’t want to talk about on the stage, which we’re really worried about — that will cause the complete lockdown of robotics research,” Moore continued.
‘There are some even more horrible scenarios’
Moore’s statements come during a time when billions of dollars are being invested in AI.
Google, Microsoft, IBM, Intel, Amazon, Facebook, and others are pushing hard to develop the technology. Sundar Pichai, Google’s CEO, once called AI more fundamental to human development than electricity or fire.
Nonetheless, a handful of researchers are urging caution and warning that AI misuse could bring serious consequences.
Last week, Kai-Fu Lee, the former president of Google China, predicted that AI will revolutionize many industries and generate a lot of wealth, but most of it will land in the pockets of a relatively small number of people. He warned that with the help of AI, business owners will automate more and more tasks that were once performed by humans.
“So it’s actually having a doubling effect…creating new AI tycoons at the same time taking away from the poorest of society,” Lee said at the Artificial Intelligence 2018 Conference in San Francisco.
At the same conference, Meredith Whittaker, cofounder of the AI Now Institute at New York University and a leading Google researcher, outlined how technologists, including Elon Musk and Mark Zuckerberg, are searching for ways to enable humans to control devices with their thoughts. She predicted that in the not-so-distant future, technology will exist that can interpret and store our thoughts.
Whittaker then posed a question to the audience: What would happen if the authorities went to a company that had warehoused its customers’ thoughts and subpoenaed a person’s thought records? She called the possibility “creepy.”
In a similarly “creepy” vein is the growing ability of AI systems to read human emotions, something Moore said is on the way.
“Up until three or four years ago, the advances in computer vision and speech processing were around recognizing people, recognizing objects, and transforming spoken words into underlying written sentences,” Moore told Forbes magazine last year. “Now we realize we can go farther than that. For example, the cameras in modern cell phones have such high resolution that they can see little imperfections in the face and use them to track all the parts of skin as they move around the face.
“From tracking all the bits of the skin, you can work out what the muscles are doing under the face,” Moore continued. “From what the muscles are doing, using previous knowledge from psychology, you can detect facial action units and micro expressions to get information that we as humans are not even consciously aware of.
“This means that when in dialogue with a person, we can capture when they are excited, when they are happy, when they are fearful, or when there is a showing of contempt.”
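To make the pipeline Moore describes a little more concrete, here is a minimal, hypothetical Python sketch of its final step: turning detected facial action unit (AU) activations into a coarse emotion label. It assumes an upstream face tracker has already estimated per-frame AU intensities; the EMOTION_RULES mapping, the classify_emotion function, and the threshold are illustrative assumptions loosely based on FACS conventions, not the actual method used by Moore, Carnegie Mellon, or Google.

```python
# Illustrative sketch only: assumes a face tracker has already produced
# action unit (AU) intensities in the range 0.0-1.0 for a single video frame.
# The AU-to-emotion rules below are a simplified, hypothetical mapping.

EMOTION_RULES = {
    "happiness": {"AU6", "AU12"},        # cheek raiser + lip corner puller
    "fear":      {"AU1", "AU2", "AU20"}, # brow raisers + lip stretcher
    "contempt":  {"AU14"},               # dimpler
}

def classify_emotion(au_intensities, threshold=0.5):
    """Return the first emotion whose required AUs are all active, else 'neutral'."""
    active = {au for au, value in au_intensities.items() if value >= threshold}
    for emotion, required_aus in EMOTION_RULES.items():
        if required_aus <= active:
            return emotion
    return "neutral"

# Example frame: strong cheek raise and lip-corner pull suggest happiness.
frame = {"AU1": 0.1, "AU2": 0.05, "AU6": 0.8, "AU12": 0.9, "AU14": 0.2, "AU20": 0.0}
print(classify_emotion(frame))  # -> "happiness"
```

Real systems would replace the hand-written rules with models trained on labeled facial data, but the overall flow, from tracked facial movement to action units to an inferred emotional state, matches the progression Moore outlines above.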