It's a Saturday morning in June at the Royal
Society in London. Computer scientists, public figures and reporters have
gathered to witness or take part in a decades-old challenge. Some of the
participants are flesh and blood; others are silicon and binary. Thirty human
judges sit down at computer terminals and begin chatting. The goal? To
determine whether they're talking to a computer program or a real person.
The event,
organized by the University of Reading, was a rendition of the so-called Turing
test, developed 65 years ago by British mathematician and cryptographer Alan Turing as a way to assess whether a machine is capable of intelligent
behavior indistinguishable from that of a human. The recently released film
"The Imitation Game," about Turing's efforts to crack the German
Enigma code during World War II, takes its title from the scientist's own name for his test.
In the London
competition, one computerized conversation
program, or chatbot, with the personality of a 13-year-old
Ukrainian boy named Eugene Goostman, outperformed the other contestants. It fooled 33 percent of the judges into thinking it was a human being. At the time, contest organizers and the media hailed the performance as a historic achievement, saying the chatbot was the first machine to
"pass" the Turing test. [Infographic:
History of Artificial Intelligence]
Decades of research and speculative fiction have led to today's
computerized assistants such as Apple's Siri.
Credit: by Karl Tate, Infographics Artist
When people
think of artificial intelligence (AI) — the study of the design of intelligent systems and machines
— talking computers like Eugene Goostman often come to mind. But most AI
researchers are focused less on producing clever conversationalists and more on
developing intelligent systems that make people's lives easier — from software
that can recognize objects and animals, to digital assistants that cater to,
and even anticipate, their owners' needs and desires.
But several
prominent thinkers, including the famed physicist Stephen Hawking and
billionaire entrepreneur Elon Musk, warn that the development of AI should be cause for concern.
Thinking machines
The notion of
intelligent automata, as friend or foe, dates back to ancient times.
"The
idea of intelligence existing in some form that's not human seems to have a
deep hold in the human psyche," said Don Perlis, a computer scientist who
studies artificial intelligence at the University of Maryland, College Park.
Reports of
people worshipping mythological human likenesses and building humanoid
automatons date back to the days of ancient Greece and Egypt, Perlis told Live
Science. AI has also featured prominently in pop culture, from the sentient
computer HAL 9000 in Stanley Kubrick's "2001: A Space Odyssey" to
Arnold Schwarzenegger's robot character in "The Terminator" films. [A Brief History of Artificial
Intelligence]
Since the
field of AI was officially founded in the mid-1950s, people have been
predicting the rise of conscious machines, Perlis said. Inventor and futurist
Ray Kurzweil, recently hired to be a director of engineering at Google, refers
to a point in time known as "the singularity," when machine intelligence exceeds human intelligence. Based on
the exponential growth of technology according to Moore's Law (the observation that transistor counts, and with them computing power, double approximately every two years),
Kurzweil has predicted the singularity will occur by
2045.
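To see the arithmetic behind that forecast, here is a minimal sketch of a Moore's Law-style doubling curve. The two-year doubling period matches the law's popular formulation; the baseline year and starting amount of computing power are arbitrary assumptions chosen purely for illustration, not figures from Kurzweil.

```python
# Rough illustration of exponential growth under a Moore's Law-style doubling.
# Assumes a 2-year doubling period and an arbitrary baseline of 1 "unit" of
# computing power in 2015; both numbers are illustrative, not Kurzweil's.

def projected_power(year, base_year=2015, base_power=1.0, doubling_years=2.0):
    """Return relative computing power projected for a given year."""
    return base_power * 2 ** ((year - base_year) / doubling_years)

for year in (2015, 2025, 2035, 2045):
    print(year, round(projected_power(year)))

# By 2045 this simple model implies roughly 2**15 (about 32,000) times the
# baseline power -- the kind of runaway curve the singularity argument leans on.
```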
But cycles of
hype and disappointment — the so-called "winters of AI" — have
characterized the history of artificial intelligence, as grandiose predictions
failed to come to fruition. The University of Reading Turing test is just the
latest example: Many scientists dismissed the Eugene Goostman performance as a
parlor trick; they said the chatbot had gamed the system by assuming the
persona of a teenager who spoke English as a foreign language. (In fact, many
researchers now believe it's time to develop an updated Turing test.)
Nevertheless,
a number of prominent science and technology experts have expressed worry that
humanity is not doing enough to prepare for the rise of artificial general
intelligence, if and when it does occur. Earlier this week, Hawking issued a
dire warning about the threat of AI.
"The
development of full artificial intelligence could spell the end of
the human race," Hawking told the BBC, in response to a question about his new voice recognition system,
which uses artificial intelligence to predict intended words. (Hawking has a
form of the neurological disease amyotrophic lateral sclerosis, ALS or Lou
Gehrig's disease, and communicates using specialized speech software.)
And Hawking
isn't alone. Musk told an audience at MIT that AI is humanity's "biggest
existential threat." He also once tweeted, "We need to be super
careful with AI. Potentially more dangerous than nukes."
In March,
Musk, Facebook CEO Mark Zuckerberg and actor Ashton Kutcher jointly invested $40 million in the
company Vicarious FPC, which aims
to create a working artificial brain. At the time, Musk told CNBC that he'd like to "keep an eye on what's going on with artificial
intelligence," adding, "I think there's potentially a dangerous
outcome there."
Fears of AI turning into
sinister killing machines, like Arnold Schwarzenegger's character from the
"Terminator" films, are nothing new.
Credit: Warner Bros.
But despite
the fears of high-profile technology leaders, the rise of conscious machines —
known as "strong AI" or "general artificial intelligence" —
is likely a long way off, many researchers argue.
"I don't
see any reason to think that as machines become more intelligent … which is not
going to happen tomorrow — they would want to destroy us or do harm," said
Charlie Ortiz, head of AI at the Burlington, Massachusetts-based software
company Nuance Communications. "Lots of work needs to be done before
computers are anywhere near that level," he said.
Machines with benefits
Artificial
intelligence is a broad and active area of research, but it's no longer the
sole province of academics; increasingly, companies are incorporating AI into
their products.
And there's
one name that keeps cropping up in the field: Google. From smartphone
assistants to driverless cars, the Bay Area-based tech giant is gearing up to
be a major player in the future of artificial intelligence.
Google has been a pioneer in the use of machine learning — computer systems that can learn
from data, as opposed to blindly following instructions. In particular, the
company uses a set of machine-learning algorithms, collectively referred to as
"deep learning," that allow a computer to do things such as recognize
patterns from massive amounts of data.
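To make the idea of learning patterns from data concrete, here is a minimal sketch in Python using the scikit-learn library's bundled handwritten-digit images. It is a toy classifier trained on a few hundred labeled examples, not the large-scale deep-learning systems Google or Baidu run in production.

```python
# Minimal pattern-recognition sketch: a classifier learns to read handwritten
# digits from labeled examples instead of following hand-written rules.
# Toy example only -- far simpler than production deep-learning systems.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# A small multilayer perceptron learns the pixel patterns from the data.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

print("accuracy on unseen digits:", model.score(X_test, y_test))
```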
For example,
in June 2012, Google created a neural network spread across 16,000 computer processors that trained
itself to recognize a cat by looking at millions of cat images from
YouTube videos, The New York
Times reported. (After all,
what could be more uniquely human than watching cat videos?)
The project,
called Google Brain, was led by Andrew Ng, an artificial intelligence researcher at
Stanford University who is now the chief scientist for the Chinese search
engine Baidu, which is sometimes referred to as "China's Google."
Today, deep
learning is a part of many products at Google and at Baidu, including speech
recognition, Web search and advertising, Ng told Live Science in an email.
Current
computers can already complete many tasks typically performed by humans. But
possessing humanlike intelligence remains a long way off, Ng said. "I
think we're still very far from the singularity. This isn't a subject that most
AI researchers are working toward."
Gary Marcus,
a cognitive psychologist at NYU who has written extensively about AI, agreed.
"I don't think we're anywhere near human intelligence [for
machines]," Marcus told Live Science. In terms of simulating human thinking,
"we are still in the piecemeal era."
Instead,
companies like Google focus on making technology more helpful and intuitive.
And nowhere is this more evident than in the smartphone market.
Artificial intelligence in your pocket
In the 2013
movie "Her," actor Joaquin Phoenix's character falls in love with his
smartphone operating system, "Samantha," a computer-based personal
assistant who becomes sentient. The film is obviously a product of Hollywood,
but experts say that the movie gets at least one thing right: Technology will
take on increasingly personal roles in people's daily lives, and will learn
human habits and predict people's needs.
Anyone with
an iPhone is probably familiar with Apple's digital assistant Siri, first introduced as a feature on the iPhone 4S in October 2011. Siri
can answer simple questions, conduct Web searches and perform other basic
functions. Microsoft's equivalent is Cortana, a digital assistant available on
Windows phones. And Google has Google Now, available in the Chrome Web browser and as an app on Android and iOS devices, which bills itself as providing "the information you want, when you need it."
For example,
Google Now can show traffic information during your daily commute, or give you
shopping list reminders while you're at the store. You can ask the app
questions, such as "should I wear a sweater tomorrow?" and it will
give you the weather forecast. And, perhaps a bit creepily, you can ask it to
"show me all my photos of dogs" (or "cats,"
"sunsets" or a even a person's name), and the app will find photos
that fit that description, even if you haven't labeled them as such.
Given how much personal data Google stores about its users, in the form of emails, search histories and
cloud storage, the company's deep investments in artificial intelligence may
seem disconcerting. For example, AI could make it easier for the company to
deliver targeted advertising, which some users already find unpalatable. And
AI-based image recognition software could make it harder for users to maintain
anonymity online.
But the
company, whose motto is "Don't be evil," claims it can address
potential concerns about its work in AI by conducting research in the open and
collaborating with other institutions, company spokesman Jason Freidenfelds
told Live Science. In terms of privacy concerns, specifically, he said,
"Google goes above and beyond to make sure your information is safe and
secure," calling data security a "top priority."
While a phone
that can learn your commute, answer your questions or recognize what a dog
looks like may seem sophisticated, it still pales in comparison with a human
being. In some areas, AI is no more advanced than a toddler. Yet, when asked, many AI researchers admit that the day when machines
rival human intelligence will ultimately come. The question is, are people ready for
it?
In the film
"Transcendence," Johnny Depp's character uploads his mind to a
computer, but it doesn't end well.
Credit: Warner Bros.
Taking AI seriously
In the 2014
film "Transcendence," actor Johnny Depp's character uploads his mind
into a computer, but his hunger for power soon threatens the autonomy of his
fellow humans. [Super-Intelligent Machines: 7
Robotic Futures]
Hollywood
isn't known for its scientific accuracy, but the film's themes don't fall on
deaf ears. In April, when "Transcendence" was released, Hawking and
fellow physicist Frank Wilczek, cosmologist Max Tegmark and computer scientist
Stuart Russell published an op-ed in The Huffington Post warning of the dangers
of AI.
"It's
tempting to dismiss the notion of highly intelligent machines as mere science
fiction," Hawking and others wrote in the article."But this would be a mistake, and potentially our worst mistake
ever."
Undoubtedly,
AI could have many benefits, such as helping to eradicate war,
disease and poverty, the scientists wrote. Creating intelligent machines would
be one of the biggest achievements in human history, they wrote, but it
"might also be [the] last." Considering that the singularity may be
the best or worst thing to happen to humanity, not enough research is being
devoted to understanding its impacts, they said.
As the
scientists wrote, "Whereas the short-term impact of AI depends on who
controls it, the long-term impact depends on whether it can be controlled at
all."
Follow Tanya
Lewis on Twitter. Follow us @livescience, Facebook & Google+. Original article on Live Science.