ARTIFICIAL INTELLIGENCE
Artificial intelligence (AI) makes it possible for
machines to learn from experience, adjust to new inputs, and perform
human-like tasks. Most AI examples that you hear about today, from chess-playing
computers to self-driving cars to machines that operate on their own, small and large, rely heavily on deep learning and natural language processing. Using these technologies, computers can be
trained to accomplish specific tasks by processing large amounts of data and
recognizing patterns in the data.
AI research in the 1950s explored topics like
problem solving and symbolic methods. In the 1960s, the US Department of
Defense took an interest in this type of work and began programming computers to mimic basic human reasoning. For
example, the Defense Advanced Research Projects Agency (DARPA) completed street
mapping projects in the 1970s. And DARPA produced intelligent personal
assistants in 2003, long before Siri, Alexa, or Cortana became household names.
This early work paved the way for the automation
and formal reasoning that we see in computers today, including decision support
systems and smart search systems that can be designed to complement and augment
human abilities.
While Hollywood movies and science fiction novels
depict AI as human-like robots that take over the world, the current evolution
of AI technologies isn’t that scary – or quite that smart. Instead, AI has
evolved to provide many specific benefits in every industry.
AI is different from hardware-driven, robotic
automation. Instead of automating manual tasks, AI performs frequent,
high-volume, computerized tasks reliably and without fatigue. For this type of
automation, human inquiry is still essential to set up the system and ask the
right questions that will yield the answers being sought.
Making AI more intelligent is not unlike sending students
to school. You need lots of data to train deep learning models because
they learn directly from the data. The more data you can feed them, the more
accurate they become.
For example, our interactions with Google Search
and Google Photos are all based on deep learning, and they keep getting more
accurate the more information people put into them.
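To make that last point concrete, here is a minimal sketch in Python using the scikit-learn library. It is purely my own toy illustration (nothing to do with Google’s systems): the same simple model, trained on progressively larger slices of a small handwritten-digit dataset, is then tested on examples it has never seen.

# A toy demonstration that more training data tends to mean higher accuracy.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # ~1,800 small images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for n in (50, 200, 800, len(X_train)):  # train on ever-larger slices of the data
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])
    print(f"trained on {n:4d} examples -> test accuracy {model.score(X_test, y_test):.2f}")

On a typical run, the accuracy climbs as the training slice grows, which is the paragraph above restated in a dozen lines of code.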
Data is all around us. The Internet and its connected sensors
can capture large volumes of data, while artificial
intelligence (AI) can learn patterns in that data to automate tasks for a
variety of industrial and business benefits. Some of these we can use at home.
The Vivint Outdoor
Camera Pro can sense when someone is on your property. It will then play a
whistle sound and snap a picture when the intruder looks at the camera.
There is a washing machine that knows how much
detergent you need. The Whirlpool Smart Front Load Washer
automatically senses load size, and its detergent dispenser, called Load & Go,
holds enough for up to 40 loads. It’s curiously
omniscient. There’s also a matching dryer.
There is a Google-powered clock that knows how to wake you up
gradually. Connected to the lights in your home, the alarm clock can slowly
raise its volume and brighten the lights over a 30-minute period as you wake. The
Google Assistant can even read the current news to you.
Real-world artificial intelligence appeared culturally
familiar, even cliché, long before it became real, as in the antique horrors of
Doctor Frankenstein animating his monster, the stilted behaviour of Star
Trek’s Lieutenant Commander Data, or the fairy tale of Mister Geppetto making
his marionette Pinocchio into a real
boy.
Nowadays, however, science is catching up to fiction, and
unforeseen moral problems may arise for
which the familiar storylines about soulless cyborgs offer limited guidance.
Artificial intelligences are no longer simply devices programmed for certain
tasks. They are learning machines, able to teach themselves based on
experience, and to act upon their newly formed knowledge. As such, they
threaten to escape the control of their creators.
Artificial Intelligence is more than self-driving cars and
personal assistant robots that control our appliances and order our groceries.
Its intuition can be deeper and more perceptive than the suggested replies in
Gmail. Its applications are increasingly diverse and invisible, from banking,
health care, education, aviation, agriculture and climate change to infrastructure
management, cultural promotion, and entertainment of every kind.
Artificial intelligence will eventually touch or transform every sector
and industry, the government of Canada
said in a news release in mid-May 2019, when
it named 15 experts to a new advisory
council on artificial intelligence that will focus on ethical concerns. Their goal will be
to “increase trust and accountability in AI while protecting our democratic
values, processes and institutions,” and to ensure that Canada takes a
“human-centric approach to AI, grounded in human rights, transparency and
openness.”
That could certainly apply in the future if
police officers become robots, as often depicted in futuristic movies. Don’t
laugh. That could happen. People laughed when they were told that someday
humans would fly through the air
in planes. Some people laughed at me in the 1950s when I told them that
someday we would buy things with bank
cards. My grade-five teacher laughed
at me when I suggested in my essay that the day would come when planes would get
into the air by rising straight up
instead of using runways. Such planes actually exist.
It is a curious project, helping computers to be
more accountable and trustworthy. But here we are. Artificial intelligence has
disrupted the basic moral question of how to assign responsibility after
decisions are made, according to David Gunkel, a philosopher of robotics and
ethics at Northern Illinois University. He calls this the “responsibility gap”
of artificial intelligence.
Consider Google’s AlphaGo, a computer program that has
beaten the world’s best players at the famously complex board game Go. Go has
too many possible moves for a computer to calculate and evaluate them all, so
the program uses a strategy of “deep learning” to reinforce promising moves,
thereby approximating human intuition. So when it won against the world’s top
players, such as top-ranked Ke Jie in 2017, there was confusion about who
deserved the credit. Even the programmers could not account for the victory.
They had not taught AlphaGo to play Go. They had taught it to learn Go, which
it did all by itself.
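AlphaGo’s real machinery (deep neural networks guiding a search, refined over millions of self-play games) will not fit in a blog post, but the idea of teaching a program only how to learn a game can be shown in miniature. The Python sketch below is entirely my own illustration, not AlphaGo’s algorithm: a tiny reinforcement-learning program that teaches itself tic-tac-toe by playing against itself, with no human ever telling it which moves are good.

import random
from collections import defaultdict

# Learned value of making a given move on a given board, starting at zero.
Q = defaultdict(float)
ALPHA, EPSILON = 0.3, 0.1  # learning rate and exploration rate

def winner(b):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return 'draw' if ' ' not in b else None

def choose(board, player):
    moves = [i for i, c in enumerate(board) if c == ' ']
    if random.random() < EPSILON:          # occasionally try something new
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m, player)])

def train(episodes=50000):
    for _ in range(episodes):
        board, player, history = ' ' * 9, 'X', []
        while True:
            move = choose(board, player)
            history.append((board, move, player))
            board = board[:move] + player + board[move + 1:]
            result = winner(board)
            if result:
                # The only teaching signal: wins are good, losses bad, draws so-so.
                for b, m, p in history:
                    reward = 0.5 if result == 'draw' else (1.0 if result == p else -1.0)
                    Q[(b, m, p)] += ALPHA * (reward - Q[(b, m, p)])
                break
            player = 'O' if player == 'X' else 'X'

train()
print(max(range(9), key=lambda m: Q[(' ' * 9, m, 'X')]))  # its favourite opening move

The programmer writes the learning rule, not the moves; whatever openings and blocks the program ends up preferring, it worked out by itself, which is a pocket-sized version of Gunkel’s responsibility gap.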
That is scary. Think about this next scenario: a
robotic police officer operating under artificial intelligence control
misreads a suspect’s move when he reaches into his back pocket to bring out his
wallet, and the robot shoots the man because it believes he is about
to pull out a gun.
A human being has a brain that thinks, and yet a
police officer actually shot a deaf man who reached into his back pocket to
bring out a card saying that he was deaf.
Corporations that stand to benefit
most from artificial intelligence have taken keen notice of possible failures.
Driven by unease over public-relations disasters like Facebook’s alliance with
political research firm Cambridge Analytica, or the death of a pedestrian in
Arizona in 2018 who was hit by a self-driving Uber vehicle, they have taken
steps to get ahead of the problem of AI’s ethical failures.
But in some cases, the effort failed before
it even began. In March 2019, Google launched an
Advanced Technology External Advisory Council of eight experts to guide the
“responsible development and use” of AI in its products. A few days later, in
April, that board was disbanded amid controversy over its members, one of whom was
“vocally anti-trans, anti-LGBTQ, and anti-immigrant,” according to a successful
petition, and another of whom ran a drone company.
Something else is likely to take its place, of
course. The problems and risks are not going anywhere, and Canada’s is not the
only government studying the issue. In March 2019, the European Union
released a report, Ethics Guidelines for
Trustworthy AI, that listed seven factors, with an emphasis on human oversight
and accountability. In this environment, ethical AI has become a buzzword,
referring both to ethical applications of AI that remain under human control
and to AI that aims to be ethical in its own behaviour, free of human
guidance. This latter goal is where it gets especially tricky.
How do you teach a machine right from wrong, not
just in a particular case, but in general? Can a machine learn a virtue just as
well as it learns a board game, or how to fly a plane?
A human can learn how to be honest and considerate,
but how do you program an artificial intelligence robot to have these
virtues?
If an artificial intelligence robot can adjust its
feelings on its own, what is going to prevent it from becoming a murderous robot
that has no scruples? That is really
scary.
I have a gadget on my desk that gives me the
current weather or any other information I seek, but that information is programmed
into the gadget. Suppose instead it worked out the weather on its own, and when I asked
the gadget what the weather was, rather than telling me the
information I sought, it said, “Drop dead,
Batchelor!”
The possibility of anything that is programmed to
reprogram itself is really SCARY.