Can we unlock the Deep Learning black box?

Troy Sadkowsky - Wednesday, June 28, 2017

Artificial intelligence has experienced a boom in recent years, driven by increasing automation and the generation of big data. Thanks to deep learning and artificial neural networks, machines are acquiring new perceptual abilities, such as recognizing images and speech or reading handwriting, paving the way for a vast range of new applications in human lives. However, this leap forward in AI processing comes at a cost – the reasoning behind the choices that deep learning networks make has become inscrutable, even to the engineers who built them.

Progress in automated vision benefits greatly from deep learning because general visual processing is far too complex to code by hand. A program that can learn by observing human example or by digesting large training sets, generating its own algorithms through an interconnected network of a dozen to several hundred layers of simulated neurons, brings us that much closer to self-driving cars and other AI systems capable of sophisticated automated decision-making. But if we cannot understand a program’s individual decisions – why it chose to drive into a tree, why a patient’s medications should be changed, why one applicant was hired and another flagged as a terrorist – what ethical and moral boundaries are we crossing? And if an artificial intelligence program is not 100% accurate, as is almost always the case, what are the dangers of putting ourselves at the mercy of a machine’s mistakes?
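
To make the “layers of simulated neurons” concrete, here is a minimal sketch in Python of the kind of network described above: a stack of weight matrices whose learned numbers, rather than hand-written rules, encode the program’s behavior. The layer sizes, random weights, and fake input are invented purely for illustration; a real vision network would be trained on data and be far larger.

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    rng = np.random.default_rng(0)

    # Each "layer of simulated neurons" is just a weight matrix; the
    # network's algorithm lives in these numbers, which training adjusts.
    layer_sizes = [784, 128, 64, 10]   # e.g. image pixels in, class scores out
    weights = [rng.normal(0, 0.1, (m, n))
               for m, n in zip(layer_sizes, layer_sizes[1:])]

    def forward(x):
        # Every layer re-transforms the previous layer's output; after
        # training, no single weight "explains" a decision - the black box.
        for w in weights[:-1]:
            x = relu(x @ w)
        return x @ weights[-1]             # raw scores, one per class

    scores = forward(rng.normal(size=784)) # one made-up "image"
    print("decision:", scores.argmax())

Nothing in the loop itself is mysterious; the opacity comes from the sheer number of learned weights, whose joint effect resists any human-readable summary.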

According to an article in MIT Technology Review (https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/), trust will be a key factor in future applications of deep learning. Once machines can explain their reasoning to humans, we will be better able to learn from and act on their insights. Research is underway to develop tools that let machine learning programs explain themselves. Regina Barzilay, at MIT, is developing a system that can collaborate with doctors by extracting snippets of text that represent the patterns it has discovered, while Carlos Guestrin’s system, at the University of Washington, highlights the keywords or parts of an image that support a particular decision.
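
The article does not spell out how Guestrin’s system works, but the general recipe behind such keyword-highlighting explainers (his group’s published approach is widely known as LIME) can be sketched: perturb one input many times, watch how the black-box score moves, and fit a simple linear model whose coefficients rank each feature’s contribution to that single decision. Everything below – the toy black_box scorer, the example words, the sampling scheme – is invented for illustration and is not the actual system.

    import numpy as np

    # Toy stand-in for an opaque classifier: it scores a text by cues it
    # has "learned"; in practice this would be a deep network we cannot read.
    def black_box(words):
        return 0.3 + 0.7 * ("tumor" in words) - 0.4 * ("benign" in words)

    def explain(words, n_samples=500, seed=0):
        rng = np.random.default_rng(seed)
        # Perturb the input: randomly mask out words, re-query the model.
        masks = rng.integers(0, 2, size=(n_samples, len(words)))
        ys = np.array([black_box([w for w, keep in zip(words, m) if keep])
                       for m in masks])
        # Fit a linear surrogate (plain least squares, for brevity); its
        # coefficients rank each word's pull on this one decision.
        X = np.column_stack([masks, np.ones(n_samples)])
        coef, *_ = np.linalg.lstsq(X, ys, rcond=None)
        return sorted(zip(words, coef[:-1]), key=lambda t: -abs(t[1]))

    for word, weight in explain(["patient", "scan", "shows", "tumor", "benign"]):
        print(f"{word:8s} {weight:+.3f}")

Run on this toy, the surrogate correctly surfaces “tumor” and “benign” as the words driving the score – exactly the kind of evidence a doctor could sanity-check.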

However, interpretable deep learning is still far off, and even then, explanations offered by machines will always be simplified to some degree, which will keep trust in machine learning programs controversial. As a consequence, the European Union may soon pass legislation making explanations of automated decisions a fundamental legal right, required of the companies that deploy AI programs.

Jeff Clune, an investigator at the University of Wyoming who tests deep neural networks, offers that, just as with human intelligence and decision-making, it is perhaps “the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable”. As deep learning offers ever more appealing possibilities in medicine, technology, and other industries, when – and whether – should society take the deep learning leap of faith?
