The Dark Secret at the Heart of AI

I strongly recommend this article.

Here are the first two paragraphs:

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.
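The end-to-end approach the excerpt describes, sensor readings in and driving commands out, learned by watching a human, can be sketched in miniature. Everything below (the tiny two-layer network, the synthetic "human driver" data) is an illustrative assumption of mine, not Nvidia's actual system:

```python
import numpy as np

# Toy sketch of end-to-end "behavior cloning": a small network learns to map
# sensor readings to a steering command by imitating recorded human driving.
rng = np.random.default_rng(0)

# Synthetic "human driver" data: steering depends on two fake sensor channels.
X = rng.uniform(-1, 1, size=(500, 3))      # 500 frames, 3 sensor channels
y = 0.8 * X[:, 0:1] - 0.3 * X[:, 1:2]      # the human's steering angle

# One hidden layer of artificial neurons, randomly initialized.
W1 = rng.normal(0.0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)               # hidden activations
    return h @ W2 + b2, h                  # steering command, activations

mse_before = ((forward(X)[0] - y) ** 2).mean().item()

# Train by gradient descent on the imitation (squared) error.
lr = 0.1
for _ in range(2000):
    pred, h = forward(X)
    err = pred - y
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)     # backpropagate through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse_after = ((forward(X)[0] - y) ** 2).mean().item()

# The net now steers much like the driver it watched, yet staring at
# W1 and W2 tells you little about *why* it chose any single command.
cmd = forward(np.array([[0.5, -0.2, 0.0]]))[0].item()
```

No engineer wrote a driving rule here; the behavior lives entirely in the learned weights, which is exactly why "why did it do that?" is so hard to answer for the real thing.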

And there is a lot more of interest — including DARPA work.

Read and be amazed.


One response to “The Dark Secret at the Heart of AI”

  1. Bill Brandt

    Last year our car club took a tour of Daimler’s Sunnyvale, California facility. It is, I believe, one of two facilities in the world where they decide which electronic technologies to integrate into future automobiles.

    We saw their self-driving car. The interesting dilemma with this is that there are so many decisions a human driver can make that the programmers did not account for.

    Consider the following scenario: somebody wants to carjack you and is armed.

    If you saw a man with a gun right in front of you, acting in a threatening manner, the proper thing to do would be to swerve around him, or even hit him if necessary, to keep him from taking control.

    However, the self-driving cars of the present ignore that subtle but lethal difference. They’re programmed simply to stop, on the “assumption” that it is a pedestrian.

    With a full disclaimer: I do not know whether this is specific to Daimler’s car or something I read in general. I certainly can’t speak for Daimler, but I know that their car has had some subtle issues too.

    As far as “deep learning” goes, I view that as simply faulty programming.

    If the car is starting to do things for which the designers have no explanation, it’s time to go back and rewrite the code. The other side of this issue is that there are so many parameters that it is very difficult to foresee them all.

    Or perhaps one could use the rationale that, like human beings, sometimes they react in the right way and sometimes in the wrong way. But I don’t think that rationale should be applied to code.

    The long and short of it is that there are a lot more complex issues with self-driving cars than the designers originally anticipated.
