Prof. Amnon Shashua at “AI Automotive” Munich 2017

Thank you, Mr. Becker, and thank you all for giving me the opportunity to speak. I'll talk about AI, but I'll talk about a problem that sounds a bit boring: safety. I'll try to convince you that it's really fascinating — and not only fascinating: if we don't solve this problem, and what I'll propose is a kind of starting point for solving it, I don't see an industry of autonomous cars.

So let's start with a short clip, just to give context. The context I want to build in the first ten minutes is the motivation — why we are solving the problem I want to show you — before we get into the details.

It all started at the last CES, in January this year. We built a fully autonomous car together with Delphi that drove a route of about 10 kilometers inside the city of Las Vegas, with very, very challenging maneuvers. It did about 300 rounds a day for four days, day and night, and it was really perfect; it made no mistakes. People who sat in the car — reporters, customers — came to us and said this was the best demonstration they had seen. So first let me show you the clip — this clip is by one of the reporters — and then I'll explain why I'm showing it.

"I'm here at CES, where I've just taken a ride in the latest version of Delphi's autonomous research vehicle. This is an Audi SQ5 that Delphi has fitted with radar, lidar, and — what's new for this generation — a camera system from partner Mobileye. That means nine cameras around the vehicle to give this car a better sense of its surroundings. During the drive, on a set route, this car acted very naturally; it was aggressive enough but safe enough. It felt like a human was behind the wheel. There's a display in the car that showed me what the car was seeing: I could see when it saw pedestrians, crosswalks, traffic lights. It really had a great sense of its surroundings. One thing that really impressed me is that while we were in a left-turn lane, another car cut in front of us, and the Delphi car behaved perfectly. Another time we went through a fairly long tunnel; the car lost its GPS connection but still stayed on course. And one final thing that really impressed me is that this car uses crowdsourcing to determine its path down the road: it sees the path that similarly equipped cars have taken before it, and it follows that path as well as the lane lines."

At that point it dawned on me that, from a demonstration perspective, we are there with the big guys — Google, all those companies showing advanced demonstrations of a car driving autonomously in city traffic. But I told myself: this is not ready for production. There is no way to build a business around it; this is really a science project. The question is what we need to do in order to move from a science project to mass production.

There are certain industries, like pharmaceuticals, where you reach a point called the valley of death: you have a drug, it's tested on mice, it works — and then you have this huge abyss, this valley of death, phase 1, phase 2, phase 3, and the chances of passing through all these stages are very small; most likely you do not succeed. These kinds of demonstrations, from my perspective, are a science project. How do we go from there to mass production? Because before we start investing billions of dollars, I need to make sure that we know what we are doing.
Mobileye was always about building a business — it's not a research institute. We were very successful in building a business of driver assistance, and we want to make sure that we'll be successful in building a business of autonomous driving. So answering this question became an obsession of mine for a number of months.

There were two missing elements that we identified. The first is economic scalability: there is a sense of brute force — in the amount of silicon you need, the cost of the sensors, the cost of the computing, the cost of building maps — that is all fine for a demonstration but not for building a business. That's one thing, and I'm not going to talk about it today. The second is safety.

When we thought about safety, we told ourselves: these cars should negotiate in traffic like humans, because if you don't act like a human, you'll start obstructing traffic. If you are too conservative — you drive too slowly, you are very cautious in your maneuvering, and the city is dense with other cars — you'll simply get stuck and not move, and no city will want you driving in it. It's one thing to have one or two cars like this; it's another to have ten thousand cars blocking a city. So on one hand you want human-level negotiation; on the other hand, what about accidents? We don't want accidents, but if there are a few accidents, what do we do? Who is liable for all of this? We talked about this — it is a big issue, an issue that could really kill the industry, and I'll explain why. After six months of work we published a scientific, very mathematical paper about safety. Again, safety sounds like a boring thing; my goal in the next twenty minutes is to convince you that it's really fascinating, and I'll try to explain what is behind it.

The first thing we asked ourselves: if there is an accident, what are the alternatives today? I see two alternatives, and both of them are very bad.

The first alternative: the manufacturer of the AV goes to court and is treated just like a human. When we go to court, our interpretation of the traffic laws is questioned, and our actions are questioned. I did a lane change; the prosecutor claims my lane change was reckless, I claim it was not. The prosecutor claims I misjudged the other drivers, I claim I did not misjudge them — and so on, back and forth. This is bad because we are now in front of jurors and lawyers and judges, each with their own biases, and an accident — especially a fatality — between a machine and a human is like "man bites dog": a rare event (dog bites man happens all the time), but one that will create a lot of media attention. It could kill the industry. So this is a bad alternative.

The second alternative is also bad. I come to court and say: look, unfortunate things happen, but my technology is the best — on every element of the technology: the number of sensors, the amount of computing, the amount of simulation (I've run billions of kilometers of simulation), the amount of validation and real driving (I've driven the largest number of kilometers). You will not find any competitor that is better than me on any aspect of the technology.
OK — but somebody still died. I'm the best, so why is this bad? Because it leads to an over-engineered solution. If my competitor adds a sensor — even if that sensor doesn't do anything — then, because I'm thinking about what will happen when I come to court, I'll also add that sensor. We're creating an over-engineered solution, and we're not solving the problem; we're not really guaranteeing safety, we're just protecting ourselves against liability. We're not doing anything that actually solves the real problem, which is: how do I guarantee safety?

So we took inspiration from how AEB — autonomous emergency braking — and driver assistance are tested today, because there is something interesting about how they are tested. Think about AEB: the car applies the brakes before an accident. You need to reach a balance of false negatives and false positives. A false negative is the case where you were supposed to apply the brakes but didn't; a false positive is a false actuation — you were not supposed to apply the brakes, and you did. The regulator tests only the false negatives: they create a test track, and on the test track there are vehicles and pedestrians; your car drives toward them and needs to apply the brakes, and you are tested on whether the car actually does what it's supposed to do. The regulator does not test false positives. Why? There are two reasons. One, it is very difficult to test false positives: you cannot test them on a test track; you need to collect a lot of data covering day, night, dusk, different geographies, weather conditions, and so on. Two, there is no need to test false positives, because the assumption is that the manufacturer will want to reduce them anyway — otherwise the customers will complain. If I'm driving leisurely and all of a sudden my car brakes, I'll be scared, I'll take the car back to the dealership and say I don't want anything to do with it. So it is in the interest of the manufacturer to make sure the level of false positives is very low, and the regulator only needs to test that the system does something — the false negatives.

Now the question is how to take this inspiration and map it onto testing autonomous cars: there should be something the regulator does, and something that is the responsibility of the manufacturer, just like with false negatives and false positives.

When we look at what causes an accident, I would say there are two sources. One I would call sensing mistakes: I have sensors, and those sensors did not interpret the environment correctly — there was a vehicle and they missed it, or they had the wrong measurement of it; there was a vehicle there, I didn't see it, and I hit it. Not all sensing mistakes lead to an accident, but some do. I'll argue later that this part is the easy part. The other source of mistakes is planning decisions: I decided to change lanes, I changed lanes in a reckless manner, and because of that there was an accident. This is where I want to focus.
There is a good reason why I want to focus on that. Let's look first at the traditional wisdom. The traditional wisdom is that you do sensing and planning together — you conflate them — and you do an end-to-end validation that is basically statistically driven: the more kilometers I drive without an accident, the higher the maturity of the system. If you go, for example, to Waymo's website, they will say "we have driven four million kilometers, so we are good", and then maybe a competitor will come and say "we have driven five million kilometers, so we are better". The first thing I need to convince you of is why this is wrong — why you cannot simply drive as many kilometers as you can, without separating sensing from planning, try to flush out all the mistakes, and then show that the probability of a mistake is small to an acceptable level.

We can do the following thought experiment to see how unworkable this is. Take human driving. There are known statistics about human driving, and one of them is that the probability of a fatality — of somebody getting killed in an accident — per one hour of driving is around one in a million. One in a million sounds good, but let's see what it means. In the US, about 35,000 people are killed every year in road accidents. Clearly, if we come to society and say we're going to replace humans with machines, and the machines are going to kill 35,000 people, it's not going to work. Even 10,000 people will not be accepted. There has to be a huge gap — three orders of magnitude, going from 35,000 to 35. That can work. So we need to build a system in which we can guarantee a probability of a fatality, per one hour of driving, of one in a billion — 10 to the minus 9.

Now, in machine learning one can prove bounds, and one can prove the following bound: if you have an event with probability p per hour of driving, the amount of data you need to collect in order to validate it is on the order of 1/p hours. So if we're talking about a probability of 10 to the minus 9, we need to collect on the order of 10 to the power of 9 hours of driving. Assume an hour of driving is about 30 kilometers; we are talking about 30 billion kilometers. These are not the four million kilometers that we see — it is 30 billion, and 30 billion is not feasible. I have a slide showing the cost and so forth — about 2 trillion dollars. It is simply not possible. So this statistical approach — estimating the probability of a mistake that creates a fatality just by collecting data — is not feasible.

So what are we proposing? First, we decouple sensing and planning. For sensing, one can show that if we have three independent sensor modalities, the amount of data we need to collect for this probability p is on the order of 1 over the square root of p, which means an upper bound of about 100,000 hours of driving — about 3 million kilometers. Three million kilometers is reasonable to collect, and if you do a more thorough study — which we do together with our OEM partners — this is an upper bound; probably you can do with much less than 100,000 hours of driving. And this is just to validate the probability of a sensing mistake.
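To make the arithmetic concrete, here is a small back-of-the-envelope sketch of the numbers quoted above. The 10⁻⁹ target, the roughly 30 km per driving hour, and the 1/p versus 1/√p scaling come from the talk; the variable names and the rounding are only illustrative.

```python
# Back-of-the-envelope validation-mileage calculation from the talk.
TARGET_P = 1e-9      # acceptable probability of a fatality per hour of driving
KM_PER_HOUR = 30.0   # rough distance covered in one hour of driving (from the talk)

# Naive end-to-end (data-driven) validation: on the order of 1/p hours needed.
hours_end_to_end = 1.0 / TARGET_P                   # 1e9 hours
km_end_to_end = hours_end_to_end * KM_PER_HOUR      # ~3e10 km = 30 billion km

# Decoupled sensing validation with independent sensor modalities:
# the claim is that on the order of 1/sqrt(p) hours suffice.
hours_sensing_only = 1.0 / TARGET_P ** 0.5          # ~3.2e4 hours; the talk rounds
                                                    # up to ~1e5 hours as an upper bound
km_sensing_only = hours_sensing_only * KM_PER_HOUR  # ~1e6 km, i.e. within the
                                                    # "3 million km" upper bound quoted

print(f"end-to-end: {km_end_to_end:.1e} km, sensing-only: {km_sensing_only:.1e} km")
```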
When we come to planning mistakes, what we want to do is provide a guarantee. We're not talking about something statistical, but a guarantee that our planning decisions — our actions — will never cause our autonomous vehicle to create an accident that is its fault. The question is whether this can be done, and the point is yes, it can be done — but it's not data-driven; it's not done just by collecting kilometers. It's done by building a model. And what that model is — we're doing something that will either sound trivial or too difficult, depending on your philosophy: we are trying to formalize human judgement, the common sense of driving. We claim it can be done — it's difficult, but it can be done — and we use it to guarantee, not in a statistical sense but in an absolute sense, that you will never make a planning decision that creates an accident of your blame, while still being able to negotiate like a human driver.

When you start thinking about it, safety is now model-based, not statistics-based. And since we're talking about artificial intelligence: this is old-school artificial intelligence. Today, when you say "artificial intelligence", what you mean is deep learning, machine learning, data-driven processes — because rule-based artificial intelligence, back in the 70s and 80s, failed. You cannot do complicated things by defining rules; even detecting a face in an image by a rule-based approach is something scientists failed to do, and that is why machine learning came to the rescue. But for safety I'm going back to old-school artificial intelligence: really understanding human judgment, and creating a rule-based formal model of human judgment. And I'm claiming this is necessary, first because with the data-driven approach the amount of kilometers you would need to collect is unfeasible, and second because a black-box approach would create a backlash from society. If, God forbid, you kill someone, you need to explain why; you cannot say "this was a black box validated over 30 billion kilometers, and now it killed someone, and I don't know why." It has to be a model; it has to be interpretable and explainable.

Then we said: absolute safety is not possible. I cannot guarantee there will never be an accident. Take this example: assume I'm the center vehicle here, I'm boxed in, and the vehicle behind starts hitting me — there is no way to escape. So I cannot guarantee absolute safety. What I can guarantee is that I will never be responsible for an accident — that whatever I do will not create an accident of my blame. And now we need to define what blame is; we need to define it in a formal manner, and these kinds of definitions are missing today. If you look at traffic laws, traffic laws are for humans: they provide guidelines, but not precise definitions. For example, ask yourself: is there a precise definition of when a cut-in is reckless or not? You cut in, an accident happens, you go to court, and you start arguing about who was responsible and whether it was reckless. There is no precise definition. There isn't even a precise definition of who is responsible for a lateral maneuver.
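As a toy illustration of what "formalizing human judgement" as rules could look like — my own sketch, not the actual RSS definitions — here is the rear-end rule stated a little later in the talk: being hit from behind is not your fault, unless you got there by cutting in without leaving the other car a safe gap. The class and field names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class RearEndCrash:
    """A rear-end collision between a front car and the car behind it."""
    front_had_just_cut_in: bool   # did the front car recently move into this lane?
    gap_left_after_cut_in: float  # gap it left to the rear car, in meters
    safe_gap: float               # safe longitudinal distance (defined later in the talk)

def rear_car_is_to_blame(crash: RearEndCrash) -> bool:
    # Common-sense rule from the talk: if someone hits you from behind, it is
    # not your fault -- unless you performed a reckless cut-in, i.e. you moved
    # into the lane without leaving the rear car a safe distance.
    reckless_cut_in = (crash.front_had_just_cut_in
                       and crash.gap_left_after_cut_in < crash.safe_gap)
    return not reckless_cut_in
```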
A cut-in is not only the case where you are driving straight and somebody comes in from the side; a cut-in could also be both of you moving laterally — who is responsible for the lateral maneuver then? All these definitions are not there in the traffic laws.

So there are two criteria we need to develop here. One is soundness. Soundness means that if our model states that the agent — the autonomous car — is not to be blamed, this should match the common sense of humans: if there is a scenario in which an agent was involved in an accident and our model says that agent is not to be blamed, that has to match common sense. The other way around does not need to hold; we can be more conservative. If there is a fuzzy situation and we assign blame even though a human might say it is probably not the agent's responsibility, that's OK — being more conservative is OK. But the opposite cannot happen: we cannot clear the blame when human judgement says the agent is to be blamed. So that's one thing — to create a model that is sound — and I'll show you how we validated soundness.

The second criterion is trickier: usefulness. Usefulness means that I can take this model and actually implement it — after all, you can have absolute safety if you simply don't drive at all. Why is this tricky? Because you need to prove that your decision-making does not create a butterfly effect: I do an innocent action now, then the other cars do their actions, I do another action, and another, and there is a catastrophe at the end. If that were the case, it would be like playing chess — I would need to open up the tree of all possibilities, which explodes exponentially from a computational point of view, and that is not useful. I need to prove that making a local decision, based on what I see in the present, is sufficient to guarantee the long-term effects. And this is tricky: how do you build definitions that on one hand match human judgment and on the other hand are useful — such that you can build an algorithm that you can prove will never create an accident of its blame? This is what we have done.

We call this Responsibility-Sensitive Safety, or RSS. The idea is: first, we set the rules of blame in advance. We create definitions of what it means to be blamed, of who is responsible for what, and part of this is formalizing the common sense of human judgment. It is not formalizing the law, because you are allowed to violate traffic laws — in Finland, for example, in order to escape from an accident I'm allowed to violate traffic laws, for example cross a solid line. So it's not formalizing the law; it's formalizing human judgement. Then we develop a concept of a safe state. A safe state means that if you are in that state, then no matter what the other road users do — cars or pedestrians — you are never going to hit them. And then a method for verifying that the agent — the autonomous vehicle — transitions only from safe state to safe state: if you only ever transition between safe states, you will never cause an accident. And here one needs to prove that this transition logic is sound — that you are not creating butterfly effects.
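Here is a minimal planner-side sketch of the "transition only between safe states" idea. The shape of it — check each candidate action against whether the resulting state is still safe under the agreed worst-case behaviour of others, with no tree search over the future — follows the talk; the state and action types and the `simulate` and `is_safe_state` callables are placeholders invented for illustration.

```python
from typing import Callable, Iterable, List

State = dict    # placeholder: whatever describes the ego vehicle and other road users
Action = str    # placeholder: e.g. "keep_lane", "change_left", "brake"

def safe_actions(state: State,
                 candidate_actions: Iterable[Action],
                 simulate: Callable[[State, Action], State],
                 is_safe_state: Callable[[State], bool]) -> List[Action]:
    """Return only the actions whose resulting state is still a safe state.

    A 'safe state' here is one from which, no matter what other cars or
    pedestrians do (within the agreed worst-case assumptions about their
    response time and braking), the ego vehicle can avoid causing an
    accident of its own blame.  If the planner only ever picks actions
    from this filtered set, it only moves between safe states -- which is
    the guarantee described in the talk: a local check on the immediate
    next state, rather than opening up an exponential tree of futures.
    """
    return [a for a in candidate_actions
            if is_safe_state(simulate(state, a))]
```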
It may look too ambitious, so I'll try to give you some intuition behind it. First of all, there are only a few common-sense rules — it's surprising, there are only a few — and all the rest is derived from them. One: when someone hits you from behind, it's not your fault. Two: but if you perform a reckless cut-in and then someone hits you from behind, it is your fault. Three: right of way is given, not taken. And the fourth one: you have to be careful when you have limited sensing — for example, a child jumping out from behind an occlusion boundary.

Start with the very simple safe distance. If I'm the rear vehicle, I have enough information — since I'm a machine — to calculate what the safe distance should be, such that if the car in front brakes with a 1 g force, I will not hit it. We know the road conditions, we know our response time (because we are a machine), we know our braking power (again, we are a machine), so we can calculate the safe distance. And these safe distances are reasonable numbers: if we are driving at the same speed it's only about five meters; if there is a 50 km/h speed difference it's only about 30 meters. These are distances where we humans are, if anything, even more conservative. (A numeric sketch of this calculation appears at the end of this part.)

If you look at intersections — one vehicle cutting into another car's corridor — you want to make sure that longitudinally you keep a safe distance, and then there are all sorts of considerations of lateral velocity and lateral position to determine who is responsible for the maneuver. It turns out that this is very tricky: it's not just who has the higher lateral velocity or who is more centered in the lane; it's much trickier than that. So the notion of who is responsible for a lateral maneuver requires very careful consideration — it's part of our model.

In addition, these blame definitions make the assumptions that we as humans make. (I have two or three more minutes — my time is almost up.) For example, when I want to change lanes: if I assume that all the other vehicles keep moving at their current speed, I will never be able to change lanes unless the road is clear. What we humans do is assume that the car in the next lane will slow down when I'm trying to squeeze in. So we are making assumptions about response time, and assumptions about what a reasonable braking force is — not an emergency braking force — for the car in the next lane. All of these are parameters of the model that need to be agreed on together with regulatory bodies. Then there is dealing with limited sensing, dealing with what we call route priority, two-way traffic, and so forth. It turns out that the number of scenarios, the number of principles you need here, is not that big, and you can cover something that is sound.

And how do we prove that we covered something that is sound? (I'll skip this part and go straight to the soundness.) What we did in order to prove that RSS is sound: we took the NHTSA crash typology report. NHTSA took six million crashes and divided them into 37 scenarios, and they claim that those 37 scenarios cover 99.4% of the crashes. We took all those scenarios and ran them through our model, and the model agrees with all of them. So this shows that, at least with the data that is available today, the model agrees with human intuition, with human judgement.
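Returning to the safe-distance rule above, here is a minimal numeric sketch. The structure — a response time followed by worst-case braking of both cars — is what the talk describes; the specific parameter values (a 0.2 s machine response time, 1 g braking for both vehicles, no acceleration during the response) are assumptions chosen only because they land roughly in the range of the figures quoted: a few meters at equal speeds, on the order of 30 m at a 50 km/h speed difference.

```python
G = 9.81  # m/s^2

def safe_following_distance(v_rear_kmh: float,
                            v_front_kmh: float,
                            response_time_s: float = 0.2,  # assumed machine response time
                            brake_rear_g: float = 1.0,     # assumed ego braking capability
                            brake_front_g: float = 1.0) -> float:
    """Distance the rear (ego) car must keep so that, if the front car brakes
    at brake_front_g, the rear car -- after its response time -- can brake at
    brake_rear_g and still not hit it.  Assumes the ego does not accelerate
    during the response time.  Returns meters (never negative)."""
    v_r = v_rear_kmh / 3.6
    v_f = v_front_kmh / 3.6
    stop_rear = v_r * response_time_s + v_r ** 2 / (2 * brake_rear_g * G)
    stop_front = v_f ** 2 / (2 * brake_front_g * G)
    return max(0.0, stop_rear - stop_front)

# Under these assumptions, roughly in the range quoted in the talk:
print(safe_following_distance(100, 100))  # ~6 m at equal speeds
print(safe_following_distance(100, 50))   # ~35 m at a 50 km/h difference
```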
there’s an algorithm around those definitions that guarantee that were transitioning from safe state to safe state with formal proofs that there’s no butterfly effect so if you agree with the definitions of what blame is then the model using those definitions will never cause an accident and how would then a regulator go and validate autonomous car let’s go back to the inspiration from a tanam from AE B I would say that you know there’s a the regulator will certify the planning module of the car so and the RSS definitions is a good good point too to start a good starting point and then the cautious commands these commands that move us from safe state to safe state the method will be open sourced this is our intention so that’s available to the entire industry and then the certification is the validations only by simulation going over those 37 scenarios or whatever number of scenarios eventually would be compiled and making sure that that particular planning software satisfies the RSS definitions and the method for doing that is already would be open sourced and unpublished and this is not data driven what is left is validation of sensing and this we would say is the responsibility of the car manufacturer and this would be the last slide responsibilities of the common factor because the risk and the probability of making a sensing mistake is something that is calculated you can calculate and you know how much mileage you need in order to verify that in our work we show that with three sensor it’s the upper bound is 10 to the power 5 hours of driving 3 million kilometers so now it depends on the OEM if you have big balls like Tesla you’ll put on the cameras one there’s no there’s no redundancy okay and you do your calculations and you say okay this is the risk that I’m taking because if I’m involved in an accident and that accident is because of a sensing mistake I’m liable if you want to take less risk you’ll put more sensors you’ll create redundancy but it’s your decision to make the planning part has been certified you cannot put a Kong on the road if the plank part has not been certified and here is the model the method all open-source sensing mistakes it’s your business you want to be liable you want to pay a lot of damages but you know less redundancy you want to take less risk it will be more cost put more redundancy right it’s it’s your business just like the false positives and aiibi case you want to put a crappy system just to pass the test track but then it will create a lot of false positives be my guest do it you’ll lose the business very very quickly right so it’s it’s the same it’s the same idea and this is data driven but we know how to propel the amount of data that needs to be that needs to be done there ok so as a summary now unlike empirical models this is about guarantees it’s not about formalizing the law it is about formalizing human judgement because again you can violate the law under this model and you know the point is that traffic laws are meant for humans machines need formalism and we can if we can leverage that you know to solve a very very difficult problem which is how do we guarantee his safety and we’re talking also about a formal context for ethical dilemmas this model also when you talk about ethical dilemmas it gives you a good context about ethical dilemmas for example in our model if you see an accident coming on you going to be an accident not of your fault but still you want to evade the accident you’re allowed to do whatever is possible to do 
That includes violating the law — as long as you don't create another accident. Now, why don't we allow you to create another accident? Because we say that your judgment of the severity of an accident is subjective; there could be all sorts of hidden parameters you are not aware of — for example, in the other accident there is a baby in the back seat, and you don't know that. Therefore you cannot mitigate one accident with another accident. But let's assume the regulator comes and says: I do want to allow mitigating one accident with another accident under certain conditions. It's possible to add this to the model and create what we call blame transitivity: if I'm now going to create another accident because I want to evade an accident which is not my fault, the blame goes to the one who started the chain. We do not allow it in our model, but it's possible to add it. So the model provides a context for ethical dilemmas.

And of course, all of this requires intimate collaboration with industry players. We have already begun working intimately with at least five car manufacturers — some of their representatives are here in the room — with deep-dive workshops, and the response is a hundred percent embracing of the concept, because this is a concept that clearly does not benefit any one player's technology. It's not a model we put together in order to benefit our chip, or our software, or the way we think about cameras versus radars versus other sensors. It's a model intended to solve the problem — which is good for all of us — in order to create an industry. So first we are talking with industry players, and then moving to a conversation with regulatory bodies. I don't expect an uphill battle there, because again, we're solving a problem for which regulators are really looking for answers, and today they simply don't have the tools. That is why they only talk about ethical dilemmas — because they don't have anything better to talk about. This will give them a real formal model for how to resolve the issue of safety.

So I'm ending here. What underlies all of this — which I did not show — is significant mathematical formalism and proofs, and artificial intelligence, but not the kind of artificial intelligence you are used to hearing about, the machine-learning type. It is rule-based artificial intelligence: how to take human judgment and codify it in a set of definitions and rules. And surprisingly, it can be done — and it is not something that needs data-driven approaches. Thank you.

[Applause]

