FUSE - FUnctional Safety and Evolvable architectures for autonomy

Self-driving cars have immense potential for reducing the number of serious traffic accidents, because human error is far and away the most frequent cause of accidents today. For this potential to be realised, we need to ensure that self-driving vehicles are always safe, despite the fact that we know they will encounter situations we have not anticipated.

The challenge for our research project FUSE (FUnctional Safety and Evolvable architectures for autonomy), completed in 2016, was to arrive at an answer to this paradox. FUSE ran for three years and was one of many research activities centred around Volvo Cars' major Drive Me initiative. RISE (formerly SP) was the coordinator for the project and, apart from Volvo Cars, the participants were Qamcom, Semcon, Comentor and KTH Royal Institute of Technology in Stockholm.

 

Project logo: FUnctional Safety and Evolvable architectures for autonomy.

Pioneers in methodology

The project not only delivered a number of results; it also pioneered a methodology based on determining which subsidiary issues need to be resolved in order to demonstrate the safety of a self-driving car. The results included general, systematic methods for ensuring that all possible risks have been accounted for and that all safety requirements have been fully refined into the electronic architecture that implements the autonomous-drive intelligence.

Counter-intuitive results

Some of the project's results run counter to what the general public perceives as accepted truths:

- The car does not have to tell the driver which automation mode it is in at any given time (self-driving or conventional).
- The car and the driver must be in agreement, so a handover needs to follow a multi-step procedure in which the agreement is clearly established for both parties. The driver should be the party that executes the last step of this procedure, and therefore the party that tells the car that the mode change is definitely happening (see the sketch below).
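
As a minimal sketch of what such a procedure could look like (the state names and steps here are our own illustration, not the project's specification):

```python
from enum import Enum, auto

class HandoverState(Enum):
    AUTONOMOUS = auto()    # the car is driving
    OFFER_MADE = auto()    # the car has proposed handing over control
    DRIVER_READY = auto()  # the driver has acknowledged and taken the controls
    MANUAL = auto()        # the driver is driving

class HandoverProtocol:
    """Multi-step handover in which the driver executes the final step,
    so both parties agree before the driving mode actually changes."""

    def __init__(self) -> None:
        self.state = HandoverState.AUTONOMOUS

    def car_offers_handover(self) -> None:
        if self.state is HandoverState.AUTONOMOUS:
            self.state = HandoverState.OFFER_MADE

    def driver_acknowledges(self) -> None:
        if self.state is HandoverState.OFFER_MADE:
            self.state = HandoverState.DRIVER_READY

    def driver_confirms_takeover(self) -> None:
        # The driver's action is the last step: only now does the car
        # treat the mode change as definitely happening.
        if self.state is HandoverState.DRIVER_READY:
            self.state = HandoverState.MANUAL

protocol = HandoverProtocol()
protocol.car_offers_handover()
protocol.driver_acknowledges()
protocol.driver_confirms_takeover()
assert protocol.state is HandoverState.MANUAL
```

Because every step only succeeds from the expected preceding state, neither party can change mode unilaterally, and the driver's confirmation is what completes the handover.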

The trolley problem

The moral dilemma termed "the trolley problem"* does not create problems for self-driving cars. On the contrary, the moral dilemmas that already confront the safety systems of today can now be resolved. For more information on this, see the explanation in the footnote below or the article "Disarming the Trolley Problem - Why Self-driving Cars do not Need to Choose Whom to Kill".

The sensor systems do not have to focus on reporting exactly what type of object they see, but rather on how dangerous the particular object they are observing at this very moment would be in a collision.

Different kinds of objects differ in how dangerous they are if the car collides with them. A collision with a vulnerable road user would be extremely dangerous even at low speeds, as would a collision with another car at higher speeds. But a collision with a mouse is practically never dangerous. The area in front of the car is rarely completely empty (insects, small birds, etc.). From a safety perspective, the most important thing is to ensure that no dangerous collisions take place; i.e. depending on speed and distance, to be certain about whether objects of certain types are present. For more information, see the project's homepage and "The Need for an Environment Perception Block to Address all ASIL Levels Simultaneously".
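
To make this concrete, here is a minimal sketch of a danger-dependent perception requirement; the object classes and speed thresholds are invented for illustration and are not project data:

```python
# Illustrative danger model: the classes and speed thresholds below are
# invented for this example; they are not figures from the FUSE project.
DANGEROUS_ABOVE_KMH = {
    "vulnerable_road_user": 0.0,   # dangerous at any driving speed
    "vehicle": 30.0,               # dangerous above ~30 km/h closing speed
    "small_animal": float("inf"),  # practically never dangerous
}

def collision_is_dangerous(obj_class: str, speed_kmh: float) -> bool:
    """Worst-case view: an object the sensors cannot classify must be
    treated as the most dangerous thing it could still be."""
    threshold = DANGEROUS_ABOVE_KMH.get(obj_class, 0.0)
    return speed_kmh > threshold

print(collision_is_dangerous("small_animal", 110.0))         # False
print(collision_is_dangerous("vehicle", 20.0))               # False
print(collision_is_dangerous("vulnerable_road_user", 20.0))  # True
print(collision_is_dangerous("unknown", 20.0))               # True: danger cannot be ruled out
```

The design point is that an unclassified object defaults to the worst case, so the perception system only needs high certainty where a collision would actually be dangerous.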

Major national and international breakthrough with a project extension already underway

The project has attracted a lot of attention at both national and international level. Among other things, this has involved 16 academic publications and around the same number of invited guest presentations. In addition, the project has been involved in organising seven different workshops where issues and results were debated with other actors.

You can read all about this on the FUSE project homepage, where you can also download a short brochure summarising the results. In 2017, together with Volvo AB, Autoliv, Delphi and Systemite, the original project participants started the ESPLANADE project extension, with the aim of demonstrating the safety of self-driving vehicles so that they can be placed on the market.

*The trolley problem – a dilemma of moral philosophy

Imagine that two children jump out into the path of your self-driving car from behind a parked car. They are chasing a ball and appear so suddenly that it is too late to brake. The car could steer to the side, but it would then run over an elderly person on a mobility scooter instead. It is therefore too late to save everybody, but there is time to choose whether we should save two children at the expense of one elderly person. Everything happens very quickly, but the car's electronics respond at lightning speed; the choice must already have been made when the cars were programmed. Is it reasonable to allow a few individual engineers to make decisions on life and death on their own initiative when programming self-driving cars?

This is roughly how a variant of what is termed "the trolley problem" can be formulated. It crops up here and there on the Internet where some impossible situation for self-driving cars is addressed, often with an ensuing discussion full of moral indignation.

It is worth remembering that this is a hypothetical problem designed to facilitate discussion as to how different moral principles can come into conflict with each other. One principle is that one should always act for the greater good of the greatest number of people. According to this principle, it would be better for just one person to die than for several. The second principle is that one should never intentionally commit an act that may cause the death of another.

There is no obvious right or wrong to this type of problem. Its purpose is to allow us to discuss what we see as our moral priorities. The task of the moral philosopher is to formulate the problems and make us aware that they exist.

Back to the self-driving cars

If we return to our self-driving cars, the problem arises in many different guises. Sometimes it is rendered rather more incisive: the car can save everyone outside, but at the expense of the person in it. We could imagine we might save the children at play if we swerved off the road completely – only to drive over a precipice and die.

There are moral philosophers who go so far as to ask whether we are ready for utilitarian cars (i.e. cars programmed on the principle that one should always act for the greater good of the greatest number of people), and therefore also ready to accept being killed on purpose by one's own self-driving car. This is indeed a thought-provoking question as far as it goes. The problem is that it is entirely superfluous.

The reasoning as to why this is so is not actually that difficult. The only thing we need to program our car to do is to think in the same way we are all taught when preparing for our driving test. You should always bear in mind what might be concealed by something blocking your view. There may be a car coming towards you just over the crest of the hill. A car may pull out from behind the next corner. A child may jump out from behind a parked car. You need to adapt your driving behaviour to be able to deal with such surprises. We tell teenagers this when they are learning to drive, and it sounds quite reasonable in that situation. So why not tell our cars the same thing?

The solution is for the cars to embody safe driving behaviour

We need the cars to be able to judge how far out they can be completely certain that everything is safe. Then we need them to adapt their driving behaviour so they can deal with something suddenly appearing, in the next millisecond, at the limits of that certainty. In this way, we can get the cars to guarantee they will never be surprised in a manner that precludes safe alternatives.

Instead of programming the cars to handle difficult moral situations, we program them to drive in a manner that guarantees they never end up in a moral dilemma. This does not have to mean they drive much slower than human beings would; it depends entirely on how far their sensors can see, how quickly they react and how quickly they can brake or steer. By adapting the self-driving car's own decisions about its driving behaviour to its own capability, we can avoid morally problematic situations.
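
As a minimal sketch of this dependence (the braking deceleration and reaction time below are illustrative assumptions, not project data), the highest safe speed follows from requiring that reaction distance plus braking distance never exceed the distance the car is certain is clear:

```python
import math

def max_safe_speed(clear_distance_m: float,
                   reaction_time_s: float = 0.1,
                   braking_decel_ms2: float = 6.0) -> float:
    """Highest speed (m/s) at which the car can still stop within the
    distance it is certain is clear, from v*t + v**2 / (2*a) <= d."""
    a, t, d = braking_decel_ms2, reaction_time_s, clear_distance_m
    return -a * t + math.sqrt((a * t) ** 2 + 2.0 * a * d)

# Certain that the road is clear for 60 m ahead:
v = max_safe_speed(60.0)
print(f"{v:.1f} m/s = {v * 3.6:.0f} km/h")  # about 26.2 m/s, roughly 94 km/h
```

Seeing further, reacting faster or braking harder all raise this limit; driving within it is what guarantees the car is never surprised without a safe alternative.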

 

 

Photo source: Volvo Cars

ESPLANADE - Efficient and Safe Product Lines of Architectures eNabling Autonomous DrivE

Before our road network is ready for self-driving vehicles in all contexts, we can introduce them gradually into traffic and create trust among all parties.