By Dr David Jackson, Global Technical Director in the Technology & Innovation Center at Altran. Through the example of cars, this article illustrates the extent to which digital transformation will affect the products we use every day, and highlights the challenges it creates.
Much discussion has, tragically, been provoked by recent accidents. In these circumstances, it is appropriate to consider the nature of the technologies promoted for autonomous vehicles, and the questions of whether, and how, they can be demonstrated to be acceptably safe.
Human error as the dominant factor in road accidents
Despite impressive progress on safety, the automotive industry has not been able to eliminate human error, now the main remaining factor in road accidents. Fortunately, autonomous cars could be a game changer.
Driving machines: the solution for further reducing road accidents?
The automotive industry has made great strides in safety, as can be seen in the reduction of road deaths in recent decades, to the point where human error is now the dominant factor in the root causes of road accidents. This is not surprising – research has long shown that the reliability of human operators in carrying out even well-practised actions is low, and even our ability to recognise unusual situations cannot be relied on. It is reasonable to ask whether machines can augment or replace human activities in the driving task.
The potential of machine learning techniques to reduce errors
Meeting the challenges of machine learning more effectively is key to fully leveraging its capabilities.
AI vs humans: who is better at recognition?
Automated driving systems can take advantage of sensing technologies beyond human perception (such as radar and lidar), and mechanised processing is particularly suited to tasks which must be carried out continually and to a high, consistent standard – such as detecting hazards on the road.
Machine learning techniques, in particular, achieve high levels of performance in tasks such as recognition, which are a notable source of problems when driving.
We can recognise that ML algorithms will be challenged by the highly complex automotive environment, but also note that human reliability in recognition and control tasks is degraded by novel circumstances and time pressure.
AI and autonomous vehicles are still facing issues
Such statistical results may be impressive, but fail to answer the question of whether we could or should allow high levels of automation in driving: that requires developing confidence in the performance of machine learning systems, and communicating that confidence in a way which allows regulators, legislators, jurists and the public to be comfortable with the results. There is evidence that we are failing to meet that challenge.
As the accidents mentioned above show, the consequences of failure in an autonomous vehicle can be severe, but the operation of a machine learning function is hard to evaluate.
By definition, such systems are trained, rather than designed, to carry out specific tasks: there is no program source code to capture the design intent of the software, and variations in training data may result in unintended consequences – the field has plenty of anecdotes about expensively constructed models that ended up classifying images based on weather conditions visible in the background rather than the foreground objects that the developers intended.
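The risk of such spurious correlations is easy to reproduce. The following sketch (a toy illustration with synthetic data, not real perception code) fits a linear classifier to data in which a "background" feature – think of the weather in the anecdote above – happens to correlate almost perfectly with the label during training. When that correlation breaks at test time, accuracy collapses to chance level, even though nothing in the training metrics hinted at a problem.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Training data: the "foreground" feature carries the true but noisy
# signal; the "background" feature (e.g. weather) is spuriously,
# near-perfectly correlated with the label.
y_tr = rng.integers(0, 2, n).astype(float)
X_tr = np.column_stack([y_tr + rng.normal(0, 2.0, n),    # weak true cue
                        y_tr + rng.normal(0, 0.1, n)])   # strong spurious cue

# Least-squares linear classifier with an intercept, thresholded at 0.5.
w, *_ = np.linalg.lstsq(np.column_stack([X_tr, np.ones(n)]), y_tr, rcond=None)
predict = lambda X: (np.column_stack([X, np.ones(len(X))]) @ w) > 0.5

# Test data: the spurious correlation is gone (background is just noise).
y_te = rng.integers(0, 2, n).astype(float)
X_te = np.column_stack([y_te + rng.normal(0, 2.0, n),
                        rng.normal(0.5, 0.1, n)])

print("train accuracy:", (predict(X_tr) == y_tr).mean())   # near 1.0
print("test accuracy: ", (predict(X_te) == y_te).mean())   # near 0.5
```

The model looks excellent on held-in data precisely because it learned the wrong cue – which is why validation must probe the distribution of inputs, not just the accuracy number.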
Extensive testing to build trust
Whether the situations described above arise from bias in the data sets or from poor choices of reward function, the results can be offensive and seemingly hard to fix.
We must also consider the effects of malicious action: attackers can target specific behaviours of machine learning models, and the ability to cause significant disruption makes them an attractive target. Robust validation, plus exploitation of multiple independent sources of information in operation, will be required.
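Why tiny perturbations can be so effective is visible even in a linear toy model (a sketch of the fast-gradient-sign idea, not an attack on any real system): a perturbation that is imperceptibly small in each input coordinate, but aligned with the model's gradient, shifts the output by an amount that grows with the square root of the input dimension.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 1000                                        # input dimension (e.g. pixels)
w = rng.choice([-1.0, 1.0], d) / np.sqrt(d)     # toy linear model, ||w||_2 = 1
x = rng.normal(0.0, 1.0, d)                     # a benign input

# Fast-gradient-sign-style perturbation: step each coordinate by eps
# in the direction that lowers the model's score. Each coordinate
# moves by only 0.1...
eps = 0.1
x_adv = x - eps * np.sign(w)

# ...but the total shift in the score is eps * sum(|w|) = eps * sqrt(d),
# i.e. about 3.16 here -- enough to flip most classifications.
print("score shift:", w @ x - w @ x_adv)
```

This dimensionality effect is one reason high-dimensional perception models need dedicated adversarial validation rather than only nominal-case testing.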
New testing approaches are required
In other critical domains, such as aviation, rail transport, or nuclear power, software intensive systems are widely developed and deployed, having been assessed against very stringent failure rate targets.
But these levels are achieved by obsessive pursuit of deterministic behaviour, rigorous elimination of possible defects, and setting very high standards for test coverage. Such means will not work when no precise definition of the system’s environment – the road and road users – is available.
At a minimum, the amount of testing appropriate to establish safe operation to such levels is large; similar considerations apply to testing of autonomous vehicles.
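How large "large" is can be made concrete with a standard zero-failure reliability-demonstration calculation (the figures below are illustrative assumptions, not data from this article): to rule out a given per-mile failure rate at a stated confidence, having observed zero failures, the required failure-free mileage follows from the Poisson model.

```python
import math

def miles_to_demonstrate(failure_rate_per_mile: float, confidence: float) -> float:
    """Failure-free miles needed so that observing zero failures rules
    out the given per-mile failure rate at the stated confidence
    (zero-failure Poisson demonstration test)."""
    return -math.log(1.0 - confidence) / failure_rate_per_mile

# Illustrative assumption: roughly one fatality per 100 million miles,
# the order of magnitude of human-driven fatality rates.
human_rate = 1 / 100_000_000
print(f"{miles_to_demonstrate(human_rate, 0.95):,.0f} failure-free miles")
# ~300 million miles -- far beyond any practical test-track campaign
```

Matching merely the human fatality rate at 95% confidence already demands hundreds of millions of failure-free miles, which is why road testing alone cannot carry the safety argument.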
Reliable technology for increased safety
In spite of this difficulty, there are some encouraging factors, relating to the implementation environment, the social and political environment, and the technologies of machine learning itself.
The development of self-driving vehicles: driving factors and opportunities
The complexities of the driving environment, and the complexity of an autonomous vehicle, lead us to demand extensive testing with large data sets. For reasons of robustness and security, we may also demand an adversarial approach to validation and a high degree of independence – separate activities focused specifically on exploiting the weaknesses of a proposed solution.
These requirements increase the need for tests and test data still further. Test programmes on this scale would have been impracticable in the past, but recent advances in storage, communication, processing and the management of massive data sets allow extensive campaigns (100,000–1,000,000 km) to be considered.
If it becomes acceptable to monitor for unusual behaviour across all the vehicles of a type (and predictive maintenance as a service would give the end user a reason to agree to this), then many millions of driven kilometres can be captured – to some level – in a short space of time.
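The arithmetic behind fleet-wide monitoring is simple. With purely illustrative assumptions (the figures below are not from this article), a single monitored vehicle type accumulates exposure far faster than any test fleet:

```python
# All figures are illustrative assumptions.
fleet_size = 100_000        # monitored vehicles of one type
km_per_vehicle_day = 40     # assumed average daily driving
days = 30

total_km = fleet_size * km_per_vehicle_day * days
print(f"{total_km:,} km in one month")   # 120,000,000 km
```

A month of fleet data thus dwarfs a dedicated 100,000–1,000,000 km test campaign, though the data is captured "to some level" only – instrumentation on production vehicles is far coarser than on test vehicles.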
Our ultimate goal is to achieve an acceptable level of safety – where acceptance is ultimately a political (or at least legal) criterion, not a technical one.
This brings a responsibility to communicate clearly on the features provided by automated driving and on their intended manner of use, but opens opportunities for a positive argument in favour of autonomy in vehicles:
The benefits to society of motor transport, and the limited impact of a typical single accident, allow us to consider statistical measures of safety – the number of road accidents in a period of time is likely to be large enough that a reduction in the rate of accidents will be attractive, even if we can’t guarantee their elimination.
This could even be seen as a moral imperative opposed to the traditional ethical arguments about autonomous vehicles – if we will statistically save a significant number of lives by adopting new technology, is it ethical to delay?
There are also technological factors:
Machine learning techniques are proposed that might offer rationales or justifications that could at least be reviewed or validated before deployment. We might also consider the fundamental science behind our computational models – traditional verification approaches to high-assurance systems are based on rigorous mathematical logic; new models may become available that would allow rigorous reasoning about a wider class of systems – Professor Valiant’s work on Probably Approximately Correct algorithms, perhaps. But theoretical advances might be decades away from practical application.
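As one illustration of the kind of rigorous statement PAC theory offers (a textbook bound, not a claim about any deployed system): for a finite hypothesis class $\mathcal{H}$, with probability at least $1-\delta$, empirical risk minimisation over $m$ examples returns a hypothesis whose risk is within $\varepsilon$ of the best in the class, provided

```latex
m \;\ge\; \frac{1}{2\varepsilon^{2}}\left(\ln|\mathcal{H}| + \ln\frac{2}{\delta}\right)
```

Such bounds are exact and auditable, but typically far too loose for the enormous model classes used in perception – part of why the theory still lags engineering practice.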
None of these approaches will completely address the question of AV acceptability, containing, as it does, regulatory, legal and political aspects.
Resolution will require a range of skills and perspectives, not only advances in machine learning algorithms and parallel processor design.
The potential benefits – in environmental impact, transport capacity, and the possible overall reduction in death and injury – make the investigation worth the effort.