Changing Machine Learning Methods to Operate in Safety-Critical Domains

Day: Feb 6, 2025
Time: 11:30am–1pm
Session ID: Track 02
Location: CC2
Abstract:

The impressive new capabilities of systems created using deep learning are encouraging engineers to apply these techniques in safety-critical applications such as medicine, aeronautics, and self-driving cars. This talk will discuss the ways that machine learning methodologies are changing to operate in safety-critical systems. These changes include (a) building high-fidelity simulators for the domain, (b) adversarial collection of training data to ensure coverage of the so-called Operational Design Domain (ODD) and, specifically, the hazardous regions within the ODD, (c) methods for verifying that the fitted models generalize well, and (d) methods for estimating the probability of harms in normal operation. There are many research challenges in achieving these goals. But we must do more, because traditional safety engineering only addresses known hazards. We must design our systems to detect novel hazards as well. We adopt Nancy Leveson’s view of safety as an ongoing hierarchical control problem in which controls are put in place to stabilize the system against disturbances. Disturbances include novel hazards but also management changes such as budget cuts, staff turnover, new regulations, and so on. Traditionally, human operators and managers have provided stabilizing controls. Are there ways in which AI methods (such as novelty detection, near-miss detection, diagnosis, and repair) can be applied to help the human organization manage these disturbances and maintain system safety?

Speakers: