Safety Architectures towards Reliable Learning-based Controllers in CPS
Modern Cyber-Physical Systems (CPS) are large, complex systems of systems in which arguments about the behavior of the whole system rely on guarantees about the individual components. Individual components, however, may be designed using machine learning methods such as neural networks that are currently not amenable to formal analysis, or the components may be too large and complex for complete verification. Hence, ensuring the safety and security of CPS is increasingly challenging. To cope with these challenges, we explore how physics models can guide machine learning to improve safety assurance, and how new sandboxing mechanisms can be used to design software architectures that achieve system-level safety.
Phy-DRL Architecture
Deep reinforcement learning (DRL) synthesizes control policies by interacting with the environment to tackle the nonlinearity and uncertainties in complex control tasks, achieving impressive performance. However, applying DRL to safety-critical autonomous systems remains challenging. A critical reason is that the control policy of DRL is typically parameterized by deep neural networks (DNNs), whose behaviors are hard to predict and verify, raising concerns about safety and stability.
In practice, for most autonomous systems, it is common to have access to approximations of the nonlinear system dynamics obtained through reasonable linearization. Using these approximations, a model-based controller with verifiable properties can be derived for controlling the system. Inspired by the residual control paradigm, we propose a novel physics-model-regulated DRL framework that guides and regulates the purely data-driven approach with model-based knowledge, as shown in the Figure (left). Specifically, we leverage Lyapunov stability theory with a linearized model to design a Lyapunov-like reward function that encourages the DRL agent to learn to stabilize the system and stay within the safety envelope. Furthermore, we mathematically derive safety and stability testing conditions from model knowledge to guarantee safety. Finally, we combine the model-based controller and the DRL policy under the residual control paradigm to output more robust control commands.
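As a concrete illustration, the Python sketch below shows how these pieces can fit together: a linearized model (A, B), a model-based gain K, a Lyapunov matrix P, a Lyapunov-like reward that rewards the decrease of V(x) = x^T P x, and a residual combination of the model-based command with a learned correction. All numerical values and the helper drl_policy are illustrative assumptions, not the exact formulation used in Phy-DRL.

import numpy as np

# Linearized model x_{t+1} = A x_t + B u_t around the operating point (illustrative values).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])

# Model-based feedback gain K (verifiable, e.g., obtained from the linearized model)
# and a positive-definite matrix P defining the Lyapunov-like function V(x) = x^T P x.
K = np.array([[-1.0, -1.8]])
P = np.array([[2.0, 0.5], [0.5, 1.5]])

def lyapunov_like_reward(x, x_next):
    """Reward the decrease of V(x) = x^T P x, encouraging the DRL agent to
    drive the state toward the origin and remain in the safety envelope."""
    v = float(x.T @ P @ x)
    v_next = float(x_next.T @ P @ x_next)
    return v - v_next  # positive when V decreases

def residual_control(x, drl_policy):
    """Final command = verifiable model-based term + learned residual term."""
    u_model = K @ x          # model-based controller (verifiable)
    u_drl = drl_policy(x)    # data-driven residual from the DRL agent
    return u_model + u_drl

# Illustrative usage with a zero residual standing in for the DRL policy.
x = np.array([1.0, 0.5])
u = residual_control(x, lambda s: np.array([0.0]))
x_next = A @ x + B @ u
r = lyapunov_like_reward(x, x_next)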
In this direction, we aim to further explore the potential of prior knowledge to make learning-based techniques more data-efficient during training and more reliable when deployed on physical systems.
Safe-visor Architecture
In modern cyber-physical systems, many high-performance controllers, such as deep neural networks (DNNs), are deployed to deal with complex tasks. These controllers usually cannot be directly verified, potentially introducing safety risks. We propose the Safe-visor architecture to improve system-level safety. At its core are a Safety Advisor and a Supervisor (Safe-visor). On one hand, the Safety Advisor focuses solely on the safety of the system and serves as a fallback whenever the unverified controller attempts a harmful action. On the other hand, the unverified controller is designed for functionality; i.e., it is expected to realize tasks that are much more complicated than merely keeping the system safe. To ensure a specified probability of safety for the controlled physical system, the supervisor specifies verifiable safety rules that the unverified controller must follow. The control inputs proposed by the unverified controller are checked at run time and are only forwarded to the system when they do not violate the safety rules defined in the sandboxing mechanism.
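To make the run-time sandboxing logic concrete, the short Python sketch below shows the acceptance check under simplified assumptions; the controllers and the safety rule (unverified_controller, safety_advisor, safety_rule) are hypothetical placeholders, not the verified rules derived in the actual architecture.

def supervisor_step(x, unverified_controller, safety_advisor, safety_rule):
    """Check the unverified controller's proposed input at run time and fall
    back to the Safety Advisor whenever the proposal violates the safety rule."""
    u_proposed = unverified_controller(x)   # complex, unverified controller
    if safety_rule(x, u_proposed):          # verifiable acceptance check
        return u_proposed                   # accepted: forwarded to the plant
    return safety_advisor(x)                # rejected: safe fallback input instead

# Illustrative usage with toy one-dimensional components (assumed, for demonstration only).
unverified = lambda x: 5.0 * x              # aggressive, unverified policy
advisor = lambda x: -0.5 * x                # conservative, safety-focused policy
rule = lambda x, u: abs(x + u) <= 1.0       # keep the next state within [-1, 1]

u_applied = supervisor_step(0.8, unverified, advisor, rule)  # falls back to the advisor here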
In general, the Safe-visor architecture can be used to sandbox any type of unverified controller to guarantee the safety of the controlled physical system. In summary, we are able to use an unverified controller to realize complex tasks while preventing the system from being threatened by its harmful behavior, if any. Compared to existing literature on similar problems, the Safe-visor architecture aims to provide safety guarantees without overly restricting the capability of the unverified controller. From a security standpoint, Safe-visor acts as a “last line of defence” mechanism by providing integrity and assurance, but it does not preserve confidentiality: an intruder may still be able to observe the protected CPS, but cannot damage its physical components. Moreover, the proposed Safe-visor is complementary to, and works well with, existing methods that provide confidentiality, such as encryption. Finally, this research focuses on stochastic Cyber-Physical Systems modeled as controlled Markov processes, which enables us to reason about the statistical properties of CPS influenced by sensor and input noise.
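For intuition about this statistical view, the sketch below simulates a one-dimensional stochastic closed loop as a controlled Markov process and estimates, via Monte Carlo, the probability of staying within a safe set under sensor and input noise. The dynamics, noise levels, and safe set are illustrative assumptions, not a model from this work.

import numpy as np

rng = np.random.default_rng(0)

def safe_fraction(horizon=50, trials=1000):
    """Monte Carlo estimate of the probability that the state stays in
    [-1, 1] over the horizon, under Gaussian sensor and input noise."""
    stays_safe = 0
    for _ in range(trials):
        x = 0.5                                    # initial state
        safe = True
        for _ in range(horizon):
            y = x + rng.normal(0.0, 0.01)          # noisy sensor measurement
            u = -0.8 * y + rng.normal(0.0, 0.01)   # feedback corrupted by input noise
            x = 0.9 * x + u                        # stochastic closed-loop update
            if abs(x) > 1.0:
                safe = False
                break
        stays_safe += safe
    return stays_safe / trials

print(safe_fraction())  # empirical estimate of the safety probability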
As part of this research, we also want to consider more general safety specifications (e.g., those described by temporal logic) and more general system models (e.g., partially observable Markov decision processes). Moreover, we want to develop software tools for synthesizing CPS controllers and generating their code.