Jun 16, 2025
Designing for Safety
Platforms
Autonomous Systems Design: Charting a New Discipline
4 Min Read
Abstract
Autonomous systems are increasingly integral to various domains, transforming industries with their ability to operate independently and adapt to dynamic environments. This paper provides a comprehensive survey of emerging research directions in autonomous systems design, emphasizing the integration of technologies such as artificial intelligence, connectivity, and real-time sensing. It explores the challenges of designing and verifying systems that evolve during operation, necessitating a new engineering discipline that transcends traditional systems engineering processes. The paper highlights the need for interdisciplinary collaboration across fields like cyber-physical systems, artificial intelligence, self-aware computing, communications, and electronic design automation to address the complexities of ensuring safety, reliability, and adaptability in autonomous systems.
Summary
The design of autonomous systems represents a significant evolution in engineering, moving beyond the development of individual components to the creation of architectures and mechanisms that enable self-governance and adaptability. Authored by Selma Saidi, Dirk Ziegenbein, Jyotirmoy V. Deshmukh, and Rolf Ernst, the paper "Autonomous Systems Design: Charting a New Discipline" outlines the foundational concepts, challenges, and research directions for this emerging field. Published in IEEE Design & Test, it argues that autonomous systems design requires a paradigm shift, integrating insights from multiple disciplines to address the unique demands of systems that operate in unpredictable environments.
Autonomous systems are defined by autonomy, the ability to make decisions to achieve goals based on an understanding of the environment, and by the capacity to self-manage and maintain operations despite failures. These systems sit at the intersection of automation, artificial intelligence, and cyber-physical systems, as illustrated in Figure 1, combining the structured task execution of automation, the intelligent reasoning of AI, and the real-time, dependable control of cyber-physical systems. Unlike traditional systems, autonomous systems must function in environments that are not fully known at design time, such as the operational design domain of autonomous vehicles, and therefore require flexibility and adaptability.

Figure 1. Autonomous systems at the intersection of other domains.
The paper organizes the functionality of autonomous systems into two feedback loops: an external loop that interacts with the environment through sensing, modeling, and acting, and an internal loop that monitors and adapts the system itself. These loops converge in a central "perceive-reason-decide-control" function, which operates across various time scales, from rapid reactive control to slower mission planning and self-optimization. The paper also explores multi-agent systems, where autonomous agents collaborate to achieve shared goals, and self-aware computing, which focuses on systems that reflect on and adapt their behavior.
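The two feedback loops can be made concrete with a small sketch. This is an illustrative toy, not the paper's implementation: the class, method, and field names (`AutonomousSystem`, `world_model`, `self_model`, and so on) are hypothetical, chosen only to show how an external sense-model-act loop and an internal self-monitoring loop might converge in one perceive-reason-decide-control function.

```python
# Hypothetical sketch of the two feedback loops described above:
# an external loop acting on the environment and an internal loop
# monitoring and adapting the system itself.

class AutonomousSystem:
    def __init__(self):
        self.world_model = {}                 # model of the external environment
        self.self_model = {"healthy": True}   # model of the system itself

    # --- external loop: sense -> model -> act ---
    def perceive(self, sensor_data):
        self.world_model.update(sensor_data)

    def reason_decide(self):
        # fast reactive control: brake if an obstacle is close
        if self.world_model.get("obstacle_distance_m", 1e9) < 5.0:
            return "brake"
        return "cruise"

    # --- internal loop: monitor -> adapt the system itself ---
    def self_monitor(self, diagnostics):
        self.self_model["healthy"] = diagnostics.get("sensor_ok", True)

    def control(self, action):
        # the internal loop overrides the external loop on degraded health
        if not self.self_model["healthy"]:
            return "safe_stop"
        return action

system = AutonomousSystem()
system.perceive({"obstacle_distance_m": 3.2})
system.self_monitor({"sensor_ok": True})
print(system.control(system.reason_decide()))  # -> brake
```

In a real system the reactive branch would run at a much higher rate than mission planning or self-optimization; here both loops are collapsed into one step for clarity.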
A significant challenge in autonomous systems design is ensuring functional safety, particularly given these systems' evolving behavior. Traditional safety standards such as ISO 26262 assume complete specifications, but autonomous systems operate in underspecified environments, necessitating new approaches such as the safety of the intended functionality (SOTIF) standard. These systems require runtime assurance through multilayered architectures, in which simpler, verifiable components monitor complex behaviors to ensure safety. The concept of a safety element out of context (SEooC) is proposed as a model for developing reusable components with assumed safety requirements that can be adapted to evolving contexts.
The architecture of autonomous systems, illustrated in Figure 2, is multilayered, comprising autonomous function components for perception and decision making and autonomous supervisory components for monitoring and adaptation. These architectures must support self-monitoring, self-adaptation, and model management while ensuring reliability and security. Interaction between components, particularly in multi-agent systems, requires robust interfaces and communication protocols to handle dynamic environments and potential failures.
Verification of autonomous systems poses another critical challenge due to their complexity and emergent behaviors. Traditional verification methods, such as model checking and theorem proving, struggle to scale with the size and adaptability of these systems. The paper advocates for compositional verification using design contracts, where component behaviors are guaranteed under specific environmental assumptions. Runtime verification and enforcement are also essential, with monitors ensuring that assumptions hold during operation and enabling safe responses to detected issues. The paper highlights the need for formal specification languages tailored to autonomous systems, particularly for perception and probabilistic reasoning, to address the uncertainty inherent in their environments.
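The assume-guarantee structure of a design contract, paired with a runtime monitor, can be sketched as follows. This is an illustrative example under assumed names (`Contract`, `monitored_call`, the braking bounds): a contract pairs an assumption on the environment with a guarantee on the component's output, and the monitor checks the assumption online so the system can trigger a safe response when it no longer holds.

```python
# Illustrative sketch of a design contract (assume/guarantee pair) with
# a runtime monitor checking the assumption during operation.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Contract:
    assumption: Callable[[dict], bool]        # what the component expects of its environment
    guarantee: Callable[[dict, float], bool]  # what it promises about its output

# e.g., a braking component: assumes speed is measured and bounded,
# guarantees the deceleration command stays within physical limits
braking_contract = Contract(
    assumption=lambda env: 0.0 <= env.get("speed_mps", -1.0) <= 70.0,
    guarantee=lambda env, out: -8.0 <= out <= 0.0,
)

def monitored_call(contract, component, env):
    if not contract.assumption(env):
        raise RuntimeError("environment assumption violated: trigger safe response")
    out = component(env)
    assert contract.guarantee(env, out), "component broke its guarantee"
    return out

decel = monitored_call(
    braking_contract,
    lambda env: -min(env["speed_mps"] * 0.1, 8.0),
    {"speed_mps": 20.0},
)
print(decel)  # -2.0
```

Compositional verification then reduces to checking that each component meets its guarantee whenever its assumption holds, and that one component's guarantees discharge its neighbors' assumptions.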

Figure 2. Autonomous systems architecture.
Design automation plays a pivotal role by extending traditional tools to support both the lab design phase and the autonomous operation phase. In the lab phase, tools help designers formulate behavioral goals, defining the intended actions and constraints of the system, and synthesize monitors to detect anomalies during operation. In the operation phase, tools enable runtime adaptation, allowing systems to adjust to changing conditions, and online verification, ensuring ongoing compliance with safety and performance requirements. These tools leverage design-time data, such as system models and behavioral specifications, to improve resilience against failures and environmental shifts.

The paper proposes an enhanced V-model design process, illustrated in Figure 3, that augments the traditional V-model with feedback loops supporting the self-adaptive capabilities of autonomous systems. The enhanced process introduces additional steps for safety assurance during conceptual design and for runtime assurance during operation, and it integrates dynamic self-verification, executed in the operation phase, to continuously validate system behavior against evolving conditions. Testing and verification are expanded to include runtime monitoring of assumptions, ensuring that self-adaptive features such as self-configuration and self-optimization remain controlled and safe. This approach stays compatible with industrial design practices while addressing the unique demands of autonomy, balancing flexibility with rigorous safety and reliability requirements throughout the system's lifecycle.

Figure 3. Traditional design process compared with new safety-assured, verification-aware design process required for autonomous system design.
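The kind of monitor a design tool might synthesize from a behavioral specification can be sketched in a few lines. The property below, "every request is followed by a grant within three steps," and all names (`bounded_response_monitor`, the event strings) are illustrative assumptions, not drawn from the paper; the sketch only shows how a design-time specification becomes an online check over the running system's event trace.

```python
# Sketch of a runtime assumption monitor for a bounded-response
# property: every "request" must be followed by a "grant" within
# `bound` steps. The property and names are illustrative.

def bounded_response_monitor(trace, bound=3):
    """Return the step at which the property fails, or None if it holds."""
    pending = []  # steps with an outstanding request
    for t, event in enumerate(trace):
        if event == "request":
            pending.append(t)
        elif event == "grant" and pending:
            pending.pop(0)
        # any request older than `bound` steps without a grant is a violation
        if pending and t - pending[0] > bound:
            return t
    return None

ok_trace = ["request", "idle", "grant", "idle"]
bad_trace = ["request", "idle", "idle", "idle", "idle", "idle"]
print(bounded_response_monitor(ok_trace))   # None
print(bounded_response_monitor(bad_trace))  # 4
```

A detected violation would feed the runtime-assurance machinery described earlier, triggering a safe response rather than silently continuing.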
In conclusion, the paper positions autonomous systems design as an emerging discipline that requires interdisciplinary collaboration and innovative methodologies. By addressing challenges in safety assurance, architecture, verification, and design automation, it aims to establish a framework for designing systems that are both flexible and dependable. This work underscores the importance of controlled autonomy, where design-time decisions and runtime mechanisms work together to ensure safe and effective operation in complex, evolving environments.