
Autonomous Civil Unmanned Aircraft Systems

Software Quality Assessment and Safety Assurance


(Web Version 2, September, 2007)

Keywords: autonomy, autonomous systems, unmanned aircraft (UA), Unmanned Aircraft System (UAS), System Safety Assessment (SSA), RTCA/DO-178B, software development assurance, software verification, software validation, software quality

1. Introduction

This document discusses methods used by the civil aviation community to assure the quality and safety of airborne software. Our objective is to consider these methods within the context of designing, and achieving regulatory acceptance for, autonomy software in safety-critical systems of civil Unmanned Aircraft Systems (UAS). We begin by noting that the methods considered here were developed to support airworthiness certification of conventional manned civil aircraft; in their current form, they anticipate neither UAS nor autonomy software.

Section 2 presents a definition of system autonomy suited to the Unmanned Aircraft Systems context, and highlights the fact that system autonomy is realized in software. Section 3 outlines the civil-aircraft methodology of RTCA/DO-178B for regulatory acceptance of airborne software; developed for manned aircraft, these methods are relevant to UAS. Section 4 relates conventional software Verification and Validation (V&V) processes to the DO-178B approach. Section 5 raises issues relating to the application of DO-178B to autonomy software, and points to a major revision of DO-178B, soon to be published, possibly as DO-178C.

2. Definition of System Autonomy

Readers will be forgiven for questioning the need for another definition of autonomy in the UAS context, since various definitions already in use by the UAS community cover a wide range of meanings. We hope to demonstrate the soundness and utility of our definition. A consistent feature of all of the system-autonomy definitions is the understanding that autonomy is implemented in software; therefore, at the most fundamental level, system autonomy comprises sets of software programs. The following definition of system autonomy acknowledges that we are dealing with software, and it associates autonomous function with decision-making, specifically, performing decision tasks that human operators and software systems must perform to complete a UAS mission.

AeroVations Associates

August, 2007

(a) Autonomy: An Unmanned Aircraft System exhibits autonomy when the system software is capable of making - and is entrusted to make - substantial real-time decisions, without human involvement or supervision. In this definition statement: substantial decisions are decisions that could affect the safe operation of the Unmanned Aircraft and would normally be made by human operators or mission managers; and Unmanned Aircraft System, depending on context, can mean an individual sub-system of the UAS (for example, the autopilot or the health-monitoring system), or the complete UAS. This definition is consistent with dictionary meanings of autonomy, which emphasize self-governing independence. In the near-term and medium-term future of civil UAS, such unqualified autonomy will be very scarce, and the term is likely to be more useful as a theoretical extreme than as a practical operational architecture.

(b) Qualified Autonomy:

(i) Supervised Autonomy: The term Supervised Autonomy describes an autonomous system operation that is observed by a human who has some visual representation of the system situation or state. In a UAS context, the supervision would be safety-related, and would imply potential human control over the UA in the event that a situation threatened the UAS or other aircraft - or persons or property on the ground. If the human supervisor has sufficient control of the vehicle to intervene in the threatening situation - the most likely arrangement - then the supervised autonomy is not autonomous at all! It is more accurately described as independent system control with an off-board (human) safety pilot.

(ii) Shared Decision-Making: Figure 1 depicts an operational UAS architecture in which decision responsibility is shared among aircraft, ground control computers and human system managers. The arrangement is often referred to as shared autonomy, but is more accurately called a shared-decision configuration.
Shared decision-making is likely to be the standard configuration for all future civil UAS, perhaps with the exception of the very small and very simple systems.


Figure 1. Unmanned Aircraft System, general arrangement for shared autonomy: intelligent software aboard the Unmanned Aircraft, human intelligence and intelligent software at the Control Station (Mission Manager with laptop computer), the datalink connecting them, and targets in the operational area.

Conceptually, the degree of autonomy sharing - or sharing of decision-making responsibility - between human operators and system software may range from the case where human managers make all substantial decisions and the unmanned aircraft simply responds to the human demands, to the converse case where the UAS makes all mission decisions and initiates the related actions, with the human managers merely authorizing decisions when requested.
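This spectrum can be sketched as a decision-authority table. The sketch below is purely illustrative: the `Authority` levels, `DECISION_TABLE` entries and decision labels are our own invention and are drawn from no UAS standard.

```python
from enum import Enum, auto

class Authority(Enum):
    """Who is entrusted with a given class of decision (illustrative only)."""
    HUMAN = auto()             # mission manager decides; aircraft responds
    HUMAN_AUTHORIZES = auto()  # software proposes; a human must approve
    SOFTWARE = auto()          # software decides and acts autonomously

# A hypothetical decision-sharing configuration for one UAS.
DECISION_TABLE = {
    "payload_activation": Authority.HUMAN,
    "route_replan":       Authority.HUMAN_AUTHORIZES,
    "waypoint_tracking":  Authority.SOFTWARE,
}

def requires_human(decision: str) -> bool:
    """True if a human must act before this class of decision can execute."""
    return DECISION_TABLE[decision] is not Authority.SOFTWARE
```

Moving a row of such a table from `HUMAN` toward `SOFTWARE` is precisely the shift along the shared-decision spectrum described above.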

3. Airborne Software Safety Assessment

In manned civil aviation, system safety is addressed formally during airworthiness certification, or type certification, of the aircraft. In current airworthiness practice, airborne software is generally not certified as a separate entity, but is considered to be part of the physical system in which it is embedded; nevertheless, acceptance by the Regulator of flight-critical software is a distinct process. In Advisory Circular 20-115B (Reference 1), the U.S. Federal Aviation Administration recognizes RTCA document DO-178B (Reference 2) as an acceptable means of compliance when securing FAA acceptance of software in airborne systems and equipment. DO-178B addresses Software Considerations in Airborne Systems and Equipment Certification. (See also References 3 and 4.) We will describe briefly how the guidance of DO-178B is applied, and what this might mean if applied to UAS autonomy software.

DO-178B Methodology

The stated purpose of RTCA DO-178B is: to provide guidelines for the production of software items for airborne systems and equipment that perform their intended functions with a level of confidence in safety that complies with airworthiness requirements. (Quoted from Reference 2.) When applying these methods to UAS, we would be concerned not only with airborne software, but also with Control Station software that may affect air-vehicle safety-of-flight.

We can begin our understanding of DO-178B by considering the failure condition categories and required software levels that provide the foundation for safety assessment of software elements:

(a) First, DO-178B provides Failure Condition Categories, describing the effects of the failure or anomalous behaviour of software items using the following scale:

Catastrophic: would prevent continued safe flight and landing; (Note that continued safe operation may be accomplished by an auto-flight system, without human interaction or supervision.)

Hazardous: would cause a large reduction in safety margins and functional capabilities;

Major: would cause a significant reduction in safety margins and functional capabilities;

Minor: would not significantly reduce aircraft safety;

No effect: would not affect operational capabilities.
(b) Secondly, a Software Level is associated with each of these Failure Condition Categories, specifically:

Software Level A: software whose anomalous behavior would cause or contribute to a failure of system function resulting in a Catastrophic failure condition;

Software Level B: software whose anomalous behavior would cause or contribute to a failure of system function resulting in a Hazardous failure condition;

Software Level C: software whose anomalous behavior would cause or contribute to a failure of system function resulting in a Major failure condition;

Software Level D: software whose anomalous behavior would cause or contribute to a failure of system function resulting in a Minor failure condition; and

Software Level E: software whose anomalous behavior would have no effect on UA operational capability.
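The category-to-level assignment just listed is a straight lookup, which a minimal sketch makes plain (the function name below is ours, not DO-178B's):

```python
# Mapping from DO-178B failure condition category to required software
# level, exactly as enumerated in the text above.
FAILURE_CONDITION_TO_LEVEL = {
    "Catastrophic": "A",
    "Hazardous":    "B",
    "Major":        "C",
    "Minor":        "D",
    "No effect":    "E",
}

def required_software_level(failure_condition: str) -> str:
    """Return the DO-178B software level for a failure condition category."""
    return FAILURE_CONDITION_TO_LEVEL[failure_condition]
```

The hard part of the methodology is not this lookup but determining, via the System Safety Assessment, which failure condition category actually applies to a given software item.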

For a specific airborne software module, therefore, we must be able to answer the question: Would the failure or anomalous behavior of this software item cause or contribute to a failure of system function resulting in a Catastrophic, Hazardous, Major, Minor or No Effect condition? The answer is found by determining the consequences of failure of the physical systems that interface with, and are activated by, the software item under consideration. In the certification process, this determination is a result of a System Safety Assessment, or SSA. (See Reference 5.) The SSA determines, first, the probability-per-flight-hour that a specific physical system might fail or malfunction; and secondly, the probability-per-flight-hour that failure of that system will affect safety-of-flight.

(c) Finally then, using the SSA results, DO-178B assigns a Required Software Level to the airborne software module under consideration.

Software Production and Management

With each airborne software item categorized by Software Level, DO-178B guidance ensures the required quality and safety of the software by demanding disciplined care, structured management and thorough documentation at each stage in the development and use of the software item. As we would expect, the rigor of the objectives and outputs prescribed by DO-178B for the associated processes increases as the Software Level progresses from Level D (Minor) to Level A (Catastrophic). DO-178B objectives and outputs are specified in ANNEX A of DO-178B, entitled Process Objectives and Outputs by Software Level. This is the heart of DO-178B. For each software level, ANNEX A provides tabulated guidance concerning process management and documentation throughout each process of software production and use.

Implications for Unmanned Aircraft Systems

UAS applications of DO-178B must address anomalous behavior of some Ground Station software items - those that would affect flight safety - as well as airborne software.
The extension to Ground Station software will present no fundamentally new challenges.
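The two-step SSA estimate described above can be illustrated numerically. The figures below are invented for illustration only; a real SSA derives them from failure-mode analysis of the physical system.

```python
# Step 1: probability per flight hour that the physical system fails.
p_system_failure_per_fh = 1.0e-5

# Step 2: probability that such a failure affects safety of flight.
p_failure_affects_safety = 0.1

# Combined: probability per flight hour of a safety-affecting failure,
# the quantity against which the failure condition category is judged.
p_unsafe_per_fh = p_system_failure_per_fh * p_failure_affects_safety
```

A lower combined probability supports assignment of a less severe failure condition category, and hence a lower required software level, for the software items involved.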


Furthermore, we believe that, in general, the DO-178B approach to software development assurance can be adapted to UAS software quality and safety assurance. Although some UAS software programming techniques and languages are not anticipated in the current DO-178B, UAS designers and regulators will, in due time, accommodate the new software architectures within the DO-178 guidance. (More on this subject in Section 5.)

4. Verification and Validation: Does DO-178B Validate?

Both verification and validation are generally considered essential elements of software quality assessment. In the simplest terms, system designers distinguish verification and validation in the following way: Verification answers the question, have we built the system right? whereas Validation answers the question, have we built the right system? Clearly, we must be able to affirm both of these rights. In a review of the DO-178B processes, although we see many references to verification techniques and processes, validation is nowhere indicated. This would appear, at first glance, to be an anomaly in the guidelines. Specialists familiar with the development and use of DO-178B point to a validation equivalent that is integral to the guidelines. Specifically, validation is inherently performed by the use and management of system requirements. To illustrate this point, we note that software developers work with two levels of software requirements, defined in DO-178B as:

High-Level Requirements: software requirements developed from the system requirements, safety-related requirements and system architecture.

Low-Level Requirements: software requirements derived from high-level requirements and design constraints, from which software code can be directly implemented.

Throughout DO-178B, emphasis is placed on these requirements, to ensure at every stage that they are correct and complete. Clearly, assuring the correctness and completeness of the High-Level requirements assures that we are building the right system. This is validation by another name!

5. Applying DO-178B to UAS Autonomy Software

We have painted a rosy picture here: autonomy is an emergent property of what we have called autonomy software. Ostensibly, the quality and safety of UAS autonomy software will be assured by applying the SSA methodology of SAE ARP 4761 and the Software Development Assurance methods of DO-178B. In fact, however, the applicability of DO-178B to some system-autonomy architectures is problematic. Quoting from Reference 6, Handbook for Object-Oriented Technology in Aviation (October, 2004): When DO-178B was published in 1992, procedure programming was the predominant technique for organizing and coding computer programs. Consequently, DO-178B provides guidelines for software developed using a functional technique and does not specifically consider software developed using Object Oriented Technology (OOT). Reference 6 describes a study that the FAA, NASA and several industrial users of DO-178B undertook to identify concerns associated with assessing Object-Oriented Technologies in airborne software modules. Reference 6 gives extensive insight into the dangers inherent in adopting OOT for airborne systems software development. This subject is still in an unsettled state, and the future may hold more troubles for the DO-178B Software Development Assurance methodology.

Some software practitioners in the Artificial Intelligence (AI) community are generating UAS autonomy software using cognitive modeling to implement autonomous functions. The software in these cases is likely to be programmed using what is called Agent-Oriented Software, or AOS. (See Reference 7.) In a 2006 example, a small UAS was widely touted as having made the world's first truly autonomous UAS flight. The autonomy software in this case was developed using Jack Intelligent Agents (see Reference 8), a so-called Belief, Desire, Intention, or BDI, cognitive-model architecture. Reference 9 provides a link to the World First announcement. Whether this autonomous-mission claim has merit, or is wildly exaggerated, may be argued; however, the event does draw attention to the fact that system autonomy projects are using cognitive modeling and Agent-Oriented Software. These models attempt to reproduce specific human characteristics such as beliefs, desires, intentions, reasoning and learning.
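A toy sketch of one BDI deliberation cycle suggests why such software strains DO-178B-style verification: the action taken emerges from run-time deliberation over beliefs, not from a fixed, statically traceable call sequence. All names below are invented for illustration and are not drawn from Jack or any other real agent framework.

```python
def bdi_step(beliefs, desires, plan_library):
    """One deliberation cycle: adopt the first desire (in priority order)
    whose plan's context condition holds in the current beliefs, and
    return that plan's steps - the committed intention. Returns [] if
    no plan is applicable."""
    for desire in desires:
        plan = plan_library.get(desire)
        if plan is not None and plan["context"](beliefs):
            return plan["steps"]
    return []

# Hypothetical mission fragment: return home when fuel is low, else loiter.
plans = {
    "return_home": {"context": lambda b: b["fuel_kg"] < 5.0,
                    "steps": ["climb", "fly_home", "land"]},
    "loiter":      {"context": lambda b: True,
                    "steps": ["hold_pattern"]},
}
```

Even in this toy, the behaviour executed on a given flight depends on which context conditions happen to hold at deliberation time, which is exactly the property that complicates requirements-based verification coverage.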
The question is: Can autonomous system software that is based on a cognitive-modeling architecture be verified in the DO-178B sense? Could Agent-Oriented Software be accepted by airworthiness regulators using the DO-178B methodology? These are questions that may be answered in the planned new DO-178C. The subject matter would certainly fit within the following DO-178C objectives, quoted from the RTCA website, www.rtca.org (see link at Reference 10):

Objectives of DO-178C:
(a) To promote safe implementation of aeronautical software;
(b) To provide clear and consistent ties with the systems and safety processes;
(c) To address emerging software trends and technologies;
(d) To address an approach that can change with the technology.


Items (c) and (d) would appear to place UAS and Agent-Oriented Software on the committee's revision agenda. Reference 10 provides a link to the RTCA DO-178C project, where the project-specific revisions and progress are documented. The revision is being undertaken by RTCA Special Committee SC-205 (jointly with European EUROCAE WG-71).

6. Closure

To close, we will consider an imaginary UAS autonomy scenario. A UA awakens before dawn at its home field, near but outside the city. It checks fuel level, checks date and time, senses local weather and gets a regional forecast. After checking all systems, including sensors, and checking NOTAMs, it decides to carry out one of several possible sensing missions at one of several designated locations. Following autonomous start-up and taxi, the UA then performs the mission with ATC data-link clearances, returns, lands, taxis and shuts down, all without mission-manager supervision or interaction. Technically, this can be done. Superficially, the operation could be safely performed in controlled airspace on an IFR flight plan, with ATC providing traffic separation and conflict resolution. But in real life, the UA lacks an important capability, and that lack would keep it on the ground: it cannot adequately see and avoid other aircraft in flight. Conventional pilot-like see-and-avoid capability is a regulatory requirement even under IFR flight rules, and it is still beyond the ken of UAS. Although progress is being made on what is being called sense, detect and avoid, a system that meets a standard acceptable to the Regulators has not yet been achieved. Next-generation air traffic technologies (NGATS) may provide a way over this hurdle. (See related essay on NGATS.)

Returning to the real world, however, it seems clear that the very thought of an autonomous civil UAS will send shivers up the spines of airworthiness Regulators. As UAS knock on the door for access to unrestricted civil airspace, we believe that regulators will - for quite some time to come - have little time in their schedules for considering flight-critical software that implements autonomy. Civil UAS will incorporate ever-increasing levels of automation throughout the UA systems, but human supervision will remain essential.

Finis.
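The scenario's conclusion reduces to a conjunction of pre-flight predicates, which a toy go/no-go gate makes explicit. The predicate names below are our own invention; the point is that one unsatisfied regulatory capability - see-and-avoid - vetoes the flight no matter how capable the rest of the system is.

```python
# Every check must pass for an autonomous launch; any single failure,
# including the missing see-and-avoid certification, keeps the UA grounded.
REQUIRED_CHECKS = ("fuel_ok", "weather_ok", "notams_ok", "systems_ok",
                   "see_and_avoid_certified")

def mission_go(checks):
    """Return True only if every required check passes."""
    return all(checks.get(name, False) for name in REQUIRED_CHECKS)
```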


References:

1. Advisory Circular (AC) 20-115B, RTCA, Inc. Document RTCA/DO-178B, U.S. Federal Aviation Administration.
2. DO-178B, Software Considerations in Airborne Systems and Equipment Certification, RTCA Inc. document, December, 1992.
3. FAA Order 8110.49.
4. The Job Aid, Conducting Software Reviews Prior to Certification, FAA Certification Service, published 16 January, 2004.
5. SAE document ARP 4761, Guidelines and Methods for Conducting the Safety Assessment Process on Civil Airborne Systems and Equipment, SAE International, issued 1996.
6. Handbook for Object-Oriented Technology in Aviation (OOTiA), co-sponsored by U.S. FAA and NASA, published 26 October, 2004.
7. Agent-Oriented Software Engineering, Nicholas Jennings and Michael Wooldridge, Queen Mary and Westfield College, University of London, London, U.K. (1999).
8. Jack, Intelligent Agent™ Summary of an Agent Infrastructure, N. Howden et al., Agent Oriented Software Pty. Ltd., Victoria, Australia.
9. First Flight True UAV Autonomy at Last: http://www.agent-software.com/shared/profile/consulting.html
10. RTCA DO-178C, link to RTCA project of revision to DO-178B: http://www.rtca.org/comm/Committee.cfm?id=55
