Operational Testing of Systems with Autonomy

March 2019
IDA document: D-9266
FFRDC: Systems and Analyses Center
Type: Documents
Division: Operational Evaluation Division
Authors:
Heather M. Wojton, Daniel J. Porter, Yevgeniya K. Pinelis, Chad M. Bieber, Michael O. McAnally, Laura J. Freeman
The purpose of this briefing is to provide an executive-level overview of a more detailed, working-level framework for testing systems with autonomy (SWA). The briefing outlines the challenges and broad-stroke reforms needed to prepare for the next century's test challenges. The suggestions outlined here are not meant to be final.
A fundamental challenge in autonomy is developing trust in its decision-making capacity across all of the situations and environments it may encounter. We can make many assumptions about human decision making that cannot be made about machine decision making. If we think of warfighters as a system being acquired, each individual unit has undergone decades of field testing of its quirks, and the entire manufacturing line has gone through tens of thousands of redesigns to increase efficiency. Developing human intelligence also takes a long time: decades of direct, time-intensive training from experts. Developing trust in that intelligence takes time as well. If a person has survived to age 18, we can make inferences about their ability to navigate a plethora of complex, three-dimensional environments. For example, we're confident they can walk in sand, in snow, and on asphalt because we're confident they have a generalizable method of walking. We can't assume the same of autonomous systems; we need evidence.