Reliability Survey of Military Acquisition Systems

August, 2014
IDA document: D-5257
FFRDC: Systems and Analyses Center
Type: Documents
Division: Operational Evaluation Division
Authors: Jonathan L. Bell, Matthew R. Avery, Michael C. Wells
Test results from the last few decades indicate that the DoD has not yet realized statistically significant improvements in the reliability of many systems. However, there is evidence that systems that implemented a comprehensive reliability growth program are more likely to meet their development goals. Reliable systems cost less overall, are more likely to be available when called upon, and have longer service lives. Reliability is more effectively and efficiently designed in early (design for reliability) than tested in late. While building reliable systems requires more upfront effort, the potential future savings are too great to ignore.

At the request of the Director, Operational Test and Evaluation (DOT&E), the Institute for Defense Analyses (IDA) has conducted annual reliability surveys of DoD programs under DOT&E oversight since 2009 to provide a continuing understanding of the extent to which military programs are implementing reliability-focused DoD policy guidance and to assess whether that implementation is leading to improved reliability. This paper provides an assessment of the survey results.

Overall, the survey results support the understanding that systems with a comprehensive reliability growth program are more likely to meet reliability goals in testing. In particular, the results show the importance of establishing and meeting Reliability, Availability, and Maintainability (RAM) entrance criteria before proceeding to operational testing (OT). While many programs did not establish or meet RAM entrance criteria, those that did were far more likely to demonstrate reliability at or above the required value during OT.
Examples of effective RAM entrance criteria include (1) demonstrating, in the last developmental test event prior to OT, a reliability point estimate that is consistent with the reliability growth curve, and (2) for automated information systems and for software-intensive sensor and weapon systems, ensuring that no Category 1 or 2 deficiency reports remain open prior to OT. There is also evidence that having intermediate goals linked to the reliability growth curve improves the chance of meeting RAM entrance criteria.

The survey results also indicate that programs are increasingly incorporating reliability-focused policy guidance, but despite these improvements in policy implementation, many programs still fail to reach their reliability goals. In other words, the policies have not yet proven effective at improving reliability trends. The reasons programs fall short include inadequate requirements, unrealistic assumptions, the lack of a design-for-reliability effort, and failure to employ a comprehensive reliability growth process. Although the DoD is in a period of new policy that emphasizes sound reliability growth principles, without consistent implementation of those principles the reliability trend will likely remain flat.

In the future, programs need to do a better job of incorporating a robust design and reliability growth program from the beginning, one that includes the design-for-reliability tenets described in ANSI/GEIA-STD-0009, “Reliability Program Standard for Systems Design, Development, and Manufacturing.” Programs that follow this practice are more likely to be reliable. There should be greater emphasis on ensuring that reliability requirements are achievable and that reliability expectations during each phase of development are supported by realistic assumptions linked to systems engineering activities.
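To make the notion of a growth curve with intermediate goals concrete, the sketch below uses a Duane/Crow-style power-law planning model, a common choice in DoD reliability growth handbooks. The survey does not prescribe a particular model, and all parameter values (initial MTBF, growth rate, milestone hours) are hypothetical illustrations only.

```python
# Sketch of a power-law (Duane/Crow-style) reliability growth planning curve.
# The model choice and all numbers here are illustrative assumptions, not
# values taken from the survey.

def planned_mtbf(t_hours, m_initial, t_initial, growth_rate):
    """Instantaneous MTBF on a power-law growth curve at cumulative test time t."""
    # Cumulative MTBF follows a power law in test time; the instantaneous
    # MTBF is the cumulative value divided by (1 - growth_rate).
    m_cum = m_initial * (t_hours / t_initial) ** growth_rate
    return m_cum / (1.0 - growth_rate)

# Hypothetical program: 50-hour cumulative MTBF after an initial 500 test
# hours, an assumed growth rate of 0.3, and intermediate goals tied to
# cumulative test-hour milestones in the schedule.
milestones = [500, 1500, 3000, 6000]  # cumulative test hours
for t in milestones:
    goal = planned_mtbf(t, 50.0, 500.0, 0.3)
    print(f"{t:>5} test hours: intermediate MTBF goal ~ {goal:.0f} hours")
```

A point estimate from the last developmental test event that falls at or above the curve value for the hours accumulated so far is one way to state the first entrance criterion quantitatively.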
Programs should also establish RAM entrance criteria and ensure these criteria are met before proceeding to the next test phase. A program’s reliability growth curves should be constructed with a series of intermediate goals, with time allowed in the program schedule for test-fix-test activities to support achieving those goals. Finally, when sufficient evidence exists to determine that a program’s demonstrated reliability is significantly below the growth curve, that program should develop a path forward to address the shortfall and brief its corrective action plan to the acquisition executive.
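One way to judge whether demonstrated reliability is significantly below the curve is to compare an upper confidence bound on MTBF against the curve goal: if even the optimistic bound falls short, the shortfall is statistically significant. The sketch below assumes exponentially distributed times between failures (so the bound comes from a chi-square quantile, approximated here with the Wilson-Hilferty formula); the test hours, failure count, confidence level, and curve goal are all hypothetical.

```python
# Sketch: flagging demonstrated reliability as significantly below the growth
# curve. Assumes exponential times between failures; the data and the 80%
# confidence level are illustrative assumptions, not from the survey.
from statistics import NormalDist

def chi2_quantile(p, df):
    """Approximate chi-square quantile via the Wilson-Hilferty transformation."""
    z = NormalDist().inv_cdf(p)
    return df * (1 - 2 / (9 * df) + z * (2 / (9 * df)) ** 0.5) ** 3

def mtbf_upper_bound(total_hours, failures, confidence=0.80):
    """One-sided upper confidence bound on MTBF for a time-terminated test."""
    return 2 * total_hours / chi2_quantile(1 - confidence, 2 * failures)

# Hypothetical test event: 1,200 hours with 14 failures; curve goal 130 hours.
curve_goal = 130.0
point_estimate = 1200 / 14        # roughly 86 hours demonstrated
ucb = mtbf_upper_bound(1200, 14)  # optimistic bound at 80% confidence
# If even the upper bound is below the curve goal, the shortfall is
# statistically significant and a corrective action plan is warranted.
significantly_below = ucb < curve_goal
print(f"point estimate {point_estimate:.0f} h, 80% UCB {ucb:.0f} h, "
      f"significantly below the {curve_goal:.0f} h goal: {significantly_below}")
```

The same machinery, run with a lower confidence bound against the requirement, supports the RAM entrance-criterion checks discussed earlier.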