In this presentation, IDA considers many safety and ethical concerns related to Department of Defense (DOD) use of artificial intelligence (AI) and machine learning capabilities. While capabilities that support personnel processes and systems tend to carry low safety risk, others risk undermining the DOD's principles for responsible AI. Examples include service member privacy concerns, invalid prospective policy analysis, disparate impact against marginalized service member groups, and service members' unexpected responses to AI and machine learning. As barriers to adopting these new capabilities have fallen, the ways to apply them have multiplied, yet the analytical community still does not fully understand some of these concerns. IDA proposes mechanisms to assure stakeholders that the DOD's use of AI and machine learning adheres to its ethical principles.