The Director, Operational Test and Evaluation (DOT&E), has issued several policy memos emphasizing statistical rigor in test planning and data analysis, including the use of design of experiments (DOE) principles and smart survey design and administration. Often, particularly when testing software-intensive systems, engineering and human factors must be accounted for simultaneously to support a complete and operationally realistic evaluation of the system. While some software systems may be inherently deterministic, once they are placed in their intended environment, with error-prone humans and highly stochastic networks, variability in outcomes can, and often does, occur. This talk will briefly discuss best practices and design options for including the user in the DOE, and present a real-world example.
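As one illustration of the idea of including the user in the DOE, the sketch below builds a small full-factorial test matrix that treats a human factor (operator experience) as a design factor alongside engineering and environment factors. The factor names and levels are hypothetical, chosen only to make the structure concrete; they are not taken from the talk.

```python
# Minimal sketch: a 2x2x2 full-factorial design in which the human
# operator is an experimental factor, not just a source of noise.
# All factor names/levels below are illustrative assumptions.
from itertools import product

system_load = ["low", "high"]                 # engineering factor
network = ["nominal", "degraded"]             # environment factor
operator_experience = ["novice", "expert"]    # human factor

# Cross all levels to enumerate every run in the design (8 runs total).
design = list(product(system_load, network, operator_experience))

for run_id, (load, net, op) in enumerate(design, start=1):
    print(f"run {run_id}: load={load}, network={net}, operator={op}")
```

Crossing the human factor with the engineering factors, rather than holding the operator fixed, is what lets the analysis separate system-driven variability from user-driven variability.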