Decision-making inconsistencies in ATC: an empirical investigation into reasons for rejecting decision support automation
The issue of automation acceptance is not just an academic one. The ATM community has long recognised that insufficient acceptance (for instance, of new advisory systems) can jeopardise the introduction of new automation (Bekier et al., 2012; Kauppinen et al., 2002). Mismatches between the strategies underlying human and automation decision-making have been identified as playing a part in the observed acceptance issues. To achieve acceptable and effective teamwork between human and automation, it is necessary to develop systems that better acknowledge and respond to individual differences, and that harmonise human and automation decision-making strategies. The current state of the art in automation indicates that this milestone has not yet been reached. Several past projects have explored the potential benefits of strategic aiding automation, but to date all have been limited in one important regard: they could not ensure that the automation's strategy matched that of the human.

Starting from another perspective, the Multidimensional Framework for Advanced SESAR Automation (MUFASA) project set out to explore the role of strategic conformance between human and automation: if automation could be developed in such a way that it perfectly mirrored the way an operator worked, would the operator accept it? Through a unique experimental protocol, the MUFASA project has developed and empirically tested a simulation platform capable of capturing operator performance and presenting a given operator's own previous performance back as "automation" through unrecognisable replays. In essence, "automation" was, for the first time, able to perform exactly like the operator. Simulations involving 16 air traffic controllers revealed a main effect of conformance: conformal advisories were accepted more often, rated higher, and responded to faster than non-conformal advisories.
Note that "conformal" advisories were unrecognisable replays of a given controller's own previous performance, whereas "non-conformal" advisories were those of a colleague who had chosen an alternate solution. Qualitative analysis of controllers' conflict resolution performance indicated that controllers were inconsistent both internally and in comparison to their colleagues. If true, this challenges the majority of automation design, which follows a "one-size-fits-all" approach. One speculation is that controllers are simply inconsistent over time in the solutions and strategies they choose to employ. Alternatively, it could be that controllers are not opposed to automation per se, but to advisories from any source (even, say, from a colleague). At the highest level, the scientific impact of MUFASA has been to provide long-needed empirical insights into the fundamental building blocks of human-machine coordination. The original project provided meaningful initial data on the critical importance of both acceptance and strategic conformance, factors that have the potential to determine automation use (Hilburn et al., 2014; Westin et al., 2013).

Results and unanswered questions from the MUFASA project lead us to propose an extension exploring three research topics. Using the existing research platform developed for the MUFASA project, research is now underway, with experiments scheduled for January-April and planned to be completed by June 2015. In a series of real-time simulations, we will explore the following questions: Transparency – does automation transparency affect acceptance or agreement? Christoffersen and Woods (2002) argue that cooperating with smart technologies requires more information (i.e., richer interfaces), not less. On the other hand, too much information can overload the user and negatively affect cooperation.
The effects of automation transparency in regulating the amount of information available are worth investigating from both an academic and an operational perspective. Consistency – to what degree do controllers agree on resolution strategies? Are controllers internally consistent in their resolution strategies over time? We will determine the structure and extent of both inter- and intra-controller consistency and how they affect automation design. Source bias – are controllers biased against automation per se, or against any external source of advice? Would they show a similar level of bias against a presumed human advisor as against a presumed machine advisor? Subsequent experiments will investigate the effects of a human versus an automation advisory source on advice acceptance and controller performance.

The paper intended for the 2015 CEAS conference in Delft will primarily focus on the consistency research and the results obtained in the associated human-in-the-loop simulations. Research into controller consistency can be divided into inter-controller consistency (i.e., agreement between controllers) and intra-controller consistency (i.e., internal consistency within a single controller). Controllers are generally considered homogeneous (high inter-controller consistency) in their resolution strategies, and while overlooking individual differences has perhaps been sufficient for today's ATC system, researchers have argued that more tailored, individual-sensitive automation is needed for successful human-automation teamwork in future ATC (Langan-Fox et al., 2009; Stankovic et al., 2008; Willems & Koros, 2007). Decision support systems that acknowledge individual differences become increasingly important for user attitudes and performance in decision-making situations with vaguely defined tasks and problem-solving processes, of which ATC is a prime example (Liu et al., 2011). In terms of intra-controller consistency, the data are both sparse and unclear.
Some data point to consistency (Magyarits & Kopardekar, 2001), while other data suggest that controllers are inconsistent (Westin, 2012), or that inconsistency increases with traffic complexity (Thomas et al., 2001). The results will have an operational impact in that they can inform operational communities about the do's and don'ts of automation design, and about how decision support tools should support the operator – not only by considering the individual differences of the operator, but also the extent to which operators are consistent over time. The lessons from this research are not limited to ATM, of course, and apply to any number of other domains in which automation is being designed to assist the human decision-making process.
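The two notions of consistency discussed above can be made concrete with a small sketch. Assuming resolution strategies are coded as categorical labels per controller per conflict scenario (e.g., "heading", "altitude", "speed" – a hypothetical coding scheme for illustration, not the project's actual one), inter-controller consistency can be expressed as mean pairwise agreement across controllers on shared scenarios, and intra-controller consistency as agreement between a controller's repeated encounters with the same scenario:

```python
from itertools import combinations

def inter_consistency(strategies_by_controller):
    """Mean pairwise agreement across controllers on shared scenarios.

    strategies_by_controller: dict mapping controller id ->
        dict of scenario id -> chosen strategy label.
    """
    agreements = []
    for a, b in combinations(strategies_by_controller.values(), 2):
        shared = set(a) & set(b)
        agreements += [a[s] == b[s] for s in shared]
    return sum(agreements) / len(agreements) if agreements else None

def intra_consistency(repeated_choices):
    """Fraction of repeat encounters where the controller chose the
    same strategy as on the first encounter of that scenario.

    repeated_choices: dict of scenario id -> list of strategy labels
        in encounter order.
    """
    repeats = []
    for choices in repeated_choices.values():
        first = choices[0]
        repeats += [c == first for c in choices[1:]]
    return sum(repeats) / len(repeats) if repeats else None

# Illustrative (fabricated) data: two controllers, three scenarios.
by_controller = {
    "C1": {"s1": "heading", "s2": "altitude", "s3": "speed"},
    "C2": {"s1": "heading", "s2": "speed",    "s3": "speed"},
}
print(inter_consistency(by_controller))  # agreement on 2 of 3 shared scenarios
# One controller re-flying scenario s1 three times:
print(intra_consistency({"s1": ["heading", "heading", "altitude"]}))
```

Simple agreement rates such as these are only a starting point; chance-corrected measures (e.g., kappa-type statistics) would be the natural next step in an actual analysis.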