Automation refers to methods of production that rely on mechanical or electronic technologies as a replacement for human labor. Automatic systems have gained in autonomy and authority, such that their activity has become less dependent on operator interventions. The use of automation within high-risk industrial production systems has increased markedly during the last 50 years.
It has been suggested that human-automation transactions should be conceptualized within the framework of cooperation, and consequently that automatic systems should be designed to be cooperative. The question is then how design can promote human-automation cooperation, and how the quality of cooperation can be assessed. The OECD Halden Reactor Project performed two closely related experiments, which allowed assessments of whether the quality of human-automation cooperation would be promoted by a human-machine interface designed to increase the observability of the automatic system's activity using graphical and verbal feedback, as compared to a conventional human-machine interface.
The quality of human-automation cooperation was assessed from subjective operator judgements. The experiments demonstrated a clear improvement in human-automation cooperation quality when the observability of the automatic system's activity was increased. As the trend in automation design seems to imply an increase in system autonomy and authority, the issue of human-automation cooperation can be expected to further gain in importance in future settings. - The quality of human-automation cooperation in human-system interface for nuclear power plants - Ann Britt Miberg Skjerve, and Gyrd Skraaning, Jr.
Humans and Automation Lab
Research in the Humans and Automation Lab focuses on the multifaceted interactions of human and computer decision-making in complex sociotechnical systems. With the explosion of automated technology, the need for humans as supervisors of complex automatic control systems has replaced the need for humans in direct manual control. A consequence of complex, highly automated domains in which the human decision-maker is more on-the-loop than in-the-loop is that the level of required cognition has moved from that of well-rehearsed skill execution and rule following to higher, more abstract levels of knowledge synthesis, judgment, and reasoning.
Designing human-centered automation: trade-offs in collision-avoidance system design
Goodrich, M.A.; Boer, E.R. Res. & Dev., Nissan Cambridge Basic Res., MA, USA
This paper appears in: Intelligent Transportation Systems, IEEE Transactions on
Publication Date: Mar 2000.
Abstract: Human-centered automation problems have multiple attributes: an attribute reflecting human goals and capabilities, and an attribute reflecting automation goals and capabilities. In the absence of a general theory of human interaction with complex systems, it is difficult to define and find a unique optimal multiattribute resolution to these competing design requirements. We develop a systematic approach to such problems using a multiattribute decomposition of human and automation goals. This paradigm uses both the satisficing decision principle, which is unique to two-attribute problems, and the domination principle, which is a common manifestation of the optimality principle in multiattribute domains.
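To make the two principles concrete, the following is a minimal illustrative sketch, not the paper's actual implementation: each candidate action carries two attribute scores (a benefit reflecting human goals and a cost reflecting automation goals, with made-up names and numbers), a satisficing filter keeps actions whose benefit is at least proportional to their cost, and a domination filter discards Pareto-dominated actions.

```python
# Hypothetical sketch of satisficing vs. domination over two attributes.
# Action names, attribute names, and values are invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    benefit: float  # attribute reflecting human goals (e.g. utility)
    cost: float     # attribute reflecting automation goals (e.g. liability)

def satisficing(actions, b=1.0):
    """Keep actions whose benefit is at least b times their cost."""
    return [a for a in actions if a.benefit >= b * a.cost]

def undominated(actions):
    """Drop actions that are Pareto-dominated, i.e. some other action
    has strictly higher benefit and strictly lower cost."""
    return [a for a in actions
            if not any(o.benefit > a.benefit and o.cost < a.cost
                       for o in actions)]

candidates = [
    Action("brake hard", benefit=0.9, cost=0.7),
    Action("brake soft", benefit=0.6, cost=0.3),
    Action("do nothing", benefit=0.2, cost=0.8),
]

print([a.name for a in satisficing(candidates)])  # ['brake hard', 'brake soft']
print([a.name for a in undominated(candidates)])  # ['brake hard', 'brake soft']
```

Note that the two filters agree here only by construction of the example: "do nothing" fails the satisficing test (its cost exceeds its benefit) and is also dominated by "brake soft", which offers more benefit at lower cost.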