Automation Infatuation

Man is best when doing least?

A dog also is best when doing least

Shiro also is best when doing least

The development of automation is plagued by a techno-centric world view, as enunciated by Birmingham and Taylor (1954) in their observation that “man is best when doing least.” It is difficult to imagine how this perspective gained currency among designers who, being human themselves, must realize that they do not function well if they are constrained to doing as little as possible. Are we not best when we are active and engaged?

Another manifestation of the techno-centric world view is apparent in the idea that systems work best when we eliminate the error-prone human. The audio track accompanying a video of the 1988 Airbus A320 crash at the Habsheim airshow in France offers a classic illustration. As the aircraft passes in a low fly-by, the narrator observes that aircraft manufacturers have tried to fix that problem (i.e., the problem of the error-prone human) by designing the pilot out of the cockpit: this, he announces, is the first fully automated plane. Moments later, the aircraft crosses the end of the runway and sinks gracefully into the forest to erupt in a fireball.

No, this was a disaster, not a comedy routine!

Enter destination & press button

The techno-centric ideal of a socio-technical system that will function adequately without human involvement is a fantasy, and the problems associated with adding the human as an afterthought are legion. It is hardly surprising that those immersed in the design of technology would be drawn towards automation. However, the common outcome of this technological emphasis is that automation, a solution, is treated as a design requirement. A basic principle of requirements engineering is that requirements should not be expressed in terms of solutions; they should be developed in response to operational need and should be expressed in solution-independent terms. For example, “the system shall automate route planning” prescribes a solution, whereas “the operator shall be able to establish a viable route within the available time” expresses the underlying need. Automation as a design requirement does not comply with this principle.

But note this: the topic for the 2014 Human Factors Prize is human-automation interaction/autonomy. The Prize, which recognizes excellence in HF/E research, confers a $10,000 cash award and publication of the winning paper in Human Factors.

In human factors, we should know better. Ours is a design discipline. We are struggling towards a robust design perspective, but our progress is confounded by the way we allow the issues to be framed around techno-centric ideas. Many forms of cognitive support (such as appropriate displays of information, well-integrated communication tools, and support structures for organizing workflow) facilitate meaningful human engagement with work. Automation does not; it restricts human engagement with work. That is presumably why automation has found such favor among those embedded in the techno-centric world view.

… ridding us of the error-prone human

… purging the error-prone human

We, in human factors, must break from the techno-centric world view!

If we are to make headway in this heavily techno-centric world, we need to put forward an alternative; we need to put forward an evocative image of a human-centric world view. We must be adamant in rejecting design options, such as automation, posed as design principles, and instead focus on the nature of the work and how it might be accomplished. We will then be well placed to think about how that work might be supported with one or more of the many human-centric design options open to us.

4 responses to “Automation Infatuation”

  1. Gavan, first of all, the HFES prize is for _human interaction with automation_. They are interested in human interaction with special-purpose automation more than human-system interaction in a general sense. I didn’t think this was an attempt to promote a techno-centric world. Secondly, I share your obsession with human-centric design, but I see no reason why we should avoid exploiting the power of technology. Industry will always be concerned about human error and reliability and will make all possible attempts to design error out of systems. If this results in humans being designed out of the system, is that not a failure of the human factors community to demonstrate that unique human abilities still make an important contribution to system performance?

    This brings me to Function Allocation. As you pointed out elsewhere, this has been a debate for a long time and most practitioners seem to read into Fitts’ List whatever suits their purpose. The problem is that the Fitts principles are generally valid, but have to be interpreted for specific application domains. The views of Parasuraman, Sheridan and others are valid, but often without proper reference to specific contexts. As far as I know, there has been very little research into developing new models of human-system collaboration, and this causes many problems for designers. I have spent the past three years trying to understand why the old FA models cannot be applied to advanced industrial environments like nuclear power plants.

    I think everybody agrees that there will probably always be things that humans can do that are impractical or too expensive for machines, or things that machines can do that are too hard or dangerous for humans. However, the balance between the two tends to be different for modern technology. Especially with the advanced automation systems required to ensure safe and reliable operation of complex processes, a different approach is required to finding the balance between the need for intrinsic human skills and knowledge, and the availability of extrinsic support provided by technology. This is where the classical function allocation paradigms need to be updated.

    In the nuclear industry the most trusted method has always been NUREG/CR-3331 (“A Methodology for Allocating Nuclear Power Plant Control Functions to Human or Automatic Control”, 1983). But even this methodology is showing its age. It does not make provision for newer concepts like dynamic or adaptive function allocation. Advanced control systems typically allow definition of discrete modes, states, transitions and transients in a system. The challenge is now to design human-system interfaces that focus on situation awareness by providing unambiguous indications of the system’s condition.
    There’s much more I can say about function allocation, but to cut a long story short, I don’t agree that a techno-centric world where systems function without humans is a fantasy. This is going to happen whether we like it or not. In spite of what we as human factors people tell engineers about automation (“automate not because you can, but because you have no choice…”), it is already happening. We see this in the rapid development of drones and many other autonomous systems like industrial robots. The fact that there is still a remote human controller somewhere may be only a temporary situation. I’m convinced that “Press button for destination” is not too far in our future. Just ask Google…
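
    As an aside, the dynamic function allocation the comment describes can be made concrete with a minimal sketch. Everything in it is hypothetical and purely illustrative (the mode names, the function, the allocation table); it is not drawn from NUREG/CR-3331 or any real control system. The point is only that the human/automation split is re-evaluated per mode, and that the interface states unambiguously who is in control:

        from enum import Enum

        class Mode(Enum):
            STARTUP = "startup"
            NORMAL = "normal"
            TRANSIENT = "transient"

        class Agent(Enum):
            HUMAN = "human"
            AUTO = "automation"

        # Hypothetical allocation policy: which agent controls each function
        # in each plant mode. Under adaptive allocation this table is consulted
        # again on every mode transition, rather than fixed once at design time.
        ALLOCATION = {
            ("feedwater_control", Mode.STARTUP): Agent.HUMAN,
            ("feedwater_control", Mode.NORMAL): Agent.AUTO,
            ("feedwater_control", Mode.TRANSIENT): Agent.HUMAN,
        }

        def hmi_banner(function: str, mode: Mode) -> str:
            # The interface must leave no ambiguity about who is in control now.
            agent = ALLOCATION[(function, mode)]
            return f"[{mode.value.upper()}] {function}: {agent.value} in control"

        print(hmi_banner("feedwater_control", Mode.TRANSIENT))
        # [TRANSIENT] feedwater_control: human in control

    In a real plant the table would be far richer, and the mode transitions themselves would be the hard design problem; the sketch only shows the shape of the idea.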

    • Jacques:

      Thank you for the lengthy comment. I see four points here:

      The HFES prize; more than one person pointed out that the human factors prize is not about designing automation but about working on the interaction between humans and automation. I understand that, but I continue to think that this reveals an unfortunate emphasis on the role that automation has to play in technological development. I do not want to claim that those who chose this topic set out to promote a techno-centric world view, but rather that the techno-centric world view is pervasive and subtle in Human Factors (as it is everywhere), and this is just another illustration of how we are captured by it.

      Your second point; first of all, I am not obsessed with human-centric design but rather with work-focused design. There is a subtle but critical difference. I do not want to avoid exploiting the power of technology; rather, I want to use it in service of the work we are trying to accomplish. I do agree that we, the human factors community, are failing to make the point that human capability remains essential in any complex work system. That is a failing that I would like to think my post goes some way towards correcting.

      I am also concerned with the emphasis on error. This emphasis ignores the human strengths that make a complex system effective. While we clearly do not want people making errors, I suggest that the best way to help people be reliable is to base design on a comprehensive work analysis.

      Function allocation; I have addressed this problem in my paper, “Work-focused analysis and design,” Cognition, Technology & Work 14(1): 71-81 (2012). I argue in that paper that the systematic and comprehensive application of Cognitive Work Analysis can resolve this problem. I know this is challenging. Perhaps we should discuss it in a Skype call.

      Automation as fantasy; unfortunately, the detail on the human factors prize mixes remote control with automation. I have no problem with remote control, although in the drone program it has been done very poorly. Remote control might be a topic I could get behind.

      Let’s focus in this discussion on fully automated systems. The promise of the Google car is relevant. The popular press makes it appear as if this is just a year or two away. I recognize that full automation is possible in a tightly constrained world. Will Google be able to constrain the driving world sufficiently to allow their rule-based system to be effective? We will see, but this sort of promise has been made within Artificial Intelligence for some 30 years or more. By now, I would like to think that we appreciate the limitations of rule-based systems: their reliable performance relies on a tightly constrained context, and they do not permit functionality to change as conditions change.
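
      To illustrate that last point, here is a minimal sketch of rule-based brittleness. The rules, field names, and scenarios are all hypothetical; they stand in for no real system, Google’s or otherwise. Inside the context the rules anticipate, the system behaves sensibly; one step outside it, the system has nothing to offer:

          # A toy rule-based "autopilot": each rule pairs a condition with an action.
          RULES = [
              (lambda s: s["obstacle"] == "car" and s["gap_m"] < 20.0, "brake"),
              (lambda s: s["obstacle"] == "none" and s["lane_visible"], "cruise"),
              (lambda s: not s["lane_visible"], "hand over to human"),
          ]

          def decide(scene: dict) -> str:
              for condition, action in RULES:
                  if condition(scene):
                      return action
              # Outside the constrained context there is no graceful degradation,
              # only a fall-through: the rule set simply has nothing to say.
              return "undefined"

          # A scenario the rules anticipate, and one they do not.
          print(decide({"obstacle": "car", "gap_m": 12.0, "lane_visible": True}))   # brake
          print(decide({"obstacle": "deer", "gap_m": 12.0, "lane_visible": True}))  # undefined

      The failure mode is not a degraded answer but a fall-through: competence ends exactly where the designers’ anticipation ended, which is the limitation of tightly constrained, rule-based contexts claimed above.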

  2. Much to sympathise with here, especially around ‘techno-centric’: if you are looking at things from a technical point of view, you tend to look at the technology, not the problem. But humans do relatively badly at *routine* jobs compared to technology; we can look at what we do as ‘tasks’ and then divvy them up between technology and humans, with the interesting stuff given to humans and the boring stuff to the tech.

    • Martin:

      Thank you for your thoughtful response. I would like, however, to caution you that some of what you say sounds like the now-discredited approach of allocating functions based on a judgment about what machines are better at and what humans are better at. This approach was popular in Human Factors from the 1950s, but over the last decade or two it has fallen out of favor. I discuss some of the related issues in “Work-focused analysis and design,” Cognition, Technology & Work 14(1): 71-81 (2012) – ping me if you would like a copy. I argue in that paper that the way around this problem is to take a systematic and serious approach to work-focused analysis and design. While we clearly do not want people doing things they cannot do very well, my central claim is that this will not be an issue if the work analysis is done properly.
