

AF 447 as a paradigmatic accident


The role of automation on modern airplanes

Antonio Chialastri
STASA, Italy

ABSTRACT

In this chapter, the author presents a human factors perspective on automation: why, when and how automation has been introduced in the aviation domain, what problems arise from different ways of operating, and the possible countermeasures to limit faulty interaction between humans and machines. The chapter is divided into four parts: a definition of automation and its advantages in ensuring safety in complex systems such as aviation; the reasons for the introduction of on-board automation, with a quick glance at the history of accidents in aviation and the related safety paradigms; ergonomics: displays, tools and human-machine interaction, emphasizing the cognitive demands of high-tempo and complex flight situations; and an illustration of the AF 447 case, a crash that happened in 2009 and whose causes are linked to faulty human-machine interaction.

INTRODUCTION

Human error is considered the primary cause of accidents. Actually, as many safety scholars affirm, human error is only an epiphenomenon. The real causes of an accident are the factors that induce the error, such as human performance limitations, poor teamwork, organizational pressure on crews to obtain unreasonable performance, faulty human-machine interaction and, lately, psychological upsets in which pilots have posed an intentional threat to the safety of flight.

Although humans could somehow be considered a threat to safety, they are also the main resource for coping with unexpected events, unruly technology, a changing environment and uncodified system failures. System designers often conceive of the human being as a superman or a robot. Actually, we have psychological limitations (constant or transient), physical constraints (due to ageing, fatigue, etc.), big differences among individuals, vices and emotions. All these elements are powerful sources of variability, and this brings pros and cons. Variability is often considered something negative, because aviation is based on standard procedures. On the other hand, variability provides the flexibility needed to make the entire system resilient. Resilient systems are characterized by flexibility and robustness, and automation provides the latter characteristic. It is important to point out that we cannot rely on one alone, be it flexibility or robustness: both are needed. The problem is how to make these two necessary elements work together. The aim of this chapter is to show why, when and how automation has been introduced in the aviation domain, what problems arise from different ways of operating, and the possible countermeasures to limit faulty interaction between humans and machines. This chapter is divided into four main parts:

1. Definition of automation, its advantages in ensuring safety in complex systems such as aviation;

2. Reasons for the introduction of on-board automation, with a quick glance at the history of accidents in aviation and the related safety paradigms;

3. Ergonomics: displays, tools, human-machine interaction emphasizing the cognitive demands in high tempo and complex flight situations;

4. Illustration of the AF 447 case, a crash that happened in 2009, in which faulty human-machine interaction played a fundamental role.

WHAT IS AUTOMATION

According to a widely shared definition, “Automation is the use of control systems and information technologies to reduce the need for human work in the production of goods and services”. Another plausible definition, well suited to the aviation domain, could be: “The technique of controlling an apparatus, a process or a system by means of electronic and/or mechanical devices that replaces the human organism in the sensing, decision-making and deliberate output” (Webster, 1981). The Oxford English Dictionary (1989) defines automation as:

1. Automatic control of the manufacture of a product through a number of successive stages;

2. The application of automatic control to any branch of industry or science;

3. By extension, the use of electronic or mechanical devices to replace human labour.

According to Parasuraman and Sheridan, automation can be applied to four classes of functions:

1. Information acquisition;

2. Information analysis;

3. Decision and action selection;

4. Action implementation.

Information acquisition is related to the sensing and registration of flight data. These operations are the equivalent of human perception. Let’s imagine a video camera: it replaces continuous, boring, monotonous human observation with reliable, objective and detailed data on the environment. Automation may handle these functions easily and reliably, as it is more efficient than humans at detection. At the same time, automation offers the possibility of positioning and orienting the sensory receptors, sensory processing, initial data preprocessing prior to full perception, and selective attention (e.g. the focus function in a camera).

Information analysis follows perception. Unlike human cognitive functions, which can show biases (Kahneman, Tversky, 1994), automation assures reliable working memory and inferential processes. Working memory allows for quick retrieval of information, thanks to a database whose stored elements allow the incoming information to be matched against known patterns of analysis.

After data processing come decision and action selection. Here too automation is useful, because it provides different levels of augmentation or can replace human decision-making with machine decision-making. It is generally acknowledged that human decision-making processes are subject to several flaws, because of cognitive limitations, flawed logic and high variability. Moreover, machines are not affected by emotions or illusions, fatigue or boredom.

The fourth stage involves the implementation of a response or action consistent with the decision taken. Generally, at this stage automation replaces the human hands or voice. Certain features in the cockpit allow automation to act as a substitute for pilots. For instance, this occurs when – following an alert and warning for windshear conditions – the automation system detects an imminent danger from a power setting beyond a pre-set threshold. In this case, the autopilot automatically performs a go-around procedure, which avoids a further decrement in the aircraft’s performance.
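
The windshear example can be summed up in a minimal sketch of automation at the action-implementation stage. The following Python fragment is purely illustrative: the threshold value, parameter names and commanded targets are assumptions, not the logic of any real autoflight system.

    from dataclasses import dataclass

    THRUST_THRESHOLD = 0.9      # hypothetical normalized power-setting threshold
    GO_AROUND_PITCH_DEG = 15.0  # hypothetical pitch target for the escape maneuver


    @dataclass
    class FlightState:
        windshear_warning: bool  # alert already raised by the warning system
        power_setting: float     # normalized thrust actually demanded (0.0-1.0)


    def action_implementation(state: FlightState) -> dict:
        """Return the automatic response for the current flight state."""
        if state.windshear_warning and state.power_setting > THRUST_THRESHOLD:
            # Automation substitutes the pilot's hands: a go-around is flown
            # without waiting for manual input.
            return {"mode": "AUTO GO-AROUND",
                    "thrust": 1.0,
                    "pitch_target_deg": GO_AROUND_PITCH_DEG}
        # Otherwise automation keeps monitoring and the pilots keep flying.
        return {"mode": "MONITOR", "thrust": None, "pitch_target_deg": None}


    print(action_implementation(FlightState(windshear_warning=True, power_setting=0.95)))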

Finally, the automation informs the pilots about which function is actually in use. This completes the feedback loop that keeps the pilots constantly aware of what is going on. This information may be given via visual, aural or haptic means. Usually, when a piece of information is particularly relevant, all means are used redundantly. For instance, since the stall is the “worst case situation” for a pilot, manufacturers chose to reinforce the “Stall” information via a red light (STALL), an aural warning, a voice call (STALL) and a haptic device (the stick-shaker) that makes the control column vibrate. All these indications together make it almost impossible for such important information to go undetected.

HOW TO DESIGN THE AUTOMATION?

Besides being applicable to the abovementioned functions, automation has different levels corresponding to different uses of and interactions with technology, enabling the operator to choose the optimum level to be implemented, based on the operational context (Parasuraman, Sheridan, 2000). When designing the automation, the manufacturer may opt for different levels, corresponding to the actual requirements of the operational scenario. These levels are:

1. The computer offers no assistance; the human operator must perform all the tasks.

2. The computer suggests alternative ways of performing the task.

3. The computer selects one way to perform the task and

4. Executes that suggestion if the human operator approves, or

5. Allows the human operator a limited time to veto before automatic execution, or

6. Executes the suggestion automatically then necessarily informs the human operator, or

7. Executes the suggestion automatically then informs the human operator only if asked.

8. The computer selects the method, executes the task and ignores the human operator.

Based on these levels of automation, the manufacturer offers a series of options to pilots, who have to choose when and how to use them. Usually, the higher the workload, the higher the level of automation used, in order to displace mental energy from doing (hand flying) to monitoring.
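
As a minimal sketch, the scale above can be written down as an ordered set of levels together with a workload-driven choice among them. The mapping below is invented for illustration only; the text merely states that crews tend to select higher levels as workload rises.

    from enum import IntEnum


    class AutomationLevel(IntEnum):
        NO_ASSISTANCE = 1          # the human operator performs all the tasks
        SUGGESTS_ALTERNATIVES = 2
        SELECTS_ONE_WAY = 3
        EXECUTES_IF_APPROVED = 4
        EXECUTES_UNLESS_VETOED = 5
        EXECUTES_THEN_INFORMS = 6
        INFORMS_ONLY_IF_ASKED = 7
        FULLY_AUTONOMOUS = 8       # executes the task and ignores the human operator


    def level_for_workload(workload: float) -> AutomationLevel:
        """Map a normalized workload (0.0-1.0) to an automation level.

        Purely illustrative: a higher workload shifts the crew from hand flying
        towards monitoring, i.e. towards higher automation levels.
        """
        workload = max(0.0, min(1.0, workload))
        return AutomationLevel(1 + round(workload * (len(AutomationLevel) - 1)))


    for w in (0.1, 0.5, 0.9):
        print(w, level_for_workload(w).name)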

Accident trends

Let’s look at the graph below, showing the accident curve over the years, from the Sixties to today. The vertical axis corresponds to the number of accidents per million take-offs, while the horizontal axis corresponds to the decades. We can clearly notice some trends in flight safety. First of all, the safety level has improved over the years. This is good news. Unfortunately, there is also bad news: the accident curve never reaches zero. The third observation is puzzling: from time to time the accident curve surges, reflecting a change in the nature of accidents (Figure 1).

Figure 1. Accident rate over the years. Source: ICAO doc. 9683/950.

During the Sixties, accident causes were linked to a series of factors; among them, human performance, capabilities and limitations on one side, and poor ergonomics on the other. We should first investigate the reasons leading to the introduction of onboard automation. The main causes of aviation accidents were believed to be related to the human factor, such as “active failures” leading to loss of control of the aircraft. In those cases, the pilots failed to keep the aircraft under control, exceeding speed limits, stalling, reaching excessive bank angles, etc. The root cause of that kind of accident was a flawed performance that eventually caused the loss of control (the effect). Factors related to human performance, e.g. the impact of fatigue, attention, sustained high workload and stress mismanagement, were consequently addressed.

Technological solutions were sought to help pilots manage these factors. Innovation eventually led to the introduction on a massive scale of the autopilot, auto-throttle, flight director, etc. After the mid-Sixties, as a result of these innovations, the accident curve dropped dramatically. As we can observe, after a significant improvement the accident curve rose again during the mid-Seventies. Aviation safety experts were faced with accidents involving perfectly functioning aircraft, with no evidence whatsoever of malfunctions. In these cases (known as “Controlled Flight Into Terrain” – CFIT), the aircraft hit obstacles with the pilots in full control. The accidents were caused by loss of situational awareness, either on the horizontal or on the vertical path. The evidence showed that improper interaction between pilots was the main cause behind these accidents, so this time the solution came from psychology. Human factors experts developed techniques and procedures to enhance cooperation between pilots, and specific non-technical training became mandatory for pilots. For instance, a Cockpit Resource Management (CRM) course is nowadays mandatory in a pilot’s curriculum: its topics may include leadership, communication, cross-checking and criticizing fellow colleagues, assertiveness, resolution of conflicts, etc. The faulty interaction between pilots was not an unexpected phenomenon, since the human factors issue had emerged at the IATA Istanbul Conference in 1975. In 1977 the worst ever accident in commercial aviation happened in Tenerife, where 583 people lost their lives.

This accident could be considered paradigmatic, because after it a series of reforms in the approach to safety were implemented. The focus, according to the SHELL (Software, Hardware, Environment, Liveware-Liveware) paradigm, shifted from automation to relational skills, even if it is true that some technological solutions were very important in avoiding the CFIT phenomenon, such as the Ground Proximity Warning System. This device informs and warns the pilots about a dangerous proximity to terrain, also providing safety features such as an aural prompt ordering the pilots to go around and climb immediately.

During the Eighties, a new typology of accidents emerged. In 1978 the Airline Deregulation Act was passed in the United States. This initially introduced fierce competition between carriers, which in many cases cut expenses and investments to stay afloat in the aviation market. The conflict between protection and production led crews, in some cases, to lean towards profit-oriented decisions rather than safety-led judgement. Wrong or flawed decisions were taken in order to maximize profit. A pressing organizational climate led crews to erode the usual safety margins, for example by landing under a heavy thunderstorm, limiting the fuel uplift, or accepting stressful and demanding duty times. The accident curve rose again. This time, to cope with this new kind of threat, the solution came from a normative point of view: stricter regulations and more severe checks by the State on airlines regarding training, maintenance and crew rostering. Commercial aviation was not the only field impacted by the new profit-oriented managerial approach. NASA itself experienced it, accompanied by the “Faster, better, cheaper” mantra. Obviously, in a very challenging, demanding and extreme environment you cannot be faster, better and cheaper at the same time: if you are cheaper and better, you will hardly be faster; if faster and cheaper, you probably lose quality, so you are no better; if faster and better, it is unlikely to be cheaper. The solution to the managerial approach was normative, with emphasis on checks by the regulators, in order to make organizations compliant with norms, rules and procedures. NASA itself had two major mishaps (Challenger and Columbia) that showed several flaws at the organizational level. At the end of the decade, James Reason put forward a new safety paradigm, highlighting the impact of organizational factors in causing most accidents. His masterpiece “Human Error” was published in 1990, just at the end of the deregulation decade. His conception of safety is that, to avoid error, we set a series of defensive barriers at every level (political, managerial, professional, technological). Since every barrier contains some flaws, induced by the variability of human performance, the accident dynamics appear when the holes present at every stage line up and let the initial perturbation pass through all the defensive barriers. That is why Reason’s model is known as the “Swiss cheese model”.

During the Nineties, the pendulum swung back, with loss of control again becoming a major cause of aviation accidents. However, compared to the characteristic accidents experienced during the Sixties, the factors leading to loss of control appear to be different. Whereas in the early days of aviation human performance was impaired by “under-redundancy” – insufficient aids available to pilots for counteracting the psychophysiological effects of fatigue, workload and stress, which reduced the pilots’ performance – nowadays we may quote another phenomenon: “over-redundancy”. This means that increasing automation may drive the pilot out of the loop, thus causing reduced situational awareness, automation complacency or over-confidence, and loss of skills due to lack of practice in manually flying the aircraft. As a result, pilots may not be able to regain control once automation has failed, or may be incapable of effectively monitoring the performance of automated systems (and questioning it when required). Lisanne Bainbridge called such a situation the “ironies of automation”, to address a paradoxical approach to safety, namely the engineers’ approach. This point of view sees the human contribution as a threat to safety that has to be replaced, assisted or guided by automation. The paradox comes out when the designers allocate human intervention precisely where the automation fails or is not adequate to the task. In that particular moment, during an emergency or another critical situation, a complacent pilot, accustomed to relying on the automation and under-skilled because of lack of training, has to do something that automation cannot: from main threat to main resource in a few seconds.

The safety philosophy behind the adoption of increasing on-board automation is based on the assumption that human error is the main cause of accidents. Therefore, since the human (Liveware) component of the system is the flawed link in the accident chain, we ought to look for a substitute capable of handling the tasks once performed by pilots. This is only partially true, as we will see later on. It is first necessary to understand what the pros and cons of the human contribution to safety are, at what levels of operation automation offers undoubted advantages, and where the latter should end to leave room for pilots’ decisions. Beyond safety, the adoption of automation also responded to economic drivers: lower fuel consumption, cheaper maintenance and flexible pilot training. Concern for the adaptability of pilots to these new solutions only came at a later stage and only after some severe mishaps. To achieve this balance, we will briefly analyze what levels of operation are involved in flight and where automation – primarily conceived to replace certain human operator tasks – should give way to the pilot’s intervention.

According to a paradigm proposed by Jens Rasmussen and further developed by James Reason, human activity can be grouped into three main modes: skill-based action, rule-based action and knowledge-based action. The first is based on the human capability to accomplish physical tasks, such as providing correct inputs to the flight controls (so-called “stick and rudder” flying), responding to external stimuli in a quick and consistent way, and coordinating the body in order to obtain a desired result. It is mainly an area in which the psycho-physiological aspect is paramount. Unfortunately, the human brain developed in the savannah, where evolution shaped it for movement on two axes: back/forth and left/right. The entire perceptual, emotional and cognitive system is adapted to this kind of movement. We tend to forget that we have been flying only since the beginning of the 20th century: in about 100 years we cannot modify our main natural computer. So perception is altered in flight by visual illusions, the secretion of adrenalin interferes emotionally with our decisions, and cognitive biases make us prone to errors of judgment.

At the first level, the perceptual one, we use automation for monitoring tasks, such as detection, identification and response to external signals stemming from habits (body automatisms or conditioned reactions). Automation has played an important role at this level by replacing human performance rather well. We may now rely on a source of help when flying in clouds, immersed in thick fog, or at night. As a result, autopilots and auto-throttles (and, later on, auto-thrust computers) have come to gradually replace pilots in hand-flying. Generally speaking, an autopilot can tolerate workloads that are hardly sustainable by a pilot. Let’s imagine an oceanic flight during the night: an autopilot is able to maintain (with no effort at all) altitude, speed, track and so forth, whereas pilots are subject to tiredness, attention lapses, distraction, etc. On the other hand, the systematic replacement of basic flying skills has led to an erosion of competence because, as the Germans put it, “Übung macht den Meister” (“practice makes the master”). In the U.S., the FAA (Federal Aviation Administration) has suggested adopting “back to basics” training, in which pilots are taught how to fly without the help of automatisms and how to retrieve the elementary notions of aerodynamics, in order to avoid grossly misreading altitude, speed and power. Unfortunately, the new conception of fly-by-wire is based on so-called “flight control laws”. Depending on the configuration, the electrical sources available and countless other parameters, a medium-sized aircraft such as the Airbus A-320 (and similar types) responds with different logic when the pilot gives an input to the flight controls. When everything works as designed and no failure is present, the aircraft operates in “normal law”: almost everything is managed automatically and the flight envelope has numerous protections (against stall, excessive bank, excessive speed and so on). When the first failure shows up, there is an initial degradation to “alternate law” and eventually to “direct law”, where most of the protections disappear. It should be noted that other intermediate configurations are possible. Basically, the more a pilot is in trouble, the less help he/she receives from automation. Moreover, it is difficult to keep a big picture of these unusual configurations, since they depend on several parameters (flight phase, speed, electrical interactions, etc.). It is evident that a pilot can hardly keep him/herself well trained, because he/she cannot simulate different configurations during a normal commercial flight. The pilot can see and experience the different states of the aircraft only in a simulator session. Unfortunately, training costs are less and less affordable in a competitive market, so airlines tend to reduce the simulator sessions available to pilots to the minimum imposed by international rules. Basically, pilots normally fly an airplane that, during an emergency, turns out to be different from what they expect.
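
A toy model may help fix the idea of this degradation, under the (greatly simplified) assumption that the active control law depends only on the number of relevant failures; the real reconfiguration logic depends on many more parameters, and the protection sets listed here are placeholders rather than actual aircraft data.

    NORMAL, ALTERNATE, DIRECT = "NORMAL LAW", "ALTERNATE LAW", "DIRECT LAW"

    # Placeholder protection sets: in normal law the envelope is fully protected,
    # in direct law the stick deflection maps directly to the control surfaces.
    PROTECTIONS = {
        NORMAL:    {"stall", "overspeed", "excessive bank"},
        ALTERNATE: {"load factor"},
        DIRECT:    set(),
    }


    def control_law(relevant_failures: int) -> str:
        """Return the active control law for a given number of relevant failures."""
        if relevant_failures == 0:
            return NORMAL
        if relevant_failures == 1:
            return ALTERNATE
        return DIRECT


    for failures in range(3):
        law = control_law(failures)
        print(f"{failures} failure(s): {law}, protections: {PROTECTIONS[law] or 'none'}")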

The second level in the Rasmussen taxonomy – rule-based action – is the conceptual layer. It indicates compliance with the rules, norms, laws and everything laid down in the official documentation. It is unproblematic to apply a given rule whenever conditions warrant it. This is the case of a limit set for a device, e.g. the maximum exhaust gas temperature (EGT) for operating an engine. When the upper limit is exceeded, something happens: red indications on the instruments, alarms, flashing lights to attract the pilot’s attention, automatic exclusion of the failed system and so forth. A machine can easily detect whether the operating conditions are normal or abnormal by matching the real values against an operating envelope. Since the pilot may forget some rules, apply them incorrectly, or fail to apply valid norms, certain functions (especially those relating to monitoring activity, which induces boredom and complacency) are assigned to automation. It is a consequence of automation, therefore, that the flight engineer is no longer required in the cockpit. Some problems were initially detected in the normal flying activity of newly designed cockpits, since two pilots were required to manage what used to be a three-pilot cockpit, with automation playing the role of a “silent crewmember”.
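
The EGT example above reduces to a simple envelope check. The sketch below is illustrative only: the limit value and the alerting actions are assumptions, not data taken from any real engine.

    EGT_MAX_C = 950.0  # hypothetical red-line limit, degrees Celsius


    def monitor_egt(egt_c: float) -> list:
        """Return the alerting actions triggered by the current EGT value."""
        if egt_c <= EGT_MAX_C:
            return []  # within the operating envelope: the panel stays dark
        return [
            "display EGT readout in red",
            "illuminate master caution/warning light",
            "sound aural alert",
            "record exceedance for maintenance",
        ]


    print(monitor_egt(920.0))   # normal: no alerts
    print(monitor_egt(975.0))   # exceedance: full alerting chain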

The third level – the knowledge-based one – includes the sound judgement of pilots in deciding if and when the given rules are applicable. This implies the notion of a complex system. Complexity is evident at every level of reality, from physics to biology, from thermodynamics to meteorology (Morin, 2001). Different conceptions of complexity emerge in the current scientific debate but, generally speaking, we may highlight some commonalities between the different theories: refusal of reductionism, different levels of system description (be it physical, biological or even man-made) according to the level of observation, emergent properties, etc. Since aviation is a complex system made up of complex subsystems such as humans, technology and the environment, it is almost impossible to govern everything in advance through rules and norms. There will always be a mismatch between the required task and the final outcome (Hollnagel, 2006). The resilience engineering approach to safety is aware of this complexity and focuses on the ambivalent role – in such a system – of the human, who simultaneously constitutes a threat and a resource in coping with unexpected events, unforeseeable situations and flawed procedures. Much of this activity, which continuously and strategically adapts the means to the goal, goes undetected either by top management or by the front-line operators themselves (the pilots). These micro-corrections are so pervasive that the person accomplishing a task fails even to realize how much he/she deviates from a given rule. The front-line operator must constantly strike a compromise between efficiency and thoroughness. Hollnagel calls this compromise the ETTO (Efficiency-Thoroughness Trade-Off) principle. The paradox emerging from the blind application of rules – the so-called “white strike” or “work-to-rule” – is that it leads to a paralysis of the entire production line (Hollnagel, 2009).

The effects of such shortcuts during normal operations are another area of concern affecting flight safety, due to the systems’ opacity, the operator’s superficial knowledge, and the uncertainties and ambiguities of the operational scenario. If we turn to the initial introduction of aircraft automation, we may detect some commonalities in the way pilots have coped with the innovations. The third level of Rasmussen, the knowledge level, is the ability to cope with a complex situation, to invent a solution from scratch, or to be creative. It is the work of an intelligent operator and at the moment it cannot be replaced by computers, or not with the same reliability. At this point one might invoke artificial intelligence, which however faces a series of objections that are hard to overcome. In fact, despite the undoubted progress of artificial intelligence, we are still working out what can be made artificial; philosophers and scientists have been striving since the beginning of ancient Greek civilization to understand what intelligence is. It is here that the human contribution enters the scene, at the third level of Rasmussen’s conception. Computers can act upon what a pre-designed program foresees. Reality is beyond our present ability to model it. The mismatch between the designers’ idea of the world and what actually happens in the operational scenario – the gap between imagination and reality – is continuously bridged by professionals. Alas, they do not realize how many times they make the system flexible with their sound judgment.

Automation should follow an evolutionary path rather than bring a revolutionary approach.

Its adoption on board aircraft does not respond to a planned purpose of enhancing safety “from scratch” in a consistent way, but rather resembles a biological organism trying to continuously adapt to the challenges posed by its environment (fly-fix-fly). This trial-and-response approach can be observed even though the innovation introduced on board generally lags a step behind the overall level achieved by the industry. In fact, one of the requirements for a certain technology to be implementable in the aviation domain is its reliability; it is preferable to have a slower yet reliable system rather than a high-speed one that has not been completely tested or tried in an operational environment. We may identify three main generations of aircraft automation systems: mechanical, electrical and electronic. In the early days of commercial flight there were no instrumental aids to help pilots fly. A piece of string was attached to the wing to indicate whether the airflow over it was sufficient to sustain flight. Later on, the first anemometers and altimeters were introduced to indicate airspeed and altitude to pilots. These were the first steps toward the “virtualization of the environment” (Ralli, 1993).

The invention of the pneumatic gyroscope (replaced, shortly after, by the electric gyroscope), used to stabilize an artificial horizon, helped pilots to understand their situation even in meteorological conditions characterized by extremely poor visibility, while at the same time preventing dangerous vestibular illusions (a false sense of equilibrium stemming from the inner ear). These simple instruments were merely capable of providing basic indications. Early signs of automation were introduced on board aircraft during the decade from 1920 to 1930, in the form of an autopilot based on a mechanical engineering concept that was designed to keep the aircraft flying straight: a very basic input to control the flight at a “skill” level. Moreover, as airplanes became bigger and bigger, it became necessary to apply some form of amplification of the pilots’ physical force, because of the airflow over large aerodynamic surfaces. Servomechanisms were introduced on board, alongside devices aimed at making the force acting on such surfaces perceptible (artificial feel load, Mach trim compensator) and at absorbing the effects of the so-called Dutch roll, an abnormal behaviour whereby the airplane yaws and oscillates in an uncoordinated manner (yaw damper). This (mechanical) innovation was the first of multiple steps that began widening the gap between the pilot’s input (action on the yoke) and the final outcome (aerodynamic movement). Instead of direct control, with the yoke mechanically attached to the ailerons, airplanes began to be constructed with a series of mechanisms intervening between the pilot’s input and the expected output. In this case, the virtualization of flight controls accompanied the parallel virtualization of flight instruments introduced by the artificial horizon (Attitude Display Indicator). At this stage, automation aided pilots mainly in their skill-based activity.

The second generation of automation included electric devices that replaced the old mechanisms: electric gyroscopes instead of pneumatic ones, and new instruments such as the VOR (Very High Frequency Omni-directional Range) to follow a track based on ground aids, the ILS (Instrumental Landing System) to follow a horizontal and a vertical path down to the runway threshold, and so forth. The 1960s saw plenty of innovations introduced on board aircraft that enhanced safety: electric autopilots, auto-throttles (to manage the power setting in order to maintain a selected speed, or a vertical speed), flight directors (used to show pilots how to maneuver to achieve a pre-selected target such as speed, path-tracking and so forth), airborne weather radars, navigation instruments, inertial platforms, but also improved alarming and warning systems capable of monitoring several parameters of engines and other equipment. Whereas the first generation of automation (mechanical) supported the pilot’s skill-based level, the second generation supported both the skill-based and rule-based levels previously assigned to pilots. The airplane systems were monitored through a growing number of parameters, and this gave rise to a new concern: the inflation of information, with hundreds of additional gauges and indicators inside the cockpit, reaching almost 600 items (Boy, 2011). At this stage, pilots used the technology in a tactical manner. In other words, their inputs to automation were immediately accessible, controllable and monitored within the space of a few seconds. For example, if the pilot wanted to follow a new heading, he would use a function provided by the autopilot: the desired heading value was selected on the glareshield panel in front of the pilot’s eyes and the intended outcome would be visible within a few seconds, as the airplane banked left or right to follow the new heading. End of the task. At this stage, automation also helped pilots at the rule-based level, since the monitoring of thousands of parameters required efficient alarming and warning systems, as well as recovery tools. The third generation of innovation involved electronics, and was mainly driven by the availability of cheap, accessible, reliable and usable technology that invaded the market, bringing the personal computer into almost every home. The electronic revolution occurring from the mid-80s also helped to shape the new generation of pilots, who had been accustomed to dealing with the pervasive presence of technology since the early years of their lives. Electronics significantly helped to diminish the clutter of instruments on board and allowed for replacing old indicators – round-dial, black and white mechanical gauges for every monitored parameter – with integrated colored displays (e.g. CRT: Cathode Ray Tube; LCD: Liquid Crystal Display) capable of providing both a synthetic and an analytic view of multiple parameters in a limited area of the cockpit. It is worth mentioning that the type of operations implied by the Flight Management System shifted from tactical to strategic. In fact, whilst in the previous, electrical automation stage pilots were accustomed to receiving immediate feedback visible shortly after the entered input, in the new version a series of data entered by the operator would show its effects hours later.
The data was no longer immediately accessible and visible; therefore this new way of operating placed greater emphasis on crew coordination, mutual cross-checking and operational discipline, not only in the flying tasks but also in the monitoring activity. The Flight Management System database contains an impressive amount of data, from navigational routes to performance capabilities and plenty of useful information that can be retrieved by pilots. Further on, we will analyze the traps hidden behind this kind of automation.

However, it is important to point out the actual discontinuity introduced by this generation of automation: the notion of an electronic ecosystem. Compared to the past, when pilots were acquainted with the inner logic of the systems they used, their basic components, and the normal or abnormal procedures for coping with operational events, in the new cockpit pilots are sometimes “out of the loop”. This forces them to change their attitude towards the job. On an airplane such as the A-320 there are almost 190 computers located in almost every area of the fuselage. They interact with each other without the pilot being aware of this interaction. Every time the pilot enters an input to obtain a desired goal (e.g. activating the hydraulic system), he/she starts a sequence in which not only the selected system is activated, but also a number (unknown to the pilot) of interactions between systems, depending on the flight phase, operational demands, the airplane’s conditions, etc. The unmanageable complexity of the electronic ecosystem is a genuine epistemological barrier for the pilot. Whereas before the pilot had thorough knowledge of the entire airplane and could strategically operate in a new and creative manner whenever circumstances required, the evolution of cockpit design and architecture has brought about a new approach to flight management that is procedural and sequential. Only actions performed in accordance with the computer logic and in the given sequence are accepted by the system. Acting as a programmer who cannot perform tasks that are not pre-planned by the computer configuration, the pilot has lost most of his/her expertise concerning the hardware part of the system (the aircraft). He/she too is constrained by the inner logic which dictates the timing of operations, even in high-tempo situations. Consequently, in the international debate on automation and the role of pilots, the latter are often referred to as “system operators”. This holds true up to a certain point but, generally speaking, it is an incorrect assessment. Recalling what has been said about the levels of operation, we may say that at the skill-based level pilots have become system operators, because flying skills are now oriented to flight management system programming. As a flight instructor once said: “We now fly with our fingers, rather than with a hands-on method”. What he meant was that the pilot now appears to be a victim of the “push-the-button” syndrome, which leads them to look for switches and devices to operate rather than govern the yoke and throttle. At the skill-based level, automatisms may be in charge for the entire flight, relegating the pilot to a monitoring role. Likewise, at the rule-based level, computers manage several tasks once accomplished by pilots, including monitoring the pressurization system, air conditioning system, pneumatic system and so forth. This statement is no longer valid when referred to the knowledge-based level. Here, the pilot cannot be replaced by any computer, no matter how sophisticated the latter is. The sound judgment of an expert pilot is the result of a series of experiences in which he/she fills the gap between procedures and reality. Since this chapter discusses automation rather than the human factor in aviation, anyone wanting to investigate concepts such as flexibility, dealing with the unexpected, robustness, etc. may refer to authors adopting a Resilience Engineering approach (Hollnagel, 2008) (Woods, Dekker, 2010).
The fact that pilots are no longer as acquainted with the airplane as in the past has led them to adopt only a procedural way of interacting with it. This is time-consuming, cognitively demanding and, above all, in some cases it may lead to missing the “big picture”, i.e. situation awareness.

This new situation introduces two major consequences: automation intimidation (ICAO, 1998) and a restriction of the tools available for coping with unexpected events. In this sense, we can compare the natural ecosystem – made up of a complex network of interactions, integrations and retro-actions that make it largely unforeseeable and unpredictable – with the electronic one. On a modern aircraft, the electronic ecosystem is mostly unknown to its user. Pilots only know which button they have to press and what the probable outcome is, and ignore what lies in between. It is a new way of operating that has pros and cons. To assess the real impact on safety, we ought to analyze the relationship between man and technology, shifting from HMI (Human Machine Interaction) to HCI (Human Computer Interaction). A further step will lead to Human Machine Engineering and/or Human Machine Design (Boy, 2011). The core of this approach is that the focus of research should be on human-centered design; in other words, the final user – with his/her mental patterns activated in real scenarios – should be regarded as the core of the entire project for a new form of automation. From an engineering perspective, it is very strange that concepts routinely used to describe the role of aircraft automation miss a basic focal point: the final user. In fact, when we talk about technology, we refer to bolt-on versus built-in systems. These expressions indicate different patterns of integration of aircraft technology. “Bolt-on” indicates the introduction of a new technology on board an airplane conceived without automation. It is a reactive mode that strives to combine the old engineering philosophy with new devices, a kind of “patchwork” which requires several local adjustments to combine new requirements with old capabilities. The expression “built-in” indicates the development of a new technology incorporated in the original project. Every function is integrated with the airplane’s systems, making every action consistent with the original philosophy of use. The paradox is that the final user was removed and left out of the original project when the new types of aircraft were conceived during the Eighties. Nowadays, after several avoidable accidents, pilots are involved in the early stages of design in order to produce a user-friendly airplane. Another aspect related to this topic is worth mentioning: not only are pilots unaware of the aircraft’s design philosophy, but so are the designers themselves. After the Concorde accident in Paris in 2000, the prosecutor called the project manager of the aircraft, the one who had conceived, planned and executed the project that made the Concorde fly. He knew every single part, he knew every system, and he knew who had designed each subsystem, where and when. At the time, it was possible to master a project. Today, most airplanes are conceived, designed, built and certified in different places, by different designers, with multiple working groups that have a restricted view of the overall project. This also implies side-effect problems regarding liability in case of malfunction or accident. Who is responsible, accountable and liable for an accident induced by automation?

Why automation?

Two main reasons led to the decision to adopt aircraft automation: the elimination of human error and economic considerations. The first element stems from the general view whereby human performance is regarded as a threat to safety. Such a topic would require a chapter in itself, so it is more appropriate to briefly mention some references for students eager to investigate the topic thoroughly. The second element is easier to tackle, since we can even quantify the real savings related to, say, lower fuel consumption. According to IATA estimates, “a one percent reduction in fuel consumption translates into annual savings amounting to 100,000,000 dollars a year for IATA carriers of a particular State” (ICAO, 1998). Aside from fuel, the evolution of aircraft technology over the years has led to a dramatic improvement in safety, operational costs, workload reduction, job satisfaction, and so forth. The introduction of the glass cockpit concept allows airlines to reduce maintenance and overhauling costs, improve operational capabilities and ensure higher flexibility in pilot training.

a. Fuel consumption. A crucial item in an airline’s balance sheet is the fuel cost. Saving on fuel is vital to remain competitive on the market. The introduction of the “fly-by-wire” concept helps to reduce fuel consumption in at least three areas: weight, balance and data predictions.

1. The fly-by-wire concept has brought a tangible innovation. Inputs coming from the pilots’ control stick are no longer conveyed via cables and rods directly to the aerodynamic surfaces. In fact, the side-stick (or other devices designed to meet pilots’ demands regarding a conventional yoke) provides input to a computer which – via optic fibers – sends a message to another computer placed near the ailerons or stabilizer. This computer provides input to a servo-mechanism to move the surfaces. Therefore, there is no longer any need for steel cables running through the fuselage and other weighty devices such as rods, wheels, etc. This also significantly reduces the aircraft’s weight and improves fuel consumption, since less power is required to generate the required lift.

2. The second area that contributes to saving fuel is aircraft balance. The aircraft must be balanced to maintain longitudinal stability (pitch axis in equilibrium). This equilibrium may be stable, unstable or neutral. In a stable aircraft, the weight is concentrated in front of the mean aerodynamic chord. Basically, this means that the stabilizer (the tail) should “push down” (or, technically, induce de-lifting) to compensate for the wing movements. In an unstable equilibrium, the balance of weight is shifted noticeably backwards compared to a stable aircraft; in other words, the stabilizer should generate lift to compensate for the wing movements. Where does the problem lie with unstable aircraft? A stable aircraft tends to return to its original state of equilibrium after deviating from it, but is less maneuverable, since the excursion of the stabilizer is narrower. On the contrary, an unstable equilibrium causes oscillations of increasingly greater magnitude as the aircraft deviates from the initial point. This makes the aircraft more maneuverable, but unstable. In practical terms, in these kinds of airplanes pilots are required to make continuous corrections in order to keep the aircraft steady. This is why the computer was introduced: to stabilize the airplane with continuous micro-corrections. This significantly reduces the pilots’ workload in flying smoothly. Moreover, due to the distribution of weight concentrated around the mean aerodynamic chord, an unstable aircraft consumes less fuel.

3. The third factor helping pilots is a database capable of computing in real time any variation to the flight plan, either on the horizontal path (alternative routes, shortcuts, mileage calculations, etc.) or on the vertical profile (optimum altitude, top of descent to manage a low-drag approach, best-consumption speed, and so forth). This enhances the crew’s decision-making task in choosing the best option in order to save fuel.
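
As an illustration of the kind of vertical-profile prediction mentioned in point 3, the sketch below estimates a top-of-descent point with the common “3-to-1” rule of thumb (roughly 3 NM of track distance per 1,000 ft of altitude to lose). A real flight management system uses full performance models, wind and speed schedules; this is only a back-of-the-envelope approximation.

    def top_of_descent_nm(cruise_alt_ft: float, target_alt_ft: float) -> float:
        """Distance before the fix at which an idle descent should begin (3-to-1 rule)."""
        altitude_to_lose_ft = max(0.0, cruise_alt_ft - target_alt_ft)
        return 3.0 * altitude_to_lose_ft / 1000.0


    # Descending from FL370 to a 3,000 ft initial approach altitude:
    print(f"Start descent about {top_of_descent_nm(37000, 3000):.0f} NM out")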

b. Maintenance costs. The glass cockpit concept enables airlines to reduce maintenance and overhauling costs. In conventional airplanes, every instrument had its own box and spare part in the hangar. Whenever a malfunction was reported by the crew, maintenance personnel on the ground fixed it by replacing the apparatus or swapping the devices. All these actions required a new component for every instrument. If we consider that an airliner has roughly one million spare parts, we can easily understand the economic breakthrough offered by the glass cockpit concept. In these airplanes, a single computer gives inputs to several displays or instruments. The maintenance approach is to change a single computer rather than every component or actuator. With this operating method, few spare parts are required in the hangar: no more altimeters, no more speed indicators, no more navigation displays (often supplied by different manufacturers). Moreover, the training of maintenance personnel is simplified, as it focuses on a few items only, which in turn allows for increased personnel specialization.

c. Selection and training costs. The fast pace of growth of the airline industry over the last decades has generated concern about the replacement of older pilots, since training centers cannot provide the necessary output for airline requirements. Hiring pilots from a limited base of skilled workers creates a bottleneck in the industrial supply of such an essential organizational factor as pilots are for an airline. Automation has facilitated the hiring of new pilots, since basic flying skills are no longer a crucial item to be verified in the initial phase of a pilot’s career. If, barely thirty years ago, it would not have been sufficient “to have walked on the moon to be hired by a major airline”, as an experienced pilot ironically put it, nowadays the number of would-be pilots has increased exponentially. In the current industrial philosophy, almost everyone would be able to fly a large airliner safely after a short amount of training. This phenomenon gives rise to new and urgent problems, as we shall see. Besides the selection advantages, broadening the potential pilot base enables airlines to save money on the recurrent training of pilots or to reduce transition costs. Indeed, once the manufacturer sets up a standardized cockpit display, the latter is then applied to a series of aircraft. If we look at the Airbus series comprising the A-319, A-320, A-321, A-330, A-340, etc., we realize how easy it is to switch from one airplane to another. In this case, transition course costs are significantly lower, since pilots require fewer lessons; indeed, aside from certain specific details, the only differences involve performance (take-off weight, cruise speed, landing distances, etc.). Since pilot training costs make up a considerable portion of an airline’s budget, it is financially convenient to purchase a uniform fleet made up of the same “family” of airplanes. Often, the regulations enable airlines to use a pilot on more than one airplane belonging to a “family”. Consequently, this leads to shorter transition courses, operational flexibility as pilots get to fly several aircraft types at a time and, in the long run, better standardization among pilots. Some authors have pointed out how automation has redefined the need for different training processes and crew interaction (Dekker, 2000).

d. Operational flexibility. A pilot flying with no aids at all, be they mechanical, electrical or electronic, is limited in many ways. He/she must fly at low altitude, because of physiological limits (hypoxia); he/she cannot fly too fast, since the effort on the yoke would exceed his/her physical strength; he/she cannot even fly in bad weather (clouds or poor visibility), since he/she must maintain visual contact with the ground. Automation allows such limitations to be overcome. Higher flight levels also mean lower fuel consumption and the possibility of flying clear of clouds. Faster speeds allow for reaching the destination earlier and completing multiple flights a day. Aircraft instruments enable pilots to achieve better performance. Let’s imagine an approach in low visibility conditions. In the beginning, pilots would rely on a Non-Directional Beacon (NDB) as an aid to find the final track to land on the runway. Safety measures such as the operating minima were implemented: during a final landing approach, the pilot had to identify the runway before reaching a certain altitude (landing minima). Obviously, as instruments became more reliable, the landing minima were lowered. Subsequently, the introduction of the VOR (Very High Frequency Omnidirectional Range), which provided a more accurate signal, allowed for performing an approach closer to the runway and at a lower “decision altitude”, as the safety margins were still assured. When the ILS (Instrumental Landing System) was introduced on board, pilots could also rely on vertical profile indications. This implied higher safety margins, which led the regulatory bodies to once again lower the landing minima. As the ILS became increasingly precise and accurate, the landing minima were lowered until they reached the value of zero. This means that an airplane can touch down in visibility so low that pilots must rely on the autopilot to perform such an approach. This is due to human limitations such as visual and vestibular illusions (white-out, wall of fog, duck-under, etc.) that may impair the pilot’s performance. Let’s imagine an approach during the ’50s: with a visibility of 1,000 metres at the destination airport, the airplane would have had to divert to another airport, because the pilots would not have been able to attain the airport visual references at the decision altitude (landing minima). Nowadays, the same airport can operate with 100 metres visibility, due to improved transmitting apparatus on the ground and flight automation that enhances pilot performance. A problem evidenced by several authors concerns the shift in responsibility of the people managing the automation system. With automation delivering performance that exceeds human capabilities, a failure in the automatic system places pilots in a situation in which they have no resources available for coping with the unexpected event. In one case, in which the pilots were performing a low visibility approach, the automatic system went out of control at very low altitude, causing the airplane to pitch down and hit hard on the runway. Experts acknowledge that recovery by the pilots in that case was beyond reasonable intervention, that is to say “impossible” (Dismukes, Berman, Loukopoulos, 2008). Indeed, the time available for detecting, understanding and intervening was so short that it was virtually impossible to correct the airplane’s faulty behaviour. Who is responsible in such a case?
Is it correct to refer to the pilots’ mismanagement, poor skill, or untimely reaction when they are operating outside their safety boundaries? If the operations are conducted beyond human capabilities we should also review the concept of responsibility when coping with automation. As Boy and Grote put it: “Human control over technical systems, including transparency, predictability, and sufficient means of influencing the systems, is considered to be the main prerequisite for responsibility and accountability” (Boy, 2011).

Different cockpit presentation.

In recent years, cockpit design has undergone something of a revolution. Older aircraft were designed to satisfy pilots’ needs and relied upon slight modifications of the pilots’ previous panel scanning pattern. A common standardized cockpit layout emerged in the early ’50s: the T-model. It included the basic instruments: the artificial horizon (top centre), the anemometer (speed indicator) on the left, the altimeter on the right and the compass (indicating heading and track) in the bottom-centre position. Two further instruments, the side-slip indicator and the vertical speed indicator, were added later on. Figure 2 shows the classic T-model. The pilot was accustomed to perceiving the overall configuration at a glance, disregarding the precise indications.

Figure 2. Classic T-model of cockpit display.

With the introduction of the glass cockpit, the traditional flight instrument display was replaced by a different presentation encompassing more information, greater flexibility, and colour coding and marking; at the same time, it could lead to information overload, as several parameters are displayed in a compact area. A single display, known as the PFD (Primary Flight Display), includes multiple pieces of information, not only the basic parameters but also navigation functions, approach facilities, automatic flight feedback, flight mode awareness, and so forth. Since the pilot needs to cross-check a great number of instruments at a glance, the design of traditional instruments had been developed according to a pattern of attention, from the more important information to the more marginal. Each instrument had its own case and its functional dynamics were perfectly known by the pilot; it was easy to detect, easy to use during normal operations and easy to handle in case of failure. Switches were positioned on board according to a pattern of use, and their positions and shapes were instantly recognizable by touch. It was common for pilots to perform the checklist according to the “touch and feel” principle, confirming that a switch was in the required ON/OFF configuration by touch. Almost every switch had a distinctive shape: rough and big for operative ones, smooth and small for less important switches. A form of training implemented in most flying schools was the “blind panel” exercise, which consisted in blindfolding the trainee and asking him/her to activate the switch prompted by the instructor. Being familiar with the physical ergonomics of the cockpit proved very helpful: the pilot knew how much strength was required to activate a command, how far the arm had to be stretched to reach a knob, and so forth. This exercise enhanced the pilot’s skill, i.e. the skill level of operation (according to Rasmussen’s Skill-Rule-Knowledge paradigm). With the new cockpit configuration (dark panel), another kind of complacency and lack of skill arises regarding the activation and deactivation of systems. Since during normal operations the overhead panel (containing most of the systems’ control panels) works automatically, pilots do not touch it, so they are not familiar with the particular uses that may arise during abnormal or emergency situations. For example, in older aircraft the pressurization system was handled by the pilot not flying (who talked with ATC, read the checklist and managed the systems). The management of this system was quite simple, but it required continuous attention and practice. In highly automated airplanes, the pressurization system is not even touched during normal operations. When something happens to it, the pilot should not only monitor and cross-check the cabin altitude indications, but also act on the switches, trimming the equivalent altitude. It is an extremely remote hypothesis, but it nevertheless absorbs one pilot’s attention completely for the remainder of the flight, posing a major threat to crew integration.

In conventional aircraft, input to the flight controls was assured via control wheels and sticks, cables and rods, enabling the pilot to act directly on the aerodynamic surfaces such as the ailerons, rudder and stabilizer. With regard to displays, the traditional instrument configuration – made up of drum pointers, single instrument boxes and a “touch and feel” overhead panel – was replaced by a new approach to the information available to the pilot. The “touch and feel” philosophy was replaced by the “dark cockpit” approach: a dark, flat overhead panel was adopted to show the pilot that everything was OK. Failures or anomalies were indicated by an illuminated button, recalled by a master caution/warning light just in front of the pilots’ eyes. In the unlikely (but not impossible) situation of smoke in the cockpit severely impairing the pilots’ sight (due to the concentration of a thick layer of dense, white fog), pilots of conventional airplanes could retrieve the intended system configuration “by heart”, by detecting the switch position with a fingertip. Vice versa, let’s imagine a dark panel with smoke on board: it is truly challenging to spot where to place your hand and, above all, to understand what the system’s feedback is, since the system configuration is not given by the switch position but by an ON/OFF light, which is often undetectable.

Ergonomics

The problem with innovation, especially in a safety-critical context such as aviation, lies in the interaction between a community of practice and a new concept conceived and implemented by engineers. This implies a relationship between automation and ergonomics. Ergonomics is a word deriving from the Greek ergon (work) and nomos (law), and it is a field of study aimed at improving working conditions so as to guarantee optimal adaptation between the worker and his/her environment. We may identify three main types of ergonomics: physical, cognitive and social (or organizational). Initially, with the onset of the first generation of automation (mechanical), ergonomists studied physical ergonomics, namely: how to reach a control, how much force is required to operate a lever, the visibility of displayed information, seat position design, and so forth. For example, several accidents occurred due to misuse of the flap lever and the landing gear lever. These were positioned near one another and had similar shapes, leading pilots to mistake one for the other (Koonce, 2002). The ergonomists found a straightforward and brilliant solution: they attached a little round rubber wheel to the tip of the landing gear lever, and a wing-shaped plastic cover to the flap lever. Moreover, the two levers were separated in order to avoid any further misuse. In the next generation of automation, namely electrical automation, ergonomists aimed to improve cockpit design standardization. A pilot had difficulty forming a mental image of the various levers, knobs and displays, as every manufacturer arranged the cockpit according to different criteria. Dekker has clearly illustrated a case of poor standardization: the position of the propeller, throttle and carburetor controls differed from one airplane to another, even when they were built by the same manufacturer (Dekker, 2006).

Pilots transitioning to other aircraft struggled to remember the position of every lever, since the levers were adjacent to one another and the possibility of mistaking them was very high. Moreover, during the training process there was a risk of so-called "negative transfer", that is, the incorrect application of a procedure no longer suited to the new context. For example, the switches on Boeing aircraft are activated top-down, while Airbus has chosen the opposite, bottom-up convention. A pilot transitioning from a Boeing 737 to an Airbus A-320 may quickly learn how to switch on the systems, but under stress, fast-paced rhythm and high workload, he/she may revert to old habits and apply them incorrectly. Social ergonomics tries to eliminate such difficulties. Cognitive ergonomics studies the adaptability of the technology to the mental patterns of the operator. The third generation of automation introduced on board aircraft gave rise to many concerns about whether the instrument logic was suitable for correct use by pilots. From a cognitive perspective, the new system should be designed according to the user's needs, while bearing in mind that pilot-friendly is not equivalent to user-friendly (Chialastri, 2010). A generic user is not supposed to fly, as some engineers erroneously tend to believe. Pilots belong to a professional community that shares a mental pattern when coping with critical operational situations; their modus operandi is the result of a lifelong in-flight experience. A professional community such as the pilot community is target-oriented, and aims at the final goal rather than observing all the required procedural tasks step by step. This is why we talk about "shortcuts", "heuristics", and so forth. As an example, consider the introduction of an airplane with variable-geometry wings: the F-111. It was conceived by the designers with a lever in the cockpit to modulate the wings from straight (useful at low speed) to swept (to fly at high speed). In the mind of the designer (who is not a pilot), it seemed perfectly sensible to associate the forward movement of the lever in the cockpit with the forward movement of the wings: lever forward – straight wings, lever back – swept wings. When the airplane was introduced into flight operations, something "strange" occurred. Pilots associated the forward movement with the concept of speed. So, when they wanted high speed, they pushed the thrust levers forward, the yoke down to dive and, consequently, the wing lever forward (incorrectly so). For a thorough analysis of this case, see Dekker (2006). All this occurred due to the confusion between generic users and specialized users, as pilots are. As Parasumaran puts it: "Automation does not simply supplant human activity but rather changes it, often in ways unintended and unanticipated by the designers of automation and, as a result, poses new coordination demands on the human operator" (Parasumaran, Sheridan, 2000).

Automation surprise

The interaction between pilots and on-board technologies raises concerns regarding an acknowledged problem: the automation surprise. This occurs when pilots no longer know what the system is doing, why it is doing it, and what it will do next. It is an awful sensation for a pilot to feel that he/she is lagging behind the airplane since, ever since the early stages of flight school, every pilot is taught that he/she should be "five minutes ahead of the airplane" in order to manage the rapid changes occurring during the final part of the flight. When approaching the runway, the airplane's configuration demands a higher workload, the communication flow with air traffic controllers increases and the proximity to the ground absorbs much of the pilot's attention. As long as these fast-paced activities are handled with the aid of automation, the pilot may shed some of the workload. When automation fails or behaves in a "strange" manner, however, the workload increases exponentially. This alone is only a matter of the effort required to cope with automation, and could be addressed with additional training. A more complicated issue – arising from the investigation of some accidents – is the difference between two concepts, namely "situational surprise" and "fundamental surprise" (Wears, 2011).

To summarize, we may refer to the following example from everyday life. We are driving on the highway and suddenly we hear a loud bang from the lower left part of our car. We are surprised, but after three seconds we realize that a tyre has burst, so we steer towards the edge of the highway, braking to reduce our speed and watching the behaviour of the cars behind us. This is a situational surprise. Let us now think of the same situation, with a single difference: while driving, our car starts flying. For the first few seconds we are astonished. We do not speak, trying to figure out what on earth is going on. We literally do not know what to do or which resources we have to cope with this incredible situation. We cannot even plan an alternative and, even if we could, we could not communicate it. Obviously, the reader may object that "cars don't fly". On the other hand, we may say that "planes don't stall" (actually, they do, as we shall soon show, but many pilots assume they don't). Translating these concepts into the pilot's operational life, we may say that a situational surprise arises when something occurs in a pre-defined context in which the pilots know how to manage the situation. Such a situation is unexpected, being an abnormal condition, but nonetheless remains within the realm of the pilot's knowledge. Moreover, it returns to being a manageable condition once the pilot acknowledges the difference between a normal and an abnormal condition, by applying the relevant abnormal procedure. After the initial surprise due to the unexpected event, the pilot returns to a familiar pattern of operation acquired through training. A so-called fundamental surprise is a very different circumstance: it implies a thorough re-evaluation of the situation starting from the basic assumptions. It is an entirely new situation for which there is no quick-fix "recipe", and it is difficult to assess because it is unexampled. The pilots literally do not know what to do because they lack any cognitive map of the given situation. This concept is well summarized by the findings following accidents involving new-generation airplanes. When analyzing the black box after an accident, it is not rare to hear comments such as "I don't know what's happening" – something of a pilot's nightmare. It should be noted that some failures occurring on board highly automated aircraft are transient and are very difficult to reproduce, or even diagnose, once on the ground. Even the manufacturers' statements regarding electrical failures are telling: a failure may produce different effects in each case, or appear in different configurations depending on the flight phase, airplane configuration, speed and so forth. Sometimes an electrical transient (possibly influenced by portable electronic devices activated by unaware passengers) can trigger certain airplane reactions that leave no trace of the root cause.

Automation issues

Although automation has improved the overall safety level in aviation, some problems tend to emerge from the new way of operating. A special commission was set up in 1985 by the Society of Automotive Engineers to determine the pros and cons arising from flight deck automation. Nine categories were identified: situation awareness, automation complacency, automation intimidation, captain's command authority, crew interface design, pilot selection, training and procedures, and the role of pilots in automated aircraft (ICAO, 1998). Sometimes pilots lose situation awareness because they lag behind the automation logic. A pilot should always know what the automation system is going to do in the next five minutes, in order either to detect anomalies or to take control of the airplane. In fast-paced situations – such as crowded skies, proximity to the terrain and instrument procedures carried out in poor visibility – a high number of parameters must be monitored. As the workload increases, the pilot tends to delegate a series of functions to the automation system in order to minimize the workload. It is important to specify that the physical workload (the number of actions performed within a given time frame) differs from the cognitive workload, in that the latter implies thorough monitoring, understanding and evaluation of the data coming from the automation system. The paradox of airplane automation is that it works as an amplifier: with low workloads, it can lead to complacency ("let the automatic systems do it") or to boredom that reduces alertness and awareness. Plainly said: "when good, better; when bad, worse". Poor user interface design is another issue. Norman, Billings and others have studied the way humans interact with automation and have developed cognitive ergonomics.

There are several studies concerning the optimal presentation (displays, alarms, indicators, etc.) for pilot use. For example, old displays were designed in such a way that each single instrument provided data for a specific domain only: the anemometer for speed, the altimeter for altitude and flight levels, and so forth. With integrated instruments, all the required data is available at a glance in a single instrument, which provides not only colour-coded information but also on-demand information by means of pop-up features, so that it can be concealed during normal operations and recalled when needed. Several issues are related to displays, but only two are addressed here: the cognitive mapping of instruments and human limitations in applying colour codes to perception. We have to consider that the replacement of round-dial, black-and-white instruments, such as the old anemometer, with colored multifunction CRT (Cathode Ray Tube) instruments has determined a new approach by pilots in mentally analyzing speed data. Indeed, with round-dial indicators, pilots knew at a glance in which speed area they were flying: danger area, safe operations or limit speed. It is not so important to know the exact speed: according to a Gestalt theory concept, we need to see the forest rather than every single tree.

Furthermore, from a cognitive perspective, we tend to perceive the configuration as a multiple instrument: for example, during the approach phase the anemometer needle will point to 6 o'clock, the vertical speed indicator to 8 o'clock and so forth. In the new speed indicator, the values appear on a vertical speed tape rolling up and down, with the actual speed magnified for greater clarity. Nevertheless, the real perception of the speed range in which the pilots are flying is somewhat impaired by this kind of indication. For example, a dangerous situation in flight may arise whenever the airplane approaches its maximum operating altitude: as the altitude increases, the low-speed and high-speed limits tend to close in towards a single value (the so-called coffin corner). Since the speed window is limited to 20 kt above and 20 kt below the actual speed, the pilots may think that they have a greater margin over the stall than they actually have. Look at the following picture (Figure 3).

Figure 3 The acceptable speed shown by the speed-tape indicator

The acceptable speed range is the black band, while the amber and red bands represent the no-fly zones (high-speed buffet and low-speed buffet). If we compare the acceptable area with the no-fly zones, we apparently have 50% of the field as margin over the flight boundaries. Actually, if we translate those forty knots onto a full round dial, we realize that the acceptable speed range shrinks to about 10% of the selectable speed. A safety concern may arise whenever the pilots accept to fly at an altitude where the low-speed buffet is very close to the high-speed buffet (the so-called coffin corner). In this case an increase in wind, unexpected moderate turbulence or thunderstorm activity below (which induces convective movements) may cause an oscillation in speed, triggering consequences. In fact, in highly automated aircraft some protections (such as the high-speed or low-speed protection) activate automatically, with unpleasant consequences: the airplane may pitch down to avoid a loss of speed, or it may climb to limit an increase in speed. In any case, these are upsetting conditions for a pilot flying at cruising altitude. Additional indicators are also displayed on the speed tape, such as the minimum speed, the flap retraction speed and the maximum speed for each configuration. In many ways, this system facilitates the pilot's task. Problems arise whenever failures occur (electrical failure, unreliable indication, etc.), as the markings disappear.
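
To make these proportions concrete, here is a minimal sketch in Python. The 80-kt tape field, the 40-kt flyable band and the 400-kt full-scale dial are assumed values, chosen only to reproduce the 50% versus 10% contrast described above; they are not taken from any specific instrument.

```python
# Rough illustration of the proportions discussed above; all values are assumptions.

def share_of_display(flyable_band_kt: float, display_span_kt: float) -> float:
    """Fraction of the visible display occupied by the acceptable (black) speed band."""
    return flyable_band_kt / display_span_kt

if __name__ == "__main__":
    flyable_band = 40.0  # kt between low-speed and high-speed buffet near the coffin corner
    tape_field = 80.0    # kt spanned by the speed tape window (assumed)
    round_dial = 400.0   # kt spanned by a hypothetical full round dial (assumed)

    print(f"Speed tape: {share_of_display(flyable_band, tape_field):.0%} of the display looks flyable")
    print(f"Round dial: {share_of_display(flyable_band, round_dial):.0%} of the display looks flyable")
```

The same forty knots that fill half of the tape occupy only a tenth of the dial, which is precisely the distortion of the perceived margin discussed above.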

The situation then becomes worse than with the old indicators, as pilots need to create a mental image of the speed field in which they are flying. For example, it is unimportant for us to know whether we are flying at 245 or 247 knots, but it is important to know that we are flying at 145 knots rather than 245 knots. This means that the mental mapping of the speed indication involves a higher workload, and is both time-consuming and energy-consuming. Let us take a look at these two different speed indicators to grasp the concept. The second issue is related to human performance and limitations. The human eye detects colours, shapes, light and movement. There are two kinds of receptors in the retina: the cones and the rods. The cones are responsible for detecting shapes and colours, in an area roughly the size of a one-cent coin, and account for focused vision. The rods are responsible for peripheral vision and are able to detect movement and light. This means that colour-coded information should appear in an area just in front of the pilot's eyes, as only the cones can detect such information. Since an input appearing to the side is detected by the rods, which are insensitive to colour and shape, the pilot will not notice it unless he/she looks at it directly. Other relevant cues for pilots are symmetry and context. The needle in a round-dial instrument satisfies both these requirements, since we may easily detect which field of operation we are currently in (near the upper limit, in the middle, near the lower margin), in addition to the trend (fast-moving, erratic, gradual) and the symmetry with nearby parameters. A digital indicator shows this information in an alternative way, by arranging numbers on a digital display, as can be clearly seen in the picture above: the configuration on the left can be grasped at first glance, while the second set of information requires a considerable effort to detect differences.

Disrupted symmetry of digital information

Another area worthy of attention is the greater magnitude of the errors made by pilots when flying highly automated airplanes. Pilots flying older aircraft normally relied on their ability to obtain the required data using heuristics and "rule of thumb" methods. This approach was not very precise, but it was roughly accurate. Nowadays, with the all-round presence of computers on board, flight data can be processed very precisely but with the risk of gross errors due to inaccurate entries made by pilots. An example taken from everyday life may help to explain the situation: when setting an older alarm clock, the user's error might have been limited to a few minutes, whereas new digital alarm clocks are very accurate but subject to gross errors – for example, 8 PM could be mistaken for 8 AM. Lower situation awareness means that pilots no longer have the big picture of the available data. The aviation truism "trash in, trash out" applies to this new technology as well. Having lost the habit of looking for the "frame", namely the context in which the automation system's computed data should appear, pilots risk losing the big picture and, eventually, situation awareness. Unsurprisingly, there have been accidents caused by inconsistent data provided by the computer and uncritically accepted by pilots. This is neither a strange nor an uncommon occurrence: fatigue, distraction and heavy workload may cause pilots to lower their attention threshold. In one case, a pilot on a long-haul flight entered the available parameters into the computer to obtain the take-off performance (maximum weight allowed by the runway tables, speeds and flap setting). He mistook the take-off weight for another figure and entered the wrong parameters into the computer.

The end result was a wrong take-off weight, wrong speeds and a wrong flap setting. The aircraft eventually overran the runway and crashed a few hundred meters beyond the airport fences. Another issue linked to the introduction of aircraft automation is a weakening of the hierarchy, implied by the different way of using the automation system. The captain is the person with the greatest responsibility for the flight, and this implies a hierarchical order establishing who has the final word on board. This hierarchical order is also accompanied by functional task sharing, whereby on every flight there is a pilot flying and a pilot not flying (or pilot monitoring). Airplanes with a high level of automation require a task sharing in which the pilot flying has a great degree of autonomy in programming the Flight Management System and in deciding the intended flight path and type of approach. Above all, the pilot flying also determines the timing of the collaboration offered by the pilot not flying, even in an emergency situation. In fact, since the crew must act in a procedural way to fulfil the demands of the automation system, both pilots must cooperate in a more horizontal way than in the past. The hierarchical relationship between captain and co-pilot is known as the "trans-cockpit authority gradient" (Hawkins, 1987) and, from a human factors perspective, should be neither too flat nor too steep.
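
Returning to the data-entry example above, the kind of gross-error trap it describes can be illustrated with a minimal sanity check on the entered take-off weight. The function and its limits below are hypothetical and do not reproduce any airline's or manufacturer's actual procedure; the point is simply that an entry falling outside plausible bounds should force a re-check before speeds and flap settings are computed.

```python
# Minimal sketch with hypothetical limits; not an actual airline or FMS procedure.

def check_takeoff_weight(entered_kg: float,
                         operating_empty_kg: float = 170_000,
                         max_takeoff_kg: float = 351_000) -> float:
    """Return the entered weight if plausible, otherwise raise to force a re-entry."""
    if not (operating_empty_kg <= entered_kg <= max_takeoff_kg):
        raise ValueError(
            f"Entered take-off weight {entered_kg} kg is outside the plausible range "
            f"[{operating_empty_kg}, {max_takeoff_kg}] kg - re-check the load sheet")
    return entered_kg

if __name__ == "__main__":
    try:
        check_takeoff_weight(126_000)  # e.g. a zero-fuel weight typed by mistake
    except ValueError as err:
        print(err)
```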

Case studies

Several case studies could be cited to illustrate the relationship between pilots and automation. In the recent past, accidents have occurred due to lack of mode awareness (an A-320 on Mount St. Odile), automation misuse (Delhi, 1999), loss of braking leading to loss of control (São Paulo Guarulhos, 2006), loss of control during approach (a B-737 in Amsterdam, 2008), and several other cases. From these accidents, we have chosen two peculiar cases. The first occurred in 1991 and involved an Airbus flying at cruise altitude. While the captain was in the passenger cabin, the co-pilot tried to "play" with the FMS in order to learn hands-on. He deselected some radio aids (VORs: Very High Frequency Omnidirectional Ranges) from the planned route. After the tenth VOR was deselected, the airplane depressurized, forcing the pilot to perform an emergency descent. From an engineering perspective there was a bug in the system but, more importantly, it was inconceivable for the pilot to link the VOR deselection (a navigation function) to the pressurization system. The second case is related to a B-727 performing a low-visibility approach to Denver. The flight was uneventful until the final phase. The sky was clear, but the city was covered by a layer of fog that reduced the horizontal visibility to 350 ft and the vertical visibility to 500 ft. The captain was the pilot flying, in accordance with the company regulations, and used the autopilot as specified in the operating manual. The chosen type of approach was an ILS Cat II. The ILS is a ground-based navigation aid that emits a lateral signal to guide the airplane to the centre of the runway, together with a vertical signal to guide the aircraft along a certain slope (usually 3°) so as to cross the runway threshold at a height of 50 ft. The regulations state that once the aircraft is at 100 ft, the crew must have acquired the visual references to positively identify the runway lights (or markings); if not, a go-around is mandatory. Once the captain has identified the runway lights, he must land manually, switching off the autopilot at very low altitude. There are certain risks associated with this maneuver since, with impaired visibility, the pilot loses the horizon line, depth perception and the sensation of vertical speed, and could be subject to spatial disorientation due to the so-called "white-out" effect. In this case, however, nothing similar occurred. The autopilot duly followed the ILS signals but, due to a spurious signal received in the last 200 ft, the aircraft pitched down abruptly. The captain tried to take over the controls, though unsuccessfully. He later reported, during the investigation, that the window was suddenly "full of lights", meaning that the aircraft had assumed a very nose-down pitch attitude. The high rate of descent, coupled with the surprise factor and the low visibility, did not allow the captain to recover from such a degraded situation: the airplane touched down so hard that it veered off the runway, irreversibly damaging the fuselage. This case emphasizes the relationship between pilots and liability: operating beyond human capabilities, the pilot finds himself/herself in a no-man's land where he/she is held responsible even when he/she cannot sensibly regain control of the airplane. As stated by Dismukes and Berman, it is hardly possible to recover from such a situation.
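
The geometry of the 3° glide slope mentioned above can be sketched with a little trigonometry. The position of the glide-path aiming point in the sketch is inferred only from the 50-ft threshold-crossing height quoted in the text, so it is an illustrative assumption rather than a published figure for any particular runway.

```python
# Back-of-the-envelope geometry of a 3-degree ILS glide slope.

import math

GLIDE_SLOPE_DEG = 3.0

def distance_from_aiming_point_ft(height_ft: float) -> float:
    """Horizontal distance at which the glide slope passes through a given height."""
    return height_ft / math.tan(math.radians(GLIDE_SLOPE_DEG))

if __name__ == "__main__":
    # Crossing the threshold at ~50 ft implies an aiming point roughly 950 ft beyond it.
    print(f"Threshold to aiming point: ~{distance_from_aiming_point_ft(50):.0f} ft")
    # At the 100 ft decision height the aircraft is only ~1,900 ft from that point,
    # i.e. a handful of seconds at approach speed.
    print(f"Distance at 100 ft decision height: ~{distance_from_aiming_point_ft(100):.0f} ft")
```

This is why the transition from autopilot to manual flight at such heights leaves so little time to react to an anomaly.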

The Air France 447 crash

Few accidents lead us to reflect on the role of automation more than the AF 447 case. This flight, operated in June 2009 from Rio de Janeiro to Paris, disappeared from the radar screens while over the Atlantic Ocean. Different causes were identified to explain why the pilots lost control for about three and a half minutes, descending from their cruise level (around 35,000 ft) to the surface of the ocean. Before making any hypothesis, it was necessary to recover the wreckage and read the Flight Data Recorder in conjunction with the Cockpit Voice Recorder. In the history of accidents, loss of control from cruise altitude is very rare, and investigators wondered how it had happened, trying to figure out the real dynamics of the accident. From the flight data recorder and the cockpit voice recorder it emerged that, after reaching the cruise level, the Captain left the cockpit, leaving the two first officers in the flight deck. Before leaving his seat, he recommended that the first officers not deviate excessively from the assigned route, in order to save fuel. The ITCZ (Inter-Tropical Convergence Zone), a very challenging meteorological area with thunderstorms, lightning, turbulence and ice formation, lay ahead, and many airplanes on the same route deviated significantly to the north. The fuel problem was due to the high airplane weight at take-off: in order to avoid a commercial setback (offloading cargo and/or passengers), the Captain had decided to uplift the minimum fuel required to reach Paris, which was also the maximum possible given the actual conditions. On the other hand, flying with minimum fuel leaves few options to deviate from the programmed route: large deviations would have meant an extra landing in Bordeaux to refuel before reaching Paris. From a commercial point of view this is a "sacrificing decision"; from a safety point of view, and with hindsight, it was a trade-off that eroded the safety margins. Following the Captain's instructions, the first officers in the flight deck did not deviate as other airplanes did. They encountered phenomena such as St. Elmo's fire (electrostatic discharges on the windshield), which unsettled the younger co-pilot. Another phenomenon, unknown to the pilots, was the accumulation of ice on the pitot tubes, the probes that allow the air data computers to derive the airspeed from the difference between dynamic and static pressure.
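
The dependence of the computed speed on that pressure difference can be sketched as follows. This is a simplified, incompressible approximation at an assumed sea-level air density; real air data computers also correct for compressibility and altitude, so the numbers are purely illustrative.

```python
# Simplified illustration of how airspeed follows from dynamic pressure (q = 0.5 * rho * v^2).

import math

RHO = 1.225  # kg/m^3, standard sea-level air density (assumed for simplicity)

def indicated_airspeed_ms(total_pressure_pa: float, static_pressure_pa: float) -> float:
    """Incompressible approximation: v = sqrt(2 * (p_total - p_static) / rho)."""
    q = max(total_pressure_pa - static_pressure_pa, 0.0)
    return math.sqrt(2.0 * q / RHO)

if __name__ == "__main__":
    # A healthy pitot probe sensing ~6,000 Pa of dynamic pressure reads about 99 m/s (~192 kt).
    print(round(indicated_airspeed_ms(107_325, 101_325), 1))
    # An iced-over probe sees almost no dynamic pressure, so the computed speed collapses.
    print(round(indicated_airspeed_ms(101_500, 101_325), 1))
```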

When ice accumulates on these probes, the speed indication becomes unreliable, and several problems follow. First, there is no warning flag, no alerting light, nor any chime reminding the pilots that what they see is not valid. In everyday terms, it is the difference between being offered a 30-euro banknote (which does not exist) and a 500-euro banknote (which exists, but could be counterfeit). Pilots must rely upon their knowledge and expertise to detect an unreliable indication, be it altitude, speed or attitude. As soon as they detected a "strange" speed, the pilot flying pulled back on his side-stick. The aircraft entered a steep climb at high altitude, gaining almost 3,000 ft, while the actual speed decreased towards the stall speed.

The Airbus A-330 has different flight control laws that normally protect against excessive or insufficient speed throughout the flight envelope. When a failure is detected, the flight laws reconfigure into degraded modes, the so-called alternate and direct laws, in which the airplane may stall. That is what happened: with the speed sources lost, the airplane reverted to alternate law, which lacks protection against the low-speed stall. When the co-pilot pulled back on his side-stick, making the airplane lose energy, the speed decreased below the minimum needed to sustain flight and the angle of attack increased beyond the maximum allowable. From then on, the pilots lost situation awareness, entering a descent that lasted three minutes and forty seconds, with no way out. The startle effect is a phenomenon well known to human factors experts: it is a fundamental surprise that leaves the pilots astonished, unable to understand, plan and check what is going on. The airplane behaved in a way that was incomprehensible to them. The "STALL" alarm sounded at least seventy times during the fatal descent, yet nobody in the cockpit acknowledged that they were in a low-speed situation leading to a stall. Or, rather, the two pilots had two different pictures of the situation but did not verbalize their perceptions. The more experienced first officer, seated on the left side, thought they were in a low-speed situation, while the younger co-pilot thought they were in a high-speed situation; in fact, he pulled back on his side-stick in order to reduce the speed. Or, at least, this is what we may infer from the final report published by the BEA. Having two different, incompatible perceptions of the flight's progress, they neither stated the problem aloud (unreliable speed) nor explained their main concern (too slow, too fast). The airplane alarm system saturated the cockpit with aural warnings (a chime recalling that they were leaving the assigned altitude) and voice calls ("ALTITUDE", "STALL", "DUAL INPUT"); the pilots themselves were talking, on top of the aerodynamic noise and the thunderstorm, which also added buffet to the pilots' physical perceptions.

The first officer called the cabin attendant to wake up the Captain. When the Captain came back into the cockpit, he found a situation that had already degraded: the aircraft was descending with a pitch attitude of about 5° but an angle of attack of about 40°, the engines were at full thrust, and the vertical speed was about 10,000 ft per minute (almost four times the usual rate of descent). The Captain was probably still affected by sleep inertia, following an abrupt awakening, and the situation he had to cope with was extremely challenging because he had not followed the evolution of the airplane's state from the beginning.
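
Before following the crew's actions further, the reconfiguration just described can be reduced to a highly simplified sketch. This is emphatically not the actual Airbus logic: the names and the two-source threshold are assumptions, used only to show how the loss of valid airspeed data can demote the control law and, with it, the low-speed protection.

```python
# Highly simplified, hypothetical sketch of flight-law degradation; not Airbus logic.

from dataclasses import dataclass

@dataclass
class FlightLaw:
    name: str
    low_speed_protection: bool

NORMAL = FlightLaw("normal law", low_speed_protection=True)
ALTERNATE = FlightLaw("alternate law", low_speed_protection=False)

def active_law(valid_airspeed_sources: int) -> FlightLaw:
    """Assume the full protections require at least two agreeing airspeed sources."""
    return NORMAL if valid_airspeed_sources >= 2 else ALTERNATE

if __name__ == "__main__":
    law = active_law(valid_airspeed_sources=0)  # all pitot probes iced over
    print(law.name, "- low-speed protection available:", law.low_speed_protection)
```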
The Captain entered the cockpit but could not figure out what the real problem was. The first officer reported several times "I don't know what's happening", adding confusion to the Captain's effort to understand the nature of the problem. In the meantime, the younger co-pilot kept acting backward on his side-stick. Unfortunately, the flight control design of the A-330 makes it almost impossible for one pilot to detect the other's inputs on the side-stick, because the side-sticks are not coupled: when one pilot moves his control, the other's does not follow. The only hint comes when both pilots act on their respective side-sticks at the same time; in this case, the aural warning "DUAL INPUT" activates. It should also be mentioned that some features of the A-330 contributed to the crew's confusion. For example, the STALL warning sounds only if the measured speed is above 60 kt; with the (unreliable) speed below that threshold, the warning stopped. The consequence was that when the pilot pushed his side-stick forward the STALL warning came on, and when he pulled it back (crossing back below the activation threshold) the warning went off. The recommendations following this accident addressed several design issues: first, the non-coupled side-stick movements; second, the need to reinforce the STALL warning with a visual indication; third, crew coordination, which was recognized to have been ineffective, although much of this was due to the fundamental surprise that impaired the pilots' communication. Other recommendations were issued, but the focus of this chapter is to show how an improper design can lead, even decades later, to an accident, because the human factors dynamics were not addressed when the airplane was conceived. It is worth mentioning that, some years later, an Airbus A-320 crashed after the pilots lost control at cruise altitude; in that case too, there was no way to recover from the upset attitude at high altitude.
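
The inverted feedback produced by the 60-kt gate can be seen in a toy model of the warning logic. The speed gate comes from the description above, but the stall angle-of-attack threshold and the overall structure are hypothetical simplifications, not the A-330 implementation.

```python
# Toy model of the stall-warning gating described above; thresholds are illustrative only.

def stall_warning_active(measured_speed_kt: float,
                         angle_of_attack_deg: float,
                         stall_aoa_deg: float = 10.0) -> bool:
    """The warning requires both a stall angle of attack and a measured speed above 60 kt."""
    if measured_speed_kt < 60.0:  # below the gate, the warning is inhibited
        return False
    return angle_of_attack_deg > stall_aoa_deg

if __name__ == "__main__":
    # Deeply stalled with a very low (and unreliable) measured speed: pulling back keeps it silent.
    print(stall_warning_active(45.0, angle_of_attack_deg=40.0))   # False
    # Pushing the nose down raises the measured speed past the gate and "STALL" returns.
    print(stall_warning_active(75.0, angle_of_attack_deg=35.0))   # True
```

Seen this way, the crew's inputs and the warning's behaviour were each consistent with the design, yet the resulting feedback pointed the pilots in exactly the wrong direction.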

CONCLUSION

This brief chapter has highlighted certain aspects of the introduction of automation on board aircraft. Automation has undeniably led to an improvement in flight safety. Nevertheless, to enhance its ability to provide due and consistent help to pilots, automation itself should be investigated more thoroughly to determine whether it is suitable in terms of human capabilities and limitations, ergonomics, cognitive suitability and instrument standardization, in order to gradually improve human performance.

REFERENCES

Air Pilot Manual (2011), Human Factors and Pilot Performance, Pooleys, England.

Alderson D.L., Doyle J.C. (2010), "Contrasting Views of Complexity and Their Implications for Network-Centric Infrastructures", IEEE Transactions on Systems, Man and Cybernetics – Part A: Systems and Humans, vol. 40, no. 4, July 2010.

Amalberti R. (2000), "The paradox of the ultra-safe systems", Flight Safety Australia, September/October 2000.

Bagnara S., Pozzi S. (2008), "Fondamenti, Storia e Tendenze dell'HCI", in A. Soro (Ed.), Human Computer Interaction. Fondamenti e Prospettive, Polimetrica International Scientific Publisher, Monza, Italy.

Bainbridge L. (1987), "The Ironies of Automation", in J. Rasmussen, K. Duncan and J. Leplat (Eds.), New Technology and Human Error, Wiley, London, UK.

Barnes C., Elliott L.R., Coovert M.D., Harville D. (2004), "Effects of Fatigue on Simulation-based Team Decision Making Performance", Ergometrika, vol. 4, Brooks City-Base, San Antonio, TX.

Boy G. (Ed.) (2011), The Handbook of Human Machine Interface – A Human-Centered Design Approach, Ashgate, Surrey, England.

Chialastri Antonio (2011), “Human-centred design in aviation”, in Proceedings of the Fourth Workshop on Human Centered Processes, Genova, February 10-11

Chialastri Antonio (2011), “Resilience and Ergonomics in Aviation”, in Proceedings of the fourth Resilience Engineering Symposium June 8-10, 2011, Mines ParisTech, Paris

Chialastri Antonio (2010), “Virtual Room: a case study in the training of pilots”, HCI Aeroconference, Cape Canaveral

Chialastri A. (2015), Human factor – Il rapporto uomo-macchina, IBN, Roma

Cooper, G.E., White, M.D., & Lauber, J.K. (Eds.) (1980) “Resource management on the flightdeck,” Proceedings of a NASA/Industry Workshop (NASA CP-2120)

Dekker S. (2000), "Sharing the Burden of Flight Deck Automation Training", The International Journal of Aviation Psychology, 10(4), 317–326, Lawrence Erlbaum Associates, Inc.

Dekker S. (2003), "Human Factor in aviation: a natural history", Technical Report 02, Lund University School of Aviation.

Dekker S. (2005), Why We Need New Accident Models, Lund University School of Aviation, Technical Report 2005-02.

Dismukes, Berman, Loukopoulos (2008), The limits of expertise, Ashgate, Aldershot, Hampshire

Ferlazzo F. (2005), Metodi in ergonomia cognitiva, Carocci, Roma

Flight Safety Foundation (2003), "The Human Factors Implications for Flight Safety of Recent Developments in the Airline Industry", Flight Safety Digest, March–April 2003.

Hawkins F. (1987), Human Factors in Flight, Ashgate, Aldershot, Hampshire.

Hollnagel E., Woods D., Leveson N. (Eds.) (2006), Resilience Engineering – Concepts and Precepts, Ashgate, Aldershot, Hampshire.

Hollnagel E. (2008), "Critical Information Infrastructures: Should Models Represent Structures or Functions?", in Computer Safety, Reliability and Security, Springer, Heidelberg.

Hollnagel E. (2009), The ETTO Principle – Efficiency-Thoroughness Trade-Off, Ashgate, Surrey, England.

Hutchins E. (1995), "How a cockpit remembers its speeds", Cognitive Science, 19, pp. 265–288.

IATA (1994), Aircraft Automation Report, Safety Advisory Sub-Committee and Maintenance Advisory Sub-Committee.

ICAO, Human Factors Digest No. 5, Operational Implications of Automation in Advanced Technology Flight Decks, Circular 234.

ICAO (1998) – Doc. 9683-AN/950, Montreal, Canada

Köhler Wolfgang (1967), “Gestalt psychology”, Psychological Research, Vol. 31, n. 1, pp. 1830

Koonce J.M., (2002), Human Factors in the Training of Pilots, Taylor & Francis, London

Maurino D., Salas E. (2010), Human Factors in Aviation, Academic Press, Elsevier, MA, USA.

Morin E. (2008), Il Metodo – Le idee: habitat, vita, organizzazione, usi e costumi, Raffaello Cortina Editore, Milano.

Morin E. (2004), Il Metodo – La vita della vita, Raffaello Cortina editore, Milano

Morin E. (2001), Il Metodo – La natura della natura, Raffaello Cortina editore, Milano

Morin E. (1989), Il Metodo – La conoscenza della conoscenza, Feltrinelli editore, Milano

David Navon (1977), “Forest before trees: the precedence of global features in visual perception”, Cognitive Psychology, n. 9, pp. 353-383

Norman D. (1988), The Psychology of Everyday Things, Basic Books, New York, NY.

Parasumaran, Sheridan (2000), "A Model for Types and Levels of Human Interaction with Automation", IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, vol. 30, no. 3, May 2000.

Parasumaran R., Wickens C., “Humans: Still Vital After All These Years of Automation”, in Human Factors, Vol. 50, No. 3, June 2008, pp. 511–520.

Ralli M. (1993), Fattore umano ed operazioni di volo, Libreria dell’orologio, Roma

Rasmussen J. (1983), "Skills, Rules, Knowledge: Signals, Signs and Symbols, and Other Distinctions in Human Performance Models", IEEE Transactions on Systems, Man & Cybernetics, SMC-13.

Reason James, (1990) Human error, Cambridge University Press, Cambridge

Reason James (2008), The human contribution, Ashgate, Farnham, England

Wears R.L., Webb L.K. (2011), "Fundamental or Situational Surprise: A Case Study with Implications for Resilience", in Proceedings of the Fourth Resilience Engineering Symposium, Mines ParisTech, Paris.

Woods D., Dekker S., Cook R., Johannesen L., Sarter N., (2010), Behind human error, Ashgate publishing, Aldershot, England

Woods D.D. (1990), "Modeling and Predicting Human Error", in J. Elkind, S. Card, J. Hochberg and B. Huey (Eds.), Human Performance Models for Computer-Aided Engineering, pp. 248–274, Academic Press.

Wright P., Pocock S., Fields B. (2002), "The Prescription and Practice of Work on the Flight Deck", Department of Computer Science, University of York, York YO10 5DD.
