The Limits of Expertise: Rethinking Pilot Error and the Causes of Airline Accidents
By Key Dismukes
Approach
Reviewed NTSB reports of the 19 U.S. airline accidents in which crew error was cited as a causal factor
Asked: Why might any airline crew in the situation of the accident crew,
knowing only what they knew, be vulnerable to error?
Can never know with certainty why accident crew made specific errors
but can determine why the population of pilots is vulnerable
Considers variability of expert performance as function of interplay
of multiple factors
No one thing causes accidents
Confluence of multiple events, task demands, actions taken or not
taken, and environmental factors
Hindsight Bias
Knowing the outcome of an accident flight reveals what crew should
have done differently
Accident crew does not know the outcome
They respond to situation as they perceive it at the moment
Principle of local rationality: experts do what seems reasonable,
given what they know at the moment and the limits of human information
processing
Errors are not de facto evidence of lack of skill or lack of conscientiousness
Two Fallacies About Human Error
Myth:
Experts who make errors performing a familiar task reveal lack of skill,
vigilance, or conscientiousness
Fact:
Skill, vigilance, and conscientiousness are essential but not sufficient
to prevent error
Myth:
If experts can normally perform a task without difficulty, they should
always be able to perform that task correctly
Fact:
Experts periodically make errors as a consequence of subtle variations in
task demands, information available, and cognitive processing
Each Accident Has Unique Surface Features and Combinations of Factors
Countermeasures to surface features of past accidents will not prevent
future accidents
Must examine deep structure of accidents to find common factors
Six Overlapping Clusters of Error Situations
1) Inadvertent slips and oversights while performing highly practiced
tasks under normal conditions
2) Inadvertent slips and oversights while performing highly practiced
tasks under challenging conditions
3) Inadequate execution of non-normal procedures under challenging
conditions
4) Inadequate response to rare situations for which pilots are not
trained
5) Judgment in ambiguous situations
6) Deviation from explicit guidance or SOP
1) and 2) Inadvertent slips and omissions:
Examples:
Forgetting to: reset altimeters at FL180, arm spoilers, turn on pitot
heat, set flaps to the take-off position
Errors are usually caught or are inconsequential
Errors may not be caught when other factors are present:
interruptions, time pressure, non-normal operations, stress
4) Inadequate response to rare situations for which pilots are not
trained
Examples:
False stick shaker activation just after rotation (JFK, 1992)
Oversensitive autopilot drove aircraft down at Decision Height
(O'Hare, 1998)
Anomalous airspeed indications past rotation speed (LaGuardia, 1994)
Uncommanded autothrottle disconnect with non-salient annunciation
(West Palm Beach, 1997)
Surprise, confusion, stress, and time pressure play a role
No data on what percentage of airline pilots would respond adequately
in these situations
5) Judgment and decision-making in ambiguous situations
Examples:
Continuing approach in vicinity of thunderstorms
Not de-icing
No algorithm to calculate when to break off approach; company guidance
usually generic
Crew must integrate incomplete and fragmentary information and make
best judgment
If the crew guesses wrong, crew error is found to be the cause
Accident crew judgment & decision-making may not differ from that of
non-accident crews
Lincoln Lab study: Penetration of storm cells on approach not uncommon
Other flights may have landed or taken off without difficulty a minute
or two earlier
Questions:
What are actual industry norms for these operations?
Sufficient guidance for crews to balance competing goals?
Do we implicitly tolerate/encourage less conservative behavior as long as
crews get by?
6) Deviation from explicit guidance or SOP
Example: Attempting to land from unstabilized approach resulting from
slam-dunk approach
Simple willful violation or more complex issue?
Are stabilized approach criteria published/trained as guidance or
absolute bottom lines? (see the illustrative gate check after this section)
Competing pressures for on-time performance, fuel economy
What are norms in company and the industry?
Pilots may not realize that struggling to stabilize approach before
touchdown imposes such workload that they cannot evaluate whether
landing will work out
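The question of guidance versus "absolute bottom lines" becomes concrete when stabilized-approach gates are written as explicit, checkable conditions. The sketch below is a minimal illustration only; the gate altitude, speed tolerance, sink-rate limit, and function names are assumed placeholder values, not any airline's published criteria or the authors' model.

```python
# Minimal sketch of a stabilized-approach "bottom line" check.
# All gate values below are illustrative placeholders, not any
# airline's published criteria.

from dataclasses import dataclass

@dataclass
class ApproachState:
    altitude_agl_ft: float      # height above touchdown zone
    speed_dev_kt: float         # indicated airspeed minus target speed
    sink_rate_fpm: float        # descent rate, feet per minute
    gear_down: bool
    landing_flaps_set: bool

# Assumed gate: by 1000 ft AGL the aircraft must be configured,
# within +10/-5 kt of target speed, and sinking no faster than 1000 fpm.
GATE_ALT_FT = 1000.0
SPEED_HIGH_KT = 10.0
SPEED_LOW_KT = -5.0
MAX_SINK_FPM = 1000.0

def stabilized(state: ApproachState) -> bool:
    """Return True if the approach meets the illustrative gate criteria."""
    return (
        state.gear_down
        and state.landing_flaps_set
        and SPEED_LOW_KT <= state.speed_dev_kt <= SPEED_HIGH_KT
        and state.sink_rate_fpm <= MAX_SINK_FPM
    )

def bottom_line_decision(state: ApproachState) -> str:
    """Treat the gate as an absolute bottom line: unstable at the gate means go around."""
    if state.altitude_agl_ft <= GATE_ALT_FT and not stabilized(state):
        return "GO AROUND"
    return "CONTINUE"

# Example: fast and steep at the gate after a slam-dunk clearance.
print(bottom_line_decision(ApproachState(
    altitude_agl_ft=950, speed_dev_kt=18, sink_rate_fpm=1400,
    gear_down=True, landing_flaps_set=True)))   # -> GO AROUND
```

The point of the sketch is only that a published bottom line is checkable by rule; the workload argument above suggests that crews struggling to salvage an approach cannot reliably make this evaluation in their heads, which is why a firm gate matters.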
Cross-Cutting Factors Contributing to Crew Errors
Situations requiring rapid response
Challenges of managing concurrent tasks
Equipment failure and design flaws
Misleading or missing cues normally present
Plan continuation bias
Stress
Shortcomings in training and/or guidance
Social/organizational issues
Situations requiring rapid response
Involved in nearly 2/3 of the 19 accidents
Examples: upset attitudes, false stick shaker activation after
rotation, anomalous airspeed indications at rotation, autopilot-induced
oscillation at Decision Height, pilot-induced oscillation during flare
Very rare occurrences, but high risk
Surprise is a factor
Inadequate time to think through the situation; automatic response required
Challenges of managing concurrent tasks
Workload high in some accidents
Overloaded crews failed to recognize situation getting out of hand
Crews became reactive instead of proactive/strategic
Monitoring and cross-checking suffered
But: adequate time available for all tasks in many accidents
Inherent cognitive limitations in switching attention
Plan continuation bias
Unconscious cognitive bias to continue original plan in spite of
changing conditions
Appears stronger as one nears completion of activity (e.g., approach
to landing)
Why are crews reluctant to go around?
Bias may prevent noticing subtle cues indicating original conditions
have changed
Default plan always worked before
Reactive responding is easier than proactive thinking
Stress
Stress is normal physiological/behavioral response to threat
Acute stress hampers performance
Narrows attention (tunneling)
Reduces working memory capacity
Combination of surprise, stress, time pressure, and concurrent task
demands can be lethal setup
Social/Organizational Issues
Actual norms may deviate from Flight Operations Manual
Little data available on extent to which accident crews' actions are
typical/atypical
Competing pressures not often acknowledged; implicit messages from
company may conflict with formal guidance
e.g. on-time performance vs. conservative response to ambiguous
situations
Pilots may not be consciously aware of influence of internalized
competing pressures
Implications and Countermeasures
Focus on deep structure, not superficial manifestations
Complacency is not an explanation for errors
Most accidents are systems accidents
Many factors contribute to and combine with errors
Unrealistic to expect human operators to never make an error or to
automate humans out of the system
Design overall operating system for resilience to equipment failure,
unexpected events, uncertainty, and human error
Equipment, procedures, & training must be designed to match human
operating characteristics
Need better info on how airspace system typically operates and how
crews respond
e.g., frequency/site of slam-dunk clearances, last-minute runway
changes, unstabilized approaches
FOQA (Flight Operational Quality Assurance) and LOSA (Line Operations
Safety Audit) are sources of information
Must find ways to share FOQA and LOSA data industry-wide to develop
comprehensive picture of system vulnerabilities
NASA research for next generation FOQA: Aviation Performance
Measurement System (APMS)
Dr. Tom Chidester: >1% of 16,000 flights showed high-energy arrivals,
unstabilized approaches, and landing exceedances (a rate sketch follows below)
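The scale of that figure is easy to check: more than 1% of 16,000 flights is more than 160 flights. The sketch below is a hypothetical illustration of how a FOQA-style exceedance rate might be tallied from de-identified flight records; the record fields and event names are assumptions for the sketch, not the APMS data format.

```python
# Hypothetical illustration of tallying a FOQA-style exceedance rate.
# Record fields and event names are assumptions, not the actual APMS format.

from typing import Iterable

EVENTS_OF_INTEREST = {"high_energy_arrival", "unstabilized_approach", "landing_exceedance"}

def exceedance_rate(flights: Iterable[dict]) -> float:
    """Fraction of flights flagged with at least one event of interest."""
    flights = list(flights)
    flagged = sum(1 for f in flights if EVENTS_OF_INTEREST & set(f.get("events", ())))
    return flagged / len(flights) if flights else 0.0

# Toy data set: 16,000 flights, 200 of them flagged.
flights = [{"events": []} for _ in range(15_800)] + \
          [{"events": ["unstabilized_approach"]} for _ in range(200)]

rate = exceedance_rate(flights)
print(f"{rate:.2%} of {len(flights):,} flights")   # -> 1.25% of 16,000 flights
```

On this toy data set the rate works out to 1.25%, on the order of 200 flights out of 16,000, which is why a figure above 1% represents a substantial number of real approaches rather than a negligible tail.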
When FOQA and LOSA uncover norms deviating from formal guidance, must
find why (e.g., must identify and change forces discouraging crews from
abandoning unstabilized approaches)
Conflicting messages from company (e.g., concern for on-time
performance and fuel costs)?
Viewed as lack of skill?
Fear of recrimination?
Fail to recognize logic for stabilized approach criteria?
Countermeasure: Publish and check bottom lines; reward adherence
Procedures
Airlines should periodically review normal and non-normal procedures
for design vulnerabilities such as:
Checklists run during periods of high interruptions
Allowing critical items to float in time (e.g., setting take-off
flaps during taxi)
Silent annunciation of critical checklist items
Pilot Monitoring forced to go head down in critical period
Formalize, train, and test monitoring and cross-checking
Training
Train pilots, managers, instructors, and designers about human cognitive
operating characteristics:
1. Dangers of repetitious operations:
Checklists are vulnerable to looking without seeing, and forgetting
items when interrupted or deferred
Briefings can become mindless recitations
Crews can become reactive rather than proactive/strategic
2. Dangers of plan continuation bias and of juggling multiple tasks
concurrently
3. Effects of acute stress on performance
Countermeasures:
1. Use briefings and inquiry to look ahead, question assumptions about
situation, identify threats, and prepare options and bottom lines
2. Ask "What if our plan does not work?"
3. Reduce checklist vulnerability
Execute items in a slow, deliberate manner, pointing and touching
Anchor checklist initiation to salient event (e.g. top of descent)
Create salient reminder cues when items are interrupted or deferred
4. Stress inoculation training
Awareness of cognitive effects
Slow down and be deliberate
Extra attention to explicit communication and workload management
Implications and Countermeasures: Policy
Acknowledge inherent trade-offs between safety and system efficiency
Include all parties in analysis of trade-offs
Make policy decisions explicit and implement guidance