Factor Differentiation in Risk Analysis and Crisis Management

Abstract

The purpose of this paper is to examine important risk characterization factors which have not previously been described, namely: boundary factor differentiation, prioritization and triage, probabilities versus rates, probability of non-detection, 1st through 3rd person perspectives, stages of ontological becoming, passivity versus active search, holistic versus focused perspective, trusting versus controlling perspective, separation of convoluted crises, and timing clarity. Clear examples of each differentiating factor are provided, drawn from real-world cases.

Share and Cite:

Posthuma, R., Kreinovich, V., Zapata, F. and Smith, E. (2022) Factor Differentiation in Risk Analysis and Crisis Management. American Journal of Industrial and Business Management, 12, 1498-1516. doi: 10.4236/ajibm.2022.1210083.

1. Introduction

The effectiveness of risk analyses is most strongly increased by the ability to differentiate and clearly identify fundamental factors, such as the Probability of Non-Detection, which facilitate finding and mitigating sources of system failure while assessing design alternatives.

Risk analyses are process-driven iterative activities that traditionally have assessed only two factors: the likelihood of an actualization of a potential risk, and the severity of consequences if the risk becomes an actual issue.

Risk analyses are highly emphasized in the practices of creating integrated, effective systems (Blanchard & Fabrycky, 2006; Buede, 2000; Chapman, Bahill & Wymore, 1992), a perspective now finding broader and more general application in product and process design, as well as in service, management, and social systems design.

1.1. Customer Satisfaction

Customers can assess their satisfaction by scoring sets of criteria. Generally, criteria can be gathered into four sets (Figure 1): Performance, Cost, Schedule and Risk.

The valence (+ or −) of performance factors is usually positive, while the valence of cost factors, schedule factors, and risk factors is usually negative. Care must be exercised when combining these criteria, sub-criteria, and sub-sub-criteria scores, because the correct valence must be applied (by multiplication by +1 or −1) so that the uppermost super-criterion, Customer (or Stakeholder) Satisfaction, is correctly summed. Specifically, Customer Satisfaction seeks Performance to go up, but Cost, Schedule time, and Risk to go down. Of the four top-level criteria, Risk can be said to be of the topmost importance, because risks can cause an entire product or project to fail.
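The valence arithmetic described above can be sketched in a few lines of Python; the function name and the example scores used below are illustrative assumptions, not values from the paper.

```python
def customer_satisfaction(performance, cost, schedule, risk):
    """Combine normalized [0-1] criterion scores into a single
    satisfaction score by applying the usual valences: performance
    is positive (+1); cost, schedule, and risk are negative (-1)."""
    valences = {"performance": +1, "cost": -1, "schedule": -1, "risk": -1}
    scores = {"performance": performance, "cost": cost,
              "schedule": schedule, "risk": risk}
    return sum(valences[name] * scores[name] for name in scores)
```

With this bookkeeping, a design with high performance (0.9) and low cost, schedule, and risk scores (0.2, 0.1, 0.1) nets a positive satisfaction of 0.5, while uniformly mediocre scores net a negative one.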

1.2. Prioritizing Risk Mitigation in Design Processes

Risk mitigation is crucial, and thus becomes elevated in importance, to the point where modern design methods (Kazman, Klein, & Clements, 2000) utilize risk identification both for detecting sources of failure, and as a screen for considering feasible alternative designs (Figure 2).

Modern development processes place risk identification and mitigation first. The most effective way to find risks is usually through the mind of the experienced designer; if something causes worry or nightmares, it is a significant risk. Risk Scenarios, realistic vision-like mental skits, are developed in detail to emphasize possible catastrophic failure occurrences. Design decisions (Kirkwood, 1999) are then taken to reduce the risk of catastrophic failures. Scenarios can be complex, so it is best to begin with clarifying basic scenes, upon which more complicated scenarios are built, facilitating discussions among designers. For example, the simple scenario of an employee filing a complaint is a building block to the more complicated scenarios of allegations of sexual harassment, or systemic racism.

Figure 1. Performance, cost, schedule & risk as generalized criteria sets.

Figure 2. Design process with an emphasis on risk analysis (ATAM, 2018).

1.3. System Boundary

The first task in discussing risks is the identification of a system’s boundary (Figure 3). Identification of the boundary of the System-of-Interest (SoI) is key in reducing uncertainties. The boundary of the SoI is defined from the perspective of the stakeholders, who may have discussions to refine the boundary. That which is within the system boundary is the responsibility of the system managers, who design and manage the system to handle influences and impacts from the environment outside the system, from which known, unknown, and random events may arise. For practical purposes, an organization’s boundary often coincides with the limits of its legal liabilities.

Organizational departments must be clear on the scope of their responsibilities. For example, an industrial relations department must clearly define the boundary of its host organization, in order to clarify what industrial relations resolutions are within the domain and benefit of the organization. Although such an organization is constantly influenced and impacted by industrial relations risks, the organization cannot continually expend resources to resolve all industrial relations issues.

Human Resource (HR) departments can usually better deal with the risk of employee disgruntlement by going to sub-systems where root risks arise and can be fully contained with risk-mitigating changes. Thus, reducing the risk of employee benefits mistakes is handled within the benefits department, and reducing the risk of insufficient knowledge is handled within the training & development department.

1.4. Certainty, Risk and Uncertainty (Table 1)

· Certainty exists when relevant factors are identifiable and quantifiable.

· Risk exists when relevant factors are known, but only characterized by probability distributions with known parameters (Gigerenzer, 2002).

Figure 3. Boundary of a System of Interest (SoI).

Table 1. Certainty, risk, uncertainty.

· Uncertainty exists when relevant factors cannot be identified or valued.

1.5. Prioritization: Work on High-Risk Items First

“Do the hard parts first” (Rechtin & Maier, 2000). A principle of good design (Bahill & Botta, 2015) is to work on high-risk items first, in order to reduce project risk as soon as possible (Botta & Bahill, 2007). High-risk items and sub-systems are more likely to change in early design processes, thereby producing changes in other sub-systems. When interactions among sub-systems are considered, it can be seen that each change can produce a cascade of subsequent changes, which is a condition of instability in the overall design.

In order to stabilize an evolving design, stability is first established in individual items and sub-systems, through risk analysis and risk mitigation, usually by experimenting with design factors and setting those factors to values which reduce risk. It is important to determine whether it is even feasible to reach a stable design for high risk items (Bahill & Botta, 2015), because, if a high-risk item cannot be stabilized, and the project is scrapped, then the cost of developing the other sub-systems can be saved (Clausen & Frey, 2005).

For example, a nascent Human Resource Management (HRM) department, or an HRM department recently impacted by significant change, such as by a merger or major acquisition, should seek to stabilize its healthcare solutions to employees by selecting a fully self-contained and stable healthcare provider. The same approach is best applied to employee benefits and retirement needs. Similarly, an employee rewards program is probably best separated from other HRM departments, in order to limit any spread of disgruntlement issues.

Triage

The medical procedure of classifying individuals needing medical attention in the face of limited medical resources can be analogously applied to systems design:

1) Individuals who will probably survive, regardless of level of care

→ Items which will probably work, regardless of re-design

2) Individuals who will probably die, regardless of the level of care

→ Items which will probably not work, regardless of re-design

3) Individuals with significantly increased chances of survival, if they receive care

→ Items which will significantly improve, if they are re-designed

System designers, then, will encounter:

1) Items or sub-systems which can be passed over;

2) Items or sub-systems which must be removed from the system;

3) Items or sub-systems which must be re-designed.
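As a sketch, the triage classification above can be expressed as a small decision function; the probability thresholds (0.8 and 0.2) are illustrative assumptions, not values given in the text.

```python
def triage(p_works_as_is, p_works_if_redesigned,
           threshold=0.8, floor=0.2):
    """Classify a sub-system by analogy with medical triage.
    Thresholds are illustrative assumptions."""
    if p_works_as_is >= threshold:
        return "pass over"            # will probably work regardless
    if p_works_if_redesigned <= floor:
        return "remove from system"   # will probably not work regardless
    return "re-design"                # significantly improves if re-designed
```

Scarce design effort then goes only to the third category, where re-design significantly changes the outcome.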

1.6. Risk Burn-Down Process with a Risk Budget

Activities are beneficially centered around risk reduction. As program work progresses (Bar-Asher, 2006), and as individual risks are eliminated, mitigated, or transferred outside the system boundary, the total risk within the systems boundary is reduced, or “burnt down”. From a risk mitigation perspective, risk burn-down is the most essential process. Prioritization and progressive risk burn-down are primary in achieving system feasibility, stable operations, and efficiency.

The total amount of risk allowable for a project can be identified, or better, quantified as a Risk Budget. To meet the constraint of a risk budget, the total current risks are summed and compared against the total allowable risk. A risk budget demands that total risk be contained, mandating that the system cannot be deployed until the risk in the system is less than the risk budget.
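A minimal sketch of the risk-budget check, assuming risks are expressed as normalized Priority numbers:

```python
def total_risk(priorities):
    """Sum the Priority numbers of all risks currently inside
    the system boundary."""
    return sum(priorities)

def may_deploy(priorities, risk_budget):
    """The system may not be deployed until total risk has been
    burnt down below the Risk Budget."""
    return total_risk(priorities) < risk_budget
```

As individual risks are eliminated, mitigated, or transferred outside the boundary, the list of priorities shrinks and `may_deploy` eventually becomes true.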

2. Risk Analysis

Risk analysis factors are described below.

2.1. Probability × Severity

Arnauld and Nicole (1662) said: “Fear of harm ought to be proportional not merely to the gravity of the harm, but also to the probability of the event.” Arnauld thus stated the two-factor formulation of a risk:

Risk = S × Po

where

S = Severity, or “Gravity of Harm”;

Po = Probability of Occurrence.

Probability of Occurrence must be based on a stated Time Period, within a known system boundary, within which relevant factors are identified. For example: the probability of occurrence of a flat tire for an automobile with 4 tires is 0.1 (1 flat tire every 10 years) when the automobile is operated within a modern urban environment. A generic risk table (Table 2) will look like the following.

This two-factor formulation is standard in risk management (Haimes, 1999). A normalized range of [0-1] for each factor provides that the product of the factors, the Priority number (Priority = Po × S), will also remain within a normalized range of [0-1].

Table 2. Risk = Probability × Severity.
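A minimal sketch of the two-factor Priority computation, with the normalization requirement made explicit:

```python
def priority(p_occurrence, severity):
    """Two-factor risk Priority number: Priority = Po x S.
    With both factors normalized to [0, 1], the product also
    stays within [0, 1]."""
    if not (0.0 <= p_occurrence <= 1.0 and 0.0 <= severity <= 1.0):
        raise ValueError("factors must be normalized to [0, 1]")
    return p_occurrence * severity
```

For the flat-tire example, a Po of 0.1 combined with an assumed (hypothetical) severity of 0.5 would yield a Priority number of 0.05.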

2.2. Probability of Occurrence Split across the System Boundary

In the cyber-security arena, the Po Probability of Occurrence is split into two contributing probabilities which interface at the system boundary. Within the system boundary is the Pv Probability of Vulnerability, and outside the system boundary there is the Pt Probability of Threat, so that:

Po Probability of Occurrence = (Pv Probability of Vulnerability) × (Pt Probability of Threat) (Figure 4).

2.3. Pp Probability of Penetration and Rate of Attack

Po Probability of Occurrence is an absolute probability based on a declared time period (often 1 year). However, if a Pp Probability of Penetration is based on a single attack, then the Rate of Attack can be multiplied in (Figure 5), to get:

Rate of Occurrence per year = (Pp Probability of Penetration for a single attack) × (Rate of Attacks per year)

For the rest of this paper, only Po Probability of Occurrence will be utilized.
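The two decompositions of Sections 2.2 and 2.3 can be sketched as follows; the numeric examples are illustrative assumptions.

```python
def p_occurrence(p_vulnerability, p_threat):
    """Cyber-security split across the system boundary:
    Po = Pv x Pt."""
    return p_vulnerability * p_threat

def occurrences_per_year(p_penetration, attacks_per_year):
    """When Pp is the probability of penetration for a single
    attack, multiplying by the attack rate gives a rate of
    occurrence; unlike a probability, a rate can exceed 1."""
    return p_penetration * attacks_per_year
```

For example, a 1% per-attack penetration probability under one attack per day yields a rate of about 3.65 occurrences per year, illustrating why rates and probabilities must not be conflated.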

2.4. Hawthorne Studies

The Hawthorne Studies (McCarney et al., 2007), conducted at the Hawthorne Works, a Western Electric plant, from 1924 to 1927, demonstrated that workers will react to changes in environmental factors which are altered in order to conduct seemingly objective experiments on worker productivity. Importantly, workers subjectively react to perceptions of whether work and productivity are being observed, usually with at least temporary productivity increases. This illustrates the importance of observation and detection, whether subjective or objective, as will be developed in the following inclusion of the Probability of (objective) Non-Detection.

Figure 4. Probability of occurrence split across the system boundary.

Figure 5. Pp Probability of penetration and rate of attacks per year give rate of occurrences per year.

2.5. Probability of Non-Detection: Signal Detection of Risk

Bahill and Karnavas (2000) introduced the additional factor of Difficulty of Detection, otherwise known as Pnd Probability of Non-Detection. In mechanical systems, the probability of non-detection usually captures the probability that human operators will fail to notice a risk which, according to its probability of occurrence, has become an actual issue in the system. In HRM, the probability of non-detection will usually capture the probability that an actual issue in HR operations is not noticed by managers. Because an issue that is not noticed is not quickly fixed, and thereby increases the risk of system failure, the Priority number, Priority = Po × Pnd × S, will increase as the Pnd Probability of Non-Detection increases.

In Table 3, an unknown virus spreader is seen to have a risk priority almost an order of magnitude higher than a headache. Although the headache is far more common, the danger posed by an undetected spreader of a deadly virus places the virus spreader at a significantly higher risk priority.

Because the Pnd Probability of Non-Detection cannot be allowed to reduce a Priority number to zero, the Pnd Probability of Non-Detection must have a lower limit; 0.1 is suggested as a practical lower limit.

Pearson and Clair (1998: p. 68) emphasize signal detection as very important in gaining awareness of and responding to impending crises. The inverse of signal detection is the Probability of Non-Detection, Pnd. Note that Pnd must have an imposed minimum value, suggested as 0.1, because, in the case of fully obvious and detectable actualized risks, the Pnd Probability of Non-Detection cannot be allowed to drive the Priority number artificially toward zero. Across the full set of risks, the Pnd factor then provides useful scaling of the Priority number according to the difficulty of detection of each risk. There are also advantages in limiting the number of distinct values of the Pnd factor; for example, a difficult-to-detect value such as 0.9, and a not-usually-difficult-to-detect value such as 0.5.

Table 3. Probability of non-detection in risk.
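A sketch of the three-factor Priority number with the suggested Pnd floor of 0.1 applied:

```python
PND_FLOOR = 0.1  # suggested practical lower limit for Pnd

def priority(p_occurrence, p_non_detection, severity):
    """Three-factor Priority number: Priority = Po x Pnd x S.
    Pnd is clamped to the suggested floor of 0.1 so that an
    easily detected risk cannot drive its Priority number
    toward zero."""
    pnd = max(p_non_detection, PND_FLOOR)
    return p_occurrence * pnd * severity
```

Even a perfectly detectable risk (Pnd = 0) thus retains one tenth of its detectability-free priority, keeping it visible in the ranked risk list.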

2.6. Survey of Top HRM Needs

A survey of top-level HRM risk categories can aid managers to monitor operations from a top-level. Experts can provide estimates for the risk factors, for companies in general, or for sectors of companies. Becker and Smidt (2016) classified major categories of HRM risks as:

· Employee Health & Wellbeing

· Productivity

· Financial

· Labor Turnover

· Attendance Rate/Patterns

· Reputation

· Legal

· Innovation

These risks are entered into Table 4, along with other major HRM risks, with estimated probabilities and severities.

Example Comparison: At the top of Table 4, two risks contrast with each other, based on the difference in Probability of Non-Detection. Both loss of Psychological Ownership and a lapse in Legal Compliance are serious risks, with a Severity of 0.90. Both of these risks have a relatively small objective Probability of Occurrence of 0.20. However, while a lapse in Legal Compliance can be expected to be detected relatively quickly, and thus has a relatively low Probability of Non-Detection of 0.20, a loss of Psychological Ownership can be much more difficult to detect and thus has a Probability of Non-Detection of 0.90. The resulting difference in the multiplicative product is that loss of Psychological Ownership has a Priority number of 0.16, which is more than 4 times higher than the Priority number of a lapse in Legal Compliance at 0.036.
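Recomputing the comparison above (the 0.16 quoted in the text is the rounded product 0.162):

```python
def priority(po, pnd, s):
    """Three-factor Priority number: Po x Pnd x S."""
    return po * pnd * s

# Factor values from the Example Comparison in the text.
psych_ownership = priority(po=0.20, pnd=0.90, s=0.90)   # hard to detect
legal_compliance = priority(po=0.20, pnd=0.20, s=0.90)  # quickly detected
```

Identical Po and S values, differing only in Pnd, produce a 4.5-fold difference in priority: 0.162 versus 0.036.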

2.7. 1st through 3rd Person Perspectives

When evaluating the Probability of Occurrence Po, the Probability of Non-Detection Pnd, and the Severity S, it is crucial to determine the personal perspective from which each factor is assessed. 1st, 2nd, and 3rd person can be utilized as classifiers.

· 1st person is the customer or the user.

· 2nd person is properly the engineer, manager, and producing corporation.

· 3rd person is the Objective viewpoint.

Mistake A: 1st person is often improperly taken by the engineer, manager, or corporation producing the system, making the 2nd person the customer or the user.

Mistake B: 3rd person is improperly usurped for its formal authority by the corporation, manager, or engineer who has displaced the customer or user out of the 1st person.

Table 4. Top HRM Risk categories, with potential estimations of probabilities and severity.

2.8. Boeing MCAS System in the 737 Max Aircraft

Upper-level management in Boeing, from 2010 to 2017, drove program managers, product managers, regulation specialists and engineers to ignore life-threatening risks in the redesigned 737 Max as the aircraft was driven through FAA certification (JATR, 2019). Profit-seeking lured management to cover up risks and to paint the new aircraft as safe. Ultimately, the true risks of the re-designed 737 Max were objectively examined only after hundreds of passenger deaths and a worldwide publicity and reputational crisis. Management drove both the creation of the deceit-driven crisis, and the three years and counting of cleanup.

Boeing was in competition with Airbus’s A320neo, a new, more profitable version of the A320 with larger, more fuel-efficient engines which allowed 15% fuel savings. Boeing rushed to similarly retrofit its 737 with larger engines, in such a way that the new “limited-change” design could swiftly pass FAA oversight. The insertion of larger engines under the low-hanging 737 wing required the engines to be placed forward of the low wings, causing the 737 to pitch excessively upward on steep takeoffs under high G loads, diminishing forward air speed and possibly leading to a stall. To counteract this unwanted effect, in 2017 Boeing secretly inserted the Maneuvering Characteristics Augmentation System (MCAS) into the re-designed 737 Max (House of Representatives, 2020; FAA, 2020), not announcing the presence of the new MCAS system as the 737 Max was expressly escorted through FAA certification as a simple upgrade. The MCAS system (Ostrower, 2018) alternately took data from either the left or the right Angle of Attack (AoA) sensor on alternate flights, in order to pitch the nose of the aircraft downward upon non-disclosed “artificial intelligence” criteria.

Fault Tree analysis of the AoA sensors, the MCAS system, and the Pilot emphasizes the weakness of the Boeing design (Figure 6). OR gates indicate that a sub-fault can propagate along the fault tree, all the way to a fatal crash, when only one of the inputs at each gate is at fault. The standard and robust approach to total system fault tolerance is to aggregate the component inputs with redundant parallelism, through AND gates, which prevent a fault from propagating unless all inputs of the AND gate are at fault. It is unimaginable why Boeing would create a critical system based on OR gates, which obviously characterize a fault- and crash-prone configuration. By simply creating a system with AND gates, as is standard industry practice, both fatal 737 Max crashes could have been avoided.
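The contrast between OR-gated and AND-gated fault propagation can be sketched as boolean aggregation; this is an illustration of the gate logic, not a model of the actual MCAS fault tree.

```python
def or_gate(*inputs):
    """Fault propagates when ANY input is faulted: a single failed
    component can drive the tree to the top-level fault."""
    return any(inputs)

def and_gate(*inputs):
    """With redundant parallelism, the fault propagates only when
    ALL inputs are faulted."""
    return all(inputs)

# One faulty Angle-of-Attack sensor and one healthy one:
left_faulty, right_faulty = True, False
```

With a single faulty sensor, `or_gate(left_faulty, right_faulty)` propagates the fault while `and_gate(left_faulty, right_faulty)` contains it, which is the redundancy argument made in the text.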

From the optimistically blind business management perspective, the MCAS system was “highly reliable”, with any faults being “easily detectable by the pilots”, who would “always” be able to mitigate the aircraft fault. Boeing’s false “1st person” viewpoint, with the “objective” 3rd person viewpoint attached, was pushed through the FAA, until two deadly crashes in 2018 killed 346 passengers and crew members.

Table 5 compares hypothetical risk analysis values, contrasting Boeing’s perspective against the perspective of pilots and passengers who were never informed about the MCAS system. Pilots and passengers would likely have perceived MCAS risk as 1 million times greater than Boeing did, if the pilots and passengers had been informed. Boeing actually knew that, upon sudden pilot detection of an unknown (MCAS-caused) failure, the uninformed pilots would have only 10 seconds or less to turn off MCAS and maneuver, in order to prevent a death plunge.

Figure 6. MCAS fault tree.

Table 5. MCAS failure risk as perceived by Boeing versus pilots & passengers.

2.9. Actualization of Risks: Ontological Stages of Becoming

Four stages in the actualization of risks are helpful:

1) Mental: logical thought

2) Emotional: limbic stimulation

3) Active: motion in response to the risk

4) Factual: objective values over collective instances

In the purest sense, these stages refer to a previously never actualized risk. For the example of the 2019 El Paso Shooting (Table 6), the following values are reasonable.

Stage of development assessments will differ, depending on whether the risk has previously been actualized, and upon the personal perspective of the assessor.

2.10. Combining 1st through 3rd Person Perspectives and Ontological Stages of Becoming

For precision, the three personal perspectives, and the four stages of actualization, are explicitly separated in the following Table 7, in order to specify the nature of each of the three factors of risk.

Po × Pnd × S = Priority #

This complete risk analysis table provides the benefits of:

· Placing the Customer (1st person) first.

· Placing an objective (3rd person) factor assessment before the subjective assessment of managers and engineers (2nd person).

· Specifying the stage of actualization of each of the three factors of risk.

· Unifying the risk assessments in one table, which allows the easy comparison of perspectives versus stages of actualization.

Table 8 gives an example.

Table 6. 2019 El Paso shooting risk analysis from a 3rd person perspective prior to the shooting.

Table 7. Risk evaluation by person and actualization stage.

Table 8. Boeing MCAS risk evaluation by person & actualization stage.

2.11. Social Media: Passivity versus Active Search of Risks

Social media has provided the alternative channel of crisis awareness through informal social media posts, which may go viral, alerting a large audience. However, social media posts do not directly alert responsible parties through traditional channels. For example, although a crisis may be known on social media, the fire department and police department may not be called and informed. Watchful organizations must therefore monitor social media for pertinent posts. In the numerical formulation of risks in Table 9, it is seen that simply searching for unacceptable social media posts will flip the Probability of Non-Detection from almost one, at about 0.99, to almost zero, at about 0.05. According to the resulting Priority number, simply searching a suspect’s social media posts reduces the risk of unacceptable social media posts by a factor of 20.
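The factor of 20 follows from the Pnd ratio alone (0.99 / 0.05 ≈ 20); since Table 9's exact Po and S values are not reproduced here, the values below are illustrative assumptions.

```python
def priority(po, pnd, s):
    """Three-factor Priority number: Po x Pnd x S."""
    return po * pnd * s

# Illustrative Po and S; only Pnd changes between the two cases.
not_searching = priority(0.20, 0.99, 0.80)  # posts go unnoticed
searching     = priority(0.20, 0.05, 0.80)  # active monitoring
risk_reduction = not_searching / searching  # ratio of the Pnd values
```

Because Po and S are held constant, the reduction factor is independent of their particular values.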

3. Crisis Management

Crises are triggered by disruptive deviations of reality from the status quo and current models, shattering expectations and creating feelings of helplessness. Instead of the expected environment, a new reality suddenly interjects itself between people and survival. Upon the arising of a crisis, individuals and groups within organizations must turn to cognitive sense-making, emotional balancing, and behavioral actions which bring about organizational stability, restoration, and productivity.

Often, no one in the organization knows how to respond to the crisis at hand. Stakeholders must communicate, coordinate, and share information in order to build an updated understanding of the state of the environment. Leadership emerges through real-time problem solving within a crisis for which the complete solution is not known. Amid fear and stress, innovations must be quickly generated. In place of routine tradeoffs, decision making within crises often involves tradeoffs which were theretofore not present; for example, the COVID crisis forced the large-scale tradeoff between safety in the face of possible contagion versus economic sustainment. In evolving crisis situations, competing priorities emerge with investigations and interactions with the new environment (Kerrissey & Edmondson, 2020).

Crisis management best practices include first attending to urgent risks which are threatening to cause immediate harm, as well as attending to significant risks whose mitigation will importantly reduce problems for the long term. Crisis management is thus a time-conscious risk analysis, emphasizing prioritization and the mitigation of risks.

Table 9. Risk difference in non-detection of unacceptable social media posts.

3.1. COVID-19 Crisis Management in HR: Perspective of HR Manager

“In a few short weeks in early 2020, we entered into the Coronavirus Crisis, which has affected practically all businesses. Unlike anything most of us have ever experienced before, this crisis sent both our personal and our professional lives into a tailspin, leaving us feeling frustrated, annoyed, overwhelmed, concerned for self, concerned for others, and unsure of what to do. Such intense feelings are exactly what organizations have to work through. When an intense change like this comes up, it comes very swiftly and unannounced.”—Irma Juarez, HR Manager of a major retailer

Multiple Crises Management

Recently, Irma Juarez, HR Manager, spoke to students at the University of Texas at El Paso (UTEP) about Crisis Management (Juarez, 2020). Her talk is paraphrased below.

“As a retailer, we have gone through two major crises in the last year. The first tragedy occurred in 2019, with the tragedy of an active shooting, in which 22 people lost their lives and several were injured. Next, the Covid-19 pandemic struck.”

“In order to effectively manage HR and help your organization through such a crisis, an HR leader first has to understand what they’re dealing with, and what type of responses are going to be adequate. A good organization will have contingency action plans in place. Understanding those contingencies is what’s going to help you understand how to be effective in your role, to be able to help the organization along.”

“One type of crisis is a natural crisis, including hurricanes, tornadoes, earthquakes, and pandemics. Human-caused crises can happen suddenly, with no advance warning. With extremely short notice, something is going down, something’s happening. Managers don’t get a warning ahead of time. The COVID-19 pandemic is such a crisis. The virus was a predictable threat, but the actual evolution of the crisis was unknown. The complication is we don’t have an end date. And I think that’s what complicates these crises, is we don’t know how long it’s gonna last. We don’t know what other things we’re going to have to undertake to make sure that we get through the crisis correctly and effectively. So that unknown date is very, very serious and complicates everything.”

“In contrast, an active shooter situation is instantaneous. You have a tragic action and then you have to really pick up the pieces and you have to work through the aftermath. So, you know, a pandemic is really a long-term crisis, and the active shooter crisis is a short-term crisis. Labeling a crisis is important, because it’s important to know what you’re having to deal with, and what your response is going to be. You’re dealing with matching up your response.”

“The pandemic is a complex crisis because there are so many different potential states in the future. The actions of executives and teams now, in the midst of a crisis, will significantly determine if the organization is going to be able to come through the crisis or is going to be completely immersed into and lost into what’s happening. And that’s why an early organizational response is important, because you have an opportunity to create that roadmap on how to maneuver through the crisis.”

3.2. Traps and Traits for Leaders: Holistic vs Focused, Trust vs Control

In their article “Are You Leading through the Crisis… or Managing the Response?” McNulty and Marcus (2020) list the following challenges for leaders:

· Narrowing of mental focus in face of a crisis, because of the innate response of self-protection.

· Inability to step back and take a holistic view of what’s happening.

· Bad judgment, and only bandaging the situation.

· Limited experience: personal experience from a single organization, a single industry.

In particular, McNulty and Marcus advise the following:

· Scan the landscape and understand what your challenges are, what your opportunities are, and prepare a better response plan for what’s happening.

· Delegate and identify partners in order to make the tough decisions, and in order to provide proper support.

· Resist the temptation of wanting to control every decision.

· Trust the folks that you have on your team to help you manage that crisis, and let them execute.

· Make sure that you don’t overtake or overstep your role during a crisis, because overstepping may swallow you up.

3.3. Overlapping Crises of a Retailer: Separation of Convoluted Crises

The risks of the active shooter crisis and the Covid-19 crisis are separated in Table 10.

Notice that the variegation of the complex realities of the world provides that, usually, the risks of different crises do not overlap much. It is thus better to conduct risk analyses with a focus on the specific crisis situation, as is shown in Table 11 and Table 12.

Notice that risk analyses are ambiguous unless the underlying assumptions are clarified, specifically:

Table 10. Overlapping risks.

Table 11. Active shooter risks.

Table 12. Covid-19 pandemic risks.

1) Probability of Occurrence must be based on a stated Time Period, within a known system boundary, within which relevant factors are identified.

2) Pnd Probability of Non-Detection: the range and/or discrete values must be specified.

3) Severity: a scoring function relating the domain of real world consequences over the range of Severity must be provided.

4) Personal Perspective of the risk assessor must be provided.

5) Stage of Actualization must be specified.

In Table 11 and Table 12, the stated assumptions are:

1) Probabilities of Occurrence are within a retail store active shooter situation, and within a world pandemic.

2) Pnd Probability of Non-Detection: discrete values 0.1 to 0.9 in 0.1 increments.

3) Severity scoring function: no harm = 0, through death = 1.

4) Personal Perspective of the risk assessor: 3rd person.

5) Stage of Actualization: Actual active shooter situation, and actual pandemic.

3.4. Timing Differentiation for Clarity

It is interesting to note that the Moderna vaccine was developed in only 2 days (FDA, 2020; Dangerfield, 2020; Neilson, Dunn, & Bendix, 2021), and was manufactured in sufficient quantities for testing in about one month; however, it was the legally required FDA testing phases, safety protocols and procedures which delayed reaching “emergency-use approval” of the vaccine by the FDA for about 10 months. Table 13 highlights key risks in the as-is legal approval process, as compared to a contemplated expedited approval process.

Table 13. Timing differentiation of COVID-19 vaccine approval process risks.

The results of this comparative risk analysis for alternative approval processes shows less cumulative risk in early deployment; indeed, earlier deployment will probably be considered in future vaccine approval processes, especially in the face of deadlier viruses. Notice how the risk analysis was abbreviated by leaving out risks with low priority. Each deployment option now lists only two unique risks; however, a full risk analysis would have each of the two deployment options list all four risks.

This comparative risk analysis for alternative vaccine approval processes could also have been augmented by the addition of the benefits attached to each alternative, making this comparison a fuller trade study among alternative choices. Benjamin Franklin (1772), in his letter to Joseph Priestley, indeed outlined the comparison of alternatives by the consideration of both pros and cons, to be collected over a number of days as they arise to mind, followed by the balancing and cancellation of the pros and cons, until only a determining few remain.

4. Conclusion

Risk analyses are at the root of rational responses to crises, because they dissect dangerous situations into identifiable risks for which, according to triage assessment, mitigation is either highly beneficial or inconsequential during the crisis. Although the bare tasks of risk analysis consist of listing risks and characterizing each according to probability of occurrence, severity, and computed priority number, significant ambiguity usually remains; it must be reduced by explicitly stating the assumptions of the risk analysis, as well as the assumptions behind the numerical determination of each risk factor. Since the time of Antoine Arnauld, the basic epistemic reality of risk factors has remained the same, but the modern risk analyst has an expanded toolset for risk characterization and mitigation.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Arnauld, A., & Nicole, P. (1662). Logic, or, the Art of Thinking: Containing, besides Common Rules, Several New Observations Appropriate for Forming Judgment (5th Ed.). Cambridge University Press.
[2] ATAM: Architecture Tradeoff Analysis Method (2018). Software Engineering Institute, Carnegie Mellon University.
[3] Bahill, A. T., & Botta, R. (2015). Fundamental Principles of Good System Design. Engineering Management Journal, 20, 9-17.
https://doi.org/10.1080/10429247.2008.11431783
[4] Bahill, A. T., & Karnavas, W. J. (2000). Risk Analysis of a Pinewood Derby: A Case Study. Systems Engineering, 3, 143-155.
https://doi.org/10.1002/1520-6858(200033)3:3<143::AID-SYS3>3.0.CO;2-0
[5] Bar-Asher, J. (2006). Development Program Risk Assessment Based on Utility Theory. INCOSE International Symposium, 16, 1634-1646.
https://doi.org/10.1002/j.2334-5837.2006.tb02839.x
[6] Becker, K., & Smidt, M. (2016). A Risk Perspective on Human Resource Management: A Review and Directions for Future Research. Human Resource Management Review, 26, 149-165.
https://doi.org/10.1016/j.hrmr.2015.12.001
[7] Blanchard, B. S., & Fabrycky, W. J. (2006). Systems Engineering and Analysis. Prentice-Hall.
[8] Botta, R., & Bahill, A. T. (2007). A Prioritization Process. Engineering Management Journal, 19, 20-27.
https://doi.org/10.1080/10429247.2007.11431745
[9] Buede, D. M. (2000). The Engineering Design of Systems: Models and Methods. John Wiley and Sons, Inc.
[10] Chapman, W. L., Bahill, A. T., & Wymore, A. W. (1992). Engineering Modeling and Design. CRC Press Inc.
[11] Clausen, D., & Frey, D. D. (2005). Improving System Reliability by Failure-Mode Avoidance Including Four Concept Design Strategies. Systems Engineering, 8, 245-261.
https://doi.org/10.1002/sys.20034
[12] Dangerfield, K. (2020). Moderna Designed Its Coronavirus Vaccine in 2 Days—Here’s How. Global News, Posted November 30, 2020.
https://globalnews.ca/news/7492076/moderna-coronavirus-vaccine-technology-how-it-works
[13] FAA: Federal Aviation Administration (2020). Airworthiness Directives: The Boeing Company Airplanes. Federal Register, 85, 74560-74593.
[14] FDA: Food & Drug Administration (2020). Moderna COVID-19 Vaccine: FDA Briefing Document. VRBPAC: Vaccines and Related Biological Products Advisory Committee.
[15] Franklin, B. (1772). Benjamin Franklin to Joseph Priestley, 19 September 1772. Founders Online, National Archives.
https://founders.archives.gov/documents/Franklin/01-19-02-0200
[16] Gigerenzer, G. (2002). Reckoning the Risk. Penguin Books.
[17] Haimes, Y. Y. (1999). Risk Management. In A. P. Sage, & W. B. Rouse (Eds.), Handbook of Systems Engineering and Management (pp. 137-174). John Wiley & Sons.
[18] House of Representatives, Committee on Transportation & Infrastructure (2020). Final Committee Report: The Design, Development & Certification of the Boeing 737 Max.
[19] JATR: Joint Authorities Technical Review (2019). Boeing 737 Max Flight Control System: Observations, Findings, and Recommendations.
[20] Juarez, I. (2020). Talk to Students at the University of Texas at El Paso.
[21] Kazman, R., Klein, M., & Clements, P. (2000). ATAM: Method for Architecture Evaluation. CMU/SEI-2000-TR-004, Carnegie Mellon Software Engineering Institute.
https://doi.org/10.21236/ADA382629
[22] Kerrissey, M. J., & Edmondson, A. C. (2020). What Good Leadership Looks Like during This Pandemic. Harvard Business Review, April 13.
https://hbr.org/2020/04/what-good-leadership-looks-like-during-this-pandemic
[23] Kirkwood, C. W. (1999). Decision Analysis. In A. P. Sage, & W. B. Rouse (Eds.), Handbook of Systems Engineering and Management (pp. 1119-1145). John Wiley & Sons.
[24] McCarney, R., Warner, J., Iliffe, S., Van Haselen, R., Griffin, M., & Fisher, P. (2007). The Hawthorne Effect: A Randomized, Controlled Trial. BMC Medical Research Methodology, 7, Article No. 30.
https://doi.org/10.1186/1471-2288-7-30
[25] McNulty, E. J., & Marcus, L. (2020). Are You Leading Through the Crisis ... or Managing the Response? Harvard Business Review, March 25, 2-5.
[26] Neilson, S., Dunn, A., & Bendix, A. (2021). Moderna’s Groundbreaking Coronavirus Vaccine Was Designed in Just 2 Days. Business Insider, Retrieved 1 April 2021.
https://www.businessinsider.com/moderna-designed-coronavirus-vaccine-in-2-days-2020-11
[27] Ostrower, J. (2018). What Is the Boeing 737 Max Maneuvering Characteristics Augmentation System. The Air Current, Retrieved March 14, 2019.
[28] Pearson, C. M., & Clair, J. A. (1998). Reframing Crisis Management. The Academy of Management Review, 23, 59-76.
https://doi.org/10.2307/259099
[29] Rechtin, E., & Maier, M. (2000). The Art of Systems Architecting. CRC Press LLC.

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.