Identifying Major Obstacles Impacting Strategy Execution in Large-Scale and Small-Scale Organizations in the U.S.
1. Introduction
Strategy is one of the core competencies of a successful organization (Pryor et al., 2007), yet "without successful implementation, a strategy is but a fantasy" (Hambrick & Cannella, 1989, p. 278). While designing a strategy is the first step toward organizational success, the real challenge lies in execution (Hrebiniak, 2006). Research indicates that 30% to 70% of strategies fail during implementation, making execution one of the most significant unresolved issues in management (Cândido & Santos, 2015, 2019). Many organizations struggle with translating strategy into action, resulting in wasted time, resources, and capital (Allio, 2005). This quantitative study seeks to identify the major obstacles impacting strategy execution in both small- and large-scale U.S. organizations. By investigating these barriers, the research aims to uncover opportunities for organizations to enhance their performance. Studies have shown that many managers excel in strategy formulation but encounter difficulties during implementation (Hrebiniak, 2006), with execution being more challenging than planning (Olson et al., 2005). A failure to address these barriers can lead to a breakdown in the execution process, which is a major contributor to performance discrepancies among firms (Greer et al., 2017).
Although much progress has been made in strategic management, strategy implementation remains underexplored compared to decision-making and planning (Hutzschenreuter & Kleindienst, 2006). Researchers such as Alexander (1985), Beer and Eisenstat (2000), and Kotter (1995) have identified several obstacles to implementation. This study expands on their work by collecting real-time data from business professionals in large- and small-scale organizations across the U.S. Through an online survey, the research examines the relationship between strategy execution obstacles and organizational performance indicators, including Financial, Customer, and Internal Performance (FPR, CPR, IPR). Additionally, the study investigates whether there are significant differences in these obstacles between small- and large-scale organizations. By focusing on the practical barriers to execution, this research aims to provide organizations with actionable insights to improve their strategy implementation processes and achieve better performance outcomes. The failure rate of strategy implementation remains a critical issue in organizational management, with estimates ranging from 50% to 90% (Gray, 1986; Kaplan & Norton, 2001a; Kiechel, 1982; Sirkin et al., 2005). Despite significant advancements in strategic management, the challenge of translating strategy into successful execution persists. Cândido and Santos (2015) highlighted that 73% of managers view strategy implementation as more complex than formulation, and 82% believe they have the least control over the execution process. This gap between planning and execution has led to widespread performance issues across organizations of all sizes. Strategic management has evolved significantly since its formal foundation in the 1960s (Amitabh, 2010; Hambrick & Chen, 2008). However, obstacles to implementation continue to hinder organizations' ability to achieve their strategic objectives. Studies have shown that effective execution relies on the collaboration between top management and middle management, with both parties needing to exchange critical information to ensure high-quality implementation (Raes et al., 2011). The objective of this research is to identify the major obstacles impacting strategy execution in both large- and small-scale organizations in the U.S. By focusing on these obstacles, the study seeks to provide insights into improving execution processes, addressing the literature gap on failed strategy implementations, and helping organizations achieve better outcomes. The research also explores how organizational learning and strategic management can positively influence implementation, contributing to future strategic success (Argyris, 1989; Dooley et al., 2000).
The Purpose and Significance of the Study
The purpose of this quantitative, correlational study is to identify and analyze the critical factors impacting strategy implementation and to explore the relationship between these obstacles and organizational performance indicators across both large-scale and small-scale organizations in the U.S. This research addresses a notable gap in the literature by investigating how obstacles to strategy execution, as identified by Köseoglu et al. (2018) and Cândido and Santos (2019), affect various performance metrics, including Financial Performance (FPR), Customer Performance (CPR), Internal Performance (IPR), and Overall Organizational Performance (OPR). By focusing on these factors, the study aims to uncover how challenges in strategy execution can be transformed into opportunities, providing organizations with actionable insights to enhance their strategic implementation processes. The significance of this study is underscored by the widespread challenge of successful strategy execution faced by corporate leaders globally. According to Sull et al. (2015), executional excellence remains a top challenge, with a significant percentage of large organizations struggling to implement strategies effectively. The high failure rates of strategic initiatives, estimated between 50-90% (Kaplan & Norton, 2001b; Mintzberg, 1994), highlight the urgent need for a more nuanced understanding and solutions in this area. This research contributes to the existing literature by establishing a model that links obstacles to strategy execution with organizational performance indicators, offering a practical framework for improving strategic outcomes. The study’s focus on both large and small-scale organizations is particularly relevant, as small businesses, which are vital to the economy through their contributions in employment, goods, services, exports, and innovation (Dreyer et al., 2017), are especially vulnerable to execution failures due to limited resources (Simsek et al., 2018). By providing a comprehensive analysis of the root causes of implementation challenges and proposing actionable strategies, this research aims to enhance organizational performance and support sustainable growth across various industries in the U.S.
2. Literature Review
2.1. Strategy Implementation
The approach toward strategy implementation research has changed in the past few years. Scholars have called for heightened attention to the topic, and this momentum has led strategy implementation to be listed as one of the critical topics in the organization theory and strategic management literature (de Oliveira et al., 2019; Kastanakis et al., 2019). Effective strategy implementation is a substantial component of organizational performance success and a potential source of competitive advantage. Although previous research on the subject produced a set of recommendations, case studies, and empirical results that provided insight into the topic, these bodies of work failed to produce a cohesive framework for identifying obstacles in strategy implementation and successfully overcoming them. Instead, research on strategy often treats implementation as a black box and overlooks sources of performance heterogeneity that derive from variations in the strategy execution process (Tawse & Tabesh, 2021). Although remarkable progress has been made in strategic management, the problems associated with the implementation of strategies in organizations persist. The high failure rate of strategy implementation is a critical and ongoing issue for research scholars and practitioners (Barney, 2001; Hickson et al., 2003; Mockler, 1995; Tawse & Tabesh, 2021). Despite significant progress in the strategic management field, one of the major unresolved management problems is the failure of strategy implementation efforts (Cândido & Santos, 2019).
2.2. Strategy Execution Models
Strategy execution is a critical process that involves translating strategic plans into actionable steps to achieve organizational goals. Various models and frameworks have been developed to assist managers and researchers in identifying obstacles to successful strategy implementation. These models offer a holistic approach but may require customization to address specific organizational needs and regional contexts. One of the most influential models is the Balanced Scorecard, proposed by Kaplan and Norton (2001a). This framework evaluates strategy implementation through key performance indicators across four perspectives: financial, customer, internal processes, and learning and growth. The Balanced Scorecard facilitates a cause-effect relationship between strategic goals and their outcomes, making it a popular tool for measuring the success of strategy implementation (Hourani, 2017). Another significant model is Pryor et al.’s (2007) 5P’s Conceptual Framework, which emphasizes the alignment and integration of strategic, tactical, and behavioral elements essential for effective strategy execution. The 5P’s—purpose, principles, processes, people, and performance—offer a comprehensive approach to addressing the complexities of strategy implementation, although adjustments may be necessary when dealing with external factors. Porter’s Generic Models (1980) provide a typology for strategic positioning, categorizing strategies into cost leadership, differentiation, and focus. While widely accepted, these models can be challenging to implement without considering the specific needs of the target customer group (Polo & Weber, 2010). Lastly, the McKinsey 7S Model, developed by Waterman et al. (1980), highlights seven critical factors—strategy, structure, systems, style, staff, skills, and shared values—that influence strategy execution. This model is often integrated with other frameworks, such as the Balanced Scorecard, to facilitate organizational change and strategy implementation (Hanafizadeh & Ravasan, 2011). These models provide valuable insights but often require tailored research to address unique organizational challenges effectively.
2.3. Strategy Implementation Obstacles
Cobbold et al. (2001) analyzed a 1997 survey of 200 companies in the "Times 1000". Although 80% of the company directors believed they had the right strategies, only 14% felt those strategies were being implemented well, and 86% of the executives reported problems translating their strategies into reality. Alexander (1985) stated that the key reason for implementation failure was that company executives, managers, and supervisors were not equipped with practical models for the implementation process. Alexander further argued that, without adequate models, executives tried to execute strategies without understanding the multiple complex factors that make execution work (Okumus, 2003). Rumelt (2011) analyzed the widely cited Fortune research and found that less than 10% of well-formulated strategies were effectively executed, concluding that a less brilliant strategy that is well executed is better than an excellent strategy that is partially executed or never executed at all.
2.4. Literature Gap
A thorough review of qualitative and quantitative research on strategy implementation reveals key gaps that need to be addressed. Qualitative studies have provided valuable insights into the complexities of executing strategies, often focusing on categorizing obstacles and identifying factors that influence success. For example, Yang et al. (2010) identified nine key obstacles, but their study did not quantify the relative importance of these factors. Similarly, Cândido and Santos (2019) highlighted 22 obstacles but did not prioritize them. While these studies offer frameworks for understanding strategy execution challenges, their lack of quantitative data limits their ability to rank obstacles by significance. In contrast, quantitative research has aimed to validate qualitative findings and test assumptions, but gaps remain in providing empirical evidence. Noble and Mokwa (1999) explored the behavior of middle managers and its impact on strategy implementation, while Skivington and Daft (1991) emphasized the need to bridge framework and process views in strategy implementation. Despite these efforts, many studies, including those by Minarro-Viseras et al. (2005) and Kohtamäki et al. (2012), have been limited by industry-specific contexts, reducing their generalizability. The recurring theme across both qualitative and quantitative research is the need for empirical validation. While qualitative studies emphasize the complexity of strategy execution, quantitative research has not yet fully explored or ranked the relative importance of identified obstacles. This gap highlights the need for future research that integrates both qualitative insights and quantitative data to better understand and prioritize the challenges of strategy implementation across different organizational contexts. In conclusion, while previous studies have contributed to understanding strategy execution, a major gap exists in the quantitative validation of the factors influencing successful implementation. Addressing this gap will be crucial for advancing strategic management research and providing more actionable insights for organizations.
The literature underscores the critical role of strategy implementation in achieving organizational success. Pryor et al. (2007) highlight the necessity of robust execution for reaching organizational objectives, while Xue et al. (2005) and Yang (2019) report high failure rates and performance losses during strategy implementation, indicating persistent challenges. Despite this, existing research often lacks clarity on what constitutes effective implementation. While theoretical frameworks dominate the literature (Bonoma, 1984; Lee & Puranam, 2016), there is a notable lack of empirical studies that identify and categorize obstacles to strategy implementation. Crittenden and Crittenden (2008) argue that current research provides broad insights but fails to pinpoint critical factors for successful implementation. This gap is particularly pronounced in U.S. organizations, despite their prominence in strategy research (Alharthy et al., 2017). Creasap (2011) surveyed organizations in the southern U.S. but focused narrowly on federal agencies, neglecting a broader range of organizational contexts and sizes. This limitation highlights the need for comprehensive studies that address both large-scale and small-scale U.S. organizations. Additionally, the evolving nature of management practices necessitates updated research. Hrebiniak (2006) and Kalali et al. (2011) point out that organizational culture significantly impacts strategy execution, yet studies specific to the U.S. context are limited. Radomska (2014) and Gosselin (2005) explored the correlation between obstacles and company performance, but their research did not differentiate between various performance indicators or provide a U.S.-centric analysis. Bhatti et al. (2013) and Radomska (2014) reveal that linking strategy execution obstacles with performance indicators remains scarce. While literature identifies numerous obstacles to strategy implementation, there is a critical need for more focused research. Future studies should provide detailed insights into the relationship between these obstacles and organizational performance, particularly within the diverse context of U.S. organizations.
2.5. Research Questions
The following research questions guided the study. For each research question, both the null and alternative hypotheses were formulated.
1) Is there a statistically significant relationship between the barriers to strategic implementation and performance indicators for organizations—Financial Performance Related (FPR), Customer Performance Related (CPR), and Internal Performance Related (IPR)?
Ho1a. There is no statistically significant relationship between Strategy Implementation Obstacles and Financial Performance Related (FPR) indicators.
Ha1a. There is a statistically significant relationship between Strategy Implementation Obstacles and Financial Performance Related (FPR) indicators.
Ho1b. There is no statistically significant relationship between Strategy Implementation Obstacles and Customer Performance Related (CPR) indicators.
Ha1b. There is a statistically significant relationship between Strategy Implementation Obstacles and Customer Performance Related (CPR) indicators.
Ho1c. There is no statistically significant relationship between Strategy Implementation Obstacles and Internal Performance Related (IPR) indicators.
Ha1c. There is a statistically significant relationship between Strategy Implementation Obstacles and Internal Performance Related (IPR) indicators.
2) Is there a statistically significant difference in strategy implementation obstacles between small-scale organizations and large-scale organizations?
Ho2. There is no statistically significant difference in strategy implementation obstacles between small-scale organizations and large-scale organizations.
Ha2. The obstacles to strategy implementation are significantly different for large-scale and small-scale organizations.
3. Methodology
3.1. Research Design
A quantitative correlational design methodology was used to identify the relationship between strategy implementation obstacles and performance indicators of the organization. Structural Equation Modeling (SEM) was used for statistical analysis, as SEM enables researchers to examine the relationships among variables visually (Wong, 2013). SEM allows latent constructs and related variables to be analyzed simultaneously in a structural model to evaluate the variables' relationships with other variables (Wolf et al., 2013). In this case, the independent variables (strategy implementation obstacles) are related to the performance indicators of the organization. Data were collected from managers and senior executives from small- and large-scale organizations in the U.S. For Research Question One, obstacles to strategy execution represented the independent variables' construct, and organizational performance indicators represented the dependent variables' construct. This quantitative study utilized a validated electronic survey previously used by Köseoglu et al. (2018) in their study to collect quantitative data from managers, C-level employees, and leaders to identify the significant factors that hinder the strategy implementation process. In using a survey, researchers can create a "numeric description of trends, attitudes or opinions of a population" (Creswell, 2009, p. 145). With permission, however, the survey was modified to represent large-scale and small-scale organizations rather than the hotel industry. Once Köseoglu et al. (2018) granted approval to reproduce the survey, the data collection methodology was finalized, and an online survey was created in preparation for obtaining approval from the University of the Cumberlands and the Institutional Review Board (IRB) (see Appendix A) to conduct the study. An electronic survey built in Microsoft Forms was used to collect data from managers, leaders, and senior executives across the United States. The survey was self-administered and distributed to participants online through LinkedIn and Amazon Mechanical Turk (MTurk). When survey respondents self-report through a survey, the data can capture the perceptions and trends of a large group of people (Creswell, 2009). This research study identified factors/obstacles impacting strategy implementation and their impact on organizational performance indicators with the aid of a survey instrument (Table 1).
Table 1. Strategy implementation obstacles.
Number | Strategy Implementation Obstacles
SIO1 | Not carrying out a comprehensive strategic analysis in decision making
SIO2 | Frequent interventions by owners/management to business operations
SIO3 | Not considering all key stakeholders
SIO4 | Lack of a comprehensive strategic plan
SIO5 | Lack of sufficient training needed for implementation
SIO6 | Focusing on only financial performance
SIO7 | Not having effective evaluation/control systems
SIO8 | Lack of support from senior managers for implementation
SIO9 | Lack of full commitment from employees
SIO10 | Low level of motivation of employees
SIO11 | Lack of consensus among decision-makers
SIO12 | High turnover among senior managers
SIO13 | Low level of motivation of managers
SIO14 | Lack of fit between organizational culture & the strategic decision
SIO15 | Middle managers & employees not fully understanding the decision
SIO16 | Unsuitable leadership style during implementation
SIO17 | Lack of fit between the organization’s overall goals & the decision
SIO18 | Lack of a clear vision & goals by the organization
SIO19 | Multiple decisions/projects are being implemented at the same time
SIO20 | Conflicts among senior managers
SIO21 | Conflicts among departments
SIO22 | Lack of fit between organizational structure & the strategic decision
SIO23 | Turnover among middle managers
SIO24 | Short-term thinking by the owner(s)
SIO25 | Fear of insecurity among employees
SIO26 | Lack of necessary skills by employees
SIO27 | Disagreements among owners & managers
SIO28 | Poor decision-making skills of owners & managers
SIO29 | Heavy bureaucracy within the organization
SIO30 | Owners & managers often change their strategic decisions
SIO31 | Conflicts & disagreements among employees
SIO32 | Insufficient resources needed for implementation
SIO33 | Unexpected changes within the organization during the implementation
SIO34 | Time limitation
SIO35 | Need more resources than originally planned
SIO36 | Resistance from employees
SIO37 | Resistance from departments
SIO38 | Lack of technology to implement the strategic decision
SIO39 | Unexpected changes in the external environment
SIO40 | Need more time for implementation than originally planned
SIO41 | Strategic decision not offering anything valuable to employees
Note. The obstacles to the implementation of the strategy and factors were identified from multiple research studies. Adopted from “A Study on the Causes of Strategies Failing to Success,” by Köseoglu et al., 2009, Journal of Global Strategic Management, 3, 2, 77-91 and “Corporate sustainability strategy bridging the gap between formulation and implementation,” by Engert & Baumgartner, 2016, Journal of Cleaner Production, 113, 822-834.
3.2. Sampling Procedures
Based on the nature of the research study, the sample population for the survey targeted managers, directors, and senior executives in small- and large-scale organizations within the United States. Given these population criteria, random sampling was selected. Under random sampling, no eligible survey participant or group has a better chance than any other of being selected, and distinct samples have an equal chance of being drawn (Cochran, 1977). This approach provides a better estimate of population parameters than purposive sampling because every unit in the sampling frame has a known, non-zero, preassigned probability of inclusion in the sample. Random sampling therefore provides an unbiased and more precise estimate of the parameters, especially when the population in the scope of the research study is homogeneous (Singh & Masuku, 2014).
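As an illustration of this equal-probability property, the following sketch draws a simple random sample without replacement from a hypothetical sampling frame; the frame contents and size are assumptions, not the study's actual frame.

```python
# Illustrative simple random sample: every unit in the hypothetical sampling
# frame has the same known, non-zero probability of being selected.
import numpy as np

rng = np.random.default_rng(7)
sampling_frame = [f"manager_{i}" for i in range(1, 1001)]      # hypothetical frame
sample = rng.choice(sampling_frame, size=128, replace=False)   # equal-probability draw
print(len(sample), sample[:3])
```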
The target population for the survey included managers, senior executives, and leaders currently employed in both small-scale and large-scale organizations in the U.S. There were 31.7 million small-scale organizations and 20,139 large-scale organizations in the U.S. (SBA, 2019), and approximately 18,986,000 managers and senior executives were employed in the United States (U.S. Bureau of Labor Statistics [BLS], 2022). Approval to conduct the research was granted by the University of the Cumberlands and the Institutional Review Board (IRB) (see Appendix A). The target population was reached through two different sources. The first was LinkedIn, where a notification explaining the survey, requesting participation, and providing a link to the survey was sent through the messaging system or posted in professional networking groups. Before proceeding with the survey, participants were given information on the anonymity and confidentiality of the survey and then provided informed consent by agreeing or declining to proceed. The second source was Amazon Mechanical Turk (MTurk), where the location requirement was set to the United States and the informed consent statement was relied on to ensure that only managers, senior executives, and leaders completed the survey. The survey consisted of 70 items and took approximately 15 minutes to complete. There were no other inclusion or exclusion criteria.
The sample size for a comparative quantitative design was determined and verified using G*Power. The G*Power 3.1 program was used to generate a sample size of 128 for t-tests, with the statistical test selected as the difference between two independent means (two groups), yielding an expected group size of 64. Power (1 − β error probability) was set to .80, the significance level to p < 0.05, and the test was two-tailed. Tabachnick et al.'s (2007) rule of thumb suggests 100 participants plus the number of independent variables, which would make the required sample size 140. For a non-experimental correlational design, the minimum required sample size is greater than 60 (medium effect; p < 0.05). However, for the purpose of this study, the sample size of 128 generated by the G*Power 3.1 program was used. This research study focused on both large-scale and small-scale organizations and defined the "size" of an organization in line with the U.S. Small Business Administration's (SBA, 2019) definition of a small business. SBA's Table of Size Standards provides size definitions for North American Industry Classification System (NAICS) codes; these standards vary widely by industry, revenue, and employment. Small businesses are those operating as partnerships, sole proprietorships, or corporations, usually with 500 or fewer employees (SBA, 2019). Sepulveda (2021) likewise stated that a small business is a company with under 500 employees, and the U.S. General Services Administration (2018, 2019) also defined small businesses as having 500 or fewer employees. For the purpose of this study, the same standard was used to identify small-scale organizations across all industries. The survey population was divided into small-scale or large-scale based on the number of employees; organizations that did not fall into the small-scale category were treated as large-scale organizations.
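For reproducibility, the sample-size calculation described above can be approximated in Python; the effect size is an assumption (a medium effect, d = 0.5, recovers the reported 64 participants per group), and statsmodels is used here in place of G*Power.

```python
# Reproduce the G*Power sample-size calculation described above
# (two independent means, two-tailed, alpha = .05, power = .80).
# NOTE: the effect size is an assumption; d = 0.5 (medium) yields the
# reported 64 participants per group (128 total).
import math
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05,
                                   power=0.80, ratio=1.0,
                                   alternative='two-sided')
print(math.ceil(n_per_group))        # -> 64 per group
print(2 * math.ceil(n_per_group))    # -> 128 total
```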
3.3. Data Collection Sources
This research study adopted the survey from Köseoglu et al.'s (2018) study, which developed the instrument to determine the obstacles to strategy implementation. The purpose of Köseoglu et al.'s (2018) study was to examine potential barriers to implementing strategic decisions in hotels in Antalya, Turkey. In order to determine the underlying structure of the survey measurement framework (Brown, 2006), the 41-item instrument was subjected to an Exploratory Factor Analysis (EFA) via SPSS for Windows, Release Version 23.0 (Köseoglu et al., 2018). A four-step process was followed to determine the dataset's suitability for EFA, starting with Cronbach's alpha, which was 0.944, above the required threshold of 0.70. The multicollinearity check returned a value above zero, indicating the absence of problematic multicollinearity. Bartlett's test of sphericity, conducted to confirm patterned relationships in the dataset, returned a significance value of less than 0.05 (Köseoglu et al., 2018). The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy was 0.828, above the cut-off point of 0.50. Finally, the individual diagonal elements (the 'a'-superscripted values) were greater than 0.50. Thus, the survey instrument and dataset were suitable for EFA (Köseoglu et al., 2018). Permission to adopt the survey was granted by the author, along with permission to reproduce it electronically, to modify it to suit the target industry, and to add a demographic parameter identifying small-scale and large-scale organizations. Köseoglu et al. (2018) emphasized the survey's credibility by highlighting how it was developed from a critical literature review. The first 41 statements of the survey were adapted from previous strategy implementation studies (Alashloo et al., 2005; Alexander, 1985; Augier & Teece, 2009; Brenes et al., 2008; Helfat & Martin, 2015; Hrebiniak, 2006; Kargar & Blumenthal, 1994; Köseoğlu et al., 2009; Miller et al., 2004; Noble, 1999; Okumus, 2003; Rapert et al., 2002). The survey tool's grounded and tangible value is reflected in the participation of managers and senior-level executives who deal directly with strategy execution challenges. Successful implementation of the strategy was assessed using a five-point scale ranging from 1 (strongly disagree) to 5 (strongly agree) (Al-Ghamdi, 1998; Harrington & Kendall, 2006). Scandura and Williams (2000) stated that surveys have been found to maximize generalizability but tend to be low in realism and precision of measurement. However, using a survey tool grounded in the experiences of executives participating in strategy execution raises the prospect of this survey rating high on both generalizability and realism (Creasap, 2011).
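The suitability checks reported by Köseoglu et al. (2018) were run in SPSS; the sketch below illustrates how comparable statistics (Cronbach's alpha, Bartlett's test, and KMO) could be computed in Python. The file name and item-naming pattern are hypothetical.

```python
# Illustrative re-creation of the EFA suitability checks: Cronbach's alpha,
# Bartlett's test of sphericity, and the KMO measure. The file name and the
# SIO item-naming pattern are hypothetical.
import pandas as pd
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                             calculate_kmo)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances)/variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

responses = pd.read_csv("survey_responses.csv")          # hypothetical file
sio_items = responses.filter(regex=r"^SIO\d+$")

print("Cronbach's alpha:", round(cronbach_alpha(sio_items), 3))   # expect > 0.70
chi2, p_value = calculate_bartlett_sphericity(sio_items)
print("Bartlett's test p-value:", p_value)                        # expect < 0.05
kmo_per_item, kmo_overall = calculate_kmo(sio_items)
print("Overall KMO:", round(kmo_overall, 3))                      # expect > 0.50
```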
3.4. Recruitment and Distribution
The data recruitment and distribution process for the study was meticulously structured in three parts, leveraging mixed methods to enhance the robustness of findings, as suggested by Strauss and Corbin (1998). The first stage involved direct recruitment, where participants who expressed interest were provided a Microsoft Forms link to complete the survey. Participants were required to consent before proceeding, ensuring ethical standards were met. The second stage of recruitment utilized LinkedIn, a prominent professional social media platform with over 530 million users globally (LinkedIn, 2017). Faris and Moore (2017) demonstrated the efficacy of social media in professional contexts, with 69% of surveyed professionals using platforms like LinkedIn for both personal and professional purposes. This platform was crucial for targeting a broad, relevant audience for the study. The third recruitment method employed Amazon Mechanical Turk (MTurk), a well-regarded online labor market platform. MTurk has been validated by several studies (Metz, 2020; Hunt & Scheetz, 2019; Goodman & Paolacci, 2017) for its ability to provide diverse, high-quality data. Participants were selected based on specific criteria, such as being U.S.-based managers or senior executives, and were required to consent to the study before participation. For data distribution, once IRB approval was secured, participants accessed the survey via Microsoft Forms, either directly or through shared links on social media and MTurk. Importantly, no personal identifying information was collected, and the anonymity of participants was maintained throughout the process. Microsoft Forms does not store IP addresses, and MTurk employs worker IDs to mask personal identities, further ensuring participant confidentiality. Upon reaching the desired sample size, survey data were downloaded in Excel format and securely stored on a password-protected computer for processing. Statistical analysis was conducted using JASP software, with all data anonymized prior to analysis. No identifying information about organizations was collected beyond their size, as per SBA standards. This comprehensive approach to data recruitment, distribution, and processing ensured a high level of data integrity and participant confidentiality, providing a solid foundation for subsequent statistical analysis and interpretation.
3.5. Statistical Tests
This study examined the relationship between strategy implementation obstacles and Financial Performance Related (FPR) indicators, and it also focused on identifying statistically significant differences in strategy implementation obstacles between small-scale and large-scale organizations. The research questions presented earlier were formed to examine the factors, proposed as strategy implementation obstacle variables, that lead to poor strategy implementation. Structural equation modeling using SmartPLS software was utilized for this study. Structural Equation Modeling (SEM) allows researchers to examine relationships between observable variables and unobservable (latent) constructs (Hair et al., 2019; Wong, 2013). Additionally, SEM is a statistical tool used in sustainability research to test assessment systems and explore relationships among variables (Yuan et al., 2018). SEM combines factor analysis and multiple regression (Setyawan et al., 2018) and is considered appropriate when theoretical frameworks are tested from a prediction perspective (Hair et al., 2019). When latent variables are included within the conceptual model, the PLS-SEM multivariate analysis method can simultaneously analyze multiple latent variables that cannot be measured directly (Hair et al., 2014a). Additionally, Iacobucci (2010) stated that SEM is a prominent tool for researchers to examine the effect of independent variables on dependent variables.
This statistical analysis method does not impose distributional assumptions on the data, so a normality test was not required when using PLS-SEM (Hair et al., 2019). SmartPLS also provides a descriptive analysis of the collected data as a sample summary. Hair et al. (2014b) stated that, compared to covariance-based structural equation modeling (CB-SEM), PLS-SEM maintains greater statistical power and is more likely to identify statistically significant relationships among the variables present in a given sample data set. Furthermore, because non-normal data can produce skewed distributions, the bias-corrected and accelerated (BCa) bootstrapping functionality of SmartPLS, with 5,000 samples, can be applied to the research data to adjust path coefficients for skewness (Hair et al., 2019). Researchers are also able to direct the statistical software during the confirmatory factor analysis process in SEM (Saxena, 2011). Furthermore, Iacobucci (2010) noted that SEM has become prominent among researchers because it compares relationships simultaneously and efficiently, unlike individual regression analyses. The research questions guided this quantitative study and were investigated to identify the major obstacles to strategy implementation in the United States and their relationship with organizational performance indicators, thereby addressing the gap in the literature. To analyze Research Question One, SEM was used to determine the relationship between strategy implementation obstacles and organizational performance indicators, with the level of significance set at .05; the strategy implementation obstacles were the independent constructs, and the organizational performance indicators were the dependent constructs. For Research Question Two, an independent samples t-test, with the level of significance set at .05, determined the difference in participants' responses regarding strategy implementation obstacles in small-scale and large-scale organizations. Because this question focused on identifying the difference between two group means, an independent samples t-test was the appropriate test.
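To illustrate how bootstrapping yields significance tests for path coefficients, the following simplified sketch applies a plain percentile bootstrap to a single standardized path using simulated scores. SmartPLS itself re-estimates the full PLS model on each of the 5,000 BCa bootstrap samples, so this is only a conceptual approximation with hypothetical data.

```python
# Simplified illustration of how bootstrapping yields a significance test for a
# path coefficient. SmartPLS applies BCa bootstrapping (5,000 samples) to the
# full PLS model; this sketch uses a plain percentile bootstrap on a single
# standardized path for clarity, with hypothetical score vectors.
import numpy as np

rng = np.random.default_rng(42)

def path_coefficient(x: np.ndarray, y: np.ndarray) -> float:
    """Standardized bivariate path coefficient (equals Pearson's r here)."""
    return np.corrcoef(x, y)[0, 1]

def bootstrap_ci(x, y, n_boot=5000, alpha=0.05):
    n = len(x)
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)     # resample respondents with replacement
        estimates[b] = path_coefficient(x[idx], y[idx])
    lower, upper = np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lower, upper

# Hypothetical latent-variable scores for SIO and FPR
sio_scores = rng.normal(size=346)
fpr_scores = 0.6 * sio_scores + rng.normal(scale=0.8, size=346)

lo, hi = bootstrap_ci(sio_scores, fpr_scores)
print(f"95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")   # CI excluding 0 -> significant at .05
```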
4. Data Analysis
4.1. Analysis of Research Questions
The survey data was descriptively analyzed to determine the count (N), minimum and maximum values, mean, and standard deviation (SD) for each variable, using JASP software. All 346 participants answered every question, and the variables were measured on a five-point Likert scale, with values ranging from one to five. The mean was used to evaluate the dataset’s distribution, and the SD assessed the variability of responses relative to the mean. The study examined 41 obstacles to strategy implementation and four indicators of organizational performance. To facilitate statistical analysis, categorical survey data was converted into numerical data. For analysis using PLS-SEM, the measurement model’s reliability and validity were assessed following the guidelines by Hair et al. (2019). Reflective and formative constructs were categorized based on the direction of their connector arrows relative to the latent variable, as outlined by Hair et al. (2014a).
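A descriptive summary equivalent to the JASP output described above can be produced with pandas; the file name and column-name pattern below are assumptions.

```python
# Descriptive summary comparable to the JASP output: count, minimum, maximum,
# mean, and SD for each five-point Likert item. File/column names are hypothetical.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")                  # hypothetical file
likert_items = responses.filter(regex=r"^(SIO|FPR|CPR|IPR|OPR)\d+$")

summary = likert_items.agg(['count', 'min', 'max', 'mean', 'std']).T
print(summary.head())
```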
4.2. Analysis of Measurement Models
Classifying the constructs as formative or reflective is critical for SEM (Hanafiah, 2020). In a reflective model, the latent construct exists independently of its measures, whereas a formative model's latent construct depends on a constructive, operational, or instrumental interpretation. Because the formative model defines the latent variable, its indicators are not interchangeable (Bollen & Diamantopoulos, 2017). Additionally, in reflective models, the flow of causality runs from the construct to the indicators, so any change in the construct is followed by a change in the indicators; in formative models, by contrast, changes in the indicators produce changes in the construct (Hanafiah, 2020). In a reflective model, the inclusion or exclusion of one or more indicators from the domain does not affect the content validity of the construct, whereas in a formative model, the conceptual meaning of the construct can change when an indicator is added or removed (Wang et al., 2015). Because the current model focuses on the relationship between strategy implementation obstacles and organizational performance indicators, and because adding or removing individual indicators of these constructs does not change their conceptual meaning, the constructs in this study were categorized as reflective.
4.3. Assessment of Reflective Measurement Models and Constructs
The reflective measurement models and corresponding constructs are analyzed by assessing outer loadings, internal consistency reliability, convergent validity, and discriminant validity (Hair et al., 2019). Analyzing the measurement model demonstrates the factor loadings and establishes the reliability and validity of each applicable variable. In general, factor loadings above 0.708 are recommended; Hair et al. (2019) stated that loadings over this threshold "indicate[s] that the construct explains more than 50 percent of the indicator's variance, thus providing acceptable item reliability" (p. 8). Furthermore, it is recommended that the construct be considered in its entirety, as a low factor loading is not by itself grounds for elimination (Hair et al., 2019), and for exploratory research a loading of 0.4 or higher is acceptable (Hulland, 1999). The reflective constructs used to measure the current research model included Strategy Implementation Obstacles, Financial Performance Related indicators, Customer Performance Related indicators, and Organization Performance Related indicators. The outer loading values were examined to assess these constructs and are shown in Table A1 (see Appendix A). Of the 41 indicators of the Strategy Implementation Obstacles construct, 28 with outer loading values less than 0.60 were removed; thus, 13 critical obstacles were retained for the SEM. Similarly, one of the Financial Performance Related indicators (FPR3) was removed because its factor loading was less than 0.60. After removing the indicators with the lowest outer loading values (less than 0.60), the composite reliability and average variance extracted metrics of the related constructs increased (Hair et al., 2014b). The revised outer loadings are shown in Table 2.
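The indicator-retention step can be expressed as a simple filter on the outer loadings; in this sketch the 0.60 cut-off applied above is used, and all loading values except SIO6's are hypothetical.

```python
# Indicator-retention step: keep only indicators whose outer loadings meet the
# 0.60 cut-off. Except for SIO6 (0.679, from Table 2), the loadings are hypothetical.
outer_loadings = {"SIO6": 0.679, "SIO7": 0.41, "SIO8": 0.55}   # illustrative values
retained = {item: loading for item, loading in outer_loadings.items() if loading >= 0.60}
print(retained)   # indicators carried forward to the structural model
```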
The internal consistency reliability of reflective measurement models is demonstrated by analyzing each construct's Composite Reliability, Cronbach's Alpha, and Rho A (ρA) metrics (Hair et al., 2019). Composite reliability values are expected to fall between 0.70 and 0.90, except in exploratory research, where values from 0.60 to 0.70 are acceptable (Hair et al., 2019). A Cronbach's Alpha value greater than 0.70 is recommended to ensure the reliability of each construct (Hair et al., 2019). As a generally accepted rule, a Cronbach's Alpha value between 0.6 and 0.7 indicates an acceptable level of reliability, and 0.8 or greater is a very good level (Ursachi et al., 2015). However, values higher than 0.95 are not necessarily good, as they might indicate redundancy (Hulin et al., 2001).
Traditional research used Cronbach's alpha to measure the internal consistency reliability of data, but it tends to provide a conservative measurement in PLS-SEM, and Composite Reliability has been suggested as a replacement (Bagozzi & Yi, 1988; Hair et al., 2012). Composite Reliability values larger than 0.6 demonstrate high internal consistency reliability (Wong, 2013); Bagozzi and Yi (1988) suggested that Composite Reliability should be 0.7 or higher, although 0.6 or higher is acceptable for exploratory research. Examining the constructs' convergent validity through the Average Variance Extracted (AVE) metric is also critical in assessing the reliability of reflective measurement models (Hair et al., 2019). The AVE is expected to be equal to or greater than 0.50, indicating that a construct explains 50 percent or more of its items' variance (Hair et al., 2019).
Table 2. Reflective measurement constructs’ outer loadings.
Reflective Measurement Construct | Indicator | Outer loadings post removal of indicators
Customer Performance Related | CPR1 | 0.627
 | CPR2 | 0.627
 | CPR3 | 0.604
 | CPR4 | 0.791
Financial Performance Related | FPR1 | 0.638
 | FPR2 | 0.731
 | FPR4 | 0.615
 | FPR5 | 0.727
 | FPR6 | 0.687
 | FPR7 | 0.627
 | FPR8 | 0.733
Internal Performance Related | IPR1 | 0.681
 | IPR2 | 0.650
 | IPR3 | 0.623
 | IPR4 | 0.714
Strategy Implementation Obstacles | SIO6 | 0.679
 | SIO13 | 0.640
 | SIO20 | 0.616
 | SIO21 | 0.798
 | SIO24 | 0.643
 | SIO25 | 0.620
 | SIO26 | 0.764
 | SIO31 | 0.680
 | SIO34 | 0.669
 | SIO36 | 0.778
 | SIO37 | 0.730
 | SIO38 | 0.678
 | SIO40 | 0.603
However, according to Hair et al. (2017), if the composite reliability of a construct is higher than 0.6, an AVE of 0.4 can still be accepted and convergent validity remains adequate. The AVE values of all reflective measurement constructs were greater than 0.40, therefore displaying convergent validity among the reflective constructs; these values are shown in Table 3. Because the composite reliability of all constructs was greater than 0.75, composite reliability was established (see Table 3). Additionally, as the Cronbach's Alpha for all constructs was greater than 0.70, the constructs were determined to be reliable (Hair et al., 2019).
Table 3. Cronbach's alpha, composite reliability, and AVE values of reflective measurement constructs.
Reflective Measurement Construct | Cronbach's alpha | Composite reliability (rho_a) | Average variance extracted (AVE)
CPR | 0.761 | 0.770 | 0.444
FPR | 0.858 | 0.861 | 0.464
IPR | 0.765 | 0.765 | 0.446
SIO | 0.921 | 0.923 | 0.472
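The reliability and convergent-validity metrics in Table 3 derive from the outer loadings. The sketch below computes the classical composite reliability and AVE for the CPR construct from its Table 2 loadings; note that SmartPLS reports rho_A, which differs slightly from the classical composite reliability formula shown here, while the AVE calculation reproduces the 0.444 reported for CPR.

```python
# Classical composite reliability and AVE computed from outer loadings.
# SmartPLS reports rho_A (Table 3), which differs slightly from the classical
# CR formula below; the AVE calculation reproduces the 0.444 reported for CPR.
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    l = np.asarray(loadings)
    error_vars = 1 - l ** 2               # standardized indicator error variances
    return l.sum() ** 2 / (l.sum() ** 2 + error_vars.sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    l = np.asarray(loadings)
    return float(np.mean(l ** 2))

cpr_loadings = [0.627, 0.627, 0.604, 0.791]      # CPR outer loadings from Table 2
print(round(composite_reliability(cpr_loadings), 3))       # classical CR, ~0.76
print(round(average_variance_extracted(cpr_loadings), 3))  # 0.444, matching Table 3
```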
The remaining analyses of the constructs sought to establish discriminant validity, for which the Heterotrait-Monotrait Ratio (HTMT) was used. HTMT is the most conservative of the available criteria for assessing discriminant validity, achieving the lowest specificity rates across all simulation conditions (Henseler et al., 2015). Henseler et al. (2015) proposed a threshold value of 0.90 for structural models with conceptually similar constructs; higher HTMT values indicate discriminant validity problems. All of the latent constructs in scope met the HTMT < 0.90 threshold except CPR, whose HTMT value was slightly above the threshold (0.911). Rönkkö and Cho (2022) stated that a large correlation does not always indicate a discriminant validity problem, especially when one is expected based on theory or previous empirical observations. Furthermore, deleting CPR would not be compatible with the theoretical reasoning for its inclusion in the model. According to the balanced scorecard approach of Kaplan and Norton (2004), financial measures alone are not an ideal representation of organizational performance; performance should also include non-financial indicators, such as customer satisfaction, internal business processes, and organizational capacity, alongside financial indicators (Albertsen & Lueg, 2014; Francioli & Cinquini, 2014; Jakobsen & Lueg, 2014). The HTMT values of the constructs are shown in Table 4. Similarly, several studies have argued that strategy-performance research should incorporate multiple performance measures, including both financial and non-financial indicators, as the choice of these measures influences conclusions about the relationship between strategy and performance (Cavalieri et al., 2007; Hillman & Keim, 2001; Parnell et al., 2006; Pongatichat & Johnston, 2008; Ryan, 2015). Although there were initially four organizational performance indicator constructs, the Organization Performance Related (OPR) construct was dropped because it did not meet the Cronbach's alpha and discriminant validity requirements: its Cronbach's alpha was 0.684 and its HTMT value was 1.039, above the threshold, so it was removed from the list of SEM latent constructs.
Table 4. Discriminant validity of reflective measurement constructs.
 | Alpha | CPR | FPR | IPR
Alpha | | | |
CPR | 0.582 | | |
FPR | 0.614 | 0.835 | |
IPR | 0.652 | 0.911 | 0.838 |
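For transparency, an HTMT ratio such as those reported in Table 4 can be computed directly from the item correlation matrix, as in the minimal sketch below (following Henseler et al., 2015). The data file and the choice of construct pairing are illustrative; this is not the SmartPLS routine.

```python
# Minimal HTMT sketch (Henseler et al., 2015): mean heterotrait correlation
# divided by the geometric mean of the two constructs' mean monotrait
# correlations. Data file and construct pairing are illustrative.
import numpy as np
import pandas as pd

def htmt(data: pd.DataFrame, items_a: list, items_b: list) -> float:
    corr = data.corr().abs()
    heterotrait = corr.loc[items_a, items_b].to_numpy().mean()

    def mean_monotrait(items):
        block = corr.loc[items, items].to_numpy()
        return block[np.triu_indices_from(block, k=1)].mean()

    return heterotrait / np.sqrt(mean_monotrait(items_a) * mean_monotrait(items_b))

data = pd.read_csv("survey_responses.csv")       # hypothetical file
value = htmt(data, ["CPR1", "CPR2", "CPR3", "CPR4"],
             ["IPR1", "IPR2", "IPR3", "IPR4"])
print(round(value, 3))                           # compare against the 0.90 threshold
```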
4.4. Analysis of Structural Model
Once the threshold values were achieved for the measurement model, the next step was to assess the structural model, which was analyzed using PLS-SEM and interpreted following the recommendations of Hair et al. (2019). The structural model was examined using indicator collinearity, R2 values, f2 effect sizes, Q2 values, and the statistical significance and relevance of path coefficients (Hair et al., 2019). The model was first assessed for collinearity using the Variance Inflation Factor (VIF) metric; all VIF values between predictor and dependent constructs fell well below the recommended value of 3, indicating collinearity within acceptable limits. Secondly, the structural model was examined for R2 values to determine the explained variance of the endogenous constructs. Hair et al. (2014a) stated that the R2 value, also known as the coefficient of determination, represents an exogenous latent variable's effect on a connected endogenous latent variable. Higher R2 values indicate greater explanatory power, with values ranging from 0 to 1; R2 values of 0.75, 0.50, and 0.25 can be considered substantial, moderate, and weak, respectively (Henseler et al., 2009; Hair et al., 2014b). Raithel et al. (2012) stated that acceptable R2 values depend on the context, and in some instances an R2 value as low as 0.10 is considered satisfactory. The R2 values of the constructs are listed in Table 5.
Table 5. R2 values of endogenous constructs.
Reflective Measurement Construct | R-square
CPR | 0.345
FPR | 0.381
IPR | 0.430
R2 is a function of the number of predictor constructs; the more predictor constructs there are, the higher the R2 value. Hence, R2 must be interpreted in the context of the study, especially relative to related studies and models of similar complexity. The R2 values of the organizational performance indicators were between 0.345 and 0.430: 0.345 for Customer Performance Related (CPR), 0.381 for Financial Performance Related (FPR), and 0.430 for Internal Performance Related (IPR), which, by the thresholds above, represent weak to moderate explanatory power. However, because the R2 value of a dependent construct is inherently affected by its number of predictor constructs, Hair et al. (2019) stated that these values should be examined in the context of each individual study. After evaluating the R2 values, the study analyzed how removing a given predictor construct impacted an endogenous construct's R2 value, expressed as the f2 effect size. The f2 effect size of Strategy Implementation Obstacles was 0.616 on Financial Performance Related indicators, 0.526 on Customer Performance Related indicators, 0.756 on Internal Performance Related indicators, and 0.212 on Organization Performance Related indicators. The f2 values of the constructs are listed in Table 6.
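Under the assumption that Strategy Implementation Obstacles is the sole predictor of each outcome (as in Figure 1), the f2 effect size reduces to R2/(1 − R2); a quick check shows the Table 5 R2 values closely reproduce the f2 values in Table 6.

```python
# Consistency check under the single-predictor assumption: f2 = R2 / (1 - R2).
# The Table 5 R2 values closely reproduce the f2 values reported in Table 6.
for construct, r2 in {"CPR": 0.345, "FPR": 0.381, "IPR": 0.430}.items():
    print(construct, round(r2 / (1 - r2), 3))
# Output: CPR 0.527, FPR 0.616, IPR 0.754 (reported: 0.526, 0.616, 0.756)
```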
Table 6. f2 Effect sizes of structural model constructs.
Construct | Dependent Construct | f2 Effect Size
Strategy Implementation Obstacles | Customer Performance Related | 0.526
Strategy Implementation Obstacles | Financial Performance Related | 0.616
Strategy Implementation Obstacles | Internal Performance Related | 0.756
As a rule of thumb, values higher than 0.02, 0.15, and 0.35 depict small, medium, and large f2 effect sizes, respectively (Cohen, 1988). As such, the structural model's predictor construct demonstrated medium to large effects on its related dependent constructs. The Stone-Geisser Q2 value was assessed to determine the predictive relevance of the structural model through a cross-validated redundancy approach; this evaluation applies only to reflectively measured endogenous constructs. The Q2 values for the endogenous constructs were all above zero, establishing predictive relevance (Hair et al., 2019). The Stone-Geisser Q2 values of the model's reflective measurement constructs are listed in Table 7. Finally, the structural model was assessed by examining the statistical significance and relevance of the path coefficients. Bias-corrected and accelerated (BCa) bootstrapping with 5,000 samples was applied to the model to determine these assessment criteria. The path from Strategy Implementation Obstacles to Financial Performance Related indicators demonstrated a p-value of p < 0.001 and a path coefficient of 0.618.
Strategy Implementation Obstacles to Customer Performance Related indicators demonstrated a p-value of p < 0.001 and a path coefficient of 0.587. Strategy Implementation Obstacles to Internal Performance Related Indicators demonstrated a p-value of p < 0.001 and a path coefficient of 0.656. As no p-values exceeded the 0.05 significance level, all structural path models established statistically significant positive effects between latent variables.
Furthermore, all path coefficients were positive, indicating positive relationships between the latent constructs. The p-values and path coefficients of the structural model are given in Table 8, and the final results of the structural model are shown in Figure 1.
Table 7. Q2 values of reflective endogenous constructs.
Reflective Endogenous Construct | Q2 Value
Customer Performance Related | 0.230
Financial Performance Related | 0.283
Internal Performance Related | 0.286
Table 8. Statistical significance & relevance of structural model path coefficients.
Structural Model Path | p | Path Coefficients
SIOs to CPR | p < 0.001 | 0.587
SIOs to FPR | p < 0.001 | 0.618
SIOs to IPR | p < 0.001 | 0.656
5. Research Questions and Findings
5.1. Research Findings
Research Question One aimed to determine if a statistically significant relationship exists between barriers to strategy implementation and organizational performance indicators. The Partial Least Squares Structural Equation Modeling (PLS-SEM) was employed to test the hypotheses. Below are the results for each hypothesis.
Research Hypothesis 1A: The null hypothesis stated no relationship exists between barriers to strategy implementation and Financial Performance-Related (FPR) indicators. The PLS-SEM model (Figure 1) results rejected the null hypothesis, identifying a statistically significant positive effect between strategy implementation obstacles and FPR indicators in U.S. organizations (p < 0.001). Thus, the alternative hypothesis was accepted, indicating a significant relationship between strategy implementation obstacles and FPR indicators.
Research Hypothesis 1B: The null hypothesis posited no relationship between barriers to strategy implementation and Customer Performance-Related (CPR) indicators. However, the PLS-SEM model results rejected this null hypothesis, as a statistically significant positive effect was found between strategy implementation obstacles and CPR indicators in U.S. organizations (p < 0.001). Consequently, the alternative hypothesis was accepted, affirming a significant relationship between the two variables.
Research Hypothesis 1C: The null hypothesis stated no relationship exists between barriers to strategy implementation and Internal Performance-Related (IPR) indicators.
Figure 1. Results of the PLS-SEM model.
The PLS-SEM model results rejected the null hypothesis, revealing a statistically significant positive effect between strategy implementation obstacles and IPR indicators in U.S. organizations (p < 0.001). Accordingly, the alternative hypothesis, suggesting a significant relationship between these variables, was accepted.
Research Question Two investigated whether there was a statistically significant difference in strategy implementation obstacles between small and large organizations. An independent samples t-test was employed to assess differences in participants' responses regarding strategy implementation obstacles across these organizational sizes, with the significance level set at 0.05. The t-test compared the mean responses of small-scale and large-scale organizations, with organizational size as the grouping variable and strategy implementation obstacles as the test variable. A Strategy Implementation Obstacle Variable (SIOV) was created by averaging each respondent's Likert-scale responses to the 41 obstacles. The analysis used data from 346 respondents, including 160 from large-scale and 186 from small-scale organizations. The null hypothesis for Research Question Two posited no significant difference in strategy implementation obstacles between small-scale and large-scale organizations. The t-test results supported this hypothesis, indicating no significant difference in the average responses to the 41 strategy implementation obstacles between the two types of organizations (t(344) = 0.751, p = 0.453). To explore this further, the study analyzed strategy implementation obstacles separately for small-scale and large-scale organizations. Major obstacles were identified for both small-scale (mean > 3.74) and large-scale organizations (top 10 based on mean). Differences in obstacle means between the two organization sizes were also examined, and notable variations were found. The analysis revealed that the top three obstacles for small-scale organizations were SIO35, SIO40, and SIO34, while for large-scale organizations they were SIO19, SIO35, and SIO2. The greatest differences between small and large organizations were found in SIO19 and SIO38, with SIO19 being particularly prominent for large-scale organizations. SIO17 was identified as having an equal impact on both organization types, whereas obstacles with similar impact across sizes included SIO24, SIO26, SIO20, SIO6, SIO32, and SIO40.
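The RQ2 analysis can be reproduced with a standard independent-samples t-test; the sketch below assumes a hypothetical data file with an org_size column and SIO item columns, and is not the authors' exact script.

```python
# Illustrative reproduction of the RQ2 analysis: independent-samples t-test on
# the averaged obstacle score (SIOV) by organization size. File and column
# names (survey_responses.csv, org_size) are hypothetical.
import pandas as pd
from scipy import stats

responses = pd.read_csv("survey_responses.csv")               # hypothetical file
sio_items = responses.filter(regex=r"^SIO\d+$")
responses["SIOV"] = sio_items.mean(axis=1)                    # average of the 41 obstacles

small = responses.loc[responses["org_size"] == "small", "SIOV"]
large = responses.loc[responses["org_size"] == "large", "SIOV"]

t_stat, p_value = stats.ttest_ind(small, large)               # equal-variance t-test
print(f"t({len(small) + len(large) - 2}) = {t_stat:.3f}, p = {p_value:.3f}")
```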
5.2. Practical Assessment of Research Questions
Research question one investigated the relationship between strategy implementation obstacles and performance indicators—Financial Performance Related (FPR), Customer Performance Related (CPR), and Internal Performance Related (IPR). The null hypothesis posited no significant relationship between these obstacles and FPR indicators. Contrary to this hypothesis, the study found a significant impact of strategy implementation obstacles on financial performance (p < 0.001). This finding aligns with previous research indicating that strategy implementation barriers directly affect financial outcomes (Köseoglu et al., 2018; Radomska, 2014). Key obstacles identified include lack of necessary skills, short-term thinking, and employee conflicts. Addressing these obstacles can enhance financial performance by focusing on critical areas such as employee training and senior management engagement (Hrebiniak & Joyce, 2005; Brenes et al., 2008). Further examination of CPR indicators revealed that strategy implementation obstacles significantly affect customer satisfaction and loyalty. The null hypothesis suggested no significant relationship; however, the results contradicted this, aligning with findings from Ittner and Larcker (1998), Neely, Gregory and Platts (2005), and Parmenter (2015) that effective strategy implementation positively influences customer-related outcomes. For instance, addressing obstacles related to poor communication and lack of coordination can improve customer satisfaction and overall performance (Ho et al., 2014; Vigfússon et al., 2021). Finally, the study assessed IPR indicators, revealing significant correlations with strategy implementation obstacles. These findings are consistent with Cândido and Santos (2019), who highlighted the impact of human factors such as employee commitment and communication on internal performance. The results underscore the importance of addressing internal obstacles such as inadequate training and poor communication to enhance organizational performance (Kohtamäki et al., 2012; Alharthy et al., 2017).
Research question two aimed to determine whether there were significant differences in strategy implementation obstacles between small-scale and large-scale organizations. The null hypothesis proposed no significant differences, and the results supported this (p = 0.453). Both types of organizations face similar obstacles, although with varying priorities. For example, large-scale organizations reported greater challenges with SIO19 (multiple simultaneous projects), while small-scale organizations faced issues such as SIO35 (lack of necessary skills) (Creasap, 2011; Pearce & Robinson, 2004). The study identified specific areas of disparity, with SIO19 being a notably greater obstacle for large-scale organizations, indicating challenges unique to them compared with small-scale ones. Conversely, SIO17 affected both organization types equally, highlighting a common issue that calls for universal strategies (Creasap, 2011). The findings suggest that while overall obstacles are similar, tailored strategies addressing the specific concerns of different organizational sizes are necessary to improve strategy implementation and performance outcomes.
6. Implications for Future Study
This study explored the relationships between the latent variables of Obstacles to Strategy Implementation and Organizational Performance Indicators within a population of small-scale and large-scale businesses in the United States. Because participation was limited to individuals within the U.S., future research should investigate these variables in a global context. Researchers can achieve this through surveys targeting global samples, effectively reaching populations from various countries. It is important to recognize that obstacles to strategy implementation can differ across countries due to factors such as the political environment and technical considerations that affect businesses. Therefore, future studies should aim to gather larger and more diverse samples from global populations to gain insight into obstacles to strategy implementation at a global scale. If the findings of those studies diverge, the conclusions drawn here would remain specific to small and large businesses within the United States. Furthermore, future research can focus on specific industries at a global level to explore potential variations in the results; obtaining samples from the desired industries would enable industry-specific conclusions regarding the primary obstacles to strategy execution. Additionally, expanding the understanding of organizational structures is crucial, including examination of franchise, branded, chain, and owner-managed structures.
7. Limitations of the Study
The study focuses on managers and C-level employees, potentially overlooking critical insights from lower-level employees who also play key roles in strategy implementation. By excluding a broader range of employees, the research may offer an incomplete view of the challenges involved, and the omission of perspectives from middle management and operational staff further narrows the scope of the findings. Additionally, the limited sample size of 346 respondents may affect the reliability and robustness of the results, as larger samples typically provide more representative data and clearer insights into broader trends. Other factors, such as market dynamics, regulatory changes, and technological advancements, may also influence strategy implementation but were not specifically addressed in this study. Furthermore, the research does not examine the effects of different organizational structures, such as franchises, multinational corporations, or small and medium-sized enterprises (SMEs), each of which presents distinct challenges and dynamics in the context of strategy implementation. A more detailed analysis of these varying organizational types could offer deeper insights into the complexities of strategic execution.
8. Conclusion
This study highlights the crucial factors influencing strategy implementation and their impact on organizational performance. The absence of literature addressing strategic performance factors and their contribution to effective execution has been a significant barrier to successful strategy implementation. External factors, such as client and supplier dynamics, competition, and business uncertainty, further complicate the process. Identifying these critical factors is essential for diagnosing issues within performance-evaluation systems. The high failure rates in Strategy Decision-making and Implementation (SDI), ranging from 40% to 90%, underscore the need to understand the factors that affect SDI. Contrary to previous studies, this research underscores the importance of strategic consensus, communication, shared values, and organizational commitment as key elements in strategy implementation. These findings support the view that barriers to strategy execution reflect an organization’s dominant logic and are influenced by dynamic managerial capabilities, including human capital, social capital, and managerial cognition. Using data from C-level employees, managers, and executives in both small- and large-scale organizations in the U.S., the study found significant positive relationships between strategy implementation obstacles and performance indicators across financial, customer, and internal dimensions (p < 0.001). This indicates that overcoming these obstacles can significantly improve financial performance, customer satisfaction, and internal efficiency. Organizations can leverage these insights to enhance performance by addressing and mitigating strategy implementation obstacles through innovative practices. By doing so, they can improve their sustainability, viability, and revenue streams.
Appendix A.
Table A1. Reflective measurement constructs and respective indicative factors.
Reflective Measurement Construct | Respective Indicators
Financial Performance Related (FPR) | Competitive position (FPR1)
 | Market share (FPR2)
 | Overall firm performance and success (FPR3)
 | Sales growth (FPR4)
 | Return on equity (FPR5)
 | Return on sales (FPR6)
 | Return on assets (FPR7)
 | Growth in profit after tax (FPR8)
Customer Performance Related (CPR) | Customer satisfaction (CPR1)
 | Decrease of customers’ complaints (CPR2)
 | Quality felt by customers (CPR3)
 | Customer loyalty (CPR4)
Internal Performance Related (IPR) | Training programs (IPR1)
 | Promotion for employees (IPR2)
 | Employee satisfaction (IPR3)
 | Employee turnover ratio (IPR4)
Organization Performance Related (OPR) | Quality of operations or services (OPR1)
 | Speed of operations or services (OPR2)
 | Number of new services or products (OPR3)
Table A2. Reflective measurement constructs’ outer loadings.
Reflective Measurement Construct | Indicator | Initial Outer Loadings
Customer Performance Related (CPR) | CPR1 | 0.605
 | CPR2 | 0.653
 | CPR3 | 0.601
 | CPR4 | 0.787
Financial Performance Related (FPR) | FPR1 | 0.578
 | FPR2 | 0.769
 | FPR3 | 0.533
 | FPR4 | 0.618
 | FPR5 | 0.733
 | FPR6 | 0.730
 | FPR7 | 0.655
 | FPR8 | 0.754
Internal Performance Related (IPR) | IPR1 | 0.704
 | IPR2 | 0.624
 | IPR3 | 0.568
 | IPR4 | 0.754
Organization Performance Related (OPR) | OPR1 | 0.596
 | OPR2 | 0.657
 | OPR3 | 0.681
Strategy Implementation Obstacles | SIO1 | 0.588
 | SIO2 | 0.561
 | SIO3 | 0.467
 | SIO4 | 0.585
 | SIO5 | 0.641
 | SIO6 | 0.766
 | SIO7 | 0.520
 | SIO8 | 0.612
 | SIO9 | 0.640
 | SIO10 | 0.543
 | SIO11 | 0.546
 | SIO12 | 0.697
 | SIO13 | 0.727
 | SIO14 | 0.584
 | SIO15 | 0.591
 | SIO16 | 0.636
 | SIO17 | 0.625
 | SIO18 | 0.623
 | SIO19 | 0.611
 | SIO20 | 0.688
 | SIO21 | 0.879
 | SIO22 | 0.664
 | SIO23 | 0.684
 | SIO24 | 0.709
 | SIO25 | 0.696
 | SIO26 | 0.865
 | SIO27 | 0.587
 | SIO28 | 0.536
 | SIO29 | 0.669
 | SIO30 | 0.707
 | SIO31 | 0.762
 | SIO32 | 0.589
 | SIO33 | 0.651
 | SIO34 | 0.744
 | SIO35 | 0.678
 | SIO36 | 0.875
 | SIO37 | 0.832
 | SIO38 | 0.759
 | SIO39 | 0.632
 | SIO40 | 0.704
 | SIO41 | 0.612
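As a supplementary illustration (not part of the study's reported analysis), measurement-model statistics that are commonly examined alongside outer loadings, composite reliability (CR) and average variance extracted (AVE), can be computed directly from the loadings in Table A2. The Python sketch below uses the four CPR loadings as example input; the function names are illustrative.

import numpy as np

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # where each indicator's error variance is 1 - loading^2.
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1.0 - lam ** 2).sum())

def average_variance_extracted(loadings):
    # AVE = mean of the squared loadings.
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

# Outer loadings for the CPR construct, taken from Table A2.
cpr_loadings = [0.605, 0.653, 0.601, 0.787]

print(f"CR(CPR)  = {composite_reliability(cpr_loadings):.3f}")
print(f"AVE(CPR) = {average_variance_extracted(cpr_loadings):.3f}")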