Methodology of Safety Behavior Management from a Cross-Culture Perspective
1. Introduction
This paper is derived from Chapter 4 of the thesis, which focuses on the Methodology of Safety Behavior Management from a Cross-Culture Perspective. The chapter outlines the research approach, design, and methods employed to investigate safety behavior in underground mines across different cultural contexts, specifically in China and Gabon. The study utilizes quantitative research methodology, supported by a theoretical framework that examines the relationship between cultural factors, individual safety behaviors, and organizational safety outcomes.
This section explains the key concepts and principles of the research methodologies employed in the study. It explains the quantitative research methodology, including its theoretical framework, definitions, advantages, and limitations, while contrasting it with qualitative approaches. The chapter also discusses foundational assumptions about quantitative research and the methodological strategies used. Additionally, it covers methods for determining sample size, sampling protocols, and population selection, outlining how these factors define the study’s parameters. Demographic information of respondents, such as age and educational attainment, is provided, along with other distinguishing characteristics.
Furthermore, the chapter details the strategies used for data collection, examining the reliability and validity of these methods. It emphasizes the importance of developing an initial data set method that provides a broad framework for organizing information, ultimately guiding discoveries toward knowledge growth.
2. Research Approach
The methods and strategies employed in research, ranging from particular procedures for data collection, processing, and interpretation to a variety of theoretical presumptions, are known as research approaches [1]. These decisions need not be taken in the order that follows or in the order that appears most logical; ultimately, however, the researcher must decide which approach to employ for the subject at hand. That decision is influenced by his or her theoretical presumptions, the investigative methods used, and the specific methods used to acquire, examine, and evaluate data. The type of topic being studied, the background and experiences of the researchers, and the intended audience all influence the research process [2]. Mixed method studies, qualitative studies, and quantitative studies are the three primary categories of research.
3. Quantitative Research
Quantitative research is a methodical process that analyzes and clarifies various circumstances using numerical data gathered from observation; it involves empirical statements, which are factual assertions grounded in what is observed rather than in what ought to be. Quantitative research is the collection and analysis of numerical data to illustrate, explain, forecast, or regulate the trajectory of the phenomena being studied. Numerical data analysis is challenging because it necessitates a systematic approach [1]. Such investigations make it possible to make predictions, analyze the causal linkages between variables, and extrapolate the findings to the whole population. This approach usually reaches a large audience simultaneously in the smallest amount of time. Quantitative research embraces deductive reasoning [3]. Rather than prescribing what should be done, it relies on experience and descriptive statements that explain the significance of events in practical terms [2]. Additionally, it incorporates other techniques, and it conducts empirical testing to ascertain the extent to which a certain policy or program satisfies a norm or standard [4]. Lastly, mathematical operations are performed on the gathered numerical data.
Furthermore, the goal of both qualitative and quantitative research methods is to describe a particular occurrence. Unlike qualitative research, however, the analytical tools used in quantitative research rely on statistical analysis and quantitative methods. In a variety of fields, such as psychology, education, biology, physics, and natural science, quantitative research aims to characterize a feature by collecting numerical data, such as counts and percentages [5]. Additionally, non-numerical data can be converted into numerical form with the use of specially designed instruments. As a result, these methods support the gathering of quantitative data about participants’ opinions and attitudes, among other things. Quantitative methods are procedures used to determine social reality and to collect numerical data through specialized inquiries for specific purposes [6]. Different forms of quantitative research are depicted in Figure 1.
Figure 1. Quantitative research.
4. Qualitative Research
These approaches use naturalistic and interpretive perspectives on many topics of debate and constantly attempt to solve scientific and practical problems in communities. These methods demonstrate the patterns and difficulties that people encounter in life (Figure 2) using a variety of empirical data, such as case studies, firsthand accounts, and narratives [7]. They examine the profound significance and underlying causes of otherwise inexplicable events. To better comprehend a particular occurrence, qualitative research involves gathering, evaluating, and interpreting comprehensive narrative or visual data [2]. It examines people’s opinions, actions, and interactions, and it gathers and analyzes textual data. The process is time-consuming because considerable time is spent on each participant, but it requires fewer participants than other approaches and is well suited to exploratory issues [8]. It is a comprehensive investigation of many aspects of a phenomenon, with the goal of learning about objects in their native setting. Inductive logic is used in this process. As a result, the qualitative approach can generate original ideas, viewpoints, and hypotheses [9]. It lacks generalizability, since its primary focus is on conclusions drawn from events in specific settings, without taking into account outcomes that might arise later or in different circumstances.
Figure 2. Qualitative research.
5. Mixed Methods
In order to gain insight into a subject, mixed-method approaches integrate qualitative and quantitative methodologies, in proportions that might vary based on the study’s goal and research objectives [10]. Both approaches may receive equal consideration, or one approach may be given priority among those selected for integration [2]. These methods allow academics to tackle challenging research issues in a variety of fields. When either approach alone is insufficient for a study, these methods are useful because they combine the advantages of qualitative and quantitative approaches [1] [11] (see Figure 3). Blended approaches are advantageous for researchers with different methodological preferences in today’s multidisciplinary research environments. A variety of disciplines, including psychology and healthcare, now use mixed techniques, although the approach need not be expressly labeled as mixed methods; its use may be implicit [2]. By fully exploiting these approaches, researchers can advocate effectively for the use of mixed methodologies.
Figure 3. Mixed methods approach.
Based on the benefits and drawbacks of each technique, a quantitative strategy was selected for this study after a review of qualitative, quantitative, and mixed method studies. The ability to compare goals with results and to measure accuracy is another advantage of quantitative research. The research methodology needed to be appropriate for the study’s goals and scope, so a quantitative approach was chosen based on its intended purpose. This survey’s statistical approach made it possible to fully interpret the findings. When collecting data, researchers should show respect and decency to both study sites and participants [12]. Ethically sound population sampling and participant anonymity, guaranteeing that every participant received the utmost confidentiality, characterized the quantitative methodology that supported this study. Furthermore, a quantitative approach was ideally suited for this study, since it was necessary to gather a large number of respondents from the target group.
5.1. Research Design
According to Myers et al. (2013), research design is the process by which a researcher chooses the techniques and approaches used to carry out their investigation [13]. It primarily takes into account the aims and objectives that accurately represent the constraints of time, place, money, and research staff availability. Additionally, the preferences and inclinations of the researcher and the evaluators have a significant impact on research design. Research designs are frameworks for gathering and analyzing data, according to Taherdoost (2022) [2]. Wisenthige (2023) claimed that several indicators are needed to check whether the chosen study design will succeed in achieving its goals, which further supports this idea [14]. These factors include the ability to locate necessary information and the applicability of the research topic and analysis approach. Furthermore, Ranganathan & Aggarwal (2018) shed light on this idea by stating that a study design is a flexible blueprint that connects philosophical frameworks with investigative strategies and, secondarily, data collection techniques [15].
Because survey design methodology enables the collection of standardized data from a single group, it was used for this investigation. Asking people specific questions about the companies or places engaged in the surveying process is part of the data-gathering process [15]. Qualitative research also frequently uses this technique. These researchers claim that this kind of study entails evaluating the traits of a population sample that might be representative of the target group under investigation [2]. In addition to other statistical techniques utilizing the data gathered, a questionnaire consisting of a series of structured questions is offered to respondents during in-person interviews. When employing such an approach, it is strongly advised to elicit individuals’ attitudes, beliefs, or ideas regarding specific topics through an acceptable sampling process [2]. Following data collection from a sample, the results are extrapolated to the entire population of interest; thus, the opinions of the majority of citizens are represented. Selecting a representative sample, creating and distributing questionnaires, and evaluating results are some of the tasks involved in information gathering.
5.2. Target Population
Workers below the managerial level in underground mines in China and Gabon made up the study’s target group.
5.2.1. Sampling Technique
A sample is a subset of the total population. Since it can occasionally be impractical to gather data on every member of the group, sampling is frequently utilized in population research. It is expensive and time-consuming to conduct a census of a large population, claims Rahi (2017) [16]. As a result, sampling is frequently employed as a more economical and effective substitute. According to Berndt (2020), a common sample flaw is that non-specialists are reluctant to assume that the results are representative of any particular population [17]. The sample strategy used will determine any additional restrictions. According to Golzar and Tajik (2022), a representative sample has been chosen so that its key attributes closely resemble those of the population it represents [18]. The two main kinds of sampling techniques are probability and nonprobability.
All methods in which selection occurs at random are included in probability sampling [19]. This entails creating a procedure or system that guarantees different members of the target group have an equal chance of getting selected. One major benefit of probability sampling techniques is that, when properly executed, they ensure an objective sample that fairly represents the target population [17]. As a result, even when discussing a larger society, researchers can use estimates derived from random sampling with confidence and without bias. By their very nature, probability sampling techniques can be challenging. Specifically, probability samples ought to be fairly big, requiring significant labor, financial, and time inputs [18]. Furthermore, it takes extraordinary talent to create probability samples effectively. Basic random sampling, systematic random sampling, stratified random sampling, and cluster sampling are a few examples.
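As an illustration of the probability sampling techniques listed above, the following sketch draws a simple random sample and a systematic sample from a hypothetical worker roster (the roster and sample sizes are illustrative, not the study's):

```python
import random

def simple_random_sample(population, n, seed=0):
    """Draw n members so that every member has an equal chance of selection."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    return rng.sample(population, n)

def systematic_sample(population, n):
    """Select every k-th member from a fixed starting point, k = N // n."""
    k = len(population) // n
    return population[::k][:n]

workers = [f"worker_{i}" for i in range(100)]  # hypothetical roster
print(simple_random_sample(workers, 10))
print(systematic_sample(workers, 10))
```

Stratified and cluster sampling follow the same idea, applying the random draw within strata or to whole clusters rather than to the full roster.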
In statistics, a non-probability technique refers to any method used to choose survey participants without random selection. Convenience sampling is a common technique used in psychological research to choose participants [18]. It means selecting participants for a study based on their accessibility and proximity to the study, regardless of any sampling plan; see [20] for a thorough rundown. A frequently observed example of convenience sampling in developmental research is asking student volunteers to take part in studies. Convenience sampling has the benefit of being economical, effective, and simple to use [20]. However, the inability to draw generalizations from the sample is convenience sampling’s primary flaw [21]. All forms of convenience samples have comparable advantages and disadvantages, but to varying degrees. Convenience sampling has advantages and disadvantages that are opposite to those of random sampling: convenience samples are comparatively less expensive, quicker, and easier to conduct, although probability samples typically yield data that is highly generalizable [21] [22]. While case-study research uses non-probability sampling, the majority of survey-based research uses probability sampling [18].
A convenience sample strategy was used in this study. This is a sensible approach to obtain samples without the need for complicated procedures and at a reduced cost. Because we were unable to obtain funding to conduct the study, the methodology employed in this work enabled us to collect data at a low cost by choosing readily available sample units. We had a few assistants who were inexperienced with complex statistical selection techniques to assist with the data collection process. Their efforts were supported by the ease of convenience sampling, which can be carried out without specific expertise and requires no preparation.
5.2.2. The Questionnaire
A questionnaire with three sections and an introduction served as the main instrument for gathering data for this study. Every element of the questionnaire and its layout is explained below. A questionnaire was used for data collection and hypothesis testing in order to meet the goals of this study, since it made it simple to quantify the results and provided pertinent information for evaluating the hypotheses. In general, the questionnaire’s measurement items were primarily drawn from earlier studies. Section 1 asked five demographic questions, and Section 2 asked safety-related questions (safety behavior, safety competency, and management safety commitment). The survey was designed to be distributed via a link over an internet server. The internet is superior to traditional paper-and-pencil surveys, claim Weigold et al. (2021) [23], lowering survey administration expenses, minimizing data errors, cutting down on the time it takes to collect and analyze data, and improving convenience. Studies have shown that responses from online surveys are just as reliable as those from phone or mail surveys for behavior prediction. Through their work on WhatsApp groups, those who expressed interest in participating received a pre-alert about the survey and were asked to complete an online questionnaire. Only those who had not completed the survey within the allotted time received multiple reminders. Online surveys have been compared to different methods of contacting respondents [23]. This study considered the possibility that respondents’ internet expertise could be related to their response rates to the primary research questionnaire; because of their professional work experience, it was presumed that participants were computer literate.
In order to help the respondents and increase response accuracy, the current survey used several technological design elements drawn from previous research. The design also incorporated other strategies for increasing response rates mentioned by Daikeler et al. (2020) [24].
All interested participants were directed to the survey site via an internet link that the researcher provided, and respondents received WhatsApp notifications about the survey. A prelude and introduction on the top page of the survey program provided further information about the survey. Our research tool’s design was shaped by practical considerations such as participant weariness, time limits, and the length of the primary study questionnaire [25]. According to Daikeler et al. (2020), a lengthy survey instrument may overwhelm respondents, making it harder for them to understand and ultimately resulting in poor response rates [24]. To quantify each of the identified components, numerous sections and scales had to be created. By taking this approach, we intended to minimize these issues during the thirty to forty minutes allotted for completing the questionnaire.
5.2.3. Part I: Demographic Survey
A demographic survey was carried out to determine the demographic profile of the sample and assess the connections between safety behavior and situational, individual, and specific characteristics. Section 1 of the research questionnaire gathered demographic information about the participants, including age, gender, education, marital status, and years of employment. The response categories for these demographic questions were carefully designed to align with the study’s eligibility criteria, ensuring participants met the necessary requirements for inclusion. Many of the study’s assumptions could be evaluated with the use of the extra demographic data.
5.2.4. Part II: Safety Survey
This section contains questions about management’s commitment to safety, safety behavior, and safety competency. As mentioned earlier, safety behavior in this study includes both safety participation and safety compliance. The safety participation and compliance scale offered by [26] [27] is the most widely utilized. The scale was updated in response to input from survey participants. Data on safety behavior was collected using nine items: four evaluated safety compliance, and five evaluated safety participation. On a five-point Likert scale, with 1 denoting “strongly disagree” and 5 denoting “strongly agree,” participants indicated how much they agreed with each item. For the nine safety behavior items, Cronbach’s alpha (α) was high, indicating good internal consistency.
Eight items on a 5-point Likert scale were used to evaluate individual safety competency. Strong reliability was indicated by the scale reliability test’s acceptable Cronbach’s alpha value. Items were taken from Bensonch et al. (2022) and improved upon [28]. Eight questions based on participant opinions were used to assess respondents’ perceptions of management’s ability to motivate employees to work more safely. Vinodkumar and Bhasi (2010) provided the management safety commitment questions, which were evaluated on a five-point Likert scale with 1 denoting “strongly disagree” and 5 denoting “strongly agree” [29]. The current study’s alpha coefficient for the scale is acceptable.
5.3. Pilot Study
A pilot study’s objective is to collect initial data and assess possible research techniques, instruments, and protocols in advance of a more extensive investigation. Pilot studies, which are conducted to identify any shortcomings or issues with the research instruments and technique prior to the start of the main study, are among the most important components of any research project [30]. When faced with contradictory approaches, researchers may become well-versed in the specifics of the methodology and utilize that information to make decisions, such as whether to employ online surveys or interviews. In November 2022, we carried out an initial examination in one of the Prestea mines. The pilot meticulously adhered to the preliminary study process, which included evaluating a condensed version of the entire survey. Twenty-five employees participated in the pilot survey, and two research assistants were hired. We invited workers who were free and accessible to participate in the study, allowing them enough time to make up their minds. The consent form was signed by the participants to show their agreement. The research assistant reported that data collection went smoothly and that the response rate was recorded.
The study assistants had to assist the mine workers in filling out the questionnaire. Verifying that the questionnaire items accurately matched the goals of the study was essential. The questionnaire’s appropriateness, clarity, well-defined questions, comprehension, and consistent presentation were evaluated during the pilot trial. Consent forms and statements from the personality and safety surveys were tested for comprehension. The respondents took an average of 15 to 20 minutes to complete the surveys. They made every effort to respond to all of the questions; however, some were missed. For a number of questions, there were notable discrepancies in the responses, caused primarily by ambiguity that led participants to misunderstand the questions. These observations concerned the safety behavior and safety competency questions. Typographical errors were identified and fixed. In general, participants in this pilot study had little difficulty understanding the items on the questionnaire, and the pilot demonstrated the practicality of the research methodology. From a managerial perspective, the initiative did not appear to be particularly burdensome to the mine workers, nor did it have a large impact on staff time.
5.4. Conceptual Framework
Based on the literature reviewed, a conceptual framework was created to highlight important theories and ideas about how mining affects the economy. The theoretical underpinnings of the review included the resource curse theory, the sustainable development framework, and social impact assessment standards. The analysis and interpretation of the results were guided by these variables.
5.5. Data Collection Methods
The mining company NGM was selected using the entire population sampling method. Using comprehensive population sampling, the entire research population was taken into account during the selection process [31]. The mining businesses were chosen due to their presence in Gabon, where they maintained records on manganese’s benefits.
For data analysis purposes, we had to sample a sufficiently large set of businesses for our research project. Spearman and point-biserial coefficients were employed. Power was calculated for the Spearman correlation, which requires the largest sample size of the planned tests [32]. We anticipated an average effect size of 0.3, per Cohen (1988) [33], with a standard alpha level of 0.05. The Spearman correlation then has roughly 91% power, much like the Pearson correlation [34]. After entering these parameters into G*Power, we found that a sample size of 102 instances was optimal for the study [35]. Because our study employed historical data on manganese mines, we recognized this as a constraint while doing the statistical analysis, and further analyses were conducted cautiously.
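The power analysis above can be approximated in code. This sketch uses the Fisher z approximation for the sample size needed to detect a correlation, rather than G*Power's exact bivariate-normal computation, so its result will not exactly reproduce the reported 102; the r, alpha, and power values follow the text:

```python
import math
from statistics import NormalDist

def n_for_correlation(r=0.3, alpha=0.05, power=0.91):
    """Approximate sample size to detect correlation r (two-tailed test),
    via the Fisher z transformation: n = ((z_alpha + z_power)/C)^2 + 3."""
    c = 0.5 * math.log((1 + r) / (1 - r))       # Fisher z of the effect size
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-tailed critical value
    z_b = NormalDist().inv_cdf(power)           # quantile for desired power
    return math.ceil(((z_a + z_b) / c) ** 2 + 3)

print(n_for_correlation())
```

The approximation lands in the same general range as the G*Power figure; the small difference comes from the exact versus approximate distribution of the test statistic.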
6. Data Analysis Procedures
6.1. Test the Hypothesis
The study used correlation and linear regression to test the following null hypotheses:
H01: There is no relationship between typical OSHA issues and manganese mining operations.
H02: Management’s commitment and the mining workforce’s commitment to adhere to occupational safety and health regulations are unrelated.
To assess each hypothesis, significance thresholds (alpha) of 10%, 5%, and 1% were employed, together with confidence levels of 90%, 95%, and 99%. A hypothesis test is deemed statistically significant when the P-value is less than the chosen level of statistical significance (alpha); equivalently, for results to be deemed statistically significant, the confidence interval must not contain the null-hypothesis value [36]. The 90% confidence level, according to Sauro (2015), is employed as a benchmark when analyzing survey data because 90% confidence for a two-sided claim is equivalent to 95% confidence for a one-sided claim [37]. Although the study made use of survey data, a 90% confidence level was selected as a commercial assurance when analyzing miners’ answers. [38] employed a 95% confidence level to illustrate that, if numerous samples from a single group are used to repeat the query, the true population mean will be captured. The researchers adopted the 99% confidence level because poor decision-making in the manganese mining business might result in fatalities or significant injuries [37]; this level is typically utilized where a faulty decision could lead to damage or death. To ensure a higher degree of precision in the views held by small-scale miners, information was gathered by visiting nearly all of Ntotroso’s precious metal processing facilities. Each of the offered hypotheses was therefore tested at 90%, 95%, and 99% confidence, corresponding to 10%, 5%, and 1% significance (alpha) values.
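The three-level decision rule described above can be sketched as follows; the p-value used in the example is hypothetical:

```python
def significance_labels(p_value, alphas=(0.10, 0.05, 0.01)):
    """Report at which of the conventional alpha levels a result is significant:
    significant at level alpha whenever p < alpha."""
    return {a: p_value < a for a in alphas}

# hypothetical p-value from one of the correlation tests
print(significance_labels(0.03))  # significant at 10% and 5%, not at 1%
```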
Correlation analysis is a statistical method that evaluates the degree of relationship between common health and safety concerns and manganese mining. A significant association exists between multiple parameters if there is a high correlation between them. As such, the degree of the correlation can be analyzed using the available statistical data [39]. This is complemented by linear regression analysis, a statistical technique that may be applied to one or more independent or explanatory variables to define their relationship with a dependent variable. Metrics were employed in the research effort to assess the relationship, and linear regression was applied. This is valuable when attempting to ascertain the precise quantity of one factor solely through appraisal of another [40]. The equation for linear regression is:
Y = a + bX + e (1)

where,
Y represents the expected value of the subject (dependent) variable for any given value of the independent variable X,
a is the intercept, or the y-value expected when X is 0,
b is the regression coefficient,
X is the independent variable (the one for which we hypothesize an effect on Y),
e is the estimation error, i.e., how far the observed value deviates from our estimate.
Through linear regression, the best-fit line in the data is found by searching for the regression coefficient (b) that minimizes the aggregate prediction error (e) [40]. This method was used for each hypothesis. Using logistic regression and correlation analysis, the connection between the independent variables and the dependent factors was examined. In this section, the dependent variable (Y) was the adoption of OSH practices by small-scale manganese miners; the independent variables (X) were socio-cultural traits, shared interests in OSH, managerial commitment, training, and demographic traits. OSH risks in Gabon’s ASGM operations, covering auditory, mental, arbitrary, natural, chemical, and psychological aspects of mining in Gabon, have all been evaluated and made public.
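The linear regression just described can be sketched as an ordinary least squares fit in pure Python; the x and y values below are hypothetical illustration data, not the study's:

```python
def fit_simple_ols(x, y):
    """Ordinary least squares for y = a + b*x, choosing a and b to
    minimize the sum of squared prediction errors e."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)          # slope = cov(x,y)/var(x)
    a = my - b * mx                              # intercept through the means
    return a, b

# hypothetical predictor (e.g., a training score) and outcome (a safety score)
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
a, b = fit_simple_ols(x, y)
print(a, b)
```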
6.2. Mathematical Formulas for Descriptive Analysis
Our research is grounded in both theoretical and empirical investigations. We employed a statistical method to choose all 510 survey participants for our investigation. In order to identify the strategies employed to strike a balance between environmental sustainability, financial success, and safety, we perform a case study and analyze the correlations between variables using regression and descriptive statistics. The Cochran calculation was utilized to determine the sample size [41]. This formula was employed because the population size was not settled precisely.
n0 = Z² p q / e² (2)

where,
n0 = the size of the sample,
Z = the abscissa of the normal curve that cuts off an area at the tails, corresponding to the desired confidence level (e.g., 95%),
e = the desired degree of accuracy (precision),
p = the estimated proportion of an attribute that the population possesses,
q = 1 − p.
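Cochran's calculation can be sketched as follows; the Z, p, and e values shown are the conventional defaults (95% confidence, maximum variability, 5% precision), not necessarily the exact values used to reach the study's 510 participants:

```python
import math

def cochran_n0(z=1.96, p=0.5, e=0.05):
    """Cochran's sample size for an unknown (effectively infinite) population:
    n0 = Z^2 * p * q / e^2, with q = 1 - p."""
    q = 1 - p
    return math.ceil(z ** 2 * p * q / e ** 2)

print(cochran_n0())  # 385 with the conventional defaults
```

Using p = 0.5 maximizes p*q and therefore gives the most conservative (largest) sample size when the attribute's prevalence is unknown.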
The study commenced with the generation of descriptive statistics and the analysis of possible issues related to multicollinearity, heterogeneity, and oscillation following the administration and collection of the instruments. The suitability of employing fixed versus random effects was then evaluated. We used the variance inflation factor (VIF) to test for multicollinearity. The formula for manually calculating the VIF is:
VIFi = 1 / (1 − Ri²) (3)

where,
Ri² is the coefficient of determination obtained by regressing the i-th predictor on the remaining predictors.
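The manual VIF computation can be sketched for the two-predictor case, where Ri² reduces to the squared Pearson correlation between the two predictors; the data below are hypothetical:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def vif_two_predictors(x1, x2):
    """VIF = 1 / (1 - R^2); with two predictors, R^2 is their squared correlation."""
    r2 = pearson_r(x1, x2) ** 2
    return 1 / (1 - r2)

x1 = [1, 2, 3, 4, 5]           # hypothetical predictor
x2 = [2, 4.1, 5.9, 8.2, 10]    # nearly 2 * x1, so strongly collinear
print(vif_two_predictors(x1, x2))  # large value -> strong multicollinearity
```

A common rule of thumb treats VIF values above 5 or 10 as signaling problematic collinearity.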
In order to address heterogeneity, we use the I² statistic. Mathematically, I² is expressed as I² = τ² / (τ² + σ²), where τ² denotes the between-trial heterogeneity, σ² denotes the common sampling error across trials, and τ² + σ² is the total variation in the meta-analysis.
Pearson’s Chi-Square Test:
The Pearson Chi-Square test is employed to ascertain whether the variables are connected.

χ² = Σ (Oi − Ei)² / Ei (4)

where Oi is the observed frequency and Ei the expected frequency in cell i.
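The chi-square statistic can be computed directly from observed and expected frequencies; the counts below are hypothetical illustration data:

```python
def pearson_chi_square(observed, expected):
    """Pearson chi-square statistic: sum of (O - E)^2 / E over all cells."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# hypothetical cell counts, e.g. reported health issues across six categories
observed = [30, 14, 34, 45, 57, 20]
expected = [20, 20, 30, 40, 60, 30]  # expected under the independence hypothesis
print(pearson_chi_square(observed, expected))
```

The statistic is then compared against the chi-square distribution with the appropriate degrees of freedom to obtain a p-value.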
Odds Ratio: The basic predictor and compositional factors model, the contextual factors model, and the working conditions model were the first three models for which odds ratios (OR) were constructed. When the odds ratio (OR) was one, increasing the predictor’s value had no effect on the likelihood of developing occupational health issues; when the OR was greater than one, there was a greater chance of developing occupational health issues; and when the OR was less than one, there was a lower chance.

OR = [p1 / (1 − p1)] / [p2 / (1 − p2)] (5)

where p1 and p2 are the probabilities of the outcome in the two groups being compared.
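For a 2×2 table, the odds ratio reduces to the cross-product ratio; the cell counts and group labels below are hypothetical:

```python
def odds_ratio(a, b, c, d):
    """OR = (a*d) / (b*c) for a 2x2 table laid out as:
       group 1: a events, b non-events
       group 2: c events, d non-events"""
    return (a * d) / (b * c)

# hypothetical: health issues among miners with vs without safety training
print(odds_ratio(10, 40, 25, 25))  # OR = 0.25 -> lower odds in the trained group
```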
Model 1: Logistic Regression

ln[p / (1 − p)] = β0 + β1X (6)

Logistic Curve

p = e^(β0 + β1X) / (1 + e^(β0 + β1X)) (7)

p = 1 / (1 + e^(−(β0 + β1X))) (8)

Model 2: Probit Regression: When the dependent variable is binary, as we assume in probit regression, the regression function is modeled by the cumulative standard normal distribution function Φ:

P(Y = 1 | X) = Φ(β0 + β1X) (9)

Model 3: Logit Regression: The population logit regression function is:

P(Y = 1 | X) = 1 / (1 + e^(−(β0 + β1X))) (10)
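The three model forms can be evaluated numerically with the standard logistic function and the standard normal CDF (stdlib only); the coefficients below are illustrative, not estimates from the study's data:

```python
import math
from statistics import NormalDist

def logistic_p(b0, b1, x):
    """Logit model: P(Y=1|X) = 1 / (1 + exp(-(b0 + b1*x)))."""
    return 1 / (1 + math.exp(-(b0 + b1 * x)))

def probit_p(b0, b1, x):
    """Probit model: P(Y=1|X) = Phi(b0 + b1*x), the standard normal CDF."""
    return NormalDist().cdf(b0 + b1 * x)

print(logistic_p(0.0, 1.0, 0.0))  # 0.5 at the curve's midpoint
print(probit_p(0.0, 1.0, 0.0))    # also 0.5 at the midpoint
```

Both links map the linear predictor onto a probability in (0, 1); they differ mainly in tail behavior, with the logistic having slightly heavier tails.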
6.3. Validity and Reliability
Validity
In order to determine whether there was a causal relationship between the independent and dependent variables of the study, the researcher consulted with the university supervisor to assess the validity of the study instrument. To increase the validity of the questionnaire and encourage the respondents to participate in the study, the researcher also administered it himself and explained the topics to the respondents. This approach is consistent with Greener’s (2008) recommendation [42]. The researcher will also conduct a principal factor analysis to independently confirm the ratings of these constructs. Factors will be extracted using covariance matrices and Varimax rotations to aid in the interpretation of initial factor patterns. The factorial validity of the scales will be demonstrated by the factor loadings. According to the results, each of the five items loads on exactly one factor. To make the factor loading output more understandable, loadings below 0.5 were suppressed.
7. Reliability
Fifteen members of the target population who were omitted from the final sample of respondents will participate in a pilot test conducted by the researcher prior to the final empirical analysis. This test will help identify any inconsistencies between the research instruments, research questions, and techniques, which will then be modified and adjusted. The most widely used scale reliability metric, Cronbach’s alpha, was used to assess data reliability; the accepted threshold for Cronbach’s alpha is 0.70, as reported by Nunnally in 1978 [43]. For the final analysis of the information obtained from the selected respondents, the test will be run once more. The 26 items will be assessed for overall internal consistency, and the results revealed a high alpha value. Since the alpha value is greater than 0.70, it can be concluded that the questionnaire is reliable and consistent.
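For reference, Cronbach’s alpha can be computed directly from raw item scores as follows. This is a minimal sketch; the scores in the test are hypothetical, not study data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha = (k/(k-1)) * (1 - sum(item variances)/variance(totals)).

    items: list of k item-score lists, each covering the same respondents.
    """
    k = len(items)
    n = len(items[0])

    def variance(xs):
        # Sample variance (denominator len - 1).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var = sum(variance(it) for it in items)
    totals = [sum(items[i][r] for i in range(k)) for r in range(n)]
    return (k / (k - 1)) * (1 - item_var / variance(totals))
```

Perfectly correlated items yield an alpha of 1.0; items that do not covary pull the value down, which is why 0.70 serves as the acceptance threshold.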
The composite reliability (CR) of each construct is computed as:

CR = (Σ_i λ_i)² / [(Σ_i λ_i)² + Σ_i δ_i] (11)

where λ_i is the standardized factor loading of measurement item i, n is the number of items in a factor, and δ_i is the measurement error of item i. The corresponding average variance extracted (AVE) is:

AVE = (Σ_i λ_i²) / n (12)
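As a sketch, assuming standardized loadings (so that each item’s error variance can be taken as δ_i = 1 − λ_i², an assumption made here for illustration), composite reliability and average variance extracted can be computed as:

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    errors = sum(1 - l ** 2 for l in loadings)  # delta_i = 1 - lambda_i^2 (standardized)
    return s ** 2 / (s ** 2 + errors)

def average_variance_extracted(loadings):
    """AVE = mean of squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)
```

With five items all loading at 0.8, CR is about 0.90 and AVE is 0.64, both above the conventional 0.70 and 0.50 cut-offs.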
8. Informed Consent
Getting informed consent from the research sample requires protecting participant data confidentiality [44] [45]. Participants received a thorough explanation of the procedures that would be followed to safeguard the confidentiality of their information, as well as the expectations surrounding their involvement in the study. Before being permitted to take part in the study, participants had to complete and sign an Informed Consent Form. Participants completed a form that recorded the date, their contact details, and the researcher’s name. Comprehensive details on protecting participants’ identities and privacy, as well as the eventual disposal of materials, are included in the Informed Consent Agreement [46] [47]. Throughout the study, all participant data were kept private, and only the researcher had access to it.
9. Data Analysis
Data cleaning is a crucial step in the data analysis process that replaces missing values and handles outliers to guarantee the dataset’s quality. In order to determine whether or not the data is representative, the analysis then looks at the response rate, a crucial parameter in survey-based research. Descriptive statistics are calculated when the foundation for data integrity has been established. Measures of central tendency, variability, correlations, averages, and standard deviations among variables are all included in these statistics. In this context, SPSS 26 was utilized. This offers a fundamental comprehension of the characteristics of the data.
The normality test, which verifies the assumption of a normal distribution required by most parametric statistical tests, will be conducted. Following that, common method bias will be examined to account for distortions that can result from variations in the measurement procedure alone, ensuring that the results are unaffected by the data collection method. The foundation of this study is factor analysis, which branches into exploratory and confirmatory techniques (Figure 4). The goal of exploratory factor analysis is to find the underlying structure in observed variables by essentially classifying them according to patterns of correlation. Confirmatory factor analysis, which is modeled after hypothesis testing, determines through established criteria whether the suggested factors do in fact exert an influence.
Figure 4. Data analysis method.
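The exploratory step can be sketched as a principal-component extraction of loadings from the correlation matrix. Rotation is omitted and the data in the test are synthetic; this illustrates the mechanics, not the study’s actual factor solution:

```python
import numpy as np

def principal_factors(data, n_factors):
    """Extract unrotated loadings from the correlation matrix of `data`
    (observations in rows, variables in columns) by eigendecomposition,
    i.e. the principal-component method of factor extraction."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)          # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_factors]    # keep the largest ones
    return eigvecs[:, order] * np.sqrt(eigvals[order])
```

When several observed variables share one underlying factor, their loadings on the first extracted factor are close to ±1, which is the pattern the validity check looks for.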
10. Mathematical Model Building
Developing a mathematical model for cross-cultural safety behavior management entails determining the connections between cultural elements, personal safety practices, and overall safety results. A collection of formulas and parameters that explain how cultural factors influence safety behavior and the consequent effect on safety performance can be used to develop this model.
10.1. Define Variables and Parameters
With components signifying values on cultural dimensions, let C be the vector of cultural dimensions:

C = (C_1, C_2, C_3, C_4, C_5) (13)

where:
C_1: Power Distance,
C_2: Uncertainty Avoidance,
C_3: Individualism vs. Collectivism,
C_4: Masculinity vs. Femininity,
C_5: Long-term Orientation.

Let B be the individual safety behavior vector, representing key aspects of safety behavior:

B = (B_1, B_2, B_3, B_4) (14)

where:
B_1: Compliance with safety protocols,
B_2: Risk reporting behavior,
B_3: Proactive risk identification,
B_4: Adherence to communication practices in emergencies.
10.2. Define Cultural Influence Function
Model the influence of cultural dimensions C on individual safety behaviors B with a linear or nonlinear function:

B = f(C) + ε (15)

where:
f is the function mapping cultural factors to safety behaviors,
ε is a random error term to capture variability in individual responses.

A common approach is a linear model:

B_j = β_0j + Σ_i β_ij C_i + ε_j (16)

where:
β_ij are coefficients representing the influence of each cultural dimension C_i on behavior B_j,
β_0j is an intercept term representing the baseline safety behavior.
10.3. Safety Behavior Outcomes
Define an outcome variable SB representing overall safety performance, which is influenced by individual behaviors B:

SB = g(B) + ε (17)

where g can be a weighted sum or nonlinear function:

SB = w_0 + Σ_j w_j B_j + ε (18)

where:
w_j are weights for each safety behavior,
w_0 is a baseline term for overall safety performance,
ε is a random error term to account for other factors not captured in the model.
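A minimal numeric sketch of the linear influence model and the weighted-sum outcome; all coefficient values here are hypothetical:

```python
def behaviors_from_culture(beta0, beta, C):
    """Linear cultural-influence model: B_j = beta0[j] + sum_i beta[i][j] * C[i]."""
    return [beta0[j] + sum(beta[i][j] * C[i] for i in range(len(C)))
            for j in range(len(beta0))]

def safety_performance(w0, w, B):
    """Weighted-sum outcome: SB = w0 + sum_j w[j] * B[j]."""
    return w0 + sum(wj * bj for wj, bj in zip(w, B))
```

Chaining the two functions reproduces the composition SB = g(f(C)) used later in the optimization step.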
10.4. Multilevel Model for Cross-Cultural Variations
To account for differences in cultural influences across different regions or organizations, use a hierarchical or multilevel model:

B_jg = β_0jg + Σ_i β_ijg C_ig + ε_jg (19)

where:
g denotes the specific cultural group or organization,
β_0jg and β_ijg can vary by cultural group, allowing for different cultural impacts on safety behavior within each group.
10.5. Optimization for Safety Improvement
To optimize safety performance SB by adjusting cultural and behavioral interventions, define an objective function to maximize SB, subject to constraints on resources, training, and other factors:

max SB = g(f(C)) (20)

Subject to:

h(C, B) ≤ R (resource, training, and other constraints) (21)
Predictive Model for Safety Outcomes
A predictive model can be created by fitting the parameters α, β, and γ to data, enabling estimation of safety performance SB based on known cultural dimensions and safety behavior profiles. For instance, if the C and B values are known for a particular organization, we can predict SB to assess potential safety outcomes. Machine learning techniques such as linear regression, logistic regression, or neural networks can be applied to fit and predict safety outcomes from these data.
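As a sketch of this fitting step, ordinary least squares (the simplest of the listed techniques) can recover the weights from data; the data used in the test are synthetic:

```python
import numpy as np

def fit_linear_safety_model(C_matrix, SB_vector):
    """Fit SB = w0 + C @ w by ordinary least squares.

    Returns the coefficient vector [w0, w1, ..., wk], with w0 the intercept.
    """
    X = np.column_stack([np.ones(len(C_matrix)), np.asarray(C_matrix)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(SB_vector), rcond=None)
    return coef
```

Once fitted, multiplying a new organization’s cultural scores by the returned coefficients yields its predicted safety performance.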
11. Feedback and Learning Mechanism
Incorporate feedback by updating model parameters as more data becomes available, allowing the model to adapt to evolving cultural and behavioral factors:
Define an update rule, e.g., using gradient descent, to iteratively adjust parameters α, β, and γ based on observed outcomes.
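A minimal sketch of such an update rule, here for a two-parameter weighted-sum model under squared-error loss; the parameter names and observations are illustrative:

```python
def gradient_descent_update(w, observations, lr=0.01):
    """One feedback iteration: nudge weights w = [w0, w1] down the gradient of
    the mean squared error of SB_hat = w0 + w1 * B against observed SB."""
    g0 = g1 = 0.0
    for B, SB in observations:
        err = (w[0] + w[1] * B) - SB
        g0 += 2.0 * err          # d(err^2)/d(w0)
        g1 += 2.0 * err * B      # d(err^2)/d(w1)
    n = len(observations)
    return [w[0] - lr * g0 / n, w[1] - lr * g1 / n]
```

Calling this repeatedly as new (B, SB) pairs arrive lets the model parameters track evolving cultural and behavioral conditions.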
Structural Equation of the Study:
To mathematically model safety behavior management from a cross-cultural perspective, we need to consider factors such as cultural influences, individual behaviors, organizational policies, and external environmental factors. Below is a structured mathematical approach that can be adapted for this purpose.
11.1. Define Variables and Parameters
Individual-Level Factors:
Safety behavior of individual i (binary or continuous, depending on the modeling approach),
Awareness level of individual i regarding safety protocols,
Cultural influence on individual i’s safety behavior, which might vary depending on cultural background,
External factors impacting individual i (e.g., perceived risk in the environment),
Personal attitudes or perceptions of individual i towards safety.
Latent Variables and Observed Indicators
Let’s define:
η: Safety Behavior, the endogenous latent variable representing safety behaviors of individuals,
ξ_1: Cultural Influence, the exogenous latent variable representing cross-cultural factors affecting safety behavior,
ξ_2: Organizational Support, the exogenous latent variable for organizational factors impacting safety,
ξ_3: Personal Attitudes, the exogenous latent variable for personal safety attitudes and perceptions.
11.2. Measurement Equations
For each latent variable, we model its relationship with observed indicators.
Safety Behavior (η):

y_k = λ_yk η + ε_k

where y_k are observed indicators of safety behavior (e.g., adherence to safety protocols, use of protective equipment, risk-taking behaviors), λ_yk are factor loadings, and ε_k are measurement errors.

Cultural Influence (ξ_1):

x_k = λ_1k ξ_1 + δ_k

where x_k represent indicators of cultural influence (e.g., collectivism vs. individualism, power distance), with λ_1k as loadings and δ_k as errors.

Organizational Support (ξ_2):

x_k = λ_2k ξ_2 + δ_k

where x_k are indicators of organizational support (e.g., safety training, management involvement).

Personal Attitudes (ξ_3):

x_k = λ_3k ξ_3 + δ_k

where x_k represent personal attitudes towards safety.
11.3. Structural Equations
The structural equations model the causal relationships between the latent variables.
Safety Behavior (η) is influenced by Cultural Influence (ξ_1), Organizational Support (ξ_2), and Personal Attitudes (ξ_3):

η = γ_1 ξ_1 + γ_2 ξ_2 + γ_3 ξ_3 + ζ

where γ_1, γ_2, γ_3 are path coefficients representing the effects of each exogenous variable on safety behavior, and ζ is the structural error term for η.
11.4. Covariances and Correlations
To capture cross-cultural interactions, the exogenous latent variables are allowed to covary:

Cov(ξ_a, ξ_b) = φ_ab

where φ_ab are covariances between the latent variables.
12. Mathematical Model of Open-Pit Mine Optimization
One of the most crucial phases in mine design is surface mining planning, which becomes a complex and demanding optimization problem for large mineral resources. In these kinds of situations, grouping mining blocks (the smallest mining units) into larger units is a popular strategy. We examine constrained block clustering using an integer nonlinear programming model, wherein the size and shape of individual clusters are within a preset range and blocks are physically connected inside a cluster, in order to minimize degree deviations. We then offer a population iterated local search strategy to solve this nonlinear model and obtain a close-to-optimal solution. The suggested model and solution methodology were applied to a case study of a 40,947-block gold-silver deposit. By grouping the mining blocks into 1966 clusters, the mining planner can handle the production planning problem faster.
Mathematical Formulation
Mining planning has shown interest in mining block aggregation, which is an effort to combine smaller mining blocks into larger mining units. The reduction of the surface production planning problem’s size is the primary goal of mining block aggregation. The following factors make the aggregation approach a good choice for planning: 1) more practical schedules are produced; 2) the planning problem can be solved faster; 3) it is easily adaptable to include case-specific functions; and 4) it is simple to implement [48]. An illustration of mining block clustering for a two-dimensional block model is depicted in Figure 5. The 16-block original block model is shown in Figure 5(a), and the aggregated block model with four clusters is shown in Figure 5(b). The grade and rock quality of the blocks that are grouped together should be comparable. The different clusters should not be separated from one another, and both the horizontal and vertical extensions must adhere to the mining plan’s restrictions. Above all, vertical clustering must be consistent with the stability of the mining faces. These mathematical limits are described in further depth below.
Figure 5. Clustering of the block model considering a 45-degree wall slope. Color versions are available online.
An integer nonlinear programming (INLP) model has been created in this work to cluster mining blocks into more substantial units with minimal technical limitations. The clusters that are created as a result are geometrically continuous and adhere to predetermined dimensions for size and shape. The objective function, decision variables, and restrictions that are part of the suggested mathematical model are covered in the sections that follow.
Sets and Indexes:
i, j: Block indexes,
n: Cluster index,
o: Ore block index,
w: Waste block index,
S_i: Set of priority blocks for block i; this set contains blocks that should be mined before block i is reached,
B_ij: Set of blocks between block i and block j in each bench (horizontal level).

Parameters:
g_i: Degree (grade) of block i,
h_i: Height of block i,
t_i: Tonnage of block i,
W_max: Maximum cluster weight,
H_max: Maximum vertical cluster extension,
o_i: Binary parameter indicating whether block i is ore or waste; it is equal to one if block i is ore, and zero otherwise.

Decision variables:
x_in: Assignment of block i to cluster n; x_in is equal to one if block i is in cluster n, and zero otherwise,
y_ijn: Binary variable representing whether blocks i and j are adjacent in cluster n; if adjacent, equal to one, and zero otherwise.
Objective function: In general, combining mining blocks into bigger units reduces the resolution of block specifications (such as mineral grade). As a result, a number of issues arise throughout the mining schedule, which has a big impact on how well the mining plans are produced. Therefore, the objective function is defined as the reduction of the sum of within-cluster degree variance by taking each block’s degree into account as an input to the model. The defined objective function is as follows:
min Σ_n Σ_i x_in (g_i − ḡ_n)² (22)

where ḡ_n is the mean degree of the blocks assigned to cluster n.
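The objective can be illustrated as follows: for a hypothetical set of block degrees and two candidate clusterings, the sum of squared within-cluster degree deviations is lower when blocks of similar degree are grouped together. This is a sketch of the criterion, not the INLP model itself:

```python
def within_cluster_degree_deviation(degrees, clusters):
    """Sum over clusters of squared deviations of block degrees from the
    cluster mean; lower values mean more homogeneous clusters.

    degrees: dict block -> degree; clusters: dict name -> list of blocks.
    """
    total = 0.0
    for blocks in clusters.values():
        mean = sum(degrees[b] for b in blocks) / len(blocks)
        total += sum((degrees[b] - mean) ** 2 for b in blocks)
    return total
```

Grouping the two low-degree blocks together gives a deviation of zero, while mixing a low-degree and a high-degree block in one cluster is penalized.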
Constraints: Several restrictions apply to the developed mathematical clustering model in order to produce clusters that are satisfactory from an operational perspective.
Σ_n x_in = 1, for every block i (23)
(24)
(25)
(26)
(27)
Σ_i t_i x_in ≤ W_max, for every cluster n (28)
(29)
x_in ∈ {0, 1}, for all i and n (30)
Block i should be assigned to one and only one cluster, in accordance with Equation (23). Because of the wall slope restriction, the second constraint (Equation (24)) sets the priority relations for the order of cluster extraction. It must be carried out in accordance with the open pit production plan to ensure that the pit walls’ slope stays below a set threshold. This threshold, also known as the safety slope, is established based on the geotechnical characteristics of the mine. Furthermore, the extraction order must follow the availability constraints.
As stated differently, in order to allow access to the material, the units at the upper level of the pit must be excavated before the lower level [48]. Regarding the model’s priority requirement, block j, which is a priority of block i and a member of S_i, should be positioned either in the cluster n that contains block i or in one of its antecedent clusters. Stated differently, block i is assigned to cluster n only if each of its priority blocks is assigned to the same cluster (cluster n) or to a predecessor cluster. To make sure that the geometric continuity of the clusters is satisfied, Equations (25) and (26) are assumed.
Based on these requirements, the set of blocks lying between two non-adjacent blocks that are at the same level and in the same cluster is itself assigned to that cluster. As a result, the clusters produced are geometrically continuous, with rounded forms. This constraint represents a significant advancement over earlier studies, and it applies in both horizontal and vertical orientations. Therefore, in order to obtain a workable three-dimensional output, the continuity constraint is applied in all three directions.
A two-dimensional block model with formed clusters having a continuous shape is shown in Figure 6(a). Blocks that have the same color are put together to form clusters, since each color denotes a cluster. Non-adjacent blocks 7 and 9 can be assigned to the same cluster, as the continuity requirement described in the model allows: block 8, which is in the same cluster and is situated between them, is regarded as a member of the set of blocks lying between them. A block model where the continuity condition is not satisfied is seen in Figure 6(b). Here the non-adjacent blocks 7 and 10 are in the same cluster, but the blocks positioned between them, namely blocks 8 and 9, are in separate clusters. As a result, the cluster that results is geometrically disconnected.
The maximum vertical length of clusters is limited by Equation (27). This constraint states that the cluster’s vertical dimension cannot be greater than the maximum value H_max, a positive, non-zero integer defined by mining operational considerations. A cluster whose vertical dimension exceeds H_max is operationally impractical to mine. The clusters’ maximum tonnage is governed by Equation (28): the total amount of material collected from each cluster cannot exceed W_max tons. The maximum capacity of the aggregated blocks must be taken into account due to the mine’s limited loading and unloading equipment. Equation (29) takes the destination of clustered blocks into account. Clusters must have a predetermined destination because mining planning uses them; therefore, blocks serving comparable purposes (such as processing plants or waste dumps) must be grouped in the same cluster. Stated differently, only ore blocks belong in ore clusters, and only waste blocks belong in waste clusters. The assignment of block i to cluster n is represented by the decision variable described by Equation (30): it equals one if block i is in cluster n, and zero otherwise.
Figure 6. (a) Block model with continuous clustering; (b) block model with non-continuous clustering.
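A sketch of how Equations (27)-(29) can be checked for a candidate cluster; the continuity and slope constraints are omitted, and the variable names (level, tonnage, h_max, w_max) are illustrative:

```python
def cluster_feasible(blocks, level, tonnage, is_ore, h_max, w_max):
    """Check one cluster against the vertical-extension, tonnage, and
    single-destination constraints (Eqs. (27)-(29) in simplified form)."""
    if sum(tonnage[b] for b in blocks) > w_max:
        return False                      # Eq. (28): tonnage capacity exceeded
    if len({is_ore[b] for b in blocks}) > 1:
        return False                      # Eq. (29): mixed ore/waste destination
    levels = [level[b] for b in blocks]
    if max(levels) - min(levels) > h_max:
        return False                      # Eq. (27): vertical extension too large
    return True
```

A neighborhood solution would only be accepted once every resulting cluster passes checks of this kind.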
The above mathematical model has a nonlinear objective function, the number of clusters is not known ahead of time, and the number of binary decision variables grows with the problem size, making it a difficult problem to solve. Many methods have been developed in the field of metaheuristics to solve difficult optimization problems [49]. These metaheuristics each contribute an effective search procedure, but they are not inherently antagonistic; merging the mechanisms of two or more metaheuristics offers the chance to create a new algorithm [50]. A population iterated local search (PILS) algorithm was created and applied in our study to address actual cases of the problem. Thierens (2004) presents the general idea of the PILS algorithm [50]. A modified version of the PILS algorithm, based on the features of the clustering problem, is described in this paper. By utilizing the information present in the population of neighboring solutions, PILS aims to increase the effectiveness of the iterated local search (ILS) algorithm; PILS is the population extension of the ILS metaheuristic. ILS uses a local search procedure to explore the neighborhood of the current solution to find the local optimum. Upon reaching a local optimum, ILS disrupts the generated solution and relaunches the search from the fresh solution. The perturbation ought to be sufficiently large to prevent the local search from returning to the same local optimum during the subsequent iteration, yet not so large that the search behaves like a multirun local search algorithm [51]. By utilizing the population idea, the PILS method is constrained to investigating low-dimensional neighborhoods; as a result, it is possible to attain the desired outcome faster [50]. The following are the steps in the suggested PILS algorithm.
13. Local Search Heuristics
A local search method finds the local optimal solution after a constructive heuristic yields the initial solution. Starting from this initial solution, blocks are moved between clusters in order to investigate neighboring solutions.
13.1. Population Generation
The first phase involves identifying any neighboring solution with a better objective function value than the main solution. Moreover, the solutions that reduce the objective function most effectively are regarded as the chosen ones. The end result is a population of chosen solutions.
Perturbation: The perturbation is applied to the specifically chosen solutions. The best result is selected and turned into one of the chosen solutions using a local search method. Furthermore, the chosen clusters are identified by recognizing the groups whose blocks shift during the perturbation.
Therefore, the only way to find neighboring solutions is to move the blocks of the chosen clusters. This procedure makes the search space smaller, so the optimal solution is approached in a reasonable amount of processing time. The optimal clustering scheme is chosen by repeating the perturbation process for each of the chosen solutions. The PILS algorithm is displayed in Figure 7.
Figure 7. Flowchart of the PILS algorithm.
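The PILS steps described above can be sketched generically; the clustering-specific pieces (constructive heuristic, block moves, constraint checks) are abstracted into function arguments, so this is an illustration rather than the thesis implementation:

```python
import random

def pils(initial_solutions, local_search, perturb, objective, iterations=100):
    """Population iterated local search sketch: keep a small population of
    local optima, repeatedly perturb the best one, re-optimize locally, and
    retain the fittest solutions."""
    population = [local_search(s) for s in initial_solutions]
    best = min(population, key=objective)
    for _ in range(iterations):
        candidate = local_search(perturb(best))   # perturb, then descend again
        if objective(candidate) < objective(best):
            best = candidate
        population.append(candidate)
        population.sort(key=objective)            # keep only the best members
        population = population[:len(initial_solutions)]
    return best
```

For the block-clustering problem, a solution would be a cluster assignment, local_search would apply the boundary-block moves of Section 13.3, and objective would be the within-cluster degree deviation.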
13.2. Initial Cluster Generation
To create main block clusters, a constructive clustering method was created. Aggregation in this process commences at a corner of the mine block model and concludes when every block is aggregated. Figure 8 displays the flow diagram for the first cluster formation.
13.3. Creating Neighborhood Solutions
In this step, new clustering schemes are developed so that the optimal solution can be chosen. All blocks that are adjacent to cluster n but belong to separate clusters are selected and collected in the set h_n. In the first phase, a member of the set h_n is relocated from its original cluster to construct new aggregated blocks based on the original clustering strategy. The model’s constraints on the newly created clusters are examined in the following step, and if all of the constraints are met, the resulting clusters are recognized as a new solution. A new cluster creation example is shown in Figure 9. As can be observed, block 6 (in the blue cluster) in the first scenario (Figure 9(a)) is adjacent to block 3 (in the red cluster). Block 6 separates from the blue cluster and combines with the red cluster to generate new clusters in the subsequent step (Figure 9(b)). The new clustering scheme is accepted if the resulting clusters satisfy all of the clustering model’s restrictions (Figure 9(c)).
Figure 8. Diagram for establishing major clusters.
Figure 9. The process involves forming new clusters through block movement: initial clusters are established, movement restrictions prevent additional cluster formation, and the resulting clusters are validated once all conditions are met.
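The move described above can be sketched as a neighborhood generator; this is a simplified illustration, and the constraint checks of the clustering model would be applied to each generated scheme afterwards:

```python
def neighborhood_moves(clusters, adjacency):
    """Enumerate neighboring clustering schemes: every block with a neighbor
    in a different cluster (the h_n set) may move into that neighbor's cluster.

    clusters: dict name -> set of blocks; adjacency: dict block -> neighbors.
    """
    for name, blocks in clusters.items():
        for b in blocks:
            if len(blocks) == 1:
                continue                       # do not empty a cluster
            for nb in adjacency.get(b, ()):
                for other, oblocks in clusters.items():
                    if other != name and nb in oblocks:
                        new = {k: set(v) for k, v in clusters.items()}
                        new[name].discard(b)   # leave the original cluster
                        new[other].add(b)      # join the neighbor's cluster
                        yield new
```

In the Figure 9 example, block 6 bordering the red cluster produces exactly one candidate scheme, the one in which it switches clusters.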
14. Conclusions
This paper has presented the methodology used in Chapter 4 of the thesis, which focuses on the Methodology of Safety Behavior Management from a Cross-Culture Perspective. The study employs a quantitative research approach, supported by a survey design and a mathematical model, to explore the relationship between cultural factors, individual safety behaviors, and organizational safety outcomes. The paper has detailed the research approach, design, and data analysis procedures; the findings from this study will be discussed in the subsequent chapter of the thesis.