Methodology of Safety Behavior Management from a Cross-Culture Perspective

Abstract

Global growth in the mining industry is driving the demand for innovative industrial processes, skilled labor, and advanced management capabilities that are aimed at improving productivity. However, these advancements have also made mining one of the most high-risk and unpredictable sectors worldwide. Despite the implementation of risk management strategies, large-scale mining projects often fail due to unrecognized or underestimated risks. This study addresses these challenges by exploring a systematic risk assessment and safety management approach in the mining sector, using Nouvelle Gabon Mining in Gabon as a case study. We analyze the dispersion of identified risks and uncertainties that are often overlooked in traditional safety frameworks. Through a hierarchical classification of hazards, we illustrate the risk impacts across various operational levels. Advanced decision-making techniques, including multiple criteria ranking with alternative trace (MCRAT) and perimeter similarity (RAPS), are employed and tested against multiple-criteria decision-making (MCDM) approaches to assess their effectiveness in hazard control. In addition, this study integrates a cross-cultural perspective, examining how cultural dimensions such as individualism vs. collectivism, power distance, and uncertainty avoidance influence safety behavior, compliance, and risk perception. By analyzing safety behaviors across diverse cultural settings, we find that culturally adaptive safety management strategies significantly enhance compliance and reduce incident rates. Drawing on data from high-risk industries like mining, construction, and manufacturing, our research emphasizes the importance of incorporating cultural considerations into safety management frameworks to create safer workplaces globally. Furthermore, we propose an early warning model for manganese mining hazards based on an optimized adaptive neuro-fuzzy inference system (ANFIS), designed to predict and control risks at multiple levels within Gabon’s manganese mines, offering a robust, data-driven tool for hazard management and global safety improvement. Key strategies for improving safety management include cultural sensitivity in safety training, cross-cultural leadership styles, and the cultural adaptation of safety communication. First, safety training should be tailored to align with cultural norms and values to improve engagement and adherence to safety protocols across diverse employee groups. Second, leadership approaches must be adapted to cultural differences, aligning communication and motivation strategies to foster an inclusive and effective safety culture. Lastly, safety communication methods should be customized to cultural contexts, using visual aids and storytelling for high-context cultures, while employing clear, direct written communication for low-context cultures.


1. Introduction

This paper is derived from Chapter 4 of the thesis, which focuses on the Methodology of Safety Behavior Management from a Cross-Culture Perspective. The chapter outlines the research approach, design, and methods employed to investigate safety behavior in underground mines across different cultural contexts, specifically in China and Gabon. The study utilizes quantitative research methodology, supported by a theoretical framework that examines the relationship between cultural factors, individual safety behaviors, and organizational safety outcomes.

This section explains the key concepts and principles of the research methodologies employed in the study. It describes the quantitative research methodology, including its theoretical framework, definitions, advantages, and limitations, while contrasting it with qualitative approaches. The chapter also discusses foundational assumptions about quantitative research and the methodological strategies used. Additionally, it covers methods for determining sample size, sampling protocols, and population selection, outlining how these factors define the study’s parameters. Demographic information of respondents, such as age and educational attainment, is provided, along with other distinguishing characteristics.

Furthermore, the chapter details the strategies used for data collection, examining the reliability and validity of these methods. It emphasizes the importance of developing an initial data set method that provides a broad framework for organizing information, ultimately guiding discoveries toward knowledge growth.

2. Research Approach

Research approaches are the methods and strategies employed in research, ranging from broad theoretical presumptions to particular procedures for data collection, processing, and interpretation [1]. These decisions need not be taken in the order that appears most logical, and the choice of which method to employ for a particular subject must still be made. That choice is shaped by the researcher’s theoretical presumptions, the investigative strategies adopted, and the specific procedures used to acquire, examine, and evaluate data. The type of topic being studied, the background and experiences of the researchers, and the intended audience all influence the research process [2]. Quantitative studies, qualitative studies, and mixed-method studies are the three primary categories of research.

3. Quantitative Research

Quantitative research is a methodical process that analyzes and clarifies various circumstances using numerical data gathered from observations. It involves empirical statements, that is, factual assertions based on observation rather than opinion. Quantitative research is the collection and analysis of numerical data to illustrate, explain, forecast, or regulate the trajectory of the phenomena being studied. Numerical data analysis is challenging because it necessitates a systematic approach [1]. These investigations make it possible to make predictions, analyze the causal linkages between variables, and extrapolate the findings to the whole population. This strategy usually reaches a large audience simultaneously in the smallest amount of time. Quantitative research embraces deductive reasoning [3]. Rather than prescribing what should be done in a given situation, it produces descriptive statements, grounded in experience, that explain the significance of such events in practical terms [2]. Additionally, it incorporates other techniques, and it conducts empirical testing to ascertain the extent to which a certain policy or program satisfies a norm or standard [4]. Lastly, mathematical operations are performed on the gathered numerical data.

Furthermore, the goal of both qualitative and quantitative research methods is to describe a particular occurrence. However, the analytical tools used in quantitative research rely on statistical analysis and numerical methods. In a variety of fields, such as psychology, education, biology, physics, and natural science, quantitative research aims to characterize a feature by collecting numerical data, such as counts and percentages [5]. Additionally, non-numerical data can be converted into numerical form with the use of specially designed tools. As a result, these methods support the gathering of quantitative data about participants’ opinions and attitudes, among other things. Quantitative methods are procedures used to determine social reality and to collect numerical data through specialized inquiries for specific purposes [6]. Different forms of quantitative research are depicted in Figure 1.

Figure 1. Quantitative research.

4. Qualitative Research

Qualitative approaches use naturalistic and interpretive methods to address many topics of debate and constantly attempt to solve scientific and practical problems in communities. These methods illustrate the patterns and difficulties that people encounter in life (Figure 2) using a variety of empirical data, such as case studies, firsthand accounts, and narratives [7]. They examine the profound significance and underlying causes of these otherwise unexplained events. To better comprehend a particular occurrence, qualitative research involves gathering, evaluating, and interpreting comprehensive narrative or visual data [2]. It examines people’s opinions, actions, and interactions, and it collects and analyzes textual data. The process is time-consuming because considerable time is spent on each participant, but it requires fewer participants than other approaches and suits issues that are exploratory in nature [8]. It is a comprehensive investigation of many aspects of a phenomenon with the goal of learning about objects in their natural setting. Inductive logic is used in this process. As a result, the qualitative approach can generate original ideas, viewpoints, and hypotheses [9]. It lacks generalizability, since its primary focus is on conclusions drawn from events that took place in specific settings, without taking into account outcomes that might arise later or in different circumstances.

Figure 2. Qualitative research.

5. Mixed Methods

In order to gain insight into a subject, mixed-method approaches integrate qualitative and quantitative methodologies, and the mix may vary with the study’s goal and research objectives [10]. Both approaches may receive equal consideration, or one may be given priority among those selected for integration [2]. These methods allow academics to tackle challenging research issues in a variety of fields. When one approach alone is insufficient, these methods are useful because they combine the advantages of qualitative and quantitative approaches [1] [11] (see Figure 3). Blended approaches also suit researchers with different methodological preferences in today’s multidisciplinary research environments. A variety of disciplines, including psychology and healthcare, now use mixed techniques, although a study need not be expressly labeled as mixed methods for them to be present [2]. By using these approaches to their full capacity, researchers can ensure that mixed methodologies are applied effectively.

Figure 3. Mixed methods approach.

Based on the benefits and drawbacks of each technique, a quantitative strategy was selected for this study after a review of qualitative, quantitative, and mixed-method designs. The ability to compare goals with results and to measure accuracy is another advantage of quantitative research. The research methodology had to be appropriate for the study’s goals and scope, so a quantitative approach was chosen for its intended purpose. This survey’s statistical approach made it possible to fully interpret the findings. When collecting data, researchers should show respect and decency to both study sites and participants [12]. Ethically sound population sampling and participant anonymity, guaranteeing that every participant received the utmost confidentiality, characterized the quantitative methodology that supported this study. Furthermore, a quantitative approach was ideally suited to this study since it was necessary to gather a large number of respondents from the target group.

5.1. Research Design

According to Myers et al. (2013), research design is the process by which a researcher chooses the techniques and approaches used to carry out their investigation [13]. It primarily takes into account the aims and objectives that accurately represent the constraints of time, place, money, and research staff availability. Additionally, the preferences and inclinations of the researcher and the evaluators have a significant impact on research design. Research designs are frameworks for gathering and analyzing data, according to Taherdoost (2022) [2]. Wisenthige (2023) claimed that several indicators are needed to check whether the chosen study design will succeed in achieving its goals, which further supports this idea [14]. These factors include the ability to locate necessary information and the applicability of the research topic and analysis approach. Furthermore, Ranganathan & Aggarwal (2018) shed light on this idea by stating that a study design is a flexible blueprint that connects philosophical frameworks with investigative strategies and, secondarily, data collection techniques [15].

Because survey design methodology enables the collection of standardized data from a single group, it was used for this investigation. The data-gathering process involves asking people specific questions about the companies or places engaged in the surveying process [15]. This technique is also frequently used in qualitative research. Such a study entails evaluating the traits of a population sample that is intended to be representative of the target group under investigation [2]. In addition to other statistical techniques applied to the data gathered, a questionnaire consisting of a series of structured questions is presented to respondents, for example during in-person interviews. When employing such an approach, it is strongly advised to elicit individuals’ attitudes, beliefs, or ideas regarding specific topics on the basis of an acceptable sampling process [2]. Following data collection from a sample, the results are extrapolated to the entire population of interest, so the opinions of the majority of citizens are represented. Selecting a representative sample, creating and distributing questionnaires, and evaluating results are among the tasks involved in information gathering.

5.2. Target Population

Workers below the managerial level in underground mines in China and Gabon made up the study’s target group.

5.2.1. Sampling Technique

A sample is a subset of the total population. Since it can be impractical to gather data on every member of a group, sampling is frequently utilized in population research. Conducting a census of a large population is expensive and time-consuming, as Rahi (2017) notes [16]. As a result, sampling is frequently employed as a more economical and effective substitute. According to Berndt (2020), a common weakness of samples is that non-specialists are reluctant to assume that the results are representative of any particular population [17]. The sampling strategy used will determine any additional restrictions. According to Golzar and Tajik (2022), a representative sample is chosen so that its key attributes closely resemble those of the population it represents [18]. The two main kinds of sampling techniques are probability and non-probability sampling.

Probability sampling includes all methods in which selection occurs at random [19]. This entails creating a procedure that guarantees each member of the target group an equal chance of being selected. One major benefit of probability sampling techniques is that, when properly executed, they ensure an objective sample that fairly represents the target population [17]. As a result, even when discussing a larger society, researchers can use estimates derived from random sampling with confidence and without bias. By their very nature, however, probability sampling techniques can be challenging. Probability samples ought to be fairly large, requiring significant labor, financial, and time inputs [18], and constructing them effectively takes considerable skill. Simple random sampling, systematic random sampling, stratified random sampling, and cluster sampling are a few examples.

In statistics, a non-probability technique refers to any method of choosing survey participants other than random selection. Convenience sampling is a common technique used in psychological research to choose participants [18]. It means selecting participants for a study based on their accessibility and proximity to the study; see [20] for a thorough rundown. A frequently observed example in developmental research is asking student volunteers to take part in studies. Convenience sampling has the benefit of being economical, effective, and simple to use [20]. However, the inability to generalize from the sample is convenience sampling’s primary flaw [21]. All forms of convenience samples have comparable advantages and disadvantages, but to varying degrees, and these are largely the opposite of those of random sampling: convenience samples are comparatively less expensive, quicker, and easier to conduct, whereas probability samples typically yield data that is highly generalizable [21] [22]. While case-study research uses non-probability sampling, the majority of survey-based research uses probability sampling [18].

A convenience sampling strategy was used in this study. This is a sensible way to obtain samples without complicated procedures and at reduced cost. Because we were unable to obtain funding to conduct the study, this approach enabled us to collect data economically by choosing readily available sample units. We had a few assistants who were inexperienced with complex statistical selection techniques to help with data collection. Their efforts were supported by the ease of convenience sampling, which can be carried out without specific expertise and requires no preparation.

5.2.2. The Questionnaire

A questionnaire with three sections and an introduction served as the main instrument for gathering data for this study. Every element of the questionnaire and its layout is explained below. A questionnaire was used for data collection and hypothesis testing in order to meet the goals of this study, since it made it simple to quantify the results and provided pertinent information for evaluating the hypotheses. The questionnaire’s measurement items were primarily drawn from earlier studies. Section 1 asked five demographic questions, and Section 2 asked safety-related questions (safety behavior, safety competency, and management safety commitment). The survey was designed to be distributed via a link over an internet server. Weigold et al. (2021) argue that online administration is superior to traditional paper-and-pencil surveys [23]: it lowers survey administration expenses, minimizes data errors, cuts down on the time it takes to collect and analyze data, and improves convenience. Studies have shown that responses from online surveys are just as reliable as those from phone or mail surveys for predicting behavior. Through work-related WhatsApp groups, those who expressed interest in participating received a pre-alert about the survey and were asked to complete the online questionnaire, and only those who had not completed it within the allotted time received reminders. Online surveys have been compared with other methods of contacting respondents [23]. This study considered the possibility that respondents’ internet expertise could be related to their response rates to the primary research questionnaire; because of their professional work experience, participants were presumed to be computer literate. To help the respondents and increase response accuracy, the survey used several technological design elements taken from previous research, and it also incorporated other strategies for increasing response rates described by Daikeler et al. (2020) [24].

All interested participants were directed to the survey site via an internet link that the researcher provided, and respondents received WhatsApp notifications about the survey. Further information about the survey appeared in the prelude and introduction sections on the first page of the survey program. The design of our research instrument was shaped by practical considerations such as participant fatigue, time limits, and the length of the primary study questionnaire [25]. According to Daikeler et al. (2020), a lengthy survey instrument may overwhelm respondents, making it harder for them to understand and ultimately resulting in poor response rates [24]. Numerous sections and scales had to be created to quantify each of the identified components. By taking this approach, we intended to minimize these issues during the thirty to forty minutes allotted for completing the questionnaire.

5.2.3. Part I: Demographic Survey

A demographic survey was carried out to determine the demographic profile of the sample and assess the connections between safety behavior and situational, individual, and specific characteristics. Section 1 of the research questionnaire gathered demographic information about the participants, including age, gender, education, marital status, and years of employment. The response categories for these demographic questions were carefully designed to align with the study’s eligibility criteria, ensuring participants met the necessary requirements for inclusion. Many of the study’s assumptions could be evaluated with the use of the extra demographic data.

5.2.4. Part II: Safety Survey

This section contains questions about management’s commitment to safety, safety behavior, and safety competency. As mentioned earlier, safety behavior in this study includes both safety participation and safety compliance. The safety participation and compliance scale offered by [26] [27] is the most widely utilized; the scale was updated in response to input from survey participants. Data on safety behavior was collected using nine items: four evaluated safety compliance and five evaluated safety participation. On a five-point Likert scale, with 1 denoting “strongly disagree” and 5 denoting “strongly agree,” participants indicated how much they agreed with each item. Cronbach’s alpha (α) for the nine safety behavior items was high.

Eight items on a 5-point Likert scale were used to evaluate the person’s safety competency. Strong reliability was indicated by the scale reliability test’s acceptable Cronbach’s alpha value. Items were taken from Bensonch et al., 2022, and improved upon [28]. Eight questions based on participant opinions were used to assess respondents’ perceptions of management’s ability to motivate employees to perform more securely. Vinodkumar and Bhasi (2010) provided the management safety commitment questions, which were evaluated on a five-point Likert scale with 1 denoting “strongly disagree” and 5 denoting “strongly agree.” [29] The current study’s alpha coefficient for the scale is acceptable.

5.3. Pilot Study

A pilot study’s objective is to collect initial data and assess possible research techniques, instruments, and protocols in advance of a more extensive investigation. Pilot studies, which are conducted to identify any shortcomings or issues with the research instruments and technique prior to the start of the main study, are among the most important components of any research project [30]. When faced with contradictory approaches, researchers may become well-versed in the specifics of the methodology and utilize that information to make decisions, such as whether to employ online surveys or interviews. In November 2022, we carried out an initial examination in one of the Prestea mines. The pilot meticulously adhered to the preliminary study process, which included evaluating a condensed version of the entire survey. Twenty-five employees participated in the pilot survey, and two research assistants were hired. We invited workers who were free and accessible to participate in the study, allowing them enough time to make up their minds. The consent form was signed by the participants to show their agreement. The research assistant reported that data collection went smoothly and that the response rate was recorded.

The study assistants had to help the mine workers fill out the questionnaire. Verifying that the questionnaire items accurately matched the goals of the study was essential. The questionnaire’s appropriateness, clarity, well-defined questions, comprehension, and consistent presentation were evaluated during the pilot trial. Consent forms and statements from the personality and safety surveys were tested for comprehension. The respondents took an average of 15 to 20 minutes to complete the surveys. They made every effort to respond to all of the questions; however, some were missed. For a number of questions there were notable discrepancies in the responses, caused primarily by ambiguous wording that participants misunderstood; these observations concerned the safety behavior and safety competency questions. Typographical errors were identified and fixed. In general, participants in this pilot study had little difficulty understanding the items on the questionnaire. The pilot demonstrated the practicality of the research methodology. From a managerial perspective, the exercise did not appear to be particularly burdensome to the mine workers, nor did it have a large impact on staff time.

5.4. Conceptual Framework

Based on the literature reviewed, a conceptual framework was created to highlight important theories and ideas about how mining affects the economy. The theoretical underpinnings of the review included the resource curse theory, the sustainable development framework, and social impact assessment standards. The analysis and interpretation of the results were guided by these variables.

Data Collection Methods

The mining company NGM was selected using total population sampling: the entire research population was taken into account during the selection process [31]. The mining businesses were chosen because of their presence in Gabon, where they maintained records on manganese’s benefits.

For data analysis purposes, we had to sample a sufficiently large set of businesses for our research project. Spearman and point-biserial coefficients were employed. Power was calculated using the Spearman correlation, which requires the largest sample size [32]. We anticipated an average effect size of 0.3, per Cohen (1988) [33], with a standard alpha level of 0.05. The Spearman correlation has roughly 91% power, similar to the Pearson correlation [34]. After entering these parameters into G*Power, we found that a sample size of 102 cases was adequate for the study [35]. Because our study employed historical data on the manganese mines, we recognized this as a constraint while doing the statistical analysis, and further analyses were conducted cautiously.
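To make the power reasoning above concrete, the short sketch below approximates the sample size needed to detect a correlation of about 0.3 using the Fisher z transformation. This is only an illustration: the study reports using G*Power, whose exact routines and settings are not reproduced here, and the function name and default parameters are our own assumptions.

```python
# Sketch: approximate sample size to detect a correlation of r = 0.3
# (two-tailed alpha = 0.05) via the Fisher z approximation. Illustrative only;
# G*Power uses exact routines, so its result can differ from this estimate.
import numpy as np
from scipy.stats import norm

def n_for_correlation(r, alpha=0.05, power=0.91):
    """Approximate N needed to detect correlation r."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-tailed critical value
    z_beta = norm.ppf(power)            # quantile for the desired power
    c = np.arctanh(r)                   # Fisher z of the effect size
    return int(np.ceil(((z_alpha + z_beta) / c) ** 2 + 3))

print(n_for_correlation(0.3))
```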

6. Data Analysis Procedures

6.1. Test the Hypothesis

The study tested the following hypotheses using correlation and linear regression:

H01: There is no relationship between typical OSHA issues and manganese mining operations.

H02: The determination of management and of the mining workforce to adhere to occupational safety and health regulations are unrelated.

To assess each hypothesis, significance thresholds (alpha) of 10%, 5%, and 1% were employed, together with the corresponding confidence levels of 90%, 95%, and 99%. A hypothesis test is deemed statistically significant when the p-value is less than the chosen significance level (alpha); equivalently, the value stated under the null hypothesis must fall outside the confidence interval for the result to be deemed statistically significant [36]. According to Sauro (2015), the 90% confidence level is often employed as a benchmark when analyzing survey data, because 90% confidence for a symmetrical claim is equivalent to 95% confidence for a one-sided claim [37]. Although the study made use of survey data, a 90% confidence level was selected as a commercial assurance when analyzing miners’ answers. A 95% confidence level was employed by [38] to illustrate that, if numerous samples from a single group are used to repeat the query, the true population mean will be captured. The researchers also adopted the 99% confidence level because poor decision-making in the manganese mining business might result in fatalities or significant injuries [37]; this level is typically utilized where a faulty decision could lead to harm or death. In order to ensure a higher degree of precision in the views held by small-scale miners, information was gathered by visiting nearly all of Ntotroso’s precious metal processing facilities. Each of the offered hypotheses was therefore tested at the 90%, 95%, and 99% confidence levels, corresponding to significance (alpha) values of 10%, 5%, and 1%.

Correlation analysis is a statistical method that evaluates the degree of relationship between common health and safety concerns and manganese mining. A significant association exists between parameters when the correlation between them is high, and the available statistical data can be used to quantify the strength of that correlation [39]. The analysis also relies on linear regression, a statistical technique that may be applied to any number of independent or explanatory variables to describe their relationship with a dependent variable. These metrics were employed in the research to assess the relationship, and linear regression was applied because it is useful when attempting to estimate the value of one factor from the observed value of another [40]. The equation for linear regression is:

y = \beta_0 + \beta_1 x + \varepsilon  (1)

where:

y is the predicted value of the dependent variable for any given value of the independent variable x,

β_0 is the intercept, i.e., the expected value of y when x = 0,

β_1 is the regression coefficient,

x is the independent variable (the one we hypothesize influences y).

ε is the estimation error, i.e., how far the observed values deviate from the regression estimate. Linear regression finds the best-fit line through the data by choosing the coefficient (β_1) that minimizes the aggregate prediction error (ε) [40]. This method was used for each hypothesis. Using logistic regression and correlation analysis, the connection between the independent variables and the dependent factor was examined. In this section, the dependent variable (y) was the adoption of OSH practices by small-scale manganese miners; the independent variables (x) were socio-cultural traits, shared interests in OSH, managerial commitment, training, and demographic traits. OSH risks in Gabon’s ASGM operations, including risks related to auditory, mental, arbitrary, natural, chemical, and psychological aspects of gold mining in Gabon, have all been evaluated and made public.
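As an illustration of the analysis described above, the sketch below estimates a Spearman correlation and fits the simple linear regression of Equation (1) on simulated data. The variable names (mgmt_commitment, osh_adoption) are hypothetical placeholders for the study’s constructs.

```python
# Sketch: correlation analysis and the linear regression of Equation (1) on
# simulated data; column names are illustrative, not the study's variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(42)
df = pd.DataFrame({"mgmt_commitment": rng.normal(3.5, 0.6, 200)})                 # x
df["osh_adoption"] = 0.8 + 0.5 * df["mgmt_commitment"] + rng.normal(0, 0.4, 200)  # y

rho, p_value = stats.spearmanr(df["mgmt_commitment"], df["osh_adoption"])

X = sm.add_constant(df["mgmt_commitment"])     # adds the intercept beta_0
ols_fit = sm.OLS(df["osh_adoption"], X).fit()  # estimates beta_0 and beta_1

print(f"Spearman rho = {rho:.2f} (p = {p_value:.4f})")
print(ols_fit.params)                          # beta_0 (const) and beta_1
```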

6.2. Mathematical Formulas for Descriptive Analysis

Our research is grounded in both theoretical and empirical investigations. We employed a statistical method to choose all 510 survey participants for our investigation. In order to identify the strategies employed to strike a balance between environmental sustainability, financial success, and safety, we perform a case study and analyze the correlations between variables using regression and descriptive statistics. Cochran’s formula was used to determine the sample size [41]; it was employed because the size of the study population was not precisely known.

n_0 = \frac{Z^2 p q}{e^2}  (2)

where:

n_0 is the sample size,

Z is the abscissa of the normal curve that cuts off an area α at the tails, so that 1 - α is the desired confidence level (e.g., 95%),

e is the desired degree of accuracy,

p is the estimated proportion of the population that possesses the attribute,

q = 1 - p.

With:

Z = 2.05

p = 0.5

q = 1 - p = 0.5

e = 0.05

Therefore,

n_0 = \frac{Z^2 p q}{e^2} = \frac{(2.05)^2 (0.5)(0.5)}{(0.05)^2} = 510
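A minimal sketch of the Cochran calculation in Equation (2) is given below. The Z value passed in the example call is the conventional 1.96 for 95% confidence and is only an assumption for illustration; the study’s own inputs are those listed above.

```python
# Sketch: Cochran's sample-size formula (Equation (2)) for a large or unknown
# population. The example inputs are illustrative, not the study's values.
import math

def cochran_n(z, p, e):
    q = 1 - p
    return math.ceil((z ** 2) * p * q / (e ** 2))

print(cochran_n(z=1.96, p=0.5, e=0.05))  # conventional 95% case
```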

The linear regression equation is:

y = \beta_0 + \beta_1 x + \varepsilon

where:

y is the predicted value of the dependent variable for any given value of the independent variable x,

β_0 is the intercept, i.e., the expected value of y when x = 0,

β_1 is the regression coefficient.

The dependent variable (y) in this section was the adoption of OSH practices by small-scale manganese miners; the independent variables (x) were sociocultural characteristics, common interests in OSH, managerial commitment, training, and de-risking. ε is the estimation error, i.e., how far the observations deviate from the estimate. Logistic regression finds the best-fit line through the data by searching for the parameter (β_1) that minimizes the aggregate prediction error (ε) [40]. This technique was used to evaluate each hypothesis.

The study commenced with the generation of descriptive statistics and the analysis of possible issues related to multicollinearity, heterogeneity, and oscillation following the administration and collection of the instrument. The suitability of employing fixed versus random effects was then evaluated. We use the variance inflation factor (VIF) to test for multicollinearity. The formula for calculating VIF manually is:

VIF_i = \frac{1}{1 - R_i^2}  (3)

where R_i^2 is the coefficient of determination obtained by regressing the i-th predictor on the remaining predictors.
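The following sketch computes VIFs with statsmodels on simulated predictors; the predictor names are hypothetical stand-ins for the study’s independent variables.

```python
# Sketch: variance inflation factors (Equation (3)) via statsmodels.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
X = pd.DataFrame({"training": rng.normal(size=200),
                  "mgmt_commitment": rng.normal(size=200)})
X["shared_interest"] = 0.6 * X["training"] + rng.normal(scale=0.5, size=200)

X_const = sm.add_constant(X)
vifs = {col: variance_inflation_factor(X_const.values, i)
        for i, col in enumerate(X_const.columns) if col != "const"}
print(vifs)  # values above roughly 5-10 are commonly flagged as problematic
```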

In order to address heterogeneity, we use the following formula.

Mathematically, I^2 is expressed as

I^2 = \frac{\tau^2}{\sigma^2 + \tau^2}

where τ^2 denotes the between-trial heterogeneity, σ^2 denotes the common sampling error across trials, and σ^2 + τ^2 is the total variation in the meta-analysis.

Pearson’s Chi-Square Test:

The Pearson Chi-Square test is employed to ascertain if the variables are connected.

\chi^2 = \sum_i \frac{(o_i - n p_i)^2}{n p_i}  (4)
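As an illustration, the sketch below applies the Pearson chi-square test to a small hypothetical contingency table; the categories and counts are invented for the example.

```python
# Sketch: Pearson's chi-square test of association (Equation (4)).
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[30, 10],    # hypothetical group A: incident / no incident
                     [20, 40]])   # hypothetical group B
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```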

Odds Ratio: Odds ratios (OR) were constructed for the first three models: the basic predictor and compositional factors model, the contextual factors model, and the working conditions model. When the odds ratio was one, increasing the predictor’s value had no effect on the likelihood of developing occupational health issues; an OR greater than one indicated a greater chance of developing occupational health issues, and an OR less than one indicated a lower chance.

\text{odds} = \frac{P}{1 - P}  (5)

Model 1: Logistic Regression

\log(\text{odds}) = \text{logit}(P) = \ln\!\left(\frac{P}{1 - P}\right)  (6)

Logistic Curve

P = \frac{e^{a + bX}}{1 + e^{a + bX}} \quad \text{or} \quad P = \frac{1}{1 + e^{-(a + bX)}}  (7)

-2LogL

\chi^2 = -2LL_R - (-2LL_F) = -2\ln\!\left(\frac{\text{likelihood}_R}{\text{likelihood}_F}\right)  (8)

Model 2: Probit Regression: When the dependent variable is binary, as we assume in probit regression, the regression function is modeled by the cumulative standard normal distribution function Φ(·):

E(Y \mid X) = P(Y = 1 \mid X) = \Phi(\beta_0 + \beta_1 X)  (9)

Model 3: Logit Regression: The population logit regression function is:

P(Y = 1 \mid X_1, X_2, \ldots, X_k) = F(\beta_0 + \beta_1 X_1 + \cdots + \beta_k X_k)  (10)

= \frac{1}{1 + e^{-(\beta_0 + \beta_1 X_1 + \cdots + \beta_k X_k)}}
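A hedged sketch of Models 1-3 is shown below: a logistic and a probit regression fitted with statsmodels on simulated data, with odds ratios obtained by exponentiating the logit coefficients (Equation (5)). The predictor and outcome names are hypothetical.

```python
# Sketch: logistic (Models 1 and 3) and probit (Model 2) regressions of
# Equations (6)-(10) on simulated data; variable names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({"training": rng.normal(size=300),
                   "mgmt_commitment": rng.normal(size=300)})
linpred = -0.5 + 0.9 * df["training"] + 0.6 * df["mgmt_commitment"]
df["osh_adopted"] = rng.binomial(1, 1 / (1 + np.exp(-linpred)))   # binary outcome

X = sm.add_constant(df[["training", "mgmt_commitment"]])
logit_fit = sm.Logit(df["osh_adopted"], X).fit(disp=False)
probit_fit = sm.Probit(df["osh_adopted"], X).fit(disp=False)

print(np.exp(logit_fit.params))  # odds ratios: OR > 1 raises the odds of the outcome
print(probit_fit.params)         # probit coefficients (Equation (9))
```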

6.3. Validity and Reliability

Validity

In order to determine whether there was a causal relationship between the independent and dependent variables of the study, the researcher consulted with the university supervisor to assess the validity of the study instrument. To increase the validity of the questionnaire and encourage respondents to participate in the study, the researcher also administered it himself and explained the items to the respondents. This approach is consistent with Greener’s (2008) recommendation [42]. The researcher also conducted a principal factor analysis to independently confirm the ratings of these constructs. Factors were extracted using covariance matrices and Varimax rotations to aid in the interpretation of the initial factor patterns. The factorial validity of the scales was demonstrated by the factor loadings: each of the five items loaded on exactly one factor. To make the factor loading output easier to read, a minimum loading cutoff of 0.5 was applied.

7. Reliability

Prior to the final empirical analysis, the researcher conducted a pilot test with fifteen members of the target population who were omitted from the final sample of respondents. This test helped identify any inconsistencies among the research instruments, research questions, and techniques, which were then modified and adjusted. The most widely used scale reliability metric, Cronbach’s alpha, was used to assess data reliability, with 0.70 taken as the accepted threshold, as reported by Nunnally (1978) [43]. The test was run once more on the final data obtained from the selected respondents. The 26 items were assessed for overall internal consistency, and the results revealed a high alpha value. Since the alpha value exceeded the 0.70 threshold, the questionnaire can be considered reliable and consistent.

CR = \frac{\left(\sum_{i=1}^{n} FL_i\right)^2}{\left(\sum_{i=1}^{n} FL_i\right)^2 + \sum_{i=1}^{n} ME_i}  (11)

where FL_i is the standardized factor loading of measurement item i, n is the number of items in the factor, and ME_i is the measurement error of item i:

ME_i = 1 - FL_i^2  (12)
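For illustration, the sketch below computes Cronbach’s alpha from raw item responses and the composite reliability of Equations (11)-(12) from standardized loadings. The simulated responses and the example loadings are assumptions, not the study’s data.

```python
# Sketch: Cronbach's alpha and composite reliability (Equations (11)-(12)).
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: respondents x scale items (e.g., a block of Likert items)."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

def composite_reliability(loadings) -> float:
    fl = np.asarray(loadings)
    me = 1 - fl ** 2                      # ME_i = 1 - FL_i^2 (Equation (12))
    return fl.sum() ** 2 / (fl.sum() ** 2 + me.sum())

rng = np.random.default_rng(6)
latent = rng.normal(size=(100, 1))
items = pd.DataFrame(latent + rng.normal(scale=0.7, size=(100, 4)),
                     columns=["b1", "b2", "b3", "b4"])
print(round(cronbach_alpha(items), 2))
print(round(composite_reliability([0.72, 0.68, 0.75, 0.70]), 2))
```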

8. Informed Consent

Obtaining informed consent from the research sample requires protecting participant data confidentiality [44] [45]. Participants received a thorough explanation of the procedures that would be followed to safeguard the confidentiality of their information, as well as the expectations surrounding their involvement in the study. Before being permitted to take part in the study, participants had to complete and sign an Informed Consent Form, which recorded the date, their contact details, and the researcher’s name. Comprehensive details on protecting participants’ identities and privacy, as well as the eventual disposal of materials, are included in the Informed Consent Agreement [46] [47]. Throughout the study, all participant data were kept private, and only the researcher had access to them.

9. Data Analysis

Data cleaning is a crucial step in the data analysis process that replaces missing values and handles outliers to guarantee the dataset’s quality. In order to determine whether or not the data is representative, the analysis then looks at the response rate, a crucial parameter in survey-based research. Descriptive statistics are calculated when the foundation for data integrity has been established. Measures of central tendency, variability, correlations, averages, and standard deviations among variables are all included in these statistics. In this context, SPSS 26 was utilized. This offers a fundamental comprehension of the characteristics of the data.

A normality test, which verifies the assumption of a normal distribution required for most parametric statistical tests, was conducted. Common method bias was then examined to account for distortions that can result from the measurement procedure alone, ensuring that the results are unaffected by the data collection method. The foundation of this study is factor analysis, which branches into exploratory and confirmatory techniques (Figure 4). Exploratory factor analysis seeks the underlying structure in observed variables by grouping them according to patterns of correlation. Confirmatory factor analysis, which follows the logic of hypothesis testing, uses established criteria to determine whether the suggested factors do in fact account for the observed variables.

Figure 4. Data analysis method.
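The study itself uses SPSS 26, but as a language-agnostic illustration the sketch below runs an exploratory factor analysis with Varimax rotation on simulated questionnaire items and prints the loading matrix; the data and the assumed three-factor structure are inventions for the example.

```python
# Sketch: exploratory factor analysis with Varimax rotation on simulated items.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
latent = rng.normal(size=(300, 3))                      # three latent constructs
true_loadings = rng.uniform(0.5, 0.9, size=(3, 9))
items = pd.DataFrame(latent @ true_loadings + rng.normal(scale=0.5, size=(300, 9)),
                     columns=[f"item_{i+1}" for i in range(9)])

fa = FactorAnalysis(n_components=3, rotation="varimax")
fa.fit(items)
loadings = pd.DataFrame(fa.components_.T, index=items.columns)
print(loadings.round(2))   # loadings of at least 0.5 would typically be retained
```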

10. Mathematical Model Building

Developing a mathematical model for cross-cultural safety behavior management entails determining the connections between cultural elements, personal safety practices, and overall safety results. A collection of formulas and parameters that explain how cultural factors influence safety behavior and the consequent effect on safety performance can be used to develop this model.

10.1. Define Variables and Parameters

With components signifying values on cultural dimensions, let C be the vector of cultural dimensions:

C = [C_1, C_2, \ldots, C_n]  (13)

where:

C_1: Power Distance,

C_2: Uncertainty Avoidance,

C_3: Individualism vs. Collectivism,

C_4: Masculinity vs. Femininity,

C_5: Long-term Orientation.

Let B be the individual safety behavior vector, representing key aspects of safety behavior:

B = [B_1, B_2, \ldots, B_m]  (14)

where:

B_1: Compliance with safety protocols,

B_2: Risk reporting behavior,

B_3: Proactive risk identification,

B_4: Adherence to communication practices in emergencies.

10.2. Define Cultural Influence Function

Model the influence of cultural dimensions C on individual safety behaviors B with a linear or nonlinear function:

B = f(C) + \epsilon  (15)

where:

f(C) is the function mapping cultural factors to safety behaviors,

ε is a random error term that captures variability in individual responses.

A common approach is a linear model:

B_i = \sum_{j=1}^{n} \alpha_{ij} C_j + \beta_i + \epsilon_i  (16)

where:

α_{ij} are coefficients representing the influence of each cultural dimension C_j on behavior B_i,

β_i is an intercept term representing the baseline safety behavior.
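To make Equation (16) concrete, the sketch below simulates behaviors B from cultural dimension scores C with a known coefficient matrix and recovers the α and β parameters by least squares; all numbers are illustrative assumptions.

```python
# Sketch: simulating and re-estimating the linear cultural-influence model
# B_i = sum_j alpha_ij * C_j + beta_i + eps_i (Equation (16)).
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_dims, n_behaviors = 500, 5, 4
C = rng.normal(size=(n_obs, n_dims))                   # cultural dimension scores
alpha_true = rng.uniform(-0.5, 0.5, size=(n_dims, n_behaviors))
beta_true = rng.uniform(2.5, 3.5, size=n_behaviors)    # baseline behavior levels
B = C @ alpha_true + beta_true + rng.normal(scale=0.3, size=(n_obs, n_behaviors))

X = np.column_stack([np.ones(n_obs), C])               # intercept column plus C
coef, *_ = np.linalg.lstsq(X, B, rcond=None)
print("estimated beta (intercepts):", coef[0].round(2))
print("estimated alpha matrix:\n", coef[1:].round(2))
```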

10.3. Safety Behavior Outcomes

Define an outcome variable S representing overall safety performance, which is influenced by the individual behaviors B:

S = g(B) + \eta  (17)

where g(B) can be a weighted sum or a nonlinear function:

S = \sum_{i=1}^{m} \gamma_i B_i + \delta + \eta  (18)

where:

γ_i are weights for each safety behavior,

δ is a baseline term for overall safety performance,

η is a random error term accounting for other factors not captured in the model.

10.4. Multilevel Model for Cross-Cultural Variations

To account for differences in cultural influences across different regions or organizations, use a hierarchical or multilevel model:

B_{i,k} = \sum_{j=1}^{n} \alpha_{ij,k} C_{j,k} + \beta_{i,k} + \epsilon_{i,k}  (19)

where:

k denotes the specific cultural group or organization,

α_{ij,k} and β_{i,k} can vary by cultural group, allowing for different cultural impacts on safety behavior within each group.
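A hedged sketch of the multilevel idea in Equation (19) is given below as a random-intercept mixed model: the baseline level of a safety behavior is allowed to differ across cultural groups (here, simulated mine sites), while the cultural-dimension effect is estimated overall. The group labels and data are invented for the example.

```python
# Sketch: random-intercept multilevel model in the spirit of Equation (19).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_sites, n_per_site = 6, 80
site = np.repeat([f"site_{i}" for i in range(n_sites)], n_per_site)
site_baseline = np.repeat(rng.normal(3.0, 0.3, size=n_sites), n_per_site)
power_distance = rng.normal(size=n_sites * n_per_site)
behavior = site_baseline + 0.4 * power_distance + rng.normal(
    scale=0.3, size=n_sites * n_per_site)

df = pd.DataFrame({"behavior": behavior, "power_distance": power_distance,
                   "group": site})
fit = smf.mixedlm("behavior ~ power_distance", df, groups=df["group"]).fit()
print(fit.summary())   # fixed effect of power distance plus group-level variance
```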

10.5. Optimization for Safety Improvement

To optimize safety performance S by adjusting cultural and behavioral interventions:

Define an objective function to maximize S , subject to constraints on resources, training, and other factors.

\max_{C,B} S = \sum_{i=1}^{m} \gamma_i B_i + \delta  (20)

Subject to:

\sum_{j=1}^{n} \alpha_{ij} C_j + \beta_i \le \text{resource constraint}  (21)

Predictive Model for Safety Outcomes

A predictive model can be created by fitting the parameters α_{ij}, β_i, and γ_i to data, enabling estimation of safety performance S from known cultural dimensions and safety behavior profiles.

For instance:

If C and B values are known for a particular organization, we can predict S to assess potential safety outcomes.

Machine learning techniques like linear regression, logistic regression, or neural networks can be applied to fit and predict safety outcomes from this data.

11. Feedback and Learning Mechanism

Incorporate feedback by updating model parameters as more data becomes available, allowing the model to adapt to evolving cultural and behavioral factors:

Define an update rule, e.g., using gradient descent, to iteratively adjust parameters α, β, and γ based on observed outcomes.
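As a minimal illustration of such an update rule, the sketch below performs gradient-descent steps on the squared prediction error of Equation (18), refreshing the weights γ and baseline δ as observations accumulate; the data are simulated and the learning rate is an arbitrary assumption.

```python
# Sketch: gradient-descent updates for the gamma weights and delta baseline of
# Equation (18), S = B @ gamma + delta, using simulated observations.
import numpy as np

def gd_update(gamma, delta, B, S_obs, lr=0.01):
    """One gradient step on mean squared prediction error."""
    err = B @ gamma + delta - S_obs
    grad_gamma = 2 * B.T @ err / len(S_obs)
    grad_delta = 2 * err.mean()
    return gamma - lr * grad_gamma, delta - lr * grad_delta

rng = np.random.default_rng(5)
B = rng.normal(size=(100, 4))                                   # observed behaviors
S_obs = B @ np.array([0.4, 0.3, 0.2, 0.1]) + 1.0 + rng.normal(scale=0.1, size=100)

gamma, delta = np.zeros(4), 0.0
for _ in range(2000):
    gamma, delta = gd_update(gamma, delta, B, S_obs)
print(gamma.round(2), round(delta, 2))   # should approach the simulated values
```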

Structural Equation of the Study:

To mathematically model safety behavior management from a cross-cultural perspective, we need to consider factors such as cultural influences, individual behaviors, organizational policies, and external environmental factors. Below is a structured mathematical approach that can be adapted for this purpose.

11.1. Define Variables and Parameters

Individual-Level Factors:

B_i: Safety behavior of individual i (binary or continuous, depending on the modeling approach),

A_i: Awareness level of individual i regarding safety protocols,

C_i: Cultural influence on the safety behavior of individual i, which might vary depending on cultural background,

E_i: External factors impacting individual i (e.g., perceived risk in the environment),

P_i: Personal attitudes or perceptions of individual i towards safety.

Latent Variables and Observed Indicators

Let’s define:

η_1: Safety Behavior, the endogenous latent variable representing individuals’ safety behaviors,

ξ_1: Cultural Influence, the exogenous latent variable representing cross-cultural factors affecting safety behavior,

ξ_2: Organizational Support, the exogenous latent variable for organizational factors impacting safety,

ξ_3: Personal Attitudes, the exogenous latent variable for personal safety attitudes and perceptions.

11.2. Measurement Equations

For each latent variable, we model its relationship with observed indicators.

Safety Behavior (η_1):

B_1 = \lambda_{B_1} \eta_1 + \epsilon_{B_1}

B_2 = \lambda_{B_2} \eta_1 + \epsilon_{B_2}

B_3 = \lambda_{B_3} \eta_1 + \epsilon_{B_3}

where B_1, B_2, B_3 are observed indicators of safety behavior (e.g., adherence to safety protocols, use of protective equipment, risk-taking behaviors), λ_{B1}, λ_{B2}, λ_{B3} are factor loadings, and ε_{B1}, ε_{B2}, ε_{B3} are measurement errors.

Cultural Influence (ξ_1):

C_1 = \lambda_{C_1} \xi_1 + \delta_{C_1}

C_2 = \lambda_{C_2} \xi_1 + \delta_{C_2}

where C_1, C_2 represent indicators of cultural influence (e.g., collectivism vs. individualism, power distance), with λ_{C1}, λ_{C2} as loadings and δ_{C1}, δ_{C2} as errors.

Organizational Support (ξ_2):

O_1 = \lambda_{O_1} \xi_2 + \delta_{O_1}

O_2 = \lambda_{O_2} \xi_2 + \delta_{O_2}

where O_1, O_2 are indicators of organizational support (e.g., safety training, management involvement).

Personal Attitudes (ξ_3):

P_1 = \lambda_{P_1} \xi_3 + \delta_{P_1}

P_2 = \lambda_{P_2} \xi_3 + \delta_{P_2}

where P_1, P_2 represent personal attitudes towards safety.

11.3. Structural Equations

The structural equations model the causal relationships between the latent variables.

Safety Behavior (η_1) is influenced by Cultural Influence (ξ_1), Organizational Support (ξ_2), and Personal Attitudes (ξ_3):

\eta_1 = \gamma_{11} \xi_1 + \gamma_{12} \xi_2 + \gamma_{13} \xi_3 + \zeta_1

where γ_{11}, γ_{12}, γ_{13} are path coefficients representing the effects of each exogenous variable on safety behavior, and ζ_1 is the structural error term for η_1.

11.4. Covariances and Correlations

To capture cross-cultural interactions:

\mathrm{Cov}(\xi_1, \xi_2) = \phi_{12}

\mathrm{Cov}(\xi_1, \xi_3) = \phi_{13}

\mathrm{Cov}(\xi_2, \xi_3) = \phi_{23}

where φ_{12}, φ_{13}, φ_{23} are the covariances between the latent variables.
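One possible way to encode the measurement and structural equations above is lavaan-style model syntax; the sketch below uses the open-source semopy package, which the study does not mention and which is named here only as an assumption. The indicator column names (B1..B3, C1, C2, O1, O2, P1, P2) follow the notation above and would need to match the survey data.

```python
# Sketch: SEM specification of the model above in lavaan-style syntax (semopy).
# "=~" defines measurement models, "~" structural regressions, "~~" covariances.
from semopy import Model

model_desc = """
SafetyBehavior =~ B1 + B2 + B3
CulturalInfluence =~ C1 + C2
OrganizationalSupport =~ O1 + O2
PersonalAttitudes =~ P1 + P2
SafetyBehavior ~ CulturalInfluence + OrganizationalSupport + PersonalAttitudes
CulturalInfluence ~~ OrganizationalSupport
CulturalInfluence ~~ PersonalAttitudes
OrganizationalSupport ~~ PersonalAttitudes
"""

model = Model(model_desc)
# model.fit(data)          # data: DataFrame whose columns hold the indicators
# print(model.inspect())   # loadings (lambda), paths (gamma), covariances (phi)
```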

12. Mathematical Model of Open-Pit Mine Optimization

One of the most crucial phases in mine design is surface mining planning, which becomes a complex and demanding optimization problem for large mineral resources. In such situations, grouping mining blocks, the smallest mining units, into larger units is a popular strategy. We examine constrained block clustering using an integer nonlinear programming model in which the size and shape of individual clusters lie within a preset range and the blocks inside a cluster are physically connected, with the objective of minimizing degree deviations. We then offer a population iterated local search strategy to solve this nonlinear model and obtain a close-to-optimal solution. The suggested model and solution methodology were applied to a case study of a 40,947-block gold-silver deposit. By grouping the mining blocks into 1966 clusters, the mining planner can handle the production planning problem faster.

Mathematical Formulation

Mining planning has shown interest in mining block aggregation, which is an effort to combine smaller mining blocks into larger mining units. Reducing the size of the surface production planning problem is the primary goal of mining block aggregation. The following factors make the aggregation approach a good choice for planning: 1) more practical schedules are produced; 2) the planning problem can be solved faster; 3) it is easily adaptable to include case-specific functions; and 4) it is simple to implement [48]. Figure 5 illustrates mining block clustering for a two-dimensional block model. The 16-block original block model is shown in Figure 5(a), and the aggregated block model with four clusters is shown in Figure 5(b). The blocks that are grouped together should have comparable grade and rock quality. The different clusters should not be separated from one another, and both the horizontal and vertical extensions must adhere to the mining plan’s restrictions. Above all, vertical clustering must be consistent with the stability of the mining faces. The mathematical limits that apply are described in further depth below.

Figure 5. Clustering of blocks, taking a 45-degree wall slope into account. Color versions are available online.

An integer nonlinear programming (INLP) model has been created in this work to cluster mining blocks into more substantial units with minimal technical limitations. The clusters that are created as a result are geometrically continuous and adhere to predetermined dimensions for size and shape. The objective function, decision variables, and restrictions that are part of the suggested mathematical model are covered in the sections that follow.

Sets and Indexes:

i: block index (i ∈ I),

n: cluster index (n ∈ N),

o: ore block index (o ∈ O),

w: waste block index (w ∈ W),

F_i: set of priority blocks for block i; this set contains blocks that should be mined before block i is reached,

K: set of blocks between block i and block j in each bench (horizontal level).

Parameters:

g_i: degree of block i,

z_i: height of block i,

b_i: tonnage of block i,

M: maximum cluster weight,

L: maximum vertical cluster extension,

y_i ∈ {0,1}: binary indicator of whether block i is ore or waste; it is equal to one if block i is ore, and zero otherwise.

Decision variables:

X_i^n ∈ {0,1}: represents the assignment of block i to cluster n; X_i^n is equal to one if block i is in cluster n, and zero otherwise.

q_{ij}^n ∈ {0,1}: binary variable representing whether blocks i and j are adjacent in cluster n; equal to one if adjacent, and zero otherwise.

Objective function: In general, combining mining blocks into bigger units reduces the resolution of block specifications (such as mineral grade). As a result, a number of issues arise throughout the mining schedule, which has a big impact on how well the mining plans are produced. Therefore, the objective function is defined as the reduction of the sum of within-cluster degree variance by taking each block’s degree into account as an input to the model. The defined objective function is as follows:

\text{Minimize } \sum_{i=1}^{I} \sum_{n=1}^{N} \left[ g_i - \frac{\sum_{i=1}^{I} g_i \, x_i^n}{\sum_{i=1}^{I} x_i^n} \right]^2 \times X_i^n  (22)

Constraints: Several restrictions apply to the developed mathematical clustering model in order to produce clusters that are satisfactory from an operational perspective.

\sum_{n=1}^{N} X_i^n = 1 \quad \forall i \in I  (23)

\sum_{r=1}^{R} X_j^r - X_i^n \ge 0 \quad \forall n \in N;\; i, j \in I \mid j \in F_i  (24)

X_i^n + X_j^n \le 1 + q_{ij}^n \quad \forall n \in N;\; i, j \in I \mid i \ne j  (25)

X_k^n \ge q_{ij}^n \quad \forall n \in N;\; k \in K;\; i, j \in I  (26)

X_i^n z_i - X_j^n z_j \le L \quad \forall n \in N;\; i, j \in I  (27)

\sum_{i=1}^{I} X_i^n b_i \le M \quad \forall n \in N  (28)

\left(\sum_{i=1}^{I} X_i^n \, y_i\right)\left(\sum_{i=1}^{I} X_i^n \,(1 - y_i)\right) = 0 \quad \forall n \in N  (29)

X_i^n \in \{0,1\} \quad \forall n \in N;\; i \in I  (30)

Block i should be assigned to one and only one cluster in accordance with Equation (23). Because of the wall slope restriction, the second constraint (Equation (24)) sets the priority relations for the order of cluster extraction. It must be enforced in accordance with the open pit production plan to ensure that the slope of the pit walls stays below a set threshold. Based on the geotechnical characteristics of the mine, this threshold, also known as the safety slope, is established. Furthermore, the extraction order must respect availability constraints.

In other words, to allow access to the material, the units at the upper level of the pit must be excavated before those at the lower level [48]. Regarding the model’s priority requirement, block j, which is a priority block of block i and a member of F_i, should be positioned either in the cluster n that contains block i or in one of its antecedents. Stated differently, block i is assigned to cluster n only if every block j ∈ F_i is assigned to the same cluster (cluster n) or to its predecessors. Equations (25) and (26) ensure that the geometric continuity of the clusters is satisfied.

Under these requirements, the set of blocks (set K) lying between non-adjacent blocks (i and j) that are at the same level and in the same cluster must be assigned to that cluster. As a result, the clusters that are produced are geometrically continuous, with rounded shapes. This constraint represents a significant advancement over earlier studies and is applicable in both horizontal and vertical orientations; to obtain a workable three-dimensional output, the continuity constraint is applied in all three directions.

A two-dimensional block model with formed clusters having a continuous shape is shown in Figure 6(a). Blocks that have the same color are put together to form clusters, since each color denotes a cluster. Non-adjacent blocks 7 and 9 can be assigned to the same cluster, as the continuity requirement is defined in the model, because block 8, which is in the same cluster and is situated between them, is regarded as a member of set K. A block model where the continuity condition is not satisfied is shown in Figure 6(b): the non-adjacent blocks 7 and 10 are in the same cluster, but the blocks positioned between them, namely blocks 8 and 9, are in separate clusters. As a result, the cluster that results is geometrically disconnected.

The maximum vertical length of clusters is limited by Equation (27). This constraint states that a cluster’s vertical dimension cannot exceed the maximum value L. Mining operational considerations define L as a positive, non-zero integer; a cluster whose vertical dimension exceeds L is operationally impractical to mine. The clusters’ maximum tonnage is governed by Equation (28): the total amount of material collected from each cluster cannot exceed M tons. The maximum capacity of the aggregated blocks must be taken into account due to the mine’s limited loading and unloading equipment. Equation (29) takes the destination of the clustered blocks into account. Clusters must have a predetermined destination because mining planning uses them, so blocks serving comparable purposes (such as processing plants or waste dumps) must be grouped in the same cluster. Stated differently, only ore blocks belong in ore clusters, and only waste blocks belong in waste clusters. The assignment of block i to cluster n is represented by the decision variable described by Equation (30): X_i^n equals one if block i is in cluster n, and zero otherwise.

Figure 6. (a) Block model with continuous clustering; (b) block model with non-continuous clustering.
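As a small illustration of the objective in Equation (22), the sketch below evaluates the total within-cluster squared deviation of block grades (the block degrees g_i above) for a candidate assignment; this is the kind of routine a local search would call repeatedly. The grades and the assignment are invented for the example.

```python
# Sketch: evaluating the within-cluster grade-deviation objective (Equation (22))
# for a candidate block-to-cluster assignment; values are illustrative.
import numpy as np

def within_cluster_deviation(grades, assignment):
    """grades[i]: grade (degree) of block i; assignment[i]: cluster of block i."""
    grades = np.asarray(grades, dtype=float)
    assignment = np.asarray(assignment)
    total = 0.0
    for n in np.unique(assignment):
        g = grades[assignment == n]
        total += ((g - g.mean()) ** 2).sum()   # squared deviation from cluster mean
    return total

grades = [1.2, 1.1, 1.3, 4.0, 4.2, 3.9]        # six blocks
assignment = [0, 0, 0, 1, 1, 1]                # two candidate clusters
print(within_cluster_deviation(grades, assignment))
```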

The above mathematical model has a nonlinear objective function, the number of clusters is not known ahead of time, and the number of binary decision variables grows as the problem size increases, all of which make it a difficult problem to solve. Many methods have been developed in the field of metaheuristics to solve difficult optimization problems [49]. These metaheuristics each contribute to an effective search procedure, and they are not inherently antagonistic; merging the mechanisms of two or more metaheuristics offers the chance to create a new algorithm [50]. A population iterated local search (PILS) algorithm was therefore created and applied in our study to address actual cases of the problem. Thierens (2004) presents the general idea of the PILS algorithm [50]. A modified version of the PILS algorithm is described in this paper based on the features of the clustering problem. By utilizing the information present in the population of neighboring solutions, PILS aims to increase the effectiveness of the local search (LS) algorithm. PILS is the population extension of the ILS metaheuristic. ILS uses a local search procedure to explore the neighborhood of the current solution and find a local optimum; upon reaching a local optimum, ILS perturbs the generated solution and relaunches the search from the new solution. The perturbation ought to be large enough to prevent the local search from returning to the same local optimum during the subsequent iteration, but not so large that the search behaves like a multi-start local search algorithm [51]. By utilizing the population idea, PILS constrains the ILS method to investigating low-dimensional neighborhoods, so the desired outcome can be attained faster [50]. The steps of the suggested PILS algorithm are as follows.

13. Local Search Heuristics

After a constructive heuristic yields the initial solution (S0), a local search method finds a locally optimal solution (s). Starting from the initial solution, blocks are moved between clusters to explore neighboring solutions.

13.1. Population Generation

In the first phase, every neighboring solution whose objective function value improves on that of the initial solution (S0) is identified. Among these, the solutions that improve the objective function the most are selected. The result is a population of selected solutions.

Perturbation: the perturbation is applied to the selected solutions. Each perturbed solution is then improved with the local search method, and the best result replaces the corresponding selected solution. In addition, the selected clusters are identified as the groups whose blocks change membership during the perturbation.

Consequently, neighboring solutions are generated only by moving blocks of the selected clusters. This restriction shrinks the search space, so a near-optimal solution can be reached in a reasonable amount of computing time. The perturbation process is repeated for each selected solution, and the best clustering scheme is chosen. The PILS algorithm is shown in Figure 7.
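As a small illustration of how the search space is restricted, the hypothetical helper below identifies the clusters whose blocks changed membership during a perturbation; only these clusters would be explored by the subsequent local search. The block-to-cluster mapping used as the solution representation is an assumption.

```python
# Sketch: identify the "selected clusters" touched by a perturbation.
# before/after map each block id to its cluster id (assumed representation).
def changed_clusters(before, after):
    """Return the ids of clusters whose membership changed."""
    touched = set()
    for block, old_cluster in before.items():
        new_cluster = after[block]
        if new_cluster != old_cluster:
            touched.update({old_cluster, new_cluster})
    return touched

before = {1: 'A', 2: 'A', 3: 'B', 4: 'B', 5: 'C'}
after  = {1: 'A', 2: 'B', 3: 'B', 4: 'B', 5: 'C'}
print(changed_clusters(before, after))  # {'A', 'B'}; cluster C is untouched
```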

Figure 7. Flowchart of the PILS algorithm.

13.2. Initial Cluster Generation

To create the initial block clusters, a constructive clustering method was developed. Aggregation starts at one corner of the mine block model and ends when every block has been aggregated. Figure 8 shows the flow diagram for initial cluster generation.
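A minimal sketch of such a constructive procedure is shown below: aggregation starts from one corner of a two-dimensional grid and greedily grows clusters from adjacent, same-destination blocks until a size limit is reached. The grid layout, the size limit, and the destination labels are illustrative and do not reproduce the flow diagram of Figure 8 exactly.

```python
# Sketch of a constructive initial clustering on an assumed 2D grid.
def initial_clusters(grid, max_size):
    """grid: dict mapping (x, y) -> destination label ('ore' or 'waste')."""
    unassigned = sorted(grid)            # scan from the corner (0, 0) onward
    clusters = []
    assigned = set()
    for seed in unassigned:
        if seed in assigned:
            continue
        cluster = [seed]
        assigned.add(seed)
        frontier = [seed]
        # Grow the cluster from adjacent blocks with the same destination.
        while frontier and len(cluster) < max_size:
            x, y = frontier.pop()
            for nb in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
                if (nb in grid and nb not in assigned
                        and grid[nb] == grid[seed]
                        and len(cluster) < max_size):
                    cluster.append(nb)
                    assigned.add(nb)
                    frontier.append(nb)
        clusters.append(cluster)
    return clusters
```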

13.3. Creating Neighborhood Solutions

In this step, new clustering schemes are generated so that a better solution can be selected. All blocks that are adjacent to cluster n but belong to other clusters are identified and collected in the set h_n. A member of h_n is then removed from its original cluster and reassigned, producing new aggregated blocks based on the original clustering scheme. Next, the model's constraints are checked for the newly created clusters; if all constraints are satisfied, the resulting clusters are accepted as a new solution. An example of new cluster creation is shown in Figure 9. In the initial scenario (Figure 9(a)), block 6 (in the blue cluster) is adjacent to block 3 (in the red cluster). In the next step, block 6 leaves the blue cluster and joins the red cluster, generating new clusters (Figure 9(b)). The new clustering scheme is accepted if the resulting clusters satisfy all of the clustering model's constraints (Figure 9(c)).
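The following sketch illustrates this neighborhood move under the same assumptions as the earlier examples: the set h_n of blocks adjacent to cluster n is identified, one block is tentatively reassigned to cluster n, and the move is kept only if a feasibility check, standing in for the clustering constraints (Equations (27)-(29) and the continuity requirement), still holds.

```python
# Sketch of generating a neighboring solution by moving one boundary block.
# assignment: dict block id -> cluster id; coords: dict block id -> (x, y).
def neighbors_of_cluster(assignment, coords, n):
    """Blocks adjacent to cluster n but currently in other clusters (set h_n)."""
    in_n = {b for b, c in assignment.items() if c == n}
    h_n = set()
    for b in in_n:
        x, y = coords[b]
        for other, c in assignment.items():
            if c != n and abs(coords[other][0] - x) + abs(coords[other][1] - y) == 1:
                h_n.add(other)
    return h_n

def try_move(assignment, coords, n, block, is_feasible):
    """Reassign `block` to cluster n; return the new assignment, or None if
    the feasibility check (placeholder for the model constraints) fails."""
    candidate = dict(assignment)
    candidate[block] = n
    return candidate if is_feasible(candidate) else None
```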

Figure 8. Flow diagram for generating the initial clusters.

Figure 9. Forming new clusters through block movement: (a) the initial clusters are established; (b) a block is moved to an adjacent cluster; (c) the resulting clusters are accepted once all constraints are satisfied.

14. Conclusions

This paper has presented the methodology developed in Chapter 4 of the thesis, covering the research approach, design, and data analysis procedures. The study employs a quantitative research approach, supported by a survey design and a mathematical model, to explore the relationship between cultural factors, individual safety behaviors, and organizational safety outcomes. The findings of this study are discussed in the subsequent chapter of the thesis.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Leavy, P. (2023) Research Design: Quantitative, Qualitative, Mixed Methods, Arts-Based, and Community-Based Participatory Research Approaches. Guilford Press.
[2] Taherdoost, H. (2022) Research Methodologies: An Overview. International Journal of Academic Research in Management, 11, 10-27.
[3] Kotronoulas, G., Miguel, S., Dowling, M., Fernández-Ortega, P., Colomer-Lahiguera, S., Bağçivan, G., et al. (2023) An Overview of the Fundamentals of Data Management, Analysis, and Interpretation in Quantitative Research. Seminars in Oncology Nursing, 39, Article 151398.
https://doi.org/10.1016/j.soncn.2023.151398
[4] Sardana, S. and Singhania, V. (2023) Mapping the Field of Research on Entrepreneurial Success: A Bibliometric Study and Future Research Agenda. International Journal of Business Science and Applied Management, 18, 53-79.
https://doi.org/10.69864/ijbsam.18-2.176
[5] Mulisa, F. (2022) Quantitative Research in Education. Educational Research Review, 17, 45-59.
[6] Wallwey, C. and Kajfez, R. (2023) Quantitative Methods in Engineering Education. Journal of Engineering Education, 112, 23-40.
[7] Khoa, B.T., Hung, B.P. and Brahmi, M.H. (2023) Qualitative Research in Social Sciences: Data Collection, Data Analysis and Report Writing. International Journal of Public Sector Performance Management, 12, 187-209.
https://doi.org/10.1504/ijpspm.2023.132247
[8] Fischer, C.T. and Guzel, A. (2023) Interpretive Approaches in Qualitative Research. Qualitative Psychology, 10, 78-92.
[9] Aman Mezmir, E. (2020) Qualitative Data Analysis: An Overview of Data Reduction, Data Display, and Interpretation. Research on Humanities and Social Sciences, 10, 15-27.
[10] Stoecker, R. and Avila, E. (2021) Mixed Methods in Community-Based Research. Community Development Journal, 56, 456-470.
[11] Williams, C. (2007) Mixed Methods in Social Research. Social Science Research, 36, 456-472.
[12] Creswell, J.W. (2016) Qualitative Inquiry and Research Design: Choosing among Five Approaches. Sage Publications.
[13] Myers, J.L., Well, A.D. and Lorch Jr., R.F. (2013) Research Design and Statistical Analysis. Routledge.
[14] Wisenthige, K. (2023) Research Design. In: Saliya, C.A., Ed., Social Research Methodology and Publishing Results: A Guide to Non-Native English Speakers, IGI Global, 74-93.
https://doi.org/10.4018/978-1-6684-6859-3.ch006
[15] Ranganathan, P. and Aggarwal, R. (2018) Understanding the Properties of Diagnostic Tests—Part 2: Likelihood Ratios. Perspectives in Clinical Research, 9, 99-102.
https://doi.org/10.4103/picr.picr_41_18
[16] Rahi, S. (2017) Research Design and Methods: A Systematic Review of Research Paradigms, Sampling Issues and Instruments Development. International Journal of Economics & Management Sciences, 6, 1-5.
[17] Berndt, A.E. (2020) Sampling Methods. Journal of Human Lactation, 36, 224-226.
https://doi.org/10.1177/0890334420906850
[18] Golzar, J., Noor, S. and Tajik, O. (2022) Convenience Sampling. International Journal of Education & Language Studies, 1, 72-77.
[19] Stratton, S.J. (2023) Population Sampling: Probability and Non-Probability Techniques. Prehospital and Disaster Medicine, 38, 147-148.
https://doi.org/10.1017/s1049023x23000304
[20] Jager, J., Xia, Y., Putnick, D.L. and Bornstein, M.H. (2025) Improving Generalizability of Developmental Research through Increased Use of Homogeneous Convenience Samples: A Monte Carlo Simulation. Developmental Psychology.
https://doi.org/10.1037/dev0001890
[21] Zickar, M.J. and Keith, M.G. (2023) Innovations in Sampling: Improving the Appropriateness and Quality of Samples in Organizational Research. Annual Review of Organizational Psychology and Organizational Behavior, 10, 315-337.
https://doi.org/10.1146/annurev-orgpsych-120920-052946
[22] Penn, J.M., Petrolia, D.R. and Fannin, J.M. (2023) Hypothetical Bias Mitigation in Representative and Convenience Samples. Applied Economic Perspectives and Policy, 45, 721-743.
https://doi.org/10.1002/aepp.13374
[23] Weigold, A., Weigold, I.K., Jang, M. and Thornton, E.M. (2021) College Students’ and Mechanical Turk Workers’ Environmental Factors While Completing Online Surveys. Quality & Quantity, 56, 2589-2612.
https://doi.org/10.1007/s11135-021-01237-0
[24] Daikeler, J., Bošnjak, M. and Lozar Manfreda, K. (2019) Web versus Other Survey Modes: An Updated and Extended Meta-Analysis Comparing Response Rates. Journal of Survey Statistics and Methodology, 8, 513-539.
https://doi.org/10.1093/jssam/smz008
[25] Lehdonvirta, V., Oksanen, A., Räsänen, P. and Blank, G. (2020) Social Media, Web, and Panel Surveys: Using Non‐Probability Samples in Social and Policy Research. Policy & Internet, 13, 134-155.
https://doi.org/10.1002/poi3.238
[26] Griffin, M.A. and Neal, A. (2000) Perceptions of Safety at Work: A Framework for Linking Safety Climate to Safety Performance, Knowledge, and Motivation. Journal of Occupational Health Psychology, 5, 347-358.
https://doi.org/10.1037//1076-8998.5.3.347
[27] Ochoa Pacheco, P., Coello-Montecel, D. and Andrei, D.M. (2022) Validation of the Spanish Version of the Neal, Griffin and Hart Safety Behavior Scale. International Journal of Occupational Safety and Ergonomics, 29, 1402-1415.
https://doi.org/10.1080/10803548.2022.2131277
[28] Bensonch, C., Argyropoulos, C.D., Dimopoulos, C., Varianou Mikellidou, C. and Boustras, G. (2022) Analysis of Safety Climate Factors and Safety Compliance Relationships in the Oil and Gas Industry. Safety Science, 151, Article 105744.
https://doi.org/10.1016/j.ssci.2022.105744
[29] Vinodkumar, M.N. and Bhasi, M. (2010) Safety Management Practices and Safety Behaviour: Assessing the Mediating Role of Safety Knowledge and Motivation. Accident Analysis & Prevention, 42, 2082-2093.
https://doi.org/10.1016/j.aap.2010.06.021
[30] In, J. (2017) Introduction of a Pilot Study. Korean Journal of Anesthesiology, 70, 601-605.
https://doi.org/10.4097/kjae.2017.70.6.601
[31] Leedy, P.D. and Ormrod, J.E. (2015) Practical Research. Pearson Education Limited.
[32] Hajian-Tilaki, K. (2014) Sample Size Estimation in Diagnostic Test Studies of Biomedical Informatics. Journal of Biomedical Informatics, 48, 193-204.
https://doi.org/10.1016/j.jbi.2014.02.013
[33] Cohen, E. (1988) Traditions in the Qualitative Sociology of Tourism. Annals of Tourism Research, 15, 29-46.
https://doi.org/10.1016/0160-7383(88)90069-2
[34] Siegel, S. and Castellan, N.J. (1988) The Case of K Related Samples. In: Siegel, S. and Castellan Jr., N.J., Eds., Nonparametric Statistics for Behavioral Sciences, McGraw-Hill, 170-174.
[35] Buchner, A., Erdfelder, E., Faul, F. and Lang, A.G. (2014) G* Power 3.1 Manual.
https://www.psychologie.hhu.de/fileadmin/redaktion/Fakultaeten/Mathematisch-Naturwissenschaftliche_Fakultaet/Psychologie/AAP/gpower/GPowerManual.pdf
[36] Greenland, S., Senn, S.J., Rothman, K.J., Carlin, J.B., Poole, C., Goodman, S.N., et al. (2016) Statistical Tests, P Values, Confidence Intervals, and Power: A Guide to Misinterpretations. European Journal of Epidemiology, 31, 337-350.
https://doi.org/10.1007/s10654-016-0149-3
[37] Sauro, J. (2015) SUPR-Q: A Comprehensive Measure of the Quality of the Website User Experience. Journal of Usability Studies, 10, 68-86.
[38] Tan, F., Song, J., Wang, C., Fan, Y. and Dai, H. (2019) Titanium Clasp Fabricated by Selective Laser Melting, CNC Milling, and Conventional Casting: A Comparative in Vitro Study. Journal of Prosthodontic Research, 63, 58-65.
https://doi.org/10.1016/j.jpor.2018.08.002
[39] Franzese, M. and Iuliano, A. (2019) Correlation Analysis. In: Ranganathan, S., Gribskov, M., Nakai, K. and Schönbach, C., Eds., Encyclopedia of Bioinformatics and Computational Biology, Elsevier, 706-721.
https://doi.org/10.1016/b978-0-12-809633-8.20358-0
[40] Zou, K.H., Tuncali, K. and Silverman, S.G. (2003) Correlation and Simple Linear Regression. Radiology, 227, 617-628.
https://doi.org/10.1148/radiol.2273011499
[41] Cochran, W.G. (1963) Methodological Problems in the Study of Human Populations. Annals of the New York Academy of Sciences, 107, 476-489.
https://doi.org/10.1111/j.1749-6632.1963.tb13293.x
[42] Greener, S. (2008) Business Research Methods. BookBoon.
[43] Nunnally, J.C. (1978) Psychometric Theory. 2nd Edition, McGraw-Hill.
[44] Arellano, L., Alcubilla, P. and Leguízamo, L. (2023) Ethical Considerations in Informed Consent. In: Ethics - Scientific Research, Ethical Issues, Artificial Intelligence and Education, IntechOpen, 1.
https://doi.org/10.5772/intechopen.1001319
[45] O’Sullivan, L., Feeney, L., Crowley, R.K., Sukumar, P., McAuliffe, E. and Doran, P. (2021) An Evaluation of the Process of Informed Consent: Views from Research Participants and Staff. Trials, 22, Article No. 544.
https://doi.org/10.1186/s13063-021-05493-1
[46] Gupta, S., Kamboj, S. and Bag, S. (2023) Role of Risks in the Development of Responsible Artificial Intelligence in the Digital Healthcare Domain. Information Systems Frontiers, 25, 2257-2274.
https://doi.org/10.1007/s10796-021-10174-0
[47] Xu, A., Baysari, M.T., Stocker, S.L., Leow, L.J., Day, R.O. and Carland, J.E. (2020) Researchers’ Views On, and Experiences With, the Requirement to Obtain Informed Consent in Research Involving Human Participants: A Qualitative Study. BMC Medical Ethics, 21, Article No. 93.
https://doi.org/10.1186/s12910-020-00538-7
[48] Tabesh, M. (2015) Aggregation and Mathematical Programming for Long-Term Open Pit Production Planning. PhD Thesis, University of Alberta.
[49] Osman, I.H. and Kelly, J.P. (1997) Meta-Heuristics Theory and Applications. Journal of the Operational Research Society, 48, 657-657.
https://doi.org/10.1057/palgrave.jors.2600781
[50] Thierens, D. (2004) Population-Based Iterated Local Search: Restricting Neighborhood Search by Crossover. Genetic and Evolutionary Computation - GECCO 2004, Seattle, 26-30 June 2004, 234-245.
https://doi.org/10.1007/978-3-540-24855-2_21
[51] Lourenço, H.R., Martin, O. and Stützle, T. (2001) A Beginner’s Introduction to Iterated Local Search. Proceeding of the 4th Metaheuristics International Conference, Vol. 2, Porto, 16 July 2001, 1-6.

Copyright © 2025 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.