The Acceptance and Use of Computer Based Assessment in Higher Education

Abstract

Computer Based Assessment (CBA) has become a very popular method for evaluating students’ performance at the university level. This research aims to examine the constructs that affect students’ intention to use CBA. The proposed model is based on previous technology acceptance models such as the Technology Acceptance Model (TAM), the Theory of Planned Behavior (TPB), and the Unified Theory of Acceptance and Use of Technology (UTAUT). The proposed CBA model is based on nine variables: Goal Expectancy, Social Influence, Facilitating Conditions, Computer Self Efficacy, Content, Perceived Usefulness, Perceived Ease of Use, Perceived Playfulness, and Behavioral Intention. Data were collected using a survey questionnaire from 546 participants who had used the computer based exam system at the University of Jordan. Results indicate that Perceived Playfulness has a direct effect on CBA use, while Perceived Ease of Use, Perceived Usefulness, Computer Self Efficacy, Social Influence, Facilitating Conditions, Content, and Goal Expectancy have only indirect effects. The study concludes that a system is more likely to be used by students if it is playful, and a CBA is more likely to be playful when it is easy to use and useful. Finally, the studied acceptance model for computer based assessment explains only approximately 10% of the variance in behavioral intention to use CBA.

Share and Cite:

Maqableh, M., Masa’deh, R. and Mohammed, A. (2015) The Acceptance and Use of Computer Based Assessment in Higher Education. Journal of Software Engineering and Applications, 8, 557-574. doi: 10.4236/jsea.2015.810053.

1. Introduction

Student assessment is an essential element in any learning model. Instructors evaluate students and learning outcomes to direct and motivate them based on their achievement [1] [2] . There are two main types of student assessment: summative and formative. Summative assessment aims to provide a sum-up of the teaching and learning, whereas formative assessment aims to provide feedback on the progress of students and instructors [3] . Moreover, there are two main types of assessment systems: Paper Based Systems (PBS) and Computer Based Systems (CBS). PBS is gradually being displaced from learning practices because of the continuous dissemination of Information and Communications Technology (ICT) [2] . At the same time, CBS is replacing PBS due to the popularity of ICT. Students prefer CBS over PBS because they believe it is more exciting, interactive, secure, precise, smooth, and credible [4] .

Communications and computer technologies have developed very quickly, becoming widespread and used for many purposes [5] [6] . Information and Communications Technology (ICT) is used intensively in several aspects of higher education, such as student evaluation and electronic learning [7] [8] . Computer Based Assessment (CBA) systems are implemented using ICT tools and applications [4] . CBA is considered a very important tool for evaluating students at a specific point in time and for helping learners identify the gap between the required standard and their actual level [7] . Currently, CBA is being adopted by many institutions, replacing traditional paper-and-pen assessment of students [9] . Consequently, secondary and higher education institutions are evaluating students’ performance and achievement using CBA systems very intensively. CBA has several competitive advantages, such as security, cost, and accuracy. Moreover, it reduces the effort and time required for exam generation, scheduling, marking, and results recording and analysis [2] [10] . CBA systems are provided by several international vendors from all over the world and have been implemented to support various technologies, educational environments, and cultures.

CBA has become a main part of electronic learning and assessment systems in higher education institutions. Therefore, it is essential to investigate the factors that affect students’ attitudes toward using CBA in order to implement CBA systems successfully. This research aims to examine the factors that influence students’ attitudes toward using a CBA system in Jordan. Recent studies have shown that Perceived Usefulness, Perceived Ease of Use, Perceived Playfulness, and Perceived Importance each play a significant role in Behavioral Intention to use CBA [2] [4] [7] [11] -[15] .

The paper is organized as follows. In Section 2, a review of the theoretical background of CBA is presented. Section 3 discusses the hypotheses development. Section 4 explains the research methodology in detail. In Section 5, the research results are presented. Section 6 discusses the results of the collected data based on the proposed model. Finally, conclusions are drawn in Section 7.

2. Theoretical Background

Computer based assessment and the factors that influence students’ behavioral intention have been studied intensively in the literature. Many researchers focus on the effect of influencing factors such as Perceived Usefulness, Perceived Ease of Use, and Perceived Playfulness [4] [7] [10] [14] [16] -[32] . Thelwall (2000) surveys the reasons for using computer assessment, focusing on randomly generated open-access tests [16] . Students were allowed to practice in their own free time before taking the same test under real conditions. The research concludes that random-based tests have major advantages over fixed ones and demonstrates the flexibility of CBA as a learning tool.

In 2002, Jantz et al. measured and examined the effectiveness of Interactive Multimedia (IMM) using a quasi-experimental pretest/post-test design [17] . Results showed significant increases in knowledge, attitude, and total scores between pre- and post-tests for the intervention participants, whose gains were greater than those of the control group. This study supports the use of IMM in nutrition education and was considered a basis for the continued development of computer-based assessments. Mayer (2002) studied computer-based assessment of problem solving with reference to Bloom’s taxonomy for learning, teaching, and assessing [33] . The study examines the cognitive consequences of participating in an after-school computer club. He demonstrates that computer-based assessments of problem-solving transfer can be produced in different ways, such as assessment of computer literacy (near transfer) and assessment of problem-solving strategies for new games (far transfer). The study highlights the usefulness of the taxonomy in creating assessments that cover the range of problem-solving transfer when the goal is to measure problem-solving transfer.

Later on, a Web-based Educational Assessment System (WEAS) based on Bloom’s theory was introduced and tested on science courses [18] . The system facilitates Human-Computer Interaction (HCI) techniques between students and teachers. Gikandi et al. reviewed 18 key empirical studies on online assessment in higher education published between 2004 and 2011 [34] . The review focuses on the application of formative assessment within blended and online contexts. Its main finding was that effective online formative assessment enhances learner engagement and enriches the learning experience.

Terzis and Economides (2011) built a model to investigate students’ intention to use Computer Based Assessment (CBA), called the Computer Based Assessment Acceptance Model (CBAAM) [4] . The model was built upon previous acceptance models such as the Technology Acceptance Model (TAM), the Theory of Planned Behavior (TPB), and the Unified Theory of Acceptance and Use of Technology (UTAUT), adding two variables (Content and Goal Expectancy) to the existing measurement variables. A survey questionnaire was administered to a sample of 173 participants enrolled in an introductory informatics course to collect test data. Findings showed that Perceived Ease of Use and Perceived Playfulness directly affected CBA use, while the other variables had indirect effects. Terzis et al. (2011) extended this model (CBAAM) by considering gender in the measurements [35] . The results showed that both genders were motivated to use CBA when it was playful and had clear content relevant to the course.

Alquraan (2012) investigated the learning assessment methods used in higher education. A sample of 736 undergraduate students from four well-known universities in Jordan participated in the investigation [21] . The results showed that the most commonly used assessment method is the paper-and-pencil test; some scientific and medical colleges used other assessments but still relied on paper-and-pencil tests. Moreover, the study suggests using modern assessment tools and methods to move beyond traditionalism in higher education assessment. Another research group conducted a study on undergraduate chemistry students at the University of Ilorin, Nigeria [22] . A sample of 48 chemistry students was evaluated using a Computer Based Test (CBT), and a questionnaire was administered for the investigation. Findings showed that 95.8% of the students were satisfied with using CBT, while 75% complained about computer-related anxiety and about 29.2% did not fully accept the testing mode. The analysis also showed clear satisfaction with the immediate scoring, speed, and transparency of marking.

In 2012, Terzis et al. conducted a study to identify how personality affects technology acceptance, combining CBAAM with the Big Five Inventory (BFI) to analyze the effect of the five personality factors on CBA acceptance [14] . A survey questionnaire with BFI questions was administered to 117 participants. Results indicated a negative effect of Neuroticism on Perceived Usefulness and Goal Expectancy. In addition, Social Influence and Perceived Ease of Use were determined by Agreeableness, and Perceived Importance was explained by Extroversion and Openness.

A dynamic CBA system for a fluid mechanics course was developed, and assessment data were collected before and after applying the system [36] . Performance improvement was measured by the ratio of correctly answered questions on the Fundamentals of Engineering (FE) Exam relative to the national average. Results showed that, for the same sample, students rose from below the national level (a mean of 94% of the national average, with a 6% standard deviation) to above it (a mean of 100%, with a 2% standard deviation). The improvement in fluid mechanics was much higher than in other subjects, and student performance exceeded that of the top-tier programs in the USA. The use of the system produced a notable improvement in student achievement and also reduced instructor time. The authors suggested refining the pre- and post-tests to relate them to metacognitive learning. The study demonstrated the advantages of applying a CBA system and introduced a new measure of problem-solving skills, namely the FE exam.

Another study compared traditional teaching and assessment with educational software [37] . The study was carried out at a state primary school in North Cyprus. Two groups were tested: the first, consisting of 26 students, was taught using traditional lecture-based methods, and the second, consisting of 29 students, was taught using educational software called Frizbi Mathematics 4. Achievement scores were recorded three times: at the start of the study, after the intervention, and after four months. The results were compared using ANOVA across different variables and variations. The final findings gave evidence that Frizbi Mathematics 4, computer-based educational software that includes self-automated assessments, is an effective tool for both assessment and learning. Terzis et al. (2013) investigated continuance acceptance in the CBA context by examining users’ expectations before and after interaction with the system [15] . The results confirmed that both Ease of Use and Playfulness are direct determinants of CBA use; all the other, indirect determinants of CBA were also confirmed and discussed in detail.

Quellmalz (2014) contributed a chapter to an education encyclopedia discussing assessment under the next generation of science standards, where science phenomena need more flexible, dynamic, and complex representations [24] . Furthermore, students need a way to evaluate the effectiveness of the HCI. The migration of CBA from computers to mobile devices could provide effective tools for collecting evidence of learning. Modern technology will enhance both assessment of learning and assessment for learning.

3. Hypotheses Development

3.1. CBAAM Model

Based on previous technology acceptance models such as TAM, TPB, and UTAUT, a new model called the Computer Based Assessment Acceptance Model (CBAAM) was proposed [4] . The model used multiple constructs from the existing models but added two new variables: Content and Goal Expectancy. Figure 1 demonstrates the research’s conceptual framework and the hypothesized relationships between the adopted constructs.

This model combines the adopted constructs through the following hypotheses:

H1: Perceived Playfulness will have a positive effect on the Behavioral Intention to use CBA.

H2: Perceived Usefulness will have a positive effect on the Behavioral Intention to use CBA.

H3: Perceived Usefulness will have a positive effect on Perceived Playfulness.

H4: Perceived Ease of Use will have a positive effect on the Behavioral Intention to use CBA.

H5: Perceived Ease of Use will have a positive effect on Perceived Usefulness.

H6: Perceived Ease of Use will have a positive effect on Perceived Playfulness.

H7: Computer Self Efficacy will have a positive effect on Perceived Ease of Use.

H8: Social Influence will have a positive effect on Perceived Usefulness.

H9: Facilitating Conditions will have a positive effect on Perceived Ease of Use.

H10: Goal Expectancy will have a positive effect on Perceived Usefulness.

H11: Goal Expectancy will have a positive effect on Perceived Playfulness.

H12: Content will have a positive effect on Perceived Usefulness.

H13: Content will have a positive effect on Perceived Playfulness.

H14: Content will have a positive effect on Goal Expectancy.

H15: Content will have a positive effect on the Behavioral Intention to Use CBA.

The following sections describe the research model constructs.

3.1.1. Perceived Playfulness

Moon and Kim (2001) extended TAM by adding the construct Perceived Playfulness [38] . This construct is defined by three dimensions:

Figure 1. Research model.

・ Concentration: Determines whether the user is concentrated on the activity.

・ Curiosity: Determines if the system aroused the user’s cognitive curiosity [39] .

・ Enjoyment: Determines whether the user is enjoying the interaction with the system or not.

Although these three dimensions are interdependent and linked, none of them alone reflects the user’s total interaction with the system. A successful CBA implementation is able to hold users’ concentration, curiosity, and enjoyment. Therefore, CBAAM assumes that Behavioral Intention is positively affected by Perceived Playfulness, as in the following hypothesis:

H1: Perceived Playfulness will have a positive effect on the Behavioral Intention.

3.1.2. Perceived Usefulness

As mentioned before, Perceived Usefulness measures how much a person believes that his/her job performance will increase when using a particular computer system. Researchers have provided considerable evidence of the effect of Perceived Usefulness on users’ Behavioral Intention to use a learning system [40] -[42] . CBAAM also assumes that a learner’s concentration, curiosity, and enjoyment will increase as a result of using a useful system, which leads to the following hypotheses:

H2: Perceived Usefulness will have a positive effect on the Behavioral Intention to use CBA.

H3: Perceived Usefulness will have a positive effect on Perceived Playfulness.

3.1.3. Perceived Ease of Use

As also discussed above, Perceived Ease of Use measures a person’s belief that using a computer system requires little effort. Previous research showed that Perceived Ease of Use has a direct effect on Perceived Usefulness and Behavioral Intention [12] [43] . CBAAM assumes that Perceived Ease of Use will have a positive influence on Perceived Playfulness, because a system that can be used without much effort lets users work smoothly and without disturbance. Given these effects of Perceived Ease of Use, the following hypotheses were made:

H4: Perceived Ease of Use will have a positive effect on the Behavioral Intention to use CBA.

H5: Perceived Ease of Use will have a positive effect on Perceived Usefulness.

H6: Perceived Ease of Use will have a positive effect on Perceived Playfulness.

3.1.4. Computer Self Efficacy

Research results show that there is a link between Computer Self Efficacy (CSE) and Perceived Ease of Use [12] [44] [45] . Therefore, CSE has an impact on Perceived Ease of Use and also an indirect impact on Behavioral Intention. The following hypothesis was made:

H7: Computer Self Efficacy will have a positive effect on Perceived Ease of Use.

3.1.5. Social Influence

Social Influence can be defined as the effect of other people’s opinions and of superiors’ and peers’ influence. Three elements define Social Influence: Subjective Norm (SN), Image, and Voluntariness [46] . To measure Social Influence, previous models used the constructs Social Factors (MPCU), Image (IDT), and Subjective Norm (TRA, TPB, C-TAM-TPB, and TAM2) [47] . According to TAM2, Subjective Norm and Image influence whether users see a system as useful, while Subjective Norm has no impact on Behavioral Intention if users are using a system voluntarily. UTAUT considers Social Influence one of the four core constructs that have a direct effect on Behavioral Intention.

In CBAAM it was assumed that Social Influence has a direct impact on Perceived Usefulness. This was based on the observation that students often feel insecure about using a CBA and are affected by the opinions of their friends, colleagues, and seniors. Also, students discuss Perceived Usefulness and its added value as the main topic regarding a CBA. Since the CBA in CBAAM is voluntary, and TAM2 proposes that Subjective Norm has no impact on Behavioral Intention under voluntary use, CBAAM did not study that effect. The only hypothesis regarding Social Influence is:

H8: Social Influence will have a positive effect on Perceived Usefulness.

3.1.6. Facilitating Conditions

Facilitating Conditions (FCs) are defined as the set of factors that affect a person’s belief in his/her ability to perform a procedure. There are many aspects of FC; one of them is technical support, such as helpdesks or online support services [4] . Other factors are resources such as time and money [48] .

In CBAAM, FC was defined as the support that is provided during a CBA. If users face difficulties while using a CBA, support must be given to help them overcome these difficulties. This support includes having an expert to answer students’ questions and queries if the CBA is used in a university. For the previous reasons, the following hypothesis was made:

H9: Facilitating Conditions will have a positive effect on Perceived Ease of Use.

3.1.7. Goal Expectancy

In distance learning, the need for self-direction and goal orientation has been highlighted by many studies [49] [50] . Self-management of learning was proposed by [49] as the degree to which a person feels he/she is able to engage in autonomous learning and is self-disciplined. In terms of technology acceptance, learning goal orientation was proposed by [50] as a construct that affects learning acceptance. Also, Personal Outcome Expectations was introduced by [51] as an antecedent of intention to use. This was based on the work of [52] , which proposed that a person’s motivation to perform an act increases with increased outcome expectancy. Finally, [53] reinforced this theory by showing that a person’s actions are strongly influenced by his/her expectations regarding the consequences of those actions.

In CBAAM, a new construct called Goal Expectancy (GE) was introduced, motivated by the previously mentioned studies. This construct captures a person’s belief that he/she is well prepared to use a CBA. GE has two dimensions, based on the two types of assessment (summative and formative). In summative assessment (which is examined in their study), the first dimension measures a student’s satisfaction with his/her preparation: students have to study and prepare themselves in order to be able to answer the questions in the assessment. The second dimension measures the student’s desired success level: before the assessment, each student predicts his/her performance based on his/her preparation and sets a percentage of correct answers as the goal that would represent satisfactory performance.

It is assumed that GE strongly influences Perceived Usefulness. However, this influence depends on the type of assessment. In summative assessment, GE has an impact on usefulness because well-prepared students can understand the questions and answer them. This does not apply to formative assessment, where the value comes from the feedback provided by the CBA, which enables students to understand their learning material. Therefore, in formative assessment, GE has a negative impact on Perceived Usefulness, as students use the system to learn more than to test their knowledge.

Moreover, this model assumes that Perceived Playfulness will be positively impacted by GE. In order to meet their expectations of good performance, students will concentrate more on the CBA; if they are well prepared, they will also be able to answer the questions correctly and will enjoy the interaction with the system more. The following hypotheses are assumed:

H10: Goal Expectancy will have a positive effect on Perceived Usefulness.

H11: Goal Expectancy will have a positive effect on Perceived Playfulness.

3.1.8. Content

The last construct in this model is Content. Ong et al. (2004) introduced content as an important construct in learners’ satisfaction [54] . This construct examines whether the content is up-to-date, sufficient, useful, and satisfies users’ needs. In CBAAM, two dimensions of content are studied: the course content and the question content. Course content is believed to strongly affect the perceived usefulness and playfulness of the CBA system, since it can determine whether the course is useful, interesting, or difficult. Question content is examined to determine whether the questions are clear, easy to understand, and related to the content of the course.

These dimensions of content are proposed only in this model; previous models examined content for different purposes. The model therefore assumes that Content will affect Perceived Usefulness, Perceived Playfulness, Goal Expectancy, and Behavioral Intention, as in the following hypotheses:

H12: Content will have a positive effect on Perceived Usefulness.

H13: Content will have a positive effect on Perceived Playfulness.

H14: Content will have a positive effect on Goal Expectancy.

H15: Content will have a positive effect on the Behavioral Intention to Use CBA.

Weinerth et al. (2014) examined usability in the application of CBA [55] . They discuss the impact of usability on CBA, noting that research on this issue is insufficient. Their review confirms that studies of the interaction between usability and test-use training are currently few, if not neglected altogether, and tabulates how frequently usability is addressed in the reviewed literature.

4. Research Methodology

The study involved 546 students, of whom 340 were female (62.3%) and 206 were male (37.7%). Most of the students were between 17 and 23 years old. The students took a CBA exam consisting of 45 multiple choice questions, each with four possible answers. The questions displayed to students were randomly generated, and the assessment duration was 45 minutes, after which every student answered a survey of 34 questions. As an illustration of the setup, a sketch of how such a randomized exam could be drawn from an item bank is shown below.
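The following sketch shows one plausible way a CBA engine could draw each student's randomized 45-question, four-option multiple choice exam from a larger item bank. It is an illustrative assumption only; the paper does not describe the actual question-generation logic of the University of Jordan system, and the bank structure and function names here are hypothetical.

```python
# Illustrative sketch only: drawing a randomized multiple choice exam from an
# item bank. Not the actual system used in the study.
import random

def draw_exam(item_bank, n_questions=45, seed=None):
    """Sample distinct questions; each item is a dict with a stem and four options."""
    rng = random.Random(seed)  # a per-student seed makes each paper reproducible
    exam = rng.sample(item_bank, n_questions)
    for question in exam:
        rng.shuffle(question["options"])  # also randomize answer-option order
    return exam

# Example usage with a toy bank of 200 items:
bank = [{"stem": f"Question {i}?", "options": ["A", "B", "C", "D"]}
        for i in range(200)]
paper = draw_exam(bank, seed=42)
print(len(paper), "questions drawn")
```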

Table 1. Constructs and measurement items.

The current research uses a Structural Equation Modeling (SEM) approach based on AMOS 20.0 to study the causal relationships and to test the hypotheses between the observed and latent constructs in the proposed research model. SEM can be divided into two sub-models: a measurement model and a structural model. While the measurement model defines relationships between the observed and unobserved variables, the structural model identifies relationships among the unobserved/latent variables by specifying which latent variables directly or indirectly influence changes in other latent variables in the model [56] [57] . Furthermore, the structural equation modeling process consisted of two components: validating the measurement model and fitting the structural model. While the former is accomplished through confirmatory factor analysis, the latter was accomplished by path analysis with latent variables [58] . Using a two-step approach assures that only the constructs retained from the survey that have good measures (validity and reliability) will be used in the structural model [57] .
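To make this two-step procedure concrete, the sketch below specifies the CBAAM measurement and structural models in lavaan-style syntax using the open-source Python package semopy, rather than the AMOS 20.0 software actually used in the study. The number of items per construct and the data file name are assumptions for illustration; the structural paths follow hypotheses H1-H15.

```python
# A minimal sketch of the CBAAM model in semopy (Python SEM package).
# Item counts per construct and "cbaam_survey.csv" are illustrative.
import pandas as pd
from semopy import Model, calc_stats

MODEL_DESC = """
# Measurement model: each latent construct is indicated by its survey items.
PU =~ PU1 + PU2 + PU3
PE =~ PE1 + PE2 + PE3
CS =~ CS1 + CS2 + CS3
SI =~ SI1 + SI2 + SI3
FC =~ FC1 + FC2 + FC3
CT =~ CT1 + CT2 + CT3
GY =~ GY1 + GY2 + GY3
PP =~ PP1 + PP2 + PP3
BI =~ BI1 + BI2 + BI3

# Structural model: the fifteen hypothesized paths.
# H7, H9
PE ~ CS + FC
# H14
GY ~ CT
# H5, H8, H10, H12
PU ~ PE + SI + GY + CT
# H3, H6, H11, H13
PP ~ PU + PE + GY + CT
# H1, H2, H4, H15
BI ~ PP + PU + PE + CT
"""

data = pd.read_csv("cbaam_survey.csv")  # one column per Likert item
model = Model(MODEL_DESC)
model.fit(data)
print(model.inspect())      # path estimates, standard errors, p-values
print(calc_stats(model).T)  # chi-square/df, CFI, TLI, RMSEA, etc.
```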

The basis for data collection and analysis is a field study in which respondents answered all items on five-point Likert scales ranging from 1 (strongly disagree) to 5 (strongly agree). Furthermore, the items used to measure each of the constructs were primarily obtained from prior research. These items provided a sound basis for data gathering and measurement, as their reliability and validity have been verified through previous research and peer review. The model’s Behavioral Intention (BI) to Use CBA constructs and their corresponding items (i.e. Perceived Usefulness (PU), Perceived Ease of Use (PE), Computer Self Efficacy (CS), Social Influence (SI), Facilitating Conditions (FC), Content (CT), Goal Expectancy (GY), and Perceived Playfulness (PP)) were adapted from [4] . Table 1 shows the measured constructs and the items measuring each construct.

Sample and Procedure

Empirical data for this study were collected through a paper-based survey in Jordan. Specifically, a survey questionnaire was used to gather data for hypothesis testing from students at the University of Jordan. Before administering the survey, the instrument was reviewed by three lecturers specialized in the Management Information Systems (MIS) discipline in order to identify problems with wording, content, and question ambiguity. After some changes were made based on their suggestions, the modified questionnaire was piloted on ten students studying at the university. Based on the feedback of this pilot study, minor edits were introduced to the survey questions, and the questionnaires were distributed to the participants. As per ethics policies, all potential participants were briefed about the nature of the work and were requested to provide explicit approval. The population of this study consists of all students who took the Introduction to Electronic Commerce course as an elective during the first semester of 2013-2014 at the University of Jordan, which counts more than 570 students according to the university’s registration unit. The sample size of this study was determined based on the rules of thumb for using SEM within AMOS 20.0 in order to obtain reliable and valid results. Kline (2010) suggested that a sample of 200 or larger is suitable for a complicated path model [59] . Furthermore, taking into account the complexity of the model, which considers the number of constructs and variables within the model, and after eliminating incomplete surveys, our sample size (546) meets the recommended guidelines of [59] -[61] . The demographic data of the respondents are reported in Table 2.

As shown in Table 2, the demographic profile of the respondents revealed that the sample consisted mostly of females, most of them between 17 and 23 years old and in their second or third academic year, and most of them use different types of IT for more than 3 hours.

5. Research Results

5.1. Descriptive Statistics

All 30 items were tested for their means, standard deviations, skewness, and kurtosis. The descriptive statistics presented below in Table 3 indicate a positive disposition towards the items. The standard deviation (SD) values ranged from 0.75222 to 1.21275, indicating a narrow spread around the mean. Also, the mean values of all items were greater than the midpoint (2.5) and ranged from 2.8553 (GY1) to 4.4377 (CS3). After careful assessment using skewness and kurtosis, the data were found to be normally distributed, since all of the values were inside the adequate ranges for normality (i.e. −1.0 to +1.0 for skewness, and less than 10 for kurtosis) [59] . Furthermore, the items are ordered by their mean values and ranked into three ranges (1 - 2.33 low; 2.34 - 3.67 medium; and 3.68 - 5 high).
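A minimal sketch of this descriptive screening, assuming the responses sit in one column per item of a hypothetical CSV file, might look as follows:

```python
# A sketch of the descriptive screening reported in Table 3: per-item mean,
# standard deviation, skewness, and kurtosis, plus the three-range ranking of
# the means. The file name and column layout are hypothetical.
import pandas as pd

items = pd.read_csv("cbaam_survey.csv")  # one column per five-point Likert item

stats = pd.DataFrame({
    "mean": items.mean(),
    "sd": items.std(),
    "skewness": items.skew(),      # normality range used in the paper: -1.0 to +1.0
    "kurtosis": items.kurtosis(),  # paper's cut-off: less than 10
})

# Rank each item's mean: 1 - 2.33 low, 2.34 - 3.67 medium, 3.68 - 5 high.
stats["level"] = pd.cut(stats["mean"], bins=[1.0, 2.33, 3.67, 5.0],
                        labels=["low", "medium", "high"], include_lowest=True)
print(stats.sort_values("mean", ascending=False))
```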

Table 2. Demographic data for respondents.

Table 3. Mean, standard deviation of scale items.

Table 4. Measurement model fit indices.

Table 4 shows the different goodness-of-fit indices used to assess the initially specified model. It demonstrates that the research constructs fit the data according to the absolute, incremental, and parsimonious model fit measures, comprising the chi-square per degree of freedom ratio (χ²/df), Incremental Fit Index (IFI), Tucker-Lewis Index (TLI), Comparative Fit Index (CFI), and Root Mean Square Error of Approximation (RMSEA). The researchers examined the standardized regression weights for the research’s indicators and found that most indicators loaded highly on their latent variables. All items that met the minimum recommended factor loading of 0.50, with RMSEA less than 0.10 [57] [59] [62] , were retained for further analysis; SI4, GY1, and PP4, which had loadings of 0.405, 0.376, and 0.163 respectively, were excluded. After this exclusion, the measurement model showed a better fit to the data (as shown in Table 4): χ²/df was 1.990, IFI = 0.96, TLI = 0.95, CFI = 0.96, and RMSEA = 0.043.
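For reference, the conventional definitions of these indices from the SEM literature (they are not reproduced in the paper itself) are given below, where the subscript m denotes the hypothesized model, b the baseline (independence) model, and N the sample size:

```latex
\mathrm{RMSEA} = \sqrt{\frac{\max(\chi^2_m - df_m,\, 0)}{df_m\,(N-1)}}
\qquad
\mathrm{CFI} = 1 - \frac{\max(\chi^2_m - df_m,\, 0)}{\max(\chi^2_b - df_b,\, 0)}
\qquad
\mathrm{TLI} = \frac{\chi^2_b/df_b - \chi^2_m/df_m}{\chi^2_b/df_b - 1}
\qquad
\mathrm{IFI} = \frac{\chi^2_b - \chi^2_m}{\chi^2_b - df_m}
```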

5.2. Measurement Model

Confirmatory factor analysis (CFA) was conducted to check the properties of the instrument items. Indeed, prior to analyzing the structural model, a CFA based on AMOS 20.0 was conducted to first consider the measurement model fit and then assess the reliability, convergent validity and discriminant validity of the constructs [63] . The outcomes of the measurement model are presented in Table 5, which encapsulates the standardized factor loadings, measures of reliabilities and validity for the final measurement model.

5.2.1. Unidimensionality

Unidimensionality is the extent to which the study indicators reflect a single underlying latent variable. An examination of the unidimensionality of the research constructs is an essential prerequisite for construct reliability and validity analysis [64] . In line with [56] , this research assessed unidimensionality using the factor loadings of items on their respective constructs. Table 5 shows solid evidence for the unidimensionality of all the constructs specified in the measurement model. All loadings were above 0.50, the criterion value recommended by [62] , except SI4, GY1, and PP4. These loadings confirmed that 27 items loaded satisfactorily on their constructs.

Table 5. Properties of the final measurement model.

5.2.2. Reliability

Reliability analysis concerns the degree of consistency between multiple measurements of a variable and can be measured by the Cronbach’s alpha coefficient and composite reliability [57] . Some scholars (e.g. [65] ) suggested that the values of all indicators or dimensional scales should be above the recommended value of 0.60. Table 5 indicates that all Cronbach’s alpha values for the nine variables exceeded the recommended value of 0.60 [65] , demonstrating that the instrument is reliable. Furthermore, as shown in Table 5, composite reliability values ranged from 0.66 to 0.88, all greater than the recommended value of 0.60, or 0.70 as suggested by Holmes-Smith (2001) [66] . Consequently, according to these two tests, all the research constructs in this study are considered reliable.
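As a worked illustration of these two checks, the sketch below computes Cronbach's alpha from raw item responses and composite reliability from standardized CFA loadings; the file name, column names, and loading values are hypothetical, not the study's actual data.

```python
# A minimal sketch of the two reliability checks reported in Table 5.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings) -> float:
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2)).
    Assumes standardized loadings, so each item's error variance is 1 - lambda^2."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1 - lam ** 2).sum())

# Example: the Perceived Playfulness items after dropping PP4 (loading 0.163).
survey = pd.read_csv("cbaam_survey.csv")
print(cronbach_alpha(survey[["PP1", "PP2", "PP3"]]))
print(composite_reliability([0.78, 0.81, 0.74]))  # illustrative loadings
```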

As shown above, since the measurement model has a good fit, convergent validity and discriminant validity can now be assessed in order to evaluate whether the psychometric properties of the measurement model are adequate.

5.2.3. Content, Convergent, and Discriminant Validity

Although reliability is a necessary condition of the goodness of a measure used in research, it is not sufficient [67] -[69] ; validity is another condition used to assess the goodness of a measure. Validity refers to the extent to which an instrument measures what it is expected to measure, or what the researcher wishes to measure [70] . The items selected to measure the nine variables were validated and reused from previous research. Therefore, to enhance the validity of the scale, the researchers benefited from pre-used scales developed by other researchers. In addition, the questionnaire items were reviewed by four instructors of the Business Faculty at the University of Jordan; the feedback from this pre-test group contributed to the content validity of the instrument. Moreover, to further enhance content validity, seven academics were asked to give their feedback about the questionnaire, confirming that the knowledge presented in the content of each question was relevant to the studied topic.

Furthermore, while the convergent validity test is needed in the measurement model to determine whether the indicators in a scale load together on a single construct, the discriminant validity test verifies whether the items developed to measure different constructs actually evaluate those constructs [71] . As shown in Table 5, all items were significant and had loadings greater than 0.50 on their underlying constructs. Moreover, the standard errors for the items ranged from 0.032 to 0.220, and all item loadings were more than twice their standard error. Discriminant validity was considered using several tests. First, it can be examined in the measurement model by investigating the Average Variance Extracted (AVE) of the latent constructs: any extremely large correlations among the research constructs would imply a discriminant validity problem. If the AVE for each construct exceeds the squared correlation between that construct and any other construct, then discriminant validity is established [72] . As shown in Table 5, the AVEs of all the constructs were above the suggested level of 0.50, ranging from 0.50 to 0.72, implying that each construct accounts for more than 50 percent of the variance in its measurement items, which meets the recommendation that AVE values should be at least 0.50 for each construct [65] [66] . Furthermore, as shown in Table 6, discriminant validity was confirmed, as the AVE values were greater than the squared correlations for each pair of constructs. Thus, the measures significantly discriminate between the constructs.
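The Fornell-Larcker criterion applied in Table 6 can be expressed in a few lines. The sketch below assumes hypothetical loadings and a hypothetical latent correlation, not the paper's actual values.

```python
# A sketch of the Fornell-Larcker discriminant validity test behind Table 6:
# each construct's AVE must exceed its squared correlation with every other
# construct. All numbers below are illustrative.
import numpy as np

def ave(loadings) -> float:
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

def fornell_larcker(aves: dict, corr: dict) -> bool:
    """True if each construct's AVE exceeds its squared correlation with all others."""
    ok = True
    for (a, b), r in corr.items():
        if min(aves[a], aves[b]) <= r ** 2:
            print(f"Discriminant validity problem between {a} and {b}")
            ok = False
    return ok

aves = {"PU": ave([0.80, 0.76, 0.71]), "PP": ave([0.78, 0.81, 0.74])}
corr = {("PU", "PP"): 0.55}  # latent correlation (illustrative)
print(fornell_larcker(aves, corr))
```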

5.3. Structural Model and Hypotheses Testing

In order to examine the structural model it is essential to investigate the statistical significance of the standardized regression weights (i.e. t-value) of the research hypotheses (i.e. the path estimations) at 0.05 level (see Table 7); and the coefficient of determination (R2) for the research endogenous variables as well.

Table 6. AVE and square of correlations between constructs.

Note: Diagonal elements are the average variance extracted for each of the nine constructs. Off-diagonal elements are the squared correlations between constructs.

Table 7. Summary of proposed results for the theoretical model.

The coefficients of determination for Goal Expectancy, Perceived Usefulness, Perceived Playfulness, Perceived Ease of Use, and Behavioral Intention to Use were 0.34, 0.20, 0.47, 0.22, and 0.10 respectively, indicating that the model accounts for a moderate share of the variance in most endogenous variables but only 10% of the variance in Behavioral Intention to Use.

6. Discussion

Nowadays, students’ learning performance and outcomes are increasingly evaluated using CBA rather than Paper Based Assessment (PBA). Our research purpose is to explore and identify the influential factors that affect students’ attitudes toward using CBA in higher education. Researchers are working in this area to help institutions implement CBA successfully. In the literature, Perceived Usefulness, Perceived Ease of Use, Perceived Playfulness, and Perceived Importance are considered main elements of Behavioral Intention to use CBA [2] [4] [7] [11] - [15] .

The study shows that Perceived Playfulness has a direct impact on Behavioral Intention, while Perceived Usefulness, Perceived Ease of Use, Content, Computer Self Efficacy, Facilitating Conditions, Social Influence, and Goal Expectancy have only indirect impacts on Behavioral Intention (see Table 8). The Content construct, used in this manner for the first time in this model, did not have a direct impact on Behavioral Intention as hypothesized. However, the other hypotheses regarding Content were confirmed: Content has a direct effect on Perceived Usefulness, Perceived Playfulness, and Goal Expectancy, which indicates an indirect influence on Behavioral Intention.

Regarding Goal Expectancy, it was shown that students find a CBA useful and playful when they have good expectations from the system. Moreover, the positive effect of Social Influence on Perceived Usefulness provided by TAM2 was also supported by this model. Additionally, Perceived Ease of Use is positively impacted by Computer Self Efficacy and Facilitating Conditions as shown by the study. Furthermore, Perceived Ease of Use has a direct impact on Perceived Usefulness and Perceived Playfulness. While previous studies show that Perceived Usefulness and Perceived Ease of Use have a direct impact on Behavioral Intention, the study of this model shows that they have only an indirect impact through Perceived Playfulness.

Therefore, the results of this study confirm the results of the prior study conducted by [4] regarding the roles of Perceived Playfulness, Perceived Usefulness, Content, Computer Self Efficacy, Facilitating Conditions, Social Influence, and Goal Expectancy in students’ Behavioral Intention to Use CBA, and contradict its results regarding the role of Perceived Ease of Use. Table 9 summarizes the results of this study and of the (Terzis & Economides, 2011) study, listing the 15 hypotheses and whether each was supported. The study concludes that a system is more likely to be used by students if it is playful, which confirms previous studies; also, a CBA is more likely to be playful when it is easy to use and useful.

Table 8. R2 and direct, indirect and total effects.

Table 9. Summary of our research results and (Terzis & Economides, 2011) [4] results.

7. Conclusions

This study investigated the factors that influence students’ behavioral intention to use computer based assessment in higher education. The tested model and measurement were supported by the collected data. Our results demonstrate that Perceived Playfulness has a direct effect on Behavioral Intention to Use CBA, which aligns with [4] [48] [54] [29] [30] . Perceived Usefulness has no direct effect on Behavioral Intention to Use CBA, which aligns with [4] and contradicts [29] [31] [32] [54] . On the other hand, Perceived Ease of Use has no direct effect on Behavioral Intention to Use CBA, which contradicts [4] . Furthermore, Content has no direct effect on Behavioral Intention to Use CBA, while it has a direct effect on Goal Expectancy, Perceived Usefulness, and Perceived Playfulness, which aligns with [4] . Also, Perceived Ease of Use has a direct effect on Perceived Usefulness and Perceived Playfulness, and is itself positively impacted by Computer Self Efficacy and Facilitating Conditions. Moreover, Perceived Usefulness is positively impacted by Goal Expectancy and Social Influence, as shown by the study. Finally, Perceived Playfulness is positively impacted by Perceived Usefulness and Goal Expectancy.

The study shows that Perceived Playfulness has a direct effect on CBA use, while Perceived Ease of Use, Perceived Usefulness, Computer Self Efficacy, Social Influence, Facilitating Conditions, Content, and Goal Expectancy have only indirect effects. Consequently, educators and developers should aim to make using CBA playful for students. The study concludes that a system is more likely to be used by students if it is playful, and a CBA is more likely to be playful when it is easy to use and useful. Finally, the studied acceptance model for computer based assessment explains only approximately 10% of the variance in Behavioral Intention to Use CBA; therefore, researchers need to investigate other variables that affect Behavioral Intention.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Joosten-ten Brinke, D., van Bruggen, J., Hermans, H., Burgers, J., Giesbers, B., Koper, R. and Latour, I. (2007) Modeling Assessment for Re-Use of Traditional and New Types of Assessment. Computers in Human Behavior, 23, 2721-2741.
http://dx.doi.org/10.1016/j.chb.2006.08.009
[2] Siozos, P., Palaigeorgiou, G., Triantafyllakos, G. and Despotakis, T. (2009) Computer Based Testing Using “Digital Ink”: Participatory Design of a Tablet PC Based Assessment Application for Secondary Education. Computers & Education, 52, 811-819.
http://dx.doi.org/10.1016/j.compedu.2008.12.006
[3] Moridis, C.N. and Economides, A.A. (2009) Mood Recognition during Online Self-Assessment Test. IEEE Transactions on Learning Technologies, 2, 50-61.
http://dx.doi.org/10.1109/TLT.2009.12
[4] Terzis, V. and Economides, A.A. (2011) The Acceptance and Use of Computer Based Assessment. Computers & Education, 56, 1032-1044.
http://dx.doi.org/10.1016/j.compedu.2010.11.017
[5] Maqableh, M. (2012) Analysis and Design Security Primitives Based on Chaotic Systems for eCommerce. Durham University.
[6] Karajeh, H., Maqableh, M. and Masa’deh, R. (2014) A Review on Stereoscopic 3D: Home Entertainment for the Twenty First Century. 3D Research-Springer, 5, 1-9.
http://dx.doi.org/10.1007/s13319-014-0026-3
[7] Deutsch, T., Herrmann, K., Frese, T. and Sandholzer, H. (2012) Implementing Computer-Based Assessment—A Web-Based Mock Examination Changes Attitudes. Computers and Education, 58, 1068-1075.
http://dx.doi.org/10.1016/j.compedu.2011.11.013
[8] Masa’deh, R. (2013) A Structural Equation Modeling Approach for Determining Antecedents and Outcomes of Students’ Attitude toward Mobile Commerce Adoption. Life Science Journal, 10, 2321-2333.
[9] Sieber, V. and Young, D. (2008) Factors Associated with the Successful Introduction of On-Line Diagnostic, Formative and Summative Assessment in the Medical Sciences Division University of Oxford, 267-278.
http://caaconference.co.uk/pastConferences/2008/proceedings/Seiber_V_Young_D_final_formatted_e1.pdf
[10] Ko, C.C. and Cheng, C.D. (2008) Flexible and Secure Computer-Based Assessment Using a Single Zip Disk. Computers and Education, 50, 915-926.
http://dx.doi.org/10.1016/j.compedu.2006.09.010
[11] Davis, F.D. (1989) Perceived Usefulness Perceived Ease of Use and User Acceptance of Information Technology. MIS Quarterly, 13, 319-340.
http://dx.doi.org/10.2307/249008
[12] Venkatesh, V. and Davis, F.D. (1996) A Model of the Antecedents of Perceived Ease of Use: Development and Test. Decision Science, 27, 451-481.
http://dx.doi.org/10.1111/j.1540-5915.1996.tb01822.x
[13] Kreiter, C.D., Ferguson, K. and Gruppen, L.D. (1999) Evaluating the Usefulness of Computerized Adaptive Testing for Medical In-Course Assessment. Academic Medicine: Journal of the Association of American Medical Colleges, 74, 1125-1128.
http://dx.doi.org/10.1097/00001888-199910000-00016
[14] Terzis, V., Moridis, C.N. and Economides, A.A. (2012) How Student’s Personality Traits Affect Computer Based Assessment Acceptance: Integrating BFI with CBAAM. Computers in Human Behavior, 28, 1985-1996.
http://dx.doi.org/10.1016/j.chb.2012.05.019
[15] Terzis, V., Moridis, C.N. and Economides, A.A. (2013) Continuance Acceptance of Computer Based Assessment through the Integration of User’s Expectations and Perceptions. Computers and Education, 62, 50-61.
http://dx.doi.org/10.1016/j.compedu.2012.10.018
[16] Thelwall, M. (2000) Computer-Based Assessment: A Versatile Educational Tool. Computers & Education, 34, 37-49.
http://dx.doi.org/10.1016/S0360-1315(99)00037-8
[17] Jantz, C., Anderson, J. and Gould, S.M. (2002) Using Computer-Based Assessments to Evaluate Interactive Multimedia Nutrition Education among Low-Income Predominantly Hispanic Participants. Journal of Nutrition Education and Behavior, 34, 252-260.
http://dx.doi.org/10.1016/S1499-4046(06)60103-6
[18] He, L. and Brandt, P. (2007) WEAS: A Web-Based Educational Assessment System. Proceedings of the 45th Annual Southeast Regional Conference, ACM, New York, 126-131.
http://dx.doi.org/10.1145/1233341.1233365
[19] Jimoh, R.G., Yussuff, M.A., Akanmu, M.A., Enikuomehin, A.O. and Salman, I.R. (2011) Acceptability of Computer Based Testing (CBT) Mode for Undergraduate Courses in Computer Science. Journal of Science, Technology, Mathematics and Education (JOSTMED), 7, 11-20.
[20] Saleem, H., Beaudry, A. and Croteau, A.M. (2011) Antecedents of Computer Self-Efficacy: A Study of the Role of Personality Traits and Gender. Computers in Human Behavior, 27, 1922-1936.
http://dx.doi.org/10.1016/j.chb.2011.04.017
[21] Alquraan, M.F. (2012) Methods of Assessing Students’ Learning in Higher Education: An Analysis of Jordanian College and Grading System. Education, Business and Society: Contemporary Middle Eastern Issues, 5, 124-133.
http://dx.doi.org/10.1108/17537981211251160
[22] Jimoh, R.G., Shittu, A.K. and Kawu, Y.K. (2012) Students’ Perception of Computer Based Test (CBT) for Examining Undergraduate Chemistry Courses. Journal of Emerging Trends in Computing and Information Sciences, 3, 125-134.
[23] Van Der Kleij, F.M., Eggen, T.J.H.M., Timmers, C.F. and Veldkamp, B.P. (2012) Effects of Feedback in a Computer-Based Assessment for Learning. Computers and Education, 58, 263-272.
http://dx.doi.org/10.1016/j.compedu.2011.07.020
[24] Quellmalz, E. (2014) Computer-Based Assessment. In: Gunston, R., Ed., Encyclopedia of Science Education SE-44-2, Springer, Dordrecht, 1-6.
http://dx.doi.org/10.1007/978-94-007-6165-0_44-2
[25] Terzis, V., Moridis, C.N., Economides, A.A. and Mendez, G.R. (2013) Computer Based Assessment Acceptance: A Cross-Cultural Study in Greece and Mexico. Educational Technology and Society, 16, 411-424.
[26] Abduh, H.Y., Hussin, R., Bin, C. and Dahlan, H.M. (2014) Technology Acceptance for CBT in Secondary Schools of Saudi Arabia, 3-6.
[27] Huff, K.C. (2015) The Comparison of Mobile Devices to Computers for Web-Based Assessments. Computers in Human Behavior, 49, 208-212.
http://dx.doi.org/10.1016/j.chb.2015.03.008
[28] Timmers, C.F., Walraven, A. and Veldkamp, B.P. (2015) The Effect of Regulation Feedback in a Computer-Based Formative Assessment on Information Problem Solving. Computers & Education, 87, 1-9.
http://dx.doi.org/10.1016/j.compedu.2015.03.012
[29] Landry, B.J.L., Griffeth, R. and Hartman, S. (2006) Measuring Student Perceptions of Blackboard Using the Technology Acceptance Model. Decision Sciences Journal of Innovative Education, 4, 87-99.
http://dx.doi.org/10.1111/j.1540-4609.2006.00103.x
[30] Terzis, V., Moridis, C.N. and Economides, A.A. (2011) The Extension of the Computer Based Assessment Acceptance Model with Perceived Importance. Proceedings of the 4th International Conference on Interactive Computer-Aided Blended Learning, Antigua Guatemala, 2-4 November 2011.
[31] Liao, H.L. and Lu, H.P. (2008) The Role of Experience and Innovation Characteristics in the Adoption and Continued Use of E-Learning Websites. Computers and Education, 51, 1405-1416.
http://dx.doi.org/10.1016/j.compedu.2007.11.006
[32] Teo, T. (2009) Modeling Technology Acceptance in Education: A Study of Pre-Service Teachers. Computers & Education, 52, 302-312.
http://dx.doi.org/10.1016/j.compedu.2008.08.006
[33] Mayer, R.E. (2002) A Taxonomy for Computer-Based Assessment of Problem Solving. Computers in Human Behavior, 18, 623-632.
http://dx.doi.org/10.1016/S0747-5632(02)00020-1
[34] Gikandi, J.W., Morrow, D. and Davis, N.E. (2011) Online Formative Assessment in Higher Education: A Review of the Literature. Computers and Education, 57, 2333-2351.
http://dx.doi.org/10.1016/j.compedu.2011.06.004
[35] Terzis, V. and Economides, A.A. (2011) Computer Based Assessment: Gender Differences in Perceptions and Acceptance. Computers in Human Behavior, 27, 2108-2122.
http://dx.doi.org/10.1016/j.chb.2011.06.005
[36] Nirmalakhandan, N. (2013) Improving Problem-Solving Skills of Undergraduates through Computerized Dynamic Assessment. Procedia—Social and Behavioral Sciences, 83, 615-621.
http://dx.doi.org/10.1016/j.sbspro.2013.06.117
[37] Pilli, O. and Aksu, M. (2013) The Effects of Computer-Assisted Instruction on the Achievement, Attitudes and Retention of Fourth Grade Mathematics Students in North Cyprus. Computers and Education, 62, 62-71.
http://dx.doi.org/10.1016/j.compedu.2012.10.010
[38] Moon, J.W. and Kim, Y.G. (2001) Extending the TAM for a World-Wide-Web Context. Information and Management, 38, 217-230.
http://dx.doi.org/10.1016/S0378-7206(00)00061-6
[39] Malone, T.W. (1981) Toward a Theory of Intrinsically Motivating Instruction. Cognitive Science: A Multidisciplinary Journal, 5, 333-369.
http://dx.doi.org/10.1207/s15516709cog0504_2
[40] Lee, Y.C. (2008) The Role of Perceived Resources in Online Learning Adoption. Computers and Education, 50, 1423-1438.
http://dx.doi.org/10.1016/j.compedu.2007.01.001
[41] Ong, C.S. and Lai, J.Y. (2006) Gender Differences in Perceptions and Relationships among Dominants of E-Learning Acceptance. Computers in Human Behavior, 22, 816-829.
http://dx.doi.org/10.1016/j.chb.2004.03.006
[42] Van Raaij, E.M. and Schepers, J.J.L. (2008) The Acceptance and Use of a Virtual Learning Environment in China. Computers & Education, 50, 838-852.
http://dx.doi.org/10.1016/j.compedu.2006.09.001
[43] Agarwal, R. and Prasad, J. (1999) Are Individual Differences Germane to the Acceptance of New Information Technologies? Decision Sciences, 30, 361-391.
http://dx.doi.org/10.1111/j.1540-5915.1999.tb01614.x
[44] Agarwal, R., Sambamurthy, V. and Stair, R.M. (2000) Research Report: The Evolving Relationship between General and Specific Computer Self-Efficacy? An Empirical Assessment. Information Systems Research, 11, 418-430.
http://dx.doi.org/10.1287/isre.11.4.418.11876
[45] Garrido-Moreno, A., Padilla-Mele, A. and Del Aguila-Obra, A.R. (2008) Factors Affecting E-Collaboration Technology Use among Management Students. Computers & Education, 51, 609-623.
[46] Karahanna, E. and Straub, D.W. (1999) The Psychological Origins of Perceived Usefulness and Ease-of-Use. Information & Management, 35, 237-250.
http://dx.doi.org/10.1016/S0378-7206(98)00096-2
[47] Venkatesh, V., Morris, M.G., Davis, G.B. and Davis, F.D. (2003) User Acceptance of Information Technology: Toward a Unified View. MIS Quarterly, 27, 425-478.
[48] Wang, Y.-S., Cheng, M. and Wang, H.-Y. (2008) Investigating the Determinants and Age and Gender Differences in the Acceptance of Mobile Learning. British Journal of Educational Technology, 40, 92-118.
[49] Smith, P.J., Smith, P.J., Murphy, K.L., Murphy, K.L., Mahoney, S.E. and Mahoney, S.E. (2003) Towards Identifying Factors Underlying Readiness for Online Learning: An Exploratory Study. Distance Education, 24, 57-67.
http://dx.doi.org/10.1080/01587910303043
[50] Yi, M.Y. and Hwang, Y. (2003) Predicting the Use of Web-Based Information Systems: Self-Efficacy, Enjoyment, Learning Goal Orientation, and the Technology Acceptance Model. International Journal of Human Computer Studies, 59, 431-449.
http://dx.doi.org/10.1016/S1071-5819(03)00114-9
[51] Shih, H.P. (2008) Using a Cognition-Motivation-Control View to Assess the Adoption Intention for Web-Based Learning. Computers and Education, 50, 327-337.
http://dx.doi.org/10.1016/j.compedu.2006.06.001
[52] Vroom, V.H. (1964) Work and Motivation. 14th Edition, Wiley, New York.
[53] Cahill, S.E. and Bandura, A. (1987) Social Foundations of Thought and Action: A Social Cognitive Theory. Contemporary Sociology, 16, 12.
http://dx.doi.org/10.2307/2071177
[54] Ong, C.S., Lai, J.Y. and Wang, Y.S. (2004) Factors Affecting Engineers’ Acceptance of Asynchronous E-Learning Systems in High-Tech Companies. Information and Management, 41, 795-804.
http://dx.doi.org/10.1016/j.im.2003.08.012
[55] Weinerth, K., Koenig, V., Brunner, M. and Martin, R. (2014) Concept Maps: A Useful and Usable Tool for Computer-Based Knowledge Assessment? A Literature Review with a Focus on Usability. Computers and Education, 78, 201-209.
http://dx.doi.org/10.1016/j.compedu.2014.06.002
[56] Byrne, B.M. (2001) Structural Equation Modeling with AMOS: Basic Concepts, Applications, and Programming. Lawrence Erlbaum Associates. Mahwah.
[57] Hair, J., Black, W., Babin, B., Anderson, R., Tatham, R. and Black, W. (2010) Multivariate Data Analysis. 7th edition, Prentice-Hall International Inc., Upper Saddle River.
[58] Kline, R.B. (2005) Principles and Practice of Structural Equation Modeling. 2nd Edition, The Guilford Press, New York.
[59] Kline, R.B. (2010) Principles and Practice of Structural Equation Modeling. The Guilford Press, New York.
[60] Krejcie, R.V. and Morgan, D.W. (1970) Determining Sample Size for Research Activities. Education and Psychological Measurement, 30, 607-610.
[61] Pallant, J. (2005) SPSS Survival Guide—A Step by Step Guide to Data Analysis Using SPSS for Windows. Open University Press, Chicago.
[62] Newkirk, H.E., Newkirk, H.E., Lederer, A.L. and Lederer, A.L. (2006) The Effectiveness of Strategic Information Systems Planning under Environmental Uncertainty. Information & Management, 43, 481-501.
http://dx.doi.org/10.1016/j.im.2005.12.001
[63] Arbuckle, J.L. (2009) Amos 18 User’s Guide, 635.
[64] Chou, T.C., Chang, P.L., Cheng, Y.P. and Tsai, C.T. (2007) A Path Model Linking Organizational Knowledge Attributes, Information Processing Capabilities, and Perceived Usability. Information and Management, 44, 408-417.
http://dx.doi.org/10.1016/j.im.2007.03.003
[65] Bagozzi, R. and Yi, Y. (1988) On the Evaluation of Structural Equation Models. Journal of the Academy of Marketing Science, 16, 74-94.
http://dx.doi.org/10.1007/BF02723327
[66] Holmes-Smith, P. (2001) Introduction to Structural Equation Modeling Using LISREL. ACSPRI Winter Training Program, Perth.
[67] Creswell, J.W. (2014) Research Design: Qualitative, Quantitative and Mixed Methods Approaches. Sage, Los Angeles.
[68] Sekaran, U. (2003) Research Methods for Business: A Skill-Building Approach. 4th Edition, John Wiley and Sons, New York.
[69] Sekaran, U. and Roger, B. (2013) Research Methods for Business: A Skill-Building Approach. 6th Edition, John Wiley & Sons, West Sussex.
[70] Blumberg, B., Cooper, D.R. and Schindler, P.S. (2005) Business Research Methods. McGraw Hill, Berkshire, 770.
[71] Gefen, D., Straub, D.W. and Boudreau, M.B. (2000) Structural Equation Modeling and Regression: Guidelines for Research Practice. Communications of the Association for Information Systems, 4, 1-76.
http://www.cis.gsu.edu/~dstraub/Papers/Resume/Gefenetal2000.pdf
[72] Fornell, C. and Larcker, D.F. (1981) Evaluating Structural Equation Models with Unobservable Variables and Measurement Error. Journal of Marketing Research (JMR), 18, 39-50.
