A Satisfaction Evaluation Model for E-Government Solutions and Services
1. Introduction
Software has become an essential tool in daily activities in Saudi Arabia and worldwide. People rely on software to complete tasks such as filling out forms and ordering groceries. Such reliance requires that these software solutions be of high quality, which in turn stipulates evaluating and assessing the software from a quality perspective to ensure that it works as expected when needed. Therefore, the International Organization for Standardization has developed several standards concerned with software engineering, one of which is ISO/IEC 25010: Systems and software engineering—Systems and software Quality Requirements and Evaluation (SQuaRE)—System and software quality models [1].
The ISO/IEC 25010 standard, when implemented, plays a significant role in assuring the quality of software [1]. Additionally, the ISO/IEC 25010 defines two models: the product quality model and the quality in use model (QinU). The product quality model describes the static properties of the software and the dynamic properties of the system; it contains eight characteristics segmented into sub-characteristics. The QinU model is concerned with the interaction with the software while using it in a specific context, and contains five characteristics, some of which are segmented into sub-characteristics. In this study, the Satisfaction characteristic is utilized partially to achieve the study’s objective.
In e-government, information systems play the role of facilitating data and information exchange between related parties [2]. Therefore, the majority of e-government programs worldwide rely on software to provide their solutions and services. In Saudi Arabia, the Digital Government Authority (DGA) [3] is the government body that oversees the electronic government program. DGA’s main role is to pave the road for government entities to provide their services digitally with high quality and efficiency. In addition, DGA is responsible for establishing the laws, rules, and regulations required to advance Saudi Arabia’s digital transformation. Moreover, DGA monitors, measures, and evaluates the performance and capabilities of all Saudi Arabian government agencies and entities.
The United Nations E-Government Survey has been published by the United Nations Department of Economic and Social Affairs since 2001 [4]. The survey assesses the development of e-government programs in all United Nations (UN) member states relative to one another, and the reported findings act as a benchmark for UN members to improve aspects related to e-government.
In the most recent survey, published in 2024, Saudi Arabia was in the “Very High” group of the E-Government Development Index (EGDI), with a rating class of VH, and ranked sixth among all UN members with an EGDI of 0.9602. In the three preceding reports, published in 2018, 2020, and 2022, Saudi Arabia was in the “High” group in 2018 and in the “Very High” group in 2020 and 2022, with a rating class of V1 in 2018 and V2 in 2020 and 2022. It ranked fifty-second in 2018, forty-third in 2020, and thirty-first in 2022, with EGDI scores of 0.7119, 0.7991, and 0.8539, respectively. This indicates increased effort mainly in three dimensions:
1) Telecommunication infrastructure.
2) Human resources’ ability to use and promote information and communications technology (ICT).
3) Attainable online services and content.
Better quality in Saudi Arabian e-government solutions and services may have contributed to this improvement in the EGDI. Hence, there is a need for a tool that enables e-government quality assessment and allows verifying and confirming EGDI results. This study focuses on the third dimension, since making e-government services and solutions attainable requires assuring their quality.
Accessing data and information related to e-government solutions and services may be difficult or troublesome due to several factors, such as the providers’ rules and regulations or the decision maker’s approval and judgment. Therefore, there exists the need to evaluate the quality of e-government solutions and services using other means, one of which is through the users’ perspective, since they are the beneficiaries of these solutions and services.
This work presents a novel model that enables evaluating and assessing e-government solutions and services based on the users’ perspective. The model is constructed by utilizing questionnaire responses, and includes a scoring mechanism and an evaluation scheme. In the OSMM model [5], a maturity score for each element is assigned by the evaluator, and the evaluation grade is the result of summing all maturity scores. In addition, the SQO-OSS quality model [6] divided the evaluation method into two phases: the definition of the evaluation model and the definition of the aggregation method. To obtain results, measurements were aggregated and combined, and the profile-based evaluation method was used.
The suggested model enables understanding and evaluation of the quality of e-government solutions and services. Additionally, since e-government solutions and services rely on software, the model clarifies the nature of e-government usage in accordance with the ISO/IEC 25010. Moreover, this study differs from other studies in the following:
1) It proposes a model that enables study and evaluation of the quality of e-government services and solutions based on the users’ perspective.
2) The proposed model serves as an independent tool to measure, evaluate, and assess e-government services and solutions and provide a different point of view of their level of quality from the recipients’ perspective.
3) It shows the effect of e-government software quality on users’ usage and fulfillment of their requirements.
4) The users’ perspective is identified according to their responses to a questionnaire.
5) It is the first study of its kind in Saudi Arabia: it utilizes the user’s perspective to understand the quality of e-government solutions and services.
Moreover, the proposed model aims to answer the following question:
Does the users’ perspective of e-government solutions and services in any given nation conform with that nation’s score in the EGDI?
The remainder of this study is structured as follows: the next section presents background and related work. Section 3 describes the model. Section 4 discusses the implementation of the proposed model. Finally, Section 5 contains the conclusion and future prospects.
2. Background and Related Work
Software engineering is the field of science that is involved in all aspects of software development, such as gathering user requirements, design, implementation, testing, and quality assurance [7]-[11]. Several factors contribute to the success or failure of a software product, such as its quality, functionality, and cost.
In software engineering, the area concerned with analyzing, measuring, and improving the quality features of the software product is referred to as software quality [12] [13]. To measure or evaluate software quality, two types of models are used: generalized and product-specific. The former is widely used, but bases quality on rough estimation; the latter provides a precise evaluation of the software quality [14].
Producing software with high quality is essential. However, some of the software products available in the market tend to have defects. One of the main activities of quality improvement in software is defect detection, analysis, and resolution. Defects that go undiscovered for extended periods tend to cause more harm [12] [15]. Based on the software’s importance, having a defect in a critical software product, such as software embedded within medical devices, is not acceptable. On the other hand, in a less critical software product, defect resolution may be postponed until the release of a newer version of the software.
The objective of modelling is to quantitatively reason about observations [16]. Chinn and Kramer [17] defined a model as: “A creative and rigorous structuring of ideas that projects a tentative, and systematic view of phenomena.” Havenga et al. [18] perceive models as a type of knowledge: “[A model] provides a broad theoretical conceptualization that describes the relationship between concepts and is presented symbolically in words and graphic diagrams.” They also explain that there are five attributes that influence model description: “its purpose, concept, definitions, relationship structure, and assumptions.” In addition, they explain that the development of models is a two-step process: 1) conceptual meaning construction and 2) theory structuring and contextualizing. The former is achieved through choosing the concept and formulating its criteria. The latter is based on identifying theory-related assumptions, explaining the theory’s context, and designing statements relevant to the relationship.
In his aim to evaluate e-government solutions and services, Alannsary [19] developed a questionnaire based on the Quality-in-Use (QinU) model of the ISO/IEC 25010. The questionnaire contains twenty-two closed-ended questions, distributed among three main categories: basic information, e-government service and beneficiary needs, and satisfaction with features of e-government services. The questionnaire covered four aspects of quality: overall satisfaction, usefulness, trust, and pleasure. In addition, descriptive analysis was employed to interpret questionnaire responses.
2.1. E-Government Software
There are several definitions of e-government; however, the most common is based on e-government specifications, where Information Systems are used to facilitate the data and information exchange between related parties [2]. An alternative definition is “The use of IT in government operations, including its effects on public service delivery, citizens’ satisfaction and democratic standards” [20]. It is worth mentioning that some countries in the European Union have included the implementation of e-government services within their strategic plans since the year 1999 [21].
Fakhrye [22] discussed the obstacles and difficulties that prevent the success of e-government projects, and the impact of ease of delivery on the success of delivery. In addition, the author described four government development levels: Traditional Government, Electronic Government, Mobile Government, and Smart Government. Furthermore, the author elected to use the term e-government for electronic, mobile, and smart government.
As mentioned above, DGA oversees the e-government program in Saudi Arabia. Previously, the program was known as Yesser, which was established in the year 2005 as a joint project between the Ministry of Communications and Information Technology (MCIT), the Ministry of Finance (MoF), and the Communication and Information Technology Commission (CITC) [23]. In March 2021, a royal decree in Saudi Arabia established the DGA under the supervision of MCIT and moved all of Yesser’s projects and initiatives to DGA.
Since its inception, DGA and its predecessor Yesser have encouraged government entities to prepare their services to be electronically accessible. In doing so, these entities required the public to refrain from visiting offices in person and to instead receive services through the agency’s portal and mobile application. To successfully abide by such requirements and correctly utilize the services, the software solutions developed for these portals and mobile applications need to be of high quality.
Arias and Macada [20] performed a thorough literature review on the quality of service in e-government, continuing similar reviews performed in three earlier articles. Their technique was based on a qualitative methodology to identify and perform content analysis on e-government service quality literature. The search covered the years 2002 to 2016; they identified 69 published articles, 48 of which were found to be related to evaluating the quality of service in e-government. Additionally, they classified these articles into 28 investigations, on which content analysis was performed. They found that the perception of employees was not completely investigated; on the other hand, the perception of citizens was the most considered, “4 models focus on the supply side vs. 22 focus on the demand side”.
Hidayat Ur Rehman et al. [24] discussed the challenges that developing countries face when developing e-government systems. In their work, the success of e-government systems was assessed based on the perspective of developing countries. In addition, they utilized DeLone and McLean’s IS model by incorporating two variables, the Perceived-Usefulness and Perceived-Trust attributes. Moreover, it is apparent from their work that Information Quality, System Quality, and Service Quality must be attended to and focused on when developing e-government solutions and services.
Kanaan et al. [25] investigated the effects of service quality, security, and privacy on the intention to use e-government services from the perspective of Jordanians. Similar to Hidayat Ur Rehman et al. [24], the quality aspect referred to in the study is composed of three variables: Information quality, System quality, and Service quality. The findings confirmed a powerful relationship between service quality, anticipated security, anticipated privacy, and trust in e-government services. In addition, the intention to use e-government services is heavily impacted by trust.
Alghamdi et al. [26] studied the organizational e-government readiness in Saudi Arabia. Interviews were conducted with leading e-government officials to “survey the perceptions, plans, achievements, and barriers encountered in e-government.” In the study, contrary to existing models, a new model that focused on public sector organizations was introduced to assess the organizational readiness in e-government. The model also covered more factors, such as strategy and process.
Makki and Alqahtani [27] researched barriers that impede Digital Government implementation in Saudi Arabia, and developed a model to categorize the thirteen identified barriers based on their dependability and driving strength, revealing interrelationships between them on several levels using Interpretive Structural Modeling (ISM). The model’s implications include a better understanding of the contextual interrelationships among the barriers, which will aid in supporting existing implementation achievements and opening up the potential for future opportunities.
Alnafjan [28] investigated the software engineering practice adopted by organizations located in Saudi Arabia. A questionnaire was designed to assess the adoption of software engineering practice, as well as the sources of hurdles that these organizations face. After analyzing the questionnaire results, recommendations were provided that identified areas of weakness, software processes requiring improvement, and methods to increase acceptance of software engineering concepts and techniques, education, and training.
2.2. Software Quality Evaluation from the User’s Perspective
Alannsary [29] explained that quality definitions are complex and can be confusing. Several views play a role in such a definition, such as the specifications’ view and the users’ view. A well-known definition of quality is Juran’s “fitness for intended use” [30]. Therefore, in software quality, individuals and stakeholders may have contrasting views about various aspects and meanings [14].
According to Juran [31], the term “quality” has several definitions, one of which is the features of any product that meet customer expectations and ultimately result in their satisfaction. Therefore, high-quality e-government services and solutions must meet the needs of their users and ultimately result in their satisfaction.
Sivaji et al. [32] selected a Malaysian government-managed website concerned with providing job opportunities to citizens, and their work was aimed at measuring the website’s user experience. Additionally, they selected the most appropriate user experience methodologies to achieve such measurement and utilized the ISO/IEC 25010 characteristics to accomplish the study’s objective. Their focus was on the registration process, which is one of the most common uses of the website. The participants of the study were 23 undergraduate students in their final year. Several usability issues were found, such as difficult and complex registration steps and requests for excessive data. As described in the study, focusing on only one user group, undergraduate students, was considered a limitation. Another limitation was that it only examined the website’s registration task.
De Andrade Soares et al. [33] presented Br-GovQual, a user-perception service appraisal model. The model explored the relationship between a public service, its user, and how the service is provided. Additionally, their model was proven statistically using four public services of Brazil. The model development process utilized quantitative descriptive research in addition to statistical and semantic analysis, and contained four stages: preliminary instrument development, data collection, statistical analysis, and grounded theory. To assess perceived quality, a self-administered questionnaire covering five psychometric components was used, allowing evaluation of the users’ perception of service quality. The percentage of participants differed for each service, which was considered a limitation since it makes it unacceptable to discuss results independently.
Nwasra et al. [34] proposed a conceptual framework that assessed the QinU aspect of websites based on ISO/IEC 25010 and ISO/IEC 25012. The framework explains “the procedural flow between different stakeholders (decision-maker, evaluator, developer, and end-user).” Additionally, the model employs the Quality-in-Use Evaluation Model (QinUEM) to assess QinU attributes.
Acharya and Sinha [35] developed a Mobile Learning framework, recommended a set of metrics to measure quality characteristics, and evaluated the software system based on the ISO/IEC 25010. They also demonstrated the result of applying the framework.
Nakai et al. [36] suggested a quality assessment framework based on the ISO/IEC 25010. The framework included feasible metrics, comprising 47 quality metrics and 18 QinU metrics. Additionally, a guideline to implement the framework was defined, and a case study to prove the framework’s validity was shown.
Herrera et al. [37] specified the QinU model for web portals based on associated literature and the ISO/IEC 25010. Since they defined the QinU as “the perception of the quality provided by the final user,” the users’ perspective of the web portal was considered. This allowed evaluation of the quality of the web portal. Their main focus was in assessing user satisfaction, user effectiveness, and user efficiency while using the web portal.
2.3. Evaluation of E-Government Software
To evaluate the success or failure of a software product, researchers and practitioners agree that quality is one of the most important metrics [7] [38]-[41]. Like other products, software quality may be defined in several ways. This is due to the different expectations that stakeholders have [14].
Ziemba et al. [21] stated that for e-government portals to be successfully adopted, they must be of high quality. Therefore, they proposed a framework and used it to evaluate the quality of three Polish e-government portals. The framework utilizes the ISO/IEC 25010 standard, specifically the product quality model. As described by the authors, the small sample size was considered a limitation of their work, even though it did not restrict data analysis. Additionally, the framework was only tested and verified by employees of e-government portals.
Fath-Allah et al. published several studies related to e-government quality [42]-[45]. Their work reviewed the literature to compare and analyze the suggested quality models of e-government. The study aimed to recommend guidelines that aid in constructing a quality model for e-government portals. Quality models were classified into two categories: ISO-based and non-ISO-based models. In addition, they explored best practices in the literature and the industry, and categorized the collected results into three groups: back-end, front-end, and external [43]. Moreover, they proposed an e-government portal quality framework based on the ISO/IEC 25010 and the government portal best practices model, where a mapping between characteristics of the product quality model in the ISO/IEC 25010 and the government portal best practice model was performed. Because only a few of the government portal best practices could be mapped to the characteristics and sub-characteristics of the ISO/IEC 25010 product quality model, they introduced new characteristics and sub-characteristics and came up with a quality framework that contained five quality models: back-end, front-end web design, front-end content, external, and service. Each model has its own characteristics and sub-characteristics, which are associated with best practice subcategories [44]. Furthermore, they presented a comprehensive view of the e-government portal quality framework and provided measures for its characteristics and sub-characteristics.
Santa et al. [46] focused on e-government services provided to businesses in Saudi Arabia, where they explored the direct and indirect effects of trust in online services on user satisfaction, effectiveness of e-government systems, and operational effectiveness. The research findings showed that operational effectiveness and measures of system effectiveness (e.g., service quality, system quality, and information quality) mediate trust’s effect on user satisfaction. In addition, user satisfaction is mainly driven by information quality and operational effectiveness. Moreover, a key finding of their work is the negative relationship between service quality and trust in online services.
Saudi Arabia’s e-government quality has been researched from several perspectives. Alharbi et al. [47] focused on the success of mobile government solutions in Saudi Arabia, especially the factors that promote citizens’ adoption of such solutions. Hussain [48] investigated influencing factors of e-government adoption in Saudi Arabia. Aloboud et al. [49] investigated the usability and accessibility of e-government websites, using Nielsen’s 10 usability heuristics for the former and the Web Content Accessibility Guidelines for the latter; several recommendations were provided to overcome accessibility and usability problems. Similarly, Al-Sakran and Alsudairi [50] analyzed mobile website accessibility and usability of several public sectors. Almukhlifi et al. [51] investigated the moderating effect of Wasta (a Saudi Arabian cultural practice) on the adoption of e-government based on citizens’ perspective.
The quality of e-government solutions and services is known to be important, and such importance stems from the need to assure that the developed software solutions and services will work as expected when needed. Despite the research, studies, and investigations listed above, there still exists the need for an overall model that enables assessing the quality of e-government services and solutions from the users’ perspective based on their satisfaction. It is true that actual quality may not be perfectly measured using user satisfaction. However, when it comes to assessing products and services, user satisfaction is a major factor on its own and an integral component of the overall picture [52].
3. Proposed Model
Figure 1. The proposed model.
The proposed model enables studying and evaluating the quality of e-government services based on users’ expectations of four aspects that contribute to quality. The model, developed entirely by the author, utilizes a questionnaire, a scoring mechanism, and an evaluation scheme, as depicted in Figure 1.
As explained previously, developing a model is a two-step process: 1) constructing a conceptual meaning and 2) structuring and contextualizing theory. Next, the process of developing the model is explained and illustrated, and a description of the model’s main components is provided.
3.1. Conceptual Meaning Construction
As mentioned previously, the ISO/IEC 25010 standard is concerned with ensuring the quality of software solutions and services through the product quality model and the QinU model. The product quality model is concerned with the static properties of the software and the dynamic properties of the system, and the QinU model is concerned with the interaction between the software and its users when used in a specific context. Since the proposed model is aimed at evaluating and assessing e-government solutions and services from the users’ perspective, it is designed by utilizing the QinU model of the ISO/IEC 25010 and based on four aspects of users’ expectations of quality: general satisfaction, feature satisfaction, trust, and pleasure. Combined, these aspects give a clear perspective on the quality of e-government services and solutions. The QinU model of the ISO/IEC 25010 has been used and adapted by researchers and practitioners in several areas to guarantee software quality [53]-[55].
The questionnaire used in the proposed model is built on the characteristics and sub-characteristics of the QinU model that use questionnaires as the data collection method. According to ISO/IEC 25022: Measurement of quality in use, the sub-characteristics that use a questionnaire as a data collection method are found in the Satisfaction characteristic. The comfort sub-characteristic is excluded and not measured in the proposed model, due to its focus on physical comfort, which is out of this study’s scope. Other measures were also excluded from the proposed model because they do not use a questionnaire as a data collection method. Table 1 displays the characteristics and sub-characteristics of the QinU model utilized in the proposed model. Furthermore, Table 2 shows the name and description of the subset of the ISO/IEC 25022 Satisfaction characteristic measures utilized in the current study.
Table 1. Quality in use model characteristics and sub-characteristics utilized in the proposed model.

| Characteristic | Sub-Characteristic |
|----------------|--------------------|
| Satisfaction | Usefulness |
| | Trust |
| | Pleasure (user experience) |
Table 2. Name and description of a subset of the ISO/IEC 25022 Satisfaction characteristic measures.

| Sub-characteristic | Name | Description |
|--------------------|------|-------------|
| General | Overall satisfaction | The overall satisfaction of the user. |
| Usefulness | Satisfaction with features | The satisfaction of the user with specific system features. |
| Trust | User trust | The extent to which the user trusts the system. |
| Pleasure | User pleasure | The extent to which the user obtains pleasure compared to the average for this type of system. |
It is important to note that within the realm of e-government, the sub-characteristic “Pleasure” refers to the degree to which users experience a favorable emotional response while interacting with the service. This includes elements such as visual appeal, intuitive design, and overall satisfaction derived from a seamless and captivating user interface [56].
3.2. Model Structuring and Contextualizing
The model utilizes a questionnaire, a scoring mechanism, and an evaluation scheme. Questionnaire questions are drawn up and developed while considering the QinU model of the ISO/IEC 25010 using measures suggested and provided in the ISO/IEC 25022. The scoring mechanism developed by the author as part of this model is based on aggregation, like the approach described in [5], where values are assigned to responses of each questionnaire question, the question score is calculated, the quality aspect score is calculated, and then the quality evaluation score is determined by summing all of the quality aspect scores. As for the evaluation scheme, score ranges are shown to better explain accumulated scores for both the overall quality and each quality aspect individually.
3.2.1. The Questionnaire
The questionnaire used in the model contains 22 closed-ended questions. These questions are distributed among three major sections: basic information, e-government service and beneficiary needs, and satisfaction with e-government service features. In addition, one question was used to validate participant responses and disqualify inadequate ones. Each of the above-mentioned aspects that contribute to quality (general satisfaction, satisfaction with features, trust, and pleasure) has its own set of questions in the questionnaire, as explained in [19]. A copy of the questionnaire is shown in the Appendix.
3.2.2. The Scoring Mechanism
The Partial Credit Score (PCS) was used to calculate the question score. This approach is widely used in educational research to capture examinees’ level of knowledge [57] [58]. It is also used by the Care Quality Commission (CQC) [59] of the National Health Service’s (NHS) Patient Survey Program to generate patient feedback on the level of quality of care received from NHS organizations [60]. PCS is implemented by assigning values to each response to allow calculation of a single average response for a multiple-choice question. This enables the derivation of links and comparisons over distinct groups.
Treating ordinal data that are converted to numbers as interval data and then applying a parametric test has been a controversial issue in some fields, such as medicine [61]. Norman [62] elaborated that it can be done because parametric tests are robust to such treatment. Work done by Sizmur [63] analyzed three different scoring approaches: the problem score, the PCS, and the bottom box. The results show that the PCS delivered high levels of reliability. The work concluded that “both the Picker problem score and the partial credit scoring produced similar levels of trust-level reliability and would therefore be similarly capable of discriminating between providers. The bottom box scoring did not generally distinguish between trusts and performed rather less well”.
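To make the PCS idea concrete, the toy sketch below (with hypothetical response labels and values, not taken from the paper) shows ordinal responses being mapped to numbers and summarized as a single average:

```python
# Toy partial credit scoring: the most favorable answer gets the top value,
# and ordinal responses are averaged as if they were interval data.
pcs = {"Yes": 3, "To some extent": 2, "No": 1}
responses = ["Yes", "Yes", "To some extent", "No", "Yes"]

average = sum(pcs[r] for r in responses) / len(responses)
print(average)  # 2.4 -- a single summary score for the question
```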
In the scoring mechanism of the proposed model, responses to each question in the questionnaire are assigned a PCS. Tables 3-6 show the aspect percentage value (APv), the questions, the question percentage value (Q%), the question max value (Q Max), the sum of question max values related to the quality aspect (ΣQ Max), the question responses (QR), and the partial credit score (PCS) for the quality aspects General Satisfaction, Feature Satisfaction, Trust, and Pleasure, respectively. In addition, the following assumptions are made:
1) Assignment of the PCS is based on the best or most favorable answer for that specific question. Likert [64] stated that every viable response needs to be assigned a numerical value for the objective of scoring or tabulation. In addition, the following explanation was provided to refute the claim that favorable and unfavorable are vague: “So far as the measurement of the attitude is concerned, it is quite immaterial what the extremes of the attitude continuum are called; the important fact is that persons do differ quantitatively in their attitudes, some being more toward one extreme, some more toward the other. Thus, as Thurstone has pointed out in the use of his scale, it makes no difference whether the zero extreme is assigned to appreciation of the church or depreciation of the church, the attitude can be measured in either case and a person’s reaction to the church expressed”.
2) Each question score contributes evenly to the quality aspect it belongs to; hence, all questions that belong to a quality aspect have the same percentage value.
3) Each quality aspect contributes equally (25%) to the overall e-government score, thereby preserving neutrality and preventing bias toward any specific dimension. This approach guarantees a balanced evaluation framework, particularly in the absence of empirical evidence supporting the superiority of one aspect over others in influencing user satisfaction.
Table 3. Aspect percentage value, questions, responses, and PCS for the general satisfaction aspect.

| APv | ΣQ Max | Question | Q% | Q Max | QR | PCS |
|-----|--------|----------|----|-------|----|-----|
| 25% | 14 | 6 | 5% | 3 | 1 | 3 |
| | | | | | 2 | 2 |
| | | | | | 3 | 1 |
| | | 13 | 5% | 2 | 1 | 2 |
| | | | | | 2 | 1 |
| | | 15 | 5% | 3 | 1 | 3 |
| | | | | | 2 | 2 |
| | | | | | 3 | 1 |
| | | 16 | 5% | 3 | 1 | 3 |
| | | | | | 2 | 2 |
| | | | | | 3 | 1 |
| | | 17 | 5% | 3 | 1 | 3 |
| | | | | | 2 | 2 |
| | | | | | 3 | 1 |
Furthermore, the scoring mechanism is mainly concerned with calculating the following variables:
1. The question score Qs: first the score for each response (Res) is calculated through multiplying its PCS with the number of participants who chose it. Next,
Table 4. Aspect percentage value, questions, responses, and PCS for the feature satisfaction aspect.

| APv | ΣQ Max | Question | Q% | Q Max | QR | PCS |
|-----|--------|----------|----|-------|----|-----|
| 25% | 19 | 8 | 3.125% | 2 | 1 | 2 |
| | | | | | 2 | 1 |
| | | 9 | 3.125% | 2 | 1 | 1 |
| | | | | | 2 | 2 |
| | | 10 | 3.125% | 2 | 1 | 1 |
| | | | | | 2 | 2 |
| | | 11 | 3.125% | 2 | 1 | 2 |
| | | | | | 2 | 1 |
| | | 12 | 3.125% | 2 | 1 | 2 |
| | | | | | 2 | 1 |
| | | 18 | 3.125% | 3 | 1 | 3 |
| | | | | | 2 | 2 |
| | | | | | 3 | 1 |
| | | 19 | 3.125% | 3 | 1 | 3 |
| | | | | | 2 | 2 |
| | | | | | 3 | 1 |
| | | 20 | 3.125% | 3 | 1 | 3 |
| | | | | | 2 | 2 |
| | | | | | 3 | 1 |
the $Q_s$ is calculated by aggregating all of its response scores and dividing the result by the total number of participants $P_c$, using Equation (1):

$$Q_s = \frac{\sum Res}{P_c} \qquad (1)$$

where:

$Q_s$ is the question score,
$Res$ is a question’s response score, and
$P_c$ is the total number of participants.
2. The question value $Q_v$: as shown in Tables 3-6, each question is assigned a max value and a percentage. The $Q_v$ is calculated by dividing the question’s score $Q_s$ by its max value $Q_{Max}$ and multiplying the result by the question’s percentage $Q\%$, using Equation (2):

$$Q_v = \frac{Q_s}{Q_{Max}} \times Q\% \qquad (2)$$

where:

$Q_v$ is the question’s value,
$Q_s$ is the question’s score,
$Q_{Max}$ is the question’s max value, and
$Q\%$ is the question’s percentage.
Table 5. Aspect percentage value, questions, responses, and PCS for the trust aspect.

| APv | ΣQ Max | Question | Q% | Q Max | QR | PCS |
|-----|--------|----------|----|-------|----|-----|
| 25% | 12 | 21(a) | 4.167% | 2 | 1 | 2 |
| | | | | | 2 | 1 |
| | | 21(b) | 4.167% | 2 | 1 | 2 |
| | | | | | 2 | 1 |
| | | 21(c) | 4.167% | 2 | 1 | 2 |
| | | | | | 2 | 1 |
| | | 21(d) | 4.167% | 2 | 1 | 2 |
| | | | | | 2 | 1 |
| | | 21(e) | 4.167% | 2 | 1 | 2 |
| | | | | | 2 | 1 |
| | | 21(f) | 4.167% | 2 | 1 | 2 |
| | | | | | 2 | 1 |
3. The quality aspect value $QA_v$: as mentioned previously, each quality aspect has its own set of corresponding questions in the questionnaire. The $QA_v$ of any quality aspect is calculated by aggregating the question values $Q_v$ of all questions belonging to that quality aspect, using Equation (3):

$$QA_v = \sum Q_v \qquad (3)$$

where:

$QA_v$ is the quality aspect value, and
$Q_v$ is a question’s value.
4. The quality aspect score $QA_s$ is the accumulated score of the quality aspect. The score of any quality aspect is calculated by dividing its value $QA_v$ by its percentage value $AP_v$, using Equation (4):

$$QA_s = \frac{QA_v}{AP_v} \qquad (4)$$

where:

$QA_v$ is the value of the quality aspect, and
$AP_v$ is the percentage value of the quality aspect.
5. The overall e-government score $OV_s$ is the overall score of the e-government solutions and services. $OV_s$ is obtained by aggregating the values of all quality aspects $QA_v$, using Equation (5):

$$OV_s = \sum QA_v \qquad (5)$$

where:

$OV_s$ is the overall score of the e-government, and
$QA_v$ is the value of a quality aspect.
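The paper does not provide an implementation of this scoring mechanism; the following Python sketch is one possible encoding of Equations (1)-(5). It assumes that a question’s max value equals its highest PCS (consistent with Tables 3-6), and the question numbers, PCS maps, and response counts in the usage example are taken from Tables 3 and 8 for illustration only.

```python
# A minimal sketch of the scoring mechanism in Equations (1)-(5).
# Assumption: Q Max equals the highest PCS of a question (as in Tables 3-6).

def question_score(counts, pcs, participants):
    """Equation (1): Qs = (sum of response scores Res) / Pc."""
    return sum(counts[r] * pcs[r] for r in counts) / participants

def question_value(q_score, q_max, q_pct):
    """Equation (2): Qv = (Qs / Q Max) * Q%."""
    return q_score / q_max * q_pct

def aspect_value(questions, answers, participants):
    """Equation (3): QAv = sum of Qv over the aspect's questions."""
    total = 0.0
    for q, (pcs, q_pct) in questions.items():
        qs = question_score(answers[q], pcs, participants)
        total += question_value(qs, max(pcs.values()), q_pct)
    return total

def aspect_score(qa_value, ap_value=25.0):
    """Equation (4): QAs = QAv / APv, expressed as a percentage."""
    return qa_value / ap_value * 100.0

def overall_score(qa_values):
    """Equation (5): OVs = sum of all quality aspect values QAv."""
    return sum(qa_values)

# Usage example with two general satisfaction questions (Tables 3 and 8):
# question -> (PCS per response, Q%), and question -> {response: count}.
general_satisfaction = {6: ({1: 3, 2: 2, 3: 1}, 5.0),
                        13: ({1: 2, 2: 1}, 5.0)}
answers = {6: {1: 238, 2: 37, 3: 1}, 13: {1: 249, 2: 27}}

qa = aspect_value(general_satisfaction, answers, participants=276)
print(round(qa, 2))  # 9.52, i.e. the 4.76% + 4.76% reported in Table 8
```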
Table 6. Aspect percentage value, questions, responses, and PCS for the pleasure aspect.

| APv | ΣQ Max | Question | Q% | Q Max | QR | PCS |
|-----|--------|----------|----|-------|----|-----|
| 25% | 24 | 22(a) | 3.125% | 3 | 1 | 1 |
| | | | | | 2 | 2 |
| | | | | | 3 | 3 |
| | | 22(b) | 3.125% | 3 | 1 | 1 |
| | | | | | 2 | 2 |
| | | | | | 3 | 3 |
| | | 22(c) | 3.125% | 3 | 1 | 1 |
| | | | | | 2 | 2 |
| | | | | | 3 | 3 |
| | | 22(d) | 3.125% | 3 | 1 | 3 |
| | | | | | 2 | 2 |
| | | | | | 3 | 1 |
| | | 22(e) | 3.125% | 3 | 1 | 3 |
| | | | | | 2 | 2 |
| | | | | | 3 | 1 |
| | | 22(f) | 3.125% | 3 | 1 | 3 |
| | | | | | 2 | 2 |
| | | | | | 3 | 1 |
| | | 22(g) | 3.125% | 3 | 1 | 3 |
| | | | | | 2 | 2 |
| | | | | | 3 | 1 |
| | | 22(h) | 3.125% | 3 | 1 | 3 |
| | | | | | 2 | 2 |
| | | | | | 3 | 1 |
3.2.3. The Evaluation Scheme
The proposed model provides two types of scores that can be evaluated: the individual quality aspect score and the overall e-government service score. It is also possible to evaluate the score of each question individually; however, this may lead to incorrect interpretations, so it was excluded from the model’s score types. Both scores may be evaluated and/or interpreted using the score range descriptions provided in Table 7. It is worth noting that the score range values were sent to five domain experts to assure their applicability and validity.
Table 7. Score range evaluation.

| Score range | Evaluation and/or Interpretation |
|-------------|----------------------------------|
| 90% to 100% | Superior |
| 80% to less than 90% | Above Average |
| 70% to less than 80% | Average |
| 60% to less than 70% | Below Average |
| Below 60% | Inferior |
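For implementation purposes, Table 7 can be encoded as a simple threshold function; the sketch below assumes each range includes its lower bound, matching the wording “80% to less than 90%”:

```python
# Maps a percentage score to its Table 7 evaluation; lower bounds inclusive.
def evaluate(score_pct: float) -> str:
    if score_pct >= 90.0:
        return "Superior"
    if score_pct >= 80.0:
        return "Above Average"
    if score_pct >= 70.0:
        return "Average"
    if score_pct >= 60.0:
        return "Below Average"
    return "Inferior"

print(evaluate(91.64))  # Superior
print(evaluate(86.94))  # Above Average
```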
4. Model Implementation
An implementation of the model was performed on e-government solutions and services in Saudi Arabia to prove its applicability. The details are described below.
4.1. Questionnaire
According to the General Authority for Statistics [65], the number of people with access to e-government services in Saudi Arabia is slightly below 23 million, which constitutes the questionnaire’s population size. Since this population is large, simple random sampling was used, and the required sample size for this study is 385 participants.
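The derivation of the 385 figure is not shown here; it is consistent with Cochran’s sample size formula for large populations, assuming a 95% confidence level ($z = 1.96$), a 5% margin of error ($e = 0.05$), and maximum variability ($p = 0.5$):

$$n = \frac{z^2 \, p(1-p)}{e^2} = \frac{(1.96)^2 (0.5)(0.5)}{(0.05)^2} \approx 384.16 \approx 385$$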
Invitations to the aforementioned questionnaire were distributed to 500 participants; 402 responses were collected, of which only 276 were considered valid. The remaining 126 responses were disqualified because they were incomplete or because the provided responses were inconsistent or contradictory.
Table 8. Quality aspect scores for the general satisfaction aspect.

| Q | QR | #Participants | PCS | Res | Qs | Q Max | Q% | Qv | QAv |
|---|----|---------------|-----|-----|----|-------|----|----|-----|
| 6 | 1 | 238 | 3 | 714 | 2.86 | 3 | 5% | 4.76% | 22.92% |
| | 2 | 37 | 2 | 74 | | | | | |
| | 3 | 1 | 1 | 1 | | | | | |
| 13 | 1 | 249 | 2 | 498 | 1.90 | 2 | 5% | 4.76% | |
| | 2 | 27 | 1 | 27 | | | | | |
| 15 | 1 | 238 | 3 | 714 | 2.84 | 3 | 5% | 4.73% | |
| | 2 | 31 | 2 | 62 | | | | | |
| | 3 | 7 | 1 | 7 | | | | | |
| 16 | 1 | 178 | 3 | 534 | 2.62 | 3 | 5% | 4.37% | |
| | 2 | 91 | 2 | 182 | | | | | |
| | 3 | 7 | 1 | 7 | | | | | |
| 17 | 1 | 173 | 3 | 519 | 2.58 | 3 | 5% | 4.30% | |
| | 2 | 90 | 2 | 180 | | | | | |
| | 3 | 13 | 1 | 13 | | | | | |
4.2. Score
Based on the proposed model, the scoring mechanism is applied to the questionnaire results. Tables 8-11 show, for each quality aspect, the questions (Q), the question responses (QR), the number of participants who chose each response (#Participants), the partial credit score of the response (PCS), the response score (Res), the question score ($Q_s$), the question max value ($Q_{Max}$), the question percentage (Q%), the question value ($Q_v$), and the quality aspect value ($QA_v$).
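As a worked check, applying Equations (1) and (2) to the first row of Table 8 (question 6, with 276 valid participants) reproduces the reported values:

$$Q_s = \frac{238 \times 3 + 37 \times 2 + 1 \times 1}{276} = \frac{789}{276} \approx 2.86, \qquad Q_v = \frac{2.86}{3} \times 5\% \approx 4.76\%$$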
4.3. Evaluation
In compliance with the proposed model, there are two types of scores generated, the individual quality aspect score, and the overall e-government score. The evaluation of both are shown in Table 12, where it appears that all quality aspects of the e-government services are rated as Superior except for the Pleasure quality aspect, which was evaluated as Above Average. In addition, the overall score of e-government services (
) is 91.64%, this leads to evaluation of the overall e-government services in Saudi Arabia as Superior.
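Per Equation (5), this overall score is the sum of the four quality aspect values reported in Table 12:

$$OV_s = 22.91\% + 22.65\% + 24.34\% + 21.74\% = 91.64\%$$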
Table 9. Quality aspect scores for the feature satisfaction aspect.

| Q | QR | #Participants | PCS | Res | Qs | Q Max | Q% | Qv | QAv |
|---|----|---------------|-----|-----|----|-------|----|----|-----|
| 8 | 1 | 265 | 2 | 530 | 1.96 | 2 | 3.125% | 3.06% | 22.65% |
| | 2 | 11 | 1 | 11 | | | | | |
| 9 | 1 | 38 | 1 | 38 | 1.86 | 2 | 3.125% | 2.91% | |
| | 2 | 238 | 2 | 476 | | | | | |
| 10 | 1 | 58 | 1 | 58 | 1.79 | 2 | 3.125% | 2.80% | |
| | 2 | 218 | 2 | 436 | | | | | |
| 11 | 1 | 201 | 2 | 402 | 1.73 | 2 | 3.125% | 2.70% | |
| | 2 | 75 | 1 | 75 | | | | | |
| 12 | 1 | 231 | 2 | 462 | 1.84 | 2 | 3.125% | 2.87% | |
| | 2 | 45 | 1 | 45 | | | | | |
| 18 | 1 | 181 | 3 | 543 | 2.63 | 3 | 3.125% | 2.74% | |
| | 2 | 87 | 2 | 174 | | | | | |
| | 3 | 8 | 1 | 8 | | | | | |
| 19 | 1 | 185 | 3 | 555 | 2.61 | 3 | 3.125% | 2.72% | |
| | 2 | 74 | 2 | 148 | | | | | |
| | 3 | 17 | 1 | 17 | | | | | |
| 20 | 1 | 214 | 3 | 642 | 2.74 | 3 | 3.125% | 2.85% | |
| | 2 | 52 | 2 | 104 | | | | | |
| | 3 | 10 | 1 | 10 | | | | | |
5. Conclusions
Measuring, evaluating, and understanding the quality of e-government solutions and services is important. However, it has not been sufficiently studied from the users’ perspective. In addition, assessing the quality of e-government services and solutions using current models requires accessing data and information from their providers, which may be difficult or troublesome. Therefore, in this work, a model was developed to bridge the gap between measuring and evaluating quality on one hand, and not being able to access important data on the other. The proposed model enables understanding of the quality of e-government solutions and services, and clarifies the nature of e-government usage in accordance with the ISO/IEC 25010. Moreover, the proposed model partially utilizes ISO/IEC 25010 characteristics, sub-characteristics, and measures. Furthermore, it shows the effect of e-government software quality on users’ usage and the fulfillment of their requirements and needs.
The proposed model focuses on four aspects of users’ expectations of quality: general satisfaction, satisfaction with features, trust, and pleasure. Collectively, these aspects draw a clear view of the quality level of e-government services and solutions. Furthermore, the proposed model incorporates a questionnaire, a scoring mechanism, and an evaluation scheme. Finally, the evaluation scheme provides an explanation of accumulated scores for both the overall quality and each quality aspect individually.
Table 10. Quality aspect scores for the trust aspect.

| Q | QR | #Participants | PCS | Res | Qs | Q Max | Q% | Qv | QAv |
|---|----|---------------|-----|-----|----|-------|----|----|-----|
| 21(a) | 1 | 269 | 2 | 538 | 1.97 | 2 | 4.167% | 4.11% | 24.34% |
| | 2 | 7 | 1 | 7 | | | | | |
| 21(b) | 1 | 246 | 2 | 492 | 1.89 | 2 | 4.167% | 3.94% | |
| | 2 | 30 | 1 | 30 | | | | | |
| 21(c) | 1 | 262 | 2 | 524 | 1.95 | 2 | 4.167% | 4.06% | |
| | 2 | 14 | 1 | 14 | | | | | |
| 21(d) | 1 | 267 | 2 | 534 | 1.97 | 2 | 4.167% | 4.10% | |
| | 2 | 9 | 1 | 9 | | | | | |
| 21(e) | 1 | 260 | 2 | 520 | 1.94 | 2 | 4.167% | 4.05% | |
| | 2 | 16 | 1 | 16 | | | | | |
| 21(f) | 1 | 264 | 2 | 528 | 1.96 | 2 | 4.167% | 4.08% | |
| | 2 | 12 | 1 | 12 | | | | | |
5.1. Model Implementation Results
To prove the model’s applicability, it was implemented, and the quality of Saudi Arabian e-government services and solutions was evaluated and measured. The analysis showed that of the four quality aspects evaluated, only the “pleasure” aspect was evaluated as Above Average, meaning its score ranged from 80% to below 90%, while the remaining three (general satisfaction, satisfaction with features, and trust) were evaluated as Superior, meaning their scores totaled 90% and above. Furthermore, e-government services and solutions overall scored 91.64%, which is considered Superior. These results align with findings reported in the United Nations E-Government Development Index published in 2024.
5.2. Model Implications
Implications of the proposed model are twofold: practical and theoretical. In the former, it is a step forward towards measuring the quality of e-government solutions and services without the need to acquire back-end data and/or information. This is beneficial for individuals and entities that need to evaluate the quality of e-government solutions and services and do not have access to such data, such as government monitoring agencies and researchers. In addition, it allows for the acquisition of a holistic view of the level of quality of any e-government solution or service. Moreover, it is easy and fast to implement. Furthermore, should a government entity detect a low score in any aspect, it could respond by improving that aspect. For example, if the “Trust” aspect had a low score, the government entity might act by improving transparency through more explicit communication of data privacy policies, implementing secure login mechanisms, or offering visible indicators of official verification to instill confidence in users about the credibility and security of the service. As for the latter, work done in this study opens the door for researchers and practitioners to further develop quality models that focus on the beneficiaries’ perspectives while aiding service providers to fulfill their obligations. Furthermore, researchers and practitioners may elect to enhance the model and incorporate other attributes, or adapt it to better suit other environments or cultures.
Table 11. Quality aspect scores for the pleasure aspect.

| Q | QR | #Participants | PCS | Res | Qs | Q Max | Q% | Qv | QAv |
|---|----|---------------|-----|-----|----|-------|----|----|-----|
| 22(a) | 1 | 26 | 1 | 26 | 2.27 | 3 | 3.125% | 2.37% | 21.73% |
| | 2 | 146 | 2 | 292 | | | | | |
| | 3 | 103 | 3 | 309 | | | | | |
| 22(b) | 1 | 29 | 1 | 29 | 2.30 | 3 | 3.125% | 2.40% | |
| | 2 | 135 | 2 | 270 | | | | | |
| | 3 | 112 | 3 | 336 | | | | | |
| 22(c) | 1 | 27 | 1 | 27 | 2.55 | 3 | 3.125% | 2.65% | |
| | 2 | 71 | 2 | 142 | | | | | |
| | 3 | 178 | 3 | 534 | | | | | |
| 22(d) | 1 | 215 | 3 | 645 | 2.74 | 3 | 3.125% | 2.85% | |
| | 2 | 50 | 2 | 100 | | | | | |
| | 3 | 11 | 1 | 11 | | | | | |
| 22(e) | 1 | 207 | 3 | 621 | 2.69 | 3 | 3.125% | 2.80% | |
| | 2 | 53 | 2 | 106 | | | | | |
| | 3 | 16 | 1 | 16 | | | | | |
| 22(f) | 1 | 235 | 3 | 705 | 2.82 | 3 | 3.125% | 2.93% | |
| | 2 | 31 | 2 | 62 | | | | | |
| | 3 | 10 | 1 | 10 | | | | | |
| 22(g) | 1 | 215 | 3 | 645 | 2.75 | 3 | 3.125% | 2.86% | |
| | 2 | 52 | 2 | 104 | | | | | |
| | 3 | 9 | 1 | 9 | | | | | |
| 22(h) | 1 | 219 | 3 | 657 | 2.75 | 3 | 3.125% | 2.87% | |
| | 2 | 46 | 2 | 92 | | | | | |
| | 3 | 11 | 1 | 11 | | | | | |
Table 12. Individual quality aspect value and overall e-government score.

| Quality Aspect | QAv | QA% | QAs | Evaluation | OVs |
|----------------|-----|-----|-----|------------|-----|
| General satisfaction | 22.91% | 25% | 91.65% | Superior | 91.64% (Superior) |
| Satisfaction with features | 22.65% | 25% | 90.59% | Superior | |
| Trust | 24.34% | 25% | 97.36% | Superior | |
| Pleasure | 21.74% | 25% | 86.94% | Above Average | |
5.3. Study Limitations
The proposed model has several limitations, one of which is that the maturity of the evaluated e-government solutions and services was not considered, evaluated, or assessed using the proposed model. Moreover, since the model was designed with the Saudi Arabian e-government in mind, there exists the possibility, even if with low probability, that it may not be applicable to other e-government solutions or services. Furthermore, the model was designed to evaluate e-government solutions and services; other software solutions and services were not considered. Finally, despite the intended sample size of 385, only 276 valid responses were collected; this deficiency may restrict the generalizability of the findings and must be considered when analyzing the results.
5.4. Future Work
Future work includes several directions. First, to develop a more extensive questionnaire that incorporates open-ended questions aimed at gathering more profound insights into users’ experience and perception. Second, to implement the model on the e-government solutions and services of other countries to assess its applicability. Third, to implement the model on software solutions and services other than those designed specifically for e-governments. Fourth, to investigate services and solutions according to their nature, whether Government-to-Consumer (G2C), Government-to-Government (G2G), or Government-to-Business (G2B). Finally, to focus on individually studying popular e-government services and solutions, such as “Absher”, to produce customized analyses and results.
Statements and Declarations
Disclosure Statement
The author certifies that there are no competing interests to declare, and no affiliations with or involvement in any organization or entity with any financial or non-financial interest in the subject matter or materials discussed in this manuscript.
Funding
The author declares that no funds, grants, or other support was received for this work.
Data Availability
The data that support the findings of this study are available from the corresponding author, upon reasonable request.
Notes on Contributor(s)
The author is responsible for the conceptualization of this research work. He wrote the main manuscript and reviewed the manuscript draft. Furthermore, the data used in this study are available in the article. In addition, detailed evaluation results are available from the corresponding author on reasonable request, after removing any sensitive information in accordance with data privacy laws in the Kingdom of Saudi Arabia.
Appendix. Questionnaire
Below is a copy of the questionnaire (including all possible responses) developed in [19] and utilized in this study.
Basic Information
1. What is your gender?
•Male
•Female
2. In which age category do you belong?
•Below 20
•From 20 to 29
•From 30 to 39
•From 40 to 49
•From 50 to 59
•60 years or older
3. Which of the following categories best describes your employment status?
•Student
•Part-time Employee
•Full-time Employee
•Looking for a job
•Retired
4. What is your level of education?
•Below high school
•High school
•Diploma
•Some college, but no degree
•Bachelor
•Masters
•PhD
E-Government Services and Beneficiary Needs
5. Do you use e-government?
•Yes (an extension to the question will be displayed)
•No
How often do you use e-government services:
•Daily
•Weekly
•Once a Month
•Every 2 - 3 Months
•2 - 3 Times a Month
6. Do e-government services fulfill your requests?
•Yes
•To some extent
•No
7. Which e-government service do you use the most?
•Textual feedback
8. Was the service you requested fulfilled?
•Yes
•No
9. Does service execution consume more time than expected?
•Yes
•No
10. Does the service require unnecessary information or steps?
•Yes
•No
11. Does the requested service contain an explanation about how it works?
•Yes
•No
12. Is the service provided across multiple (different) devices?
•Yes
•No
13. Is the service completed with ease?
•Yes
•No (an extension to the question will be displayed)
Please select the cause of difficulty (you can select more than one).
•The website was not visually appealing.
•I encountered difficulties while using the service.
•The services were not completed quickly.
•The service cannot be used at any time.
•The pages did not load properly.
•The information provided is not sufficient to meet my needs.
•The information regarding the service was not relevant to me.
•Service information is not clear and easy to understand.
14. Do you complete e-government services on your own?
•Yes
•No (an extension to the question will be displayed)
To complete a service I get help from:
•A specialized office
•Family member
•A friend
•Other
Satisfaction with the Features of E-Government Services
15. The online service was more convenient than the one in person.
•Yes
•Sometimes
•No
16. To what extent are you satisfied with e-government services?
•Satisfied
•To some extent
•Not satisfied
17. To what extent are you satisfied with the e-government service’s ease of use?
•Satisfied
•To some extent
•Not satisfied
18. To what extent are you satisfied with the look and feel of e-government services?
•Satisfied
•To some extent
•Not satisfied
19. Is the registration process of e-government services easy?
•Yes
•To some extent
•No
20. Is signing into an e-government service seamless?
•Yes
•To some extent
•No
21. Do you agree with the following statements:
(a) The e-government service is trustworthy.
•Yes
•No
(b) I am not suspicious of the e-government service’s results.
•Yes
•No
(c) The e-government service’s actions do not cause harm.
•Yes
•No
(d) I am confident in the e-government service.
•Yes
•No
(e) I am familiar with the e-government service.
•Yes
•No
(f) The e-government service is completed successfully.
•Yes
•No
22. Generally, when signing into an e-government service, indicate your feelings towards each word or term:
(a) Excited
•Not at all
•Moderately
•Completely
(b) Enthusiastic
•Not at all
•Moderately
•Completely
(c) Proud
•Not at all
•Moderately
•Completely
(d) Distressed
•Not at all
•Moderately
•Completely
(e) Careless
•Not at all
•Moderately
•Completely
(f) Ashamed
•Not at all
•Moderately
•Completely
(g) Jittery
•Not at all
•Moderately
•Completely
(h) Afraid
•Not at all
•Moderately
•Completely