A Transitional QoE Model Framework for 5G and beyond Networks
Vinaya Surya
IP Networks, Nokia, Sunnyvale, USA.
DOI: 10.4236/jcc.2025.139002

Abstract

The primary objective of this study is to propose and evaluate a multi-component Quality of Experience (QoE) transition model that integrates network-level impairments, application-level performance metrics, and a time-dependent transition factor to provide a comprehensive framework for calculating QoE during the evolution from 5G to 6G. The framework combines 6G and 5G quality scores through a time-dependent transition function. The 6G component incorporates four sub-models: an enhanced Simplified E-model for network impairments, an AI-optimized Analytic Hierarchy Process model, application-specific quality metrics, and context-aware performance indicators. These are integrated through dynamic weights that adapt throughout the transition period, with 5G QoE serving as the baseline for comparison. Experiments using a synthetic video-streaming dataset demonstrate a significant QoE improvement of 35.6% when component interactions are factored in. The adaptive transition function is found to be superior to the Sigmoid and other alternatives, as it more accurately models real-world patterns during the deployment phase. Statistical validation of the model yielded a simple regression equation with an R2 of 0.963, providing a practical implementation guideline for network operators. Unlike previous works, which rely exclusively on subjective user evaluations or are limited to QoS-to-QoE mappings, this paper offers a scalable, adaptable, and hybrid approach that reflects both technical service delivery and user-level experience. Future work will focus on incorporating real user feedback, complex datasets involving multiple services, and 6G-specific applications.

Share and Cite:

Surya, V. (2025) A Transitional QoE Model Framework for 5G and beyond Networks. Journal of Computer and Communications, 13, 13-41. doi: 10.4236/jcc.2025.139002.

1. Introduction

In the rapidly evolving landscape of wireless communications, 6G networks are increasingly emerging as the next major paradigm shift, poised to redefine both technological capabilities and user experiences. Although the global deployment of 5G is still in progress—with 135 active commercial 5G networks deployed worldwide and the majority located in Western Europe and the Asia-Pacific region—industry leaders, research institutions, and standardisation bodies have already turned their attention toward envisioning and architecting 6G. This early momentum for 6G is supported by ongoing rollouts of LTE (680 networks) and LTE-Advanced (338 networks), which remain foundational to mobile broadband infrastructure [1]. Notably, the early interest in 6G coincides with the maturing of 5G specifications, such as Releases 15 and 16 by 3GPP, with further enhancements expected in Release 17 and beyond [2].

Projections indicate that by 2025, 5G will account for approximately 20% of global mobile connectivity, a forecast that has accelerated investments not only in deploying 5G infrastructure but also in conceptualising its successor [1]. Companies like NTT Docomo, Samsung, and Ericsson have already published strategic white papers on their visions for 6G networks. Simultaneously, major research initiatives such as the University of Oulu’s “6G Flagship” project are driving academic and technical discussions, fostering collaborations among global researchers and practitioners. The scholarly attention to 6G is equally robust; as of this writing, a simple Google Scholar query for “6G (cellular OR wireless)” limited to the post-2019 period yields more than 17,000 entries, underscoring a growing consensus on the significance and transformative potential of 6G technologies [3].

Unlike 5G, which emphasises software, virtualisation, and higher bandwidth, 6G is expected to be characterised by autonomous intelligence—a transformative attribute that extends to both network operations and user interactions. It is projected that initial 6G deployments could materialise as early as 2028-2029, achieving ambitious targets such as data rates in the range of terabits per second (Tbps) and latency reductions to the microsecond level [4]. However, the most distinguishing feature is expected to be the pervasiveness of artificial intelligence (AI) across every layer of the network. AI is no longer a complementary function; in 6G, it is foundational. Building upon existing efforts in 5G, such as the Network Data Analytics Function (NWDAF) [5], 6G aims to introduce even deeper AI-native capabilities, enabling real-time learning, prediction, and decision-making at both the core and edge of the network.

This push towards AI-native networks is in direct response to the escalating complexity of managing mobile networks. Traditional algorithmic approaches to radio resource management (RRM) have struggled to cope with the demands of an increasingly heterogeneous and dynamic communication environment. As early as 3GPP Release 12, the limitations of static and rule-based resource allocation models became evident [6]. Since then, AI-driven approaches have been proposed to dynamically adapt resource scheduling based on user behaviour, mobility, application type, and prevailing network conditions. These developments are not merely enhancements but necessary evolutionary steps to manage future demands, particularly as the volume and diversity of connected devices expand dramatically. A high-level overview of the existing network in which 6G capabilities will be introduced is shown in Figure 1.

Figure 1. Network overview.

According to the GSMA, while growth in mobile subscribers may be stabilising (with a projected CAGR of 1.9% for 2019-2025), the total number of subscribers is expected to reach 5.8 billion by 2025 [7]. However, this figure pales in comparison to the anticipated explosion in connected devices. Forecasts suggest that by 2025, the number of Internet of Things (IoT) devices could reach 24.6 billion, fueling a complex landscape where data flows are vast, heterogeneous, and critically time-sensitive. To address these challenges, 5G introduced network paradigms such as massive Machine Type Communications (mMTC) and Ultra-Reliable Low-Latency Communications (URLLC). However, such capabilities fall short of the demands of future applications, which will require sub-millisecond latency, extreme reliability, and the contextual intelligence that 6G promises [8].

The move toward a more context-aware, AI-oriented, and highly distributed architecture in 6G also opens the door to new paradigms of user interaction. Service delivery is no longer generic; it is becoming personal. Unlike previous generations, which focused mainly on technical Key Performance Indicators (KPIs) such as throughput and latency, 6G draws renewed attention to Quality of Experience (QoE) as a user-centric performance paradigm that captures the subjective satisfaction and interaction efficacy of various services.

Although the innovations of the physical layer of 6G and architectural visions have been extensively discussed in the literature [5], a notable gap remains in academic and industrial discourse regarding the modelling, management, and optimisation of QoE. Such neglect is significant because 6G will enable high-immersion applications, such as holographic communication, real-time XR, and even brain-computer interfaces (BCIs). Cross-layer, end-to-end orchestration of network resources is necessary for these applications, not only to meet service requirements but also to fulfil the specific expectations of individual users. This complexity underscores the importance of research on harmonised Quality of Experience (QoE) and Quality of Service (QoS) frameworks that can facilitate intelligent service delivery in real-time.

QoE, initially defined as the overall acceptability of an application or service as perceived subjectively by the end-user, has undergone significant changes since its introduction in the telecommunications field [9]. Early QoE evaluation methods relied heavily on subjective metrics such as the Mean Opinion Score (MOS), which required end-users to rate services on a fixed scale. These scores were aggregated manually and served as proxies for overall service quality. Over time, the research grew more sophisticated through the development of objective mappings between network-level measures and user-perceived quality, including studies correlating jitter, packet loss, and latency with perceived user satisfaction in the real world.

Nonetheless, 6G presents additional challenges and opportunities for QoE. Consumers are no longer limited to being users of static services; they are now involved contributors to a cyber-physical space that is enhanced by near-real-time sensor data, augmented reality (AR), and touch interactions. The more immersive the scenario, the less adequate standard QoE measures, such as MOS or packet delay variation, may prove to be [10]. On the contrary, 6G requires the creation of new Quality of Experience (QoE) metrics, including Quality of Immersion (QoI), neurological response measurements, and synchronisation measures across the senses, which reflect the multi-sensory aspect of contemporary interactions. Moreover, QoE in 6G implies a substantial integration with AI systems that will be able to estimate user intention, emotion, and context. It is especially applicable to areas such as remote surgery, autonomous navigation, or XR-based education, as user satisfaction depends not only on low latency but also on the efficiency of task accomplishment, contextual adaptivity, and perceptual realism [11]. An interaction among QoS mechanisms, such as network slicing, dynamic spectrum allocation, and time-sensitive networking, along with AI-enhanced QoE monitoring, is, therefore, a significant aspect of creating a seamless and personalised service experience. Additionally, new opportunities to optimise QoE in real-time emerge through the architectural changes of 6G, which are moving toward layered intelligence across devices, edge, and cloud. Sensing and adaptation can be conducted locally on devices, while model personalisation through federated learning can be performed on edge servers, allowing for privacy preservation [12]. Globally, optimisation techniques can be executed on cloud platforms using deep learning and meta-learning methods. 
These developments enable the adoption of context-aware QoS policies that dynamically adapt to network loads, application preferences, and user requirements.

The primary objective of the proposed study is to investigate how the quality of experience (QoE) can be efficiently modelled and optimised in 6G networks by utilising quality of service (QoS) parameters. Since 6G systems are expected to support deeply immersive applications, latency-sensitive applications, and context-aware applications, there will be no single use of QoS to ensure user satisfaction. In this paper, this gap will be addressed by proposing an overall framework of modelling the transition from 5G to full 6G that interconnects important QoS metrics like latency, jitter, data rate and synchronisation precision with emerging QoE measures like AI-based AHP, application-specific and context parameters. These components are applied to a transition function with the current 5G technologies at baseline to calculate QoE at different stages of network evolution. The study also examines how AI-based mechanisms in 6G can be applied to adaptively deliver optimal QoE in real-time, even as network conditions change and applications vary in nature. By doing so, the study contributes toward the design of intelligent, user-centric service architectures for the next generation of mobile networks.
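As a concrete illustration of the transition idea described above, the blended score can be sketched as a convex combination of the 5G baseline and the 6G score, weighted by a time-dependent transition factor. The sigmoid form, parameter names, and sample values below are illustrative assumptions, not the paper's final adaptive function:

```python
import math

def sigmoid_transition(t, t_mid=0.5, steepness=10.0):
    """Illustrative time-dependent transition factor in [0, 1]:
    near 0 at the start of the 5G-to-6G migration, near 1 at full 6G."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - t_mid)))

def transitional_qoe(qoe_5g, qoe_6g, t):
    """Blend the 5G baseline score with the 6G score using the
    transition factor alpha(t); t is normalised deployment time."""
    alpha = sigmoid_transition(t)
    return alpha * qoe_6g + (1.0 - alpha) * qoe_5g

# Early in the transition the blended score stays near the 5G baseline;
# late in the transition it approaches the 6G score.
early = transitional_qoe(qoe_5g=3.5, qoe_6g=4.5, t=0.1)
late = transitional_qoe(qoe_5g=3.5, qoe_6g=4.5, t=0.9)
```

Replacing `sigmoid_transition` with another monotone function in [0, 1] yields the alternative transition shapes the paper compares.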

2. Literature Review

Quality of Experience (QoE) is a user-level metric whose evolution in mobile communication has drawn considerable research interest, especially as network services have changed. This shift has moved beyond the voice-centric services of 2G toward the highly immersive and interactive services envisioned for 6G. Initial research was primarily based on voice quality measures, using the Mean Opinion Score (MOS); later models expanded QoE to cover multimedia, mobility, and real-time responsiveness. With the advance of cellular networks to 4G and 5G, researchers shifted toward objective frameworks that relate QoE to objective measures of network performance, thereby reducing the disconnect between end-user perception and technical Quality of Service (QoS). Recent efforts focus on the use of artificial intelligence (AI), machine learning (ML), and edge computing to dynamically optimise QoE under varying network performance. This section provides an overview of these developments on a generation-by-generation basis, covering the important methodologies, models, and open issues in current attempts to operationalise QoE in next-generation wireless communication systems.

2.1. Evolution of QoS and QoE in Mobile Networks

The evolution of cellular networks from 1G to 5G has consistently strived to enhance user experience through improvements in technical parameters such as data rate, latency, and coverage. Initially, network performance was measured and optimised according to Quality of Service (QoS) criteria, including bandwidth, delay, jitter, and packet loss. Such objective performance indicators effectively described the functional aspects of services such as voice and data transfer. However, as multimedia and interactive applications became the norm in mobile communications, the limitations of QoS as the sole indicator of service performance became evident. This led to the emergence of Quality of Experience (QoE), a user-centric framework that considers how services are perceived and evaluated by the end user [13].

In first-generation (1G) analog cellular systems, the primary service was voice, and there was minimal need for nuanced user satisfaction metrics. With the transition to second-generation (2G) digital systems such as GSM and later enhancements like GPRS and EDGE, services began to include SMS, MMS, and basic mobile web access. The Mean Opinion Score (MOS) emerged as a go-to practice for measuring voice quality in this period. Standardised by the ITU-T in 1996, MOS asked users to rate perceived service quality subjectively on a scale from bad (1) to excellent (5). These ratings were aggregated to examine the overall quality of service delivery, and automated models were subsequently created to approximate MOS scores algorithmically [14].

The third-generation (3G) networks, first introduced in the early 2000s, offered higher data rates and enabled IP-based multimedia services through High-Speed Downlink Packet Access (HSDPA), transitioning to the IP Multimedia Subsystem (IMS). These developments supported broader Internet use, session-based services, and greater interactivity through smartphones [15]. With the growing complexity of services, QoS-based network management, focused on throughput, latency, and packet loss, proved insufficient. This catalysed a shift to QoE frameworks that accounted for users’ subjective perceptions, context, and expectations.

QoE in 3G was defined by the ITU-T as “the degree of delight or annoyance of the user of an application or service,” considering context, personality, and current state [16]. Although inherently subjective, efforts have been made to standardise QoE assessment through objective mappings to network metrics. Examples include extending MOS to encompass visual and haptic dimensions, as well as correlating throughput and delay with user satisfaction for streaming applications.

Fourth-generation (4G) networks and LTE-Advanced furthered the IP-based evolution by implementing fully packet-switched cores, high-capacity downlink speeds (up to 1 Gbps), and MIMO technologies [17]. QoE considerations remained critical, especially with the explosion of mobile video and real-time services. Yet, network operations still relied heavily on QoS optimisations, with QoE often inferred indirectly. QoE management tools began to emerge, but true integration into real-time network orchestration remained limited. This backdrop sets the stage for 5G and 6G, where software and AI are reshaping network adaptability, and QoE must now evolve to handle increasingly immersive, dynamic, and personalised digital experiences.

2.2. QoS-QoE Mapping Models in 5G

In the context of 5G networks, bridging the gap between Quality of Service (QoS) and Quality of Experience (QoE) is essential to ensure service delivery meets user expectations. One of the most widely accepted QoE metrics is the Mean Opinion Score (MOS), which traditionally relies on subjective evaluations on a five-point scale, ranging from 1 (bad) to 5 (excellent). Capturing user perception in this way is intuitive, but it is neither objective nor scalable, both of which are essential for real-time network monitoring. Researchers have addressed these inadequacies by designing objective QoE measurement models based on mathematical approximations that utilise measurable network features, including delay, jitter, and packet loss.

The Simplified E-model is one of the most notable VoIP quality-analysis models used so far, focusing on the impact of different network impairments on voice quality. In this model, MOS is derived from the R-score, which in simplified form is:

R_simplified = R_0 − I_delay − I_codec&loss − I_jitter [18].

Here, R_0 (93.2) is the baseline quality, while I_delay, I_codec&loss, and I_jitter estimate the degradation caused by delay, codec and packet loss, and jitter, respectively. These terms are calculated using a combination of linear and non-linear functions, incorporating thresholds such as the Heaviside function and exponential decay, along with codec-specific parameters such as those of G.729. Once the R-score has been calculated, it is mapped to the standard 5-point MOS scale using an empirical polynomial transformation [19].
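To make the mapping concrete, the sketch below subtracts impairment terms from the base quality of 93.2 and applies the standard ITU-T G.107 polynomial that converts an R-score to MOS. The impairment values passed in are placeholder numbers, not outputs of the full impairment formulas:

```python
def r_to_mos(r):
    """ITU-T G.107 polynomial mapping from R-score to the 5-point MOS scale."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + 7.0e-6 * r * (r - 60.0) * (100.0 - r)

def simplified_r(i_delay, i_codec_loss, i_jitter, r0=93.2):
    """Simplified E-model: subtract impairment terms from base quality R0.
    The impairment values used below are placeholders; the cited model
    derives them from measured delay, codec/loss, and jitter."""
    return r0 - i_delay - i_codec_loss - i_jitter

mos_clean = r_to_mos(simplified_r(0.0, 0.0, 0.0))       # no impairments
mos_degraded = r_to_mos(simplified_r(15.0, 20.0, 10.0))  # impaired call
```

With no impairments the score sits near the top of the usable MOS range (about 4.41), while the impaired example drops below 2.5, illustrating how the polynomial compresses mid-range R-scores.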

Although the Simplified E-model is highly accurate, its scope is a constraint: it considers only three impairment parameters and does not incorporate multi-criteria decision-making tools such as the Analytic Hierarchy Process (AHP). To overcome this, other QoE models have been developed for video applications using AHP, which enables multiple factors to be weighted by their relative importance. For example, the QoENetwork model integrates packet loss, jitter, throughput, and auto resolution scaling (ARS) using a weighted sum:

QoENetwork = 0.26 × Loss + 0.55 × Jitter + 0.07 × Throughput + 0.12 × ARS [20].

Similarly, the QoEApplication model for video quality incorporates bit rate, frame rate, and resolution with weights reflecting user preferences:

QoEApplication = 0.26 × Bitrate + 0.63 × Framerate + 0.11 × Resolution.

AHP-based models have the advantage of being more user- and application-oriented, providing more granular quality analysis, and not being limited to impairment parameters such as jitter and codec loss in real-time VoIP or interactive services. This disparity points to a feasible opportunity: a general, flexible QoE paradigm capable of interfacing impairment measures with user-based weighting in real-time 5G network applications. Incorporating AHP into such models could strengthen the link between technical performance and user satisfaction, especially in applications whose performance depends on latency and multimedia richness, an increasingly widespread feature of next-generation mobile networks.
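The weighted sums above are straightforward to implement. The sketch below uses the published QoENetwork and QoEApplication weights; it assumes each input has already been normalised to a [0, 1] quality score (1 = best), which is an assumption of this illustration rather than part of the cited models:

```python
# Weights from the cited QoENetwork and QoEApplication models. Each input
# is assumed to be a normalised quality score in [0, 1] (1 = best), so a
# high measured loss or jitter must first be mapped to a LOW score.
NETWORK_WEIGHTS = {"loss": 0.26, "jitter": 0.55, "throughput": 0.07, "ars": 0.12}
APPLICATION_WEIGHTS = {"bitrate": 0.26, "framerate": 0.63, "resolution": 0.11}

def weighted_qoe(scores, weights):
    """AHP-style weighted sum of per-factor quality scores."""
    missing = set(weights) - set(scores)
    if missing:
        raise ValueError(f"missing factors: {missing}")
    return sum(weights[k] * scores[k] for k in weights)

net_qoe = weighted_qoe(
    {"loss": 0.9, "jitter": 0.7, "throughput": 0.95, "ars": 0.8},
    NETWORK_WEIGHTS)
app_qoe = weighted_qoe(
    {"bitrate": 0.8, "framerate": 0.85, "resolution": 0.9},
    APPLICATION_WEIGHTS)
```

Because jitter carries the largest network weight (0.55) and frame rate the largest application weight (0.63), degradations in those factors dominate the respective scores.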

2.3. Emergence of Immersive and Context-Aware Metrics in QoE

As wireless networks move toward 6G, immersive technologies, encompassing augmented reality (AR), virtual reality (VR), extended reality (XR), and holographic communication, are becoming central to next-generation services. These technologies require a transition toward QoE evaluation models that are more complex and context-sensitive than the Mean Opinion Score (MOS). Traditional QoE approaches, which focus on audio and video quality, offer limited perceptual insight, as immersive applications typically involve real-time, multisensory interactions [21].

To measure user satisfaction in immersive experiences, new Quality of Experience (QoE) dimensions have emerged, including Quality of Immersion (QoI), sense of presence, cognitive load, and multi-modal synchronisation. QoI describes the user's subjective sense of being within a virtual world, which may be affected by factors such as frame rate, field of view, and motion fidelity [22]. Another essential measure is presence, a psychological state of full engagement that can be influenced by responsiveness and realism. These dimensions necessitate a strong interrelation between human perception and the technical performance of the service, an aspect that past QoE models have not sufficiently addressed.

Among the technical measures proposed for immersive QoE, motion-to-photon latency (the interval between a user's movement and the system's visual response) is significant in VR systems, since reducing it lowers the likelihood of motion sickness and makes the system feel more responsive. Similarly, haptic responsiveness, meaning the delay and accuracy of haptic feedback, is essential to new applications in fields such as remote surgery or automated processes in XR environments [18]. Additionally, multi-modal synchronisation, which aligns visual, audio, and tactile stimuli, is one of the key determinants of perceptual coherence and user satisfaction.

Several studies have attempted to model and quantify immersive QoE metrics and correlate them with QoS proxies. For example, motion-to-photon latency has been investigated in relation to end-to-end network delay and rendering-pipeline efficiency [23]. Haptic delay thresholds have also been studied, with their limits related to jitter and throughput in Tactile Internet architectures. However, standardisation in this field remains low, and scalable real-time assessment remains an open problem.
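A simple latency-budget check illustrates how motion-to-photon latency relates to network and rendering delay. The stage breakdown and sample values below are illustrative assumptions; the 20 ms comfort threshold is a commonly cited VR target:

```python
def motion_to_photon_ms(sensor_ms, network_ms, render_ms, display_ms):
    """Toy end-to-end motion-to-photon budget: the sum of the sensing,
    network, rendering, and display stages. This is a simplification;
    real pipelines overlap and pipeline these stages."""
    return sensor_ms + network_ms + render_ms + display_ms

def meets_vr_comfort(mtp_ms, threshold_ms=20.0):
    """A commonly cited VR comfort target is roughly 20 ms motion-to-photon."""
    return mtp_ms <= threshold_ms

edge_mtp = motion_to_photon_ms(2.0, 4.0, 8.0, 4.0)    # edge rendering nearby
cloud_mtp = motion_to_photon_ms(2.0, 35.0, 8.0, 4.0)  # distant cloud rendering
```

The comparison shows why edge-based rendering matters: with a distant cloud, the network stage alone can exceed the entire comfort budget.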

With an increase in immersive applications in late 5G and expected 6G networks, such emergent dimensions of QoE are becoming necessary to address. The subjective and physiological attributes need to be integrated into future models to provide a comprehensive picture of user satisfaction while also allowing for dynamic adjustments to rapidly changing technologies and user situations.

2.4. AI and ML for QoE Prediction and Optimisation

The intelligent evolution of mobile network infrastructure is moving away from fixed, rule-based management systems toward intelligent Quality of Experience (QoE) optimisation enabled by AI/ML architectures. Historical QoS-based methods deterministically mapped network parameters (e.g., delay, jitter) to service delivery outcomes, addressing user satisfaction only indirectly. In contrast, AI and ML strategies enable data-driven prediction and inference that synchronise network behaviour and user experience in real time [20].

Deep learning (DL) architectures such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have demonstrated high potential for QoE prediction. CNNs can extract spatiotemporal features from video frames to evaluate perceived video quality. RNNs, especially Long Short-Term Memory (LSTM) models, are better at capturing temporal relationships in network traffic and anticipating changes in user satisfaction. More recently, transformer-based architectures have been introduced as a promising approach for multi-modal QoE inference, leveraging self-attention to combine multiple contextual features of the target environment, including device type, user history, and network state [24].

In addition to prediction, reinforcement learning (RL) models are used to optimise QoE, for example in adaptive bitrate streaming. RL agents learn optimal resource-allocation or bitrate-selection strategies from feedback (QoE-related rewards) received while interacting with the network environment. In some empirical studies, the Deep Q-Network (DQN) framework has been applied to regulate video playback buffers across varying network conditions, resulting in improved overall MOS scores. Actor-critic models and Proximal Policy Optimization (PPO) are also gaining popularity because they are stable and scale to high-dimensional action spaces in 5G and 6G settings [25].
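The RL loop described above can be sketched with a toy tabular Q-learning agent (much simpler than DQN): coarse buffer levels serve as states, bitrate tiers as actions, and a QoE-like reward penalises choosing the top bitrate on a nearly empty buffer. All dynamics, tiers, and values below are invented for illustration, not a real player model:

```python
import random

BUFFER_LEVELS = ["low", "medium", "high"]
BITRATES = [1.0, 2.5, 5.0]  # Mbps tiers

def reward(buffer_level, bitrate):
    """QoE-like reward: richer video is better, but an aggressive
    bitrate on an empty buffer risks a stall and is penalised."""
    if buffer_level == "low" and bitrate == 5.0:
        return -5.0
    return bitrate

def next_buffer(buffer_level, bitrate):
    """Simplified transition: high bitrates drain the buffer, low ones fill it."""
    idx = BUFFER_LEVELS.index(buffer_level)
    idx += -1 if bitrate == 5.0 else (1 if bitrate == 1.0 else 0)
    return BUFFER_LEVELS[max(0, min(2, idx))]

def train(steps=3000, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning over the toy environment."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in BUFFER_LEVELS for a in BITRATES}
    state = "medium"
    for _ in range(steps):
        if rng.random() < epsilon:
            action = rng.choice(BITRATES)
        else:
            action = max(BITRATES, key=lambda a: q[(state, a)])
        nxt = next_buffer(state, action)
        target = reward(state, action) + gamma * max(q[(nxt, a)] for a in BITRATES)
        q[(state, action)] += alpha * (target - q[(state, action)])
        state = nxt
    return q

q = train()
# The learned values should steer the agent away from the top bitrate
# whenever the buffer is low.
best_low = max(BITRATES, key=lambda a: q[("low", a)])
```

A production system would replace the hand-written transition model with real playback dynamics and the Q-table with a function approximator, which is exactly the step DQN takes.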

In 5G networks, the Network Data Analytics Function (NWDAF) in 3GPP Release 16 embodies the institutionalisation of AI/ML in QoE management. NWDAF enables network-level analytics to inform policy adaptation and network-slice resource orchestration. Looking toward 6G, proposals position decentralised AI agents at the edge of the network, enabling ultra-low-latency learning and real-time decision-making. Such agents are expected to process multi-modal data, support explainability, and preserve privacy, which is crucial for QoE optimisation in highly personalised and immersive services.

Despite these encouraging developments, significant obstacles remain [26]. Real-time inference is computationally intensive in distributed systems, particularly on resource-constrained edge nodes. Models generalise poorly across services and user types, necessitating extensive retraining or refinement. Another urgent issue is the explainability of DL models: black-box predictions discourage reliance on them and fail to meet regulatory requirements in safety-critical services such as telesurgery or vehicular communications. Future development should focus on lightweight ML models, federated privacy-preserving learning, and explainable AI (XAI) frameworks to promote transparency and efficiency.

2.5. Neuro-QoE and Biometric Feedback in Next-Gen Networks

With the increased distribution of immersive technologies, including the use of augmented reality (AR), virtual reality (VR), and the tactile internet in 6G, conventional measures, mainly latency and throughput, are insufficient to capture the subtleties of user experience. This has sparked the emergence of the new concept of Neuro-QoE, which incorporates biometric and neurological responses to calculate user engagement, satisfaction, and emotional response to digital stimuli [15] [27].

These metrics have proven helpful in pilot studies of human-computer interaction (HCI), gaming, and XR platforms. In a controlled VR study, researchers associated EEG-derived power in the alpha and beta bands with MOS scores and found a highly significant congruence between neural state and reported quality. Heart-rate variability (HRV) and pupil dilation have also been applied in gaming, where they dynamically adjust game complexity to enhance the player's real-time experience. Likewise, biometric feedback has been combined with adaptive streaming codecs, allowing a video stream to be scaled up or down based on the user's frustration or engagement as measured by biosensors [28].

Nevertheless, barriers to practical deployment remain notwithstanding these advances. Signal processing and standardisation are complicated by the susceptibility of neurophysiological signals to noise, motion artefacts, and inter-individual variability. Moreover, the required equipment, including dry EEG headsets, biometric wearables, and eye-tracking cameras, can be invasive or expensive, limiting scalability. Integration into real-time systems also demands ultra-low latency in signal acquisition, preprocessing, and interpretation, requirements that are only just beginning to be met by emerging 6G architectures [29]. Addressing these challenges requires interdisciplinary collaboration across neuroscience, signal processing, and network engineering. Advances in brain-computer interfaces (BCIs), wearable biosensors, and edge computing may eventually enable seamless, non-intrusive, real-time Neuro-QoE monitoring. Such capabilities will be instrumental in applications such as cognitive-load-aware XR training systems, emotion-sensitive media delivery, and neuroadaptive robotics in healthcare and industry.
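A minimal sketch of the biometric-driven adaptation idea, assuming a toy engagement proxy computed from EEG band powers; real Neuro-QoE pipelines use far richer features, per-user calibration, and artefact rejection:

```python
def engagement_index(alpha_power, beta_power):
    """Toy engagement proxy from EEG band powers: beta / (alpha + beta).
    This ratio is only an illustrative stand-in for a real classifier."""
    total = alpha_power + beta_power
    return beta_power / total if total > 0 else 0.0

def adapt_quality(current_tier, engagement, low=0.35, high=0.65, max_tier=4):
    """Scale the streaming quality tier up when the viewer appears engaged
    and down when the signal suggests disengagement or frustration."""
    if engagement > high:
        return min(max_tier, current_tier + 1)
    if engagement < low:
        return max(0, current_tier - 1)
    return current_tier

# High beta relative to alpha reads as engagement, so the tier is raised.
tier = adapt_quality(2, engagement_index(alpha_power=2.0, beta_power=6.0))
```

The thresholds (0.35 and 0.65) are arbitrary here; in practice they would be learned per user, which is where the federated personalisation discussed earlier fits in.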

2.6. Service-Specific QoE: VoIP, Streaming, XR, Tactile Internet

Quality of Experience is highly service-specific, with each application domain requiring customised models and metrics to accurately assess user satisfaction. VoIP, video streaming, extended reality (XR), and the tactile internet represent key verticals where QoE modeling has seen significant research and development [21]. In VoIP applications, Perceptual Evaluation of Speech Quality (PESQ) and Perceptual Objective Listening Quality Assessment (POLQA) are widely used. PESQ, standardised by ITU-T P.862, compares a reference speech signal with the received signal to produce a MOS-like score ranging from 1.0 to 4.5 [22]. POLQA (ITU-T P.863) extends PESQ by accommodating super-wideband (SWB) signals and coping better with modern codecs and time-varying conditions [30]. These metrics are used for real-time VoIP quality monitoring in both 4G and 5G systems and are expected to be integrated into AI-based QoE agents in 6G networks.

For video streaming, MOS remains a central metric, often complemented by Video Multimethod Assessment Fusion (VMAF) developed by Netflix. VMAF fuses multiple video quality indicators such as visual information fidelity (VIF), detail loss metric (DLM), and temporal variation metrics to provide a more accurate prediction of user perception. Adaptive bitrate algorithms in content delivery networks (CDNs) increasingly use VMAF scores as reward signals in RL-based optimisation, achieving smoother playback and reduced buffering.

In XR applications, QoE is influenced by factors such as frame rate, field of view (FOV), motion-to-photon latency, and head-tracking accuracy. Research indicates that a minimum of 90 frames per second and latency below 20 milliseconds are necessary to avoid motion sickness and ensure immersive experiences [31]. Additional metrics such as cybersickness indexes and presence questionnaires (e.g., IPQ, PQ) are used to quantify immersion levels. 6G networks aim to enhance these experiences through edge-based rendering and AI-driven motion prediction to reduce latency and improve tracking fidelity.

The Tactile Internet, encompassing applications such as remote surgery and haptic-enabled teleoperation, introduces novel QoE dimensions like haptic fidelity, synchronisation, and responsiveness. MOS-based voice and video models are insufficient for these use cases. Instead, metrics such as round-trip time for haptic feedback, force resolution, and jitter in actuation are used. The ETSI group has begun defining Tactile Internet QoE standards, but real-world implementation remains experimental due to infrastructure constraints and the need for precise cross-modal synchronisation [32].

Comparative studies have demonstrated significant variations in QoE performance across different network environments. For instance, VoIP applications yield higher MOS on Ethernet compared to WiFi due to lower jitter and packet loss. Similarly, XR applications perform better in 5G networks compared to LTE due to reduced motion-to-photon latency. However, harmonising QoE inference across services remains a key challenge. Multi-service environments require context-aware models that can dynamically adapt metric weights and thresholds depending on the active service type and user profile [21]. Developing unified frameworks that can monitor, infer, and optimise QoE across heterogeneous services is essential for next-generation networks. This demands modular AI architecture, standardised metric repositories, and shared datasets to facilitate interoperability and benchmarking across the research community.

2.7. Conclusion

The reviewed literature demonstrates significant progress in understanding and evaluating Quality of Experience (QoE) across various generations of mobile networks and application domains. From the early reliance on subjective measures like Mean Opinion Score (MOS) in 1G and 2G, to more sophisticated, QoS-correlated objective models in 3G and 4G, and to AI/ML-driven architectures in 5G, the evolution of QoE assessment has paralleled advances in network capabilities. Notably, the integration of AI and deep learning, the use of neurophysiological and biometric feedback, and the development of service-specific QoE models (e.g., for VoIP, video streaming, XR, and tactile internet) signal a clear transition toward intelligent, user-centric network management.

Despite this progress, several critical research gaps persist. First, most QoE models either focus narrowly on specific network impairment parameters or employ generalised multi-criteria decision-making techniques, such as AHP, without effectively combining both approaches. For instance, models such as the Simplified E-model consider delay, jitter, and packet loss but lack adaptive learning or context-aware prioritisation of these parameters. Meanwhile, AHP-based models weigh user-perceived factors but typically exclude dynamic real-time network metrics, making them less applicable to fast-changing environments like 5G.

Second, while there is a growing trend toward AI- and ML-based QoE prediction, these models are often service-specific and lack generalisability across heterogeneous services. In addition, several of them are black-box systems lacking explainability, which raises transparency and regulatory-adoption concerns for critical 6G use cases.

Third, current models cannot sufficiently support the customisation of each user’s QoE model according to individual preferences, device capabilities, and contextual aspects (such as location or mobility). This constraint is particularly evident in immersive and real-time applications anticipated in 6G, including AR/VR and tactile internet, where both physiological and environmental factors impact user experience.

Finally, optimisation across layers is not adequately incorporated, even though decisions at the application level can dynamically adjust and affect resources at the network and physical levels. Without such vertical harmonisation, QoE inference remains ad hoc, limited in scope, and reactive rather than holistic and proactive.

To bridge these deficits, this paper proposes an innovative QoE framework with a transitional function that provides network quality assessment during the evolutionary phase from 5G to 6G. The proposed model focuses on multi-component parameters with dynamic weights, deployment guidelines with simplified implementation equations, and validation of QoE improvement during this phase. Through adaptive, service-aware, and context-sensitive QoE inference, the model sets a precedent for more effective, scalable, and customer-centric QoE management in next-generation mobile systems.

3. Proposed Modeling

In the 5G network, real-time applications, including voice over IP (VoIP), online gaming, video conferencing, and newer immersive experiences such as augmented and virtual reality (AR/VR), have grown exponentially. This has increased the demand for dedicated and versatile Quality of Experience (QoE) measurement and control. Conventional models, although effective in some contexts, struggle to capture the complex interdependencies between network performance and user perception, particularly in dynamic network environments. This research presents a new framework for QoE modelling that combines subjective (user-centric) and objective (network-centric) measurements using a hybrid strategy involving the Analytic Hierarchy Process (AHP) and advanced impairment-based modelling. Ultimately, the goal is a more responsive, instantaneous, and context-aware QoE evaluation system for 5G network services.

3.1. Architecture Overview

The suggested system features a modular architecture with four primary layers: the Data Acquisition Layer, the QoE Processing Layer, the Transition Orchestration Layer, and the Decision Intelligence Layer.

The Data Acquisition Layer constantly acquires both network-level and user-level data. At the network level, both 5G and 6G impairment parameters are collected, including delay, jitter, packet loss, throughput, and signal strength. On the user end, application statistics, including frame rate, resolution, and codec in use, are tracked, combined with optional user feedback provided through a lightweight satisfaction question.

QoE Processing Engine: This layer processes and aggregates the data streams. The engine has four important submodules:

QoE Impairment submodule: This submodule is built on the Simplified E-model, extended with 5G parameters, 6G-specific impairments, and time-varying weights that adapt through AI learning once AI models are introduced. It also incorporates a transition function to account for transition parameters as the network migrates from 5G to future technologies.

AHP AI submodule: This submodule provides dynamic AHP values based on AI optimisation. Its built-in federated learning infrastructure supports model training at the edge nodes, privacy-preserving operation, a secure aggregation server, and cross-domain knowledge transfer.

Application-specific submodule: This submodule consists of the holographic communications handler, digital twin synchronisation, and XR/BCI quality assessment. The holographic communications handler performs real-time 3D data processing and quality assessment, and drives network optimisation based on these results.

Context-aware submodule: This module performs user behaviour analytics. It also hosts an environmental context processor and performs multi-modal context fusion.

Transition Orchestration Layer: This layer handles the transition from 5G to 6G and adjusts parameters during the network transition. It features a network evaluation monitor that tracks 5G-to-6G readiness, as well as a capability assessment that factors in infrastructure development and AI model maturity at that point in time. It receives input from the transition phase detector, which determines whether the state is enhanced 5G, hybrid, or native 6G. The weighting component on this layer updates the 6G QoE parameter weights to achieve the desired output in the transition state.

Decision Intelligence Layer: This layer calculates an integrated QoE score, which can be deployed by network management functions, such as the Network Data Analytics Function (NWDAF) of the 6G core, to reallocate resources or modify services. The unified score is obtained by combining the dual-mode 5G and 6G QoE calculation with the AI improvement in the transition phase. The layer also has a transition phase policy engine to account for the different phases of the transition, and a backwards compatibility validator, which ensures the QoE of 6G is superior to that of 5G.

3.2. Model Formulation—Enhanced Core Model with Transition Support

The enhanced core model with transition support is a combination function that integrates the QoE of 5G and 6G, featuring a built-in transition function.

QoE_Total = η(t)·QoE_6G + (1 − η(t))·QoE_5G

where:

η(t) = Transition function ∈ [0, 1] representing 5G→6G migration progress.

QoE_5G = Existing 5G model (baseline).

QoE_6G = Enhanced 6G model.

QoE_6G = α(t)·QoE_Impairment_Enhanced + β(t)·QoE_AHP_AI + γ(t)·QoE_Application_Specific + δ(t)·QoE_Context_Aware.

Where:

α(t) + β(t) + γ(t) + δ(t) = 1

The model incorporates dynamic weight evolution with AI learning. The weights are adjusted as AI learning evolves.

α(t + 1) = α(t) + Δα_AI(t)

β(t + 1) = β(t) + Δβ_AI(t)

γ(t + 1) = γ(t) + Δγ_AI(t)

δ(t + 1) = δ(t) + Δδ_AI(t)

Where:

Δweight_AI(t) = ε·∇J(weight) + μ·federated_update(weight) + λ·transition_bias(t)

ε: Learning rate for gradient descent.

μ: Federated learning contribution factor.

λ: Transition adaptation factor.

J: Cost function based on QoE prediction error.
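The update rule above can be sketched as follows. Since the four weights must keep summing to 1, this sketch clips and renormalises after each step; that renormalisation is an implementation choice I am assuming, not something specified in the text, and all numeric inputs are illustrative:

```python
import numpy as np

def update_weights(weights, grad_J, fed_update, transition_bias,
                   eps=0.01, mu=0.05, lam=0.02):
    """One step of the dynamic weight evolution.

    delta = eps * grad_J + mu * fed_update + lam * transition_bias
    The model requires alpha + beta + gamma + delta = 1, so the result is
    clipped to be non-negative and renormalised (an assumed choice).
    """
    w = np.asarray(weights, dtype=float)
    delta = (eps * np.asarray(grad_J)
             + mu * np.asarray(fed_update)
             + lam * np.asarray(transition_bias))
    w = np.clip(w + delta, 1e-6, None)
    return w / w.sum()

w0 = [0.35, 0.20, 0.25, 0.20]  # early-phase (alpha, beta, gamma, delta)
# Illustrative update signals pushing weight away from impairment (alpha)
# and toward the AI and application components.
w1 = update_weights(w0,
                    grad_J=[-0.5, 0.3, 0.4, -0.2],
                    fed_update=[-0.1, 0.2, 0.1, 0.0],
                    transition_bias=[-1.0, 0.5, 0.5, 0.0])
```

After the step, alpha has decreased and the weights still sum to 1, matching the qualitative weight evolution reported later in Table 2.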

The key component of the model is the transition function, which evaluates factors influencing the transition from 5G to 6G. The function is described below.

η(t) = 1/(1 + exp(−k·(t − t_mid)))·capability_factor(t)

where:

k: Transition steepness (typically 0.5 - 2.0).

t_mid: Midpoint of transition period.

capability_factor(t) = min(1,infrastructure_readiness(t)·AI_maturity(t)).
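Under the definitions above, the transition function and the top-level combination can be sketched as follows (the parameter values k, t_mid, and the readiness inputs are illustrative):

```python
import math

def capability_factor(infrastructure_readiness: float, ai_maturity: float) -> float:
    # capability_factor(t) = min(1, infrastructure_readiness(t) * AI_maturity(t))
    return min(1.0, infrastructure_readiness * ai_maturity)

def eta(t: float, t_mid: float = 18.0, k: float = 1.0,
        infra: float = 1.0, ai: float = 1.0) -> float:
    # eta(t) = sigmoid(k * (t - t_mid)) * capability_factor(t)
    sigmoid = 1.0 / (1.0 + math.exp(-k * (t - t_mid)))
    return sigmoid * capability_factor(infra, ai)

def qoe_total(t: float, qoe_6g: float, qoe_5g: float, **kw) -> float:
    # QoE_Total = eta(t) * QoE_6G + (1 - eta(t)) * QoE_5G
    e = eta(t, **kw)
    return e * qoe_6g + (1.0 - e) * qoe_5g

# Early in the transition the 5G baseline dominates; late, the 6G score does.
early = qoe_total(t=2, qoe_6g=0.9, qoe_5g=0.7)
late = qoe_total(t=34, qoe_6g=0.9, qoe_5g=0.7)
```

At the midpoint t_mid, eta is exactly 0.5 (when the capability factor is 1), so the two generations contribute equally.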

3.3. Impairment Model with Transition Dynamics

The core of the proposed model is based on the Simplified E-model rating, R_Simplified, which is extended with new components to capture new network trade-offs and characteristics. The simplified model is expressed as:

R_Simplified = R0 − I_delay − I_codec&loss − I_jitter − I_mobility − I_context

where:

R0 is the maximum QoE reference value (typically 93.2 for voice).

I_delay: Calculated using an updated function that includes round-trip time and queuing delay.

I_codec&loss: Considers codec sensitivity and packet loss ratio using the G.729 codec parameters as baseline.

I_jitter: Modified to integrate variance due to high-frequency channel switching in mmWave bands.

I_mobility: A new component that models degradation due to handovers and Doppler shift.

I_context: Represents user context factors such as device type, application class (e.g., conversational vs. streaming), and user motion.

The model result is then normalised into a 5-point MOS scale using the standard transformation:

MOS = 1 + 0.035 × R + R(R − 60)(100 − R) × 7 × 10⁻⁶ for 0 < R < 100.
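This is the standard E-model R-to-MOS transformation, with the usual clamping outside 0 < R < 100; a minimal sketch:

```python
def r_to_mos(r: float) -> float:
    # Standard E-model R-to-MOS mapping (ITU-T G.107):
    # MOS = 1 + 0.035*R + R*(R - 60)*(100 - R) * 7e-6, for 0 < R < 100,
    # with MOS = 1 below R = 0 and MOS = 4.5 above R = 100.
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

# R0 = 93.2 (the voice reference used above) maps to a MOS of about 4.4.
mos_ref = r_to_mos(93.2)
```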

The impairment model with transition dynamics incorporates the 6G impairment parameters along with the transition function defined earlier.

R_Total = η(t)·R_6G + (1 − η(t))·R_5G

R_6G = R0 − I_delay_enhanced(t) − I_codec_loss_6G(t) − I_jitter_sub_ms(t) − I_mobility_3D(t) − I_context_AI(t) − I_6G_specific(t) + A_6G(t)

R_5G = R0 − I_delay_5G − I_codec_loss_5G − I_jitter_5G − I_mobility_5G − I_context_5G

Where:

I_6G_specific: A new component which models the degradation due to 6G specific impairments. These include, but are not limited to, Terahertz communication impairments, intelligent reflecting surface impairments, MIMO Beamforming Degradation, AI-driven service adaptation failures, multi-domain and multi-modal integration failures, and edge computing impairments due to latency or resource shortages.

A_6G: Represents the advantage components that 6G has to the model. This includes improvements enabled by Terahertz and sub-Thz communications, which provide high data speeds with low latency, enhancements due to edge processing, energy-efficient gains, security and trust improvements, and predictive AI-based resource allocation.

3.4. AI Enhanced AHP Framework

The AHP process introduces subjectivity and personalisation into the model. Factors such as video resolution, frame rate, latency tolerance, and application interactivity are assessed using pairwise comparisons derived from user feedback. For example, a user engaged in mobile VR gaming may rate frame rate and latency significantly higher than resolution, while a user on a video call may prioritise jitter and voice clarity.

A matrix is created for each service class to compute the consistency ratio and ensure valid prioritisation. Each factor is assigned a weight (w1, w2, w3…) that is later used to compute the application-specific QoE component. The traditional AHP framework:

QoE_AHP = Σ (wi × metrici)

This traditional AHP component is integrated with an AI-native decision-making process by adding an AI weight optimiser and contextual criteria values enhanced by AI prediction. The enhanced formulation is:

QoE_AHP_AI = Σi w_i(t) × C_i(network_state, user_context, application_type)

where:

w_i(t) = AI_weight_optimizer(historical_data, real_time_conditions, user_preferences).

C_i = Contextual criteria values enhanced by AI.
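A minimal sketch of the classic AHP step described above: priority weights from the principal eigenvector of a pairwise comparison matrix, plus the consistency ratio used to validate the prioritisation. The example matrix and metric values are hypothetical:

```python
import numpy as np

# Saaty's random-index values for the AHP consistency ratio, by matrix size.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}

def ahp_weights(pairwise):
    """Return (weights, consistency_ratio) from a pairwise comparison matrix."""
    a = np.asarray(pairwise, dtype=float)
    n = a.shape[0]
    eigvals, eigvecs = np.linalg.eig(a)
    k = np.argmax(eigvals.real)               # principal eigenvalue index
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                           # normalised priority weights
    ci = (eigvals[k].real - n) / (n - 1)      # consistency index
    cr = ci / RI[n] if RI[n] > 0 else 0.0     # consistency ratio (< 0.1 is valid)
    return w, cr

# Hypothetical 3-criterion comparison for a VR user:
# frame rate vs latency vs resolution.
m = np.array([[1.0, 2.0, 5.0],
              [0.5, 1.0, 3.0],
              [0.2, 1/3, 1.0]])
w, cr = ahp_weights(m)
qoe_ahp = float(np.dot(w, [0.9, 0.8, 0.6]))   # QoE_AHP = sum(w_i * metric_i)
```

For this matrix, frame rate receives the largest weight and the consistency ratio stays well below the 0.1 validity threshold.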

3.5. Application-Specific QoE

Video streaming is selected as the application for the experiment. The application-specific QoE model for video streaming is given by:

QoE_Application_Specific = w_video × QoE_video + w_context × QoE_context

where:

QoE_video = f(VMAF, Resolution, Codec, Bitrate, Stalling).

QoE_context = g(Device_Type, Network_Conditions, User_Preferences).
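The paper does not specify the functional forms of f(·) and g(·); the sketch below assumes one plausible f(·) (VMAF mapped to a 1 - 5 scale with a stalling penalty and a small bitrate bonus) purely for illustration; all constants are assumptions:

```python
def qoe_video(vmaf, stalling_ratio, bitrate_mbps):
    # One plausible f(.): VMAF (0-100) mapped to the 1-5 MOS scale,
    # penalised for stalling and slightly rewarded for bitrate headroom.
    # All constants here are illustrative, not taken from the study.
    base = 1.0 + 4.0 * (vmaf / 100.0)
    stall_penalty = min(1.5, 100.0 * stalling_ratio)
    bitrate_bonus = 0.1 if bitrate_mbps >= 10 else 0.0
    return max(1.0, min(5.0, base - stall_penalty + bitrate_bonus))

def qoe_application_specific(q_video, q_context, w_video=0.7, w_context=0.3):
    # QoE_Application_Specific = w_video * QoE_video + w_context * QoE_context
    return w_video * q_video + w_context * q_context

q_v = qoe_video(vmaf=92.0, stalling_ratio=0.0005, bitrate_mbps=20.0)
q_app = qoe_application_specific(q_v, q_context=4.0)
```

The weights w_video = 0.7 and w_context = 0.3 match the values used in the experiment (Section 4.1).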

3.6. Context-Aware QoE

Context-Aware QoE is the fourth component of this model. It captures how the user's environment and context affect their experience. Factors include the environment in which the user is present, the user's device, and variables such as time of day. For example, the context factors for AR gaming in a shopping mall differ from those when the same game is played at the user's home.

QoE_Context_Aware = Σi wi(t) × Context_Factor_i(measurement, threshold)

where:

wi(t) = Dynamic weight for context factor i at time t.

Context_Factor_i = Normalised value (0 - 1) for each context dimension.

3.7. Backwards Compatibility Validator

The backwards compatibility validator, which is part of the Decision Intelligence Layer, ensures monotonic QoE improvements during the 5G-to-6G transition. The validator is defined as:

∀t ∈ [0, T]: QoE_Total(t) ≥ QoE_5G(t)

where:

t = time index, T = total transition period, and QoE_Total(t) = η(t)·QoE_6G(t) + (1 − η(t))·QoE_5G(t).

The backwards compatibility validator performs validation for each component in the model, aggregate QoE validation, and transition function adjustments.
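A minimal sketch of the validator's aggregate check, returning the periods where the constraint QoE_Total(t) ≥ QoE_5G(t) fails so that a transition-function adjustment can be triggered (the input series are illustrative):

```python
def validate_transition(qoe_5g_series, qoe_6g_series, eta_series, tol=0.0):
    """Check QoE_Total(t) >= QoE_5G(t) for every period t.

    Returns the list of period indices that violate the constraint
    (an empty list means the transition is valid).
    """
    violations = []
    for t, (q5, q6, e) in enumerate(zip(qoe_5g_series, qoe_6g_series, eta_series)):
        q_total = e * q6 + (1.0 - e) * q5   # QoE_Total(t)
        if q_total < q5 - tol:
            violations.append(t)
    return violations

# QoE_Total can only dip below the 5G baseline where QoE_6G < QoE_5G and eta > 0.
ok = validate_transition([0.70, 0.72, 0.74], [0.75, 0.80, 0.90], [0.1, 0.4, 0.9])
bad = validate_transition([0.70, 0.72], [0.65, 0.80], [0.5, 0.5])
```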

3.8. Implementation and Scalability

The proposed solution will be deployed as a microservice that communicates with the NWDAF in the 5G or 6G core platform. It utilises Kubernetes orchestration, allowing it to scale with load while maintaining minimal overhead. It can be extended with APIs that expose QoE scores to policy engines, edge orchestrators, and service providers.

Additionally, the model can be deployed in edge environments for ultra-low-latency applications. During such deployments, AHP AI matrices are calculated locally based on regional user statistics; therefore, the QoE engine can have greater context sensitivity.

4. Results and Discussion

To test and verify the proposed Quality of Experience (QoE) paradigm in the context of real-time services in 5G-to-6G transition networks, an experimental procedure was developed. It is aimed at designing a QoE assessment methodology for the transition phase from 5G to 6G networks, combining network impairment characteristics, AI-based AHP, application-specific parameters, and context-aware QoE. This model is combined with the baseline 5G QoE model through a transition function to obtain the combined QoE for the transition phase. The overall experiment consisted of four major phases: formulating the model, simulating data, computing metrics, and aggregating and visualising the results.

4.1. Model Formulation

The experiment begins with the definition of each of the 6G QoE submodels. The results are then combined with the 5G baseline using the transition function evaluated over the transition period. Results are calculated for linear, sigmoid, and adaptive transition functions.

The QoE 6G model has the following characteristics.

Impairment-Based Model with AI: This model is an enhanced version of the Simplified E-model for VoIP services. It produces a composite impairment score from the following attributes: delay (ms), jitter (ms), packet loss, enhanced delay, 6G codec losses, sub-ms jitter, mobility, AI context awareness, 6G-specific impairments, and 6G-specific advantages. The traffic type chosen for this study is video streaming. The R values are calculated for the following transition functions:

linear (η_linear(t) = t/36)

sigmoid (η_sigmoid(t) = 1/(1 + exp(−2(t − 18)/6)))

adaptive (η_adaptive(t) = min(1, infrastructure_readiness(t)·AI_maturity(t)))

These are compiled to obtain a simplified rating, denoted as R_Total_linear, R_Total_sigmoid, and R_Total_adaptive.
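The three transition functions can be sketched directly from the definitions above; the infrastructure-readiness and AI-maturity curves inside η_adaptive are hypothetical, since the study does not give their closed forms:

```python
import math

def eta_linear(t, T=36):
    # eta_linear(t) = t / 36
    return t / T

def eta_sigmoid(t, t_mid=18, scale=6):
    # eta_sigmoid(t) = 1 / (1 + exp(-2 * (t - t_mid) / scale))
    return 1.0 / (1.0 + math.exp(-2.0 * (t - t_mid) / scale))

def eta_adaptive(t, T=36):
    # eta_adaptive(t) = min(1, infrastructure_readiness(t) * AI_maturity(t)).
    # Hypothetical readiness curves: infrastructure ramps faster than AI maturity.
    infrastructure_readiness = min(1.0, (t / T) ** 0.8 * 1.3)
    ai_maturity = min(1.0, (t / T) ** 0.6 * 1.2)
    return min(1.0, infrastructure_readiness * ai_maturity)
```

All three functions start near 0 and reach (approximately) 1 by the end of the 36-period horizon, with the sigmoid crossing 0.5 at the midpoint t = 18.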

AHP AI Model: The Analytic Hierarchy Process (AHP)-based AI QoE is computed at the application level. The contextual criteria for this model are chosen as video quality, buffering, startup delay, bitrate adaptation, and frame rate stability. The learning rate α is in the range of 0.01 - 0.1, and the federated learning weight β is in the range of 0.05 - 0.2. The network state parameters are 0.9 - 1.0 for excellent, 0.7 - 0.9 for good, and 0.5 - 0.7 for fair, depending on network quality. The user satisfaction rewards are in the range of −1 to 1. The weight evolution pattern, learning dynamics, and network state distribution are:

Weight Evolution:

Initial phase: The weights are in balance as AI learns user preferences.

Mid phase: w1 (buffering) increases significantly.

Late phase: w1 dominates, w2 and w3 (startup delay) decrease, and w4 (bitrate) and w5 (frame rate) stabilise.

Learning Dynamics:

Learning rate (α): This is in the range of 0.01 to 0.1. The learning rate is higher in the rapid improvement phase.

Federated learning (β): This increases over time in the range 0.05 - 0.2 as more users contribute.

Q_reward: The range is −1 to +1, with a positive correlation with network improvement.

Network state distribution:

Early phase: The network state is fair with values 0.5 to 0.7.

Mid phase: Rapid infrastructure development is expected during this phase. The values are in the range of 0.7 to 0.9, and the state is considered good.

Final phase: The 6G infrastructure has matured. The network state is excellent, with values in the range of 0.9 to 1.0.

Application-Specific QoE: Application-specific data for the video streaming traffic type comprises video and context components. The video and context QoE are calculated on a 1 - 5 scale based on their parameters. The weight w_video is 0.7 and w_context is 0.3, to emphasise video quality.

The video components are:

VMAF: The range chosen is from 71.3 to 96.5.

Resolution: The resolutions chosen are 720p, 1080p, 1440p, and 2160p.

Codec: The chosen codecs are H.264, H.265, and AV1.

Bitrate: The range is from 2.9 to 35.2 Mbps.

Stalling ratio: Lower values are better, with range from 0.0001 to 0.0065.

The context components considered are:

Device Type: The devices chosen for the context are Mobile phones, Tablets, Desktop PCs, and Smart TV.

Network Conditions: The devices operate under Poor, Fair, Good, and Excellent network quality conditions.

User preferences: The user preference parameters are classified into Low, Medium, and High.

Context-Aware QoE: For the context-aware QoE, seven different context dimensions are considered. The service types include various video streaming services such as VOD (Video on Demand), live streaming, and gaming. The context factors are normalised to 0 - 1, with higher values indicating better conditions. Dynamic weights are added to reflect the relative importance of each context. The contexts considered are:

Location: The considered types are indoor, outdoor, office, and transit. It also factors in whether the device is stationary, walking, or moving. The location factor is given a weight of 0.15.

Time: The main factors in this context are the time of day and the congestion level. For the time of day, morning, afternoon, evening, and night are considered. The congestion levels are classified as low, medium, and high. This time component is given a weight of 0.10.

Device: The factors considered are battery, screen size, and CPU load. This factor's weight is 0.20.

Network: The network performance is considered equivalent to one of 4G, 5G, WiFi, or fibre. The congestion is in the range of 0.02 to 0.72. The network factor has the highest weight, 0.25.

User: Users are scored on their activity level as active or passive, with engagement levels of low, medium, and high. The weight for this factor is 0.15.

Environment: Lighting conditions, noise level, and stability (whether the environment is stable or moving) are considered and assigned a score. The factor weight is 0.10.

Social: The context considers whether the user is alone, in a group, online, or part of a team. The interactions are placed in the range of 0 - 85 for online streams. This factor has a weight of 0.05.

A QoE 5G baseline dataset is created considering the following factors:

Video quality: The resolution ranges from 1080p to 8K, with bitrates in the range of 8.5 - 98 Mbps.

Network performance: The delay is considered to be in the range of 0.3 - 26 ms, packet loss is in the range of 0.000001 - 0.0016, and jitter is in the range of 0.01 to 4 ms.

User Experience: For the user experience, buffering, startup delay, and MOS scores are considered.

Evolution: The evolution from 5G NSA to 5G SA, and then to 5G SA Advanced, is considered. This also takes into account the ongoing improvements in 5G over time.

Combined Model: The combined model combines QoE 6G and QoE 5G with the transition function η(t). The enhanced characteristics of the model are added to QoE 6G, while QoE 5G acts as the baseline.

QoE_Total = η(t)·QoE_6G + (1 − η(t))·QoE_5G

4.2. Data Simulation

A synthetic dataset of 60 video streaming sessions was generated to simulate the transition from 5G to 6G networks over 36 equal periods. Each observation contained randomly sampled values in accordance with each of the submodule formulations. The dataset is generated using a combination of deterministic trends and stochastic components to model network evolution during this phase. These values mimic real-world variability in user environments and network performance.
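A sketch of how such a dataset might be generated with NumPy and Pandas, combining a deterministic improvement trend with stochastic noise. The column names and trend shapes are assumptions on my part; the value ranges are taken from the study:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
T = 36                                   # transition periods
n_sessions = 60                          # video streaming sessions

rows = []
for s in range(n_sessions):
    t = int(rng.integers(1, T + 1))      # period of this streaming session
    progress = t / T                     # deterministic improvement trend
    rows.append({
        "session": s,
        "period": t,
        # delay shrinks from ~26 ms toward ~0.3 ms as the network matures
        "delay_ms": max(0.3, 26.0 * (1 - progress) + rng.normal(0, 1.0)),
        # jitter shrinks toward sub-millisecond values
        "jitter_ms": max(0.01, 4.0 * (1 - progress) + rng.normal(0, 0.2)),
        # packet loss decays exponentially over the transition
        "packet_loss": 0.0016 * np.exp(-4.0 * progress),
        # VMAF drifts upward within the 71.3 - 96.5 range used in the study
        "vmaf": float(np.clip(71.3 + 25.2 * progress + rng.normal(0, 2.0),
                              71.3, 96.5)),
    })

df = pd.DataFrame(rows)
df.to_csv("qoe_sessions.csv", index=False)   # the study stores datasets as .csv
```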

4.3. Metric Computation

Each observation’s parameters are combined and used to compute:

QoE_Impairment_Enhanced.

QoE_AHP_AI.

QoE_Application_Specific.

QoE_Context_Aware.

Then these results are combined to calculate:

QoE_6G.

QoE_5G is computed from the input data set and acts as a baseline.

The QoE Total is calculated from the above using the equation below.

QoE_Total = η(t)·QoE_6G + (1 − η(t))·QoE_5G

The datasets are created as .csv files, and the equations are implemented in Python using NumPy and Pandas for efficient numerical processing and data manipulation.

The QoE Impairment Enhanced component showed significant improvement during the transition period. There is a 57.7% improvement in the R_Total_adaptive value, which reaches 121.4 by the end of the period. This is due to sub-millisecond jitter control, which reduced jitter to 0.008 ms; ultra-low latency improvements, which decreased delay from 4.9 ms to 0.15 ms; and a 99.99% improvement in the packet loss rate. The adaptive transition function (η_adaptive) captured the nonlinear improvements during periods 12 - 24, where infrastructure improvements accelerated.

The QoE AHP AI component improved from 51.6% to 100% during the period. These improvements are due to dynamic weight adaptation (w1) from 0.2 to 1.0 as the system continues to learn optimal conditions. All sessions achieved excellent status by period 10. The federated learning impact (Q_reward) increased from 0.15 to 0.98 through optimisation improvements. Overall, the reinforcement learning mechanism identified video quality as the dominant factor for video services and adjusted weights to prioritise it over other factors while maintaining baseline performance.

The application-specific QoE is in the range of 0.55 - 0.83 during the early phase, reflecting basic streaming optimisations. This increased to 0.80 - 0.98 during the mid phase due to advanced codecs and adaptive bitrate algorithms. During the final phase, the score is 0.99+, meaning near-perfect application optimisation is achieved. There is a strong correlation between network conditions and application-specific QoE, and cross-layer optimisations are crucial in the 6G architecture.

The Context-Aware QoE values improved from 0.59 to 0.99. The location factor optimisations achieved 95% for stationary indoor scenarios. Resources are predicted and pre-allocated during peak hours, and dedicated resource allocation is optimised for group experiences. Battery consumption is reduced by 40% based on context.

The improvements in the QoE Impairment Enhanced, QoE AHP AI, application-specific QoE, and context-aware QoE components are captured in Table 1 and Figure 2.

Table 1. QoE component improvements.

Component/time | 1 | 12 | 24 | 36 | Improvement
QoE Impairment Enhanced | 0.770 | 0.817 | 0.986 | 1.214 | 57.7%
QoE AHP AI | 0.516 | 0.931 | 0.986 | 1.000 | 93.8%
QoE Application Specific | 0.625 | 0.939 | 0.990 | 0.999 | 59.8%
QoE Context Aware | 0.595 | 0.931 | 0.986 | 0.999 | 67.9%
QoE Total (6G) | 0.673 | 0.895 | 0.987 | 1.031 | 53.2%

Figure 2. QoE component evaluation.

The evolution of dynamic weights (α, β, γ, δ) in Table 2 shows intelligent system adaptation during the transition phase. In the early phase α (Impairment) is 0.35, decreasing to 0.11 by mid phase. This is because the impairment decreases as the network stabilises. In the final phase α is 0.067 which shows minimal impairment impact in mature 6G. The β (AHP_AI) starts at 0.20 in the early phase as there is limited AI contribution during this learning phase. In the mid-phase the value reaches 0.31 as learning improves and AI has a higher influence. In the final phase, β has a value of 0.350, showcasing significant AI driven optimisation. The γ (Application) is 0.25 in the early phase as there is only moderate application optimisation. The γ factor improved to 0.42 as optimisation grew in the mid phase. At the final phase, the value is 0.461, showing application layer dominance. The δ (Context) starts with a value of 0.20 in the early phase, showing basic context awareness. The value decreases to 0.16, showing optimised context handling during mid phase and 0.122 in the final phase. This evolution demonstrates the system’s ability to dynamically prioritise different QoE factors based on network improvements.

Table 2. QoE components weight distribution.

Period | Network state | α (Impairment) | β (AHP AI) | γ (Application) | δ (Context)
1 - 6 | Fair | 0.350 | 0.200 | 0.250 | 0.200
7 - 12 | Good | 0.250 | 0.250 | 0.300 | 0.200
13 - 18 | Excellent | 0.125 | 0.292 | 0.403 | 0.180
19 - 24 | Excellent | 0.106 | 0.311 | 0.422 | 0.161
25 - 30 | Excellent | 0.089 | 0.328 | 0.439 | 0.144
31 - 36 | Excellent | 0.067 | 0.350 | 0.461 | 0.122

Three transition functions, shown in Table 3, are evaluated for combining 5G and 6G QoE. First, the linear function (η = t/36) serves as a baseline for comparison; however, it is not realistic under real-world conditions, as it does not account for the acceleration phase observed in deployments. Next is the sigmoid function (η = 1/(1 + e^(−0.3 × (t − 18)))), which gives a more realistic QoE_Total evaluation, especially during the rapid transition phase. Finally, the adaptive transition (η_adaptive) factors in network deployment delays, user adoption, standardisation, and regulatory rulings.

Table 3. QoE transition parameters.

Period | η Linear | η Sigmoid | η Adaptive | QoE Linear (%) | QoE Sigmoid (%) | QoE Adaptive (%)
1 | 0.028 | 0.001 | 0.005 | 75.99 | 76.24 | 76.21
6 | 0.167 | 0.047 | 0.098 | 83.29 | 83.85 | 83.68
12 | 0.333 | 0.354 | 0.370 | 89.01 | 89.03 | 89.05
18 | 0.500 | 0.818 | 0.690 | 92.64 | 93.36 | 93.01
24 | 0.667 | 0.982 | 0.910 | 97.47 | 98.57 | 98.28
30 | 0.833 | 0.999 | 0.982 | 100.16 | 100.95 | 100.86
36 | 1.000 | 1.000 | 1.000 | 102.70 | 102.70 | 102.70

The simulation results are encouraging, showing a total QoE improvement of 35.6% during the transition phase: the starting 5G baseline QoE is 75.99% and the final QoE is 103.05% for full 6G.

4.4. Discussion

As discussed in the related works section, MOS is a standard metric historically used for subjective evaluation of voice quality [21]. The experiment reaffirms its relevance but builds upon it using more complex QoE methods. In line with the Simplified E-model introduced by [18], the experiment incorporates delay, codec loss, and jitter impairments, but adds further impairment factors such as sub-ms jitter, AI context, and 6G-specific parameters to calculate R-values. Additionally, the study takes into account other parameters, including the influence of AI, the application, and context, in the QoE calculations. All the QoE components are combined under transition functions to accurately model the transition phase from 5G to beyond-5G technologies.

This study's application-level QoE model aligns with approaches discussed in [20], where attributes like frame rate, resolution, and bitrate are used to quantify perceived video quality. However, unlike previous models that often treat these layers in isolation, the multi-component architecture presented here harmonises network-layer and application-layer evaluations, application-specific and context-aware dimensions, and network development phases into a holistic QoE_Total score, allowing it to accurately capture the progress of the transition from 5G to 6G technologies. This novel integration addresses limitations noted in prior works, where one-sided models lacked cross-layer correlation [23].

The results show a total QoE improvement of 35.6%, reaching 103.05% with full 6G implementation. This performance exceeding 100% is attributed to multiple mutually reinforcing factors captured in the simulation: the components in the proposed model reinforce each other for better end results; the model's AI optimisation predicts and prevents issues before they occur; and the context-aware component anticipates user needs and allocates resources accordingly. The proposed model highlights the significance of network transition factors in determining the total QoE.

The model shows a strong positive linear correlation between QoE 6G and time, with r = 0.98: as the components improve over time, QoE 6G also tends to increase, and p < 0.001 shows this correlation is statistically significant. The model also demonstrates a strong positive linear correlation between QoE Total and network condition, with r = 0.95 and p < 0.001: as network conditions improve, QoE Total tends to improve. For the component interdependencies, r > 0.7, denoting that the individual components show a high degree of positive interdependence; as one component increases, the others tend to change in the same direction.

The analysis shows that the QoE Total model explains 96.3% of the observed variance (R² = 0.963), meaning the variation observed can be explained by the model's variables: impairment, AHP AI, application-specific, and context-aware. The regression equation for QoE Total is:

QoE_Total = 0.42 + 0.23 × Impairment + 0.28 × AHP_AI + 0.31 × AppSpecific + 0.18 × Context

All coefficients are significant at p < 0.05.
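The fitted equation above translates directly into a one-line estimator. The coefficients are taken from the reported regression; the expected scale of the component scores is an assumption (the equation applies on whatever normalised scale the components were fitted on).

```python
def qoe_total(impairment, ahp_ai, app_specific, context):
    """QoE_Total via the fitted regression (R^2 = 0.963).
    Component scores are assumed to be on the model's normalised scale."""
    return (0.42
            + 0.23 * impairment
            + 0.28 * ahp_ai
            + 0.31 * app_specific
            + 0.18 * context)
```

This is the "simple regression equation" intended as a practical guideline: an operator can estimate QoE_Total from four measured component scores without re-running the full multi-component model.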

The key idea proposed by this model is transition stage analysis. The model illustrates the expected Total QoE when transitioning from a 5G-dominant phase to a fully operational 6G phase. A network metric comparison based on delay, jitter, packet loss, and bitrate in Figure 3 shows the performance improvements from initial 5G to full 6G. In the 5G-dominant phase, the network is driven by the existing 5G infrastructure with limited 6G support; the aim at this stage is basic optimisation in preparation for 6G. In the next phase, network infrastructure investments are critical, as the contribution of 5G fades and 6G starts to become relevant. In the emerging phase, 6G features start delivering superior QoE, and the AI development from the previous phase begins to deliver results. In the 6G-dominant and completion phases, advanced features are fully operational and maximum QoE is achieved. In practice, the completion phase is not truly final, as the network will eventually transition to future technologies; for simplicity, however, this study and experiment treat it as the full 6G phase.
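The phase structure described above can be sketched as a piecewise transition factor that blends the 5G baseline with the 6G score, alongside the sigmoid alternative the paper compares against. The phase breakpoints and slopes below are illustrative assumptions, not the paper's fitted values.

```python
import math

def sigmoid_transition(t, midpoint=0.5, steepness=10.0):
    """Classic sigmoid transition factor in (0, 1) over normalised time t."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

def adaptive_transition(t):
    """Piecewise-linear factor mirroring the deployment phases: slow start
    while 5G dominates, rapid growth in the emerging phase, saturation in
    the 6G-dominant/completion phase. Breakpoints are illustrative."""
    if t < 0.25:                       # 5G dominant: limited 6G support
        return 0.2 * (t / 0.25)
    elif t < 0.6:                      # emerging: 6G features deliver
        return 0.2 + 0.6 * (t - 0.25) / 0.35
    else:                              # 6G dominant and completion
        return 0.8 + 0.2 * (t - 0.6) / 0.4

def blended_qoe(qoe_5g, qoe_6g, t):
    """QoE_Total as a time-weighted blend of the 5G baseline and 6G score."""
    w = adaptive_transition(t)
    return (1.0 - w) * qoe_5g + w * qoe_6g
```

At t = 0 the blend returns the pure 5G baseline, and at t = 1 the pure 6G score; the piecewise form lets the weight track deployment milestones directly rather than forcing the symmetric S-shape of the sigmoid.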

Figure 3. Network metric comparison.

4.5. Implications for 5G and 6G QoE Modelling

This experiment’s results have significant implications for QoE modelling in next-generation networks. As outlined in the literature, 6G aims to support eMBB, mMTC, and URLLC services with varying degrees of latency sensitivity [2]. Real-time applications operating over 6G will thus require intelligent, adaptive QoE management systems that consider multiple parameters simultaneously. The findings from this study reinforce the need for composite models that include both network-layer impairments and application-layer configurations, particularly in environments with fluctuating network states.

4.6. Bridging the Research Gaps

Previous works often focused on singular dimensions of QoE modelling, either subjective assessment or QoS parameter correlation [28] [29]. This study fills that gap by integrating multiple parameters with application metrics and providing a composite QoE_Total indicator. It offers a data-driven framework applicable in real-time analytics environments, potentially integrated with 5G or 6G core elements such as NWDAF [21] for proactive QoE management.

Furthermore, while models like the Simplified E-model are foundational, they do not account for recent advancements in software, edge intelligence, and context-awareness. The proposed model accommodates these developments and prepares a pathway for incorporation into autonomous 6G architectures, where user experience will be AI-curated and context-sensitive [13].

4.7. Limitations

Although this study is comprehensive in factoring various components into the QoE Total calculation, it has several limitations. First, the data used is synthetic; real-world data may deviate from the controlled inputs. Second, the traffic patterns are drawn from video streaming; other applications could exhibit different patterns. Third, the 6G proposals are still evolving, so the assumptions made in the model could change. Finally, network deployment scenarios vary across geographical locations, so the timelines could differ.

5. Conclusions

This research set out to develop a composite Quality of Experience (QoE) model tailored to real-time services and applications operating over the 5G-to-6G transition, with applicability to future full-6G contexts. The primary objective was to bridge the methodological gap in QoE modelling by integrating network-layer impairments (delay, jitter, and packet loss) and application-layer features (bitrate, frame rate, and resolution), and then factoring in the AI, application-specific, and context-aware components of 6G. The multi-level model was estimated through a set of functional parameters and combined into a total QoE_Total grade with a built-in transition function. This strategy addresses the shortcomings of past works, which either focused on subjective approaches or considered QoS correlations without user-based or application-dependent measurements, and it also accounts for the transition factors that other models rarely consider.

The performance of the proposed model was validated by experimental findings on synthetically created data reflecting variations of possible operating scenarios for real-time multimedia services. The results showed a progression from 75.99% to 103.05% QoE, demonstrating the potential gains of 6G technologies.

Moreover, statistical validation through regression analysis confirms the interactions between the components of the model. The simplified regression equation provides a tool to estimate QoE without complex tools or analysis. These results confirm that QoE should not be modelled and optimised through a single component, as the components exhibit strong linear interdependence within QoE Total.

Many older models, for example those based solely on QoS metrics or restricted to subjective ratings, have not proven to be generalisable and adaptable. The multi-component approach discussed here can be scaled more effectively and is more applicable in the new mobile environment, especially as services become immersive, personalised, and latency-driven. Additionally, the presented model can be integrated with analytics-enabled 5G or 6G core components, such as the Network Data Analytics Function (NWDAF), to improve proactive QoE control and SLA enforcement.

This research proposes QoE Total as a comprehensive metric for the evaluation of networks beyond 5G. The model also enables operators to manage the transition parameters during a network transition with proactive QoE control, facilitating better planning, resource allocation, and achievement of the network's performance objectives. The dynamic adaptability of the model allows for the design of networks with context-aware, AI-driven intelligent systems.

Future research directions involve incorporating real-time, multi-service user-behaviour data into the proposed model's feedback loops and using the model's predictive analytics to adjust network settings proactively in anticipation of expected QoE deterioration. Future versions should incorporate augmented- and virtual-reality metrics, tactile responsiveness, and other sensory parameters so that the model can handle 6G service expectations. The integration of edge computing, mobility management, and multi-access edge QoE differentiation are further topics that can be explored to make the model more useful in complex service environments.

To conclude, the proposed QoE modelling framework offers a comprehensive, scalable, and forward-looking assessment of user experience in contemporary mobile networks. It combines technical and perceptual aspects, offering an effective tool for network optimisation and service quality assurance that establishes a baseline for QoE-aware architectures in 5G and opens the door to dynamic, user-adaptive experiences in 6G ecosystems.

Appendix

The transition-generation dataset and the implementation of the model in Python are available in the GitHub repository https://github.com/vinayasurya/5gto6gtransition/.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Taha, A.M. (2021) Quality of Experience in 6G Networks: Outlook and Challenges. Journal of Sensor and Actuator Networks, 10, Article No. 11.
https://doi.org/10.3390/jsan10010011
[2] Calabrese, F.D., Wang, L., Ghadimi, E., Peters, G., Hanzo, L. and Soldati, P. (2018) Learning Radio Resource Management in RANs: Framework, Opportunities, and Challenges. IEEE Communications Magazine, 56, 138-145.
https://doi.org/10.1109/mcom.2018.1701031
[3] Saxena, N., Rastogi, E. and Rastogi, A. (2021) 6G Use Cases, Requirements, and Metrics. In: Wu, Y.L., et al., Eds., 6G Mobile Wireless Networks, Springer International Publishing, 7-24.
https://doi.org/10.1007/978-3-030-72777-2_2
[4] Samsung (2020) Samsung’s 6G White Paper Lays out the Company’s Vision for the Next Generation of Communications Technology. Samsung Newsroom.
[5] Polese, M., Giordani, M., Mezzavilla, M., Rangan, S. and Zorzi, M. (2021) 6G Enabling Technologies. In: Wu, Y., Singh, S., Taleb, T., Roy, A., Dhillon, H.S., Kanagarathinam, M.R. and De, A., Eds., 6G Mobile Wireless Networks, Springer International Publishing, 25-41.
https://doi.org/10.1007/978-3-030-72777-2_3
[6] Ar, H. (2024) Radio Resource Allocation in Cellular V2X: From Rule Based to Reinforcement Learning Based Approaches. Friedrich-Alexander-Universitaet Erlangen-Nuernberg (Germany).
[7] GSMA (2019) The Mobile Economy. GSMA.
[8] J.R. (2025) Edge Computing Architecture and Applications: Definitive Reference for Developers and Engineers. HiTeX Press.
[9] Skorin-Kapov, L., Varela, M., Hoßfeld, T. and Chen, K.-T. (2018) A Survey of Emerging Concepts and Challenges for QoE Management of Multimedia Services. ACM Transactions on Multimedia Computing, Communications, and Applications, 14, Article No. 29.
https://doi.org/10.1145/3176648
[10] Cui, Q., You, X., Wei, N., Nan, G., Zhang, X., Zhang, J., et al. (2025) Overview of AI and Communication for 6G Network: Fundamentals, Challenges, and Future Research Opportunities. Science China Information Sciences, 68, Article ID: 171301.
https://doi.org/10.1007/s11432-024-4337-1
[11] Zhao, Y., Zhai, W., Zhao, J., Zhang, T., Sun, S., Niyato, D. and Lam, K.-Y. (2021) A Comprehensive Survey of 6G Wireless Communications.
[12] Chowdhury, M.Z., Shahjalal, M., Ahmed, S. and Jang, Y.M. (2020) 6G Wireless Communication Systems: Applications, Requirements, Technologies, Challenges, and Research Directions. IEEE Open Journal of the Communications Society, 1, 957-975.
https://doi.org/10.1109/ojcoms.2020.3010270
[13] Wu, W., Zhou, C., Li, M., Wu, H., Zhou, H., Zhang, N., et al. (2022) AI-Native Network Slicing for 6G Networks. IEEE Wireless Communications, 29, 96-103.
https://doi.org/10.1109/mwc.001.2100338
[14] Moreira, R., Carvalho, T.C.M., de Oliveira Silva, F., Agoulmine, N. and Martins, J.S.B. (2025) Towards Sustainability in 6G Network Slicing with Energy-Saving and Optimization Methods. International Workshop on ADVANCEs in ICT Infrastructures and Services, St Malo, 23-25 June 2025, 12132.
[15] Bruno, G.Z., et al. (2025) Towards Energy- and QoS-Aware Load Balancing for 6G: Leveraging O-RAN to Achieve Sustainable and Energy-Efficient 6G.
[16] Yamazaki, T. (2021) Quality of Experience (QoE) Studies: Present State and Future Prospect. IEICE Transactions on Communications, 104, 716-724.
https://doi.org/10.1587/transcom.2020cqi0003
[17] Blaunstein, N. and Yehuda, B. (2020) Advanced Technologies and Wireless Networks beyond 4G. John Wiley & Sons.
https://doi.org/10.1002/9781119692478
[18] Akbar, M.S., Hussain, Z., Ikram, M., Sheng, Q.Z. and Mukhopadhyay, S. (2022) 6G Survey on Challenges, Requirements, Applications, Key Enabling Technologies, Use Cases, AI Integration Issues and Security Aspects.
[19] ITU-T G.107 Standard, February 2014.
https://www.itu.int/rec/T-REC-G.107
[20] Zhao, Y., Zhai, W.C., Zhao, J., Zhang, T.H., et al. (2020) A Comprehensive Survey of 6G Wireless Communications.
[21] Yang, H., Alphones, A., Xiong, Z., Niyato, D., Zhao, J. and Wu, K. (2020) Artificial-Intelligence-Enabled Intelligent 6G Networks. IEEE Network, 34, 272-280.
https://doi.org/10.1109/mnet.011.2000195
[22] Cui, Z., Zhang, P. and Pollin, S. (2023) 6G Wireless Communications in 7-24 GHz Band: Opportunities, Techniques, and Challenges.
[23] Telagam, N., Kandasamy, N., Manoharan, A.K., Anandhi, P. and Atchudan, R. (2025) Beyond 5G: Exploring Key Enabling Technologies, Use Cases, and Future Prospects of 6G Communication. Nano Communication Networks, 43, Article ID: 100560.
https://doi.org/10.1016/j.nancom.2024.100560
[24] Zanbouri, K., Noor-A-Rahim, M., John, J., Sreenan, C.J., Vincent Poor, H. and Pesch, D. (2025) A Comprehensive Survey of Wireless Time-Sensitive Networking (TSN): Architecture, Technologies, Applications, and Open Issues. IEEE Communications Surveys & Tutorials, 27, 2129-2155.
https://doi.org/10.1109/comst.2024.3486618
[25] Naresh, M., Das, V., Saxena, P. and Gupta, M. (2022) Deep Reinforcement Learning Based QoE-Aware Actor-Learner Architectures for Video Streaming in IoT Environments. Computing, 104, 1527-1550.
https://doi.org/10.1007/s00607-021-01046-1
[26] Wang, C., You, X., Gao, X., Zhu, X., Li, Z., Zhang, C., et al. (2023) On the Road to 6G: Visions, Requirements, Key Technologies, and Testbeds. IEEE Communications Surveys & Tutorials, 25, 905-974.
https://doi.org/10.1109/comst.2023.3249835
[27] Khadem, M., Zeinali, F., Mokari, N. and Saeedi, H. (2024) AI-Enabled Priority and Auction-Based Spectrum Management for 6G. 2024 IEEE Wireless Communications and Networking Conference (WCNC), Dubai, 21-24 April 2024, 1-6.
https://doi.org/10.1109/wcnc57260.2024.10570588
[28] Yang, P., Xiao, Y., Xiao, M. and Li, S. (2019) 6G Wireless Communications: Vision and Potential Techniques. IEEE Network, 33, 70-75.
https://doi.org/10.1109/mnet.2019.1800418
[29] Giordani, M., Polese, M., Mezzavilla, M., Rangan, S. and Zorzi, M. (2020) Toward 6G Networks: Use Cases and Technologies. IEEE Communications Magazine, 58, 55-61.
https://doi.org/10.1109/mcom.001.1900411
[30] Saad, W., Bennis, M. and Chen, M. (2020) A Vision of 6G Wireless Systems: Applications, Trends, Technologies, and Open Research Problems. IEEE Network, 34, 134-142.
https://doi.org/10.1109/mnet.001.1900287
[31] Gui, G., Liu, M., Tang, F., Kato, N. and Adachi, F. (2020) 6G: Opening New Horizons for Integration of Comfort, Security, and Intelligence. IEEE Wireless Communications, 27, 126-132.
https://doi.org/10.1109/mwc.001.1900516
[32] Khan, L.U., Saad, W., Niyato, D., Han, Z. and Hong, C.S. (2022) Digital-Twin-Enabled 6G: Vision, Architectural Trends, and Future Directions. IEEE Communications Magazine, 60, 74-80.
https://doi.org/10.1109/mcom.001.21143

Copyright © 2025 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.