Real-time video application usage is increasing rapidly. Hence, accurate and efficient assessment of video Quality of Experience (QoE) is a crucial concern for end-users and communication service providers. After considering the relevant literature on QoS, QoE and the characteristics of video transmissions, this paper investigates the role of big data in video QoE assessment. The impact of QoS parameters on video QoE is established based on test-bed experiments. Essentially, big data is employed as a method to establish a sensible mapping between network QoS parameters and the resulting video QoE. Ultimately, based on the outcome of the experiments, recommendations/requirements are made for a Big Data-driven QoE model.
This paper presents a brief outline of Quality of Experience (QoE) in video traffic and describes how big data can provide a possible solution to the challenges in video QoE assessment. We have carried out an experiment to gain an understanding of how we can apply the enormous amounts of data available to us when video is delivered over the Internet. Using this, we make recommendations for the creation of future QoE models, particularly with big data in mind.
Video traffic is forecast to make up 80% - 90% of global consumer Internet traffic by 2018 [
・ Identifying, isolating and fixing problems―Effective QoE measurement can help end-users determine whether problems exist in their home network, their provider's network or third-party application services. From an operator's perspective, a complementary understanding of end-user experience can help identify and fix network issues more quickly and also leads to better, more concise notification of affected end-users.
・ Design and planning―By monitoring end-user experience, providers can design and plan their networks in accordance with levels of user expectation and service level agreements. The information gained from QoE assessment can also be used in proactive design and planning activities. Expanding on this point, [
・ Understanding the quality experienced by customers―Network operators can gain a better insight into the end-to-end performance experienced by their customers. This allows operators to provide better services to their customers and gives senior managers who make investment decisions a better understanding.
・ Understanding the impact and operation of new devices and technology―As new products or technologies are deployed into network infrastructures, it is essential that their operational impact can be measured and evaluated. Quantifying these new implementations can also lead to more informed decision-making for larger, more widespread rollouts.
Quality of Service (QoS) has been defined by the International Telecommunications Union (ITU) as the, “Totality of characteristics of a telecommunications service that bear on its ability to satisfy stated and implied needs of the user of the service” [
As mentioned previously QoS is mostly thought to be inferred from network performance indicators. In [
・ Bit Rate―Also known as throughput or, more commonly, bandwidth; the rate at which data can be transferred over an end-to-end link.
・ Delay―This is the time it takes for a packet to traverse the network or a segment of a network. It is often expressed as latency, which is the time it takes for a data packet to get from one designated point of a network to another [
・ Jitter―The variation in packet delay, i.e. the range between the maximum and minimum delay [
・ Packet Loss―Most commonly expressed as a percentage, packet loss refers to the number of packets lost over a period of transmission. Packet loss tends to be broken down into two forms, burst and random [
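As a concrete illustration, the four parameters above can be derived from a packet trace. The sketch below assumes a hypothetical trace format (per-packet sequence number plus sent/received timestamps); the field names are illustrative, not taken from any particular tool.

```python
# Sketch: deriving the four QoS parameters of Section 2.2 from a
# hypothetical packet trace. Field names are illustrative.

def qos_from_trace(packets, sent_count, duration_s, bytes_received):
    """packets: received packets as dicts with 'seq', 'sent' and 'received' times (s)."""
    delays = [p["received"] - p["sent"] for p in packets]
    delay_ms = 1000 * sum(delays) / len(delays)       # mean one-way delay
    jitter_ms = 1000 * (max(delays) - min(delays))    # max-minus-min range, as defined above
    loss_pct = 100 * (sent_count - len(packets)) / sent_count
    bitrate_bps = 8 * bytes_received / duration_s     # achieved throughput
    return {"delay_ms": delay_ms, "jitter_ms": jitter_ms,
            "loss_pct": loss_pct, "bitrate_bps": bitrate_bps}

trace = [{"seq": 1, "sent": 0.00, "received": 0.04},
         {"seq": 2, "sent": 0.02, "received": 0.07},
         {"seq": 4, "sent": 0.06, "received": 0.12}]  # seq 3 was lost
print(qos_from_trace(trace, sent_count=4, duration_s=0.12, bytes_received=4500))
```

For this toy trace the mean delay is 50 ms, the jitter 20 ms, and one of four packets (25%) was lost.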
These parameters can greatly impact end-user video output. End-users often describe the resulting undesired effects as follows:
・ Blocking―Video coding is block-based, so data loss or coding errors caused by network performance issues appear as visible block artefacts.
・ Blurring―Seen as a loss of spatial information/features; edges around a scene or object become indistinguishable.
・ Edginess―Refers specifically to edges in comparison with the original video; objects within the content have irregular edges.
・ Motion Stutter―Usually evaluated by comparing real time against video time via sequence numbering. Content will often freeze or skip segments; this relates to the frame rate (FPS).
The term Quality of Experience has seen increased usage in research, consumerism and industry. The phrase in itself indicates an impact on end-users, meaning how an Internet service is experienced. The ITU define QoE as, “The overall acceptability of an application or service, as perceived subjectively by the end-user” [
With end-users playing such a key role in the assessment of QoE, subjective testing is a natural progression. Perceived video quality is by nature subjective, and the most obvious and simple way to capture an actual end-user's perceived quality is to ask them. As described in [
The ITU has standardised subjective testing methods for multimedia application in P.910 [
Although subjective testing provides the most accurate indication of user-perceived quality, it also has various issues that should be considered. The issues most commonly associated with subjective testing relate to time and manpower [
In order to create reliable QoE predictions, as well as to eliminate the negative aspects of subjective testing, objective QoE models are used. Objective models are computation-based but retain the primary goal of predicting perceived end-user video quality. Authors in [
The most commonly used classification of objective models is based on the amount of reference information they require:
・ Full Reference (FR): Full access to original unaltered source video sequence.
・ Reduced Reference (RR): Partial video information is required, usually from the destination output.
・ No Reference (NR): No access to original source video is required.
As both FR and RR models require access to video output, they are usually based on a comparison approach where the original sequence is compared against a processed video sequence. For this reason they are often considered intrusive methods [
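To illustrate the full-reference comparison approach, the sketch below computes PSNR, one widely used FR metric; the text does not specify which FR method is meant here, so this is purely illustrative.

```python
# Sketch of the FR comparison idea using PSNR (peak signal-to-noise
# ratio), a common full-reference metric; illustrative only.
import math

def psnr(original, processed, max_val=255):
    """PSNR between two equal-length sequences of pixel values."""
    mse = sum((o - p) ** 2 for o, p in zip(original, processed)) / len(original)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * math.log10(max_val ** 2 / mse)

ref = [100, 120, 130, 140]       # original (unaltered) pixel values
deg = [98, 121, 128, 141]        # processed values with small errors
print(round(psnr(ref, deg), 2))  # roughly 44 dB for this toy frame
```

A higher PSNR means the processed sequence is closer to the original; an NR model, by contrast, must estimate quality without `ref` at all.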
The previous section provided a brief overview of QoE models. Authors of [
1) FR models cannot be implemented in real time due to their complexity and the need for a full reference sequence.
2) RR models, although not needing the original sequence, still require resources such as side channels to extract information about the video sequence.
3) Models based on subjective testing and the Human Visual System (HVS), although accurate, require extensive preparation and validation and are often very complex.
4) In contrast, NR/engineering approaches have lower complexity, but reduced accuracy, and are only accurate for specific data sets.
5) Models are usually evaluated on the data sets they were created from. Moreover, the subjective tests they are based on are often specific to certain criteria, i.e. viewer demographic, viewing time, etc.
Increased delivery of video over the Internet has created a surplus of data available for analysis. Improving end-user QoE has also become a crucial aspect of service agreements. Utilizing this surplus of data therefore provides an added monetization incentive that can benefit both deliverer and receiver.
The available data captures aspects of viewer behaviour. Metrics have been defined as follows:
Viewer-Session Metrics (VSMs):
・ Viewing time per view―the time a user watches a video, expressed as a ratio of the full video length.
・ Abandoned view ratio―the proportion of initiated views that are abandoned, expressed as a percentage.
Viewer-Level Metrics (VLMs):
・ Number of views―the number of views a certain video has (at the current time, if measured in real time).
・ Viewing time per visit―the ratio of viewing time to the time since the view was initiated.
・ Return/refresh rate―an indication of viewer frustration due to reduced QoE.
・ Video rating―the user rating given at the end of transmission.
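A minimal sketch of how the viewer-session metrics above could be computed from session logs; the record fields (`watched_s`, `abandoned`) are hypothetical names, not from any real player API.

```python
# Sketch: computing viewer-session metrics (VSMs) from hypothetical
# per-session records. Field names are illustrative.

def viewer_session_metrics(sessions, video_length_s):
    """sessions: list of dicts with 'watched_s' (seconds) and 'abandoned' (bool)."""
    # Viewing time per view: watched time as a ratio of full video length.
    viewing_ratios = [s["watched_s"] / video_length_s for s in sessions]
    # Abandoned view ratio: percentage of initiated views that were abandoned.
    abandoned_pct = 100 * sum(s["abandoned"] for s in sessions) / len(sessions)
    return viewing_ratios, abandoned_pct

sessions = [{"watched_s": 120, "abandoned": False},
            {"watched_s": 30,  "abandoned": True},
            {"watched_s": 90,  "abandoned": False},
            {"watched_s": 10,  "abandoned": True}]
ratios, abandoned = viewer_session_metrics(sessions, video_length_s=120)
print(ratios, abandoned)  # 2 of 4 initiated views were abandoned (50%)
```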
Aside from the viewer metrics available, data-driven QoE has focused more on the use of the extensive QoS metrics available; these include:
・ Startup delay―the time between a user request and initiation of a video.
・ Re-buffering―how long a video stream is paused to ensure content is delivered, otherwise known as stuttering; both the frequency and the duration of re-buffering events are considered.
・ Average bitrate―the average rate at which video content is delivered to the screen, dependent on video encoding/decoding, network conditions and possibly hardware statistics.
・ Previous QoS metrics―As discussed in Section 2.2.
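These playback-side metrics can be sketched in the same way; the event names below (`request_t`, `first_frame_t`, `stalls`) are assumptions for illustration, not a real player's event model.

```python
# Sketch: deriving startup delay, re-buffering statistics and average
# bitrate from hypothetical player events. Names are illustrative.

def playback_metrics(request_t, first_frame_t, stalls, bits_played, play_time_s):
    """stalls: list of individual stall durations in seconds."""
    startup_delay_s = first_frame_t - request_t       # request -> first frame
    rebuffer_count = len(stalls)                      # how often playback stalled
    rebuffer_ratio = sum(stalls) / play_time_s        # stalled time vs playback time
    avg_bitrate_bps = bits_played / play_time_s       # delivered bits per second
    return startup_delay_s, rebuffer_count, rebuffer_ratio, avg_bitrate_bps

m = playback_metrics(request_t=0.0, first_frame_t=1.8,
                     stalls=[0.5, 1.5], bits_played=12_000_000, play_time_s=40.0)
print(m)  # 1.8 s startup, 2 stalls, 5% of playback time stalled, 300 kbps
```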
Video QoE analysis has seemingly come full circle, with network QoS-to-QoE mapping again taking priority so that real-time, real-world analysis can take place. The priority now is accuracy and efficiency.
An experiment was carried out to confirm the influence of QoS on end-user QoE and to gain an idea of the challenges and process of applying big data to QoE assessment with the data available to us. We utilize the easily influenced and monitored QoS metrics stated in Section 2.2; specifically, we degrade video output with delay and packet loss. In total, six videos of 10 - 12 seconds were streamed under various conditions, at resolutions ranging from 176 × 144 to 640 × 480. The QoE score was then evaluated using an FR objective method described in Section 3.4. The scale used for video QoE is 0 - 50, translating to 0 - 5 from an end-user subjective standpoint as described in Section 3.2.
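The 0 - 50 objective score is reduced to the familiar 0 - 5 subjective scale; a simple linear mapping is assumed in the sketch below, since the exact translation function is not given here.

```python
# Sketch: translating the 0 - 50 objective QoE score to the 0 - 5
# subjective scale. A linear divide-by-ten mapping is assumed.

def to_subjective(score_0_50):
    """Clamp the objective score to [0, 50], then scale to a 0 - 5 rating."""
    return max(0.0, min(50.0, score_0_50)) / 10

print(to_subjective(5.2))  # the worst tested condition maps to ~0.5 of 5
```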
We mapped results against increasingly degraded network conditions, as seen in Figures 4-7. Overall, the outcome is as expected: as network QoS conditions deteriorate, end-user QoE scores decrease. Notably, delay and packet loss as single affecting network QoS parameters produce very similar end-user QoE outputs. Combining them increases the impact on video QoE; under the worst condition we tested, the overall QoE rating averages 5.2. Something to consider is the initial streaming implication, where coding, compression and decompression have an important impact on video QoE scores.
With an insight to QoE models gained we follow the ideas set out by works [
・ Requirements for a QoE Metric:
○ Quantifiable―Easily viewed and quantified in real terms.
○ Accurate―The output should be an accurate representation of end-user QoE.
○ Informative―It has some real-world use to industry and is indicative of what it is representing.
○ Fit for Purpose―Question the purpose of the output: what/whom does it serve, and does it meet the specific needs?
・ Requirements for a QoS to QoE Mapping Model based on Big Data:
○ Consistent―Is the output consistent with the input to the model and with expected results?
○ Expressive―Is the relationship between QoS and QoE shown appropriately and accurately?
○ Real-time―Video is inherently real-time; the solution should translate to this.
○ Scalable―The Internet is ever-growing; the model should be adaptable to this fact.
○ Correct Flagging―When is a QoE result considered an issue? This should be accounted for.
○ Simplicity―The relationship of QoS to QoE can become very complex; it should be kept as simple as possible whilst retaining accuracy.
The main goal of this paper was to determine how big data can be used to assess end-user perceived quality accurately, without the usual drawbacks. The experiment uses accessible QoS parameters to gain an understanding of how available data can be applied when a big data QoE model is created. With insight from the literature and the experience gained in the experiment, we achieve the goal of the paper by providing core recommendations for a big data-driven QoE model. The scope of the experimental findings was limited, as we included only two parameters in testing, but they still provide an effective foundation: the recommendations can be followed when developing a new big data-driven QoE model. Future work will entail increasing the number of parameters used, extending testing to higher-resolution videos and adapting the QoE output to predict end-user quality from various parameters obtainable in real time.
Ethan Court, Kapilan Radhakrishnan, Kemi Ademoye, Stephen Hole (2016) Recommendations for Big Data in Online Video Quality of Experience Assessment. Journal of Computer and Communications, 4, 24-31. doi: 10.4236/jcc.2016.45004