In this paper, we present a virtual desktop that uses a novel methodology and related metrics to benchmark thin clients based on Data Delivery Networks (DDN) in terms of scalability and reliability. Most studies of wireless networks focus mainly on system performance and on the power consumption of the circuit system; here, the workload is instead separated by the DDN into data operations and GUI operations. The communication protocol used for wireless communication plays a major role in energy consumption, among other important factors, and portable devices such as Personal Digital Assistants (PDAs) depend heavily on efficient energy use (power control) in wireless networks. Our main aims are energy efficiency, algorithmic efficiency, virtualization, and resource allocation. Saving energy and reducing the carbon footprint of wireless computing remains a challenging research direction, and this study also gives a brief account of wireless networks.
Most studies of wireless networks have focused on system performance and on the power consumption of the circuit system. There are also many studies of the hardware aspects of energy-efficient mobile communications, such as low-power electronics, power-down modes, and energy-efficient [
The rest of this paper is organized as follows: related work, existing work, virtualization-based QoS, system architecture, simulation results, and conclusion.
Wireless computing here refers to the environmentally sustainable use of computers and related resources, in which each resource can communicate efficiently and effectively within the network system. The main goals are to reduce the use of hazardous materials and to maximize energy efficiency [
The GUI (Graphical User Interface) workload can be split in two ways (client side and server side), and each side can interact with any node registered in the dynamic access network. A system administrator can combine many nodes (systems) into a single virtual machine, and the terminal then acts as a centralized server for the end user. The environment is brought together through thin clients [
Computer virtualization refers to running workloads on logical rather than physical computer systems. An administrator can combine several physical systems into a single virtual machine, which is a powerful arrangement: virtualization lets a single function consolidate a larger number of data workloads. Each data center can install a properly virtualized infrastructure that supports several applications per operating system while limiting energy consumption. There are several types of virtualization: server, application, network, storage, and desktop virtualization. The end user runs on a thin client machine.
A terminal can show images as well as text, and displays fall into two modes: vector mode and raster mode. A vector-mode display draws lines on the face of a cathode-ray tube under the control of the host computer; lines are drawn continuously, but the speed of the electronics is limited, and this mode is no longer used. A raster-mode display scans the picture, representing the visual content as a rectangular array of pixels. Since a raster image is perceived by the human eye only for a very short time, the raster must be refreshed many times per second to give the appearance of a persistent display. Most terminals today are graphical, i.e., they can show images on the screen; the modern term for a graphical terminal is "thin client". A thin client typically uses a protocol such as X11 for Unix terminals or RDP for Microsoft Windows, and the bandwidth needed depends on the protocol used [
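The bandwidth dependence on the protocol is easy to see from a back-of-the-envelope calculation: sending raw raster frames at a typical refresh rate would need far more bandwidth than protocols like X11 or RDP, which send drawing commands or compressed updates instead. A minimal sketch, where the resolution, colour depth, and refresh rate are illustrative values, not figures from this paper:

```python
# Rough estimate of the uncompressed bandwidth a raster display would need
# if every refresh were shipped over the network as raw pixels (no protocol
# compression). Resolution, depth, and refresh rate below are illustrative.

def raster_bandwidth(width, height, bits_per_pixel, refresh_hz):
    """Return (bytes per frame, bits per second) for an uncompressed raster."""
    bytes_per_frame = width * height * bits_per_pixel // 8
    bits_per_second = bytes_per_frame * 8 * refresh_hz
    return bytes_per_frame, bits_per_second

frame_bytes, bps = raster_bandwidth(1024, 768, 24, 30)
print(frame_bytes)   # bytes for one 1024x768 24-bit frame: 2359296
print(bps / 1e6)     # megabits per second at 30 Hz: ~566
```

Hundreds of megabits per second for raw frames is why thin-client protocols transmit only the changes or drawing primitives.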
An efficient algorithm is judged by the resources it needs to execute: the running time as a function of input length (time complexity) and the storage required (space complexity). By comparison, the Slack Reduction Algorithm (SRA) executes tasks at a lower frequency. Linear-algebraic approaches take a very different route to saving energy: the results of these procedures are modulated by a power-aware, energy-aware simulator in charge of scheduling, or mapping, the execution of tasks onto cores. The Integer bit Power Allocation Algorithm (IPAA) is an optimal algorithm that reaches its solution via the channel; it can be used to solve both the rate-maximization and the margin-maximization problem, and it consumes comparatively little energy. duEDF is a dynamic task-scheduling algorithm in which the combination of CPU energy and device energy is considered by CuSYS and duSYS. The CTRB and EPCLB algorithms show maximum power consumption varying from 10% to 40.3%.
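The slack-reduction idea mentioned above can be sketched in a few lines: when a task has slack before its deadline, it can run at a lower frequency, and dynamic power grows with frequency, so the scheduler picks the lowest frequency that still meets the deadline. The frequency set and the task parameters below are hypothetical, chosen only to illustrate the principle:

```python
# Illustrative sketch of slack reduction: run each task at the lowest
# available frequency that still meets its deadline. The frequency levels
# and the example tasks are made up for the illustration.

FREQUENCIES = [0.5, 0.75, 1.0]  # available speeds, as fractions of full speed

def pick_frequency(work_units, deadline):
    """Lowest frequency f such that work_units / f <= deadline."""
    for f in FREQUENCIES:
        if work_units / f <= deadline:
            return f
    return FREQUENCIES[-1]  # nothing meets the deadline; run at full speed

# A task needing 2 units of work with a deadline of 4 can run at half speed.
print(pick_frequency(2, 4))    # 0.5
# A tighter task must run at full speed.
print(pick_frequency(3, 3.2))  # 1.0
```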
Design for Disassembly (DFD) is an essential issue for product end-of-life management. It involves two major procedures: disassembly analysis (DA) and disassembly process planning (DPP). DPP is a "bottom-up" design activity covering the number, type, and nature of connections, details of parts and assembly, and so on, whereas in a "top-down" approach, DA is the single crucial procedure in the course of product creation. It uses a wave-propagation algorithm to find a disassembly sequence that minimizes the disassembly cost. DFD has been intensively studied [
The data card has two modules: the first module, the data node, is connected to the microprocessor, and the other node is connected to the Graphical User Interface. The architecture focuses mainly on long-term efficiency, algorithmic efficiency, resource allocation, virtualization, and power consumption.
Data Quality = [Data Transferred (aggregate fps) / (Render Time (aggregate fps) × Ideal Transfer (aggregate fps))] ÷ [Data Transferred (atomic fps) / (Render Time (atomic fps) × Ideal Transfer (atomic fps))]
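Reading the data-quality metric as a ratio of normalized transfer rates, aggregate (full-speed) playback over atomic (slow-motion) playback, it can be computed directly; the grouping of terms and all sample values here are our reading of the formula, not figures from the paper:

```python
# Data-quality metric as one reading of the formula above: the normalised
# transfer rate at aggregate (full-speed) playback divided by the same
# quantity at atomic (slow-motion) playback. Sample values are illustrative.

def normalised_rate(data_transferred, render_time, ideal_transfer):
    return data_transferred / (render_time * ideal_transfer)

def data_quality(agg, atomic):
    """agg and atomic are (data_transferred, render_time, ideal_transfer)."""
    return normalised_rate(*agg) / normalised_rate(*atomic)

# Full-speed playback moved half the data in the same time -> quality 0.5.
print(data_quality(agg=(50.0, 10.0, 100.0), atomic=(100.0, 10.0, 100.0)))
```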
For a resource r_i at any given time, the utilization U_i is defined as

U_i = Σ_{j=1}^{n} t_j

where
n ― the number of tasks at running time,
U_i ― the resource usage of the tasks,
t_j ― the share of task j on the minimum-energy process p_min.
The energy consumption E_i of a resource r_i at any given time is defined as

E_i = (p_max − p_min) × U_i + p_min

a linear utilization-based model rising from p_min at idle to p_max at full load. In desktop virtualization, desktop resources are centralized into one or more data centers. The benefits of centralization are hardware resource optimization, reduced software maintenance, and improved security. The paper reaches these goals through the DDN algorithm.
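The utilization and energy definitions above can be sketched as a small utilization-to-power model; the linear form and the idle/peak wattages below are common assumptions in consolidation studies, not measurements from this paper:

```python
# Linear utilisation-to-power model, a common assumption in consolidation
# studies: power rises linearly from P_MIN (idle) to P_MAX (fully loaded).
# The wattages below are illustrative, not measurements from the paper.

P_MIN = 70.0   # watts drawn by an idle server (assumed)
P_MAX = 250.0  # watts drawn at full utilisation (assumed)

def power(utilisation):
    """Instantaneous power draw for utilisation in [0, 1]."""
    u = min(max(utilisation, 0.0), 1.0)
    return (P_MAX - P_MIN) * u + P_MIN

def energy(utilisation, seconds):
    """Energy in joules over an interval of constant utilisation."""
    return power(utilisation) * seconds

print(power(0.0))         # 70.0  (idle)
print(power(1.0))         # 250.0 (fully loaded)
print(energy(0.5, 3600))  # one hour at 50% load, in joules: 576000.0
```

Under this model, consolidating load onto fewer servers saves energy because each powered-on server pays the fixed idle cost P_MIN.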
In thin-client network computing, devices are centralized on a server accessed by client machines (systems connected to the network). The central server performs most of the computing tasks, stores the data, and hosts all the applications. Outdated desktop machines can keep running the applications, and virtualization technology helps control the footprint by saving both hardware and software resources through virtualized operating systems and peripheral devices. After the data and GUI are separated at the terminal server, the data is handled on the server side and the GUI runs on the client side (the thin-client concept). When there is a need for a virtualized application [
Algorithm for DDN
Input: Data I_j and a set of r video-streaming resources (Modified Best Fit Decreasing)
Output: Consolidated Data and GUI for power management
1.  let r* ← split
2.  for each r ∈ R do
3.      compute the power-function value on resource R_i
4.      if r belongs to the client then
5.          if Vm belongs to the server then
6.              do
7.                  split data d_i and GUI_i
8.                  if VmList is non-empty then
9.                      sort VmList by decreasing energy utilization
10.                     for each Vm in VmList do
11.                         do
12.                             minPower ← MAX
13.                             allocatedClient ← GUI
14.                             if power ← estimatePower(client, Vm) then
15.                                 allocatedServer ← data
16.                             end if
17.                     end for
18.                 end if
19.         end if
20.     end if
21.     compute the cost value of the function
22.         for consolidating d_i and GUI_i
23. end for
24. return the energy consumption
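A minimal Python sketch of the flow the pseudocode describes, under our reading of it: each request is split into a data part (handled server-side) and a GUI part (handled by the thin client), and the data parts are consolidated onto as few servers as possible with a best-fit-decreasing packing so that unused servers can stay powered down. All names, records, and capacities here are hypothetical:

```python
# Sketch of the DDN idea: split each request into data (server side) and GUI
# (client side), then consolidate the data loads onto the fewest servers via
# best-fit decreasing. Request records and capacities are hypothetical.

def split(request):
    """Separate a request into its data and GUI portions."""
    return request["data"], request["gui"]

def best_fit_decreasing(loads, capacity):
    """Pack loads onto the fewest bins (servers) of the given capacity."""
    bins = []  # remaining capacity per powered-on server
    for load in sorted(loads, reverse=True):
        # best fit: the server whose remaining space is smallest but sufficient
        candidates = [i for i, free in enumerate(bins) if free >= load]
        if candidates:
            best = min(candidates, key=lambda i: bins[i])
            bins[best] -= load
        else:
            bins.append(capacity - load)  # power on a new server
    return len(bins)

requests = [{"data": 4, "gui": 1}, {"data": 7, "gui": 2}, {"data": 3, "gui": 1}]
data_loads = [split(r)[0] for r in requests]  # GUI parts stay on thin clients
print(best_fit_decreasing(data_loads, capacity=10))  # servers needed: 2
```

Sorting in decreasing order before placement mirrors step 9 of the pseudocode (VmList sorted by decreasing energy utilization).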
Search engines' initial pages are generally simple pages with large white areas, and a mostly white screen produces more brightness and draws more power (in watts) than a black screen. The simulation results for power consumption show better performance than the existing module; in particular, the simulator separately evaluates DDN power consumption against simulation time, DDN power consumption against number of nodes, DDN startup delay against simulation time, and DDN startup delay against number of nodes. Power consumption therefore remains a challenging issue in recent wireless-network trends, and the black-versus-white screen difference in watts is one consideration. For many years, large volumes of (energy-efficient) computers have been used in offices, businesses, hotels, institutions, and so on, even though a personal computer consumes relatively little power (about one third of a lamp bulb). Thin-client devices can run a limited operating system (e.g., Linux, with a modest processor and RAM) and have no moving parts or fan.
We compare the performance of DDN with a state-of-the-art solution, LEACH, and with Chen et al. (1998). DDN and LEACH are deployed in the wireless network whose settings are described above, and both are modeled and implemented in NS-2. The DDN server stores a larger set of files, and the length of each file is set to minimal graphical values. The initial target location and speed of the mobile nodes are randomly assigned; when a mobile node arrives at its assigned target location, it continues to move according to a newly assigned target location and speed. We generate the information for 70 nodes (including name and introduction) and 10,000 playback logs (including played ID and time), which are used to calculate the access probabilities between nodes. More than 100 mobile nodes join the system following the distribution and play content following 300 generated logs, where the popularities of the played content follow the distribution and 70 nodes play 3 files during the simulation time.
When a node finishes playback, it quits the system. In DDN, the threshold value is set between 0.5 and 6. Before the simulation starts, the chain-based tree structure of DDN is built and the logical relationships between nodes are defined. When mobile nodes join the system, they form communities corresponding to the played node.
The performance of DDN is compared with that of LEACH in terms of power consumption, startup delay, packet loss rate, and simulation time.
In DDN, nodes with long online times are grouped into an AVL tree, while other nodes that store the same content as the nodes in the tree form chains attached to it. At the start of the simulation, the nodes that joined the system first form the AVL tree and obtain their resources from the media server. As the number of nodes increases, the new system members have relatively sufficient available resources. Changes in user interest in the content bring uncertainty to the resource demand: some nodes still do not obtain the requested resources from the P2P network and receive data only from the server. DDN therefore groups the nodes into a chain-based tree structure and lets nodes form communities corresponding to the played nodes. Moreover, nodes pre-fetch content of interest into a local buffer and use the pre-fetched resources to serve other nodes; the pre-fetched content thus increases the available resources in the P2P network.
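One way to picture this grouping: long-lived nodes go into a balanced (AVL-style) index, and the remaining short-lived nodes holding the same content are chained under the matching content entry. The online-time threshold and the node records below are made up for the illustration, and a sorted list stands in for the AVL tree:

```python
# Illustrative grouping of peers as described above: nodes with long online
# time go into a balanced index (a sorted list stands in for the AVL tree),
# and short-lived nodes holding the same content form chains keyed by that
# content. The threshold and node records are hypothetical.

import bisect

ONLINE_THRESHOLD = 600  # seconds of online time to qualify for the tree

def build_overlay(nodes):
    tree = []    # sorted (content_id, node_name) pairs, the AVL stand-in
    chains = {}  # content_id -> list of short-lived nodes (the chains)
    for node in nodes:
        if node["online"] >= ONLINE_THRESHOLD:
            bisect.insort(tree, (node["content"], node["name"]))
        else:
            chains.setdefault(node["content"], []).append(node["name"])
    return tree, chains

nodes = [
    {"name": "a", "online": 900, "content": 1},
    {"name": "b", "online": 120, "content": 1},
    {"name": "c", "online": 800, "content": 2},
    {"name": "d", "online": 60,  "content": 2},
]
tree, chains = build_overlay(nodes)
print(tree)    # long-lived nodes, ordered by content id
print(chains)  # short-lived nodes chained under their content id
```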
Startup Delay: The difference between the time a request message is sent and the time the first data is received is defined as the startup delay. The mean startup delay over a time interval of 2 s, and the process of nodes joining the system, are shown in
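The startup-delay measurement as defined above can be sketched directly: each sample is the gap between the request time and the first-data time, averaged per 2-second window. The event pairs below are illustrative, not the simulation's data:

```python
# Startup delay as defined above: the gap between sending a request and
# receiving the first data, averaged over 2-second intervals. The sample
# (request_time, first_data_time) pairs are illustrative.

from collections import defaultdict

def mean_startup_delay(events, interval=2.0):
    """events: list of (request_time, first_data_time) pairs."""
    buckets = defaultdict(list)
    for req, first in events:
        buckets[int(req // interval)].append(first - req)
    return {b: sum(d) / len(d) for b, d in sorted(buckets.items())}

events = [(0.1, 0.5), (1.0, 1.8), (2.2, 2.6), (3.0, 3.2)]
print(mean_startup_delay(events))  # mean delay per 2 s window
```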
Maintenance Cost: This is defined in terms of control messages (node joining, node leaving, and lookup) in the P2P network. In addition, the maintenance cost of tree traversal (in-order, pre-order, and post-order) gives a better result than the chain process: the chain incurs a higher traversal cost than tree traversal. Figures 4(a)-(d) show that the two curves corresponding to DDN and LEACH have three fluctuation phases over the whole simulation time. The blue DDN curve fluctuates slightly from t = 2 s to t = 20 s, then starts to increase quickly, and falls fast from t = 2 s to t = 4 s. The red LEACH curve rises slightly and decreases quickly in some intervals, while also keeping a fast overall rising trend.
In this paper, we discussed how to define the power efficiency of a video DDN computing framework and its key impacting factors, especially the local cache hit ratio and power proportionality. The DDN algorithm can split the data and store it in the data center, and the data center can then serve the data (on a proper request from the user) through the DDN from the server side to the client side. In particular, the thin-client infrastructure runs the application only on the server side: keystrokes and mouse clicks are sent over the network to the server, which processes them and sends back the result (the screen). The client can be a low-powered PC or a thin-client device; it has no HDD, FDD, CD-ROM, or cooling fan, and only very low processing power. The result is a thin client within the DDN framework. The simulation graphs show the evaluated results for DDN power consumption against simulation time, DDN power consumption against number of nodes, DDN startup delay against simulation time, and DDN startup delay against number of nodes.
R. Jegadeesan and N. Sankar Ram (2016) Energy Consumption Power Aware Data Delivery in Wireless Network. Circuits and Systems, 7, 2829-2836. doi: 10.4236/cs.2016.710241