Application of XenServer in Computer Laboratory

Abstract

In the teaching of computer network operating systems, teachers need to set up the teaching environment on independent servers. Teachers and students also often need to carry out software development, testing and other scientific research work under different network operating system environments. At present, most servers are occupied by the school's core application systems, and the remaining server resources are insufficient to meet teachers' needs for teaching and scientific research. This paper explores the use of Citrix's XenServer to virtualize the servers, so as to solve the problem of insufficient server resources for teaching and scientific research.


1. Introduction

Courses related to network operating systems are offered in every major of the school's information engineering department. Usually, desktop virtualization software such as VMware Workstation is used to create virtual machines on PCs to simulate multiple operating system environments for experiments. With the upgrading of network operating systems and the higher teaching requirements of professional courses, this simulated environment can no longer support the comprehensive experiments of the relevant courses because the PCs' hardware resources are insufficient. At the same time, the school's core application systems must keep running normally, so their servers cannot simply be diverted to teaching. In scientific research, teachers and students need to develop and test on different network operating system platforms, which requires the support of independent servers, yet server resources are quite limited: in practice, one or two application systems typically occupy a whole server, which wastes server resources. This paper explores the use of Citrix XenServer for server virtualization to improve the utilization of server resources, optimize the teaching and experimental environment, and meet teachers' and students' needs for server use.

2. Overview of Existing Laboratory Environment

2.1. Current Situation

There are six computer labs in our school, each with an average of 40 stations. The stations are connected to H3C Layer 2 access switches, and the core switch is an H3C S5560-SI; the management network address is 172.16.0.0/24. The experimental teaching of operating system courses usually requires two laboratories.

2.2. Main Problems Existing at Present

1) Insufficient PC hardware resources

Even the best-configured PCs in our teaching labs have only a dual-core Core i5 CPU, a 500 GB hard disk and 4 GB of RAM. When a virtualized environment is built on a single PC, trial runs show that it is too slow to complete all of the teaching tasks. A comprehensive upgrade of the laboratory PCs would require too large an investment and would be quite time-consuming.

2) Comprehensive experiments cannot be continuous

For convenience of management, all machines in the school's computer laboratories are installed with a hard disk protection system: after a machine is shut down or loses power, everything is restored to the initial state. As the course progresses, students usually cannot finish the experimental content within one class period and have to rebuild the experimental environment in the next class. Although this lets them review the previous content, it wastes time, slows the teaching progress and is not conducive to mastering the new content.

3) Teaching flexibility is restricted

The virtual experiment environment built on a single PC is not conducive to the teacher's control of the experiment process, nor to communication and interaction among students.

3. Design Scheme [1]

To solve the above problems, we could use XenServer to achieve server virtualization. The requirements are as follows:

1) Set up one virtual server for every two stations, which means that 20 virtual machines should be built on each physical server. Windows Server 2008 R2 or CentOS 6.5 should run on the virtual machines.

2) Use two physical servers (ThinkServer RQ940) to install XenServer for virtualization. Each ThinkServer RQ940 is equipped with four Intel Xeon E7-4820 v2 CPUs (8 cores per CPU) and 512 GB of RAM. We install Openfiler on another server (IBM System x3650 M3, providing 4 TB of hard disk space) as shared storage. See Table 1 for the list of servers.

3) After connecting to the server, users can only see the virtual machine they own.

4) Virtual machine planning.

A total of 46 virtual machines are created: 40 for teaching and the remaining 6 for software testing and teachers' research. Among the teaching virtual machines, 20 run Windows Server 2008 R2 and 20 run CentOS 6.5. Students are divided into two groups for the experiments, so in general only 20 virtual machines are running at the same time during experimental teaching.

Each student experimental virtual machine is allocated 4 vCPUs, 8 GB of RAM and 40 GB of hard disk space; each teacher research virtual machine is allocated 8 vCPUs, 16 GB of RAM and 80 GB of hard disk space.

5) IP address planning.

The network equipment management segment is 172.16.0.0/24, the actual teaching network is 172.16.30.0/24, the virtual machine network is 172.16.10.0/24, and the server management and NIC binding addresses also come from 172.16.0.0/24. The specific planning is shown in Table 2.

6) The network topology is shown in Figure 1.

Figure 1. Network topology.

Table 1. Physical server list.

Table 2. IP address planning.

4. Implementation

4.1. Preparation

Current mainstream CPUs produced by Intel all support virtualization technology, but it is usually disabled by default. We first enter the server's BIOS/CMOS setup and set the CPU virtualization option to Enabled, and then configure the disk array as RAID 1. Next, we download the installation image XenServer-7.0.0-main.iso from the Citrix official website, burn it to a bootable disc, and download the management client Xenserver-7.0.1-xencentersetup.l10n.exe.
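Whether hardware virtualization is really enabled can be double-checked from the command line; a minimal sketch (the flag check assumes an Intel or AMD CPU and any Linux live environment, and <host-uuid> is the value reported by xe host-list):

# Before installation, from a Linux live environment:
# a non-zero count means VT-x (vmx) or AMD-V (svm) is exposed by the firmware
grep -E -c 'vmx|svm' /proc/cpuinfo

# After XenServer is installed, the host capabilities should list hvm entries
xe host-param-get uuid=<host-uuid> param-name=capabilities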

4.2. Installation [2]

1) We first install XenServer 7.0 on both servers according to the prompts. The default administrator account is root. We select the first network interface (NIC0) as the management interface and assign the IP addresses 172.16.0.2 and 172.16.0.3 respectively. The host names are set to Xenserver01 and Xenserver02, and the DNS server is set to 61.139.2.69 (a quick check of these settings is sketched after this list).

2) We then install XenCenter and name the management host xencenter.kzd. XenCenter is the core management platform of the XenServer virtualization architecture: it manages XenServer hosts centrally and supports cloning and migration of virtual machines. Once the installation is complete, we open the XenCenter console and add the two XenServer servers.

3) After the XenServer servers are added, patches should be installed on both of them for system security and stable operation.

4) Finally, we should install and configure the license server.
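After these steps, the basic host configuration can be verified from the xe CLI on either host; a minimal sketch (the field names follow XenServer 7.0 and the commands are read-only):

# List both hosts with their management IP addresses
xe host-list params=name-label,address

# Show the management interface with its IP configuration and DNS servers
xe pif-list management=true params=device,IP,netmask,DNS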

4.3. Network Configuration [3]

A XenServer environment mainly includes a business network, a management network and a storage network. The business network is responsible for communication between virtual machines and the outside world; the management network is responsible for live migration of virtual machines and management of the XenServer hosts; the storage network is responsible for data exchange between virtual machines and the back-end storage. To achieve high availability of network data, we usually bind two network interfaces to provide redundant data-exchange links.

To cope with the burst of traffic generated when students start their virtual machines at the same time, and to avoid the single point of failure of a single network interface, link aggregation is used to increase bandwidth, improve network throughput and provide load balancing. Taking the Xenserver01 host as an example, the first physical network interface (NIC0) is configured as the management interface, carrying management and storage traffic, and is connected to interface g1/0/2 of the core switch. We then bind NIC1 with NIC2 and set the bond mode to "LACP with load balancing on IP and port of source and destination". The bonded NICs are connected to interfaces g1/0/5 and g1/0/6 of the core switch and carry the business traffic.

Next we add an "External Network", map it to the bonded physical interface and assign the VLAN number used on that interface. This provides a bridge between the virtual machines and the physical network interfaces, so that the virtual machines can reach external resources through the server's physical NICs. At this point, the Link Aggregation Control Protocol (LACP) also needs to be configured on the physical switch.
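The same bonding and business network setup can also be done from the xe CLI. The following is a minimal sketch, assuming that NIC1 and NIC2 appear as eth1 and eth2 on Xenserver01 and that the business VLAN ID is 30 (a placeholder; use the VLAN actually assigned in Table 2):

# Create a network object to carry the bond
NET=$(xe network-create name-label="Bond-NIC1-NIC2")

# Look up the PIF UUIDs of eth1 and eth2 on Xenserver01
PIF1=$(xe pif-list host-name-label=Xenserver01 device=eth1 --minimal)
PIF2=$(xe pif-list host-name-label=Xenserver01 device=eth2 --minimal)

# Bond the two NICs in LACP mode, matching the dynamic aggregation on the switch
xe bond-create network-uuid=$NET pif-uuids=$PIF1,$PIF2 mode=lacp

# Create the external (business) network as a VLAN on top of the bonded PIF
BOND_PIF=$(xe pif-list network-uuid=$NET host-name-label=Xenserver01 --minimal)
EXT=$(xe network-create name-label="External Network")
xe vlan-create network-uuid=$EXT pif-uuid=$BOND_PIF vlan=30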

The configuration process is as follows:

We first create Layer 2 aggregation interface Bridge-Aggregation 1 and configure it for dynamic (LACP) aggregation; interfaces GigabitEthernet 1/0/5 and 1/0/6 are then added to the aggregation group, and the aggregation interface is configured as a trunk that permits all VLANs:

[SWA] interface bridge-aggregation 1
[SWA-Bridge-Aggregation1] link-aggregation mode dynamic
[SWA-Bridge-Aggregation1] quit
[SWA] interface gigabitethernet 1/0/5
[SWA-GigabitEthernet1/0/5] port link-aggregation group 1
[SWA-GigabitEthernet1/0/5] quit
[SWA] interface gigabitethernet 1/0/6
[SWA-GigabitEthernet1/0/6] port link-aggregation group 1
[SWA-GigabitEthernet1/0/6] quit
[SWA] interface bridge-aggregation 1
[SWA-Bridge-Aggregation1] port link-type trunk
[SWA-Bridge-Aggregation1] port trunk permit vlan all
[SWA-Bridge-Aggregation1] quit

4.4. Creation of the Resource Pool

With resource pools, multiple servers and the shared storage connected to them can be managed as a single unified resource, enabling virtual machines to be deployed flexibly according to their resource needs and business priorities. Here we create a new pool in XenCenter, add the two XenServer servers (Xenserver01, Xenserver02) to the pool, and designate the first server, Xenserver01, as the pool master.
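The equivalent steps can also be performed from the xe CLI; a minimal sketch, assuming Xenserver01 (172.16.0.2) is the master and the pool name Lab-Pool is arbitrary:

# On Xenserver01: name the pool
xe pool-param-set uuid=$(xe pool-list --minimal) name-label=Lab-Pool

# On Xenserver02: join the pool whose master is Xenserver01
xe pool-join master-address=172.16.0.2 master-username=root master-password=<root-password>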

4.5. Configure Shared Storage [4]

4.5.1. Installation of Openfiler

After downloading the Openfiler installation file from the official website and accepting the default keyboard and language settings, we clear all existing disk partitions and create three new basic partitions: /boot, / and swap. The remaining space is reserved for the network storage. The installation procedure is similar to installing a Linux operating system.

4.5.2. Configuration of iSCSI to Provide Shared Storage for the XenServer Resource Pool

In this step we log in to the Openfiler management interface; the default user name is openfiler and the password is password. After logging in to the main interface, the corresponding services need to be enabled, and a series of settings, such as the network segments allowed to access the storage, must be configured before the iSCSI initiator can connect.

1) Click Services and click the Enable button next to the iSCSI Target Server entry to start the iSCSI service.

2) Specify the address range allowed to access the storage server: click the System tab and configure the network segment and subnet mask as 172.16.0.0/24, type: share.

3) Configure multi-network-interface binding.

4) Click Block Devices to view the physical disks owned by the current storage server. We then partition the physical disks, create volume groups, and create logical (iSCSI) volumes in the newly created volume groups. For iSCSI clients to access these logical volumes, an iSCSI target must be created for each volume, and the iSCSI logical volume is then mapped to that particular target. Click Target Configuration and create a new iSCSI target in this tab page; a default value is automatically generated as the name of the iSCSI target (often referred to as the "Target IQN").

5) Click Volumes-->iSCSI Target-->LUN Mapping to map the iSCSI logical volume to the iSCSI target. After completing the above steps, save, exit and restart the iSCSI Target service. At this point, the Openfiler setup is complete.
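Before creating the repository in XenCenter, it may be worth confirming from one of the XenServer hosts that the Openfiler target is reachable; a minimal sketch using the iscsiadm tool shipped with the host (172.16.0.100 is the storage server address used below, and 3260 is the default iSCSI port):

# Discover the iSCSI targets exported by the Openfiler server
iscsiadm -m discovery -t sendtargets -p 172.16.0.100:3260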

4.5.3. Creation of the Shared Storage in XenServer

We now run XenCenter, select the resource pool, click "New SR" to create a new storage repository (SR), choose the virtual disk storage type iSCSI in the new repository wizard, and click Next to define the name on the connection page. We enter 172.16.0.100 on the "Location" page, keep the default port, and click "Scan Target Host". After a moment, the target IQN and target LUN on the storage server are found. At this point, a usable shared storage repository appears under the resource pool. The repository exists independently on the disks, and virtual disks, crash dump files and suspended virtual machine images are stored in this shared storage of the pool. Every server and virtual machine in the resource pool can share the SR and access the storage device, which provides a solid basis for fail-over.
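The same shared SR can also be created from the xe CLI; a minimal sketch, where the name label is arbitrary and <target-iqn> and <scsi-id> are placeholders for the values reported by the scan:

# Create a shared iSCSI storage repository backed by the Openfiler target
xe sr-create name-label="iSCSI-Share" shared=true type=lvmoiscsi content-type=user \
  device-config:target=172.16.0.100 \
  device-config:targetIQN=<target-iqn> \
  device-config:SCSIid=<scsi-id>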

4.6. Creation of an ISO Shared Repository

This repository is used to store the ISO media provided to the virtual machines. Click "Storage" in the resource pool, click "New SR", select Windows File Sharing (CIFS), enter the path, user name and password of the shared folder, and follow the prompts to complete the creation of the ISO shared repository. Finally, we upload the Windows Server 2008 R2 and CentOS 6.5 ISO images to this repository.
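Equivalently, the ISO library can be attached from the xe CLI; a minimal sketch, where //fileserver/iso and the credentials are hypothetical placeholders for the actual Windows share:

# Create a CIFS-backed ISO library for the pool
xe sr-create name-label="ISO-Library" shared=true type=iso content-type=iso \
  device-config:location=//fileserver/iso \
  device-config:type=cifs \
  device-config:username=<user> device-config:cifspassword=<password>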

4.7. Creation of Virtual Machine and Clone

Creating templates in XenCenter allows large numbers of virtual machines to be deployed quickly. First, we log in to XenCenter, click New VM, select the Windows Server 2008 R2 template in the New VM wizard and name the machine S1. Second, we follow the wizard prompts to install the virtual machine. Third, we configure the IP address of S1 as 172.16.10.1/24 with gateway 172.16.10.254. For better performance, we also install XenServer Tools after the operating system installation is complete, and enable Remote Desktop inside the Windows Server 2008 R2 guest. Finally, we shut down the virtual machine, select it, and choose "Convert to Template".

When deploying a virtual machine from a template, select the template in the XenCenter resources pane, right-click, select the New VM wizard, and follow the prompts. According to our plan, we create 22 virtual machines running Windows Server 2008 R2 on Xenserver01, two of which are for other tasks, and 24 virtual machines running CentOS 6.5 on Xenserver02, four of which are used for other tasks.
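Deployment from a template can also be scripted instead of repeating the wizard; a minimal sketch that creates and starts the 20 Windows teaching machines on Xenserver01 (the template name S1 follows the example above, the new VM names are illustrative, and each guest's IP address still has to be configured inside the guest afterwards):

# Create 20 virtual machines from the S1 template and start them on Xenserver01
for i in $(seq 1 20); do
  VM=$(xe vm-install template=S1 new-name-label=Win2008-$i)
  xe vm-start uuid=$VM on=Xenserver01
done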

5. Client Login

On a PC, the client connects to a Windows Server 2008 R2 virtual machine through Remote Desktop and to a CentOS virtual machine through SSH, and then operates it in the same way as a local machine.
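For example, from a lab PC a student might connect as follows (172.16.10.1 is the Windows machine created above, the CentOS address is a hypothetical example from the 172.16.10.0/24 plan, and an SSH client such as OpenSSH or PuTTY is assumed to be installed):

mstsc /v:172.16.10.1     (Remote Desktop to a Windows Server 2008 R2 virtual machine)
ssh root@172.16.10.21    (SSH to a CentOS 6.5 virtual machine)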

6. Concluding Remarks

The virtual experimental teaching environment based on XenServer proposed in this paper has been successfully applied in computer network technology courses, and it also provides different test environments for software developers. Applying virtualization technology saves hardware resources and reduces cost. Comprehensive and complex experiments can now be completed on this platform, which significantly improves the teaching effect and lays a foundation for students to better master the relevant content of network technology courses.

Further research will explore and test XenServer's high availability and security, optimize its configuration and improve its performance. The school also plans to add dedicated storage servers to provide more efficient and secure storage, laying the foundation for using server virtualization technology in the actual production environment of the data center.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Zhang, G.G. (2015) To Construct Data Center Service Platform Based on Virtualization Technology. Information Technology & Informatization, No. 2, 242-244. (In Chinese)
[2] Citrix Systems. http://www.citrix.com
[3] XenServer.org, "XenServer Features". http://xenserver.org/overview-xenserver-open-source-virtualization/open-source-virtualization-features.htm
[4] Childers, B. (2009) OpenFiler: An Open-Source Network Storage Appliance. Belltown Media.
