KVM Hardware Virtualization on CentOS 6.2 Dedicated Servers

I wrote this guide to show you how to do a proper KVM setup on your CentOS dedicated server. We will be using VNC and connecting to the dedicated server remotely from your workstation. I realize there are a lot of other guides for utilizing Kernel-based Virtual Machine (KVM) hardware virtualization. What sets this guide apart is that it does not require a desktop Linux operating system. Many other guides are great, but they will NOT work for those of us who run dedicated servers; we simply do not have a GUI to complete our installs. I will show you how to set up KVM from start to finish on a remotely hosted dedicated Linux server without a desktop environment.

Go ahead and log in to your server as the root user. First you will need to make sure your processor supports hardware virtualization. Intel calls this VT-x (the vmx CPU flag); the AMD equivalent is AMD-V (the svm flag). You can check your server by reading /proc/cpuinfo and seeing whether your cores list vmx or svm in the "flags" section:

egrep "(vmx|svm)" /proc/cpuinfo

If you get results from running that command, your processor is good to go. Otherwise you will need a compatible server. Some servers require the virtualization capability to be enabled in the BIOS, so check for an option similar to "Intel Virtualization Technology". If you still have trouble, note your CPU, motherboard and server model and verify via the manuals that hardware virtualization is indeed supported. Some proprietary servers, such as Dell Datacenter Series models, have quirky ways of supporting HVM.
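
To collect that model information without opening the case, dmidecode (usually available on CentOS) can read it straight from the DMI tables. The exact strings reported depend on your vendor:

dmidecode -s system-manufacturer
dmidecode -s system-product-name
dmidecode -s baseboard-product-name
dmidecode -s processor-version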

Assuming we're ready to proceed, let's install the utilities we will need. This may take a while, so get ready to wait a few minutes:

yum install kvm libvirt python-virtinst qemu-kvm bridge-utils

Once all of the mandatory software is installed we should start the libvirt daemon and set it to start automatically with the OS:

/etc/init.d/libvirtd start
chkconfig libvirtd on
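
As an optional sanity check, you can confirm that the KVM kernel modules actually loaded. If nothing shows up, a modprobe of kvm_intel (or kvm_amd on AMD hardware) or a reboot usually sorts it out:

lsmod | grep kvm
ls -l /dev/kvm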

To verify that everything is now operational let’s test our connection through virsh. To do so run:

virsh -c qemu:///system list

This should output a blank listing such as:

[root ~]# virsh -c qemu:///system list
 Id Name                 State
----------------------------------

[root ~]#

Now we will need to create a bridged device to use with your KVM setup. There are alternative network options, but for dedicated servers offering virtualized environments to clients or for internal use, the network bridge option is the most logical. By utilizing a bridge we can give our containers direct access to the network, with each container effectively becoming a dedicated server. Ideally you will have a range of IP addresses allocated to the VLAN your server is on, or they can be statically routed to your device. You should NOT alias the individual addresses to the main host OS (CentOS 6 node) running the containers; if you have them aliased, please remove the aliases now.
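
If you're not sure whether any of those addresses are currently aliased on the host, these two quick checks (assuming eth0 is your primary device) will show secondary addresses and any leftover alias config files:

ip -4 addr show eth0
ls /etc/sysconfig/network-scripts/ifcfg-eth0:* 2>/dev/null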

I'm working with a /25 allocation (128 addresses): 125 usable, 1 network, 1 gateway, 1 broadcast. The host operating system (CentOS 6 node) will use the first usable address, xxx.xxx.xxx.130, with a gateway of xxx.xxx.xxx.129 and a network mask of 255.255.255.128 (which corresponds to /25). The containers I spin up will use the IP addresses after xxx.xxx.xxx.130, starting with .131.
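
If you want to double-check the subnet math for your own allocation, the ipcalc utility that ships with CentOS will print the netmask, network and broadcast addresses for a given prefix. For example (substitute your real address):

ipcalc -m -n -b xxx.xxx.xxx.130/25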

Let’s create the bridged interface file:

nano -w /etc/sysconfig/network-scripts/ifcfg-br0

Here is how I set up my file for my IP allocation as explained above. You will need to edit it according to your configuration.

DEVICE="br0"
NM_CONTROLLED="yes"
ONBOOT=yes
TYPE=Bridge
BOOTPROTO=none
IPADDR=xxx.xxx.xxx.130
NETMASK=255.255.255.128
GATEWAY=xxx.xxx.xxx.129
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System br0"

You need to replace the IPADDR/NETMASK/GATEWAY values with the corresponding information from your network configuration. If you don't know this information you can cat your existing interface configuration; assuming your primary device is eth0, that would be:

cat /etc/sysconfig/network-scripts/ifcfg-eth0

Now we need to back up the primary (real) device configuration file and create a new one; in my case that is eth0:

cp /etc/sysconfig/network-scripts/ifcfg-eth0 /root/ifcfg-eth0.backup
nano -w /etc/sysconfig/network-scripts/ifcfg-eth0

Replace the contents of the file as follows:

DEVICE=eth0
NM_CONTROLLED="yes"
ONBOOT=yes
HWADDR=00:30:48:94:ae:8a
TYPE=Ethernet
#BOOTPROTO=none
#IPADDR=xxx.xxx.xxx.130
#NETMASK=255.255.255.128
#GATEWAY=xxx.xxx.xxx.129
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
BRIDGE=br0

Essentially all we're doing is commenting out the BOOTPROTO/IPADDR/NETMASK/GATEWAY lines, since those are now handled by the bridge device, and adding the BRIDGE=br0 option to attach eth0 to the bridge.
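
Before restarting the network on a remote box, one optional safety net (assuming the at package is installed and the atd service is running) is to schedule an automatic rollback to the backup we just made. If the change locks you out, the old configuration comes back on its own; if everything works, simply cancel the job:

echo "cp -f /root/ifcfg-eth0.backup /etc/sysconfig/network-scripts/ifcfg-eth0; rm -f /etc/sysconfig/network-scripts/ifcfg-br0; /etc/init.d/network restart" | at now + 10 minutes
atq
atrm 1    # replace 1 with the job number shown by atq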

So let’s cross our fingers, restart the network and hope that everything comes back up properly:

/etc/init.d/network restart

Once the network is back up you should see the new br0 device; you can list all devices with the 'ifconfig' command. If you were disconnected and cannot reconnect to the server, or you don't see the device, something went wrong. Re-read the network configuration steps above and consult with your network engineer to ensure you're using the right IP information. If your server is remote, you may have to have the datacenter staff undo the changes using the eth0 backup we made earlier.
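
Another quick way to confirm the bridge is wired up correctly is to ask the bridge utilities which interfaces are attached to it; eth0 should show up under br0:

brctl show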

Assuming everything is up and running, we need to create a directory structure for use with our containers. This is mainly so that things stay organized. You should place this structure on your large partition; I'm working with a 2TB RAID6 partition. I prefer to have my large partition mounted as /home, and I use the following structure:

mkdir /home/images
mkdir /home/osisos

The /home/images directory will contain your container disk images. The /home/osisos directory will contain ISO images of various operating systems for direct installations. Once you install your first container of a specific OS you will probably want to make a template snapshot so you don't have to do full installs every time.

Go ahead and upload an Operating System ISO of your choice so we can proceed and install our first KVM container. I’m going to use Windows 7 for this example as I have a need for a remote RDP desktop and not a server OS. Mine will be called:

/home/osisos/Win7.iso
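
If the ISO is sitting on your workstation, scp is a simple way to push it to the server. The file name and server address here are just placeholders for my example; adjust them to match your setup:

scp Win7.iso root@xxx.xxx.xxx.130:/home/osisos/Win7.iso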

Once everything is in place and you're ready to install your OS, let's use virt-install to initialize your container. Here is the command we're going to run; before running it, read below to understand the options:

virt-install --connect=qemu:///system --name=Win7Template --ram=768 --vcpus=1,maxvcpus=1,cores=1 --cpu host --cdrom=/home/osisos/Win7.iso --disk path=/home/images/win7temp.img,size=50 --graphics vnc,password=k8iNNh30Sepqk2pwtr,port=5901,listen=xxx.xxx.xxx.130 --noautoconsole --os-type=windows --os-variant=win7 --accelerate --network=bridge:"br0" --hvm

Here is the breakdown of all the options:

--connect=qemu:///system
--name=Win7Template
--ram=768
--vcpus=1,maxvcpus=1,cores=1
--cpu host
--cdrom=/home/osisos/Win7.iso
--disk path=/home/images/win7temp.img,size=50
--graphics vnc,password=k8iNNh30Sepqk2pwtr,port=5901,listen=xxx.xxx.xxx.130
--noautoconsole
--os-type=windows
--os-variant=win7
--accelerate
--network=bridge:"br0"
--hvm

The options you should worry about are:

  • --name is the name of the container. I'm calling mine Win7Template as I will use this base container as the template for future Win 7 containers.
  • --ram is the allocated amount of RAM in megabytes. I'm allocating 768MB, which is plenty to run 32-bit Windows 7. If you're also running Windows, I recommend 32-bit Windows unless you're giving your container 4GB+ of RAM and multiple cores; 32-bit Windows performs better on containers with small resource allocations.
  • --vcpus is the CPU resource allocation. In my example I'm allocating 1 base CPU, a maximum of 1 CPU and 1 CPU core. You can scale your container as needed depending on your host node hardware as well as the purpose and volume of the containers you'll be creating. For more information on the vcpus option, run 'man virt-install' and read the vcpus section.
  • --cpu is the setting that determines what the container sees as the CPU model. I'm passing through the host's (node's) CPU information. This is best for performance, but if you have to move the container to another host machine in the future you could experience issues if the other machine's CPU is very different. If that is a possibility you may wish to omit this option.
  • --cdrom is the path to the media presented to the container as its CDROM device. In my example I'm using a Windows 7 image (Win7.iso); change this to the path of your image. Alternatively you can point it at a physical CDROM device, which is handy if you plan on using your server's actual CDROM/DVDROM drive to feed in installation media (CDs & DVDs) instead of uploading images. I'm using images for all my virtual operating systems.
  • --disk is the setting determining the hard drive for the container. In most cases it will be an image file, unless you have spare hard drives and wish to dedicate a whole physical drive to the container. I'm calling my disk image win7temp.img and storing it in the directory we created previously. I'm also setting the image size to 50, which means 50GB of space; this is the hard drive space that will be available to the container.
  • --graphics is a very important setting when using KVM for non-GUI dedicated server purposes. You will want to include the VNC option, set a password for VNC (I picked k8iNNh30Sepqk2pwtr), a unique unused port number and the host's IP address to bind to. You will use this information to connect to your container via VNC to complete the Operating System installation. Keep track of the VNC password and the IP/port you used so you know how to connect later.
  • --os-type is the setting determining the Operating System of the container. I'm using windows. You can see a full description in the manual via 'man virt-install' from the shell.
  • --os-variant is the version of the Operating System. I'm using win7 as I'm installing Windows 7. You can see a full description and all available variants via 'man virt-install' from the shell.
  • --network is the network adapter presented to the container. Remember that bridge we made earlier? As long as you named it 'br0' you don't need to change this option.

I hope at this point you understand all of the options. Go ahead and run the full command above to create the container. Once the container reports that it's created you can see its status via:

[root ~]# virsh -c qemu:///system list
 Id Name                 State
----------------------------------
  1 Win7Template         running

[root ~]#
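
If you want this container to come back up automatically whenever the host node (and libvirtd) restarts, you can optionally mark it for autostart:

virsh -c qemu:///system autostart Win7Template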

Go ahead and pat yourself on the back. We're in the home stretch! All we need to do now is connect to VNC and finish our install from the installation media. First you will need to get a VNC client for your workstation. You may already have one installed if you're using a Linux workstation; if not, there are several options including KRDC, Remmina, Vinagre, RealVNC, TightVNC, etc. I myself use a Windows workstation, though I primarily use it to manage thousands of Linux servers, funny huh? For Windows the choice is simple: TightVNC.

You can get TightVNC here: http://www.tightvnc.com/. During the install you can choose a custom install and disable the VNC server option; unless you want to run a VNC server later, save yourself the disk space and just install the client.

Once you have your VNC client open, go ahead and connect to the IP address and port you picked earlier during provisioning of the container; in my example that is xxx.xxx.xxx.130:5901. You should be prompted for a password. Enter the password you picked earlier (I picked "k8iNNh30Sepqk2pwtr"). You should now see the screen of your new container via VNC. If you were unable to connect, check that you do not have a firewall configured on your host node that may be blocking your connection. If you're sure there is no firewall, you can check the binding of VNC via:

[root ~]# virsh -c qemu:///system vncdisplay Win7Template
xxx.xxx.xxx.130:1

[root ~]# 
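
If a firewall does turn out to be the culprit, a rule along these lines (assuming the stock iptables setup on CentOS 6 and the VNC port we picked earlier) will open the port; adjust it to fit your own policy:

iptables -I INPUT -p tcp --dport 5901 -j ACCEPT
service iptables save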

Assuming you're now connected, you should complete the installation of your OS of choice as you would on any normal dedicated server. When your OS installation prompts you to configure your network (or after installation) you should give it an IP address. You should have some idea of the IP addresses available to you on that server; otherwise consult with your network engineer. I configured xxx.xxx.xxx.131 with a network mask of 255.255.255.128 and a default gateway of xxx.xxx.xxx.129. Keep track of the IP addresses in use so that your future containers don't try to use the same address.

At this point your container should now be installed. You should configure it via VNC to allow remote access via SSH or RDP if your OS of choice was Windows. Test the remote connection and log out of VNC.

Congratulations. You have just set up KVM hardware virtualization! You should now spend some time reading the documentation for 'virsh' and the many other useful tools.
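
Since I keep mentioning templates, here is a rough sketch of how to reuse one. The virt-clone tool (installed alongside python-virtinst) copies both the domain definition and the disk image; the clone name and image path below are only examples, and the template must be shut down before cloning (for Windows templates you may also want to sysprep first):

virsh -c qemu:///system shutdown Win7Template
virt-clone --connect=qemu:///system --original Win7Template --name Win7Guest1 --file /home/images/win7guest1.img

Since we pinned the VNC port to 5901 on the template, remember to give the clone its own port (virsh -c qemu:///system edit Win7Guest1) before running both containers at the same time.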

As a side note if you picked Windows as your target Operating System you may find that after your first reboot VNC will show that the container is stuck at a prompt that says: “Booting from Hard Disk…” and nothing happens. If this is the case your MBR needs to be rebuilt. This is for some reason common on virtualized Windows boxes and it’s one of the reasons I always recommend making templates. Here is a simple fix for this issue:

Use virsh to edit the container:

virsh -c qemu:///system edit Win7Template

Scroll down to the section that says:

  <os>
    <type arch='x86_64' machine='rhel6.2.0'>hvm</type>
    <boot dev='hd'/>
  </os>

Press the 'i' key, change 'hd' to 'cdrom', then press the Escape key, type ':wq' (Shift + ; gives the colon) and press Enter to save and quit.

Use virsh to restart the container:

virsh -c qemu:///system destroy Win7Template
virsh -c qemu:///system start Win7Template

Now very very quickly VNC back into the container before the CD boot prompt goes away. If you miss it you will have to restart the container again. Press any key to let it boot back to the Windows CD image.

Pick your language and click Next. Now choose to repair the installation and "Use recovery tools". Once the tool list is shown, pick the very bottom option, "Command Prompt".

A command prompt should open at which point you need to:

  1. Type “E:” and press Enter (E: should be your CDROM drive, if it’s not try another letter).
  2. Change directory to boot via "cd boot". If it succeeds you're in the CDROM drive; otherwise you got the wrong drive letter in step 1.
  3. Type “bootsect /nt60 SYS /mbr” and press Enter
  4. Type “exit” to close the command prompt and click “Restart”.
  5. Do not boot to the CD this time. Instead let Windows boot properly.
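
As an optional cleanup once Windows is booting cleanly again, you can run the same virsh edit command and point the boot device back at the hard disk, so the container never pauses at the CD boot prompt:

  <os>
    <type arch='x86_64' machine='rhel6.2.0'>hvm</type>
    <boot dev='hd'/>
  </os>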

That’s it folks!
