Some Terminology:
- physical-host (bare metal)
- regular-guest (guest created on the physical-host)
  [Alternate terms: virtual-host / virtual-hypervisor]; domain name: f16test1
- nested-guest (guest created on the regular-guest); domain name: nested-guest

##############################
[root@moon ~]# virsh nodeinfo
CPU model:           x86_64
CPU(s):              4
CPU frequency:       1994 MHz
CPU socket(s):       1
Core(s) per socket:  4
Thread(s) per core:  1
NUMA cell(s):        1
Memory size:         10242864 kB
[root@moon ~]#
##############################

== Let's set up the Physical Host ==

I used oz to create an F16 guest --
http://koji.fedoraproject.org/koji/taskinfo?taskID=3654859

These are the pkg versions:
##############################
[kashyap@moon nested-virt-test]$ uname -r ; arch ; rpm -q qemu-kvm libvirt oz
3.1.8-2.fc16.x86_64
x86_64
qemu-kvm-0.15.1-3.fc16.x86_64
libvirt-0.9.8-1.fc16.x86_64
oz-0.8.0-1.fc16.noarch
[kashyap@moon nested-virt-test]$
##############################
[root@moon ~]# uname -r ; arch
3.1.8-2.fc16.x86_64
x86_64
[root@moon ~]#
##############################
[root@moon ~]# modinfo kvm_intel | grep -i nested
parm:           nested:bool
[root@moon ~]#
##############################
[root@moon ~]# modprobe kvm_intel nested=1
[root@moon ~]#
##############################
=> Or pass 'kvm-intel.nested=1' on the kernel command line <=

[root@moon ~]# cat /sys/module/kvm_intel/parameters/nested
Y
[root@moon ~]# systool -m kvm_intel -v | grep -i nested
    nested              = "Y"
[root@moon ~]#
##############################
NOTE: Adam Young identified an alternative way to do this: add
'kvm_intel nested=1' to the end of the /etc/modprobe.d/dist.conf file
and reboot.

Now, create the regular-guest using oz -- use this script --
https://github.com/kashyapc/virt-scripts/blob/master/oz-jeos.bash
-- or any other preferred means, like virt-install.
##############################
[kashyap@moon nested-virt-test]$ sudo virsh define f16test1.Jan_13_2012-15:17:48
Domain f16test1
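The modprobe.d route mentioned in the NOTE above can be sketched as below. Two caveats: the stock modprobe.d syntax wants an 'options' prefix, and the filename kvm-nested.conf is my assumption (any .conf file under /etc/modprobe.d/ is read at module load):

```shell
# Persist the nested flag across reboots (filename is arbitrary/assumed):
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf

# Reload the module so it takes effect now -- only safe with no guests running:
sudo modprobe -r kvm_intel
sudo modprobe kvm_intel

# Verify:
cat /sys/module/kvm_intel/parameters/nested    # expect: Y
```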
defined from f16test1.Jan_13_2012-15:17:48

[kashyap@moon nested-virt-test]$
##############################
Set the maximum memory limit (1048576 x 3 == 3145728 KiB ~= 3 GiB), which
will take effect on next boot:

[root@moon ~]# virsh setmaxmem f16test1 3145728
[root@moon ~]# virsh setmem f16test1 3145728 --current
[root@moon ~]#
##############################
Next, set the maximum vcpu limit, which will take effect on next boot:

[root@moon ~]# virsh setvcpus f16test1 4 --maximum --config
[root@moon ~]# virsh setvcpus f16test1 4 --current
[root@moon ~]#
##############################
== Let's enable VMX in the regular-guest ==
##############################
Now, to enable the -cpu flag in the 'regular-guest' so that it exposes VMX
for nested virtualization, add the below snippet to the guest definition
(virsh edit f16test1):
---------
<cpu match='exact'>
  <model>core2duo</model>
  <feature policy='require' name='vmx'/>
</cpu>
---------
The above should result in a '-cpu core2duo,+vmx' option on the qemu-kvm
command line.

NOTE: We can get the above cpu info and the supported flags by running
'virsh capabilities | virsh cpu-baseline /dev/stdin'
##############################
Add a serial console by editing the disk image (while the guest is OFFLINE)
using 'guestfish':

[root@moon ~]# guestfish --rw -i f16test1

[or, to interactively edit, use the below command line]
----
[root@moon ~]# guestfish --rw -a /var/lib/libvirt/images/f16test1.img -i edit /etc/grub2.cfg
----
And add 'console=tty0 console=ttyS0,115200' to the kernel command line.
##############################
== Setting up the regular-guest ==

Have cache=none in the 'disk' device attribute. And reboot the guest --
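One sanity check on the memory values passed to setmaxmem/setmem above: virsh takes them in KiB, so the figures are just 1 GiB (1048576 KiB) times the number of GiB wanted:

```shell
# 1 GiB expressed in KiB, times the number of GiB we want:
echo $(( 1048576 * 3 ))    # 3145728 KiB ~= 3 GiB (passed to setmaxmem/setmem)
echo $(( 1048576 * 4 ))    # 4194304 KiB ~= 4 GiB (used later when bumping RAM)
```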
------
Notice that vmx is exposed in the guest:
###########################
[root@moon ~]# ps -ef | grep qemu-kvm
qemu     17102     1  4 22:29 ?        00:00:34 /usr/bin/qemu-kvm -S -M pc-0.14 -cpu core2duo,+vmx -enable-kvm -m 3072 -smp 3,sockets=3,cores=1,threads=1 -name f16test1 -uuid f6219dbd-f515-f3c8-a7e8-832b99a24b5d -nographic -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/f16test1.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -drive file=/export/vmimgs/f16test1.img,if=none,id=drive-virtio-disk0,format=raw,cache=none -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=21,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:e6:cc:4e,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -usb -device usb-tablet,id=input0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
root     17591 17400  0 22:42 pts/0    00:00:00 grep --color=auto qemu-kvm
[root@moon ~]#
###########################
[root@moon ~]# virt-cat f16test1 /var/log/messages | grep 'dhclient.*bound to'
Jan 13 11:42:27 dhcp201-159 dhclient[717]: bound to 10.65.201.159 -- renewal in 39921 seconds.
Jan 13 12:00:18 dhcp201-159 dhclient[735]: bound to 10.65.201.159 -- renewal in 36894 seconds.
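Besides checking the qemu-kvm command line from the host, we can confirm from inside the regular-guest that the vmx flag actually arrived. A tiny helper like this works; the sample file below just stands in for the guest's /proc/cpuinfo:

```shell
# Report whether a cpuinfo-style file exposes hardware virt (vmx on Intel,
# svm on AMD):
has_virt() { grep -qE '\b(vmx|svm)\b' "$1" && echo yes || echo no; }

# In the guest you would simply run:  has_virt /proc/cpuinfo
# Here, a sample flags line standing in for the guest's cpuinfo:
printf 'flags\t\t: fpu vme de pse msr vmx lahf_lm\n' > /tmp/cpuinfo.sample
has_virt /tmp/cpuinfo.sample    # -> yes
```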
###########################
==> This is our regular-guest <==
###########################
[root@dhcp201-159 nested-test]# file /dev/kvm
/dev/kvm: character special
[root@dhcp201-159 nested-test]#
###########################
[root@dhcp201-159 nested-test]# systemctl status libvirtd.service
libvirtd.service - LSB: daemon for libvirt virtualization API
	  Loaded: loaded (/etc/rc.d/init.d/libvirtd)
	  Active: active (running) since Fri, 13 Jan 2012 12:31:10 -0500; 2min 51s ago
	 Process: 26766 ExecStart=/etc/rc.d/init.d/libvirtd start (code=exited, status=0/SUCCESS)
	Main PID: 26774 (libvirtd)
	  CGroup: name=systemd:/system/libvirtd.service
		  ├ 26774 libvirtd --daemon
		  └ 26829 /usr/sbin/dnsmasq --strict-order --bind-interfaces --pid-file=/var/run/libvirt/network/default.pid --conf-file= --exce...
###########################
[root@dhcp201-159 nested-test]# ls
fed.ks  vm-create.sh
[root@dhcp201-159 nested-test]#
###########################
[root@dhcp201-159 nested-test]# cat /proc/cpuinfo | grep -i processor
processor	: 0
processor	: 1
processor	: 2
###########################
==> Bumped the RAM to 4G [1048576*4 == 4194304] <==
[root@dhcp201-159 nested-test]# free -m
             total       used       free     shared    buffers     cached
Mem:          3956        263       3692          0          8        116
-/+ buffers/cache:        138       3817
Swap:         4031          0       4031
[root@dhcp201-159 nested-test]#
###########################
[root@dhcp201-159 nested-test]# cat vm-create.sh
virt-install --connect=qemu:///system \
    --network=bridge:virbr0 \
    --initrd-inject=/root/nested-test/fed.ks \
    --extra-args="ks=file:/fed.ks console=tty0 console=ttyS0,115200 serial rd_NO_PLYMOUTH" \
    --name=nested-guest \
    --disk path=/var/lib/libvirt/images/nested-guest.img,size=6 \
    --ram 2048 \
    --vcpus=1 \
    --check-cpu \
    --hvm \
    --location=http://download.eng.pnq.redhat.com/pub/fedora/linux/releases/16/Fedora/x86_64/os/ \
    --nographics
###########################
[root@dhcp201-159 nested-test]# ifconfig virbr0
virbr0    Link encap:Ethernet  HWaddr 52:54:00:72:3E:FF
          inet addr:192.168.122.1  Bcast:192.168.122.255
Mask:255.255.255.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

[root@dhcp201-159 nested-test]#
###########################
==> Debugging Tip: Add this inside the guest, so that you can cleanly log in
via the serial console into the guest later for debugging <==
[root@dhcp201-159 nested-test]# ln -s /dev/null /etc/systemd/system/plymouth-start.service
[root@dhcp201-159 nested-test]#
###########################
== Setting up the nested-guest ==

Let's create the nested-guest with a 6G RAW disk, 2GB RAM, and 1 vcpu
(probably this is too low):
###########################
[root@dhcp201-159 nested-test]# ./vm-create.sh
+ virt-install --connect=qemu:///system --network=bridge:virbr0 --initrd-inject=/root/nested-test/fed.ks '--extra-args=ks=file:/fed.ks console=tty0 console=ttyS0,115200 serial rd_NO_PLYMOUTH' --name=nested-guest --disk path=/var/lib/libvirt/images/nested-guest.img,size=6 --ram 2048 --vcpus=1 --check-cpu --hvm --location=http://download.eng.nay.redhat.com/pub/fedora/linux/releases/16/Fedora/x86_64/os/ --nographics

Starting install...
Retrieving file .treeinfo...                        | 1.7 kB     00:00 ...
Retrieving file vmlinuz...                          | 7.9 MB     00:08 ...
Retrieving file initrd.img...  28% [==============  ]  647 kB/s |  38 MB  02:25 ETA
###########################
.
.
.
Package installation progresses to a certain extent; then it is stuck forever.
=================================================
Welcome to Fedora for x86_64

 ┌─────────────────────┤ Package Installation ├──────────────────────┐
 │                                                                   │
 │                                                                   │
 │                               24%                                 │
 │                                                                   │
 │                 Packages completed: 52 of 390                     │
 │                                                                   │
 │      Installing glibc-common-2.14.90-14.x86_64 (112 MB)           │
 │      Common binaries and locale data for glibc                    │
 │                                                                   │
 │                                                                   │
 │                                                                   │
 └───────────────────────────────────────────────────────────────────┘

 <Tab>/<Alt-Tab> between elements | <Space> selects | <F12> next screen
=================================================

On a separate terminal, let's see what the state of the guest is. It seems
to be in 'paused' state. This behavior was also reported by another person
on the upstream 'kvm' mailing list. (Just to note: there is enough disk
space and RAM on the 'regular-guest'.)
##############################################################
[root@dhcp201-159 nested-test]# virsh list
 Id Name                 State
----------------------------------
  1 nested-guest         paused

[root@dhcp201-159 nested-test]#
##############################################################
[root@dhcp201-159 nested-test]# virsh domstate nested-guest --reason
paused (unknown)

[root@dhcp201-159 nested-test]#
##############################################################
== Some further options to debug ==
1/ See if we get anything useful from 'virt-dmesg'
2/ Try with more vcpus for the nested-guest
3/ Try any other optimizations that can be done in the regular-guest
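As a concrete starting point for the debugging options above, here is a small sketch: a helper that pulls the parenthesised reason out of the 'virsh domstate --reason' output shown earlier, plus (in comments) the other places worth checking. The log path assumes a stock libvirt install:

```shell
# Extract the parenthesised reason from `virsh domstate <dom> --reason` output:
pause_reason() { sed -n 's/^paused (\(.*\))$/\1/p'; }

# On the regular-guest you would feed it the real output, e.g.:
#   virsh domstate nested-guest --reason | pause_reason
# and also check qemu's own stderr log for clues:
#   tail -n 50 /var/log/libvirt/qemu/nested-guest.log
echo 'paused (unknown)' | pause_reason    # -> unknown
```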