# kashyap chamarthy

== Intro to kvm/qemu virtualization ==

+ hardware --> hypervisor --> libvirt/guests

* KVM:
  + KVM virtualization uses CPU virtualization hardware extensions like:
    - Intel VT-x (can be enabled in the BIOS)
    - AMD-V
  + KVM is essentially a Linux kernel module (a piece of software which can
    extend the functionality of the kernel without the need for a reboot)
    - /lib/modules/2.6.32.9-70.fc12.x86_64/kernel/arch/x86/kvm/kvm.ko
    - KVM inherits all the benefits that the Linux kernel provides
      (CPU scheduling, memory management, device drivers, OS support code, etc.)
--------------------------------------------------------------
* Qemu:
  + plain 'qemu' could also be used
  + can be used as a 'machine emulator' (an OS for one architecture can be run
    on a machine of a different architecture); as a 'virtualizer', qemu can
    take advantage of KVM by executing the guest code directly on the host CPU
  + system-mode emulation: a full system emulation including processor and
    peripherals -- VGA, mouse/keyboard, IDE hard disks, CD-ROM interface,
    sound cards, network cards, USB controller, PCI n/w adapter, etc.
  + A modified version of qemu is used for KVM
#########################################################################################################################

== libvirt/virsh ==

+ hypervisor agnostic (kvm, qemu, xen)
+ start/stop, defining guests, dumping config info
+ Essentially a virt API/wrapper around kvm/qemu
+ virt-viewer -- better performance compared to virt-manager, ssh -X, vncviewer
  - syntax:
    # virt-viewer --connect qemu+ssh://root@foo.pnq.redhat.com/system tornado
+ Export the xml of the VM you want to edit; edit the xml; define the 'new' xml
  - We can edit attributes of the VM, like adding virtual CPUs:
    -----------------------
    ...
    <vcpu>2</vcpu>
    ...
    -----------------------
  - Adding memory:
    ----------------------
    ...
    <memory>262144</memory>
    <currentMemory>262144</currentMemory>
    ...
    ----------------------
+ Debugging using the LIBVIRT_DEBUG shell variable
  - LIBVIRT_DEBUG=1 -- set this variable on the shell
+ secure remote management via PKI/TLS -- GNUtls
########################################################################################################################

== libguestfs ==

+ Library for accessing/modifying guest disk images
+ # guestfish -i (for editing OFFLINE images)
  - and then make changes from the guestfish shell
+ # guestfish --ro -a disk.img (READ-ONLY, for live images)
+ edit hostnames for clones, etc.
+ fix guests which are unbootable due to SELinux problems
+ scriptable (bindings available in Python, Perl, Ruby, Java, etc.)
+ very well documented
+ export contents using 'tgz-out'
########################################################################################################################

== Bridging ==

+ make guests part of the host network
+ virtual networks (for ppp0)
  - virsh net-list --all
+ Default networks created by virt-manager/libvirt are NAT

Creating images from a base image allows quick rollouts of many boxes based on
a single install.
(Like: I have a pristine image of some machine, and I use it as a base machine
to roll out more.)

* Also show a sample kickstart file (and a %post section; take it from the KVM
  autotest framework)
#########################################################################################################################

== Image formats ==

* Raw
  + Most commonly used; high workloads
  + Robust against power failures
* qcow2 -- a versatile image format
  + qemu 'copy on write'
  + this format stores only the _actual_ size occupied by the VM (with raw,
    the entire disk is allocated)
+ NOTE1: a 'raw' block device on the host, with virtio in the guest, _should_
  be fastest (from rjones).
  Performance ranking (fastest to slowest); also note, the first 3 options are
  robust against power failure:
  - block device/partition
  - LVM logical volume
  - raw file (fully pre-allocated)
  - qcow2 file
+ NOTE2: a 'raw' file that is sparsely allocated will exhibit bad performance
  until all blocks have been fully allocated due to guest writes.
#########################################################################################################################

== Snapshotting with qcow2 & the 'qemu-img' tool ==

+ 'Listing'
  - $ qemu-img snapshot -l f13qt.qcow2
+ 'Creating'
  - $ qemu-img snapshot -c snap-f13qt f13qt.qcow2 (snapshotting an offline guest)
  - via the qemu monitor: savevm (Note: takes time, as we're doing it on a
    live guest)
+ 'Applying' / 'Reverting'
  - $ qemu-img snapshot -a snap-f13qt f13qt.qcow2
  - via the qemu monitor: loadvm
+ 'Deleting'
  - $ qemu-img snapshot -d snap-f13qt f13qt.qcow2
  - via the qemu monitor: delvm

* '-snapshot' mode (different from the above)
  + '-snapshot' mode (qemu-kvm foo.qcow2 -snapshot)
    - all disk images are considered read-only
    - writes go to temporary files (instead of the disk images)
    - changes can be forced back to the disk images by using the 'commit'
      (qemu) monitor command
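NOTE2 in the image-formats section above (sparse raw files vs. fully allocated
ones) can be demonstrated with plain coreutils, no qemu needed; the file names
here are made up for the illustration:

```shell
# A sparse file has a large apparent size but reserves almost no disk blocks;
# a fully written file of the same size reserves all of them up front.
truncate -s 100M sparse.img                                  # sparse raw file
dd if=/dev/zero of=prealloc.img bs=1M count=100 status=none  # fully allocated

du -k sparse.img prealloc.img    # actual disk usage: ~0 KiB vs ~102400 KiB
ls -l sparse.img prealloc.img    # apparent size: both 104857600 bytes

rm -f sparse.img prealloc.img
```

This is exactly why a sparse raw guest image performs poorly at first: the
host filesystem has to allocate each block the first time the guest writes it.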
* Using Base images:
#############################################################################################################################

== Cloning VMs (manual) ==
(By Rich W. Jones)

+ there is virt-clone, but it doesn't do the whole job
+ # dd if=/export/vmimgs/fed13.img of=/export/vmimgs/clonefed13.img bs=1M
+ # virsh dumpxml F14 > /tmp/fnew.xml ; # vim /tmp/fnew.xml
+ change the name, UUID, disk source and MAC address in the xml
+ # virsh define /tmp/fnew.xml
+ # guestfish -i -d Fnew
  - edit /etc/sysconfig/network (change the hostname)
  - edit /etc/udev/rules.d/70-persistent-net.rules (change the MAC address to
    the same as the one in the libvirt XML file)
+ Now, boot
+ To generate _new_ ssh keys -- # rm -rf /etc/ssh/ssh_host_*key*
+ NOTE: these manual steps are being incorporated into the existing 'virt-clone'
#########################################################################################################################

* Snapshotting with LVM
  + for snapshotting guests, use LVM on the host
  + change the usual things like hostname, IP, etc.
  + a little bit of messing around with partitions on the host machine
  + it's not recommended to run LVM both on the host & the guest
==============================================================================================

== What's coming next == (from upstream)

* 'Nested Virtualization'
  + Nested VMX (for Intel) -- in progress
  + Nested SVM (for AMD) -- already merged
  + use cases:
    - Hosting one of the new breed of operating systems which have a hypervisor
      as part of them. Windows 7 with XP mode is one example; Linux with KVM is
      another.
    - Cloud users could put a hypervisor with sub-guests in their virtual
      machine, and run multiple virtual machines on the one virtual machine
      which they get (user-controlled).
    - Enable live migration of entire hypervisors with their guests -- for load
      balancing, disaster recovery, and so on.
    - Honeypots and protection against hypervisor-level rootkits
    - Make it easier to test, demonstrate, benchmark and debug hypervisors,
      and also entire virtualization setups. An entire virtualization setup
      (hypervisor and all its guests) could be run as one virtual machine,
      allowing many such setups to be tested on one physical machine.
* Btrfs filesystem
* KSM (Kernel Shared Memory) -- increasing VM density
* Spice protocol
============================================================================

======================== Autotest?? ========================
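The manual cloning recipe earlier boils down to rewriting a few fields in the
dumped domain XML. A minimal sketch of that edit step using plain sed (the
domain names F14/Fnew and the image paths follow the cloning section above;
the trimmed XML skeleton and the UUID/MAC values are made-up stand-ins for
real 'virsh dumpxml' output):

```shell
# Stand-in for: virsh dumpxml F14 > /tmp/fnew.xml
cat > /tmp/fnew.xml <<'EOF'
<domain type='kvm'>
  <name>F14</name>
  <uuid>11111111-2222-3333-4444-555555555555</uuid>
  <devices>
    <disk type='file' device='disk'>
      <source file='/export/vmimgs/fed13.img'/>
    </disk>
    <interface type='network'>
      <mac address='52:54:00:aa:bb:cc'/>
    </interface>
  </devices>
</domain>
EOF

# Change name, UUID, disk source and MAC address, as the notes describe.
sed -i \
    -e 's|<name>F14</name>|<name>Fnew</name>|' \
    -e 's|<uuid>.*</uuid>|<uuid>66666666-7777-8888-9999-000000000000</uuid>|' \
    -e 's|/export/vmimgs/fed13.img|/export/vmimgs/clonefed13.img|' \
    -e 's|52:54:00:aa:bb:cc|52:54:00:dd:ee:ff|' \
    /tmp/fnew.xml

grep -E 'name>|uuid|source|mac' /tmp/fnew.xml
```

The edited file would then be fed to 'virsh define /tmp/fnew.xml', followed by
the guestfish fixups (hostname, udev MAC rule) described above.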