Re: [libvirt-users] using shared storage with libvirt/KVM?
Tom Georgoulias Mon, 03 May 2010 13:25:58 -0700
On 05/03/2010 10:57 AM, David Ehle wrote:
I've spent a few days googling and reading documentation, but I'm looking for clarification and advice on setting up KVM/libvirt with shared storage.
I, too, have been looking at these kinds of setups in the last few weeks, and I've found myself confused about the stack more than once. The documentation is very scattered and incomplete in key areas (as you pointed out with the RHEL docs on shared storage), which makes it tough on a new user. Here's what I can share; I hope it encourages others to join in and help us both out.
I am trying to figure out the best way to set things up so that I can run several (eventually) production linux guests (mostly debian stable) on the 2 KVM host systems and be able to migrate them when maintenance is required on their hosts.
From the small amount of testing I've done, making an iSCSI-based storage pool and using it to store the VMs between the two KVM hosts works well.
A common opinion seems to be that using LVs to hold disk images gives possibly the best IO performance, followed by raw, then qcow2.
Would you agree or disagree with this? What evidence can you provide?
That's probably right, but I think those differences matter more when you are using non-shared storage, like images on your local disk or perhaps on a shared mount.
I have successfully made an iscsi device available to libvirt/virsh/virt-manager via an XML file + pool-define + pool-start. However, the documentation states that while you can create pools through libvirt, volumes have to be pre-allocated: http://libvirt.org/storage.html "Volumes must be pre-allocated on the iSCSI server, and cannot be created via the libvirt APIs."
I'm very unclear on what this means in general, and specifically on how you preallocate the volumes.
I believe this means you will need to create the LUNs on the storage and present them to libvirt, but libvirt cannot talk back to the storage system and create the targets/LUNs for you as you try to spin up new VMs.
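For what it's worth, the pre-allocation step depends entirely on your storage system; an appliance will have its own management interface. If your target happens to be a Linux box running tgtd (scsi-target-utils), the pre-allocation looks roughly like this sketch -- the IQN, target ID, and backing device are made-up examples, not from the setup above:

```shell
# On the storage server: create a new iSCSI target (IQN is a placeholder)
tgtadm --lld iscsi --op new --mode target --tid 1 \
    -T iqn.2010-05.com.example:storage.lun1

# Back it with a block device, e.g. an LV carved out for this guest
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
    -b /dev/vg0/vm1lv

# Allow initiators to connect (restrict by IP/IQN in a real setup)
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
```

Only after something like this has been done on the storage side can the KVM host discover the target in step 2 below; libvirt never issues these commands itself.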
If you are using shared storage (via NFS or iSCSI) does that also mean you MUST use a file based image rather than an LVM LV?
No, you can use iscsi and treat the storage as a raw disk.
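As a sketch of what "treat it as a raw disk" means in practice: once the pool is started (steps below), you can hand the exposed block device straight to a guest. The guest name here is a placeholder; the by-path device is the one from the vol-list output in step 7:

```shell
# Attach the iSCSI-backed block device to an existing guest as a raw
# virtio disk. "myguest" is a placeholder name for illustration.
virsh attach-disk myguest \
    /dev/disk/by-path/ip-1.2.3.4:3260-iscsi-iqn.long.long.long:a6c98a7e-0275-c020-efae-a386dfd46b84-lun-0 \
    vdb --persistent
```

No filesystem or image file sits between the guest and the LUN, so there's nothing qcow2/raw-format-related to tune.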
Red Hat provides pretty good documentation on doing shared storage/live migration for NFS: http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Virtualization_Guide/chap-Virtualization-KVM_live_migration.html But unfortunately the section on shared storage with iSCSI is a bit lacking: "9.1. Using iSCSI for storing guests. This section covers using iSCSI-based devices to store virtualized guests." And that's it.
I know! How frustrating! Even the RHEL6 beta document is missing those sections…
Here's what I've documented from my initial test run. I hope this helps you get started; please note that I changed the IPs and IQNs to avoid sharing private data on the list.
1. Create a LUN on your shared storage system & get the iqn.
iqn.long.long.long:a6c98a7e-0275-c020-efae-a386dfd46b84
2. From the KVM host, scan for the newly created target
# iscsiadm -m discovery -t sendtargets -p <shared storage FQDN>
1.2.3.4:3260,2 iqn.long.long.long:a6c98a7e-0275-c020-efae-a386dfd46b84
There it is, *84.
3. Write the XML config describing the storage we will add to the storage pool, using the iqn from step 2.
<pool type="iscsi">
  <name>lun1</name>
  <source>
    <host name="storage hostname"/>
    <device path="iqn.long.long.long:a6c98a7e-0275-c020-efae-a386dfd46b84"/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
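Step 4 below expects this definition in /tmp/lun1.xml; writing it out is just a copy-paste, for example via a heredoc (hostname and IQN are the placeholders from this example):

```shell
# Save the pool definition to the file used by virsh pool-define below.
cat > /tmp/lun1.xml <<'EOF'
<pool type="iscsi">
  <name>lun1</name>
  <source>
    <host name="storage hostname"/>
    <device path="iqn.long.long.long:a6c98a7e-0275-c020-efae-a386dfd46b84"/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
EOF
```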
4. Import that XML to define the pool:
# virsh pool-define /tmp/lun1.xml
Pool lun1 defined from lun1.xml
5. Verify:
# virsh pool-info lun1
Name:           lun1
UUID:           701a1401-547a-08da-5f14-befea84778d9
State:          inactive
6. Start the pool so we can use it
# virsh pool-start lun1
Pool lun1 started
7. List the volume names in the new pool so we'll know what to pass along to virt-install to use it:
# virsh vol-list lun1
Name                 Path
-----------------------------------------
10.0.0.0             /dev/disk/by-path/ip-1.2.3.4:3260-iscsi-iqn.long.long.long:a6c98a7e-0275-c020-efae-a386dfd46b84-lun-0
virsh # vol-info --pool lun1 10.0.0.0
Name:           10.0.0.0
Type:           block
Capacity:       20.00 GB
Allocation:     20.00 GB
8. Install the VM with virt-install and use "--disk vol=lun1/10.0.0.0,size=20" to access the storage
[r...@rvrt010d ~]# virt-install <all of your options> --disk vol=lun1/10.0.0.0,size=20
After that's up and running, your VM will use the shared storage as if it is a local disk.
9. If you repeat steps 2-6 on your other KVM host, the storage will be shared between both hosts. Now you are set up for migration, either offline or live.
virsh migrate VM qemu+ssh://other.kvm.host/system
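To spell out the offline vs. live distinction with the same placeholder names:

```shell
# Offline migration: the guest is suspended while its state moves over.
virsh migrate VM qemu+ssh://other.kvm.host/system

# Live migration: the guest keeps running while memory is copied across;
# only works because both hosts see the same iSCSI-backed disk.
virsh migrate --live VM qemu+ssh://other.kvm.host/system
```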
For maybe 6-10 guests, should I simply just be using NFS? Is its performance that much worse than iSCSI for this task?
This totally depends on your network and hardware, but here's how a dd test over NFS and iSCSI compared in my setup:
run   iSCSI       NFS
01    24.5 MB/s   34.5 MB/s
02    25.7 MB/s   35.2 MB/s
03    52.4 MB/s   36.2 MB/s
04    56.5 MB/s   41.2 MB/s
05    57.7 MB/s   39.2 MB/s
06    56.2 MB/s   39.2 MB/s
07    51.5 MB/s   36.2 MB/s
08    57.8 MB/s   34.8 MB/s
09    57.3 MB/s   36.7 MB/s
10    55.8 MB/s   40.4 MB/s
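I don't have the exact invocation in front of me, but the numbers came from a simple dd sequential-write test along these lines -- the file path and size here are placeholder assumptions, not the originals:

```shell
# Rough sequential-write throughput test. Point TESTFILE at a file on the
# NFS mount (or the mounted iSCSI-backed disk) and repeat several runs.
# conv=fsync flushes to disk at the end so the cache doesn't inflate numbers.
TESTFILE="${TESTFILE:-/tmp/ddtest.img}"
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fsync 2>&1 | tail -n 1
```

The last line of dd's output is the throughput summary, which is where the MB/s figures in the table come from.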
I hope this helps, and I hope that someone corrects any errors I may have made. :)
Tom