LVM/Software RAID

Logical Volume Manager (LVM) is a very effective tool for creating and maintaining flexible storage, and it pairs well with a software RAID array. Under LVM, each physical disk (or disk partition) is initialized as a physical volume; one or more physical volumes are then combined into a volume group, a single large pool of storage. From that volume group you carve out logical volumes, which can be smaller or larger than any one physical drive, and use them just as you would partitions on a standard disk. This can provide many benefits to your system and system administration, such as:

  1. Live partition resizing (see the sketch after this list).
  2. No need for extended partitions.
  3. No need to unmount a partition to make changes.
  4. Live migration of partitions.
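
To make benefit 1 concrete, here is a minimal sketch of a live resize, assuming a hypothetical volume group vg01 with an ext4 logical volume lv01 mounted at /home:

    # Grow the logical volume by 10 GiB (requires free extents in vg01)
    lvextend -L +10G /dev/vg01/lv01
    # Grow the ext4 filesystem to fill the new space; ext4 supports this
    # while /home stays mounted
    resize2fs /dev/vg01/lv01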

Creating Partitions with LVM:

There are two different ways to manage your logical volumes. The more user-friendly way is the graphical tool provided by your distribution (often simply called Logical Volume Manager), or you can run the commands yourself from a shell. To create a partition using LVM, the first thing you will want to do is come up with a solid layout. Think about things like: What will you name the volumes? How many volume groups and logical volumes will you need? How many partitions will you have? These are just a few of the questions you will run into when creating an array with LVM, so be proactive about it: PLAN AHEAD. All of this can be done through either the graphical Logical Volume Manager or with the vgcreate command, where you can specify things like the name, the extent size, and which drives the group should consist of. Now that you have started coming up with a plan, let's talk about how you would execute it.
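
As a rough sketch of executing that plan from the command line, assuming two hypothetical spare disks /dev/sdb and /dev/sdc:

    # Label both disks as LVM physical volumes
    pvcreate /dev/sdb /dev/sdc
    # Create a volume group named vg01 from both disks,
    # specifying a 32 MiB physical extent size with -s
    vgcreate -s 32M vg01 /dev/sdb /dev/sdc
    # Confirm the group's name, extent size, and total capacity
    vgdisplay vg01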


When building a system, I have found that the best approach to naming volume groups and logical volumes is to keep the names generic. For volume groups I use 'vg', abbreviating volume group, and add '01' at the end to mark it as my first volume group, so it appears as 'vg01'. In most cases you won't have more than one volume group, so you may never need a 'vg02'. You will, however, most likely have multiple logical volumes, and this is where the numbering scheme comes in handy: abbreviate to 'lv' and assign each logical volume its corresponding place in the array, such as 'lv01', 'lv02', and so on. What good is making all these groups if you can't even locate them because the names are so complicated? Keep it simple.

Now that you have a volume group, you can begin to divide it up. Create the logical volume you want, choose a mount point such as /home, select the desired size, encrypt it if you wish, or choose to force it to be a primary partition. You can read more on partitioning and file systems here: File Systems
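
Continuing the naming scheme above, a minimal sketch of creating and mounting a logical volume from the command line, assuming vg01 already exists and /home is the desired mount point:

    # Create a 100 GiB logical volume named lv01 inside vg01
    lvcreate -L 100G -n lv01 vg01
    # Put a filesystem on it (ext4 here; see the File Systems page for alternatives)
    mkfs.ext4 /dev/vg01/lv01
    # Mount it at /home (add an /etc/fstab entry to make this survive a reboot)
    mount /dev/vg01/lv01 /home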


(LVM GUI below. Common commands for using LVM on the command line can be found at the bottom of the page.)


    Software RAID:
RAID is used to span data over several hard drives, creating enough redundancy that an entire disk could fail without causing problems, keeping all of your data intact. Software RAID means the operating system itself manages the array across ordinary disks (IDE, SCSI, SATA, and so on) with no dedicated RAID controller. Choosing between software and hardware RAID is a difficult choice; they both have their advantages and disadvantages. Some of the things you should consider:
  • Cost: Software RAID is part of the OS, so there is no need to spend extra money on a hardware RAID controller.
  • Performance: Software RAID performance depends on the server's CPU and can only really offer high performance with a RAID 0 or RAID 1 array. Hardware RAID offers more consistent, higher performance.
  • Disk hot swapping: Only supported with hardware RAID.
  • Hot spare support: This keeps a clear drive on hand in case there is a drive failure in the array. The ‘hot spare’ then takes the failed drive's place while the array rebuilds onto it. This is supported by both software and hardware RAID.
So which one is better? Neither. It all really depends on your setup and requirements. Use what works for you.


This section will go over software RAID, at levels 0, 1, 10, and 5, on a system configured with logical volumes. Okay, now let's get started.
      RAID 0: striping without parity or mirroring
This level of RAID has zero redundancy or error checking. It provides an increase in performance and data storage, plus more bandwidth for a larger data flow. The increase in bandwidth comes from the data being read in parallel: for the drives to run in parallel on a RAID 0 volume, the data must be broken down into fragments, which are then written to their respective drives at the same sector. This allows the smaller sections of data to be read simultaneously, as opposed to one big data block, giving you an increase in read performance and bandwidth. The more drives you add to the array, the faster your data can be read, but each drive added also increases your chances of data loss, since the failure of any single drive destroys the entire striped array.
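
On Linux, software RAID arrays like this are typically built with mdadm. A minimal RAID 0 sketch, assuming two hypothetical empty disks /dev/sdb and /dev/sdc:

    # Stripe two disks into one array; data is split into chunks across both
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
    # Check the array's status
    cat /proc/mdstat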

    RAID 1: mirroring without parity or striping
This level of RAID writes all data identically onto two separate drives, creating a “mirror”. This means that you have the same data in two different places, which can be quite beneficial. When a read request is serviced, the data can be retrieved from either of the two drives that contain it. How does the request know which drive to choose? It takes the quickest path: whichever drive involves less seek time and rotational latency is the drive the data will be retrieved from. A write request updates both drives, writing the data identically to each. The real benefit of RAID 1 comes from the mirror effect: since the data is written identically onto two drives, the loss of a single drive will not stop the array from functioning. As long as one drive is functioning, the array will continue to function.
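
A matching RAID 1 sketch, with the same hypothetical disks:

    # Mirror two disks; every write goes to both, reads can come from either
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    # Watch the initial synchronization of the mirror
    cat /proc/mdstat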

    RAID 10: mirroring and striping
This level of RAID usually consists of four or more drives. Data is written in stripes across two of the drives, similar to RAID 0, and those two drives are then mirrored onto the remaining two, similar to the mirroring effect of RAID 1: two drives for stripes to be written to and two for mirroring. You get the benefits of RAID 0 and RAID 1 (and the disadvantages): the redundancy of RAID 1, improving resilience and read performance, plus the striping of RAID 0, but you also suffer the slower write speed that comes with mirroring. When it comes down to a drive failing, you're in luck if you have RAID 10 configured, because mirroring will keep the array alive. However, if a failed drive does not get replaced, a single uncorrectable media error on its mirror partner would cause data loss, so keep an extra drive on hand so you can replace and rebuild the failed drive in the array.
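
A RAID 10 sketch with four hypothetical disks (note that the Linux md 'raid10' level is a single-layer implementation rather than a literal RAID 1 built on top of RAID 0, but it gives the same striping-plus-mirroring behavior):

    # Four disks: data is striped, and each stripe is kept in two copies
    mdadm --create /dev/md2 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde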

    RAID 5: block-level striping with distributed parity
A RAID 5 array writes data in stripes across all drives, along with parity distributed across all drives. Parity is a technique for checking whether data was lost or corrupted when it moved from one place to another. Think of the parity bit as a monitor for a group of data: it is added to a group of bits that all move together. Before the data is sent anywhere, the set bits are counted, and the parity bit is set to 0 or 1 depending on whether that count is odd or even. After the data has been sent and received, it is counted again to make sure the total matches what was sent. Since there is one parity block for each group of data blocks, each drive in the array will contain a mix of striped data and parity data. Everything is mixed up across all the drives, unlike RAID 10, where striping goes on one set of drives and mirroring on the other. RAID 5 does not mirror, but the distributed parity provides redundancy: if a single drive fails, its contents can be reconstructed from the remaining drives' data and parity, and the array continues to function (in a degraded state). Replace the failed drive, let it rebuild, and keep going.
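
A RAID 5 sketch with three hypothetical active disks plus one hot spare; the spare ties back to the hot-spare point above, since md will rebuild onto it automatically when a member fails:

    # Three active disks with distributed parity, plus one standby spare
    mdadm --create /dev/md3 --level=5 --raid-devices=3 --spare-devices=1 \
          /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # Inspect the layout, parity distribution, and spare status
    mdadm --detail /dev/md3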


    LVM Commands:
(NOTE: These are not all of the LVM commands, just a few basic commands for executing common tasks. Some commands vary slightly between LVM1 and LVM2.)

Volume creation commands:
pvcreate - Create a physical volume
lvcreate - Create a logical volume
vgcreate - Create a volume group

LVM monitoring/display commands:
pvscan - Used to scan the OS for physical volumes
vgscan - Used to scan the OS for volume groups
lvscan - Used to scan the OS for logical volumes
pvdisplay - Used to display information about physical volumes
vgdisplay - Used to display information about volume groups
lvdisplay - Used to display information about logical volumes

LVM change/removal commands:
pvchange - Change the attributes of a physical volume
vgchange - Change the attributes of a volume group
lvchange - Change the attributes of a logical volume
pvremove - Used to wipe the disk label from a drive so that LVM no longer recognizes it as a physical volume
vgremove - Used to remove a volume group
lvremove - Used to remove a logical volume

Manipulation commands:
pvresize - Used to resize a physical volume after its underlying disk or partition changes size
pvmove - Used to move data from one physical volume onto another (this is what makes live migration possible)
vgextend - Used to add a new physical disk to a volume group
vgreduce - Used to remove a physical disk from a volume group
lvextend - Used to increase the size of a logical volume
lvreduce - Used to decrease the size of a logical volume
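
A quick sketch of the last two manipulation commands in use, with the hypothetical vg01/lv01 from earlier (the -r flag resizes the filesystem along with the volume; reducing a volume below the size of its filesystem destroys data, which is why -r is used here to shrink the filesystem first):

    # Grow: add 5 GiB to the volume and its filesystem in one step
    lvextend -r -L +5G /dev/vg01/lv01
    # Shrink: reduce the volume and its filesystem together (prompts first)
    lvreduce -r -L -5G /dev/vg01/lv01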




Brian Brennan 10/23/2012