LVM is easy! Adding a new disk to an LVM volume
My home Linux server has a 250 GB disk installed. I just bought a new 250 GB SATA drive and want to add it to my existing LVM volume to increase its size to 500 GB. How do I add a disk to LVM and expand an LVM volume in Linux?
The Linux Logical Volume Manager (LVM) creates an easy-to-use layer on top of physical disks. You can combine multiple disks and create logical storage volumes. This provides benefits such as:
- No restrictions on disk size;
- Increased disk bandwidth;
- Mirroring of volumes for critical business data;
- Volume snapshots;
- Easy backup and recovery using snapshots;
- Easy data migration;
- Resizing storage pools (adding or removing disks) without reformatting disks.
Step 1 - Find out information about existing LVM volumes

Attention: Be careful with lvm / mkfs.ext4 and other commands, and with device names: if a device name is given incorrectly, you can destroy all your data. Be careful and always keep full backups.
LVM storage management is divided into three parts:
- Physical Volumes (PV) – the actual disks (for example, /dev/sda, /dev/sdb, /dev/vdb, etc.)
- Volume Groups (VG) – physical volumes are grouped into volume groups (e.g. my_vg = /dev/sda + /dev/sdb)
- Logical Volumes (LV) – a volume group is, in turn, divided into logical volumes (for example, my_vg is divided into my_vg/data, my_vg/backups, my_vg/home, my_vg/mysqldb, etc.)
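The three layers map directly onto the three LVM command families; a minimal sketch, assuming two spare disks named /dev/sdb and /dev/sdc (hypothetical device names, not from the setup described in this article):

```shell
# PV layer: mark the raw disks as LVM physical volumes
sudo pvcreate /dev/sdb /dev/sdc

# VG layer: pool both physical volumes into one volume group
sudo vgcreate my_vg /dev/sdb /dev/sdc

# LV layer: carve logical volumes out of the pool
sudo lvcreate -L 20G -n data my_vg
sudo lvcreate -L 10G -n backups my_vg
```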
How to display information about physical volumes (pv)
Enter the following pvs command to view information about physical volumes:

$ sudo pvs
So, my LVM currently includes one physical volume (an actual disk partition) called /dev/vda5. To view its properties in detail, enter:
$ sudo pvdisplay
From the output it is clear that our volume group, named ubuntu-box-1-vg, is made from a physical volume named /dev/vda5.
How to display information about LVM volume group (vg)
Type either the vgs or the vgdisplay command to view information about volume groups and their properties:
$ sudo vgdisplay
How to display information about an LVM logical volume (lv)
Type either the lvs or the lvdisplay command to view information about logical volumes and their properties:
$ sudo lvdisplay
My ubuntu-box-1-vg volume group is split into two logical volumes:
- /dev/ubuntu-box-1-vg/root - root file system;
- /dev/ubuntu-box-1-vg/swap_1 - Swap space.
Step 2 - Find out information about the new drive
Now attach the new drive to the server. In this example, for demonstration purposes, I have added a new 5 GiB drive. To list the newly attached disks:
$ sudo fdisk -l
$ sudo fdisk -l | grep "^Disk /dev/"
Another option is to scan all visible devices for LVM2:
$ sudo lvmdiskscan
Sample output:

  /dev/ram0                   [      64.00 MiB]
  /dev/ubuntu-box-1-vg/root   [      37.49 GiB]
  /dev/ram1                   [      64.00 MiB]
  /dev/ubuntu-box-1-vg/swap_1 [       2.00 GiB]
  /dev/vda1                   [     487.00 MiB]
  /dev/ram2                   [      64.00 MiB]
  /dev/ram3                   [      64.00 MiB]
  /dev/ram4                   [      64.00 MiB]
  /dev/ram5                   [      64.00 MiB]
  /dev/vda5                   [      39.52 GiB] LVM physical volume
  /dev/ram6                   [      64.00 MiB]
  /dev/ram7                   [      64.00 MiB]
  /dev/ram8                   [      64.00 MiB]
  /dev/ram9                   [      64.00 MiB]
  /dev/ram10                  [      64.00 MiB]
  /dev/ram11                  [      64.00 MiB]
  /dev/ram12                  [      64.00 MiB]
  /dev/ram13                  [      64.00 MiB]
  /dev/ram14                  [      64.00 MiB]
  /dev/ram15                  [      64.00 MiB]
  /dev/vdb                    [       5.00 GiB]
  2 disks
  18 partitions
  0 LVM physical volume whole disks
  1 LVM physical volume
Step 3 - Create a physical volume (pv) on the new drive /dev/vdb

Enter the following command:
$ sudo pvcreate /dev/vdb
Sample output:

  Physical volume "/dev/vdb" successfully created
Now run the following command to check:
$ sudo lvmdiskscan -l
Sample output:

  WARNING: only considering LVM devices
  /dev/vda5 [      39.52 GiB] LVM physical volume
  /dev/vdb  [       5.00 GiB] LVM physical volume
  1 LVM physical volume whole disk
  1 LVM physical volume
Step 4 - Add the newly created physical volume (pv) /dev/vdb to the existing volume group (vg)

Enter the following command to add the physical volume /dev/vdb to the volume group ubuntu-box-1-vg:

$ sudo vgextend ubuntu-box-1-vg /dev/vdb
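The article breaks off here; to actually use the new space, the root logical volume and its filesystem must also be grown. A sketch of those remaining steps, under the assumption (consistent with the outputs above) that the root LV is /dev/ubuntu-box-1-vg/root with an ext4 filesystem:

```shell
# Step 4's command: add the new physical volume to the volume group
sudo vgextend ubuntu-box-1-vg /dev/vdb

# Grow the root logical volume into all of the newly added free space
sudo lvextend -l +100%FREE /dev/ubuntu-box-1-vg/root

# Grow the ext4 filesystem online to match the enlarged volume
sudo resize2fs /dev/ubuntu-box-1-vg/root
```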
The Logical Volume Manager (LVM) provides an additional layer of abstraction between physical/logical disks (the ordinary partitions that fdisk and similar programs work with) and the file system. It does this by splitting the underlying partitions into small blocks (extents, usually 4-32 MB) and combining them into a single virtual pool, more precisely a volume group, which is in turn divided into logical volumes. To the file system, a logical volume looks like a regular block device, even though its individual extents may reside on different physical devices (an extent can even be distributed, RAID-style). LVM increases the flexibility of storage management, but being just an intermediate layer, it does not remove the restrictions of the other layers: you still need to create and modify partitions, and format them.
Creation.
# pvcreate /dev/sdb1 /dev/sdb2        // create physical volumes
# vgcreate volgroup00 /dev/sdb1       // create a volume group
# vgextend volgroup00 /dev/sdb2       // add a new partition to the volume group
# pvdisplay /dev/sdb2                 // display physical volume attributes
# lvcreate -L20G -ntest01 volgroup00  // create a 20 GB logical volume called test01
A volume can also be sized by the number of extents; the number of extents available in the volume group is shown by vgdisplay:
# lvcreate -l 10000 volgroup00 -n test02
Once the logical volume has been created, you can do everything with it that you would with a regular partition: format it, mount it, copy data to it, and so on.
# mkreiserfs /dev/volgroup00/test01          // format the logical volume as ReiserFS
# mount /dev/volgroup00/test01 /mnt/lvmtest  // mount the logical volume
# cp -a /etc/ /mnt/lvmtest                   // copy some data onto it
Maintenance of LVM.
Increasing the size of a logical volume
After growing a logical volume, you need to grow the file system on it. Each file system has its own method, and each has its nuances:
Before resizing an ext2 file system, the partition must be unmounted (ext3/ext4 can be resized on the fly).
Ext4 can only be grown with resize2fs; fsadm has its own methods.
ReiserFS file systems can be grown either mounted or unmounted.
An XFS file system can only be grown while mounted. In addition, the utility takes the mount point as its parameter, not the device name.
# lvextend -L+4G /dev/volgroup00/test01     // grow the logical volume by 4 GB
# resize2fs /dev/volgroup00/ext             // grow Ext2/Ext3/Ext4
or, alternatively, via fsadm (some sources mention e2fsadm, which I could not find myself):
# fsadm -l resize /dev/volgroup00/ext 2G    // grow an Ext2/Ext3 logical volume together with its FS (as of September 2009, Ext4 was not yet supported)
# resize_reiserfs -f /dev/volgroup00/reiser // grow ReiserFS
# xfs_growfs /mnt/lvm/xfs                   // grow XFS
# btrfsctl -r +2g /mnt/lvm/btrfs/           // grow Btrfs
or
# btrfsctl -r +2g -A /dev/volgroup00/btrfs  // grow Btrfs
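On current LVM2 releases, the grow-then-resize pair above can be collapsed into a single command; a minimal sketch, reusing the test01 example volume:

```shell
# -r (--resizefs) runs fsadm after the LV grows, so the matching
# filesystem-resize tool (resize2fs, xfs_growfs, ...) is invoked for you
lvextend -r -L +4G /dev/volgroup00/test01
```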
Reducing the size of a logical volume
Logical volumes can also be shrunk. First reduce the size of the file system, and only then reduce the logical volume; in the reverse order you can lose data. Again, there are nuances:
The file system must be unmounted before resizing.
When shrinking ext2/ext3, resize2fs is given the new size.
XFS and JFS cannot be shrunk.
Btrfs can be shrunk on the fly, but it is better not to risk it.
# resize2fs /dev/volgroup00/ext2 500m            // set a new size for an Ext2/Ext3 FS
# fsadm -l resize /dev/volgroup00/ext3 200M      // set a new size for an Ext2/Ext3 FS
# resize_reiserfs -s-1G /dev/volgroup00/reiserfs // shrink a ReiserFS FS
# btrfsctl -r -2g -A /dev/volgroup00/btrfs       // shrink a Btrfs FS
# lvreduce -L-1G /dev/volgroup00/test01          // shrink the logical volume
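Put together, a safe shrink of an ext4 volume looks like this (a sketch with a hypothetical 20 GB target size, reusing the test01 example volume):

```shell
umount /dev/volgroup00/test01           # ext2/3/4 cannot be shrunk while mounted
e2fsck -f /dev/volgroup00/test01        # resize2fs refuses to shrink without a forced check
resize2fs /dev/volgroup00/test01 20G    # shrink the file system FIRST
lvreduce -L 20G /dev/volgroup00/test01  # then shrink the LV to the same size, never smaller
mount /dev/volgroup00/test01 /mnt/lvmtest
```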
Merging volume groups
# vgchange -a n /dev/volgroup02  // deactivate the logical volumes in the group being absorbed
# vgmerge volgroup01 volgroup02  // merge volgroup02 into volgroup01
Splitting a volume group
# vgsplit volgroup01 volgroup02 /dev/sdb1 // split off a new volume group, volgroup02, located on the physical volume /dev/sdb1
Resizing a physical volume. A few nuances:
Growing an LVM physical volume is done after the underlying partition has been enlarged with a tool such as cfdisk/fdisk.
Shrinking should be done only after the file systems and logical volumes have been shrunk, otherwise data corruption is possible.
# pvresize /dev/sda1                             // grow the physical volume
# pvresize --setphysicalvolumesize 40G /dev/sda1 // shrink the physical volume
Snapshots
A snapshot is a point-in-time copy of another volume. When creating snapshots, make sure that dmeventd is running. When snapshotting XFS, the file system must first be frozen with xfs_freeze.
# lvcreate -L600M -s -n var-backup /dev/volgroup00/var // create the var-backup volume as a snapshot of var
# mount /dev/volgroup00/var-backup /mnt/backup         // then mount the snapshot
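A typical snapshot workflow is a consistent backup: archive from the frozen snapshot, then drop it. A sketch; the archive path and mount point are hypothetical:

```shell
lvcreate -L600M -s -n var-backup /dev/volgroup00/var  # point-in-time copy of var
mount -o ro /dev/volgroup00/var-backup /mnt/backup    # mount the snapshot read-only
tar -czf /root/var-backup.tar.gz -C /mnt/backup .     # archive the frozen state
umount /mnt/backup
lvremove -f /dev/volgroup00/var-backup                # snapshots fill up as the origin changes; remove when done
```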
Creating a Mirror (Mirrors)
Mirrors are used to increase fault tolerance and data safety. A mirror needs 3 physical volumes: 2 for the mirror halves and 1 for the log. The physical volumes should be on different physical media; using physical volumes from a single device defeats the purpose of the mirror, since a hardware failure takes out the whole mirror. When creating mirrors, make sure that dmeventd is running.
# pvcreate /dev/sda5 /dev/sdb1 /dev/sdc1          // create the physical volumes
# vgcreate mirror00 /dev/sda5 /dev/sdb1 /dev/sdc1 // create the mirror00 group
# lvcreate -L 5G -n volume00 -m 1 mirror00        // create the mirrored volume volume00
View information about the logical volumes; the Copy% value should reach 100%:
# lvs
  LV       VG       Attr   LSize Origin Snap%  Move Log           Copy%  Convert
  volume00 mirror00 mwi-a- 5.00G             volume00_mlog   6.17
Check which devices the created mirror uses:
# lvs -a -o +devices
  LV                  VG       Attr   LSize Origin Snap%  Move Log           Copy%  Convert Devices
  volume00            mirror00 mwi-a- 5.00G             volume00_mlog 100.00         volume00_mimage_0(0),volume00_mimage_1(0)
  [volume00_mimage_0] mirror00 iwi-ao 5.00G                                          /dev/sda5(0)
  [volume00_mimage_1] mirror00 iwi-ao 5.00G                                          /dev/sdb1(0)
  [volume00_mlog]     mirror00 lwi-ao 4.00M
# vgextend volgroup01 /dev/sdc1 /dev/sdd1   // add new physical volumes to the group
# lvconvert -m 1 /dev/volgroup01/volume-new // convert a regular volume into a mirror
Deleting Volumes and Groups
Logical volumes must be unmounted before being removed.
# vgreduce volgroup00 /dev/sdb1   // remove a physical volume from a group
# lvremove /dev/volgroup00/test01 // remove a logical volume
# vgremove volgroup00             // remove a volume group
# pvremove /dev/sdc1              // remove a physical volume
The other day I had to replace the disks in a CentOS 6.7 server. The old disks, although still working, could no longer cope with the load, so we splurged on SSDs of the same capacity as the old drives. But since a gigabyte of SSD costs much more, looking at the sizes of /var (180 GB) and / (the root partition, 300 GB), the natural decision was to grow / by shrinking /var. A good idea, of course, but I had never played with this before, so I spent the weekend on forums: first I tested everything on a virtual machine without RAID, then I remembered that my server runs a software RAID1, built a test bench (CentOS 6.7 installed on an old computer with two disks in software RAID1) and verified everything there, and only then repeated it on the production server. It was still nerve-wracking. So let's go!
Tip: if you have never experimented with partition changes before, never try it first on a production machine.
So, we have a system with two disks combined into a RAID1 (mirror).
The partition layout is roughly this:
/dev/sda:
/dev/sda1 200 MB, /dev/md0
/dev/sda2 480 GB, /dev/md1
/dev/sdb:
/dev/sdb1 200 MB, /dev/md0
/dev/sdb2 480 GB, /dev/md1
/dev/md0 given to /boot
/dev/md1 holds an LVM volume:
/dev/VolGroup/LogVol00 swap 2 GB
/dev/VolGroup/LogVol01 /var 180 Gb, ext4
/dev/VolGroup/LogVol02 / 300 GB, ext4
Why did I show the layout? Just so you notice that we will never resize anything on the /dev/sdXY devices. All partition changes will be made only to the LVM logical volumes!
The task: shrink /var to 30 GB and give all the freed space to / (root). Both partitions use ext4. The system is CentOS 6.7.
Step 1: Reduce /var
In my case (software RAID1), I first checked, just in case, that all disks are attached and there are no errors (cat /proc/mdstat).
I wasn't sure it was a good idea to unmount /var (umount /var) on the running system, so I booted from the CentOS LiveCD in rescue mode (entering single-user mode without a LiveCD would also have worked).
Checking the availability of the LVM physical volume:
#pvscan
PV /dev/md1 VG VolGroup lvm2
Checking the availability of volume groups:
#vgscan
Reading all physical volumes. This may take a while...
Found volume group "VolGroup" using metadata type lvm2
Activate the logical volumes:

# vgchange -a y VolGroup
We look at the logical volumes:
#lvscan
ACTIVE "/dev/VolGroup/LogVol01" inherit
ACTIVE "/dev/VolGroup/LogVol00" inherit
ACTIVE "/dev/VolGroup/LogVol02" inherit
You can see the details of the /dev/VolGroup/LogVol01 volume (we have it /var):
# lvdisplay /dev/VolGroup/LogVol01
though you can skip this check.
All of the above checks are there to make sure the LVM volumes are visible and active, and so that you know which volume to work on. Let's move on.
Unmount the volume that holds /var, which we are going to shrink:
# umount /dev/VolGroup/LogVol01
We check the file system of the volume:
# fsck.ext4 /dev/VolGroup/LogVol01
The command should pass without errors.
Then force a full check (-f forces checking even if the file system appears clean):
# e2fsck -f /dev/VolGroup/LogVol01
First, reduce the size of the volume's file system:
# resize2fs -p /dev/VolGroup/LogVol01 30G
Attention: here "30G" is the size we want the file system to become, not the amount by which we want to reduce it.
And only after that we resize the LVM volume:
# lvreduce -L 30G /dev/VolGroup/LogVol01
Received a success message.
Instead of mounting the partition back, you can reboot right away and check that everything is fine: the system boots, and df -h shows /var at 30 GB.
On CentOS, after booting but before login, SELinux reported that it needed to relabel the changed file system. OK; it took some time, and the system then rebooted itself. After that I logged in and made sure everything was fine. Only then did I move on to the second step (which turned out to be much faster and easier): growing the root partition by giving it all the free space available on the LVM physical volume. Run pvscan and check that its output shows free space (roughly what /var used to occupy). That free space is what we now add to / (root).
Step 2: increase the size of the LVM volume (which we have /) without rebooting
Yes, you read that correctly. To increase the size of an LVM volume, we do not have to boot into single-user mode (or a LiveCD in rescue mode).
Just in case, so as not to confuse which volume should receive the free space, run cat /etc/fstab and lvscan and make sure that the / (root) partition is /dev/VolGroup/LogVol02 and not something else ;)
# lvextend -l+100%FREE -r /dev/VolGroup/LogVol02
That's right: no space between -l (that's a lowercase L) and +100%FREE. Note that I did not specify exactly how much to grow the volume by. In this situation I didn't need to: rather than working out how many GB to add, I simply asked for everything available. See man lvextend for variants such as -L+100G ;) The -r option resizes the file system after the logical volume has grown. Without it, there would be two commands:
# lvextend -l+100%FREE /dev/VolGroup/LogVol02
# resize2fs /dev/VolGroup/LogVol02
After it completed successfully, I rebooted and checked that everything was fine.
Naturally, any of the above can fail at any stage, if only because of a power outage. So if the data on the partitions being changed matters at all, keep a copy of it.
All of the above is true for CentOS. For Ubuntu everything appears to be the same. I suspect (but do not know for sure) that the commands are identical on all modern Linux distributions.
08/21/2017 10:48 bzzz
What is LVM?
LVM stands for Logical Volume Manager. I won't quote official definitions, but will describe it briefly in my own words. LVM is an additional layer of disk-space abstraction that sits between the file system and the physical disk; it is similar in spirit to software RAID. The abstraction has three elements: the volume group (VG), the physical volume (PV), and the logical volume (LV). You can create multiple volume groups. Physical volumes — disk partitions — are added to each volume group. After adding physical volumes, you can create logical volumes, and on the logical volumes you create file systems. All this is very convenient, especially on a server.

How can LVM be used?
With LVM you can simplify server maintenance. You can create many partitions with different file systems, mount file systems with different flags (for example, disabling file execution), and very quickly and easily grow a partition when it runs out of space. Of course, the extra layer between the disk and the file system costs some read and write speed; you have to pay for everything. I use LVM to manage the disk space of virtual machines. Usually a regular file is used as a virtual disk. Firstly, that is inconvenient: KVM has no mechanism for taking instant snapshots of such a virtual disk, copying even a few gigabytes takes a long time, and the virtual machine has to be stopped. Secondly, if the virtual disk file lives on a file system, we get extra latency from reading and writing that file. So I use LVM logical volumes as virtual disks.

Command Quick Reference
Create a volume group:
- vgcreate vg_virt /dev/sda1 /dev/sdb1
Create a physical volume and add it to the group:
- pvcreate /dev/sda2
- vgextend vg_virt /dev/sda2

Create a 10 GB logical volume:
- lvcreate -L10G -n lv_ubuntu_vm vg_virt
To grow a logical volume, you can specify the end size of the volume, or you can specify the size by which you want to grow the volume.
- lvextend -L12G /dev/vg_virt/lv_ubuntu_vm
- lvextend -L+3G /dev/vg_virt/lv_ubuntu_vm

After growing the volume, grow the file system:
- resize2fs /dev/vg_virt/lv_ubuntu_vm
Remove a logical volume:
- lvremove /dev/vg_virt/lv_ubuntu_vm

Create a snapshot:
- lvcreate --size 2G --snapshot --name snapshot_ubuntu_vm /dev/vg_virt/lv_ubuntu_vm
And to create a full copy of a logical volume, that is, to clone it completely, you can use the simple dd utility.
- sudo dd if=/dev/vgroup1/lvolume1 of=/dev/vgroup1/lvolume_copy
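dd copies the device bit-for-bit, so the source must not be changing during the copy; for a volume that is in use, one option is to clone from a snapshot instead (a sketch with the same hypothetical volume names; the snapshot size and copy size are assumptions):

```shell
sudo lvcreate -L2G -s -n lvolume1_snap /dev/vgroup1/lvolume1  # freeze a point-in-time view
sudo lvcreate -L10G -n lvolume_copy vgroup1                   # target at least as large as the source
sudo dd if=/dev/vgroup1/lvolume1_snap of=/dev/vgroup1/lvolume_copy bs=4M
sudo lvremove -f /dev/vgroup1/lvolume1_snap                   # drop the snapshot once the copy is done
```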
Logical Volume Manager is the logical volume manager of the GNU/Linux and OS/2 operating systems. It allows you to create logical volumes on top of physical partitions (or even unpartitioned hard drives), which the system sees as ordinary block devices (that is, as ordinary partitions). The main advantages of LVM are, first, that a single logical volume group can be created over any number of physical partitions, and second, that the size of logical volumes can easily be changed during operation. In addition, LVM supports snapshots, on-the-fly partition copying, and RAID-1-like mirroring.
Creating and Removing LVM
LVM has three groups of utilities, for working with physical volumes (pv*), volume groups (vg*), and logical volumes (lv*). For example, pvcreate creates physical volumes, pvscan reports on them, and pvdisplay shows full information about them. The triples vgcreate, vgscan, vgdisplay and lvcreate, lvscan, lvdisplay do the same for volume groups and logical volumes, respectively.
Removing LVM (or individual parts of it, such as logical volumes or volume groups) is done in the reverse order of creation:
unmount the partitions (umount)
remove the logical volumes (lvremove); check first with lvdisplay
remove the volume groups (vgremove); check first with vgdisplay: # vgremove vz
remove the unneeded physical volumes (pvremove); check first with pvdisplay: # pvremove /dev/sda3
Creating an LVM
Create a physical volume on the sda3 partition:
# pvcreate /dev/sda3
# pvdisplay
On the physical volume, create a volume group named vz:
# vgcreate -s 32M vz /dev/sda3
The vgcreate command takes the group name as its first argument and the partition's device file as the second. The group name is arbitrary; when using devfs, the full notation must be used in the paths to the physical volume device files (as printed by pvscan). By default, volumes are cut into physical extents of 4 MB. If you want a different block size, set it explicitly with the -s ##m option. Using 32 MB extents is recommended: in that case the maximum size of any future logical volume is limited to 2 terabytes, whereas with the default extent size the limit would be 256 GB.
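The 256 GB and 2 TB figures above come from the old LVM1 metadata format, which capped a logical volume at 65,536 extents (LVM2 no longer has this limit); the arithmetic can be checked in the shell:

```shell
# Max LV size under the LVM1 format = extent size * 65536 extents
for extent_mb in 4 32; do
  max_gb=$(( extent_mb * 65536 / 1024 ))
  echo "extent ${extent_mb} MB -> max LV ${max_gb} GB"
done
# prints:
# extent 4 MB -> max LV 256 GB
# extent 32 MB -> max LV 2048 GB
```

2048 GB is the 2 TB figure from the text.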
Create the logical volume(s) (similar to partitioning a physical hard drive). Let's create a 10 GB tmp volume and a rest volume that takes all the remaining space in the vz volume group:
# lvcreate -L 10G -n tmp vz
# lvcreate -l 100%FREE -n rest vz
Format the resulting logical volumes:
# mkfs.ext4 /dev/vz/tmp
# mkfs.ext4 /dev/vz/rest
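To have the new volumes mounted at boot, they can be listed in /etc/fstab; a sketch, where the mount points /vz/tmp and /vz/rest are hypothetical names not taken from the text:

```shell
mkdir -p /vz/tmp /vz/rest
cat >> /etc/fstab <<'EOF'
/dev/vz/tmp   /vz/tmp   ext4  defaults  0 2
/dev/vz/rest  /vz/rest  ext4  defaults  0 2
EOF
mount -a   # mount every fstab entry that is not yet mounted
```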