1. Overview
- This guide covers hands-on configuration of RAID0, RAID1, RAID5, and RAID10 using LVM (Linux Logical Volume Manager).
2. Version
- Rocky Linux 9.5
3. Description
3-1. What is RAID?
- RAID (Redundant Array of Independent Disks) is a technology that groups multiple physical disks to improve performance, provide data redundancy, or both.
- RAID0 (Striping): performance ↑, no redundancy
- RAID1 (Mirroring): redundancy ↑, usable capacity halved
- RAID5 (parity-based distributed storage): balances performance and redundancy, requires at least 3 disks
- RAID10 (RAID1 mirroring combined with RAID0 striping): both performance and redundancy, requires at least 4 disks; see the capacity sketch below
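- The capacity cost of each level can be seen with quick arithmetic. A minimal sketch, assuming the disk counts used in section 7-1 and 128 GiB per disk (the values are illustrative arithmetic, not measurements):
# S=128                                       # per-disk size in GiB (assumed)
# echo "RAID0  (2 disks): $(( 2 * S )) GiB"   # striped only, no redundancy
# echo "RAID1  (2 disks): $(( S )) GiB"       # mirrored, usable space = one disk
# echo "RAID5  (3 disks): $(( 2 * S )) GiB"   # one disk's worth consumed by parity
# echo "RAID10 (4 disks): $(( 2 * S )) GiB"   # half the disks hold mirror copies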
3-2. What are rimage and rmeta?
- When LVM creates a RAID LV, it internally generates several hidden sub-LVs.
- rimage (RAID Image)
- The sub-LVs that hold the actual data blocks
- Store the striped or mirrored data in RAID0/RAID1/RAID5/RAID10 layouts
- rmeta (RAID Metadata)
- The sub-LVs that hold the metadata used to manage the RAID layout
- Used for per-disk state, synchronization tracking, parity bookkeeping, and so on
- Created only for RAID modes that need metadata, such as RAID1, RAID5, and RAID10
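- These sub-LVs are hidden from plain lvs output; once a RAID LV exists they appear in brackets with lvs -a. As a preview of the verification commands used in section 7-2 (assuming the vg_raid group created later in this guide):
# lvs -a -o lv_name,segtype,devices vg_raid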
4. Partition
4-1. Creating partitions
# for i in vdb vdc vdd vde; do
    parted -s /dev/$i mklabel gpt
    parted -s /dev/$i mkpart $i 1 100%
    parted -s /dev/$i set 1 lvm on
done
4-2. Verifying the partitions
# parted -l | grep lvm
Example output:
 1      1049kB  137GB  137GB               vdd   lvm
 1      1049kB  137GB  137GB               vdb   lvm
 1      1049kB  137GB  137GB               vde   lvm
 1      1049kB  137GB  137GB               vdc   lvm
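- As an optional cross-check, lsblk (standard util-linux) shows the same partitions and their sizes:
# lsblk /dev/vdb /dev/vdc /dev/vdd /dev/vde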
5. PV(Physical Volume)
5-1. Creating the PVs (Physical Volumes)
# pvcreate /dev/vdb1 /dev/vdc1 /dev/vdd1 /dev/vde1
Example output:
Physical volume "/dev/vdb1" successfully created.
Physical volume "/dev/vdc1" successfully created.
Physical volume "/dev/vdd1" successfully created.
Physical volume "/dev/vde1" successfully created.
Creating devices file /etc/lvm/devices/system.devices
5-2. Verifying the PVs
# pvdisplay -s
Example output:
Device "/dev/vdb1" has a capacity of <128.00 GiB
Device "/dev/vdc1" has a capacity of <128.00 GiB
Device "/dev/vdd1" has a capacity of <128.00 GiB
Device "/dev/vde1" has a capacity of <128.00 GiB
6. VG(Volume Group)
6-1. Creating the VG (Volume Group)
# vgcreate vg_raid /dev/vdb1 /dev/vdc1 /dev/vdd1 /dev/vde1
Example output:
Volume group "vg_raid" successfully created
6-2. Verifying the VG
# vgdisplay -s
Example output:
"vg_raid" 511.98 GiB [0 used / 511.98 GiB free]
7. LV(Logical Volume)
7-1. Creating the LVs (Logical Volumes)
7-1-1. RAID0
# lvcreate --type raid0 -L 20G -n lv_raid0 vg_raid /dev/vdb1 /dev/vdc1
Example output:
Using default stripesize 64.00 KiB.
Logical volume "lv_raid0" created.
7-1-2. RAID1
# lvcreate --type raid1 -L 20G -n lv_raid1 vg_raid /dev/vdb1 /dev/vdc1
Example output:
Logical volume "lv_raid1" created.
7-1-3. RAID5
# lvcreate --type raid5 -L 20G -n lv_raid5 vg_raid /dev/vdb1 /dev/vdc1 /dev/vdd1
Example output:
Using default stripesize 64.00 KiB.
Logical volume "lv_raid5" created.
7-1-4. RAID10
# lvcreate --type raid10 -L 20G -n lv_raid10 vg_raid /dev/vdb1 /dev/vdc1 /dev/vdd1 /dev/vde1
Example output:
Using default stripesize 64.00 KiB.
Logical volume "lv_raid10" created.
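- In the commands above, the image/stripe count is inferred from the PVs listed. It can instead be set explicitly and the PV selection left to LVM; a sketch using the standard -i (number of data stripes) and -m (number of extra mirrors) options, with hypothetical LV names:
# lvcreate --type raid5 -i 2 -L 20G -n lv_raid5_alt vg_raid   # 2 data stripes + 1 parity = 3 devices
# lvcreate --type raid1 -m 1 -L 20G -n lv_raid1_alt vg_raid   # 1 image + 1 mirror copy = 2 devices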
7-2. Verifying the LVs (Logical Volumes)
7-2-1. Verifying RAID0
# lvdisplay -m /dev/vg_raid/lv_raid0
Example output:
  --- Logical volume ---
  LV Path                /dev/vg_raid/lv_raid0
  LV Name                lv_raid0
  VG Name                vg_raid
  LV UUID                a3QcAT-0jcx-59Fz-auJA-AB5a-KErm-MOD5Gb
  LV Write Access        read/write
  LV Creation host, time KVM01, 2025-09-06 15:31:53 +0900
  LV Status              available
  # open                 0
  LV Size                20.00 GiB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     512
  Block device           252:2

  --- Segments ---
  Logical extents 0 to 5119:
    Type                raid0
    Monitoring          not monitored
    Raid Data LV 0
      Logical volume    lv_raid0_rimage_0
      Logical extents   0 to 2559
    Raid Data LV 1
      Logical volume    lv_raid0_rimage_1
      Logical extents   0 to 2559
# lvs -a -o name,devices vg_raid | grep lv_raid0
Example output:
  lv_raid0            lv_raid0_rimage_0(0),lv_raid0_rimage_1(0)
  [lv_raid0_rimage_0] /dev/vdb1(0)
  [lv_raid0_rimage_1] /dev/vdc1(0)
7-2-2. Verifying RAID1
# lvdisplay -m /dev/vg_raid/lv_raid1
Example output:
  --- Logical volume ---
  LV Path                /dev/vg_raid/lv_raid1
  LV Name                lv_raid1
  VG Name                vg_raid
  LV UUID                FU2uZl-yXSA-Ibey-n8gZ-UF4h-7tBG-gX8GgV
  LV Write Access        read/write
  LV Creation host, time KVM01, 2025-09-06 15:32:26 +0900
  LV Status              available
  # open                 0
  LV Size                20.00 GiB
  Current LE             5120
  Mirrored volumes       2
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:7

  --- Segments ---
  Logical extents 0 to 5119:
    Type                raid1
    Monitoring          monitored
    Raid Data LV 0
      Logical volume    lv_raid1_rimage_0
      Logical extents   0 to 5119
    Raid Data LV 1
      Logical volume    lv_raid1_rimage_1
      Logical extents   0 to 5119
    Raid Metadata LV 0  lv_raid1_rmeta_0
    Raid Metadata LV 1  lv_raid1_rmeta_1
# lvs -a -o name,devices vg_raid | grep lv_raid1_
Example output:
  lv_raid1            lv_raid1_rimage_0(0),lv_raid1_rimage_1(0)
  [lv_raid1_rimage_0] /dev/vdb1(2561)
  [lv_raid1_rimage_1] /dev/vdc1(2561)
  [lv_raid1_rmeta_0]  /dev/vdb1(2560)
  [lv_raid1_rmeta_1]  /dev/vdc1(2560)
7-2-3. Verifying RAID5
# lvdisplay -m /dev/vg_raid/lv_raid5
Example output:
  --- Logical volume ---
  LV Path                /dev/vg_raid/lv_raid5
  LV Name                lv_raid5
  VG Name                vg_raid
  LV UUID                djmAxC-fh3X-mvru-IPoy-6gsx-RTox-isqmjc
  LV Write Access        read/write
  LV Creation host, time KVM01, 2025-09-06 15:32:38 +0900
  LV Status              available
  # open                 0
  LV Size                20.00 GiB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     768
  Block device           252:14

  --- Segments ---
  Logical extents 0 to 5119:
    Type                raid5
    Monitoring          monitored
    Raid Data LV 0
      Logical volume    lv_raid5_rimage_0
      Logical extents   0 to 2559
    Raid Data LV 1
      Logical volume    lv_raid5_rimage_1
      Logical extents   0 to 2559
    Raid Data LV 2
      Logical volume    lv_raid5_rimage_2
      Logical extents   0 to 2559
    Raid Metadata LV 0  lv_raid5_rmeta_0
    Raid Metadata LV 1  lv_raid5_rmeta_1
    Raid Metadata LV 2  lv_raid5_rmeta_2
# lvs -a -o name,devices vg_raid | grep lv_raid5_
Example output:
  lv_raid5            lv_raid5_rimage_0(0),lv_raid5_rimage_1(0),lv_raid5_rimage_2(0)
  [lv_raid5_rimage_0] /dev/vdb1(7682)
  [lv_raid5_rimage_1] /dev/vdc1(7682)
  [lv_raid5_rimage_2] /dev/vdd1(1)
  [lv_raid5_rmeta_0]  /dev/vdb1(7681)
  [lv_raid5_rmeta_1]  /dev/vdc1(7681)
  [lv_raid5_rmeta_2]  /dev/vdd1(0)
7-2-4. Verifying RAID10
# lvdisplay -m /dev/vg_raid/lv_raid10
Example output:
  --- Logical volume ---
  LV Path                /dev/vg_raid/lv_raid10
  LV Name                lv_raid10
  VG Name                vg_raid
  LV UUID                fXRwoA-xFo6-O2Lt-x6Q9-V1SL-nCqP-jZe2vU
  LV Write Access        read/write
  LV Creation host, time KVM01, 2025-09-06 15:32:49 +0900
  LV Status              available
  # open                 0
  LV Size                20.00 GiB
  Current LE             5120
  Mirrored volumes       4
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           252:23

  --- Segments ---
  Logical extents 0 to 5119:
    Type                raid10
    Monitoring          monitored
    Raid Data LV 0
      Logical volume    lv_raid10_rimage_0
      Logical extents   0 to 5119
    Raid Data LV 1
      Logical volume    lv_raid10_rimage_1
      Logical extents   0 to 5119
    Raid Data LV 2
      Logical volume    lv_raid10_rimage_2
      Logical extents   0 to 5119
    Raid Data LV 3
      Logical volume    lv_raid10_rimage_3
      Logical extents   0 to 5119
    Raid Metadata LV 0  lv_raid10_rmeta_0
    Raid Metadata LV 1  lv_raid10_rmeta_1
    Raid Metadata LV 2  lv_raid10_rmeta_2
    Raid Metadata LV 3  lv_raid10_rmeta_3
# lvs -a -o name,devices vg_raid | grep lv_raid10_
Example output:
  lv_raid10            lv_raid10_rimage_0(0),lv_raid10_rimage_1(0),lv_raid10_rimage_2(0),lv_raid10_rimage_3(0)
  [lv_raid10_rimage_0] /dev/vdb1(10243)
  [lv_raid10_rimage_1] /dev/vdc1(10243)
  [lv_raid10_rimage_2] /dev/vdd1(2562)
  [lv_raid10_rimage_3] /dev/vde1(1)
  [lv_raid10_rmeta_0]  /dev/vdb1(10242)
  [lv_raid10_rmeta_1]  /dev/vdc1(10242)
  [lv_raid10_rmeta_2]  /dev/vdd1(2561)
  [lv_raid10_rmeta_3]  /dev/vde1(0)
8. File System
8-1. Creating file systems
# mkfs.ext4 /dev/vg_raid/lv_raid0
# mkfs.ext4 /dev/vg_raid/lv_raid1
# mkfs.ext4 /dev/vg_raid/lv_raid5
# mkfs.ext4 /dev/vg_raid/lv_raid10
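- For the striped levels (RAID0/5/10), ext4 can optionally be told the stripe geometry so allocations align with it. A sketch for the two-disk RAID0 LV, assuming the default 64 KiB stripesize and 4 KiB ext4 blocks (stride = 64/4 = 16 blocks; stripe_width = stride × 2 data disks = 32):
# mkfs.ext4 -E stride=16,stripe_width=32 /dev/vg_raid/lv_raid0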
9. Mount
9-1. Creating mount points
# mkdir /mnt/{raid0,raid1,raid5,raid10}
9-2. Mounting
# mount /dev/vg_raid/lv_raid0 /mnt/raid0
# mount /dev/vg_raid/lv_raid1 /mnt/raid1
# mount /dev/vg_raid/lv_raid5 /mnt/raid5
# mount /dev/vg_raid/lv_raid10 /mnt/raid10
9-3. Verifying the mounts
# df -Th | grep raid
Example output:
/dev/mapper/vg_raid-lv_raid0   ext4  20G  24K  19G  1%  /mnt/raid0
/dev/mapper/vg_raid-lv_raid1   ext4  20G  24K  19G  1%  /mnt/raid1
/dev/mapper/vg_raid-lv_raid5   ext4  20G  24K  19G  1%  /mnt/raid5
/dev/mapper/vg_raid-lv_raid10  ext4  20G  24K  19G  1%  /mnt/raid10
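- The mounts above do not survive a reboot. To make them persistent, /etc/fstab entries along these lines can be appended (a sketch using the usual defaults):
# cat >> /etc/fstab << 'EOF'
/dev/vg_raid/lv_raid0   /mnt/raid0   ext4   defaults   0 0
/dev/vg_raid/lv_raid1   /mnt/raid1   ext4   defaults   0 0
/dev/vg_raid/lv_raid5   /mnt/raid5   ext4   defaults   0 0
/dev/vg_raid/lv_raid10  /mnt/raid10  ext4   defaults   0 0
EOF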
10. Disk Replacement
10-1. Checking the current layout
# lvs -a -o name,devices vg_raid | grep lv_raid1_
Example output:
  lv_raid1            lv_raid1_rimage_0(0),lv_raid1_rimage_1(0)
  [lv_raid1_rimage_0] /dev/vdb1(2561)
  [lv_raid1_rimage_1] /dev/vdc1(2561)
  [lv_raid1_rmeta_0]  /dev/vdb1(2560)
  [lv_raid1_rmeta_1]  /dev/vdc1(2560)
10-2. Replacing the failed disk
- Replace the PV (/dev/vdb1) backing lv_raid1 with another PV. With no target PV named, LVM allocates the replacement images from free space in the VG (here it picks /dev/vdd1, as the check below shows).
# lvconvert --replace /dev/vdb1 vg_raid/lv_raid1
10-3. Verifying the replacement
# lvs -a -o name,devices vg_raid | grep lv_raid1_
Example output:
  lv_raid1            lv_raid1_rimage_0(0),lv_raid1_rimage_1(0)
  [lv_raid1_rimage_0] /dev/vdd1(5123)
  [lv_raid1_rimage_1] /dev/vdc1(2561)
  [lv_raid1_rmeta_0]  /dev/vdd1(5122)
  [lv_raid1_rmeta_1]  /dev/vdc1(2560)
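- After the replacement, the new images resynchronize in the background; progress can be watched via the standard lvs sync fields, as sketched below. Note that lvconvert --replace requires the outgoing PV to still be visible; for a PV that has actually failed or disappeared, lvconvert --repair vg_raid/lv_raid1 is the usual counterpart.
# lvs -o lv_name,sync_percent,raid_sync_action vg_raid/lv_raid1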