Preface
In this chapter you will work through RAID 1 hands-on. First, a recap of the RAID 1 basics covered earlier:
- Disks required: at least two (2n, n ≥ 1)
- Usable capacity: 50% (a two-disk mirror of 50 GB members yields roughly 50 GB)
- Read/write performance: modest; reads can be served by either member, but every write must go to all members, so writes gain nothing over a single disk
- Redundancy: yes
- Cost: high
Preparation
In the previous article I restored the operating system's disk environment to its initial state, precisely so this RAID 1 experiment could start clean.
With the machine powered off, add three 50 GB mechanical disks, then boot:
# As you can see, the newly added disks are recognized (named by udev) as sdb, sdc, and sdd
Shell > lsblk -o NAME,TYPE,SIZE,UUID
NAME TYPE SIZE UUID
sda disk 50G
├─sda1 part 1G 8a77104f-8e6c-459c-93bc-0b00a52fb34b
├─sda2 part 47G ae2f3495-1d6a-4da3-afd2-05c549e55322
└─sda3 part 2G 1646e0aa-af19-4282-bbab-219c31a2ab6d
sdb disk 50G
sdc disk 50G
sdd disk 50G
sr0 rom 13.2G 2024-05-27-14-12-59-00
Partition and format all three disks:
# Partition
Shell > parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary 0% 100%
(parted) print
Model: ATA VMware Virtual S (scsi)
Disk /dev/sdb: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53.7GB 53.7GB primary
(parted) quit
Information: You may need to update /etc/fstab.
# Partition
Shell > parted /dev/sdc
GNU Parted 3.2
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary 0% 100%
(parted) print
Model: ATA VMware Virtual S (scsi)
Disk /dev/sdc: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53.7GB 53.7GB primary
(parted) quit
Information: You may need to update /etc/fstab.
# Partition
Shell > parted /dev/sdd
GNU Parted 3.2
Using /dev/sdd
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary 0% 100%
(parted) print
Model: ATA VMware Virtual S (scsi)
Disk /dev/sdd: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53.7GB 53.7GB primary
(parted) quit
Information: You may need to update /etc/fstab.
# Format with the ext4 file system
# (Strictly speaking this step is optional: the filesystem you will actually use is created
#  on /dev/md1 later, which is why mdadm warns below that the members contain an ext2fs signature.)
Shell > mkfs -t ext4 /dev/sdb1
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 13106688 4k blocks and 3276800 inodes
Filesystem UUID: 4d440538-6cda-4558-923c-62df6cd38e2f
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424
Allocating group tables: done
Writing inode tables: done
Creating journal (65536 blocks): done
Writing superblocks and filesystem accounting information: done
# Format with the ext4 file system
Shell > mkfs -t ext4 /dev/sdc1
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 13106688 4k blocks and 3276800 inodes
Filesystem UUID: 84dac29c-0ba0-45ed-a428-394d7607f5be
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424
Allocating group tables: done
Writing inode tables: done
Creating journal (65536 blocks): done
Writing superblocks and filesystem accounting information: done
# Format with the ext4 file system
Shell > mkfs -t ext4 /dev/sdd1
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 13106688 4k blocks and 3276800 inodes
Filesystem UUID: 600e5540-0c62-4ac8-a748-ee53195c8c97
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424
Allocating group tables: done
Writing inode tables: done
Creating journal (65536 blocks): done
Writing superblocks and filesystem accounting information: done
Preparation complete:
Shell > lsblk -o NAME,TYPE,SIZE,UUID
NAME TYPE SIZE UUID
sda disk 50G
├─sda1 part 1G 8a77104f-8e6c-459c-93bc-0b00a52fb34b
├─sda2 part 47G ae2f3495-1d6a-4da3-afd2-05c549e55322
└─sda3 part 2G 1646e0aa-af19-4282-bbab-219c31a2ab6d
sdb disk 50G
└─sdb1 part 50G 4d440538-6cda-4558-923c-62df6cd38e2f
sdc disk 50G
└─sdc1 part 50G 84dac29c-0ba0-45ed-a428-394d7607f5be
sdd disk 50G
└─sdd1 part 50G 600e5540-0c62-4ac8-a748-ee53195c8c97
sr0 rom 13.2G 2024-05-27-14-12-59-00
Q: RAID 1 only needs 2n (n ≥ 1) disks, so why add a third one?
The extra disk is a hot spare. Its role, demonstrated below when we simulate a disk failure, is to automatically take over from the failed disk.
Building the RAID 1 Array
# -C create an array; -v verbose output; -l RAID level; -n number of active devices (disks or partitions); -x number of spares
Shell > mdadm -C -v /dev/md1 -l raid1 -n 2 -x 1 /dev/sd[b-d]1
mdadm: /dev/sdb1 appears to contain an ext2fs file system
size=52426752K mtime=Thu Jan 1 08:00:00 1970
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: /dev/sdc1 appears to contain an ext2fs file system
size=52426752K mtime=Thu Jan 1 08:00:00 1970
mdadm: /dev/sdd1 appears to contain an ext2fs file system
size=52426752K mtime=Thu Jan 1 08:00:00 1970
mdadm: size set to 52392960K
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
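As an aside, the spare does not have to be supplied at creation time. A minimal sketch of the equivalent two-step form, using the same devices as above (the -a form is also what the hot-spare section later in this article uses):
# Create the mirror with only the two active members...
Shell > mdadm -C -v /dev/md1 -l raid1 -n 2 /dev/sd[bc]1
# ...then attach the spare afterwards
Shell > mdadm /dev/md1 -a /dev/sdd1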
Query the RAID information:
Shell > mdadm -Ds
ARRAY /dev/md1 metadata=1.2 spares=1 UUID=6191293f:176cb1a2:5d7f53e3:c592d279
# As you can see, sdb1 and sdc1 form the mirror and are working normally (active sync), while sdd1 is the spare
Shell > mdadm -D /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Fri May 30 11:48:04 2025
Raid Level : raid1
Array Size : 52392960 (49.97 GiB 53.65 GB)
Used Dev Size : 52392960 (49.97 GiB 53.65 GB)
Raid Devices : 2
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Fri May 30 11:52:25 2025
State : clean
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1
Consistency Policy : resync
Name : HOME01:1 (local to host HOME01)
UUID : 6191293f:176cb1a2:5d7f53e3:c592d279
Events : 17
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
2 8 49 - spare /dev/sdd1
# "[2](S)":表示这是当前阵列的第三个设备且是备用盘,如果有损坏的盘,则是 "(F)"
# "[0]" 和 "[1]": 0 表示这是当前 raid1 当中的第一个设备,1 表示这是当前 raid1 的第二个设备,以此类推
# "[2/2]":表示当前阵列使用到 2 块磁盘且都正常运行。
Shell > cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdd1[2](S) sdc1[1] sdb1[0]
52392960 blocks super 1.2 [2/2] [UU]
unused devices: <none>
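You can also inspect the md superblock that mdadm wrote onto an individual member, independently of the assembled array; a quick sketch:
# -E (--examine) prints the md superblock of a member device
Shell > mdadm -E /dev/sdb1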
Generating the Configuration File
Shell > mdadm -Ds > /etc/mdadm.conf
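Note that mdadm -Ds emits only ARRAY lines. If you want to be explicit about which devices mdadm should scan, you can prepend a DEVICE line by hand; a minimal sketch of what the resulting file might look like (the UUID is taken from the output above):
# /etc/mdadm.conf
DEVICE /dev/sd[b-d]1
ARRAY /dev/md1 metadata=1.2 spares=1 UUID=6191293f:176cb1a2:5d7f53e3:c592d279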
Using the RAID 1 Array
Shell > parted /dev/md1
GNU Parted 3.2
Using /dev/md1
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary 0% 100%
(parted) print
Model: Linux Software RAID Array (md)
Disk /dev/md1: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53.6GB 53.6GB primary
(parted) quit
Information: You may need to update /etc/fstab.
Shell > mkfs -t ext4 /dev/md1p1
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 13097728 4k blocks and 3276800 inodes
Filesystem UUID: 50ab27f1-a9e8-4dee-ab87-bf49a71ead60
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424
Allocating group tables: done
Writing inode tables: done
Creating journal (65536 blocks): done
Writing superblocks and filesystem accounting information: done
# Temporary mount
## As shown below, the usable capacity of the RAID 1 is 50% of the raw disk space
Shell > mkdir /raid1/
Shell > mount -t ext4 /dev/md1p1 /raid1/
Shell > df -hT
Filesystem Type Size Used Avail Use% Mounted on
...
/dev/md1p1 ext4 49G 24K 47G 1% /raid1
To make the mount persistent across reboots, add an entry to /etc/fstab:
/dev/md1p1 /raid1 ext4 defaults 0 2
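Device names for md arrays are generally stable, but if you prefer to mount by filesystem UUID instead, here is a sketch using the UUID that mkfs printed above:
# /etc/fstab — the same mount, expressed via the filesystem UUID from the mkfs output
UUID=50ab27f1-a9e8-4dee-ab87-bf49a71ead60 /raid1 ext4 defaults 0 2
Either form can be sanity-checked with mount -a before you reboot.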
Simulating a RAID 1 Failure
For a clearer demonstration, open two SSH sessions so you can watch the array rebuild itself after the failure:
# First window
Shell > watch -n 1 cat /proc/mdstat
# Second window
## The -f option marks the named device in the RAID 1 as faulty; here we mark sdb1
Shell > mdadm -f /dev/md1 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md1
The first window will show something like this:
Every 1.0s: cat /proc/mdstat HOME01: Fri May 30 12:13:15 2025
Personalities : [raid1]
md1 : active raid1 sdd1[2] sdc1[1] sdb1[0](F)
52392960 blocks super 1.2 [2/1] [_U]
[=>...................] recovery = 7.6% (4007744/52392960) finish=4.0min speed=200387K/sec
unused devices: <none>
As you can see, sdb1 is now flagged "(F)", marking it as a failed disk, and the progress bar shows the data being rebuilt onto the hot spare sdd1.
Wait for it to reach 100%:
Personalities : [raid1]
md1 : active raid1 sdd1[2] sdc1[1] sdb1[0](F)
52392960 blocks super 1.2 [2/2] [UU]
unused devices: <none>
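In production you would not normally sit watching /proc/mdstat; mdadm has a built-in monitor mode that can alert you instead. A minimal sketch (the mail address is a placeholder, adjust it to your environment):
# Run mdadm in monitor mode as a daemon and mail an alert when a device fails
Shell > mdadm --monitor --scan --daemonise --mail=root@localhost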
Removing the Failed Disk
On a physical server, the most obvious sign of a failed disk is that its indicator light no longer lights up. Since servers generally support hot swapping, you can perform the following without powering off:
Shell > mdadm -r /dev/md1 /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md1
Shell > cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdd1[2] sdc1[1]
52392960 blocks super 1.2 [2/2] [UU]
unused devices: <none>
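Incidentally, the fail and remove steps can be combined into a single mdadm invocation; a sketch of the shorthand:
# Mark the device faulty and hot-remove it in one command
Shell > mdadm /dev/md1 --fail /dev/sdb1 --remove /dev/sdb1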
Regenerate the configuration file:
Shell > mdadm -Ds > /etc/mdadm.conf
Adding a New Hot Spare
Because servers generally support hot swapping, the administrator can now physically pull the failed disk from the server and slot in a new hot spare:
# Suppose you removed the failed 50 GB disk on the VMware Workstation side and attached a new disk in its place
# The new disk is recognized as sde
Shell > lsblk -o NAME,TYPE,SIZE,UUID
NAME TYPE SIZE UUID
sda disk 50G
├─sda1 part 1G 8a77104f-8e6c-459c-93bc-0b00a52fb34b
├─sda2 part 47G ae2f3495-1d6a-4da3-afd2-05c549e55322
└─sda3 part 2G 1646e0aa-af19-4282-bbab-219c31a2ab6d
sdc disk 50G
└─sdc1 part 50G 6191293f-176c-b1a2-5d7f-53e3c592d279
└─md1 raid1 50G
└─md1p1 md 50G 50ab27f1-a9e8-4dee-ab87-bf49a71ead60
sdd disk 50G
└─sdd1 part 50G 6191293f-176c-b1a2-5d7f-53e3c592d279
└─md1 raid1 50G
└─md1p1 md 50G 50ab27f1-a9e8-4dee-ab87-bf49a71ead60
sde disk 50G
sr0 rom 13.2G 2024-05-27-14-12-59-00
# Partition
Shell > parted /dev/sde
GNU Parted 3.2
Using /dev/sde
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary 0% 100%
(parted) print
Model: ATA VMware Virtual S (scsi)
Disk /dev/sde: 53.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 53.7GB 53.7GB primary
(parted) quit
Information: You may need to update /etc/fstab.
# Format
Shell > mkfs -t ext4 /dev/sde1
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 13106688 4k blocks and 3276800 inodes
Filesystem UUID: 229ba9b0-36ec-4361-a8ea-6575c52a5b0b
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424
Allocating group tables: done
Writing inode tables: done
Creating journal (65536 blocks): done
Writing superblocks and filesystem accounting information: done
# Add the new hot spare
Shell > mdadm /dev/md1 -a /dev/sde1
mdadm: added /dev/sde1
Shell > mdadm -D /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Fri May 30 11:48:04 2025
Raid Level : raid1
Array Size : 52392960 (49.97 GiB 53.65 GB)
Used Dev Size : 52392960 (49.97 GiB 53.65 GB)
Raid Devices : 2
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Fri May 30 12:34:02 2025
State : clean
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1
Consistency Policy : resync
Name : HOME01:1 (local to host HOME01)
UUID : 6191293f:176cb1a2:5d7f53e3:c592d279
Events : 38
Number Major Minor RaidDevice State
2 8 49 0 active sync /dev/sdd1
1 8 33 1 active sync /dev/sdc1
3 8 65 - spare /dev/sde1
Shell > cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sde1[3](S) sdd1[2] sdc1[1]
52392960 blocks super 1.2 [2/2] [UU]
unused devices: <none>
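If you ever wanted the third disk to be an active mirror rather than a standing spare, mdadm can grow the array in place. This is not part of this chapter's workflow, just a sketch of the option:
# Convert the 2-way mirror plus spare into a 3-way mirror; the spare is absorbed and resynced
Shell > mdadm --grow /dev/md1 --raid-devices=3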
Finally, regenerate the configuration file:
Shell > mdadm -Ds > /etc/mdadm.conf
Restoring the Disk Environment
To prepare for the upcoming RAID 5 experiment, the disk environment needs to be restored. If you added the /etc/fstab entry earlier, delete it too, or the next boot will try to mount an array that no longer exists:
Shell > umount /raid1
Shell > echo "" > /etc/mdadm.conf
# Stop the array; reactivating it later would require reading /etc/mdadm.conf
Shell > mdadm -S /dev/md1
mdadm: stopped /dev/md1
Shell > shutdown -h now
# In VMware Workstation, remove the sdc, sdd, and sde disks
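One more note for real hardware: if the member disks were going to be reused in the same machine instead of being deleted in a VM, you would also want to wipe the old md superblocks, so the stale array is not auto-assembled on the next boot. A sketch, assuming the partitions still exist:
# Clear the md superblock from each former member
Shell > mdadm --zero-superblock /dev/sd[c-e]1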
License: CC BY-NC-ND 3.0 International (free to redistribute with attribution, non-commercial, no derivatives)
