Advanced Disk Management 05: RAID 10 Lab

Preface

In this chapter, you will learn how to work with RAID 10.

A quick review of what was covered earlier:

  • Number of disks required: 4n (n ≥ 1)
  • Usable capacity: 50%
  • Read/write performance: medium
  • Redundancy: yes
  • Cost: very high

Lab requirements:

  1. Use 4 disks to build a RAID 10 array: two disks form the first RAID 1 group, and the other two form the second RAID 1 group
  2. Simulate a disk failure in each of the two groups
  3. Use 2 hot-spare disks to replace the failed disks automatically
  4. Remove the failed disks

Preparation

Prepare 6 disks of 10 GB each, then partition and format them (this has been done many times in the earlier articles, so the details are skipped here; see the short sketch after the listing below). The end result looks like this:

Shell > lsblk -o NAME,TYPE,SIZE,UUID
NAME   TYPE  SIZE UUID
sda    disk   50G
├─sda1 part    1G 8a77104f-8e6c-459c-93bc-0b00a52fb34b
├─sda2 part   47G ae2f3495-1d6a-4da3-afd2-05c549e55322
└─sda3 part    2G 1646e0aa-af19-4282-bbab-219c31a2ab6d
sdb    disk   10G
└─sdb1 part   10G 71196a43-fedf-418a-8da2-1592097ae246
sdc    disk   10G
└─sdc1 part   10G 2f14b0d0-8e4a-4983-9afe-273116b5a968
sdd    disk   10G
└─sdd1 part   10G 65be1f7b-5604-44e3-80c7-f7cae72ac6df
sde    disk   10G
└─sde1 part   10G e5274631-4e21-40d7-95a1-4d90f0945683
sdf    disk   10G
└─sdf1 part   10G 48b22ea6-e928-49ed-b4fc-f09ea1b363dc
sdg    disk   10G
└─sdg1 part   10G 4fed9a5d-a9ee-4b37-90e8-039cb2651c6e
sr0    rom  13.2G 2024-05-27-14-12-59-00
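
If you have not followed the earlier articles in this series, a minimal sketch of how one of these disks could be prepared is shown here, using /dev/sdb as the example (the non-interactive parted invocation is an assumption; the earlier articles used the interactive prompt). Repeat for the remaining disks:

Shell > parted -s /dev/sdb mklabel gpt
Shell > parted -s /dev/sdb mkpart primary 0% 100%
Shell > mkfs -t ext4 /dev/sdb1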

Building the RAID 10 array

In my environment, sdb1 and sdc1 form the first RAID 1 group, and sdd1 and sde1 form the second RAID 1 group:

# First RAID 1 group
Shell > mdadm -C -v /dev/md1 -l raid1 -n 2 /dev/sd{b,c}1
mdadm: /dev/sdb1 appears to contain an ext2fs file system
       size=10483712K  mtime=Thu Jan  1 08:00:00 1970
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: /dev/sdc1 appears to contain an ext2fs file system
       size=10483712K  mtime=Thu Jan  1 08:00:00 1970
mdadm: size set to 10474496K
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.

# Second RAID 1 group
Shell > mdadm -C -v /dev/md11 -l raid1 -n 2 /dev/sd{d,e}1
mdadm: /dev/sdd1 appears to contain an ext2fs file system
       size=10483712K  mtime=Thu Jan  1 08:00:00 1970
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: /dev/sde1 appears to contain an ext2fs file system
       size=10483712K  mtime=Thu Jan  1 08:00:00 1970
mdadm: size set to 10474496K
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md11 started.

Shell > mdadm -Ds
ARRAY /dev/md1 metadata=1.2 UUID=32bb917b:f956cc74:ec58310c:5fce2ec6
ARRAY /dev/md11 metadata=1.2 UUID=5541a7ae:8dff7890:f868f8aa:66060f31
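
As a side note: mdadm can also build a RAID 10 array in a single step instead of nesting two RAID 1 arrays under a RAID 0. This lab deliberately uses the nested approach, but a one-step sketch with the same four member partitions would look like this:

Shell > mdadm -C -v /dev/md10 -l raid10 -n 4 /dev/sd{b,c,d,e}1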

Both of these md devices also need to be partitioned and formatted:

# Partition
Shell > parted /dev/md1
GNU Parted 3.2
Using /dev/md1
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary 0% 100%
(parted) print
Model: Linux Software RAID Array (md)
Disk /dev/md1: 10.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  10.7GB  10.7GB               primary

(parted) quit
Information: You may need to update /etc/fstab.

Shell > parted /dev/md11
GNU Parted 3.2
Using /dev/md11
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary 0% 100%
(parted) print
Model: Linux Software RAID Array (md)
Disk /dev/md11: 10.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  10.7GB  10.7GB               primary

(parted) quit
Information: You may need to update /etc/fstab.

# Format (high-level format)
Shell > mkfs -t ext4 /dev/md1p1
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 2618112 4k blocks and 655360 inodes
Filesystem UUID: 2c97dbcb-20f6-4f63-8acf-20a992af5628
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

Shell > mkfs -t ext4 /dev/md11p1
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 2618112 4k blocks and 655360 inodes
Filesystem UUID: 551db872-34fb-4568-8f08-f77ce1baed3a
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
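
At this point you can confirm that both md devices and their new partitions are visible; a quick check:

Shell > lsblk /dev/md1 /dev/md11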

Use the partitions of these two devices to build a RAID 0 on top of them:

Shell > mdadm -C -v /dev/md10 -l raid0 -n 2 /dev/md1p1 /dev/md11p1
mdadm: chunk size defaults to 512K
mdadm: /dev/md1p1 appears to contain an ext2fs file system
       size=10472448K  mtime=Thu Jan  1 08:00:00 1970
mdadm: /dev/md11p1 appears to contain an ext2fs file system
       size=10472448K  mtime=Thu Jan  1 08:00:00 1970
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md10 started.

Shell > mdadm -Ds
ARRAY /dev/md1 metadata=1.2 UUID=32bb917b:f956cc74:ec58310c:5fce2ec6
ARRAY /dev/md11 metadata=1.2 UUID=5541a7ae:8dff7890:f868f8aa:66060f31
ARRAY /dev/md10 metadata=1.2 UUID=40d2305d:76ebb9f5:985d3949:95e9a1f5
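
Before moving on, you may want to confirm that the two mirrors have finished their initial synchronization; one simple way to follow the progress:

Shell > watch -n 2 cat /proc/mdstat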

Generate the configuration file

Shell > mdadm -Ds > /etc/mdadm.conf
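
To verify, print the file; it should contain the same three ARRAY lines that mdadm -Ds displayed above:

Shell > cat /etc/mdadm.conf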

Using the RAID 10 array

The familiar routine once again: partition, format, and mount:

# Partition
Shell > parted /dev/md10
GNU Parted 3.2
Using /dev/md10
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary 0% 100%
(parted) print
Model: Linux Software RAID Array (md)
Disk /dev/md10: 21.4GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  21.4GB  21.4GB               primary

(parted) quit
Information: You may need to update /etc/fstab.

# Format
Shell > mkfs -t ext4 /dev/md10p1
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 5231104 4k blocks and 1308160 inodes
Filesystem UUID: 16689710-3844-44ee-b510-a0e5457b2454
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

# Temporary mount
Shell > mkdir /raid10
Shell > mount -t ext4 /dev/md10p1 /raid10/
## With four 10 GB disks, the usable capacity after building RAID 10 is 20 GB, i.e. half of the total capacity
Shell > df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
devtmpfs       devtmpfs  1.8G     0  1.8G   0% /dev
tmpfs          tmpfs     1.8G     0  1.8G   0% /dev/shm
tmpfs          tmpfs     1.8G  9.0M  1.8G   1% /run
tmpfs          tmpfs     1.8G     0  1.8G   0% /sys/fs/cgroup
/dev/sda2      ext4       46G  2.4G   42G   6% /
/dev/sda1      xfs      1014M  219M  796M  22% /boot
tmpfs          tmpfs     364M     0  364M   0% /run/user/0
/dev/md10p1    ext4       20G   24K   19G   1% /raid10

If you want the mount to persist across reboots, add the corresponding entry to the /etc/fstab file, for example:

/dev/md10p1   /raid10    ext4   defaults    0   5
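
Since md device numbering is not guaranteed to stay the same across reboots, a more robust variant is to reference the filesystem UUID instead of the device path (this is the UUID reported by mkfs above; you can also look it up with blkid /dev/md10p1):

UUID=16689710-3844-44ee-b510-a0e5457b2454   /raid10    ext4   defaults    0   0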

Adding two hot-spare disks

Note where the hot spares go: the first hot spare is added to the first RAID 1 group (/dev/md1), and the second hot spare is added to the second RAID 1 group (/dev/md11):

Shell > mdadm /dev/md1 -a /dev/sdf1
mdadm: added /dev/sdf1

Shell > mdadm /dev/md11 -a /dev/sdg1
mdadm: added /dev/sdg1

Shell > mdadm -Ds
ARRAY /dev/md1 metadata=1.2 spares=1 UUID=32bb917b:f956cc74:ec58310c:5fce2ec6
ARRAY /dev/md11 metadata=1.2 spares=1 UUID=5541a7ae:8dff7890:f868f8aa:66060f31
ARRAY /dev/md10 metadata=1.2 UUID=40d2305d:76ebb9f5:985d3949:95e9a1f5

Shell > mdadm -D /dev/md1
...
Consistency Policy : resync

              Name : HOME01:1  (local to host HOME01)
              UUID : 32bb917b:f956cc74:ec58310c:5fce2ec6
            Events : 20

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

       2       8       81        -      spare   /dev/sdf1

Shell > mdadm -D /dev/md11
...
Consistency Policy : resync

              Name : HOME01:11  (local to host HOME01)
              UUID : 5541a7ae:8dff7890:f868f8aa:66060f31
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       65        1      active sync   /dev/sde1

       2       8       97        -      spare   /dev/sdg1

Shell > cat /proc/mdstat
Personalities : [raid1] [raid0]
md10 : active raid0 md11p1[1] md1p1[0]
      20926464 blocks super 1.2 512k chunks

md11 : active raid1 sdg1[2](S) sde1[1] sdd1[0]
      10474496 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdf1[2](S) sdc1[1] sdb1[0]
      10474496 blocks super 1.2 [2/2] [UU]

unused devices: <none>
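
For reference, a spare can also be declared when an array is first created, using mdadm's -x (--spare-devices) option; a sketch that is not part of this lab:

Shell > mdadm -C -v /dev/md1 -l raid1 -n 2 -x 1 /dev/sd{b,c,f}1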

Simulating a failure

Suppose sdb1 in the first RAID 1 group and sdd1 in the second RAID 1 group have failed:

Shell > mdadm -f /dev/md1 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md1

Shell > mdadm -f /dev/md11 /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md11

# After the failure, the hot spares automatically take over from the failed disks and start rebuilding
Shell > cat /proc/mdstat
Personalities : [raid1] [raid0]
md10 : active raid0 md11p1[1] md1p1[0]
      20926464 blocks super 1.2 512k chunks

md11 : active raid1 sdg1[2] sde1[1] sdd1[0](F)
      10474496 blocks super 1.2 [2/1] [_U]
      [========>............]  recovery = 43.9% (4601472/10474496) finish=0.4min speed=209157K/sec

md1 : active raid1 sdf1[2] sdc1[1] sdb1[0](F)
      10474496 blocks super 1.2 [2/1] [_U]
      [===========>.........]  recovery = 59.2% (6201472/10474496) finish=0.3min speed=206715K/sec

unused devices: <none>

# Generate the configuration file
Shell > mdadm -Ds > /etc/mdadm.conf
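
Once the recovery finishes, both mirrors should show [2/2] [UU] again in /proc/mdstat. You can also confirm the health of each array with mdadm -D, for example:

Shell > mdadm -D /dev/md1 | grep -E 'State :|Active Devices|Working Devices|Failed Devices|Spare Devices'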

Removing the failed disks

On a physical server, the most obvious sign of a failed disk is that its indicator LED no longer lights up. Since servers usually support hot swapping, the following commands can be run without shutting the machine down:

Shell > mdadm -r /dev/md1 /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md1

Shell > mdadm -r /dev/md11 /dev/sdd1
mdadm: hot removed /dev/sdd1 from /dev/md11
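
If the removed disks will be reused later, it is worth wiping their md metadata first so that they are not accidentally re-assembled; a sketch, assuming the devices are still visible to the operating system:

Shell > mdadm --zero-superblock /dev/sdb1
Shell > mdadm --zero-superblock /dev/sdd1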

Adding two new hot-spare disks

Because servers usually support hot swapping, the administrator can now physically pull the failed disks from the server and insert two new disks to act as hot spares:

# Suppose you have added two new disks in VMware Workstation and removed the two failed 10 GB disks

# On the operating system side, udev recognizes the two new disks as sdh and sdi. After partitioning and formatting (high-level format) them, the result looks like this:
Shell > lsblk -o NAME,TYPE,SIZE,UUID
NAME             TYPE   SIZE UUID
...
sdh              disk    10G
└─sdh1           part    10G 810e4724-cae4-4964-ab56-0d822f3c10f0
sdi              disk    10G
└─sdi1           part    10G 991df1e2-8267-4d1b-81e6-5fc81b3a5b13
sr0              rom   13.2G 2024-05-27-14-12-59-00

Use the two newly installed disks as hot spares. Note that each hot spare must be added to its corresponding RAID 1 group:

Shell > mdadm -a /dev/md1 /dev/sdh1

Shell > mdadm -a /dev/md11 /dev/sdi1

Shell > cat /proc/mdstat
Personalities : [raid1] [raid0]
md10 : active raid0 md11p1[1] md1p1[0]
      20926464 blocks super 1.2 512k chunks

md11 : active raid1 sdi1[3](S) sdg1[2] sde1[1]
      10474496 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdh1[3](S) sdf1[2] sdc1[1]
      10474496 blocks super 1.2 [2/2] [UU]

unused devices: <none>

Do not forget to regenerate the configuration file:

Shell > mdadm -Ds > /etc/mdadm.conf

Restoring the disk environment

Shell > umount /raid10
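
If you added the entry to /etc/fstab earlier, delete that line as well, otherwise the next boot will try to mount a device that no longer exists; a sketch using sed (the pattern assumes the entry shown above):

Shell > sed -i '/md10p1/d' /etc/fstab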

Shell > echo "" > /etc/mdadm.conf

# Stop the arrays
Shell > mdadm -S /dev/md10
Shell > mdadm -S /dev/md1
Shell > mdadm -S /dev/md11

# The output should be empty
Shell > mdadm -Ds

Shell > shutdown -h now

# Remove the lab disks in VMware Workstation

Additional notes

If you build RAID 10 with 4 disks, note that this does not mean that any two disks can fail at the same time; rather, the disks within the same RAID 1 group must not all fail simultaneously, otherwise the data is very difficult to recover. In the failure simulation above, I marked only a single device in each RAID 1 group as faulty (mdadm -f /dev/md1 /dev/sdb1 and mdadm -f /dev/md11 /dev/sdd1).
