Preface
In this chapter, you will learn how to operate RAID 5.
A quick review of the earlier material:
- Number of disks required: at least three (n ≥ 3)
- Usable capacity: (n-1)/n of the total disk capacity (for example, three 20 GB disks yield 40 GB of usable space)
- Read/write performance: medium
- Redundancy: yes
- Cost: medium
RAID 5 distributes data together with parity information across the member disks; the benefit is that if any one disk fails, the lost data can be reconstructed algorithmically.
The algorithm used is the simple XOR logical operation (identical bits give 0, differing bits give 1).
Q: What is the simple XOR logical operation?
A: It is a calculation in which two identical values produce 0 and two different values produce 1, as shown in the following table:
A | B | Result |
---|---|---|
0 | 0 | 0 |
0 | 1 | 1 |
1 | 0 | 1 |
1 | 1 | 0 |
For example, suppose you have built a RAID 5 array from four disks and the data at some position on one disk is lost; XOR-ing the values on the surviving disks recovers it:
Disk A | Disk B | Disk C | Failed disk D |
---|---|---|---|
0 | 0 | 1 | 1 |
1 | 1 | 0 | 0 |
1 | 1 | 1 | 1 |
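To make this concrete, here is a minimal, purely illustrative shell sketch (not part of the lab) that recovers the first row of the table: XOR-ing the three surviving values reproduces the value lost on disk D.
Shell > a=0; b=0; c=1
Shell > echo $(( a ^ b ^ c ))
1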
Lab requirements:
- Prepare four disks; build the RAID 5 array from three of them and keep the fourth as a hot spare
- Simulate a disk failure in the RAID 5 array and have the hot spare take over the failed disk automatically
- Stop the array, then reactivate it
- Grow the number of devices in the RAID 5 array from the current three disks to four
Preparation
With the machine running, add four 20 GB mechanical disks:
Shell > lsblk -o NAME,TYPE,SIZE,UUID
NAME TYPE SIZE UUID
sda disk 50G
├─sda1 part 1G 8a77104f-8e6c-459c-93bc-0b00a52fb34b
├─sda2 part 47G ae2f3495-1d6a-4da3-afd2-05c549e55322
└─sda3 part 2G 1646e0aa-af19-4282-bbab-219c31a2ab6d
sdb disk 20G
sdc disk 20G
sdd disk 20G
sde disk 20G
sr0 rom 13.2G 2024-05-27-14-12-59-00
Partition the disks:
# Partition
Shell > parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary 0% 100%
(parted) print
Model: ATA VMware Virtual S (scsi)
Disk /dev/sdb: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 21.5GB 21.5GB primary
(parted) quit
Information: You may need to update /etc/fstab.
# Partition
Shell > parted /dev/sdc
GNU Parted 3.2
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary 0% 100%
(parted) print
Model: ATA VMware Virtual S (scsi)
Disk /dev/sdc: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 21.5GB 21.5GB primary
(parted) quit
Information: You may need to update /etc/fstab.
# Partition
Shell > parted /dev/sdd
GNU Parted 3.2
Using /dev/sdd
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary 0% 100%
(parted) print
Model: ATA VMware Virtual S (scsi)
Disk /dev/sdd: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 21.5GB 21.5GB primary
(parted) quit
Information: You may need to update /etc/fstab.
# Partition
Shell > parted /dev/sde
GNU Parted 3.2
Using /dev/sde
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary 0% 100%
(parted) print
Model: ATA VMware Virtual S (scsi)
Disk /dev/sde: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 21.5GB 21.5GB primary
(parted) quit
Information: You may need to update /etc/fstab.
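The four sessions above differ only in the device name. As a convenience, the same labeling and partitioning could be scripted with parted's -s (script) mode — a minimal sketch, assuming the same device letters as in this lab:
Shell > for d in b c d e; do parted -s /dev/sd${d} mklabel gpt mkpart primary 0% 100%; done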
Next, format the partitions on these disks:
Shell > mkfs -t ext4 /dev/sdb1
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 5242368 4k blocks and 1310720 inodes
Filesystem UUID: da16c187-9339-41c9-9156-21e6db3ff5cb
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Shell > mkfs -t ext4 /dev/sdc1
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 5242368 4k blocks and 1310720 inodes
Filesystem UUID: 0fc2708e-d464-42fb-910d-8c97b0fc3bdd
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Shell > mkfs -t ext4 /dev/sdd1
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 5242368 4k blocks and 1310720 inodes
Filesystem UUID: f37ede62-5509-4da0-8b1d-692a8c01bd07
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Shell > mkfs -t ext4 /dev/sde1
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 5242368 4k blocks and 1310720 inodes
Filesystem UUID: e1558db8-f1de-417a-b4ab-de436a0ef3e8
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
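Likewise, the four mkfs runs above could be scripted, under the same device-naming assumptions:
Shell > for d in b c d e; do mkfs -t ext4 /dev/sd${d}1; done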
Shell > lsblk -o NAME,TYPE,SIZE,UUID
NAME TYPE SIZE UUID
sda disk 50G
├─sda1 part 1G 8a77104f-8e6c-459c-93bc-0b00a52fb34b
├─sda2 part 47G ae2f3495-1d6a-4da3-afd2-05c549e55322
└─sda3 part 2G 1646e0aa-af19-4282-bbab-219c31a2ab6d
sdb disk 20G
└─sdb1 part 20G da16c187-9339-41c9-9156-21e6db3ff5cb
sdc disk 20G
└─sdc1 part 20G 0fc2708e-d464-42fb-910d-8c97b0fc3bdd
sdd disk 20G
└─sdd1 part 20G f37ede62-5509-4da0-8b1d-692a8c01bd07
sde disk 20G
└─sde1 part 20G e1558db8-f1de-417a-b4ab-de436a0ef3e8
sr0 rom 13.2G 2024-05-27-14-12-59-00
Building the RAID 5 array
# -C create an array; -v verbose output; -l RAID level; -n number of active devices (partitions); -x number of spare devices; -c chunk size, here 32 KB
Shell > mdadm -C -v /dev/md5 -l raid5 -n 3 -x 1 -c 32 /dev/sd{b,c,d,e}1
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: /dev/sdb1 appears to contain an ext2fs file system
size=20969472K mtime=Thu Jan 1 08:00:00 1970
mdadm: /dev/sdc1 appears to contain an ext2fs file system
size=20969472K mtime=Thu Jan 1 08:00:00 1970
mdadm: /dev/sdd1 appears to contain an ext2fs file system
size=20969472K mtime=Thu Jan 1 08:00:00 1970
mdadm: /dev/sde1 appears to contain an ext2fs file system
size=20969472K mtime=Thu Jan 1 08:00:00 1970
mdadm: size set to 20952064K
Continue creating array? yes
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
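Because each partition already carries an ext4 signature, mdadm pauses to ask for confirmation. If you wanted to script the creation, the -R (--run) option should skip that prompt — a sketch of the same command:
Shell > mdadm -C -v -R /dev/md5 -l raid5 -n 3 -x 1 -c 32 /dev/sd{b,c,d,e}1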
Query the RAID information:
Shell > mdadm -Ds
ARRAY /dev/md5 metadata=1.2 spares=2 UUID=055a5edd:17fb8759:00ab6da3:9c386ee4
# The chunk size is 32 KB; sdb1, sdc1, and sdd1 are the active RAID 5 devices, and sde1 is the hot spare
Shell > mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Fri May 30 21:09:25 2025
Raid Level : raid5
Array Size : 41904128 (39.96 GiB 42.91 GB)
Used Dev Size : 20952064 (19.98 GiB 21.45 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Fri May 30 21:11:10 2025
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 32K
Consistency Policy : resync
Name : HOME01:5 (local to host HOME01)
UUID : 055a5edd:17fb8759:00ab6da3:9c386ee4
Events : 18
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
4 8 49 2 active sync /dev/sdd1
3 8 65 - spare /dev/sde1
Shell > cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdd1[4] sde1[3](S) sdc1[1] sdb1[0]
41904128 blocks super 1.2 level 5, 32k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
Generate the configuration file
Shell > mdadm -Ds > /etc/mdadm.conf
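For reference, the file now contains just the single scan line that mdadm -Ds printed above:
Shell > cat /etc/mdadm.conf
ARRAY /dev/md5 metadata=1.2 spares=2 UUID=055a5edd:17fb8759:00ab6da3:9c386ee4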
Using the RAID 5 array
Shell > parted /dev/md5
GNU Parted 3.2
Using /dev/md5
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary 0% 100%
(parted) print
Model: Linux Software RAID Array (md)
Disk /dev/md5: 42.9GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 42.9GB 42.9GB primary
(parted) quit
Information: You may need to update /etc/fstab.
Shell > mkfs -t ext4 /dev/md5p1
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 10475520 4k blocks and 2621440 inodes
Filesystem UUID: 1f394927-b18c-4117-973d-eb9cc90d2da8
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624
Allocating group tables: done
Writing inode tables: done
Creating journal (65536 blocks): done
Writing superblocks and filesystem accounting information: done
Shell > mkdir /raid5
# Temporary mount
Shell > mount -t ext4 /dev/md5p1 /raid5/
# With three 20 GB disks in RAID 5, the usable capacity is 2/3 of the total, i.e., 40 GB
Shell > df -hT
Filesystem Type Size Used Avail Use% Mounted on
...
/dev/md5p1 ext4 40G 24K 38G 1% /raid5
If you want the mount to persist across reboots, add an entry to the /etc/fstab file:
/dev/md5p1 /raid5 ext4 defaults 0 0
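Alternatively, you can reference the filesystem by the UUID that mkfs reported earlier, which is robust against device renaming — an equivalent entry:
UUID=1f394927-b18c-4117-973d-eb9cc90d2da8 /raid5 ext4 defaults 0 0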
Q: When do you need to regenerate the configuration file?
A: Whenever the state changes (for example, the number of disks, a disk's status, or the array's status), regenerate the configuration file.
Simulating a RAID 5 failure
Open two SSH windows, as before:
# First window
Shell > watch -n 1 cat /proc/mdstat
# Second window
## The -f option marks the specified device in the RAID 5 array as faulty; here that device is sdb1
Shell > mdadm -f /dev/md5 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md5
The first window will show something like this:
Every 1.0s: cat /proc/mdstat HOME01: Fri May 30 21:14:27 2025
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdd1[4] sde1[3] sdc1[1] sdb1[0](F)
41904128 blocks super 1.2 level 5, 32k chunk, algorithm 2 [3/2] [_UU]
[>....................] recovery = 4.8% (1009152/20952064) finish=1.6min speed=201830K/sec
unused devices: <none>
As you can see, sdb1 is now flagged "(F)", marking it as a failed disk, and the progress bar below it shows the data being rebuilt onto the hot spare.
Wait until the recovery reaches 100%:
Every 1.0s: cat /proc/mdstat HOME01: Fri May 30 21:16:40 2025
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdd1[4] sde1[3] sdc1[1] sdb1[0](F)
41904128 blocks super 1.2 level 5, 32k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
Removing the failed disk
On a physical server, the most obvious sign of a failed disk is that its indicator light no longer comes on. Because servers generally support hot-swapping, you can perform the following without shutting down:
Shell > mdadm -r /dev/md5 /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md5
Regenerate the configuration file:
Shell > mdadm -Ds > /etc/mdadm.conf
Adding two new hot spares
Because servers generally support hot-swapping, the administrator can now physically remove the failed disk from the server and insert two new disks to serve as hot spares:
# Suppose you added two new disks in VMware Workstation and removed the failed 20 GB disk
# The two new disks are recognized as sdf and sdg
Shell > lsblk -o NAME,TYPE,SIZE,UUID
NAME TYPE SIZE UUID
sda disk 50G
├─sda1 part 1G 8a77104f-8e6c-459c-93bc-0b00a52fb34b
├─sda2 part 47G ae2f3495-1d6a-4da3-afd2-05c549e55322
└─sda3 part 2G 1646e0aa-af19-4282-bbab-219c31a2ab6d
sdc disk 20G
└─sdc1 part 20G 055a5edd-17fb-8759-00ab-6da39c386ee4
└─md5 raid5 40G
└─md5p1 md 40G 1f394927-b18c-4117-973d-eb9cc90d2da8
sdd disk 20G
└─sdd1 part 20G 055a5edd-17fb-8759-00ab-6da39c386ee4
└─md5 raid5 40G
└─md5p1 md 40G 1f394927-b18c-4117-973d-eb9cc90d2da8
sde disk 20G
└─sde1 part 20G 055a5edd-17fb-8759-00ab-6da39c386ee4
└─md5 raid5 40G
└─md5p1 md 40G 1f394927-b18c-4117-973d-eb9cc90d2da8
sdf disk 20G
sdg disk 20G
sr0 rom 13.2G 2024-05-27-14-12-59-00
Partition and format them just as before:
Shell > parted /dev/sdf
GNU Parted 3.2
Using /dev/sdf
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary 0% 100%
(parted) print
Model: ATA VMware Virtual S (scsi)
Disk /dev/sdf: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 21.5GB 21.5GB primary
(parted) quit
Information: You may need to update /etc/fstab.
Shell > parted /dev/sdg
GNU Parted 3.2
Using /dev/sdg
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary 0% 100%
(parted) print
Model: ATA VMware Virtual S (scsi)
Disk /dev/sdg: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 21.5GB 21.5GB primary
(parted) quit
Information: You may need to update /etc/fstab.
Shell > mkfs -t ext4 /dev/sdf1
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 5242368 4k blocks and 1310720 inodes
Filesystem UUID: 3800c831-7fd9-45b4-9662-4f9ed13ba51e
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Shell > mkfs -t ext4 /dev/sdg1
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 5242368 4k blocks and 1310720 inodes
Filesystem UUID: 47c84399-cb80-4e73-bf05-66beac9a744f
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Add the two disks as spare devices:
Shell > mdadm /dev/md5 -a /dev/sd{f,g}1
mdadm: added /dev/sdf1
mdadm: added /dev/sdg1
# Generate the configuration file
Shell > mdadm -Ds > /etc/mdadm.conf
Shell > mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Fri May 30 21:09:25 2025
Raid Level : raid5
Array Size : 41904128 (39.96 GiB 42.91 GB)
Used Dev Size : 20952064 (19.98 GiB 21.45 GB)
Raid Devices : 3
Total Devices : 5
Persistence : Superblock is persistent
Update Time : Fri May 30 21:24:49 2025
State : clean
Active Devices : 3
Working Devices : 5
Failed Devices : 0
Spare Devices : 2
Layout : left-symmetric
Chunk Size : 32K
Consistency Policy : resync
Name : HOME01:5 (local to host HOME01)
UUID : 055a5edd:17fb8759:00ab6da3:9c386ee4
Events : 46
Number Major Minor RaidDevice State
3 8 65 0 active sync /dev/sde1
1 8 33 1 active sync /dev/sdc1
4 8 49 2 active sync /dev/sdd1
5 8 81 - spare /dev/sdf1
6 8 97 - spare /dev/sdg1
Stopping and activating the array
# Stop the array
## Before stopping, generate the configuration file and unmount (umount) the filesystem
Shell > umount /raid5
Shell > mdadm -Ds > /etc/mdadm.conf
Shell > mdadm -S /dev/md5
mdadm: stopped /dev/md5
# Activate the array; assembly reads the configuration file
Shell > mdadm -As
mdadm: /dev/md5 has been started with 3 drives and 2 spares.
## Remount the filesystem before use
Shell > mount -t ext4 /dev/md5p1 /raid5
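Assembly with -As depends on /etc/mdadm.conf. If the file were missing, you could still assemble the array by listing its member partitions explicitly — a sketch assuming the members at this stage of the example:
Shell > mdadm -A /dev/md5 /dev/sd{c,d,e,f,g}1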
Growing the RAID 5 array with an additional disk
Q: The example above built RAID 5 from three disks. Can one more disk be added, making a four-disk RAID 5?
A: Yes. First add the disk with the mdadm command's -a option so that it becomes a hot spare, then use the -G option to promote it to an active device of the RAID 5 array, as shown below:
# Add a new 20 GB disk in VMware Workstation
# The new disk is recognized by udev as sdb
Shell > lsblk -o NAME,TYPE,SIZE,UUID
NAME TYPE SIZE UUID
sda disk 50G
├─sda1 part 1G 8a77104f-8e6c-459c-93bc-0b00a52fb34b
├─sda2 part 47G ae2f3495-1d6a-4da3-afd2-05c549e55322
└─sda3 part 2G 1646e0aa-af19-4282-bbab-219c31a2ab6d
sdb disk 20G
sdc disk 20G
└─sdc1 part 20G 055a5edd-17fb-8759-00ab-6da39c386ee4
└─md5 raid5 40G
└─md5p1 md 40G 1f394927-b18c-4117-973d-eb9cc90d2da8
sdd disk 20G
└─sdd1 part 20G 055a5edd-17fb-8759-00ab-6da39c386ee4
└─md5 raid5 40G
└─md5p1 md 40G 1f394927-b18c-4117-973d-eb9cc90d2da8
sde disk 20G
└─sde1 part 20G 055a5edd-17fb-8759-00ab-6da39c386ee4
└─md5 raid5 40G
└─md5p1 md 40G 1f394927-b18c-4117-973d-eb9cc90d2da8
sdf disk 20G
└─sdf1 part 20G 055a5edd-17fb-8759-00ab-6da39c386ee4
└─md5 raid5 40G
└─md5p1 md 40G 1f394927-b18c-4117-973d-eb9cc90d2da8
sdg disk 20G
└─sdg1 part 20G 055a5edd-17fb-8759-00ab-6da39c386ee4
└─md5 raid5 40G
└─md5p1 md 40G 1f394927-b18c-4117-973d-eb9cc90d2da8
sr0 rom 13.2G 2024-05-27-14-12-59-00
# Partition and format
Shell > parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary 0% 100%
(parted) print
Model: ATA VMware Virtual S (scsi)
Disk /dev/sdb: 21.5GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 21.5GB 21.5GB primary
(parted) quit
Information: You may need to update /etc/fstab.
Shell > mkfs -t ext4 /dev/sdb1
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 5242368 4k blocks and 1310720 inodes
Filesystem UUID: f57a2e54-ba9f-4009-b3cd-eb60eaaafaed
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
# At this point the disk is a spare
Shell > mdadm /dev/md5 -a /dev/sdb1
mdadm: added /dev/sdb1
Shell > mdadm -D /dev/md5
/dev/md5:
...
Number Major Minor RaidDevice State
3 8 65 0 active sync /dev/sde1
1 8 33 1 active sync /dev/sdc1
4 8 49 2 active sync /dev/sdd1
5 8 81 - spare /dev/sdf1
6 8 97 - spare /dev/sdg1
7 8 17 - spare /dev/sdb1
# Grow the array by one disk, to a four-disk RAID 5
## As soon as you run this command, the parity blocks are regenerated across the four disks
Shell > mdadm -G /dev/md5 -n 4
## Inspect the reshape progress (the output below comes from -D)
Shell > mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Fri May 30 21:09:25 2025
Raid Level : raid5
Array Size : 41904128 (39.96 GiB 42.91 GB)
Used Dev Size : 20952064 (19.98 GiB 21.45 GB)
Raid Devices : 4
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Fri May 30 21:44:42 2025
State : clean, reshaping
Active Devices : 4
Working Devices : 6
Failed Devices : 0
Spare Devices : 2
Layout : left-symmetric
Chunk Size : 32K
Consistency Policy : resync
Reshape Status : 5% complete
Delta Devices : 1, (3->4)
Name : HOME01:5 (local to host HOME01)
UUID : 055a5edd:17fb8759:00ab6da3:9c386ee4
Events : 74
Number Major Minor RaidDevice State
3 8 65 0 active sync /dev/sde1
1 8 33 1 active sync /dev/sdc1
4 8 49 2 active sync /dev/sdd1
7 8 17 3 active sync /dev/sdb1
5 8 81 - spare /dev/sdf1
6 8 97 - spare /dev/sdg1
# Regenerate the configuration file
Shell > mdadm -Ds > /etc/mdadm.conf
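Note that -G only enlarges the md device itself (from two to three data disks' worth of capacity once the reshape completes); the GPT partition and the ext4 filesystem on it do not grow automatically. A sketch of the follow-up steps, assuming the same device names (parted may ask for confirmation while the partition is mounted):
## Run these only after the reshape reaches 100% (check cat /proc/mdstat)
Shell > parted /dev/md5 resizepart 1 100%
Shell > resize2fs /dev/md5p1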
Restoring the disk environment
Shell > umount /raid5
Shell > echo "" > /etc/mdadm.conf
Shell > mdadm -S /dev/md5
# The output should be empty
Shell > mdadm -Ds
Shell > shutdown -h now
# Remove all six disks in VMware Workstation
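If you intended to reuse these disks rather than remove them, you would also clear the md metadata before shutting down, so that nothing tries to reassemble the array on the next boot — a sketch:
Shell > mdadm --zero-superblock /dev/sd{b,c,d,e,f,g}1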
Copyright notice: "Attribution-NonCommercial-NoDerivatives 3.0 International" (CC BY-NC-ND 3.0)
