RAID : Redundant Array of Independent Disks;
1. RAID Characteristics
Improved I/O capability: achieved by reading and writing the disks in parallel;
Improved durability: achieved through disk redundancy;
2. RAID Implementation Methods
External disk array: adapter capability provided by an expansion card;
Internal RAID: RAID controller integrated on the motherboard;
Software RAID : disk redundancy provided purely in software;
3. RAID Levels
When building a RAID, the member disks should all have the same capacity; if they differ, the smallest disk sets the baseline, and the extra capacity on the larger disks cannot be used by the array.
The usable-space figures below all assume disks of equal capacity;
RAID-0 characteristics: striped volume (stripe);
Improved read/write performance; no fault tolerance; usable space: N (N = number of disks); requires 2 or more disks;
RAID-1 characteristics: mirrored volume (mirror);
Improved read performance, slightly reduced write performance; provides redundancy; typically built from 2 disks, with usable space equal to 1 disk's capacity;
RAID-5 characteristics:
Improved read/write performance; fault tolerant, allowing 1 disk to fail; usable space: N-1; requires 3 or more disks; (parity is written to every disk, i.e. distributed across all members)
RAID-6 characteristics:
Improved read/write performance; fault tolerant, allowing 2 disks to fail at the same time; usable space: N-2; requires 4 or more disks; (the equivalent of two disks' capacity is used for parity)
RAID-10 characteristics:
Improved read/write performance; fault tolerant, as long as no more than one disk per mirrored pair fails at the same time; usable space: N/2; requires 4 or more disks;
JBOD : Just a Bunch Of Disks;
Concatenates the space of multiple disks into one large contiguous volume; the disks may have different capacities; usable space equals the sum of all disk capacities;
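As a quick worked example of the usable-space formulas above (the 1 TB disk size is purely illustrative), with four 1 TB disks:
RAID-0: 4 x 1 TB = 4 TB usable;
RAID-1 (2 disks): 1 TB usable;
RAID-5: (4-1) x 1 TB = 3 TB usable;
RAID-6: (4-2) x 1 TB = 2 TB usable;
RAID-10: 4/2 x 1 TB = 2 TB usable;
JBOD: 4 TB usable (and the disks would not need to be the same size).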
4. Software RAID Implementation on CentOS 6
Software RAID relies on the kernel's md (multiple devices) module, which organizes several disk devices into a single logical device; the abstracted device interface is then exposed to user space through system calls;
4.1 mdadm, a mode-based management tool (command): a user-space program built around the interface exported by the kernel md module; you communicate with md by passing options and arguments to this program;
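Before using it, you can quickly confirm that the md driver and the mdadm tool are available (a simple sanity check; the exact output depends on the system):
~]# cat /proc/mdstat
~]# mdadm -V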
4.2 mdadm command syntax:
mdadm [mode] <raiddevice> [options] <component-devices>
4.3 RAID levels supported by mdadm:
LINEAR (linear, like JBOD), RAID0, RAID1, RAID4, RAID5, RAID6, RAID10;
Modes supported by mdadm:
Create: option -C
Assemble: option -A
Monitor: option -F
Manage: options -f, -r, -a
<raiddevice> : /dev/md#
<component-devices> : any block devices;
-C : options used in create mode;
-n # : build the RAID from # block devices;
-l # : specify the RAID level to create;
-a {yes|no} : whether to automatically create the device file for the target RAID device;
-c CHUNK_SIZE : specify the chunk size; the default is 512K;
-x # : specify the number of spare disks;
chunk : a control parameter built into the RAID controller (or software RAID layer); when data is received, it is split into pieces according to the RAID level, and each piece is called a 'chunk'. The default chunk size is 512K, and a custom size can be set when the RAID device is created; an example combining the options above is shown below.
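For instance, the create-mode options can be combined in a single command; a minimal sketch (the md number, partition names, level and chunk size here are purely illustrative):
~]# mdadm -C /dev/md1 -a yes -l 5 -n 3 -x 1 -c 256 /dev/sd{b,c,d,e}1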
Suppose we create a RAID5 from 4 disk devices in total: 3 disks for the RAID5 itself and 1 as a spare;
Creation procedure:
1> When partitioning the disks used for software RAID, set the partition type (system ID) to 'fd';
2> Force the kernel to re-read the partition table, as sketched below;
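A minimal sketch of this step, assuming the new partitions were created on /dev/sdb (partx is what the walkthrough below uses; partprobe is an alternative):
~]# partx -a /dev/sdb
~]# partprobe /dev/sdb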
3> Check whether any md devices are already in use on the system; if /dev/md0 is occupied, use /dev/md1 when creating the new RAID;
Check the running state of md devices: ~]# cat /proc/mdstat
Check whether any md device files exist: ~]# ls /dev | grep "md"
4> Create the RAID device:
~]# mdadm -C /dev/md0 -a yes -n 3 -x 1 -l 5 /dev/sd{b,c,d,e}
The build progress can be watched in real time;
5> Check the running state of the md device: ~]# cat /proc/mdstat
6> Format the device to create a file system;
~]# mke2fs -t ext4 /dev/md0
7> Mount the file system;
Look up the UUID of /dev/md0 and mount by UUID; reason: after a system reboot, the name and number of the md device may change;
~]# blkid /dev/md0
Write the entry into the configuration file /etc/fstab (see the example form below);
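The entry takes roughly this form (the UUID is a placeholder for the value reported by blkid, and /mydata stands in for whatever mount point you create):
UUID=<uuid-from-blkid>  /mydata  ext4  defaults  0 0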
~]# mount -a
Managing the RAID device:
8> View the detailed information of /dev/md0;
~]# mdadm -D /dev/md0
9> Test: manually mark one of the disks as faulty;
~]# mdadm /dev/md0 -f /dev/sdb
At this point, the following command shows the RAID device resynchronizing data to recover;
~]# cat /proc/mdstat
Alternatively, the following command refreshes the sync progress every second; press 'ctrl + c' to exit;
~]# watch -n1 'cat /proc/mdstat'
Once the data has finished resynchronizing, the following command shows that:
/dev/sdb has dropped out of active service in the array, and /dev/sde has changed from spare to an active RAID member;
~]# mdadm -D /dev/md0
10> Test: remove the disk that was marked as faulty;
~]# mdadm /dev/md0 -r /dev/sdb
Mark another disk as faulty and remove it as well;
~]# mdadm /dev/md0 -f /dev/sdc
~]# mdadm /dev/md0 -r /dev/sdc
At this point, the detailed information of the RAID device shows that only 2 disks remain in normal service;
~]# mdadm -D /dev/md0
11> Test: add the repaired disks back into the RAID, or use other new disks;
~]# mdadm /dev/md0 -a /dev/sdb
~]# mdadm /dev/md0 -a /dev/sdc
Then check the RAID device's detailed information again;
~]# mdadm -D /dev/md0
12> Stop the md device;
~]# mdadm -S /dev/md#
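To bring a stopped array back online, the assemble mode (-A) listed earlier can be used; a minimal sketch, assuming the member disks from this example:
~]# mdadm -A /dev/md0 /dev/sd{b,c,d,e}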
Script 1:
1> List all disk devices recognized by the current system;
2> If there is only one disk, display its space usage information;
otherwise, display the space information of the last disk device;
[root@kouyuushinn tmp]# more six.sh
#!/bin/bash
# six.sh: list the disks fdisk reports, then show details of one of them
all_disks=$(fdisk -l | grep -o "^Disk /dev/[sh]d[a-z]")
echo
echo "This system has disk devices:"
echo "$all_disks"

# count the disks found
num_disk=$(fdisk -l | grep -o "^Disk /dev/[sh]d[a-z]" | wc -l)
echo
echo "num_disk is $num_disk."
echo

# names of the first and the last disk
first_disk_name=$(fdisk -l | grep -o "^Disk /dev/[sh]d[a-z]" | head -1 | cut -d' ' -f2)
last_disk_name=$(fdisk -l | grep -o "^Disk /dev/[sh]d[a-z]" | tail -1 | cut -d' ' -f2)
echo "first_disk_name is $first_disk_name."
echo "last_disk_name is $last_disk_name."
echo

# a single disk: show it; more than one: show the last one
if [ $num_disk -eq 1 ]; then
    echo "The first disk device information:"
    fdisk -l $first_disk_name
else
    echo "The last disk device information:"
    fdisk -l $last_disk_name
fi
[root@kouyuushinn tmp]#
[root@kouyuushinn tmp]# ./six.sh

This system has disk devices:
Disk /dev/sda
Disk /dev/sdb

num_disk is 2.

first_disk_name is /dev/sda.
last_disk_name is /dev/sdb.

The last disk device information:

Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xfead7da1

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048     2099199     1048576   83  Linux
/dev/sdb2         2099200     6293503     2097152   83  Linux
/dev/sdb3         6293504    41943039    17824768    5  Extended
/dev/sdb5         6295552    10489855     2097152   83  Linux
/dev/sdb6        10491904    11106303      307200   83  Linux
[root@kouyuushinn tmp]#
Exercise: Create a RAID10 device with 1G of usable space and a chunk size of 128k, formatted with ext4 and automatically mounted at /mydata on boot;
Here, multiple partitions on a single disk are used to build the RAID10 (fine for practice, though real redundancy requires separate physical disks): create 4 partitions of 500M each;
Step 1: Create 4 partitions of 500M each:
--------------------------------
[root@kouyuushinn ~]#
[root@kouyuushinn ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): p
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xfead7da1
Device Boot Start End Blocks Id System
/dev/sdb1 2048 2099199 1048576 83 Linux
/dev/sdb2 2099200 6293503 2097152 83 Linux
/dev/sdb3 6293504 41943039 17824768 5 Extended
/dev/sdb5 6295552 10489855 2097152 83 Linux
/dev/sdb6 10491904 11106303 307200 83 Linux
Command (m for help): n
Partition type:
p primary (2 primary, 1 extended, 1 free)
l logical (numbered from 5)
Select (default p): l
Adding logical partition 7
First sector (11108352-41943039, default 11108352):
Using default value 11108352
Last sector, +sectors or +size{K,M,G} (11108352-41943039, default 41943039): +500M
Partition 7 of type Linux and of size 500 MiB is set
Command (m for help): n
Partition type:
p primary (2 primary, 1 extended, 1 free)
l logical (numbered from 5)
Select (default p): l
Adding logical partition 8
First sector (12134400-41943039, default 12134400):
Using default value 12134400
Last sector, +sectors or +size{K,M,G} (12134400-41943039, default 41943039): +500M
Partition 8 of type Linux and of size 500 MiB is set
Command (m for help): n
Partition type:
p primary (2 primary, 1 extended, 1 free)
l logical (numbered from 5)
Select (default p): l
Adding logical partition 9
First sector (13160448-41943039, default 13160448):
Using default value 13160448
Last sector, +sectors or +size{K,M,G} (13160448-41943039, default 41943039): +500M
Partition 9 of type Linux and of size 500 MiB is set
Command (m for help): n
Partition type:
p primary (2 primary, 1 extended, 1 free)
l logical (numbered from 5)
Select (default p): l
Adding logical partition 10
First sector (14186496-41943039, default 14186496):
Using default value 14186496
Last sector, +sectors or +size{K,M,G} (14186496-41943039, default 41943039): +500M
Partition 10 of type Linux and of size 500 MiB is set
Command (m for help): t
Partition number (1-3,5-10, default 10):
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'
Command (m for help): t
Partition number (1-3,5-10, default 10): 7
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'
Command (m for help): t
Partition number (1-3,5-10, default 10): 8
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'
Command (m for help): t
Partition number (1-3,5-10, default 10): 9
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'
Command (m for help): t
Partition number (1-3,5-10, default 10): 10
Hex code (type L to list all codes): fd
Changed type of partition 'Linux raid autodetect' to 'Linux raid autodetect'
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
[root@kouyuushinn ~]# partx -a /dev/sdb
partx: /dev/sdb: error adding partitions 1-3
partx: /dev/sdb: error adding partitions 5-6
[root@kouyuushinn ~]# partx -a /dev/sdb
partx: /dev/sdb: error adding partitions 1-3
partx: /dev/sdb: error adding partitions 5-10
[root@kouyuushinn ~]#
[root@kouyuushinn ~]# cat /proc/partitions
major minor #blocks name
8 0 41943040 sda
8 1 1048576 sda1
8 2 40893440 sda2
8 16 20971520 sdb
8 17 1048576 sdb1
8 18 2097152 sdb2
8 19 1 sdb3
8 21 2097152 sdb5
8 22 307200 sdb6
8 23 512000 sdb7
8 24 512000 sdb8
8 25 512000 sdb9
8 26 512000 sdb10
11 0 1048575 sr0
253 0 38789120 dm-0
253 1 2097152 dm-1
9 0 1023232 md0
[root@kouyuushinn ~]#
Step 2: Create the RAID10
------------------
[root@kouyuushinn ~]#
[root@kouyuushinn ~]# mdadm -C /dev/md0 -a yes -n 4 -l 10 -c 128 /dev/sdb{7,8,9,10}
mdadm: /dev/sdb7 appears to be part of a raid array:
level=raid10 devices=4 ctime=Sun May 6 20:35:01 2018
mdadm: /dev/sdb8 appears to be part of a raid array:
level=raid10 devices=4 ctime=Sun May 6 20:35:01 2018
mdadm: /dev/sdb9 appears to be part of a raid array:
level=raid10 devices=4 ctime=Sun May 6 20:35:01 2018
mdadm: /dev/sdb10 appears to be part of a raid array:
level=raid10 devices=4 ctime=Sun May 6 20:35:01 2018
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@kouyuushinn ~]#
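Before creating the file system, the initial resync of the new array can be checked (the output depends on timing):
~]# cat /proc/mdstat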
Step 3: Create the file system and mount it;
-----------------------
[root@kouyuushinn ~]#
[root@kouyuushinn ~]# mke2fs -t ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=32 blocks, Stripe width=64 blocks
64000 inodes, 255808 blocks
12790 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=262144000
8 block groups
32768 blocks per group, 32768 fragments per group
8000 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
[root@kouyuushinn ~]#
[root@kouyuushinn ~]# blkid /dev/md0
/dev/md0: UUID="e8c1aa0f-b241-422c-b05a-a4e34e14672d" TYPE="ext4"
[root@kouyuushinn ~]#
[root@kouyuushinn ~]# vim /etc/fstab
[root@kouyuushinn ~]#
[root@kouyuushinn ~]#
[root@kouyuushinn ~]# tail -1 /etc/fstab
UUID=e8c1aa0f-b241-422c-b05a-a4e34e14672d /mydata ext4 defaults 0 0
[root@kouyuushinn ~]#
[root@kouyuushinn ~]# mount -a
[root@kouyuushinn ~]#
[root@kouyuushinn ~]# df -lh
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 37G 1.9G 36G 5% /
devtmpfs 478M 0 478M 0% /dev
tmpfs 488M 0 488M 0% /dev/shm
tmpfs 488M 6.8M 481M 2% /run
tmpfs 488M 0 488M 0% /sys/fs/cgroup
/dev/sdb5 2.0G 33M 2.0G 2% /sdb5
/dev/sdb1 1014M 33M 982M 4% /sdb1
/dev/sdb2 2.0G 6.0M 1.8G 1% /sdb2
/dev/sdb6 283M 2.1M 262M 1% /sdb6
/dev/sda1 1014M 153M 862M 16% /boot
tmpfs 98M 0 98M 0% /run/user/0
/dev/md0 968M 2.5M 900M 1% /mydata
[root@kouyuushinn ~]#
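Note how mke2fs picked up the RAID geometry automatically in the output above: with a 128K chunk and 4K file system blocks, the stride is 128 / 4 = 32 blocks, and because this 4-disk RAID10 stripes data over 2 mirrored pairs, the stripe width is 2 x 32 = 64 blocks, matching the 'Stride=32 blocks, Stripe width=64 blocks' line.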
View the detailed information of /dev/md0;
-----------------------
[root@kouyuushinn ~]#
[root@kouyuushinn ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Mon May 7 11:43:08 2018
Raid Level : raid10
Array Size : 1023232 (999.25 MiB 1047.79 MB)
Used Dev Size : 511616 (499.63 MiB 523.89 MB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Mon May 7 11:46:11 2018
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 128K
Consistency Policy : resync
Name : kouyuushinn.cn:0 (local to host kouyuushinn.cn)
UUID : ea281df5:82e730ba:359718e7:1a218e8a
Events : 17
Number Major Minor RaidDevice State
0 8 23 0 active sync set-A /dev/sdb7
1 8 24 1 active sync set-B /dev/sdb8
2 8 25 2 active sync set-A /dev/sdb9
3 8 26 3 active sync set-B /dev/sdb10
[root@kouyuushinn ~]#
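Optionally, since the md device name and number can change across reboots (the reason given earlier for mounting by UUID), the array definition can also be recorded in mdadm's configuration file so it reassembles under the same name; a minimal sketch (/etc/mdadm.conf is the usual location on CentOS):
~]# mdadm -D -s >> /etc/mdadm.conf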