RHCS + iSCSI + CLVM + GFS2: Building Shared Storage

Environment:

node1 (10.11.8.187): target node --> install: corosync, scsi-target-utils

node2 (10.11.8.186), node3 (10.11.8.200): initiator nodes --> install: corosync, iscsi-initiator-utils, gfs2-utils, lvm2-cluster

The package installation is not covered again here; see the earlier articles.

Target configuration:

[root@node1 ~]# tgtadm --lld iscsi --mode target --op show
Target 1: iqn.2016.com.shiina:storage.disk1
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
        I_T nexus: 1
            Initiator: iqn.2016.com.shiina:node2
            Connection: 0
                IP Address: 10.11.8.186
        I_T nexus: 2
            Initiator: iqn.1994-05.com.redhat:e6175c7b6952
            Connection: 0
                IP Address: 10.11.8.200
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 8590 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/sdb
            Backing store flags:
    Account information:
    ACL information:
        10.11.0.0/16
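For reference, a target like the one shown above could be built with tgtadm commands along these lines. This is a hedged sketch, not the exact procedure from the earlier article (which may instead define the target persistently in /etc/tgt/targets.conf); the IQN, backing store and ACL values are taken from the output above:

[root@node1 ~]# tgtadm --lld iscsi --mode target --op new --tid 1 \
                --targetname iqn.2016.com.shiina:storage.disk1
[root@node1 ~]# tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
                --backing-store /dev/sdb
[root@node1 ~]# tgtadm --lld iscsi --mode target --op bind --tid 1 \
                --initiator-address 10.11.0.0/16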

Discover and log in to the target on node2 and node3:

[root@node2 ~]# iscsiadm -m discovery -t sendtargets -p 10.11.8.187
10.11.8.187:3260,1 iqn.2016.com.shiina:storage.disk1
[root@node2 ~]# iscsiadm -m node -T iqn.2016.com.shiina:storage.disk1 -p 10.11.8.187 -l
Logging in to [iface: default, target: iqn.2016.com.shiina:storage.disk1, portal: 10.11.8.187,3260] (multiple)
Login to [iface: default, target: iqn.2016.com.shiina:storage.disk1, portal: 10.11.8.187,3260] successful.
[root@node2 ~]# fdisk -l
......
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        1044     8385898+  83  Linux
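If the node record is not already set to log in automatically at boot, it can be updated as sketched below (an assumption, not a step from the original article; on RHEL the default node.startup is often already automatic):

[root@node2 ~]# iscsiadm -m node -T iqn.2016.com.shiina:storage.disk1 -p 10.11.8.187 \
                --op update -n node.startup -v automatic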

Configure RHCS and create a cluster (see the earlier article for the detailed steps); a minimal configuration sketch follows.
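The sketch below is an assumption for illustration only: the actual cluster was built in the earlier article (typically through luci/ricci), and a production setup also needs proper fencing. The point to note is that the cluster name used here, cluster, must match the clustername half of the GFS2 lock table created later:

[root@node2 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster name="cluster" config_version="1">
    <cman two_node="1" expected_votes="1"/>
    <clusternodes>
        <clusternode name="node2" nodeid="1"/>
        <clusternode name="node3" nodeid="2"/>
    </clusternodes>
</cluster>
[root@node2 ~]# service cman start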

Configure CLVM:

lvm2-cluster still uses the regular LVM tools, but LVM's configuration must be changed before it will support clustering:

[root@node2 ~]# lvmconf --enable-cluster

Alternatively, set locking_type = 3 directly in /etc/lvm/lvm.conf. Either way, the change is needed on every node that will run clvmd (here node2 and node3).
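To confirm the change took effect (the expected output below assumes the stock lvm.conf layout):

[root@node2 ~]# grep -E '^[[:space:]]*locking_type' /etc/lvm/lvm.conf
    locking_type = 3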

1. Start the clvmd service:

[root@node2 ~]# service clvmd start
Starting clvmd:
Activating VG(s):   2 logical volume(s) in volume group "vol0" now active
  clvmd not running on node node3
                                                           [  OK  ]
[root@node3 ~]# service clvmd start
Starting clvmd:
Activating VG(s):   2 logical volume(s) in volume group "vol0" now active
                                                           [  OK  ]
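If the shared storage should come back automatically after a reboot, the related services are usually enabled on both initiator nodes as well (an assumption, not shown in the original steps):

[root@node2 ~]# chkconfig cman on
[root@node2 ~]# chkconfig clvmd on
[root@node2 ~]# chkconfig gfs2 on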

2. Create the logical volume:

[root@node2 ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created
[root@node2 ~]# vgcreate cluster_vg /dev/sdb
  Clustered volume group "cluster_vg" successfully created
[root@node2 ~]# lvcreate -L 2G -n clusterlv cluster_vg
  Logical volume "clusterlv" created.
[root@node2 ~]# lvs
  LV        VG         Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  clusterlv cluster_vg -wi-a-----  2.00g
  root      vol0       -wi-ao----  4.88g
  usr       vol0       -wi-ao---- 13.92g
[root@node3 ~]# lvs
  LV        VG         Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  clusterlv cluster_vg -wi-a-----  2.00g
  root      vol0       -wi-ao----  4.88g
  usr       vol0       -wi-ao---- 13.92g

3. Create the GFS2 filesystem:

mkfs.gfs2 options:
  -j # : number of journals; the filesystem can be mounted by at most as many nodes as it has journals
  -J # : size of each journal, 128MB by default
  -p {lock_dlm|lock_nolock} : locking protocol to use
  -t : lock table name, in the form clustername:locktablename; clustername must be the name of the cluster this node belongs to, and locktablename must be unique within that cluster (it tells the lock tables apart when the cluster has more than one shared storage volume)

Related tools:
  gfs2_tool : view and tune filesystem parameters
  gfs2_jadd -j # : add the given number of journals
  gfs2_grow : grow a GFS2 filesystem

[root@node2 ~]# mkfs.gfs2 -j 2 -p lock_dlm -t cluster:lvmstor /dev/cluster_vg/clusterlv
This will destroy any data on /dev/cluster_vg/clusterlv.
It appears to contain: symbolic link to `../dm-2'
Are you sure you want to proceed? [y/n] y
Device:                    /dev/cluster_vg/clusterlv
Blocksize:                 4096
Device Size                2.00 GB (524288 blocks)
Filesystem Size:           2.00 GB (524288 blocks)
Journals:                  2
Resource Groups:           8
Locking Protocol:          "lock_dlm"
Lock Table:                "cluster:lvmstor"
UUID:                      a3462622-2403-f3e4-b2ae-136b9274fa1d
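The locking protocol and lock table recorded in the superblock can be read back later with gfs2_tool, which is useful if the cluster is ever renamed. A small sketch; the exact output wording is illustrative:

[root@node2 ~]# gfs2_tool sb /dev/cluster_vg/clusterlv proto
current lock protocol name = "lock_dlm"
[root@node2 ~]# gfs2_tool sb /dev/cluster_vg/clusterlv table
current lock table name = "cluster:lvmstor"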

Mount and test the filesystem:

[root@node2 ~]# mount -t gfs2 /dev/cluster_vg/clusterlv /mnt
[root@node2 ~]# touch /mnt/1.txt
[root@node2 ~]# ls /mnt/
1.txt
[root@node3 ~]# mount -t gfs2 /dev/cluster_vg/clusterlv /mnt
[root@node3 ~]# ls /mnt/
1.txt
[root@node3 ~]# touch /mnt/2.txt    # create a file while both node2 and node3 have the filesystem mounted
[root@node2 ~]# ls /mnt/
1.txt  2.txt                        # the new file shows up on node2 as expected
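Two common follow-ups, sketched here as assumptions rather than steps from the original article: persist the mount in /etc/fstab (the gfs2 init service mounts it at boot), and grow the filesystem online after extending the clustered LV. Both gfs2_grow and gfs2_jadd operate on the mounted filesystem:

[root@node2 ~]# echo "/dev/cluster_vg/clusterlv /mnt gfs2 defaults 0 0" >> /etc/fstab
[root@node2 ~]# lvextend -L +1G /dev/cluster_vg/clusterlv    # grow the clustered LV first
[root@node2 ~]# gfs2_grow /mnt                               # then grow GFS2 online from one node
[root@node2 ~]# gfs2_jadd -j 1 /mnt                          # add a journal if a third node needs to mount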

Keywords: iscsi, lvm, cluster, storage
