
Deploying a Data Sharing Cluster
This command creates a new GFS2 file system named gfs2 for the psbmCluster cluster. The
file system uses the lock_dlm locking protocol, contains 4 journals, and resides on the
/vg01/lv01 logical volume.
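For reference, a mkfs.gfs2 invocation producing a file system with exactly these parameters would look roughly as follows (a reconstruction for illustration only; the actual command is given in the preceding step and may differ):
# mkfs.gfs2 -p lock_dlm -t psbmCluster:gfs2 -j 4 /dev/vg01/lv01
Here -p sets the locking protocol, -t sets the lock table in the clustername:fsname format, and -j sets the number of journals, one for each cluster node that will mount the file system.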
4 Make sure that the created logical volumes can be accessed by all servers in the cluster. This
ensures that the clustering software can mount the /vz partition, which you will create on the
logical volume in the next step, on any of your cluster nodes. One way to verify this is shown
below.
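For example, assuming the standard LVM tools are installed, you can run the following command on each node and check that the volume is reported as active (a quick sanity check, not part of the original procedure):
# lvscan
The /dev/vg01/lv01 volume should be listed as ACTIVE on every node. If it is not, you can try rescanning and activating the volume group with vgscan and vgchange -ay.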
5 Tell the node to automatically mount the /vz partition at boot time. To do this, add the
/vz entry to the /etc/fstab file on the node. Assuming that your GFS file system resides on
the /vg01/lv01 logical volume, you can add the following entry to the fstab file:
/dev/vg01/lv01 /vz gfs2 defaults,noatime 0 0
If your GFS file system resides on an LVM volume created on a partition provided over the
iSCSI protocol, you need to add the extra _netdev option in /etc/fstab. This option delays
mounting until the network, and hence the iSCSI-backed LVM volumes, have been initialized:
/dev/vg01/lv01 /vz gfs2 defaults,noatime,_netdev 0 0
Also make sure that the netfs service, which mounts file systems marked _netdev at boot, is enabled:
# chkconfig netfs on
6 Configure the gfs2 service on the node to start in the default runlevel. For example, if your
system default runlevel is 3, you can enable the gfs2 service by executing the following command:
# chkconfig --level 3 gfs2 on
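To confirm the setting, you can list the service's runlevel configuration (a quick check; the on/off pattern depends on which runlevels you enabled):
# chkconfig --list gfs2
Runlevel 3 should be reported as on.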
7 Enable the cluster mode, and stop the vz, parallels-server, and PVA Agent services:
# prlsrvctl set --cluster-mode on
# service vz stop
# service parallels-server stop
# service pvaagentd stop
# service pvapp stop
8 Move /vz to a temporary directory /vz1, and create a new /vz directory:
# mv /vz /vz1; mkdir /vz
Later on, you will mount the shared data storage located on the GFS volume to the newly
created /vz directory and move all data from the /vz1 directory to it; a sketch of that
sequence follows.
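When you reach that step, the commands will look roughly like the following sketch (based on the description above and the fstab entry from step 5; the cp -a invocation also picks up hidden files):
# mount /vz
# cp -a /vz1/. /vz/
# rm -rf /vz1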
Configuring the Data Storage for Other Nodes in the Cluster
To configure the shared data storage for the second and all remaining nodes in the cluster, do the
following:
1 Tell each node in the cluster to automatically mount the /vz partition at boot time. To do
this, add the /vz entry to the /etc/fstab file on each node in the cluster. Assuming that your
GFS file system resides on the /vg01/lv01 logical volume, you can add the following entry to
the fstab file:
/dev/vg01/lv01 /vz gfs2 defaults,noatime 0 0
2 Configure the gfs2 service on each node in the cluster to start in the default runlevel. For
example, if your system default runlevel is set to 3, you can enable the gfs2 service by
executing the following command on each of the cluster nodes:
# chkconfig --level 3 gfs2 on