Our cluster uses a Dell EqualLogic PS4000e iSCSI SAN storage array (16 TB) for database storage and the home directories of regular users. The array is mounted on the master node via the iSCSI initiator at the mount point /mnt/ps4000e/. The sub-directory /mnt/ps4000e/home is then exported across the cluster, so every node sees the same home directory and users do not need to move their data files between nodes. NFS provides this network-based mounting; an NFS server/client is easy to install by following the guide at http://en.gentoo-wiki.com/wiki/NFS/Server. By default, data transfer uses the IPoIB mechanism, but since we have an InfiniBand network we can use RDMA instead: NFS/RDMA achieves much higher throughput. Here is my experience setting up NFS/RDMA.
Step 1: Kernel compilation
1) Requirements for NFS Server/Client
For the server node, enable File systems/Network File Systems/NFS server support.
For the client node, enable File systems/Network File Systems/NFS client support.
2) Requirements for RDMA support
The InfiniBand drivers should be compiled as modules, as described in a previous note. Check that RDMA support is enabled: make sure that SUNRPC_XPRT_RDMA in the .config file is set to M.
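A quick way to check these options (assuming the kernel sources live at /usr/src/linux; adjust the path to your kernel tree):

```shell
# Check the NFS and RDMA-related options in the kernel config;
# the path /usr/src/linux is an assumption -- adjust as needed.
grep -E 'CONFIG_NFSD=|CONFIG_NFS_FS=|CONFIG_SUNRPC_XPRT_RDMA=' /usr/src/linux/.config
# Expected: CONFIG_NFSD and CONFIG_NFS_FS set to y or m,
# and CONFIG_SUNRPC_XPRT_RDMA=m
```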
Step 2: emerge net-fs/nfs-utils
Version 1.2.3-r1 is installed. The portmap package is no longer needed; rpcbind will be installed instead as a dependency. If you see an error message saying that the nfs-utils package is blocked by portmap, unmerge portmap first. If portmap is pulled in by ypserv, unmerge the ypserv and ypbind packages first, then re-emerge them after nfs-utils is installed.
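The unblocking sequence, as a sketch (standard emerge commands, run as root; skip the NIS steps if you do not use ypserv/ypbind):

```shell
# Remove the blocking package first
emerge --unmerge portmap
# If portmap was pulled in by ypserv, remove the NIS packages too
emerge --unmerge ypserv ypbind
# Install nfs-utils, which pulls in rpcbind as a dependency
emerge net-fs/nfs-utils
# Re-install the NIS packages afterwards
emerge ypserv ypbind
```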
Step 3: Export the directory.
Edit the /etc/exports file and add the following line:
# /etc/exports: NFS file systems being exported. See exports(5).
/mnt/ps4000e/home 10.0.0.0/255.255.255.0(fsid=0,rw,async,insecure,no_subtree_check,no_root_squash)
The option insecure is important here because the NFS/RDMA client does not use a reserved port.
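After editing /etc/exports, the export table can be refreshed without restarting the NFS service, using the standard exportfs flags:

```shell
# Re-export all directories listed in /etc/exports
exportfs -ra
# Verify the active exports and their options
exportfs -v
```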
Step 4: Load necessary modules.
On the server node, the svcrdma module is needed; on the client node, xprtrdma. I added them to the /etc/init.d/nfs script. Put the following lines in an appropriate place in the init script:
# svcrdma: server-side module for NFS/RDMA
# xprtrdma: client-side module for NFS/RDMA
/sbin/modprobe svcrdma > /dev/null 2>&1
/sbin/modprobe xprtrdma > /dev/null 2>&1
Remember to unload them when stopping the service, or add the corresponding rmmod commands to the script.
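For example, the matching unload commands for the stop section of the init script could look like this (a sketch; the module names mirror the modprobe lines above):

```shell
# Unload the RDMA transport modules when the NFS service stops
/sbin/rmmod xprtrdma > /dev/null 2>&1
/sbin/rmmod svcrdma > /dev/null 2>&1
```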
Step 5: Instruct the server to listen on the RDMA transport.
echo "rdma 20049" > /proc/fs/nfsd/portlist
I added this line to the nfs init script as well.
Step 6: Start the NFS service
/etc/init.d/nfs start
Or add the script to the default run level.
rc-update add nfs default
Step 7: Mount the file system on the client node.
First, ensure that the module xprtrdma has been loaded.
modprobe xprtrdma
Then, use the following command to mount the NFS/RDMA server:
mount -o rdma,port=20049 10.0.0.1:/mnt/ps4000e/home /mnt/ps4000e/home
To verify that the mount is using RDMA, run cat /proc/mounts and check the proto field.
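For illustration, a mounted NFS/RDMA filesystem produces a /proc/mounts entry like the hypothetical line below, and the proto field can be picked out with grep:

```shell
# Hypothetical /proc/mounts entry for the NFS/RDMA mount (for illustration only)
line='10.0.0.1:/mnt/ps4000e/home /mnt/ps4000e/home nfs rw,vers=3,proto=rdma,port=20049 0 0'
# Extract the transport protocol from the mount options
echo "$line" | grep -o 'proto=[a-z]*'
# prints: proto=rdma
```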
Alternatively, for automatic mounting at boot, add the following record to the /etc/fstab file:
10.0.0.1:/mnt/ps4000e/home /mnt/ps4000e/home nfs _netdev,proto=rdma,port=20049 0 2
The init.d script netmount will then mount the NFS/RDMA export at boot.