Shared and distributed storage deployment

The storage role allows users to configure PowerVault Storage devices, BeeGFS and NFS services on the cluster.

  1. Enter all required parameters in input/storage_config.yml

Parameters for Storage

nfs_client_params (JSON list, Required)

  • This JSON list contains all parameters required to set up NFS.

  • For a bolt-on setup where there is a pre-existing NFS export, set nfs_server to false.

  • When nfs_server is set to true, an NFS share is created on the control plane for access by all cluster nodes.

  • For more information on the different kinds of configuration available, click here.

beegfs_rdma_support (boolean, Optional)

  • Set this variable to true if the cluster has RDMA-capable network hardware (for example, InfiniBand).

  • Choices:

    • false <- Default

    • true

beegfs_ofed_kernel_modules_path (string, Optional)

  • The path where separate OFED kernel modules are installed.

  • Ensure that the path provided here exists on all target nodes.

  • Default value: "/usr/src/ofa_kernel/default/include"

beegfs_mgmt_server (string, Required)

  • BeeGFS management server IP.

  • Note: The provided IP should have an explicit BeeGFS management server running.

beegfs_mounts (string, Optional)

  • BeeGFS client file system mount location. If storage.yml is being used to change the BeeGFS mount location, set beegfs_unmount_client to true.

  • Default value: "/mnt/beegfs"

beegfs_unmount_client (boolean, Optional)

  • Setting this value to true unmounts the running instance of the BeeGFS client. Use it only when decommissioning BeeGFS, changing the mount location, or changing the BeeGFS version.

  • Choices:

    • false <- Default

    • true

beegfs_version_change (boolean, Optional)

  • Use this variable to change the BeeGFS version on the target nodes.

  • Choices:

    • false <- Default

    • true

ansible_config_file_path (string, Required)

  • Path to the directory hosting the Ansible configuration file (ansible.cfg).

  • If Ansible is installed using dnf, this is /etc/ansible on the host running Ansible.

  • If Ansible is installed using pip, this path must be set explicitly.

  • Default value: /etc/ansible

beegfs_secret_storage_filepath (string, Required)

  • The file path (including the filename) where the connauthfile is placed.

  • Required for BeeGFS version >= 7.2.7.

  • Default value: /home/connauthfile
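
A minimal sketch of how these variables might look once filled in is shown below (placeholder values only; verify the field names and defaults against the input/storage_config.yml shipped with your Omnia version):

    # Illustrative values only; replace with site-specific settings.
    nfs_client_params:
      - { server_ip: xx.xx.xx.xx, server_share_path: "/mnt/share", client_share_path: "/mnt/client", client_mount_options: "nosuid,rw,sync,hard,intr" }
    beegfs_rdma_support: false
    beegfs_ofed_kernel_modules_path: "/usr/src/ofa_kernel/default/include"
    beegfs_mgmt_server: xx.xx.xx.xx
    beegfs_mounts: "/mnt/beegfs"
    beegfs_unmount_client: false
    beegfs_version_change: false
    beegfs_secret_storage_filepath: /home/connauthfile
    ansible_config_file_path: /etc/ansible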

Note

If storage.yml is run with input/storage_config.yml filled out, the BeeGFS and NFS clients will be set up.

  1. Ensure that the entry {"name": "beegfs", "version": "7.2.6"} is included in input/software_config.json and that a local repository is created, as shown in the sketch below. For more information, click here.
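
For reference, a trimmed sketch of where this entry might sit in input/software_config.json is shown below. Only the relevant list is shown; the real file contains additional fields, so match the exact structure of the file shipped with your Omnia version:

      {
          "softwares": [
              { "name": "beegfs", "version": "7.2.6" }
          ]
      }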

Installing BeeGFS Client

  • If the user intends to use BeeGFS, ensure that a BeeGFS cluster has been set up with the beegfs-mgmtd, beegfs-meta, and beegfs-storage services running.

    Ensure that the following ports are open for TCP and UDP connectivity:

    Port      Service
    8008      Management service (beegfs-mgmtd)
    8003      Storage service (beegfs-storage)
    8004      Client service (beegfs-client)
    8005      Metadata service (beegfs-meta)
    8006      Helper service (beegfs-helperd)

To open the required ports, use the following steps:

  1. firewall-cmd --permanent --zone=public --add-port=<port number>/tcp

  2. firewall-cmd --permanent --zone=public --add-port=<port number>/udp

  3. firewall-cmd --reload

  4. systemctl status firewalld
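
For example, the default BeeGFS ports listed above can be opened in one pass with a small shell loop (a sketch; run it on every node that needs the corresponding connectivity):

    # Open TCP and UDP for each default BeeGFS port, then reload the firewall
    for port in 8003 8004 8005 8006 8008; do
        firewall-cmd --permanent --zone=public --add-port=${port}/tcp
        firewall-cmd --permanent --zone=public --add-port=${port}/udp
    done
    firewall-cmd --reload
    systemctl status firewalld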

  • Ensure that the nodes in the inventory have been assigned only these roles: manager and compute.

Note

  • When working with RHEL, verify that the BeeGFS configuration is supported using the information linked here.

  • If the BeeGFS server (MGMTD, Meta, or storage) is running BeeGFS version 7.3.1 or higher, the security feature on the server should be disabled. Change the value of connDisableAuthentication to true in /etc/beegfs/beegfs-mgmtd.conf, /etc/beegfs/beegfs-meta.conf and /etc/beegfs/beegfs-storage.conf. Restart the services to complete the task:

    systemctl restart beegfs-mgmtd
    systemctl restart beegfs-meta
    systemctl restart beegfs-storage
    systemctl status beegfs-mgmtd
    systemctl status beegfs-meta
    systemctl status beegfs-storage
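
    One possible way to apply the connDisableAuthentication change to all three files is sketched below (it assumes the key is already present in each stock configuration file; inspect the files afterwards and then restart the services as shown above):

        # Sketch: set connDisableAuthentication = true in the three BeeGFS server configuration files
        sed -i 's/^connDisableAuthentication.*/connDisableAuthentication = true/' \
            /etc/beegfs/beegfs-mgmtd.conf \
            /etc/beegfs/beegfs-meta.conf \
            /etc/beegfs/beegfs-storage.conf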
    

NFS bolt-on

  • Ensure that an external NFS server is running. NFS shares are mounted on the clients using the external NFS server's IP.

  • Fill out the nfs_client_params variable in the storage_config.yml file in JSON format, using the samples provided below.

  • This role runs on manager, compute and login nodes.

  • Make sure that /etc/exports on the NFS server is populated with the same paths listed as server_share_path in nfs_client_params in storage_config.yml (see the /etc/exports example after the configuration samples below).

  • Post configuration, enable the following services (using this command: firewall-cmd --permanent --add-service=<service name>) and then reload the firewall (using this command: firewall-cmd --reload).

    • nfs

    • rpc-bind

    • mountd

  • Omnia supports all NFS mount options. Without user input, the default mount options are nosuid,rw,sync,hard,intr. For a list of mount options, click here.

  • The fields listed in nfs_client_params are:

    • server_ip: IP address of the NFS server.

    • server_share_path: The directory exported by the NFS server.

    • client_share_path: Target directory for the NFS mount on the client. If left empty, the corresponding server_share_path value is used as client_share_path.

    • client_mount_options: The mount options used when mounting the NFS export on the client. Default value: nosuid,rw,sync,hard,intr.

  • There are 3 ways to configure the feature:

    1. Single NFS node: A single NFS filesystem is mounted from a single NFS server. The value of nfs_client_params would be:

      - { server_ip: xx.xx.xx.xx, server_share_path: "/mnt/share", client_share_path: "/mnt/client", client_mount_options: "nosuid,rw,sync,hard,intr" }
      
    2. Multiple Mount NFS Filesystem: Multiple filesystems are mounted from a single NFS server. The value of nfs_client_params would be:

      - { server_ip: xx.xx.xx.xx, server_share_path: "/mnt/server1", client_share_path: "/mnt/client1", client_mount_options: "nosuid,rw,sync,hard,intr" }
      - { server_ip: xx.xx.xx.xx, server_share_path: "/mnt/server2", client_share_path: "/mnt/client2", client_mount_options: "nosuid,rw,sync,hard,intr" }
      
    3. Multiple NFS Filesystems: Multiple filesystems are mounted from multiple NFS servers. The value of nfs_client_params would be:

      - { server_ip: xx.xx.xx.xx, server_share_path: "/mnt/server1", client_share_path: "/mnt/client1", client_mount_options: "nosuid,rw,sync,hard,intr" }
      - { server_ip: yy.yy.yy.yy, server_share_path: "/mnt/server2", client_share_path: "/mnt/client2", client_mount_options: "nosuid,rw,sync,hard,intr" }
      - { server_ip: zz.zz.zz.zz, server_share_path: "/mnt/server3", client_share_path: "/mnt/client3", client_mount_options: "nosuid,rw,sync,hard,intr" }
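
On the server side of the bolt-on setup, each path used as server_share_path needs a matching /etc/exports entry on the NFS server that owns it, as noted earlier in this section. An illustrative sketch (export options and client scope are placeholders; tune them for your environment):

      # /etc/exports on the external NFS server
      /mnt/server1 *(rw,sync)

After editing /etc/exports, re-export the shares with exportfs -ra on that server.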
      

To run the playbook:

cd omnia/storage
ansible-playbook storage.yml -i inventory

(Where inventory refers to the inventory file listing kube_control_plane, login_node and compute nodes.)
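
After the playbook completes, the mounts can be spot-checked on a client node, for example (mount points and filesystem types depend on the values set in input/storage_config.yml):

    mount | grep -E 'beegfs|nfs'     # confirm that the BeeGFS and NFS mounts are present
    df -hT | grep -E 'beegfs|nfs'    # show size and filesystem type for each mount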

Note

If a subsequent run of storage.yml fails, the storage_config.yml file will be unencrypted.

If you have any feedback about Omnia documentation, please reach out at omnia.readme@dell.com.