Introduction to the karvdash platform

Initialization

Access the platform at https://139.91.92.156.xip.io. The platform is reachable from both the CSD and the ICS FORTH VPN services.

Account Creation

You need to create an account by choosing “Sign up” and setting your username and password. The platform will not be immediately accessible, as the system administrator must first approve your account.

Service Creation

The next step is to create the services that you are going to use. On the “Services” page, using the “New Service” option, create an MPI service to get a cluster of MPI pods on which to run MPI or SAS applications, and a Kubebox service to have console access to the cluster. You can name them whatever you want when prompted.

(Optional): You can also create a Zeppelin service to run executable notebooks on your cluster (instead of working via the console).

Usage

Uploading files to your cluster

Choose “Files” from the left pane. The “Private” tab displays your private files, located in the “/private” directory in your cluster, while the “Shared” tab displays files shared across clusters, located in the “/shared” directory. The “New folder” and “Add files” options should be self-explanatory.

(Note: You can only add one file at a time, so you should compress your files and upload the archive instead.)
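For example, assuming your sources live in a local “project” directory (the file and directory names here are placeholders), you could bundle everything into a single archive, upload it through “Add files”, and unpack it later from a console on one of your pods:

  # On your local machine: bundle everything into one archive
  tar czf project.tar.gz project/

  # On a pod, after uploading project.tar.gz to the “Private” tab:
  cd /private
  tar xzf project.tar.gz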

Accessing your cluster

From the “Services” page, click on your Kubebox service. You will be prompted to enter your username and password, and then to choose a namespace. Navigate to karvdash-<your username> and choose it (navigation works with both the mouse and the keyboard arrows).

You should see a menu like this:

[Screenshot: Kubebox menu]

Now, choose one of your MPI pods and press “R” to open a console on that pod. A new tab named after your MPI service will open with a terminal on the pod. You can switch between tabs by clicking on them.

Running your applications

SAS

The cluster pods contain many utilities (m4, gcc, and gdb, among others), so you will most likely be able to compile, run, and debug your SAS applications directly from the console.
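For instance, a typical compile-and-debug cycle on a pod might look like the following (the file and binary names are placeholders):

  gcc -g -O2 -o myapp myapp.c   # compile with debugging symbols
  ./myapp                       # run
  gdb ./myapp                   # inspect a crash or step through the code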

MPI

You can use SLURM to run MPI applications on the cluster without having to manually edit hostfiles. SLURM even offers advanced capabilities such as workload scheduling, reservations, and optimized resource selection.

SLURM is most likely already configured on the cluster, but if you need to reconfigure it, you can run:

  1. rm /etc/slurm.conf
  2. MPI_POD=$(kubectl get pods --selector=app=mpi -o jsonpath='{.items[0].metadata.name}')
  3. kubectl cp $MPI_POD:/etc/slurm.conf /etc/slurm.conf
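After copying the configuration, you can check that SLURM sees the cluster nodes with the standard client tools (assuming they are available on the pods):

  sinfo        # list partitions and node states
  sinfo -N -l  # show per-node details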

Finally, instead of using mpirun, use srun to run your MPI application. Check srun --help for the available options; the --nodes and --ntasks-per-node options are especially useful.

(IMPORTANT: Also use the --mpi=pmix option with srun for Open MPI applications to run.)
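Putting it together, a typical invocation might look like the following (the binary name and node counts are placeholders):

  # 8 ranks in total: 4 nodes, 2 tasks per node, PMIx for Open MPI
  srun --mpi=pmix --nodes=4 --ntasks-per-node=2 ./my_mpi_app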

(Optional) Zeppelin

An MPI application example using Zeppelin and SLURM is located on the “Shared” tab of the “Files” page, in the examples folder. Download the “MPI NAS Benchmarks (with SLURM).zpln” file, then open your Zeppelin service and import the downloaded file. Feel free to play around and experiment with the commands, and check out how easy it is to visualize the produced data in Zeppelin.