INDUS : IIT Ropar High-Performance Computing (HPC) System

Overview and System Architecture

This document serves as the user manual for the IIT Ropar High-Performance Computing (HPC) facility. It provides an overview of the hardware architecture, software environment, and essential usage information required to effectively access and utilize the cluster.

The manual includes guidance on:

  • Logging into the HPC system
  • Submitting and monitoring computational jobs
  • Managing data and retrieving results
  • Understanding the underlying infrastructure

The HPC system consists of:

  • 2 master nodes
  • 4 I/O nodes
  • 19 compute nodes
  • 2 high-memory nodes

Together, these resources provide a peak computational capability of approximately 81 teraflops.

System Hardware Specifications

The cluster is built on Intel® Xeon® Gold 6548Y+ processors and interconnected through a Mellanox ConnectX-6 HDR-100 InfiniBand network (100 Gb/s) with an active Subnet Manager and an MTU of 4096 bytes.

A Lustre parallel file system is used for high-performance distributed storage.

Node Summary

  Node Type           Count
  Master Nodes            2
  I/O Nodes               4
  Compute Nodes          19
  High-Memory Nodes       2
  Total Nodes            27

Master Nodes

Master nodes coordinate overall cluster operation, including:

  • System monitoring and health management
  • Resource scheduling and workload control
  • Administrative and orchestration services

Configuration (per node):

  • 2 × Intel Xeon Gold 6548Y+ (32 cores each, up to 4.1 GHz)
  • 64 total CPU cores
  • 250 GB RAM
  • 2 × 1 TB SSD

I/O Nodes

I/O nodes provide backend services for the parallel storage system, including:

  • Hosting Lustre metadata and object storage services
  • Managing filesystem mounting and data flow
  • Supporting high-throughput data access for compute workloads

Configuration (per node):

  • 2 × Intel Xeon Gold 6548Y+
  • 64 CPU cores
  • 250 GB RAM
  • 2 × 500 GB SSD

CPU Compute Nodes

Compute nodes are the primary execution resources where user applications run in interactive or batch mode via the scheduler.

Configuration (per node):

  • 2 × Intel Xeon Gold 6548Y+
  • 64 CPU cores
  • 250 GB RAM
  • 2 × 500 GB SSD

High-Memory Nodes

High-memory nodes support workloads requiring large in-memory datasets, such as:

  • Big-data analytics
  • Large-scale simulations
  • Memory-intensive scientific applications

Configuration (per node):

  • 2 × Intel Xeon Gold 6548Y+
  • 64 CPU cores
  • 1 TB RAM
  • 2 × 500 GB SSD

Storage Subsystem

  • Based on the Lustre parallel file system
  • Usable capacity: ~762 TB primary storage
  • Aggregate throughput: ~18 GB/s

Operating System

  • Rocky Linux 8.7
  • x86-64 architecture

Network Infrastructure

Efficient HPC operation relies on multiple logical network functions:

  • Management network – monitoring, control, and administration
  • Storage network – high-speed filesystem access
  • I/O network – external connectivity and campus integration
  • Message-passing network – low-latency processor communication

Primary Interconnection Network

InfiniBand HDR-100 (100 Gb/s)

  • High bandwidth and extremely low latency
  • Optimized for MPI-based parallel applications
  • Connects all compute nodes within the cluster

Secondary Interconnection Network

Gigabit Ethernet (1 Gb/s)

  • Used for management, login access, and general connectivity
  • Compatible with standard MPI implementations when InfiniBand is unavailable
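When InfiniBand is unavailable, MPI traffic can be routed over the Ethernet network, but the transport usually has to be selected explicitly. A minimal sketch for Open MPI (the MCA flags below are standard Open MPI options; `./my_app` and the process count are placeholders):

```shell
# Force Open MPI onto the TCP transport (Ethernet) instead of InfiniBand:
# pml ob1 selects the point-to-point layer that uses BTLs, and
# btl tcp,self restricts transports to TCP plus loopback.
mpirun --mca pml ob1 --mca btl tcp,self -np 64 ./my_app
```

Expect noticeably higher latency than over InfiniBand; this path is a fallback, not a substitute for the HDR-100 fabric.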

System Software Environment

  Category                      Components
  Base Operating System         Rocky Linux 8.7
  Architecture                  x86-64
  Monitoring                    Ganglia
  Resource Manager              Slurm 20.11.9
  Parallel Filesystem Client    Lustre 2.15.2
  High-Speed Interconnect       Mellanox InfiniBand
  Compilers                     GNU (gcc, g++, gfortran), Intel oneAPI 2025
System Access and Job Submission Guide

Login Instructions

Users can access the HPC system securely using SSH from Linux, Windows, or macOS.

Basic SSH Login

ssh username@IP_ADDRESS

Check Installed Applications

module use /opt/modulefiles
module avail
module load <module_name>
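A worked example of the sequence above (the module name `gcc/12.2.0` is illustrative; use a name reported by `module avail` on this system):

```shell
# Make the site modulefiles visible, then list what is installed
module use /opt/modulefiles
module avail

# Load a compiler module (illustrative name; pick one from the avail listing)
module load gcc/12.2.0

# Verify which modules are active in the current shell
module list
```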

Access from Windows

Method 1: Using MobaXterm (Recommended)

  • Connect your system to the campus network or Wi-Fi.
  • Download MobaXterm Portable from the official website.
  • Launch the application and allow required permissions.
  • Click Session → SSH.
  • Enter the server IP address (e.g., 192.168.1.33) and your username.
  • Provide your password when prompted to log in.

Method 2: Using Windows Terminal or PowerShell

ssh username@IP_ADDRESS

Access from macOS

ssh username@IP_ADDRESS
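For moving data in and retrieving results (as outlined in the overview), `scp` and `rsync` work from Linux, macOS, and MobaXterm's local terminal; all file names and paths below are placeholders:

```shell
# Copy a local input file to your home directory on the cluster
scp input.dat username@IP_ADDRESS:~/

# Retrieve a result file back to the local machine
scp username@IP_ADDRESS:~/results/output.dat .

# rsync is preferable for large or repeated transfers (incremental, resumable)
rsync -avP username@IP_ADDRESS:~/results/ ./results/
```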

Submitting Jobs Using Slurm

Create a batch script (for example, job.sh) along the following lines, replacing the placeholders with the module and command for your application:

#!/bin/bash
#SBATCH --nodes=1                 # number of nodes
#SBATCH --ntasks-per-node=64      # tasks (cores) per node
#SBATCH --time=00:01:00           # walltime limit (HH:MM:SS)
#SBATCH --partition=short         # target partition

module load <module_name>
<application_command>

Useful Slurm Commands

  • sbatch <scriptname> – submit a batch job script
  • squeue – view the job queue
  • scancel <jobid> – cancel a submitted job
  • sinfo -l – show detailed partition and node state
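Putting these together, a typical submit-and-monitor session might look like the following (`job.sh` and the job ID `12345` are placeholders):

```shell
# Submit the batch script; Slurm replies with "Submitted batch job <jobid>"
sbatch job.sh

# Show only your own jobs in the queue
squeue -u $USER

# Cancel a job if needed (replace 12345 with the actual job ID)
scancel 12345
```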
Software List

Compilers

  • Intel oneAPI Base Toolkit 2025.3
  • Intel oneAPI HPC Toolkit 2025.3
  • OpenMPI 5.0.8
  • OpenMPI 4.1.8
  • GCC 12.2.0
  • GCC 7.3.0

Licensed Software

  • Gaussian 16 (G16) — Licensed for all IIT users

Open-Source Software

  • LAMMPS 2025
  • GROMACS 2025.3
  • Quantum ESPRESSO 7.4.1, 7.5
  • Phonopy 2.47.1
  • NAMD 3.0.1

SOPs - IIT Ropar HPC Access and Job Submission

HPC Access Request

  1. Click here to download the form.
  2. Upload the signed form at the link below:
     https://forms.gle/KDb3kw6LugBQmQNo9
  3. Send a hard copy of the form to the IT Office.

Contact Details