Skoltech HPC cluster Arkuda by Lenovo

  • General description
    Vendor: Lenovo
    Platform: LeSI/LeROM Best recipe 17D
  • Frontends
    Server: 2x x3550 M5
  • The compute part
    The regular compute blades: 54 (nx360 M5)
    The monster SMP nodes: 2 (x3850 X6) +1 (x3950 X6)
    Processor type A (nx360 M5): Intel Xeon Processor E5-2667 v4 (Broadwell) 8C 3.2GHz
    Processor type B (nx360 M5): Intel Xeon Processor E5-2698 v4 (Broadwell) 20C 2.2GHz
    Processor type C (x3850 X6): Intel Xeon Processor E7-4850 v4 (Broadwell) 16C 2.1GHz
    Processor type D (x3950 X6): Intel Xeon Processor E7-8890 v4 (Broadwell) 24C 2.2GHz
    GPGPU type A: NVIDIA Tesla K80
    GPGPU type B: NVIDIA Tesla M40
    Computing CPU processors quantity: 36 (A) + 72 (B) + 8 (C) + 8 (D)
    Computing GPU processors quantity: 6 (A) + 6 (B)
    Computing CPU cores quantity: 288 (A) + 1440 (B) + 128 (C) + 192 (D)
    CPU theoretical peak performance (Rpeak): 14.74 TFlops (A) + 50.69 TFlops (B) + 4.30 TFlops (C) + 6.76 TFlops (D)
    GPU theoretical peak performance (Rpeak): 17.46 TFlops (A, FP64) + 42 TFlops (B, FP32)
    Total CPU RAM: 4608 GB (16 GB per core, type A) + 18432 GB (12.8 GB per core, type B) + 6144 GB (48 GB per core, type C) + 6144 GB (32 GB per core, type D)
    Total GPU RAM: 144 GB (type A) + 72 GB (type B)
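
    The CPU Rpeak figures above follow from cores × clock × FLOPs per cycle. A minimal sketch in Python, assuming the usual Broadwell figure of 16 double-precision FLOPs per cycle per core (two AVX2 FMA units, four doubles each):

    ```python
    # Recompute the quoted CPU Rpeak values from the node specs above.
    # Assumption: 16 DP FLOPs/cycle/core for Broadwell (2x AVX2 FMA, 4 doubles each).
    FLOPS_PER_CYCLE = 16

    # (total cores, base clock in GHz) per processor type, from the list above
    cpu_types = {
        "A": (288, 3.2),   # E5-2667 v4
        "B": (1440, 2.2),  # E5-2698 v4
        "C": (128, 2.1),   # E7-4850 v4
        "D": (192, 2.2),   # E7-8890 v4
    }

    for t, (cores, ghz) in cpu_types.items():
        rpeak_tflops = cores * ghz * FLOPS_PER_CYCLE / 1000  # GFlops -> TFlops
        print(f"type {t}: {rpeak_tflops:.2f} TFlops")
    ```

    This reproduces the quoted 50.69 (B), 4.30 (C) and 6.76 (D) TFlops; for type A it yields 14.75, matching the quoted 14.74 up to rounding.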
  • VDI
    Servers: 3x x3650 M5, each equipped with 2x NVIDIA M60 or 2x NVIDIA K6000
  • The data storage system
    Parallel file system of data storage: GSS24/GPFS
    Available disk space (operational): 0.9 PB (GSS24) as fast scratch + 84 TB (DRAID6) for user data (aka /home)
    Local scratch SSD disk: from 347 GB to 16000 GB
    Backup: 84 TB, GPFS/GSS asynchronous mirroring
  • Interconnect
    Data networks: GPFS/GSS24 – InfiniBand EDR (100 Gb/s)
    Service and management networks: 1GbE
  • OS
    Red Hat Enterprise Linux 7.3 (Maipo)
  • Workload Manager
    Moab HPC Suite v9.1.1
    The list of available queues
    The list of software

If you have technical questions regarding the cluster, please send your question(s)/request(s) to: