What is the Blueshark Cluster?

Tech Support
2020-04-10 10:00

The Blueshark Cluster comprises IBM iDataPlex systems with

  • 50 compute nodes
  • 13 big memory nodes
  • 11 GPU nodes
  • 1 head node

totaling 1,720 cores and 4,397 GB of RAM. It was funded by the National Science Foundation (NSF) through a Major Research Instrumentation (MRI) grant.

The configuration of each of the 50 compute nodes is:

  • IBM System x iDataPlex dx360 M3
  • 2 x Hexa-Core Intel Xeon X5650 @ 2.67GHz CPUs
  • 24GB of RAM
  • 250GB SATA HDD
  • 1 Gb Ethernet Interconnect

The head node configuration is:

  • IBM System x iDataPlex dx360 M3
  • 2 x Quad-Core Intel Xeon X5550 @ 2.67GHz CPUs
  • 24GB of RAM
  • LSI MegaRAID SAS Controller
  • Storage Expansion Unit
  • 8 x 1 TB 7200RPM SAS Hot-Swap HDD
  • 10 GbE link to compute nodes via Chelsio T310 10GbE Adapter
  • Redundant Hot-swap Power Supplies

There are 11 GPU compute nodes with this configuration:

  • Dell PowerEdge C4130
  • 2 x 10-Core Intel Xeon E5-2650 @ 2.30GHz CPUs
  • 131GB of RAM
  • 1TB SATA HDD
  • 4 x Nvidia Tesla K40m
  • 1 Gb Ethernet Interconnect
  • Mellanox InfiniBand Interconnect
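
Each of these GPU nodes exposes four Tesla K40m cards to jobs. As a minimal, hypothetical sketch (not cluster-provided code), a program could enumerate them through the CUDA runtime API; CUDA appears in the installed software list below. It can be compiled with nvcc, or with gcc linked against the CUDA runtime library.

    /* gpuquery.c - list the CUDA devices visible on a GPU node */
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            fprintf(stderr, "No CUDA-capable devices found\n");
            return 1;
        }
        printf("%d CUDA device(s) visible\n", count);  /* expect 4 per GPU node */

        for (int i = 0; i < count; i++) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("Device %d: %s, %.1f GB of device memory\n",
                   i, prop.name, prop.totalGlobalMem / 1073741824.0);
        }
        return 0;
    }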

The configuration of each of the 13 big memory compute nodes is:

  • SuperMicro 1u Servers
  • 2 x 10-Core Intel Xeon E5-2650 @ 2.30GHz CPUs
  • 131GB of RAM
  • 120GB SATA HDD
  • 1 Gb Ethernet Interconnect

The storage node configuration is:

  • Dell PowerEdge R630
  • Dell PowerVault MD3060
  • 60 x 4TB HDD
  • 240TB raw capacity

Other hardware resources:

  • 2 x BNT 48-port 1GbE switches with dual 10GbE uplinks

The HPC software environment runs on CentOS 7 Linux.

Software Installed:

  • ATLAS - Automatically Tuned Linear Algebra Software
  • BLAS - Basic Linear Algebra Subprograms
  • Boost C++
  • CUDA - Nvidia CUDA Programming
  • DMTCP - Distributed MultiThreaded CheckPointing
  • Environmental Modeling System - http://strc.comet.ucar.edu/
  • Fluent
  • Gaussian - http://www.gaussian.com
  • GNU Compilers - C/C++/Fortran
  • Java
  • LAPACK - Linear Algebra Package
  • MPI - Message Passing Interface - MPICH and OpenMPI
  • NetCDF - Network Common Data Form
  • Octave - GNU Octave
  • Paraview - Data Analysis and Visualization
  • PETSc - Portable, Extensible Toolkit for Scientific Computation
  • Portland Group Compiler - C/C++/Fortran/MPICH
  • Python
  • SAGE Math - http://www.sagemath.org
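
As a usage illustration for the MPI installations listed above (MPICH or OpenMPI), the following is a minimal, hypothetical "hello world" in C. It is a sketch only; the exact compiler wrappers, paths, and launch commands depend on how the environment is configured on Blueshark.

    /* mpi_hello.c - print the rank, size, and host name of each process */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char host[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total processes in the job */
        MPI_Get_processor_name(host, &len);    /* node this rank runs on */

        printf("Rank %d of %d on %s\n", rank, size, host);

        MPI_Finalize();
        return 0;
    }

Built with mpicc and launched with mpirun (for example, mpirun -np 12 ./mpi_hello to match the 12 cores of a compute node), each rank reports the node it landed on.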