Infrastructure

All the infrastructure runs on AlmaLinux 9, a community-owned and governed, forever-free enterprise Linux distribution focused on long-term stability, providing a robust, production-grade platform that is binary compatible with Red Hat Enterprise Linux (RHEL). More information is available at https://almalinux.org.

The hardware nodes of the infrastructure are divided into three classes:

  • User Interfaces (UI): the entry points for users, which also provide a working environment to compile and test their programs. These machines have GPUs and access to the storage services. They are intended for testing, so computing time and power on them are limited. Once jobs are ready, users submit them to the Worker Nodes in batch mode.

  • Worker Nodes (WN): where the batch user jobs are executed. They contain high-end CPUs, larger memory configurations, and nodes with up to 8 high-end GPGPUs, and they allow more computing time and power.

  • Storage Nodes (SN): disk servers that store user and project data, accessible from both User Interfaces and Worker Nodes.
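As a sketch of the workflow above, assuming an HTCondor-style batch scheduler (the actual scheduler, submission commands, and required attributes are site-specific and are covered in the job submission section of this documentation), a minimal GPU job description could look like:

```
# job.sub -- hypothetical HTCondor-style submit description.
# Attribute names and required values depend on the site configuration.
universe       = vanilla
executable     = train.sh
request_cpus   = 4
request_memory = 16 GB
request_gpus   = 1
output         = job.out
error          = job.err
log            = job.log
queue
```

Submitted from a User Interface, such a job would then run on a Worker Node rather than consuming the limited resources of the UI itself.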

[Figure: overview of the infrastructure (_images/infrastructure.png)]

System Overview

Artemisa is equipped with:

User interfaces:

  • 1 x User Interface (mlui01.ific.uv.es) with:
    2 x Intel Xeon Gold 6130 CPU @ 2.10GHz 16c
    192 GBytes ECC DDR4 at 2666 MHz
    2 x GPU Tesla Pascal P100 PCIe
  • 1 x User Interface (mlui02.ific.uv.es) with:
    2 x Intel Xeon Gold 6130 CPU @ 2.10GHz 16c
    192 GBytes ECC DDR4 at 2666 MHz
    2 x GPU Tesla Volta V100 PCIe

Worker nodes:

  • 11 x worker node with:
    2 x AMD Rome 7532 32c 2.4GHz
    512 GBytes ECC DDR4 at 3200 MHz
    1 x GPU Nvidia Tesla Ampere A100 40GB PCIe
  • 1 x worker node with:
    2 x AMD Rome 7642 48c 2.3GHz
    512 GBytes ECC DDR4 at 3200 MHz
    8 x GPU Nvidia Tesla Ampere A100 40GB SXM4
  • 2 x worker node with:
    2 x Intel Xeon Platinum 8160 CPU @ 2.10GHz 24c
    384 GBytes ECC DDR4 at 2666 MHz
    1 x GPU Tesla Volta V100 PCIe
  • 20 x worker node with:
    2 x Intel Xeon Gold 6248 CPU @ 2.50GHz 20c
    384 GBytes ECC DDR4 at 2933 MHz
    1 x GPU Tesla Volta V100 PCIe
  • 1 x worker node with:
    2 x Intel Xeon Platinum 8180 CPU @ 2.50GHz 28c
    768 GBytes ECC DDR4 at 2666 MHz
    4 x GPU Tesla Volta V100 SXM2
  • 2 x worker node with:
    2 x AMD EPYC 9454 @ 2.75GHz 48c
    384 GBytes ECC DDR5 at 4800 MT/s
    2 x Nvidia H100 NVL 94 GBytes (NVLink)
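As a quick consistency check, the GPU inventory of the worker nodes above can be tallied programmatically:

```python
# Worker-node GPU inventory from the list above: model -> (node count, GPUs per node).
worker_nodes = {
    "A100 40GB PCIe": (11, 1),
    "A100 40GB SXM4": (1, 8),
    "V100 PCIe": (2 + 20, 1),  # two Platinum 8160 nodes + twenty Gold 6248 nodes
    "V100 SXM2": (1, 4),
    "H100 NVL": (2, 2),
}

totals = {model: nodes * per_node for model, (nodes, per_node) in worker_nodes.items()}
print(totals)                # GPUs per model
print(sum(totals.values()))  # 49 GPUs in total across the worker nodes
```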

Disk servers:

  • 3 x disk server with:
    2 x Intel Xeon Gold 6248R 24c 3.0GHz
    192 GBytes ECC DDR4 at 2933 MHz
    24 x 3.8TB SSD SATA3
  • 5 x disk server with:
    2 x Intel Xeon Gold 6130 CPU @ 2.10GHz 16c
    192 GBytes ECC DDR4 at 2666 MHz
    6 x 8TB SAS 12 Gb/s SEAGATE ST8000NM0065
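A back-of-the-envelope total of the raw disk capacity above (before any RAID or filesystem redundancy, so the usable space is lower):

```python
# Raw capacity of the disk servers listed above (TB as quoted per drive).
ssd_tb = 3 * 24 * 3.8  # three SSD servers, 24 x 3.8 TB each
hdd_tb = 5 * 6 * 8     # five HDD servers, 6 x 8 TB each
print(f"SSD: {ssd_tb:.1f} TB  HDD: {hdd_tb} TB  total: {ssd_tb + hdd_tb:.1f} TB")
```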

Networking:

  • All nodes have a 10 Gbps Ethernet connection to IFIC's data center network.