Infrastructure

All the infrastructure runs on CentOS 7, a free Linux distribution based on Red Hat Enterprise Linux (RHEL) 7.

The hardware nodes of the infrastructure are divided into three classes:

  • User Interfaces (UI): the entry point for users, providing a working environment in which to compile and test their programs. These machines have a GPGPU and access to the storage services. When their jobs are ready, users submit production batch jobs through the Job Management System to the Worker Nodes.

  • Worker Nodes (WN): where production user jobs are executed. They contain high-end CPUs, large memory configurations, and up to 8 high-end GPGPUs per node.

  • Storage Nodes (SN): disk servers that store user and project data, accessible from both User Interfaces and Worker Nodes.

System Overview

Artemisa is composed of:

User interfaces:

  • 2 x User Interface (mlui01.ific.uv.es, mlui02.ific.uv.es) with:
    2 x Intel Xeon Gold 6130 CPU @ 2.10GHz 16c
    192 GBytes ECC DDR4 @ 2666 MHz
    1 x GPU Tesla Pascal P100 PCIe

Worker nodes:

  • 11 x worker node with:
    2 x AMD Rome 7532 32c 2.4GHz
    512 GBytes ECC DDR4 @ 3200 MHz
    1 x GPU Nvidia Tesla Ampere A100 40GB PCIe
  • 1 x worker node with:
    2 x AMD Rome 7642 48c 2.3GHz
    512 GBytes ECC DDR4 @ 3200 MHz
    8 x GPU Nvidia Tesla Ampere A100 40GB SXM4
  • 2 x worker node with:
    2 x Intel Xeon Platinum 8160 CPU @ 2.10GHz 24c
    384 GBytes ECC DDR4 @ 2666 MHz
    1 x GPU Tesla Volta V100 PCIe
  • 20 x worker node with:
    2 x Intel Xeon Gold 6248 CPU @ 2.50GHz 20c
    384 GBytes ECC DDR4 @ 2933 MHz
    1 x GPU Tesla Volta V100 PCIe
  • 1 x worker node with:
    2 x Intel Xeon Platinum 8180 CPU @ 2.50GHz 28c
    768 GBytes ECC DDR4 @ 2666 MHz
    4 x GPU Tesla Volta V100 SXM2
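Summing the five worker-node classes above gives the aggregate batch capacity of the cluster. The following is a quick sketch of that arithmetic (the tuples simply restate the node counts, per-node cores, memory, and GPU counts listed above):

```python
# Worker-node classes from the list above:
# (node count, cores per node, GB RAM per node, GPUs per node)
worker_classes = [
    (11, 2 * 32, 512, 1),  # AMD Rome 7532, 1x A100 PCIe
    (1,  2 * 48, 512, 8),  # AMD Rome 7642, 8x A100 SXM4
    (2,  2 * 24, 384, 1),  # Xeon Platinum 8160, 1x V100 PCIe
    (20, 2 * 20, 384, 1),  # Xeon Gold 6248, 1x V100 PCIe
    (1,  2 * 28, 768, 4),  # Xeon Platinum 8180, 4x V100 SXM2
]

total_cores = sum(n * cores for n, cores, _, _ in worker_classes)
total_mem   = sum(n * mem   for n, _, mem, _  in worker_classes)
total_gpus  = sum(n * gpus  for n, _, _, gpus in worker_classes)

print(total_cores, total_mem, total_gpus)  # 1752 cores, 15360 GB, 45 GPUs
```

That is, the worker pool totals 1752 CPU cores, 15 TB of RAM, and 45 GPGPUs.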

Disk servers:

  • 3 x disk server with:
    2 x Intel Xeon Gold 6248R 24c 3.0GHz
    192 GBytes ECC DDR4 @ 2933 MHz
    24 x 3.8TB SSD SATA3
  • 5 x disk server with:
    2 x Intel Xeon Gold 6130 CPU @ 2.10GHz 16c
    192 GBytes ECC DDR4 @ 2666 MHz
    6 x 8TB SAS 12 Gb/s SEAGATE ST8000NM0065
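The raw disk capacity behind the storage services follows directly from the two server classes above; a minimal sketch of the sum (raw capacity, before any filesystem or redundancy overhead):

```python
# Raw capacity of the disk servers listed above, in TB.
ssd_tb = 3 * 24 * 3.8  # 3 servers x 24 SSDs x 3.8 TB
sas_tb = 5 * 6 * 8     # 5 servers x 6 disks x 8 TB
total_tb = ssd_tb + sas_tb

# ~273.6 TB of SSD plus 240 TB of SAS, ~513.6 TB raw in total
print(ssd_tb, sas_tb, total_tb)
```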

Networking:

  • All nodes have a 10 Gbps Ethernet connection to IFIC's data center network.