Technical Information

This topic lists relevant technical information.


GPU Nodes

Currently the cluster contains the following GPU nodes:

  • 32 GPU nodes with:
    • 8 x NVIDIA GTX 1080 Ti
    • 2 x 10-core Xeon E5-2630 v4 @ 2.2 GHz
    • 256 GB RAM

CPU Nodes

For Jupyter notebooks that only require CPUs, there are three types of nodes:

  • 1 CPU node with:
    • 2 x 22-core Xeon E5-2699 v4 @ 2.2 GHz
    • 512 GB RAM
  • 1 CPU node with:
    • 2 x 18-core Xeon E5-2699 v3 @ 2.3 GHz
    • 384 GB RAM
  • 4 CPU nodes with:
    • 2 x 12-core Xeon E5-2697 v2 @ 2.7 GHz
    • 256 GB RAM

All nodes have 350 GB of shared scratch space for running jobs.

Only one GPU node and one CPU node are always on. The remaining nodes are powered on as needed for running jobs and shut down again automatically once they have been idle for more than one hour.
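
This on-demand power policy can be sketched as a simple predicate (a hypothetical illustration; the function and parameter names are assumptions, not the cluster's actual tooling):

```python
from datetime import datetime, timedelta

IDLE_SHUTDOWN = timedelta(hours=1)  # documented threshold: >1 hour idle

def should_shut_down(last_job_finished: datetime, now: datetime,
                     always_on: bool) -> bool:
    """Return True if a node should be powered off under the policy above.

    The one always-on GPU node and the one always-on CPU node are never
    shut down; every other node is powered off once it has been idle for
    more than one hour.
    """
    if always_on:
        return False
    return now - last_job_finished > IDLE_SHUTDOWN
```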

The GPU nodes are distributed over four racks and are powered on so that the load stays balanced across the racks. The GPUs run at a 250 W power limit as long as no more than four nodes per rack are running. Beyond that, the power limit for nodes in a rack is reduced gradually, down to 125 W per GPU when all eight nodes are running. This is required to stay within the power budget of our server racks.
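
Only the two endpoints of this ramp are documented (250 W at up to four nodes, 125 W at all eight); a linear interpolation in between is one plausible schedule, assumed here purely for illustration:

```python
def gpu_power_limit_watts(running_nodes_in_rack: int) -> float:
    """Per-GPU power limit for a rack, given how many of its 8 GPU
    nodes are running.

    Only the endpoints are documented (250 W at <= 4 nodes, 125 W at
    all 8); the linear ramp in between is an assumption.
    """
    if not 1 <= running_nodes_in_rack <= 8:
        raise ValueError("a rack holds between 1 and 8 GPU nodes")
    if running_nodes_in_rack <= 4:
        return 250.0
    step = (250.0 - 125.0) / (8 - 4)  # 31.25 W less per extra node
    return 250.0 - step * (running_nodes_in_rack - 4)
```

Note that the rack's total GPU draw is the same at both documented extremes: 4 nodes x 8 GPUs x 250 W = 8 nodes x 8 GPUs x 125 W = 8 kW, which is consistent with a fixed per-rack power budget.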


Storage

A dedicated Ceph cluster provides all the storage:

  • 5 servers with:
    • 2 x 6-core Xeon 3204 @ 1.9 GHz
    • 16 GB RAM

The following storage devices are used for providing storage:

  • 3 x 12.8 TB Samsung PM1735
    • 8000 MB/s read
    • 3800 MB/s write
    • 1,500,000 IOPS read
    • 250,000 IOPS write

Triple redundancy is used for improved resilience and read speed, resulting in 12.8 TB of effective storage capacity. Performance is currently limited by the network: the nodes can read concurrently at 3 GB/s and write at 1 GB/s.
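
The effective capacity follows directly from the drive count and the replication factor, as this small worked example shows:

```python
drive_tb = 12.8   # one Samsung PM1735
drives = 3        # drives in the pool
replicas = 3      # triple redundancy: every object is stored three times

raw_tb = drives * drive_tb          # 38.4 TB raw
effective_tb = raw_tb / replicas    # 12.8 TB usable
print(f"{raw_tb:.1f} TB raw -> {effective_tb:.1f} TB effective")
```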

Login Nodes

Two login nodes are available to students to prepare and start jobs:

  • 2 login nodes with:
    • 2 x 12-core Xeon E5-2697 v2 @ 2.7 GHz
    • 256 GB RAM

On both login nodes, users are restricted to 2 cores and 16 GB of RAM.

© 2024 Eidgenössische Technische Hochschule Zürich