Student Cluster
This topic gives you a quick overview of the steps involved in running a job on the D-INFK student cluster.
Logging In
If your course or project uses Jupyter then go to https://student-jupyter.inf.ethz.ch and log in there. More information can be found here.
For running jobs directly you can log in to student-cluster.inf.ethz.ch or to the actual login nodes student-cluster1.inf.ethz.ch and student-cluster2.inf.ethz.ch via secure shell (ssh). Both hosts have the same host keys with the following key fingerprints:
SHA256:uHitVtIWntg4nYTq7Rs83xhKl1x7XLPLagYJoRSppWs (ECDSA)
SHA256:HUDN67JaBd19Z67bobx4e1VDFG08KxhEzpFjINEdnZQ (ED25519)
SHA256:VSq8zIQ/Sg6UyibS0pDuEwyNDthrpihxpGfkf0JrxPY (RSA)
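As an illustration, a first login from a Unix-like machine could look like the following sketch; "username" is a placeholder for your own account name.

```bash
# Log in via the general address (replace "username" with your account name).
ssh username@student-cluster.inf.ethz.ch

# On the first connection ssh shows the host key fingerprint; compare it with
# the SHA256 fingerprints listed above before accepting. Later you can list the
# fingerprints your client has stored (entries may be hashed on some systems):
ssh-keygen -lf ~/.ssh/known_hosts
```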

When you log in you will be informed about the remaining time per course or project and how much free space you have in your home directory and other places. Keep an eye on these numbers so that you do not run out of time or space before a deadline.
Running Jobs
Please read on here.
GPUs
Currently the cluster contains the following NVIDIA GPUs:
Jobs that do not request a specific GPU are scheduled on nodes according to the priority above.
GB10 nodes must be requested explicitly by a job.
Space
You have several options for storing data. To get a list of all network file system paths where you can write, run the following command on any node:
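If that cluster-specific command is not at hand, standard tools give a rough overview of mounted network file systems; the sketch below is not the cluster's own helper and assumes the shares are NFS mounts.

```bash
# List mounted NFS file systems (the NFS assumption may not match the actual setup).
findmnt -t nfs,nfs4

# Show free space for your home directory and the work area.
df -h ~ /work
```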
Work Space
Some courses and projects provide additional work space for you or your team under /work/courses, /work/projects or /work/users.
Home Directory
You have 20 GB of space in your home directory, independent of how many courses or projects you have.
Scratch Space
Your individual scratch space under /work/scratch/{your user name} has a hard limit of 100 GB and 100,000 files. Data in there has a retention period that depends on the amount of data:
The cleaning job that deletes data according to age starts at 23:00 every day.
You are not allowed to keep data alive by automatically updating the modification time of files.
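Because both limits are hard, it is worth checking your usage occasionally; a minimal sketch with standard tools, using the path pattern given above:

```bash
# Total size of your scratch space.
du -sh /work/scratch/$USER

# Approximate number of files (the limit is 100,000).
find /work/scratch/$USER -type f | wc -l
```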
Jobs
Resources available for jobs have the following default limits:
| | GPU (jobs/Jupyter) | CPU (Jupyter) |
| --- | --- | --- |
| Number of running jobs | 1 | 1 |
| GPUs per job | 1 | - |
| CPU cores per job | 2 | 1, with time sharing |
| RAM per job | 24 GB | 4 to 8 GB, course or project specific |
| Space in /tmp | 40 GB | 4 GB |
| Queued jobs | 2 | - |
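As an illustration of a job that stays within these default limits, here is a minimal batch script sketch. It assumes a Slurm-style scheduler, which this page does not confirm; the actual submission syntax is described in the job documentation linked under "Running Jobs".

```bash
#!/bin/bash
# Sketch of a GPU job within the default limits above
# (1 GPU, 2 CPU cores, 24 GB RAM). Slurm directives are an assumption.
#SBATCH --gpus=1
#SBATCH --cpus-per-task=2
#SBATCH --mem=24G
#SBATCH --time=04:00:00

python train.py   # replace with your own workload
```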
When you select a specific GPU for a job, the limits of the node containing that GPU apply:
| | RTX 5060 Ti | RTX 2080 Ti | GTX 1080 Ti | GB10 |
| --- | --- | --- | --- | --- |
| Number of running jobs | 1 | 1 | 1 | 1 |
| GPUs per job | 1 | 1 | 1-4 | 1 |
| CPU cores per job | 3 | 4 | 2 | 20 |
| RAM per job | 24 GB | 36 GB | 24 GB | 116 GB |
| Space in /tmp | 40 GB | 40 GB | 40 GB | 850 GB |
| Queued jobs | 2 | 2 | 2 | 2 |
| Job runtime cap | 7 days | 7 days | 7 days | 24 hours |
Information about the specific nodes can be found here.
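If you want to target a particular GPU model, for example a GB10 node (which, as noted above, is never assigned automatically), the job has to name it explicitly. The following one-liner is only a sketch: both the Slurm-style options and the "gb10" GPU type name are assumptions; the real option names are in the job documentation linked above.

```bash
# Hypothetical request for one GB10 GPU within that node's limits
# (20 cores, 116 GB RAM, 24 hour runtime cap).
sbatch --gpus=gb10:1 --cpus-per-task=20 --mem=116G --time=24:00:00 job.sh
```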
Job Runtime
The number of hours you have, as well as the maximum runtime per job, depends on your courses and projects. You can get a report of all courses with their course tags, time limits and compute resources by running the following command on any node:
Similar information is displayed on the server launch page of JupyterHub.

For teams, this report also shows whether another team member is currently running a job, in which case you have to wait until that job has finished.
Login Nodes
On the login nodes you also have the following restrictions:
Expiration of Access
For courses, access to the cluster is disabled on the morning of the last Monday of the semester holidays. For BSc and MSc projects, access ends on the date requested by the supervisor.
All data in your home directory will also be deleted. Please copy any data that you still need to another location before this happens.
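One way to do this from your own machine is to pull the data over ssh with rsync; the destination directory below is just an example.

```bash
# Copy your entire home directory from the cluster to a local backup folder
# (replace "username" with your account name; ~/cluster-backup is an example path).
rsync -avz username@student-cluster.inf.ethz.ch:~/ ~/cluster-backup/
```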