Computational facilities

Beowulf clusters looking for new frontiers.

VIDI

This Beowulf cluster contains 38 compute nodes, each with two octo-core 2.4 GHz EM64T Xeon E5-2630 v3 processors supporting AVX, all with Hyper-Threading and Turbo Boost enabled. In total, 608 cores are available for computing. The nodes are connected by a SuperMicro gigabit switch. Each node has 64 GB of DDR3 1866 MHz RAM and four 3 TB SATA hard disks, configured as a RAID-1 (OS) and a RAID-5 (scratch). The head node provides a total storage capacity of ~11 TB using software RAID-10 over eight 3 TB SATA disks; the system disks run from a software RAID-1 configuration over the same eight disks.
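As a sanity check, the headline figures above follow from simple arithmetic. The sketch below uses only the node, socket, and disk counts quoted in the description; the ~11 TB figure is assumed to come from decimal 3 TB disks under the usual RAID-10 halving:

```python
# Core count and RAID capacities for the VIDI cluster (numbers from the text above).
nodes = 38
sockets_per_node = 2
cores_per_socket = 8

total_cores = nodes * sockets_per_node * cores_per_socket
print(total_cores)  # 608 compute cores

# Per-node scratch: RAID-5 over four 3 TB disks keeps (n - 1) disks of capacity.
scratch_tb = (4 - 1) * 3
print(scratch_tb)  # 9 TB usable scratch per node

# Head node: RAID-10 over eight 3 TB disks keeps half the raw capacity.
raid10_tb = 8 * 3 / 2              # 12 TB (decimal)
raid10_tib = raid10_tb * 1e12 / 2**40
print(round(raid10_tib, 1))        # ~10.9 TiB, i.e. the ~11 TB quoted above
```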

ERC2

This Beowulf cluster contains 121 nodes, each with two octo-core 2.4 GHz EM64T Xeon E5-2630 v3 processors supporting AVX, all with Hyper-Threading and Turbo Boost enabled. In total, 1920 cores are available for computing. The nodes are housed in three 19" racks and connected by three 10G SuperMicro switches using Cat 6+ cabling. Each switch defines a subnet for its rack, and IP traffic is routed between switches over two 40 Gb copper links per switch pair, resulting in a triangle topology with an aggregated bandwidth of 80 Gb/s. Each node has 64 GB of DDR3 1866 MHz RAM and four 2 TB SATA hard disks configured as a RAID-5 for a local XFS scratch file system.
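The bandwidth and scratch figures above can be checked with the same kind of arithmetic (a sketch; the 160 Gb/s per-rack total is our inference, assuming both neighbour links of the triangle can carry traffic concurrently):

```python
# Inter-rack bandwidth in the ERC cluster's triangle topology (numbers from the text).
links_per_pair = 2
link_gbps = 40

pair_bandwidth = links_per_pair * link_gbps
print(pair_bandwidth)  # 80 Gb/s aggregated between any two racks

# Each rack has two neighbours in a triangle, so the total uplink per switch
# (assumption: both neighbour links usable at once) is:
per_switch = 2 * pair_bandwidth
print(per_switch)      # 160 Gb/s leaving each rack

# Per-node scratch: RAID-5 over four 2 TB disks keeps (n - 1) disks of capacity.
print((4 - 1) * 2)     # 6 TB XFS scratch per node
```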

Lewis: ERC1

This Beowulf cluster contains 121 nodes, each with two octo-core 2.6 GHz EM64T Xeon E5-2650 v2 processors supporting AVX, all with Hyper-Threading and Turbo Boost enabled. In total, 1920 cores are available for computing. The nodes are housed in three 19" racks and connected by three 10G SuperMicro switches using Cat 6+ cabling. Each switch defines a subnet for its rack, and IP traffic is routed between switches over two 40 Gb copper links per switch pair, resulting in a triangle topology with an aggregated bandwidth of 80 Gb/s. Each node has 64 GB of DDR3 1866 MHz RAM and four 2 TB SATA hard disks (WD Red) configured as a RAID-5 for a local XFS scratch file system.

Octo

This Beowulf cluster contains 39 nodes, each with two octo-core 2.0 GHz EM64T Xeon E5-2650 processors supporting AVX, all with Hyper-Threading and Turbo Boost enabled. In total, 624 cores are available for computing. The nodes are connected by an HP ProCurve 2848 gigabit switch. Each node has 64 GB of DDR3 1600 MHz RAM and two 1 TB SATA hard disks configured as a stripe (RAID-0).

Hexa

This Beowulf cluster contains 36 nodes, each with two hexa-core 2.67 GHz EM64T Xeon 5650 processors, all with Hyper-Threading enabled. In total, 432 cores are available for computing. The nodes are connected by an HP ProCurve 2848 gigabit switch.

Server room

Currently, all our clusters and servers are located in the server room of the Gorlaeus building.

SARA Huygens

We frequently make use of the national supercomputer Cartesius and the national compute cluster Lisa. These machines are located at the SURFsara institute in Amsterdam.

BOINC desktop grid

Our group also has access to a BOINC-based desktop grid, which at the moment consists of more than 68,000 computers. On this grid, students and researchers can run classical trajectory calculations through a personalised queuing system developed here in Leiden. More information on this grid can be found at http://boinc.gorlaeus.net; the current state of the grid can be viewed at http://boinc.gorlaeus.net/totals.php.

Leiden Grid Infrastructure

We have now connected all these computational resources into a single computer grid. For this we developed a grid middleware with the support of NWO-NCF. The report of that project and the LGI software can be found here.
