JURECA: Successor to JUROPA

Since 2009, the JUROPA cluster has been an invaluable research tool for many projects at JSC, and many outstanding scientific results have been obtained on the system. Now, after more than five years of successful operation, the end of life of JUROPA is approaching.

In spring 2015, JUROPA will be succeeded by a new system, JURECA (Jülich Research on Exascale Cluster Architectures). This supercomputer will be provided by T-Platforms, which won a competitive procurement process. JSC and the companies ParTec and T-Platforms will engage in a cooperative project to further improve JURECA operation after installation and to address important research questions for next-generation cluster architectures.

Once fully installed, JURECA will consist of about 1,700 compute nodes and will have a peak performance of at least 1.6 petaflops. The majority of the compute nodes will feature two 12-core Intel Xeon E5-2680 v3 (Haswell) CPUs and 128 GB DDR4 RAM. For applications that require even more memory per node, the system will additionally feature about 100 nodes with double memory and about 50 nodes with 512 GB main memory each. Several visualization nodes with large memory configurations and latest-generation NVIDIA GPUs will complement the JURECA configuration and enhance the pre- and post-processing capabilities for the users.

In comparison to the JUROPA architecture, each JURECA compute node features three times the number of cores – each with a slightly lower clock frequency but an improved microarchitecture – resulting in a tenfold increase in peak floating-point performance. While JURECA – like its predecessor – has been designed by JSC as a general-purpose supercomputer to serve a wide variety of user needs, users will have to optimize their codes (e.g. by improving vectorization) to take full advantage of the performance increase offered by the new system. At the same time, architectural changes such as the increased core count per node open up new possibilities for code scalability using mixed-mode parallelization techniques.

The JURECA compute nodes will be interconnected with a cutting-edge Mellanox 100 Gbps EDR InfiniBand network organized in a full fat-tree topology. JURECA will connect to the central storage cluster JUST and mount the GPFS home and work filesystems from there. This consolidation of the storage filesystems across the different compute platforms at JSC will allow users to work more easily with their data on several systems and will reduce the necessity for data movement. The bandwidth of the work filesystem is projected to reach about 800 Gbps.

In addition to the major hardware improvements, JURECA will be launched with a state-of-the-art software stack, including an up-to-date Enterprise Linux distribution and ParaStation MPI with MPI-3 support. It will also be the first large-scale system at JSC to use the open-source Slurm batch system in combination with the ParaStation resource management system, which has a proven track record on JUROPA and JUDGE.
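For readers unfamiliar with Slurm, a minimal batch script might look as follows. This is an illustrative sketch only – the partition layout, defaults, and recommended options for JURECA will be defined in the system documentation:

```shell
#!/bin/bash
#SBATCH --job-name=example        # name shown in the queue
#SBATCH --nodes=4                 # number of compute nodes requested
#SBATCH --ntasks-per-node=24      # one MPI rank per Haswell core
#SBATCH --time=00:30:00           # wall-clock limit
#SBATCH --output=job-%j.out       # stdout file (%j expands to the job id)

srun ./my_application             # srun launches the parallel processes
```

The script would be submitted with "sbatch", and "squeue" shows its state in the queue; "my_application" stands in for the user's own executable.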

In order to minimize service interruption for users, JURECA will be installed in two phases. The first phase – to be available in the second quarter of 2015 – will reach a performance level equivalent to the JUROPA system with a significantly reduced floor-space and energy footprint. Once users have successfully transitioned to the new machine, JUROPA will be decommissioned and the second phase of JURECA will be installed.

Until JURECA is available, JUROPA users can use the Haswell test system JUROPATEST to port and optimize their applications and prepare their workflows for the new system. Users will be informed when data migration from the work filesystem on JUROPA should begin. Information on JURECA will be available at http://www.fz-juelich.de/ias/jsc/jureca.
(Contact: Dr. Dorian Krause, d.krause@fz-juelich.de)

JSC, 11 February 2015
