Centrale Nantes Supercomputing

Supercomputer LIGER

The supercomputer, installed at Centrale Nantes at the end of 2015 as part of the Connect Talent project of the Pays de la Loire region, is one of the most powerful in its category (Tier-2) in France. By giving access to unprecedented levels of precision, this state-of-the-art equipment is a real game changer that will unlock new potential for innovation.

Intensive numerical computation is an indispensable tool for both research and industry: it reduces the cost of testing, facilitates optimization, and promotes creativity and the search for new solutions. Computing capacity is also an indicator of the research and development strength of a region or nation.


A 24-hour calculation on a conventional PC takes only a few minutes on a supercomputer.
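As a rough illustration of that claim, ideal linear scaling over LIGER's 6,432 cores versus an 8-core desktop (an assumed figure for illustration; real jobs scale sub-linearly) would turn a 24-hour run into under two minutes:

```python
# Back-of-the-envelope speedup estimate (illustrative only).
# Assumes ideal linear scaling, which real workloads never reach,
# and an 8-core desktop as the "conventional PC" baseline.
pc_cores = 8                  # assumed desktop core count
liger_cores = 6432            # total Intel Xeon cores on LIGER
job_hours_on_pc = 24

speedup = liger_cores / pc_cores                   # 804x ideal speedup
minutes_on_liger = job_hours_on_pc * 60 / speedup  # ~1.8 minutes

print(f"Ideal speedup: {speedup:.0f}x")
print(f"24 h on a PC ~= {minutes_on_liger:.1f} min on LIGER")
```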


High Performance Computing (HPC) consists of using acquisition, modeling or analysis software on supercomputers with several thousand processors, capable of executing billions of operations per second, to model complex phenomena and to process or characterise large volumes of data. In practice, a supercomputer is a large set of machines (compute servers) linked by very high-speed networks and designed to run long, massively parallel computations.
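The pattern behind "many machines linked by very high-speed networks" can be sketched on a single machine with Python's standard multiprocessing module: a pool of worker processes each handles a slice of the problem, just as compute nodes each handle a slice on a cluster (where an MPI library would replace the pool). This is a minimal single-node sketch, not LIGER's actual software stack:

```python
# Minimal sketch of the HPC work-splitting pattern: divide a big
# problem into chunks, compute the chunks in parallel, combine the
# partial results. On a real cluster, MPI ranks on separate nodes
# would play the role of these worker processes.
from multiprocessing import Pool

def partial_sum(bounds):
    """Compute one chunk of a large sum (stands in for real work)."""
    start, stop = bounds
    return sum(i * i for i in range(start, stop))

if __name__ == "__main__":
    n, workers = 1_000_000, 4
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    # Same answer as the serial computation, obtained in parallel.
    assert total == sum(i * i for i in range(n))
    print(total)
```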
Centrale Nantes is the only higher education establishment in France to have such a powerful supercomputer that is open to a range of users.

Students at the school benefit from this state-of-the-art equipment through courses requiring significant computing resources (e.g. Mechanical Engineering for Materials and Manufacturing Processes, or Virtual Reality).

Access to part of this supercomputer is also open, subject to a charge, to industrial partners and publicly funded research projects, as well as, free of charge, to certain projects selected by a technical and scientific committee.

Technical details

  • LIGER is a BULL/Atos DLC720 cluster
  • 280 TFlop/s peak (200 TFlop/s Rmax Linpack)
  • 252 compute nodes (dual-socket x86, 2 × 12 cores @ 2.5 GHz)
  • 6,432 Intel Xeon cores (Haswell and Cascade Lake)
  • 28 NVIDIA K80 GPUs
  • 4 NVIDIA V100 GPUs
  • FDR InfiniBand interconnect (56 Gb/s)
  • 900 TB GPFS fast storage (12 GB/s)
  • Cumulative system memory of over 32 TB
  • Direct Liquid Cooling technology for optimal energy efficiency
  • Hosted and administered by the High Performance Computing Institute
     
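A back-of-the-envelope check on the peak figure: at 2.5 GHz, a Haswell core with AVX2 FMA sustains 16 double-precision FLOPs per cycle (a standard microarchitecture figure, used here purely for illustration), so the CPU cores alone account for roughly 257 TFlop/s; the Cascade Lake nodes (32 FLOPs per cycle with AVX-512) and the GPUs close the gap to the 280 TFlop/s total:

```python
# Rough peak-performance estimate for the CPU partition (illustrative).
cores = 6432                 # Intel Xeon cores on LIGER
clock_ghz = 2.5              # base clock per the spec sheet
flops_per_cycle = 16         # Haswell AVX2: 2 FMA units x 4 doubles x 2 ops

cpu_peak_tflops = cores * clock_ghz * flops_per_cycle / 1000
print(f"CPU peak ~= {cpu_peak_tflops:.0f} TFlop/s")  # ~257 of the 280 TFlop/s total
```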
Published on October 10, 2018. Updated on February 23, 2021.