We come with news that could mark a turning point for the processor industry. Before us is the main rival of ARM and x86, the RISC-V architecture, this time in the hands of a team of Catalan university students. As you will read, a team from the Universitat Politècnica de Catalunya (UPC) competed on the first RISC-V supercomputer: Monte Cimone, a cluster with an excellent balance between energy consumption and performance.
It all started when the team from the Polytechnic University of Catalonia, competing under the name "NotOnlyFLOPs", stood out at a competition held in Hamburg: the 2022 Student Cluster Competition, which ran from May 30 to June 1. There, the Catalan team ran its entry on a RISC-V supercomputer named "Monte Cimone".
Each node of the RISC-V cluster consists of two SiFive boards with the Freedom U740 SoC
To deliver the best balance between energy consumption and performance for the competition, the University of Bologna, CINECA and E4 contributed to the project. The cluster was designed at the end of 2021 and is based on a RISC-V platform with six compute nodes, powered by the SiFive Freedom U740 SoC.
The Freedom U740 SoC, introduced in 2020, has a total of five cores: four U74 application cores and one S7 system core. It runs at 1.4 GHz and includes 2 MB of L2 cache, peripheral controllers and Gigabit Ethernet. In addition, as can be seen in the photo, each server is fed by two 250 W power supplies.
The cluster is mounted in a standard 25U rack, including compute nodes, switches and air cooling. In total there are six dual-board servers, each with a form factor 4.44 cm (1U) high, 42.5 cm wide and 40 cm deep.
Each motherboard of the Monte Cimone RISC-V supercomputer is equipped with the essentials
The motherboards follow the standard Mini-ITX form factor (17 cm × 17 cm), and each carries 16 GB of DDR4 memory at 1866 MHz along with a PCIe Gen 3 x8 bus. Although that may sound like a past generation, each board also offers a USB 3.2 Gen 1 interface and an M.2 expansion slot, occupied here by a 1 TB Samsung 970 EVO Plus NVMe SSD.
In addition, each board has a microSD card for UEFI boot. Two of the six compute nodes are also equipped with InfiniBand host channel adapters (HCAs) in their PCIe slots. The objective was to deploy 56 Gb/s InfiniBand and use RDMA to achieve high I/O performance.
However, the RDMA functionality of the HCAs could not be used, due to an incompatibility between the software stack and the kernel driver. Even so, ping tests between two boards, and between a board and an HPC server, showed that the InfiniBand link itself works.
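A connectivity check of this kind can be sketched with standard InfiniBand and networking tools; the interface name and addresses below are hypothetical, not taken from the article, and the commands require actual InfiniBand hardware:

```shell
# Check that the HCA is detected and its link is up (infiniband-diags package)
ibstat

# Bring up IP-over-InfiniBand on the adapter (hypothetical interface name ib0)
sudo ip addr add 10.0.0.1/24 dev ib0
sudo ip link set ib0 up

# Ping the second board over the InfiniBand link (hypothetical peer address)
ping -c 4 10.0.0.2
```

A successful ping only demonstrates IP-level connectivity over InfiniBand; it says nothing about RDMA, which is exactly the distinction the team ran into.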
"The HPC software stack turned out to be easier than we thought. We ported all the essential services needed to run HPC workloads in a production environment, namely NFS, LDAP and the SLURM job scheduler, to Monte Cimone. Porting all the necessary packages to RISC-V was relatively easy," says the team.
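For readers unfamiliar with SLURM, work on a cluster like this is typically submitted as a batch script. The following is a minimal, hypothetical sketch of such a job script (the node count mirrors the cluster's six nodes and the U74 core count, but the application name and limits are illustrative, not Monte Cimone's actual configuration):

```shell
#!/bin/bash
#SBATCH --job-name=hello-riscv     # name shown in the job queue
#SBATCH --nodes=2                  # hypothetical: two of the six compute nodes
#SBATCH --ntasks-per-node=4       # one task per U74 application core
#SBATCH --time=00:05:00            # wall-clock limit

# Launch the workload across the allocated nodes
srun ./my_hpc_app
```

Such a script would be submitted with `sbatch job.sh`; SLURM then allocates the nodes and runs `srun` on them, which is why porting the scheduler itself to RISC-V was a prerequisite for running production workloads.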