Just a few hours ago we published an article about the future AMD Ryzen 7000, and now it is Intel’s turn, with its Intel Xeon Sapphire Rapids-SP processors. According to leaks from YuuKi_AnS on Twitter, a total of 23 SKUs make up the Sapphire Rapids-SP family, which will launch at the end of this year. Among them, the top of the range will offer 60 cores / 120 threads at 350 W (Sapphire Rapids-SP 60), going down to 24 cores / 48 threads at 225 W for the entry level of this CPU family (Sapphire Rapids-SP 24).
The new Intel Sapphire Rapids-SP will be the fourth generation of Intel Xeon Scalable processors, replacing Ice Lake-SP and moving from 10nm Enhanced SuperFin to the new “Intel 7” node, the same process used by Alder Lake. This new range of server CPUs will use the Golden Cove architecture, which delivers roughly 20% higher IPC than Willow Cove. These cores will be spread across several tiles joined via Intel’s Embedded Multi-die Interconnect Bridge (EMIB).
The Intel Xeon Sapphire Rapids-SP will have an EMIB interconnect
This design will allow each tile to operate on its own while the package acts as a single SoC, giving every thread full access to all tiles. That translates into low latency and high bandwidth across the entire SoC thanks to EMIB. As for new instructions, Sapphire Rapids-SP will support AMX, AiA, FP16 and CLDEMOTE, as well as dedicated acceleration engines that offload work from the cores and reduce the time needed to complete tasks.
In addition, Sapphire Rapids-SP will use a quad-chiplet design and will be available both with and without HBM. Regarding HBM, we know that the HBM version of these Intel Xeons will host up to 4 HBM packages, offering much higher DRAM bandwidth than a Xeon Sapphire Rapids with only octa-channel DDR5 RAM. SKUs with HBM will offer two modes: HBM Flat and HBM Caching.
Moving on to I/O improvements: these new Xeons will introduce CXL 1.1 to expand memory and accelerators in the data-center segment. Intel UPI multi-socket scaling has also been improved, offering up to 4 x24 UPI links and an 8S-4UPI topology. As a final I/O highlight, the cache grows to over 100 MB, and there is compatibility with the Optane Persistent Memory 300 series. As for the platform, it will be called Eagle Stream (C740 chipset) and will use a new LGA 4677 socket, replacing LGA 4189 and introducing octa-channel DDR5 memory at up to 4800 MHz and PCIe 5.0.
Intel responds to AMD and its EPYC: will there be competition?
We also know the chip dimensions: a Sapphire Rapids-SP without HBM will measure 4,446 mm² and use 10 EMIB interconnects, while the HBM version will have a total of 14 EMIB interconnects. This HBM version of the future Intel Xeon will carry 4 HBM2E memory packages with a minimum of 16 GB per stack, for 64 GB in total. That leaves the HBM variant at around 5,700 mm², roughly 5% larger than an AMD EPYC Genoa.
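The leaked figures above can be cross-checked with a bit of arithmetic. The sketch below only works through the numbers reported in the leak (die areas, stack count, the ~5% comparison); none of it is official Intel or AMD data.

```python
# Quick arithmetic on the leaked Sapphire Rapids-SP figures.
# All inputs come from the leak, not from official spec sheets.
hbm_stacks = 4
gb_per_stack = 16              # minimum capacity per HBM2E stack
total_hbm_gb = hbm_stacks * gb_per_stack

area_no_hbm = 4446             # mm², Sapphire Rapids-SP without HBM
area_hbm = 5700                # mm², variant with HBM
# If the HBM variant is ~5% larger than EPYC Genoa, Genoa's implied size:
genoa_area = area_hbm / 1.05

print(total_hbm_gb)                            # total HBM2E capacity in GB
print(round(area_hbm / area_no_hbm - 1, 3))    # HBM variant vs. non-HBM, ~28% larger
print(round(genoa_area))                       # implied Genoa package area in mm²
```

Interestingly, the 5% figure implies an EPYC Genoa package of roughly 5,400 mm², so both companies are pushing very large multi-die packages.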
Finally, here are the individual specifications of the new Sapphire Rapids SKUs:
- Sapphire Rapids-SP 24 cores / 48 threads / 45 MB / 225W. Silver tier
- Sapphire Rapids-SP 28 cores / 56 threads / 52.5 MB / 250W. Silver tier
- Sapphire Rapids-SP 40 cores / 80 threads / 75 MB / 300W. Gold tier
- Sapphire Rapids-SP 44 cores / 88 threads / 82.5 MB / 270W. Gold tier
- Sapphire Rapids-SP 48 cores / 96 threads / 90 MB / 350W. Platinum tier
- Sapphire Rapids-SP 56 cores / 112 threads / 105 MB / 350W. Platinum tier
- Sapphire Rapids-SP 60 cores / 120 threads / 110 MB / 350W. Platinum tier
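A quick sanity check on the leaked table reveals a consistent pattern: every SKU except the 60-core flagship carries exactly 1.875 MB of cache per core. The snippet below simply recomputes that ratio from the listed numbers, so treat it as an illustration of the leak, not confirmed silicon data.

```python
# Cache-per-core ratio from the leaked SKU table: (cores, cache in MB, TDP in W).
skus = [
    (24, 45.0, 225), (28, 52.5, 250), (40, 75.0, 300),
    (44, 82.5, 270), (48, 90.0, 350), (56, 105.0, 350),
    (60, 110.0, 350),
]

for cores, cache_mb, tdp_w in skus:
    # Every SKU except the 60-core part works out to 1.875 MB per core.
    print(f"{cores} cores: {round(cache_mb / cores, 3)} MB/core at {tdp_w} W")
```

The 60-core part lands at about 1.833 MB per core, which would be consistent with the top die having slightly less cache enabled per core than the rest of the lineup.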
It should be noted that the TDP in these specifications corresponds to PL1; when the CPU is running at PL2, consumption is expected to exceed 400 W, up to a limit of 700 W configurable via BIOS. Keep in mind that these leaks concern ES1/ES2 parts, i.e. engineering samples still far from the final version, so the actual final clock speeds are not yet known. If we compare these new Intel Xeons with EPYC Genoa, their main rival, AMD stays ahead with up to 96 cores, while Intel can scale up to 8 processors in a single system.