
Intel Sapphire Rapids HBMs promise to triple Xeon Ice Lake performance

Intel has expanded on its new generation of Xeon Scalable processors, in particular the Sapphire Rapids HBM variant, which will integrate up to 64 GB of HBM2e memory on the package to dramatically accelerate performance across many types of workloads.

Keep in mind that this is one of two planned variants: Intel Sapphire Rapids will be available in two flavors, one standard and one with HBM memory. The standard variant will use a chiplet design built from four XCC dies, each measuring approximately 400 mm², with four dies in total on the top Xeon Sapphire Rapids-SP chip.

Each die will be interconnected through 10 EMIB links, which have a bump pitch of 55 µm and a core pitch of 100 µm.

Intel Sapphire Rapids HBM

In contrast, the HBM variant will use 14 of these interconnects to link the HBM2e memory with the cores. The memory comes in up to four 8-Hi stacks (8 dies each), with each stack offering 16 GB of capacity for a total of 64 GB. This increases the package size to 5,700 mm², 28% more than the 4,446 mm² of the non-HBM variant and 5% larger than that of AMD's EPYC Genoa, its direct rival.
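As a quick sanity check of those figures, the following illustrative back-of-the-envelope calculation (using only the numbers quoted above, nothing additional from Intel) shows how the capacity and package-area deltas work out:

```python
# Illustrative check of the capacity and package-area figures quoted above.
hbm_stacks = 4            # four 8-Hi HBM2e stacks
gb_per_stack = 16         # 16 GB per stack
print(f"Total HBM2e capacity: {hbm_stacks * gb_per_stack} GB")    # -> 64 GB

package_hbm_mm2 = 5700    # Sapphire Rapids HBM package area
package_std_mm2 = 4446    # standard (non-HBM) package area
increase_pct = (package_hbm_mm2 / package_std_mm2 - 1) * 100
print(f"Package area increase: {increase_pct:.0f}%")              # -> ~28%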

“The Intel Xeon processor codenamed Sapphire Rapids with High Bandwidth Memory (HBM) is a great example of how we are leveraging advanced packaging technologies and silicon innovations to deliver dramatic improvements in performance, bandwidth, and power savings for HPC. With up to 64 gigabytes of high-bandwidth HBM2e memory in the package, plus in-processor accelerators, we are able to unleash workloads that require high memory bandwidth, while delivering significant performance improvements in key HPC use cases.

Comparing 3rd Gen Intel Xeon Scalable processors to the upcoming Sapphire Rapids HBM processors, we saw a two- to three-fold increase in performance for weather, energy, manufacturing, and physics research workloads. In the presentation, Prith Banerjee, CTO of Ansys, also showed that Sapphire Rapids HBM delivers up to twice the performance in real-world Ansys Fluent and ParSeNet workloads.”

Intel Sapphire Rapids HBM Performance

Intel Sapphire Rapids processors are expected to offer up to 56 cores and 112 processing threads based on the Golden Cove architecture (the same cores as Alder Lake) and manufactured on the Intel 7 process. Expect a 19% improvement in IPC, up to 105 MB of L3 cache, support for eight-channel DDR5-4800 memory configurations, up to 80 PCI-Express 5.0 lanes, and a TDP of up to 350 W.
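For context on why on-package HBM matters, here is a rough sketch of the theoretical peak bandwidth of the quoted eight-channel DDR5-4800 configuration; the 64-bit channel width and the resulting figure are standard DDR5 assumptions, not numbers from Intel's announcement:

```python
# Rough theoretical peak bandwidth of an 8-channel DDR5-4800 configuration.
# Assumes a 64-bit data path per channel; sustained bandwidth is lower in practice.
channels = 8
transfer_rate = 4800e6        # 4800 MT/s
bytes_per_transfer = 8        # 64-bit channel width
peak_gb_s = channels * transfer_rate * bytes_per_transfer / 1e9
print(f"Peak DDR5-4800 bandwidth (8 channels): {peak_gb_s:.1f} GB/s")   # -> 307.2 GB/s
```

HBM2e stacks push per-package bandwidth well beyond what DDR5 alone can sustain, which is where the claimed gains in bandwidth-bound HPC workloads come from.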

In terms of performance, the HBM version is expected to triple the performance of current Intel Xeon processors based on the Ice Lake architecture in specific workloads. In the worst case, we are told, it will double the performance.
