{"id":53423,"date":"2022-08-23T02:55:58","date_gmt":"2022-08-23T02:55:58","guid":{"rendered":"https:\/\/harchi90.com\/intel-details-ponte-vecchio-gpu-sapphire-rapids-hbm-performance-up-to-2-5x-faster-than-nvidia-a100\/"},"modified":"2022-08-23T02:55:58","modified_gmt":"2022-08-23T02:55:58","slug":"intel-details-ponte-vecchio-gpu-sapphire-rapids-hbm-performance-up-to-2-5x-faster-than-nvidia-a100","status":"publish","type":"post","link":"https:\/\/harchi90.com\/intel-details-ponte-vecchio-gpu-sapphire-rapids-hbm-performance-up-to-2-5x-faster-than-nvidia-a100\/","title":{"rendered":"Intel Details Ponte Vecchio GPU & Sapphire Rapids HBM Performance, Up To 2.5x Faster Than NVIDIA A100"},"content":{"rendered":"
\n

During Hot Chips 34, Intel once again detailed its Ponte Vecchio GPUs running on a Sapphire Rapids HBM server platform.

## Intel Shows Off Ponte Vecchio 2-Stack GPU & Sapphire Rapids HBM CPU Performance Against NVIDIA's A100

In the presentation by Intel Fellow & Chief GPU Compute Architect Hong Jiang, we get more details on the upcoming server powerhouses from the blue team. The Ponte Vecchio GPU comes in three configurations, starting with a single OAM and scaling up to an x4 subsystem with Xe Links, running either standalone or alongside a dual-socket Sapphire Rapids platform.


The OAM supports all-to-all topologies for both 4-GPU and 8-GPU platforms. Complementing the entire platform is Intel's oneAPI software stack, whose Level Zero API provides a low-level hardware interface to support cross-architecture programming. Some of the main features of Level Zero include (a minimal enumeration sketch follows the list):

• Interface for oneAPI and other tools to accelerator devices
• Fine-grain control of and low-latency access to accelerator capabilities
• Multi-threaded design
• For GPUs, ships as part of the driver
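
Level Zero itself is exposed as a plain C API that ships with the GPU driver. As a rough idea of what that low-level interface looks like, here is a minimal device-enumeration sketch (our own illustration, not Intel sample code); it assumes the Level Zero loader is installed and links against `-lze_loader`:

```cpp
// Minimal Level Zero device-enumeration sketch (assumes the Level Zero
// loader and a GPU driver are installed; link with -lze_loader).
#include <level_zero/ze_api.h>
#include <cstdio>
#include <cstdlib>
#include <vector>

#define ZE_CHECK(call)                                             \
    do {                                                           \
        ze_result_t r_ = (call);                                   \
        if (r_ != ZE_RESULT_SUCCESS) {                             \
            std::fprintf(stderr, "%s failed: 0x%x\n", #call, r_);  \
            std::exit(1);                                          \
        }                                                          \
    } while (0)

int main() {
    // Initialize the driver stack for GPU devices only.
    ZE_CHECK(zeInit(ZE_INIT_FLAG_GPU_ONLY));

    // Discover installed driver stacks.
    uint32_t driverCount = 0;
    ZE_CHECK(zeDriverGet(&driverCount, nullptr));
    std::vector<ze_driver_handle_t> drivers(driverCount);
    ZE_CHECK(zeDriverGet(&driverCount, drivers.data()));

    for (auto driver : drivers) {
        // Enumerate the devices exposed by this driver.
        uint32_t deviceCount = 0;
        ZE_CHECK(zeDeviceGet(driver, &deviceCount, nullptr));
        std::vector<ze_device_handle_t> devices(deviceCount);
        ZE_CHECK(zeDeviceGet(driver, &deviceCount, devices.data()));

        for (auto device : devices) {
            ze_device_properties_t props{};
            props.stype = ZE_STRUCTURE_TYPE_DEVICE_PROPERTIES;
            ZE_CHECK(zeDeviceGetProperties(device, &props));
            std::printf("Device: %s (EUs reported: %u)\n", props.name,
                        props.numSlices * props.numSubslicesPerSlice *
                            props.numEUsPerSubslice);
        }
    }
    return 0;
}
```

Higher-level oneAPI components such as the SYCL runtime target this interface on Intel GPUs, which is what the "interface for oneAPI and other tools" bullet refers to.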

Coming to the performance metrics, a 2-stack Ponte Vecchio GPU configuration, like the one featured on a single OAM, is capable of delivering up to 52 TFLOPs of FP64/FP32 compute, 419 TFLOPs of TF32 (XMX Float 32), 839 TFLOPs of BF16/FP16, and 1,678 TOPs of INT8 horsepower.
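
Those figures follow a fairly regular ladder. The quick check below (our own arithmetic on the quoted numbers, nothing official) shows that each step down in data width roughly doubles XMX throughput, while TF32 sits about 8x above the vector FP64/FP32 rate:

```cpp
// Sanity check of the 2-stack Ponte Vecchio throughput ladder, using only
// the figures quoted in the slide above.
#include <cstdio>

int main() {
    constexpr double fp64_fp32 = 52.0;   // TFLOPs, vector FP64/FP32
    constexpr double tf32      = 419.0;  // TFLOPs, XMX TF32
    constexpr double bf16_fp16 = 839.0;  // TFLOPs, XMX BF16/FP16
    constexpr double int8      = 1678.0; // TOPs,   XMX INT8

    std::printf("TF32 vs FP64/FP32 : %.1fx\n", tf32 / fp64_fp32);    // ~8.1x
    std::printf("BF16 vs TF32      : %.1fx\n", bf16_fp16 / tf32);    // ~2.0x
    std::printf("INT8 vs BF16/FP16 : %.1fx\n", int8 / bf16_fp16);    // ~2.0x
    return 0;
}
```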


Intel also detailed its maximum cache sizes and the peak bandwidth offered at each level. The register file on the Ponte Vecchio GPU is 64 MB and offers 419 TB/s of bandwidth, the L1 cache also comes in at 64 MB with 105 TB/s (4:1), the L2 cache comes in at 408 MB with 13 TB/s (8:1), while the HBM memory pools up to 128 GB and offers 4.2 TB/s of bandwidth (4:1); a short sketch after the lists below works through these ratios. Ponte Vecchio also employs a range of compute-efficiency techniques, such as:

**Register File:**

• Register Caching
• Accumulators

**L1/L2 Cache:**

• Write Through
• Write Back
• Write Streaming
• Uncached

**Prefetch:**

• Software (instruction) prefetch to L1 and/or L2
• Command Streamer prefetch to L2 for instructions and data
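
To put the cache figures above in one place, the sketch below encodes the quoted capacities and bandwidths and derives the level-to-level bandwidth ratios. The L1 and L2 steps match the 4:1 and 8:1 in Intel's slide, while the HBM step works out closer to 3:1 with the quoted numbers:

```cpp
// Memory hierarchy of a 2-stack Ponte Vecchio as quoted in Intel's slide.
// The printed ratios are simple divisions of the quoted bandwidths.
#include <cstdio>

struct Level {
    const char* name;
    double      capacity_mb;   // MB (HBM expressed in MB for consistency)
    double      bandwidth_tbs; // TB/s
};

int main() {
    const Level levels[] = {
        {"Register File", 64.0,           419.0},
        {"L1 Cache",      64.0,           105.0},
        {"L2 Cache",      408.0,           13.0},
        {"HBM2e",         128.0 * 1024.0,   4.2},
    };

    for (int i = 0; i < 4; ++i) {
        std::printf("%-14s %8.0f MB  %6.1f TB/s", levels[i].name,
                    levels[i].capacity_mb, levels[i].bandwidth_tbs);
        if (i > 0)  // bandwidth drop relative to the level above
            std::printf("  (~%.0f:1 vs %s)",
                        levels[i - 1].bandwidth_tbs / levels[i].bandwidth_tbs,
                        levels[i - 1].name);
        std::printf("\n");
    }
    return 0;
}
```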

Intel explains that the larger L2 cache can deliver some huge gains in workloads such as a 2D FFT case and a DNN case. Performance comparisons were shown between a full Ponte Vecchio GPU and modules down-configured to 80 MB and 32 MB of L2.
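
One way to see why a 408 MB L2 matters for a 2D FFT is to compare the working set of an N x N single-precision complex grid against the three cache configurations Intel tested. The grid sizes below are illustrative assumptions on our part, not the cases Intel benchmarked:

```cpp
// Illustrative working-set check for a 2D FFT: an N x N grid of
// single-precision complex values (8 bytes per point). The grid sizes
// are assumptions for illustration, not Intel's benchmark cases.
#include <cstdio>

int main() {
    const double l2_configs_mb[] = {408.0, 80.0, 32.0};  // full vs down-configured
    const int    grid_sizes[]    = {2048, 4096, 6144};

    for (int n : grid_sizes) {
        double working_set_mb = double(n) * n * 8.0 / (1024.0 * 1024.0);
        std::printf("%d x %d complex FP32 grid: %4.0f MB ->", n, n, working_set_mb);
        for (double l2 : l2_configs_mb)
            std::printf("  %s in %3.0f MB L2",
                        working_set_mb <= l2 ? "fits  " : "spills", l2);
        std::printf("\n");
    }
    return 0;
}
```

A grid that stays resident in L2 avoids round trips to HBM between FFT passes, which is where gains of the kind Intel shows would come from.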


But that's not all: Intel also has performance comparisons between the NVIDIA Ampere A100 running CUDA and SYCL and its own Ponte Vecchio GPU running SYCL. In miniBUDE, a computational workload that predicts the binding energy of a ligand with a target, the Ponte Vecchio GPU runs the simulation 2x faster than the Ampere A100. Another performance metric comes from ExaSMR (small modular reactors for large nuclear reactor designs), where the Intel GPU shows a 1.5x performance lead over the NVIDIA GPU.
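
The reason the same SYCL source can be timed on both vendors is that SYCL compiles down to whichever backend the toolchain targets: Level Zero for Ponte Vecchio, CUDA for the A100 (the NVIDIA path assumes the open-source DPC++ build with CUDA support enabled). The trivial kernel below is a stand-in to show the idea, not the miniBUDE or ExaSMR code:

```cpp
// A trivial SYCL kernel: the same source can target an Intel GPU
// (Level Zero backend) or an NVIDIA A100 (CUDA backend) depending on how
// the compiler is invoked, e.g. with DPC++:
//   icpx -fsycl saxpy.cpp                                        (Intel GPU)
//   clang++ -fsycl -fsycl-targets=nvptx64-nvidia-cuda saxpy.cpp  (A100)
#include <sycl/sycl.hpp>
#include <cstdio>

int main() {
    constexpr size_t n = 1 << 20;
    sycl::queue q{sycl::gpu_selector_v};

    // Unified shared memory visible to both host and device.
    float* x = sycl::malloc_shared<float>(n, q);
    float* y = sycl::malloc_shared<float>(n, q);
    for (size_t i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // y = a*x + y, executed on whichever GPU the queue selected.
    const float a = 3.0f;
    q.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
        y[i] += a * x[i];
    }).wait();

    std::printf("Ran on: %s, y[0] = %.1f\n",
                q.get_device().get_info<sycl::info::device::name>().c_str(),
                y[0]);

    sycl::free(x, q);
    sycl::free(y, q);
    return 0;
}
```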

It is a bit interesting that Intel is still comparing Ponte Vecchio to the Ampere A100, because the green team has since launched its next-gen Hopper H100, which is already shipping to customers. If Chipzilla is as confident in its 2-2.5x performance figures as it appears, then it should have little trouble competing with Hopper as well.

#### Here's Everything We Know About The Intel 7 Powered Ponte Vecchio GPUs

Moving over to the Ponte Vecchio specs, Intel outlined some key features of its flagship data center GPU: 128 Xe cores, 128 RT units, HBM2e memory, and a total of 8 Xe-HPC GPUs connected together. The chip features up to 408 MB of L2 cache split across two stacks that connect via the EMIB interconnect, and it comprises multiple dies built on Intel's own 'Intel 7' process and TSMC's N7/N5 process nodes.
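
As a rough cross-check against the 16,384 ALUs listed in the spec table further down, the arithmetic below assumes the Xe-HPC layout Intel disclosed previously, 8 vector engines per Xe core at 512 bits (16 FP32 lanes) each; those per-core figures are carried over from earlier disclosures, not restated in this talk:

```cpp
// Rough cross-check of the ALU count: assumes 8 vector engines per Xe-HPC
// core and 16 FP32 lanes per 512-bit engine (figures from Intel's earlier
// Xe-HPC disclosures, not from this Hot Chips presentation).
#include <cstdio>

int main() {
    constexpr int xe_cores       = 128;
    constexpr int vector_engines = 8;        // per Xe core (assumed)
    constexpr int fp32_lanes     = 512 / 32; // per 512-bit engine
    constexpr int alus           = xe_cores * vector_engines * fp32_lanes;

    static_assert(alus == 16384, "should match the 16,384 ALUs in the spec table");
    std::printf("128 Xe cores x 8 engines x 16 lanes = %d FP32 ALUs\n", alus);
    return 0;
}
```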


Intel also previously detailed the package and die sizes of its flagship Ponte Vecchio GPU based on the Xe-HPC architecture. The chip consists of 2 stacks with 16 active dies per stack. The maximum active top die, the compute tile, measures around 41mm², while the base die sits at around 650mm². All the chiplets and process nodes that the Ponte Vecchio GPUs utilize are listed below:

• Intel 7nm
• TSMC 7nm
• Foveros 3D Packaging
• EMIB
• 10nm Enhanced SuperFin
• Rambo Cache
• HBM2

Following is how Intel gets to 47 tiles on the Ponte Vecchio chip (a quick tally follows the list):

• 16 Xe HPC (internal/external)
• 8 Rambo (internal)
• 2 Xe Base (internal)
• 11 EMIB (internal)
• 2 Xe Link (external)
• 8 HBM (external)
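
Purely as a sanity check, the tally below confirms the breakdown sums to 47:

```cpp
// The tile breakdown above sums to Intel's quoted 47 tiles.
#include <cstdio>

int main() {
    constexpr int xe_hpc = 16, rambo = 8, xe_base = 2, emib = 11, xe_link = 2, hbm = 8;
    constexpr int total  = xe_hpc + rambo + xe_base + emib + xe_link + hbm;
    static_assert(total == 47, "tile counts should sum to 47");
    std::printf("Total tiles: %d\n", total);
    return 0;
}
```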

The Ponte Vecchio GPU makes use of 8 HBM 8-Hi stacks and contains a total of 11 EMIB interconnects. The whole Intel Ponte Vecchio package measures 4,843.75mm². It is also mentioned that the bump pitch for Meteor Lake CPUs using high-density 3D Foveros packaging will be 36µm.
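
Those eight stacks also line up with the 128 GB capacity and 8192-bit bus in the spec table below, assuming the standard 1024-bit interface per HBM stack and 16 GB per 8-Hi HBM2e stack (both are generic HBM2e figures, not numbers from Intel's slide):

```cpp
// How 8 HBM2e stacks map to the package-level memory figures, assuming the
// standard 1024-bit interface per stack and 16 GB per 8-Hi stack
// (generic HBM2e values, not taken from Intel's slide).
#include <cstdio>

int main() {
    constexpr int stacks         = 8;
    constexpr int gb_per_stack   = 16;    // 8-Hi stack of 16Gb dies (assumed)
    constexpr int bits_per_stack = 1024;  // HBM interface width per stack

    std::printf("Capacity: %d GB, bus width: %d-bit\n",
                stacks * gb_per_stack, stacks * bits_per_stack);  // 128 GB, 8192-bit
    return 0;
}
```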

              \"\"<\/figure>\n

The Ponte Vecchio GPU is not one chip but a combination of several chips. It is a chiplet powerhouse, packing more chiplets than any other GPU/CPU out there, 47 to be precise. And these are not based on just one process node but several, as detailed just a few days back.

Although the Aurora supercomputer, in which the Ponte Vecchio GPUs and Sapphire Rapids CPUs are to be used, has been pushed back by several delays on the blue team's part, it is still good to see the company offering more details. Intel has since teased its next-generation Rialto Bridge GPU as the successor to Ponte Vecchio, which is said to begin sampling in 2023. You can read more details on that here.

## Next-Gen Data Center GPU Accelerators

| | AMD Instinct MI250X | NVIDIA Hopper GH100 | Intel Ponte Vecchio | Intel Rialto Bridge |
|---|---|---|---|---|
| Packaging Design | MCM (Infinity Fabric) | Monolithic | MCM (EMIB + Foveros) | MCM (EMIB + Foveros) |
| GPU Architecture | Aldebaran (CDNA 2) | Hopper GH100 | Xe-HPC | Xe-HPC |
| GPU Process Node | 6nm | 4N | Intel 7 + TSMC N7/N5 | 5nm (Intel 3)? |
| GPU Cores | 14,080 | 16,896 | 16,384 ALUs (128 Xe Cores) | 20,480 ALUs (160 Xe Cores) |
| GPU Clock Speed | 1700 MHz | ~1780 MHz | TBA | TBA |
| L2 / L3 Cache | 2 x 8 MB | 50 MB | 2 x 204 MB | TBA |
| FP16 Compute | 383 TOPs | 2000 TFLOPs | TBA | TBA |
| FP32 Compute | 95.7 TFLOPs | 1000 TFLOPs | ~45 TFLOPs (A0 silicon) | TBA |
| FP64 Compute | 47.9 TFLOPs | 60 TFLOPs | TBA | TBA |
| Memory Capacity | 128 GB HBM2e | 80 GB HBM3 | 128 GB HBM2e | 128 GB HBM3? |
| Memory Clock | 3.2 Gbps | 3.2 Gbps | TBA | TBA |
| Memory Bus | 8192-bit | 5120-bit | 8192-bit | 8192-bit |
| Memory Bandwidth | 3.2 TB/s | 3.0 TB/s | ~3 TB/s | ~3 TB/s |
| Form Factor | OAM | OAM | OAM | OAM v2 |
| Cooling | Passive / Liquid | Passive / Liquid | Passive / Liquid | Passive / Liquid |
| TDP | 560W | 700W | 600W | 800W |
| Launch | Q4 2021 | 2H 2022 | 2022? | 2024? |
