Networking Trends in High-Performance Computing

Michael Kagan, CTO, Mellanox Technologies

Over the past twenty years, there has been a steady progression in the high-performance computing (HPC) industry, as companies constantly seek to increase the performance of their data centers. Traditionally, the paradigm for building a high-performance compute cluster has been to maximize the performance of a single-node CPU and to scale that performance across the entire cluster. This began with the move from symmetric multiprocessing (SMP) architectures to compute clusters and continued with the advance from single-core to multi-core processors. While this approach made Terascale, and later Petascale, performance achievable, CPU performance improvement has essentially reached its peak, leading the industry to seek new ways to keep up with the demand for additional performance and to pave the road to Exascale.

"The trends that are most affecting HPC today are highly influenced by the need to have all aspects of the infrastructure working in unison”

As applications become increasingly aware of distributed computation, there is a need to take a holistic view of the system rather than approaching data center design as a set of connected compute nodes. The compute paradigm has changed: whereas it was previously common to assemble the highest-performing building blocks and connect them, today’s model is to design a data-center-scale system with the entire network in mind and to optimize the network for an application’s end-to-end needs. The ability to communicate with processes and access resources on the other side of the network has become the heart of the computation itself.

This concept of collaboration between all system devices and software to produce a well-balanced architecture across the various compute elements, networking, and data storage infrastructures is known as Co-Design architecture. Co-Design improves system efficiency and optimizes performance by ensuring that all components serve as co-processors in the data center, in essence creating synergies between the hardware and the software, and between the different hardware elements within the data center.

The trends that are most affecting HPC today are highly influenced by the need to have all aspects of the infrastructure working in unison. Let’s take a look at some of these trends and see how they are furthering the shift from CPU-based computing to intelligent Co-Designed networks.

Quick Access to Storage: In today’s HPC environments, there is a requirement to efficiently access storage from anywhere in the network without disturbing the processing engines on the target machines. This can only be accomplished with an intelligent network that can handle such tasks without creating additional overhead for the target CPU.

In addition, recent developments in non-volatile memory technologies have brought NVM access times to DRAM speeds. These technologies will introduce new tiers in the storage hierarchy that will need to be extended over the network. To take full advantage of NVM speeds and introduce these new storage tiers, the software/hardware collaboration of Co-Design is required to extend fast NVM access efficiently across the entire cluster.
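
To make this concrete, the following sketch (illustrative only, and not specific to any vendor’s stack) uses MPI one-sided communication, which on RDMA-capable fabrics maps to hardware reads of remote memory. The buffer size and values are placeholders; the point is that one rank pulls data that lives on another node without that node’s CPU having to service the transfer.

```c
/* Illustrative sketch: reading a remote buffer with MPI one-sided
 * communication (RMA). On RDMA-capable interconnects the MPI_Get below
 * maps to a hardware read that does not interrupt the remote CPU.
 * Build with: mpicc rma_read.c -o rma_read ; run with at least two ranks. */
#include <mpi.h>
#include <stdio.h>

#define N 4

int main(int argc, char **argv)
{
    int rank;
    double local[N], remote[N];
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Every rank exposes its local buffer in an RMA window. */
    for (int i = 0; i < N; i++)
        local[i] = rank * 100.0 + i;
    MPI_Win_create(local, N * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    if (rank == 0)
        /* Rank 0 pulls rank 1's buffer; the data movement itself does not
         * require rank 1's processor to copy or forward anything. */
        MPI_Get(remote, N, MPI_DOUBLE, 1, 0, N, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);

    if (rank == 0)
        printf("fetched remote[0] = %.1f\n", remote[0]);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```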

Accelerators: The new compute paradigm has also seen special-purpose GPUs and FPGAs, co-designed in software and hardware, deployed as accelerators in the cluster to assist general-purpose CPUs in performing compute-intensive processes. This requires a flexible pool of resources to operate these accelerators, and a high-speed, low-latency interconnect to ensure that, regardless of their physical location in the infrastructure, the benefit of such accelerators reaches the applications that demand them.
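
As an illustration of how tightly accelerators and the interconnect must interoperate, the hedged sketch below assumes a CUDA-aware MPI build (for example, one built with GPUDirect support); under that assumption, a GPU buffer can be handed directly to the MPI library and moved over the fabric without being staged through host memory by the CPU. The buffer size and message pattern are placeholders.

```c
/* Illustrative sketch, assuming a CUDA-aware MPI library and a GPU on
 * each of the two ranks used. Device pointers are passed straight to
 * MPI_Send/MPI_Recv; no explicit cudaMemcpy to host memory is needed. */
#include <mpi.h>
#include <cuda_runtime.h>

#define N (1 << 20)

int main(int argc, char **argv)
{
    int rank;
    float *dev_buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Allocate the message buffer in GPU memory. */
    cudaMalloc((void **)&dev_buf, N * sizeof(float));

    if (rank == 0) {
        cudaMemset(dev_buf, 0, N * sizeof(float));
        /* Send directly from device memory (CUDA-aware MPI assumption). */
        MPI_Send(dev_buf, N, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Receive directly into device memory on the other node. */
        MPI_Recv(dev_buf, N, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    cudaFree(dev_buf);
    MPI_Finalize();
    return 0;
}
```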

Virtualization: Another area of focus in HPC has been to segregate the physical infrastructure from the applications by way of virtualization. A high-speed network with built-in resilience enables live migration of virtual machines, processes, and applications from one server to another to ensure that applications are kept running at all times, even when there is downtime in the physical infrastructure, whether because of planned maintenance, load balancing, or an unexpected crash.

Offloading: At the very core of the Co-Design revolution is the idea that the CPU has reached its performance limit, such that the rest of the network must be better utilized to enable additional performance gains. Implicit in this shift is the recognition that the overhead on the CPU, which has been building over time as the CPU handled more and more operations, must be reduced to allow the CPU to do what it was designed for, that is, compute. As such, Co-Design architecture depends heavily on offloading technologies that free the CPU from non-compute functions and instead place those functions on the intelligent network.
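
One form of offloading that is visible to application code is the non-blocking collective: the host posts the operation and returns to computing while the interconnect (and any hardware offload it provides) progresses the communication in the background. A minimal sketch with placeholder data:

```c
/* Illustrative sketch: overlapping computation with a non-blocking
 * MPI reduction. The CPU keeps working while the network (and any
 * offload engine it provides) carries the collective forward. */
#include <mpi.h>
#include <stdio.h>

#define N 1024

int main(int argc, char **argv)
{
    double send[N], recv[N], overlap_work = 0.0;
    MPI_Request req;

    MPI_Init(&argc, &argv);

    for (int i = 0; i < N; i++)
        send[i] = i;

    /* Start the reduction and return immediately. */
    MPI_Iallreduce(send, recv, N, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD, &req);

    /* Useful computation proceeds on the CPU while the network works. */
    for (int i = 0; i < N; i++)
        overlap_work += send[i] * 0.5;

    /* Block only when the reduced result is actually needed. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    printf("overlapped work = %.1f, recv[0] = %.1f\n", overlap_work, recv[0]);

    MPI_Finalize();
    return 0;
}
```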

One example of these innovative Co-Design technologies is SHArP, the Scalable Hierarchical Aggregation Protocol. Designed in collaboration between government agencies and HPC vendors, SHArP enables MPI operations (MPI being the de facto communication framework for high-performance applications) to be executed and managed by the data center interconnect. The benefit is dramatic: it enables companies to overcome the performance bottlenecks of the old CPU-centric architecture and to achieve an order-of-magnitude performance increase. SHArP is one example of the offloading technologies that will enable the industry to achieve the desired Exascale mark within a few years.
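
At the application level, the operation that in-network aggregation targets is an ordinary MPI collective; whether the reduction is performed on the hosts or offloaded into the switch fabric (for example, when SHArP support is enabled in the MPI library) is transparent to the code. A minimal sketch with placeholder values:

```c
/* Minimal sketch of the collective that in-network aggregation targets.
 * The application simply calls MPI_Allreduce; where the reduction is
 * actually computed (host CPUs or switch hardware) depends on how the
 * MPI library and fabric are configured, not on this code. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    double local_sum, global_sum;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    local_sum = rank + 1.0;   /* stand-in for a per-node partial result */

    /* Every rank contributes its partial result and receives the total. */
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %.1f\n", global_sum);

    MPI_Finalize();
    return 0;
}
```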

Today’s high-performance computing applications serve much more than just the classic HPC users. Whereas HPC workloads were once primarily the domain of government agencies and university research labs, today all applications that use data analytics rely on an HPC framework. The sheer magnitude of data that must be incorporated into such computations can seem overwhelming, to the point that the demands on HPC infrastructure have now surpassed the ability of CPU-centric architectures to keep up.

As such, the latest paradigm shift in the industry has led to a Co-Design approach to HPC architecture. By taking a holistic view of the system and optimizing all the components of a data center (compute, interconnect, and storage) toward an application’s end-to-end needs, companies can overcome the CPU’s performance gap and ensure efficient and cost-effective computation and scalability.
