From the Data Center to the Edge and to the Cloud: The Future of HPC Is Hybrid

Jean-Luc Assor, Worldwide Manager Hybrid HPC, Hewlett Packard Enterprise

Today, high-performance computing (HPC) can support the entire product lifecycle from design to operations. This requires HPC to embrace hybrid models across the edge, core data centers, and cloud.

One of the main attractions of the Internet of Things (IoT) is that it continuously provides manufacturers with sensor data from the ongoing operation of devices, machines, and facilities. This information forms the basis for an improved user experience and for improved operations such as predictive maintenance and fault analysis via digital twins.


A single autonomous vehicle generates 4 TB of data per day, and a modern aircraft turbine 20 TB per hour. Turning such massive streams of data into value calls for HPC. This means HPC is no longer just about product development and prototyping, but spans the entire product lifecycle.

For example, engineering simulation software provider ANSYS created a digital twin of a Flowserve pump. After ANSYS had developed a reduced-order, system-level model of the pump, the model was connected to real-time on-site sensor data through PTC ThingWorx and mimicked the operation of the hydraulic system. HPC-powered machine learning allowed deviations to be analyzed in detail in real time, medium- and long-term effects on the pump to be calculated, and the causes of anomalous conditions such as vibrations to be determined. HPC facilitated performance monitoring and accelerated troubleshooting during operations, visualized by augmented reality, leading to cost reductions through optimized product design.
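To make the pattern concrete, here is a minimal sketch of such a monitoring loop: a reduced-order model predicts expected behavior from the operating conditions, and live sensor readings are compared against that prediction to flag anomalies. All names and coefficients are illustrative; this is not the ANSYS or ThingWorx API, just the general technique.

```python
# Minimal digital-twin monitoring loop (hypothetical names throughout).
import statistics
from collections import deque

WINDOW = 256           # recent residuals kept for the baseline statistics
THRESHOLD_SIGMA = 4.0  # flag readings this many std deviations off the model

def reduced_order_model(flow_rate: float, pressure: float) -> float:
    """Stand-in for a reduced-order pump model: predicts expected vibration
    amplitude from operating conditions. A real model would be exported
    from a full 3D simulation."""
    return 0.02 * flow_rate + 0.001 * pressure

def monitor(sensor_stream):
    """Compare live readings against the model and yield anomalies.
    Each sample is a dict with 'flow', 'pressure', and 'vibration' keys."""
    residuals = deque(maxlen=WINDOW)
    for sample in sensor_stream:
        expected = reduced_order_model(sample["flow"], sample["pressure"])
        residual = sample["vibration"] - expected
        if len(residuals) >= 30:  # need some history for a stable baseline
            sigma = statistics.stdev(residuals) or 1e-9
            if abs(residual) > THRESHOLD_SIGMA * sigma:
                yield {"anomaly": True, "residual": residual, "sample": sample}
        residuals.append(residual)  # simplification: anomalies also enter the baseline
```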

The Bottom Line

New IoT use cases like predictive maintenance fuel HPC growth. However, this growth creates challenges of its own and puts pressure on HPC to embrace new deployment models and technologies.

a. Movement to off-premises and to the edge

HPC growth often can’t be served by the space, energy, and environmental footprint of constrained legacy data center operations, which is why more companies are looking for ways to hand off their HPC operations, either through classic outsourcing or by hosting operations in a private or public cloud.

A parallel HPC evolution is the move to the network edge. Often, the amount of data created at the edge is too large and too time-sensitive to be transferred to a remote data center. In many cases, the initial analysis and integration must be performed close to the data source at the edge of the network, e.g., in an autonomous vehicle, at the aircraft maintenance hangar, or right next to the assembly line.

This dual movement of HPC workloads, off-premises and to the edge, requires an integrated hybrid architecture from the edge to the data center to the cloud. For example, analysis results from many production machines deployed in the field can be correlated on central HPC systems in a remote data center to enrich data models and improve algorithms.
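A minimal sketch of this edge-to-core pattern, assuming a generic hand-off such as a message queue (the function names are hypothetical): raw readings are reduced to compact summaries at the edge, and only those summaries travel to the central HPC site for correlation and model training.

```python
# Edge-side pre-aggregation: raw data never leaves the site; the central
# cluster receives orders of magnitude less data. Names are illustrative.
import json
import statistics

def summarize(window):
    """Collapse a window of raw readings into a compact statistical summary."""
    return {
        "n": len(window),
        "mean": statistics.fmean(window),
        "max": max(window),
        "stdev": statistics.stdev(window) if len(window) > 1 else 0.0,
    }

def edge_loop(sensor_stream, window_size=10_000, publish=print):
    """Runs next to the machine; 'publish' would hand the summary to a
    message queue or uplink bound for the central HPC systems."""
    window = []
    for reading in sensor_stream:
        window.append(reading)
        if len(window) == window_size:
            publish(json.dumps(summarize(window)))
            window.clear()
```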

b. Agility

Purpose-built IaaS and PaaS solutions enable an agile, open, and interconnected HPC infrastructure. For instance, HPC PaaS environments facilitate HPC application portability on-premises, off-premises, and in the public cloud.

An example of the importance of PaaS technologies in HPC is the Living Heart Project, developed by Dassault Systèmes and Stanford University with the support of UberCloud and its partners. The project uses HPC to model and simulate a digital twin of the human heart to enable personalized treatment of cardiovascular patients.

A growing number of partners are collaborating in the Living Heart Project. To facilitate such collaboration, the HPC-powered heart model simulations should be accessible from doctors’ offices, benefit managers’ locations, hospitals, and other points of care, and they should be able to run on-premises, off-premises, at the edge, and in the public cloud. To achieve this goal, UberCloud has developed a containerized version of the Living Heart Model.
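The value of containerization here is that one image runs unchanged wherever it is deployed. The sketch below illustrates the launch pattern with a hypothetical solver image and Docker endpoints; it is not the actual Living Heart container, just the general idea that only the execution target changes, never the workload.

```python
# Same containerized solver, three deployment targets. Image name and
# hosts are hypothetical; only standard Docker CLI flags are used.
import subprocess

IMAGE = "registry.example.com/heart-sim:1.0"  # hypothetical solver image

def run_simulation(case_dir, docker_host=None):
    """Launch the containerized solver; only the Docker endpoint differs
    between on-prem, edge, and cloud -- the job itself is identical."""
    cmd = ["docker"]
    if docker_host:  # e.g. "ssh://ops@edge-node" or "tcp://cloud-vm:2376"
        cmd += ["--host", docker_host]
    cmd += ["run", "--rm", "-v", f"{case_dir}:/case", IMAGE, "solve", "/case"]
    subprocess.run(cmd, check=True)

# Same call, three targets:
# run_simulation("/data/patient42")                         # on-prem workstation
# run_simulation("/data/patient42", "ssh://ops@edge-node")  # edge appliance
# run_simulation("/data/patient42", "tcp://cloud-vm:2376")  # public cloud VM
```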

c. Consumption-based models for on-premises HPC

With the expansion of HPC across the product lifecycle, companies need models that allow for a reduction of upfront investments while helping to speed up deployment. Consequently, consumption-based payment models are becoming essential for on-premises deployments in the data center and at the edge. In this scenario, companies can leverage an on-site buffer to scale up or down on demand, paying only for the capacity they use. This flexibility enables them to achieve business outcomes such as cash flow improvement, accelerated deployment, and cost-effective capacity management.
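As a toy illustration of how such a model changes the economics, consider a site with installed buffer capacity and a committed baseline; all figures here are invented for the example, not HPE pricing.

```python
# Toy consumption-based ("pay per use") billing with an on-site buffer.
INSTALLED_NODES = 100        # physical capacity installed on-site (the buffer)
COMMITTED_NODES = 60         # committed baseline, always billed
RATE_PER_NODE_MONTH = 500.0  # hypothetical unit price

def monthly_charge(metered_peak_nodes):
    """Bill the committed baseline plus whatever was actually used above it,
    capped at installed capacity; idle buffer nodes cost nothing."""
    usage = min(max(metered_peak_nodes, COMMITTED_NODES), INSTALLED_NODES)
    return usage * RATE_PER_NODE_MONTH

# A quiet month (40 nodes used) bills the 60-node baseline: 30,000.
# A crunch month (85 nodes used) bills 85 nodes: 42,500; the extra
# capacity was available on demand, with no new capital purchase.
print(monthly_charge(40), monthly_charge(85))
```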

HPC has the potential to transform the entire product lifecycle with simulation, big data and deep learning. To capitalize on this opportunity, we need to redefine the way we deploy, use and pay for HPC technologies. That’s why hybrid HPC is the model of the future. It creates financial flexibility with consumption-based models. It can be deployed from edge to core to cloud to provide the optimal performance for each task. And it drives agility and collaboration with purpose-built IaaS and PaaS platforms.
