
Why modern HPC workloads need balanced hardware



From artificial intelligence to computational fluid dynamics, today's HPC workloads make different demands on compute hardware. One server design doesn't fit all.

According to the World Economic Forum[1], we must democratize access to high-performance computing (HPC). We are creating more data, faster than ever before, and HPC systems are crucial to helping us understand and gain insights from that data. Perhaps most crucially, more people in more organizations and businesses across the world should be able to access the power of HPC and the benefits it can bring.

Those benefits include running simulations instead of expensive physical tests (for scenario planning or product design) and performing complex calculations faster with the latest CPUs, GPUs and solid-state storage devices. Democratizing HPC can help businesses and organizations across areas such as medicine, education, energy, IT and telecommunications drive innovation and reduce costs while doing so.



Of course, today's high-performance computing systems require a powerful technology foundation to read and process the increasingly vast amounts of data we're talking about. While general-purpose servers have often been used to meet a variety of processing needs - from deep learning (DL) to electronic design automation (EDA) - performance can be significantly improved by customizing server design to fit specific workload requirements.

Optimizing for simulation and database performance

For example, HPC is increasingly being used to simulate complex processes - molecular systems, climate models, computational fluid dynamics, to name but a few. These tasks require an HPC server architecture that can support multiple GPU-based accelerators in parallel: the more GPUs that can be leveraged, the higher the computational performance that can be achieved. A server optimized for these types of workload ideally pairs the latest CPU technology from Intel and AMD with space for multiple double-width (DW) and/or single-width (SW) GPU cards.
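To make that scaling argument concrete, here is a minimal Python sketch that splits a simulation domain into chunks and processes them with a configurable number of parallel workers standing in for GPU accelerators. The simulate_chunk function is purely illustrative busy-work, not a real solver; in production code each worker would dispatch its chunk to an actual CUDA or ROCm device.

```python
# Minimal sketch: splitting a simulation domain across N parallel workers.
# Each worker stands in for one GPU accelerator; simulate_chunk is CPU
# busy-work used only to show how wall-clock time falls as more devices
# work on the domain at the same time.
from concurrent.futures import ProcessPoolExecutor
import time

def simulate_chunk(chunk_id: int, steps: int = 2_000_000) -> float:
    """Stand-in for one accelerator's share of the domain (busy-work only)."""
    x = 0.0
    for i in range(steps):
        x += (i % 7) * 1e-9
    return x

def run(num_devices: int, total_chunks: int = 8) -> float:
    """Process all chunks with num_devices parallel workers, return wall time."""
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=num_devices) as pool:
        list(pool.map(simulate_chunk, range(total_chunks)))
    return time.perf_counter() - start

if __name__ == "__main__":
    for devices in (1, 2, 4):
        print(f"{devices} parallel device(s): {run(devices):.2f} s")
```

Running it with one, two and four workers shows the total time shrinking as more devices process chunks in parallel, which is exactly why chassis space for several GPU cards matters in simulation-oriented servers.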



But as we've said: one server design doesn't fit all. Database-focused tasks like genome sequencing and weather forecasting are traditionally IO-intensive. They require a system that prioritizes read/write disk speed and data storage capacity, while supporting a large core count and a generous memory footprint. Solid-state NVMe drives offer higher IOPS in this scenario than legacy SATA storage devices. So any server designed specifically for IO-heavy compute needs to incorporate a large number of drive bays and DIMM slots.
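As a rough illustration of what the IOPS figure measures, the sketch below times random 4 KiB reads against a scratch file; the file path and sizes are arbitrary choices, not recommendations. It is Unix-oriented (os.pread) and does not bypass the operating system's page cache, so treat it as a way to see the metric rather than a benchmark - purpose-built tools such as fio are the right way to compare NVMe and SATA devices properly.

```python
# Rough sketch of a random-read rate (~IOPS) over a single scratch file.
# The page cache is not bypassed here, so results are only illustrative.
import os, random, time

PATH = "iops_test.bin"          # hypothetical scratch file on the drive under test
FILE_SIZE = 256 * 1024 * 1024   # 256 MiB test file
BLOCK = 4096                    # 4 KiB random reads, a common block size for IOPS
READS = 20_000

# Create the scratch file once, written in 1 MiB chunks.
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        for _ in range(FILE_SIZE // (1024 * 1024)):
            f.write(os.urandom(1024 * 1024))

fd = os.open(PATH, os.O_RDONLY)
offsets = [random.randrange(0, FILE_SIZE - BLOCK) for _ in range(READS)]

start = time.perf_counter()
for off in offsets:
    os.pread(fd, BLOCK, off)    # one 4 KiB read at a random offset
elapsed = time.perf_counter() - start
os.close(fd)

print(f"~{READS / elapsed:,.0f} random 4 KiB reads per second")
```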

Balancing hardware design for big data scenarios

Again, a design balanced for IO-heavy database computing doesn't necessarily work in another HPC scenario. Big data computing comes with its own set of challenges: working with massive datasets and applying advanced analytics to gain insights from them requires customized server systems with high processor performance and high-density bulk storage. The solution here is a multi-node server with as many SATA/NVMe drive bays as possible.
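The sketch below shows one simple way such a system might spread records across a large pool of drive bays: hash each record key and map it to a node/bay target. The node and bay names are hypothetical, and real big data platforms (distributed file systems, object stores) handle placement and replication themselves; this just illustrates why a large, evenly used pool of bays is valuable.

```python
# Minimal sketch of spreading a dataset across the drive bays of a
# multi-node system by hashing record keys. Targets are hypothetical.
import hashlib
from collections import Counter

NODES = [f"node{n}/bay{b}" for n in range(4) for b in range(12)]  # 4 nodes x 12 bays

def placement(key: str) -> str:
    """Map a record key to one node/drive-bay target."""
    digest = hashlib.sha256(key.encode()).digest()
    return NODES[int.from_bytes(digest[:8], "big") % len(NODES)]

# Distribute a batch of synthetic record keys and check how even the spread is.
counts = Counter(placement(f"record-{i}") for i in range(100_000))
print(f"{len(NODES)} targets, busiest holds {max(counts.values())}, "
      f"quietest holds {min(counts.values())}")
```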

Modern HPC, from the workstation to the data center, needs to balance compute, storage, memory and IO to efficiently manage growing data volumes. Advances in CPU technology provide the best foundation for server hardware that can be tailored to specific workloads. That's why the latest 3rd Gen Intel® Xeon® Scalable processors are optimized for cloud, enterprise, AI, HPC, network, security and IoT workloads, while 3rd Gen AMD EPYC™ processors with AMD 3D V-Cache™ are designed to accelerate big data applications ranging from seismic tomography to quantum mechanics.

Which HPC approach is right for you?




 
