The cloud-native (or, as we like to call it, software-defined) supercomputer brings the modern techniques of research operations (ResOps) to bear on HPC, through automation and infrastructure as code.
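As a concrete illustration of what "infrastructure as code" can mean here, the cluster definition lives in version control as declarative configuration rather than in manual runbooks. The sketch below is a minimal, hypothetical Ansible-style playbook; the host group and role names are our own illustrative assumptions, not taken from any specific deployment:

```yaml
# deploy-compute.yml - hypothetical sketch of defining an HPC cluster as code.
# Host group and role names are illustrative assumptions only.
- name: Configure HPC compute nodes
  hosts: compute
  become: true
  roles:
    - common          # OS baseline: users, time sync, monitoring agents
    - interconnect    # high-performance network drivers and tuning
    - slurm_compute   # join the nodes to the workload manager
```

Because the whole environment is captured this way, rebuilding, scaling, or reproducing the cluster on different infrastructure becomes a repeatable, automated operation rather than a hand-crafted one.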
For HPC, this recognises that although use of public cloud is growing, most organisations are currently better placed to exploit their own on-premise resources in order to maximise their investment in advanced high-performance technologies.
At GTC21 (GTC registration required), Prof. DK Panda and Dr. Paul Calleja gave presentations on the new NVIDIA Data Processing Unit (DPU) and its use in cloud-native supercomputing environments, respectively, to deliver secure HPC platforms for clinical research without compromising performance. StackHPC has been collaborating with the University of Cambridge for a number of years on this new mode of operation for HPC services.
However, this does not mean that the use of public cloud for HPC will remain static: over time, particular workflows may well migrate to public cloud, as Dr. Calleja pointed out. To be prepared for this, on-premise HPC needs to move to a more cloud-native model, ensuring that operations can take advantage of a range of cloud resources (not necessarily tied to a single Cloud Service Provider) and adopt a hybrid cloud model. Achieving this level of interoperability, however, requires renewed investment in DevOps.
StackHPC's experience and expertise in high-performance networking and cloud methodologies provide a unique capability to address these challenges and to smooth the transition to this new way of engineering HPC systems.
The Software-Defined Supercomputer
For more details, please watch our recent presentation from the 2020 OpenInfra Summit:
Get in touch
If you would like to get in touch, we would love to hear from you. Reach out to us via Twitter or directly via our contact page.