Principles for Automated and Reproducible Benchmarking

Tuomas Koskela, Ilektra Christidi, Mosè Giordano, Emily Dubrovska, Jamie Quinn, Christopher Maynard, Dave Case, Kaan Olgu, Tom Deakin

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)

2 Citations (Scopus)

Abstract

The diversity in processor technology used by High Performance Computing (HPC) facilities is growing, and so applications must be written in such a way that they can attain high levels of performance across a range of different CPUs, GPUs, and other accelerators. Measuring application performance across this wide range of platforms becomes crucial, but there are significant challenges to do this rigorously, in a time efficient way, whilst assuring results are scientifically meaningful, reproducible, and actionable. This paper presents a methodology for measuring and analysing the performance portability of a parallel application and shares a software framework which combines and extends adopted technologies to provide a usable benchmarking tool. We demonstrate the flexibility and effectiveness of the methodology and benchmarking framework by showcasing a variety of benchmarking case studies which utilise a stable of supercomputing resources at a national scale.
Original language: English
Title of host publication: SC-W '23
Subtitle of host publication: Proceedings of the SC '23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis
Place of Publication: Online
Publisher: Association for Computing Machinery (ACM)
Pages: 609–618
Number of pages: 10
ISBN (Electronic): 9798400707858
DOIs
Publication status: Published - 12 Nov 2023
Event: Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis: Workshop on Artificial Intelligence and Machine Learning for Scientific Applications - Denver, United States
Duration: 12 Nov 2023 – 17 Nov 2023
https://ai4s.github.io/

Publication series

Name: ACM International Conference Proceeding Series

Workshop

Workshop: Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis
Abbreviated title: AI4S'23
Country/Territory: United States
City: Denver
Period: 12/11/23 – 17/11/23

Bibliographical note

Funding Information:
This work was supported by the Engineering and Physical Sciences Research Council [EP/X031829/1].

Funding Information:
This work used the DiRAC@Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/P002293/1, ST/R002371/1 and ST/S002502/1, Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure.

Funding Information:
This work used the Isambard 2 UK National Tier-2 HPC Service (http://gw4.ac.uk/isambard/) operated by GW4 and the UK Met Office, and funded by EPSRC (EP/T022078/1).

Funding Information:
This work was performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge Research Computing Service (www.csd3.cam.ac.uk), provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/T022159/1), and DiRAC funding from the Science and Technology Facilities Council (www.dirac.ac.uk).

Funding Information:
The authors gratefully acknowledge the computing time provided to them on the high-performance computers Noctua2 at the NHR Center PC2. These are funded by the Federal Ministry of Education and Research and the state governments participating on the basis of the resolutions of the GWK for the national high-performance computing at universities (www.nhr-verein.de/unsere-partner).

Publisher Copyright:
© 2023 Owner/Author.
