Enabling task parallelism for many-core architectures

Student thesis: Doctoral Thesis, Doctor of Philosophy (PhD)

Abstract

The requirements placed on computer architectures by modern computational workloads have driven constant performance improvements. In the last 15 years, the largest source of these improvements has been the increasing number of processor cores. Leveraging the performance of these multi- and many-core architectures remains a significant problem for application developers. This has led to the rise of numerous new parallel programming models, each with its own interface, underlying implementation mechanisms, and range of supported architectures. The two types of parallelism offered by these programming models, loop-based and task-based, are typically used to parallelise regular and irregular computations, respectively. However, the degree of support for the two types varies across programming models. Parallel loop constructs are widely used and supported, whilst task-parallel constructs often do not allow the inherent parallelism within applications to be fully expressed. Additionally, support for task parallelism on many-core architectures, such as GPUs, is limited or non-existent. Thus, irregular computations cannot fully realise the performance of these architectures. This thesis investigates the current and future state of task parallelism on many-core architectures. To establish a point of comparison, the expressiveness and performance of current parallel programming models are first explored on CPU architectures, through an implementation of the fast multipole method (FMM). This widely used irregular method is then employed to explore new and emerging tasking frameworks for GPUs, leading to the development of a new task-parallel runtime for these architectures. The performance of this runtime is evaluated with both the FMM and a range of commonly used task-parallel benchmarks, showing significant improvements over the state of the art and informing recommendations for other task-parallel runtimes.
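To illustrate the distinction the abstract draws between loop-based parallelism for regular computations and task-based parallelism for irregular ones, the OpenMP sketch below contrasts the two styles: a parallel loop over a uniform iteration space, and a task-per-subtree tree traversal of the kind an FMM-style method requires. This is a minimal illustration only, not code from the thesis; the Node structure and the per-node work are hypothetical placeholders.

```cpp
#include <omp.h>
#include <vector>
#include <cstddef>

// Loop-based parallelism: a regular computation with a fixed, uniform
// iteration space maps directly onto a parallel-for construct.
void scale(std::vector<double>& x, double a) {
    #pragma omp parallel for
    for (std::size_t i = 0; i < x.size(); ++i)
        x[i] *= a;
}

// Task-based parallelism: an irregular computation such as a tree
// traversal discovers its work dynamically, so each subtree becomes a task.
struct Node {
    double value = 0.0;            // hypothetical per-node data
    std::vector<Node*> children;   // arbitrary fan-out and depth
};

void traverse(Node* n) {
    n->value *= 2.0;                       // placeholder per-node work
    for (Node* c : n->children) {
        #pragma omp task firstprivate(c)   // spawn one task per child subtree
        traverse(c);
    }
    #pragma omp taskwait                   // join the tasks for this subtree
}

void run(Node* root) {
    #pragma omp parallel
    #pragma omp single                     // one thread seeds the task graph
    traverse(root);
}
```

With tasks, the runtime schedules subtrees onto idle threads as they are discovered, whereas a parallel loop would require the irregular structure to be flattened into a fixed iteration space first.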
Date of Award: 28 Sep 2021
Original language: English
Awarding Institution
  • The University of Bristol
Supervisor: Simon N McIntosh-Smith
