Extending previous efforts to expose detailed information from the OpenMP and OmpSs runtimes about the activity and performance of task-based parallel applications.
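For illustration only, the snippet below is a minimal sketch of such runtime introspection using OMPT, the standard OpenMP tools interface introduced in OpenMP 5.0: a tool library that counts task-creation events and reports the total at shutdown. The library name, counter, and output format are made up for the example, and the actual work may rely on different, runtime-specific instrumentation (for instance, event tracing inside Nanos++) rather than OMPT.

    /* Minimal OMPT tool sketch: counts task creations and prints the total
       when the OpenMP runtime shuts down.  Build as a shared library and
       load it with OMP_TOOL_LIBRARIES=./libtaskcount.so (name illustrative). */
    #include <stdio.h>
    #include <stdatomic.h>
    #include <omp-tools.h>

    static atomic_ulong tasks_created;

    /* Called by the runtime every time a task is created. */
    static void on_task_create(ompt_data_t *encountering_task_data,
                               const ompt_frame_t *encountering_task_frame,
                               ompt_data_t *new_task_data,
                               int flags, int has_dependences,
                               const void *codeptr_ra) {
        atomic_fetch_add(&tasks_created, 1);
    }

    static int tool_initialize(ompt_function_lookup_t lookup,
                               int initial_device_num, ompt_data_t *tool_data) {
        ompt_set_callback_t set_callback =
            (ompt_set_callback_t) lookup("ompt_set_callback");
        set_callback(ompt_callback_task_create, (ompt_callback_t) on_task_create);
        return 1;  /* non-zero keeps the tool active */
    }

    static void tool_finalize(ompt_data_t *tool_data) {
        printf("[tool] tasks created: %lu\n",
               (unsigned long) atomic_load(&tasks_created));
    }

    /* Entry point the OpenMP runtime looks up when loading a tool. */
    ompt_start_tool_result_t *ompt_start_tool(unsigned int omp_version,
                                              const char *runtime_version) {
        static ompt_start_tool_result_t result = { tool_initialize,
                                                   tool_finalize,
                                                   { .value = 0 } };
        return &result;
    }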
Addressing design issues for task affinity in OpenMP. Exploring and evaluating several approaches using the Nanos++ and LLVM OpenMP runtimes.
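As a hedged sketch of the kind of construct involved, the code below uses the task affinity clause in the form that was eventually standardized in OpenMP 5.0, hinting that each task should run close to the data block it touches; the prototypes evaluated in Nanos++ and the LLVM OpenMP runtime may have used different syntax or semantics, and the array and block size here are illustrative.

    #include <stdio.h>
    #include <stdlib.h>

    #define N  (1 << 20)
    #define BS (1 << 14)

    int main(void) {
        double *a = malloc(N * sizeof(double));

        #pragma omp parallel
        #pragma omp single
        for (int i = 0; i < N; i += BS) {
            /* Hint to the scheduler: execute this task near the memory
               holding the block a[i:BS] (e.g., on its NUMA node). */
            #pragma omp task affinity(a[i:BS]) firstprivate(i)
            for (int j = i; j < i + BS; j++)
                a[j] = 2.0 * j;
        }

        printf("a[N-1] = %f\n", a[N - 1]);
        free(a);
        return 0;
    }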
The sixth edition of the Programming and Tuning Massively Parallel Systems summer school (PUMPS) is aimed at enriching the skills of attendees in developing applications for many-core processors.
The objective of this course is to understand the fundamental concepts underlying message-passing and shared-memory programming models.
A parallel programming course based on several HPC tools: MPI, OpenMP, Paraver, etc.
We present the specification of task-parallel reductions and explore the issues it raises for programmers and software vendors regarding programming transparency, as well as its impact on the current standard with respect to nesting, untied task support, and task data dependencies.
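To make the feature concrete, here is a non-authoritative sketch of a task-parallel reduction over a recursive decomposition, written with the task_reduction and in_reduction clauses in the form that later entered OpenMP 5.0; the array, cutoff, and function name are invented for the example, which requires a compiler with OpenMP 5.0 support.

    #include <stdio.h>

    #define N (1 << 20)
    static int  v[N];
    static long sum = 0;

    /* Recursive decomposition: every created task participates in the
       reduction on 'sum' declared by the enclosing taskgroup, so updates
       go to a task-local view combined when the taskgroup region ends. */
    static void tree_sum(int lo, int hi) {
        if (hi - lo <= 4096) {
            long local = 0;
            for (int i = lo; i < hi; i++)
                local += v[i];
            sum += local;          /* accumulates into the task's view */
            return;
        }
        int mid = lo + (hi - lo) / 2;
        #pragma omp task in_reduction(+: sum)
        tree_sum(lo, mid);
        #pragma omp task in_reduction(+: sum)
        tree_sum(mid, hi);
        #pragma omp taskwait
    }

    int main(void) {
        for (int i = 0; i < N; i++)
            v[i] = 1;

        #pragma omp parallel
        #pragma omp single
        #pragma omp taskgroup task_reduction(+: sum)
        tree_sum(0, N);

        printf("sum = %ld (expected %d)\n", sum, N);
        return 0;
    }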
An extension to the OpenMP task construct that adds support for reductions in while-loops and general recursive algorithms.
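Complementing the recursive sketch above, the while-loop case can be expressed with the same clauses; the sketch below sums a linked list whose length is not known up front. The list type and values are illustrative, and the syntax shown is the OpenMP 5.0 form rather than the proposal's original one.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct node { int value; struct node *next; } node_t;

    int main(void) {
        /* Build a list holding the values 1..100. */
        node_t *head = NULL;
        for (int i = 100; i >= 1; i--) {
            node_t *n = malloc(sizeof *n);
            n->value = i;
            n->next  = head;
            head = n;
        }

        int sum = 0;
        #pragma omp parallel
        #pragma omp single
        #pragma omp taskgroup task_reduction(+: sum)
        {
            /* The iteration count is unknown, so a worksharing-loop
               reduction does not apply; each task contributes instead. */
            node_t *p = head;
            while (p != NULL) {
                #pragma omp task in_reduction(+: sum) firstprivate(p)
                sum += p->value;
                p = p->next;
            }
        }

        printf("sum = %d (expected 5050)\n", sum);
        return 0;
    }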