Exploiting Data Locality on Scalable Shared Memory Machines with Data Parallel Programs

Abstract

OpenMP offers a high-level interface for parallel programming on scalable shared memory (SMP) architectures, providing the user with simple work-sharing directives and relying on the compiler to generate parallel programs based on thread parallelism. However, the lack of language features for exploiting data locality often results in poor performance, since the non-uniform memory access times on scalable SMP machines cannot be neglected. HPF, the de-facto standard for data-parallel programming, offers a rich set of data distribution directives for exploiting data locality, but has mainly been targeted towards distributed memory machines. In this paper we describe an optimized execution model for HPF programs on SMP machines that relies on the mechanisms provided by OpenMP for work sharing and thread parallelism while exploiting data locality based on user-specified distribution directives. This execution model has been implemented in the ADAPTOR HPF compilation system, and experimental results verify the efficiency of the chosen approach.
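To illustrate the idea described in the abstract, the following is a minimal sketch (not the actual code generated by ADAPTOR) of how an HPF BLOCK distribution can be mirrored in OpenMP: a statically scheduled parallel loop assigns contiguous iteration blocks to threads, so first-touch initialization places each block of the array in the memory close to the thread that later operates on it. The program name, array sizes, and directive placement are illustrative assumptions.

```fortran
! Sketch only: HPF-style distribution directives (seen as comments by a
! plain Fortran compiler) alongside an OpenMP translation that preserves
! data locality via static scheduling and first-touch placement.
program block_locality
  implicit none
  integer, parameter :: n = 1000000
  real, allocatable  :: a(:), b(:)
!HPF$ DISTRIBUTE a(BLOCK)          ! data-parallel view: block distribution
!HPF$ ALIGN b(i) WITH a(i)
  integer :: i

  allocate(a(n), b(n))

!$OMP PARALLEL DO SCHEDULE(STATIC) ! first touch: each thread initializes its block
  do i = 1, n
     a(i) = 0.0
     b(i) = real(i)
  end do
!$OMP END PARALLEL DO

!$OMP PARALLEL DO SCHEDULE(STATIC) ! same blocking: threads access local data only
  do i = 1, n
     a(i) = 2.0 * b(i)
  end do
!$OMP END PARALLEL DO

  print *, 'a(n) =', a(n)
end program block_locality
```

Because SCHEDULE(STATIC) partitions the iteration space into the same contiguous blocks as the BLOCK distribution, the work sharing and the (first-touch) data placement coincide, which is the locality effect the execution model aims for.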

Authors
  • Benkner, Siegfried
  • Brandes, T.
Shortfacts
Category
Technical Report
Divisions
Scientific Computing
Publisher
Institute for Software Science, University of Vienna
Date
December 2000
Official URL
http://www.par.univie.ac.at/publications/download/...