A transformation framework for optimizing task-parallel programs
Published in
2013, Volume 35, Issue 1
Abstract
Task parallelism has increasingly become a trend with programming models such as OpenMP 3.0, Cilk, Java Concurrency, X10, Chapel and Habanero-Java (HJ) to address the requirements of multicore programmers. While task parallelism increases productivity by allowing the programmer to express multiple levels of parallelism, it can also lead to performance degradation due to increased overheads. In this article, we introduce a transformation framework for optimizing task-parallel programs with a focus on task creation and task termination operations. These operations can appear explicitly in constructs such as async, finish in X10 and HJ, task, taskwait in OpenMP 3.0, and spawn, sync in Cilk, or implicitly in composite code statements such as foreach and ateach loops in X10, forall and foreach loops in HJ, and parallel loop in OpenMP. Our framework includes a definition of data dependence in task-parallel programs, a happens-before analysis algorithm, and a range of program transformations for optimizing task parallelism. Broadly, our transformations cover three different but interrelated optimizations: (1) finish-elimination, (2) forall-coarsening, and (3) loop-chunking. Finish-elimination removes redundant task termination operations, forall-coarsening replaces expensive task creation and termination operations with more efficient synchronization operations, and loop-chunking extracts useful parallelism from ideal parallelism. All three optimizations are specified in an iterative transformation framework that applies a sequence of relevant transformations until a fixed point is reached. Further, we discuss the impact of exception semantics on the specified transformations, and extend them to handle task-parallel programs with precise exception semantics. Experimental results were obtained for a collection of task-parallel benchmarks on three multicore platforms: a dual-socket 128-thread (16-core) Niagara T2 system, a quad-socket 16-core Intel Xeon SMP, and a quad-socket 32-core Power7 SMP. We have observed that the proposed optimizations interact with each other in a synergistic way, and result in an overall geometric average performance improvement between 6.28× and 10.30×, measured across all three platforms for the benchmarks studied. © 2013 ACM.
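To make the optimizations concrete, below is a minimal before/after sketch in C with OpenMP 3.0 tasks, one of the models the paper targets. The function names, chunk size, and toy computation are illustrative assumptions, not code from the paper; the framework itself is defined over async/finish-style constructs and performs such rewrites automatically, guided by its happens-before analysis, rather than by hand.

```c
/* Sketch of two of the paper's optimizations in OpenMP-3.0 C:
 * removing a redundant task-termination operation (analogous to
 * finish-elimination) and grouping iterations into chunks
 * (loop-chunking). Compile with: gcc -fopenmp sketch.c */
#include <stdio.h>
#include <omp.h>

#define N 1000000
#define CHUNK 1000   /* hypothetical chunk size */

static double a[N], b[N];

/* Before: one task per iteration ("ideal" parallelism), plus a
 * taskwait that is redundant because the barrier at the end of the
 * enclosing region already waits for all outstanding tasks. */
void before(void) {
    #pragma omp parallel
    #pragma omp single
    {
        for (int i = 0; i < N; i++) {
            #pragma omp task firstprivate(i)
            a[i] = b[i] * 2.0;
        }
        #pragma omp taskwait  /* redundant task-termination operation */
    }
}

/* After: the redundant taskwait is eliminated, and iterations are
 * chunked so each task does CHUNK iterations, extracting useful
 * parallelism from the ideal parallelism above. */
void after(void) {
    #pragma omp parallel
    #pragma omp single
    {
        for (int lo = 0; lo < N; lo += CHUNK) {
            #pragma omp task firstprivate(lo)
            {
                int hi = lo + CHUNK < N ? lo + CHUNK : N;
                for (int i = lo; i < hi; i++)
                    a[i] = b[i] * 2.0;
            }
        }
        /* no taskwait: the implicit barrier closing the parallel
         * region already guarantees all tasks have completed */
    }
}

int main(void) {
    for (int i = 0; i < N; i++) b[i] = (double)i;
    before();
    after();
    printf("a[42] = %f\n", a[42]);
    return 0;
}
```

Forall-coarsening, the third optimization, is not shown; in OpenMP terms it roughly corresponds to hoisting thread creation out of a repeated parallel loop so that workers are spawned once and resynchronized with cheaper barrier operations, rather than forked and joined on every iteration.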
About the journal
Journal: ACM Transactions on Programming Languages and Systems
ISSN: 0164-0925
Open Access: No
Concepts (17)
  •  Experimentation
  •  Iterative transformation
  •  Multiple levels of parallelism
  •  Performance
  •  Performance degradation
  •  Performance improvements
  •  Program transformations
  •  Synchronization operation
  •  Algorithms
  •  Application programming interfaces (API)
  •  Benchmarking
  •  Iterative methods
  •  Java programming language
  •  Multicore programming
  •  Parallel architectures
  •  Semantics
  •  Optimization