Evaluating the Impact of Programming Language Features on the Performance of Parallel Applications on Cluster Architectures

Published: 01 Jan 2003 · Last Modified: 13 Nov 2024 · LCPC 2003 · CC BY-SA 4.0
Abstract: We evaluate the impact of programming language features on the performance of parallel applications on modern parallel architectures, particularly for the demanding case of sparse integer codes. We compare a number of parallel programming models (Pthreads, OpenMP, MPI, UPC) on both shared- and distributed-memory architectures. We find that language features can make parallel programs easier to write, but cannot hide the underlying communication costs of the target parallel architecture. Powerful compiler analysis and optimization can help reduce software overhead, but features such as fine-grain remote accesses are inherently expensive on clusters. To avoid large reductions in performance, language features must not degrade the performance of local computations.