How to write parallel programs: a guide to the perplexed
- 1 September 1989
- journal article
- Published by Association for Computing Machinery (ACM) in ACM Computing Surveys
- Vol. 21 (3), 323-357
- https://doi.org/10.1145/72551.72553
Abstract
We present a framework for parallel programming, based on three conceptual classes for understanding parallelism and three programming paradigms for implementing parallel programs. The conceptual classes are result parallelism, which centers on parallel computation of all elements in a data structure; agenda parallelism, which specifies an agenda of tasks for parallel execution; and specialist parallelism, in which specialist agents solve problems cooperatively. The programming paradigms center on live data structures that transform themselves into result data structures; distributed data structures that are accessible to many processes simultaneously; and message passing, in which all data objects are encapsulated within explicitly communicating processes. There is a rough correspondence between the conceptual classes and the programming methods, as we discuss. We begin by outlining the basic conceptual classes and programming paradigms, and by sketching an example solution under each of the three paradigms. The final section develops a simple example in greater detail, presenting and explaining code and discussing its performance on two commercial parallel computers: an 18-node shared-memory multiprocessor and a 64-node distributed-memory hypercube. The middle section bridges the gap between the abstract and the practical by giving an overview of how the basic paradigms are implemented. We focus on the paradigms, not on machine architecture or programming languages: The programming methods we discuss are useful on many kinds of parallel machine, and each can be expressed in several different parallel programming languages. Our programming discussion and the examples use the parallel language C-Linda for several reasons: The main paradigms are all simple to express in Linda; efficient Linda implementations exist on a wide variety of parallel machines; and a wide variety of parallel programs have been written in Linda.
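The agenda-parallelism/distributed-data-structure pairing the abstract describes translates directly into C-Linda as a "bag of tasks". The sketch below is illustrative, not code from the paper: a master deposits task tuples into tuple space, workers spawned with eval() repeatedly withdraw tasks and deposit results, and the master collects them. The names compute(), NWORKERS, and NTASKS are hypothetical placeholders, and the real_main entry point follows the convention of common C-Linda implementations.

```c
/* A minimal C-Linda sketch of agenda parallelism realized as a
 * distributed data structure: the "bag of tasks".  out() adds a
 * tuple to tuple space, in() withdraws a matching tuple (blocking),
 * and eval() creates a live tuple evaluated by a new process.
 * compute(), NWORKERS, and NTASKS are hypothetical placeholders. */

#define NWORKERS 8
#define NTASKS   100

int compute(int task);               /* hypothetical task body */

int worker(void)
{
    int task, result;
    for (;;) {
        in("task", ?task);           /* withdraw a task from the bag  */
        if (task < 0)                /* poison pill: no work remains  */
            return 0;
        result = compute(task);
        out("result", task, result); /* deposit the result            */
    }
}

int real_main(int argc, char *argv[])
{
    int i, task, result;

    for (i = 0; i < NWORKERS; i++)
        eval("worker", worker());    /* spawn workers as live tuples  */
    for (i = 0; i < NTASKS; i++)
        out("task", i);              /* fill the bag with tasks       */
    for (i = 0; i < NTASKS; i++)
        in("result", ?task, ?result); /* collect every result         */
    for (i = 0; i < NWORKERS; i++)
        out("task", -1);             /* one pill per worker           */
    return 0;
}
```

Because workers draw tasks from a shared bag rather than being assigned them, the load balances itself, and the same source runs unchanged on shared-memory and distributed-memory Linda implementations, which is the portability point the abstract makes.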