Instruction-Level Parallel Processing
- 13 September 1991
- research article
- Published by American Association for the Advancement of Science (AAAS) in Science
- Vol. 253 (5025), 1233-1241
- https://doi.org/10.1126/science.253.5025.1233
Abstract
The performance of microprocessors has increased steadily over the past 20 years at a rate of about 50% per year. This is the cumulative result of architectural improvements as well as increases in circuit speed. Moreover, this improvement has been obtained in a transparent fashion, that is, without requiring programmers to rethink their algorithms and programs, thereby enabling the tremendous proliferation of computers that we see today. To continue this performance growth, microprocessor designers have incorporated instruction-level parallelism (ILP) into new designs. ILP utilizes the parallel execution of the lowest level computer operations—adds, multiplies, loads, and so on—to increase performance transparently. The use of ILP promises to make possible, within the next few years, microprocessors whose performance is many times that of a CRAY-1S. This article provides an overview of ILP, with an emphasis on ILP architectures—superscalar, VLIW, and dataflow processors—and the compiler techniques necessary to make ILP work well.
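To make the abstract's central idea concrete, here is a minimal, hypothetical sketch (not taken from the article) of how a VLIW compiler or superscalar issue unit might group independent low-level operations into packets that can execute in parallel. The instruction representation and function names are illustrative assumptions only; real schedulers also model latencies, resource limits, and register renaming.

```python
def schedule(instrs):
    """Greedily group (dest, srcs) instructions into parallel issue packets.

    An instruction starts a new packet if it conflicts with the current one:
      - RAW (true) dependence: it reads a value the packet writes,
      - WAW (output) dependence: it writes a destination the packet writes,
      - WAR (anti) dependence: it writes a destination the packet reads.
    Independent operations (e.g., adds and multiplies on disjoint values)
    land in the same packet and could issue together on ILP hardware.
    """
    packets = [[]]
    written, read = set(), set()  # names touched by the current packet
    for dest, srcs in instrs:
        raw = any(s in written for s in srcs)
        waw = dest in written
        war = dest in read
        if raw or waw or war:
            packets.append([])      # dependence found: start a new packet
            written, read = set(), set()
        packets[-1].append((dest, srcs))
        written.add(dest)
        read.update(srcs)
    return packets

# Illustrative three-instruction program:
prog = [
    ("t1", ["a", "b"]),    # t1 = a + b
    ("t2", ["c", "d"]),    # t2 = c * d   (independent of t1)
    ("t3", ["t1", "t2"]),  # t3 = t1 + t2 (depends on both)
]
print(schedule(prog))  # the first two issue together; t3 waits
```

Under this sketch, the first two operations share a packet and the dependent add issues one cycle later, which is exactly the transparency the abstract describes: the source program is sequential, and the parallelism is recovered below the level the programmer sees.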