16-bit vs. 32-bit instructions for pipelined microprocessors
- 1 May 1993
- journal article
- Published by Association for Computing Machinery (ACM) in ACM SIGARCH Computer Architecture News
- Vol. 21 (2), 237-246
- https://doi.org/10.1145/173682.165159
Abstract
In any stored-program computer system, information is constantly transferred between the memory and the instruction processor. Machine instructions are a major portion of this traffic. Since transfer bandwidth is a limited resource, inefficiency in the encoding of instruction information (low code density) can have definite hardware and performance costs. Starting with a parameterized baseline RISC design, we compare performance for two instruction encodings for the same instruction processing core. One is a variant of DLX, a typical 32-bit RISC instruction set. The other is a 16-bit format which sacrifices some expressive power while retaining essential RISC features. Using optimizing compilers and software simulation, we measure code density and path length for a suite of benchmark programs, relating performance differences to specific instruction set features. We measure time-to-completion performance while varying memory latency and instruction cache size parameters. The 16-bit format is shown to have significant cost-performance advantages over the 32-bit format under typical memory system performance constraints.
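The trade-off the abstract describes can be made concrete with a first-order model: a denser 16-bit encoding executes more instructions (longer path length) but occupies fewer bytes, so it misses the instruction cache less often and pays less fetch penalty. The sketch below is illustrative only; the instruction-count ratio, code-size ratio, cache size, miss-rate curve, and penalties are assumptions for exposition, not figures from the paper.

```python
# A minimal sketch of the code-density trade-off, assuming a 16-bit
# encoding that executes ~15% more instructions but halves the bytes
# per instruction. All parameters are hypothetical, not measurements
# from the paper.

def time_to_completion(path_length, code_bytes, icache_bytes, miss_penalty,
                       k=0.01):
    """Cycles = one per instruction, plus a fetch-miss cost whose rate
    grows linearly with the code-footprint/cache-size ratio (a crude
    stand-in for a real miss-rate curve)."""
    miss_rate = min(1.0, k * code_bytes / icache_bytes)
    return path_length * (1.0 + miss_rate * miss_penalty)

insns_32 = 10_000                 # dynamic instruction count, 32-bit format
insns_16 = int(1.15 * insns_32)   # assumed 15% longer path length
bytes_32 = 4 * insns_32           # 4 bytes per instruction
bytes_16 = 2 * insns_16           # 2 bytes per instruction

for penalty in (4, 16, 64):       # cache-miss penalty in cycles
    t32 = time_to_completion(insns_32, bytes_32, 8192, penalty)
    t16 = time_to_completion(insns_16, bytes_16, 8192, penalty)
    print(f"miss penalty {penalty:2d}: 32-bit {t32:8.0f}  16-bit {t16:8.0f}")
```

Under these assumed parameters the 32-bit format wins when misses are cheap (its shorter path length dominates), while the 16-bit format wins as the miss penalty grows, which is the qualitative behavior the abstract reports under typical memory system constraints.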