A case for direct-mapped caches
- 1 December 1988
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in Computer
- Vol. 21 (12), 25-40
- https://doi.org/10.1109/2.16187
Abstract
Direct-mapped caches are defined, and it is shown that trends toward larger cache sizes and faster hit times favor their use. The arguments are restricted initially to single-level caches in uniprocessors and are then extended to two-level cache hierarchies. How and when these arguments for uniprocessor caches apply to caches in multiprocessors is also discussed.
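The paper gives the full treatment of direct-mapped caches; as a rough illustration only (the sizes and names below are assumptions, not taken from the paper), a direct-mapped cache maps each memory block to exactly one cache line, so a lookup requires a single tag comparison. That single comparison is what keeps hit times low as cache sizes grow.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Minimal sketch of a direct-mapped cache lookup. LINE_SIZE and NUM_LINES
 * are assumed values chosen for illustration. */

#define LINE_SIZE  32    /* bytes per cache line (assumed) */
#define NUM_LINES  1024  /* number of cache lines (assumed) */

typedef struct {
    bool     valid;
    uint32_t tag;
    uint8_t  data[LINE_SIZE];
} cache_line_t;

static cache_line_t cache[NUM_LINES];

/* Returns true on a hit for the given address: one index, one tag compare. */
bool cache_hit(uint32_t addr)
{
    uint32_t block = addr / LINE_SIZE;   /* strip the byte offset within a line */
    uint32_t index = block % NUM_LINES;  /* exactly one candidate line          */
    uint32_t tag   = block / NUM_LINES;  /* remaining high-order address bits   */

    return cache[index].valid && cache[index].tag == tag;
}

int main(void)
{
    /* Cold cache: every lookup misses. */
    printf("hit? %d\n", cache_hit(0x1234u));
    return 0;
}
```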