Competitive snoopy caching
- 1 October 1986
- Conference paper
- Published by Institute of Electrical and Electronics Engineers (IEEE)
- Vol. 3 (ISSN 0272-5428), pp. 244-254
- https://doi.org/10.1109/sfcs.1986.14
Abstract
In a snoopy cache multiprocessor system, each processor has a cache in which it stores blocks of data. Each cache is connected to a bus used to communicate with the other caches and with main memory. For several of the proposed models of snoopy caching, we present new on-line algorithms which decide, for each cache, which blocks to retain and which to drop in order to minimize communication over the bus. We prove that, for any sequence of operations, our algorithms' communication costs are within a constant factor of the minimum required for that sequence; for some of our algorithms we prove that no on-line algorithm has this property with a smaller constant.
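The abstract does not spell out the retention rule itself, so the following is a minimal illustrative sketch rather than the paper's pseudocode: a rent-or-buy style counter for a single shared block, in which a cache pays one bus cycle to snoop each remote write while it holds the block and drops the block once that accumulated cost matches the cost of re-reading it. The class name BlockState, the unit costs, and the small driver at the bottom are assumptions introduced for this example.

```python
# Illustrative sketch (not the paper's exact pseudocode): a rent-or-buy style
# drop rule for one block in one cache. Assumed cost model: snooping a remote
# write to a held block costs 1 bus cycle; re-reading a dropped block costs
# p bus cycles (the block size); dropping a block is free.

class BlockState:
    def __init__(self, block_size: int):
        self.p = block_size      # bus cost to (re)load the whole block
        self.cached = True       # does this cache currently hold the block?
        self.snoop_cost = 0      # bus cycles spent updating the held copy
        self.total_cost = 0      # total bus traffic charged to this cache

    def remote_write(self) -> None:
        """Another processor writes a word of this block."""
        if self.cached:
            self.total_cost += 1          # pay one cycle to update our copy
            self.snoop_cost += 1
            if self.snoop_cost >= self.p:
                self.cached = False       # accumulated cost equals a reload: drop
                self.snoop_cost = 0

    def local_read(self) -> None:
        """This processor reads the block."""
        if not self.cached:
            self.total_cost += self.p     # miss: re-read the block over the bus
            self.cached = True
            self.snoop_cost = 0


if __name__ == "__main__":
    blk = BlockState(block_size=4)
    for _ in range(10):          # a burst of remote writes...
        blk.remote_write()
    blk.local_read()             # ...followed by a local re-read
    print(blk.total_cost)        # 4 (snoop) + 4 (reload) = 8; dropping before
                                 # the burst would have cost only the reload, 4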