Using processor-cache affinity information in shared-memory multiprocessor scheduling

Abstract
In a shared-memory multiprocessor system, it may be more efficient to schedule a task on one processor than on another if relevant data already reside in a particular processor's cache. The effects of this type of processor affinity are examined. It is observed that tasks continuously alternate between executing on a processor and relinquishing that processor due to I/O, synchronization, quantum expiration, or preemption. Queuing network models of different abstract scheduling policies are formulated, spanning the range from ignoring affinity to fixing tasks on processors. These models are solved via mean value analysis where possible, and by simulation otherwise. An analytic cache model is developed and used in these scheduling models to include the effects of the initial burst of cache misses that a task experiences when it returns to a processor for execution. A mean-value technique is also developed and used in the scheduling models to include the effects of increased bus traffic due to these bursts of cache misses. Only a small amount of affinity information needs to be maintained for each task. The importance of having a policy that adapts its behavior to changes in system load is demonstrated.
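As an illustration only, the Python sketch below shows the kind of minimal per-task affinity state the abstract alludes to: each task records the identity of the processor it last ran on, and the dispatcher prefers that processor when it is idle, falling back to any idle processor otherwise. The class and method names are hypothetical, and this is a simplified stand-in, not the scheduling policies analyzed in the paper.

from collections import deque
from typing import Optional

class AffinityScheduler:
    # Hypothetical illustration: the only affinity information kept per task
    # is the id of the processor it last executed on.
    def __init__(self, num_processors: int):
        self.idle = set(range(num_processors))  # processors with no running task
        self.last_cpu = {}                      # task id -> processor it last ran on
        self.ready = deque()                    # runnable tasks awaiting a processor

    def task_ready(self, task: int) -> Optional[int]:
        """Dispatch a task that has become runnable; return the chosen processor,
        or None if all processors are busy and the task must queue."""
        preferred = self.last_cpu.get(task)
        if preferred is not None and preferred in self.idle:
            cpu = preferred                     # its cache may still hold the task's data
        elif self.idle:
            cpu = next(iter(self.idle))         # no affinity benefit available; take any idle one
        else:
            self.ready.append(task)
            return None
        self.idle.discard(cpu)
        self.last_cpu[task] = cpu               # record affinity for the next dispatch
        return cpu

    def task_released(self, task: int, cpu: int) -> Optional[int]:
        """The task running on `cpu` released it (I/O, synchronization, quantum
        expiration, or preemption); dispatch the next queued task, if any."""
        self.idle.add(cpu)
        if self.ready:
            return self.task_ready(self.ready.popleft())
        return None

The alternation between task_ready and task_released mirrors the execute/release cycle described above; a load-adaptive policy of the kind the paper argues for would additionally weigh how stale a task's cache contents are likely to be before honoring its affinity.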
