PowerPC e500 Core Family Reference Manual, Rev. 1
Freescale Semiconductor
Chapter 11
L1 Caches
The e500 core complex contains separate 32-Kbyte, eight-way set associative level 1 (L1)
instruction and data caches to provide the execution units and registers rapid access to instructions
and data.
This chapter describes the organization of the on-chip L1 instruction and data caches, cache
coherency protocols, cache control instructions, and various cache operations. It describes the
interaction that occurs in the memory subsystem, which consists of the memory management unit
(MMU), the caches, the load/store unit (LSU), and the core complex bus (CCB). This chapter also
describes the replacement algorithms used for L1 caches.
Note that in this chapter, the term ‘multiprocessor’ is used in the context of maintaining cache
coherency. Such multiprocessor devices may be actual processors or other devices that access
system memory, maintain their own caches, and function as bus masters, and therefore require
cache coherency.
11.1 Overview
The core complex L1 cache implementation has the following characteristics:
• Separate 32-Kbyte instruction and data caches (Harvard architecture)
• Eight-way set associative, non-blocking caches
• Physically addressed cache directories. The physical (real) address tag is stored in the cache
directory.
• 2-cycle access time provides 3-cycle read latency for instruction and data cache accesses;
pipelined accesses provide single-cycle throughput from the caches.
• Instruction and data caches have 32-byte cache blocks. A cache block is the block of
memory that a coherency state describes, also referred to as a cache line.
• Four-state modified/exclusive/shared/invalid (MESI) protocol supported for the data cache.
See Section 11.3.1, “Data Cache Coherency Model.”
• Both L1 caches support parity generation and checking (enabled through L1CSR0 and
L1CSR1 bits), as follows:
— Instruction cache: 1 parity bit per byte of instruction
— Data cache: 1 parity bit per byte of data
See Section 11.2.3, “L1 Cache Parity.”