To determine whether the address is already allocated in the cache, the following steps are taken (a sketch of this address decomposition follows the list):
1. The cache set index (virtual address bits A[20:26]) is used to select one cache set. A set is defined 
as the grouping of four or eight lines (one from each way) corresponding to the same index into 
the cache array. 
2. The higher-order physical address bits A[0:19] are used as the tag reference, or to update the 
cache line tag field.
3. The four or eight tags from the selected cache set are compared with the tag reference. If any one 
of the tags matches the tag reference and the tag status is valid, a cache hit has occurred.
4. Virtual address bits A[27:28] are used to select one of the four doublewords in each line. A cache 
hit indicates that the selected doubleword in that cache line contains valid data (for a read access), 
or can be written with new data depending on the status of the W access control bit from the MMU 
(for a write access).
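The decomposition above can be expressed directly in C. The sketch below is illustrative only and assumes the 32-Kbyte, 128-set organization (8-way, 32-byte lines with four doublewords); the type and function names are hypothetical, and the tag is extracted from the same 32-bit address for simplicity even though the hardware compares physical tag bits against a virtually indexed set.

```c
/* Illustrative sketch of the cache lookup address decomposition.
 * Bit numbering follows the manual's big-endian convention A[0:31]. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t tag;     /* A[0:19]  - compared against the stored line tags */
    uint32_t set;     /* A[20:26] - selects one of 128 cache sets         */
    uint32_t dword;   /* A[27:28] - selects one of 4 doublewords per line */
} cache_index_t;      /* hypothetical names, for illustration only        */

static cache_index_t decode_cache_address(uint32_t addr)
{
    cache_index_t idx;
    idx.tag   = addr >> 12;           /* upper 20 bits                    */
    idx.set   = (addr >> 5) & 0x7Fu;  /* 7-bit set index                  */
    idx.dword = (addr >> 3) & 0x3u;   /* 2-bit doubleword select          */
    return idx;
}

int main(void)
{
    cache_index_t idx = decode_cache_address(0x40001234u);
    printf("tag=0x%05X set=%u dword=%u\n",
           (unsigned)idx.tag, (unsigned)idx.set, (unsigned)idx.dword);
    return 0;
}
```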
3.3.2.3 Cache Line Replacement Algorithm
On a cache read miss, the cache controller uses a pseudo-round-robin replacement algorithm to determine 
which cache line is replaced. There is a single replacement counter for the entire cache. 
The replacement algorithm acts as follows: on a miss, if the replacement pointer points to a way that 
is not enabled for replacement by the type of the miss access (for example, the selected line or way is 
locked), the pointer is incremented until an available way is selected (if any). After a cache line is 
successfully filled without error, the replacement pointer increments to point to the next cache way.
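The following C sketch models this behavior under stated assumptions: an 8-way cache, a software-visible counter standing in for the single hardware replacement pointer, and a hypothetical availability mask encoding which ways are enabled for replacement by the current access type.

```c
/* Illustrative model of the pseudo-round-robin victim selection. */
#include <stdint.h>

#define NUM_WAYS 8u

static uint32_t repl_ptr;   /* models the single replacement counter */

/* Returns the way to replace, or -1 if no way is enabled for replacement.
 * avail_mask: bit n set means way n may be replaced for this access. */
int select_victim_way(uint32_t avail_mask)
{
    for (uint32_t i = 0; i < NUM_WAYS; i++) {
        uint32_t way = (repl_ptr + i) % NUM_WAYS;
        if (avail_mask & (1u << way)) {
            repl_ptr = way;   /* pointer advances past locked/disabled ways */
            return (int)way;
        }
    }
    return -1;                /* every way is locked or disabled            */
}

/* After the line fill completes without error, the pointer moves on
 * to the next way. */
void on_fill_complete(void)
{
    repl_ptr = (repl_ptr + 1u) % NUM_WAYS;
}
```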
3.3.2.4 Cache Power Reduction
The device provides additional user control over cache power utilization via the L1CSR0[WID], [AWID], 
[WDD], and [AWDD] way disable bits and the L1CSR0[WAM] control bit. When WAM is set to 1, ways 
that are disabled for allocation on miss by a particular access type (instruction or data) via the 
L1CSR0[WID], [AWID], [WDD], and [AWDD] way disable bits are also disabled (not selected) during 
normal cache lookup operations, thus avoiding the power associated with reading tag and data information 
for a disabled way. This provides the capability of disabling some ways for instruction accesses and some 
ways for data accesses to reduce power. In doing so, however, certain restrictions must be followed, and 
the ability to lock by way is no longer functional, since a locked way would never be accessed.
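The effect of WAM on lookups can be modeled as below. This is a conceptual sketch, not a register programming sequence: the masks, enum, and function are hypothetical, and the actual L1CSR0 field encodings must be taken from the register description.

```c
/* Conceptual model: with WAM = 1, a way disabled for allocation by a
 * given access type is also skipped during lookup for that access type. */
#include <stdint.h>
#include <stdbool.h>

enum access_type { ACC_INST, ACC_DATA };

/* Bit n set => way n is disabled for that access type (models the
 * combined effect of the WID/AWID or WDD/AWDD way-disable bits). */
static uint8_t inst_disable_mask;
static uint8_t data_disable_mask;
static bool    wam;                 /* models L1CSR0[WAM]                 */

/* Ways whose tag and data arrays are actually read for a lookup. */
uint8_t lookup_ways(enum access_type acc)
{
    uint8_t all_ways = 0xFFu;       /* 8-way cache                        */
    if (!wam)
        return all_ways;            /* WAM = 0: every way is looked up    */
    return (acc == ACC_INST) ? (uint8_t)(all_ways & ~inst_disable_mask)
                             : (uint8_t)(all_ways & ~data_disable_mask);
}
```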
When WAM is set to 1, restrictions are required to avoid coherency issues between instruction and data 
accesses, and to avoid multiple ways hitting on a given access. The coherency restriction arises because 
a given line could be present twice in the cache: a copy in a way disabled for instruction access, which 
can be read and written by data accesses, and a second copy in a way disabled for data access, which can 
be executed via an instruction fetch. A data write to the line could then leave instruction fetches with 
stale data, in the same manner as in a non-unified cache. The other restriction is that multiple hits to 
the same line must be avoided on any given instruction or data access. This is avoided either by 
controlling the ways via the L1CSR0[WID], [AWID], [WDD], and [AWDD] bits such that no common way exists 
that can be accessed by both instructions and data, or by ensuring that MMU permissions are set so that 
no cacheable page has X (execute) permission together with R (read) or W (write) permission, that is, 
no page can be both cacheable and accessed with both instruction and data accesses.
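A minimal sketch of the first restriction follows, using the same illustrative mask convention as the sketch above: a WAM = 1 configuration is acceptable only if the ways left enabled for instruction accesses and the ways left enabled for data accesses do not overlap.

```c
/* Illustrative check of the way-partitioning restriction for WAM = 1:
 * no way may remain usable by both instruction and data accesses,
 * otherwise a line could be duplicated or hit in more than one way. */
#include <stdint.h>
#include <stdbool.h>

bool wam_partition_is_legal(uint8_t inst_disable_mask,
                            uint8_t data_disable_mask)
{
    uint8_t inst_ways = (uint8_t)~inst_disable_mask;  /* ways usable by instructions */
    uint8_t data_ways = (uint8_t)~data_disable_mask;  /* ways usable by data         */
    return (inst_ways & data_ways) == 0u;             /* no common way allowed       */
}
```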