40 Network Circuitry TB9100 Reciter Service Manual
© Tait Electronics Limited January 2006
ensure that the execution units always have a continuous flow of instructions. 
Static branch prediction minimizes disruptions to the instruction flow 
through the pipelines by pre-fetching the instructions that follow a 
program branch.
A full description of the operation and instruction set of the RISC core is 
outside the scope of this document. It is recommended that the MPC866 
user’s manual (reference 2) be consulted for comprehensive coverage of the 
topic.
Memory Caches and 
Memory Managers
Ordinarily, the fastest rate that the MPC can fetch data from external 
memory is one 32-bit instruction per two memory clock cycles. However, 
the CPU can execute two instructions per memory clock cycle since its 
internal clock is configured to be twice the memory clock (see “Clock 
Generation” on page 42 and “MPC Configuration” on page 45). 
CPU performance is therefore restricted by the transfer bandwidth 
available from its program memory.
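The four-to-one gap between the execution rate and the external fetch rate can be seen with a short calculation. The sketch below assumes a 25 MHz memory clock purely for illustration; the actual rates are set as described under “Clock Generation” on page 42.

```c
#include <assert.h>

/* Assumed figure for illustration only; the real memory clock is
 * configured as described under "Clock Generation". */
#define MEM_CLOCK_HZ 25000000UL

/* External fetch: one 32-bit instruction per two memory clock cycles. */
unsigned long external_fetch_rate(unsigned long mem_clock_hz)
{
    return mem_clock_hz / 2;
}

/* Internal execution: the CPU clock is twice the memory clock and the
 * core can retire one instruction per CPU clock, i.e. two instructions
 * per memory clock cycle. */
unsigned long cpu_issue_rate(unsigned long mem_clock_hz)
{
    return mem_clock_hz * 2;
}
```

With these assumptions the core can consume instructions four times faster than external memory can supply them, which is the shortfall the internal caches make up.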
To enable the CPU to run at full speed, internal cache memory is 
provided, from which data and instructions can be fetched without wait 
states. The MPC859T includes separate instruction and data caches of 
1 kword (4 kbytes) each.
The CPU internally implements a Harvard architecture, i.e. there are 
separate instruction and data caches, each with its own data and address 
buses. This allows instruction and data fetches from the caches to occur 
simultaneously. Although the internal structures of the instruction and 
data caches differ, they operate in essentially the same way.
Considering instruction fetches only: when the CPU fetches an instruction 
word that is present in the instruction cache, the instruction is 
retrieved without delay. If the required instruction is not in the cache, 
the cache issues a request to the SIU (see “System Interface Unit 
(SIU)” on page 42) to fetch it from main memory. In fact, a total of four 
words is fetched in a burst fetch (see “SDRAM Burst Cycles” on page 64), 
since the cache memory is organized in lines of four words each, which 
are updated together.
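The lookup-and-fill behaviour described above can be sketched in software. The geometry below (direct-mapped, 256 lines) and the flat array standing in for SDRAM are illustrative assumptions, not the MPC859T's actual cache organization; only the four-word line filled in one burst comes from the text.

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

/* Illustrative geometry: lines of four 32-bit words (as in the text),
 * 256 direct-mapped lines (an assumption for this sketch). */
#define WORDS_PER_LINE 4
#define NUM_LINES      256

typedef struct {
    bool     valid;
    uint32_t tag;
    uint32_t words[WORDS_PER_LINE];
} cache_line_t;

static cache_line_t icache[NUM_LINES];
static bool last_fetch_missed;

/* Word address -> line index and tag; the low bits pick a word. */
static uint32_t line_index(uint32_t addr) { return (addr / WORDS_PER_LINE) % NUM_LINES; }
static uint32_t line_tag(uint32_t addr)   { return addr / WORDS_PER_LINE / NUM_LINES; }

/* Demo "main memory" standing in for SDRAM. */
static uint32_t demo_mem[8] = { 10, 11, 12, 13, 14, 15, 16, 17 };

/* Fetch one instruction word.  On a miss, the whole four-word line is
 * filled from main memory in one operation, mimicking the burst fetch. */
uint32_t fetch(uint32_t addr, const uint32_t *main_mem)
{
    cache_line_t *line = &icache[line_index(addr)];
    last_fetch_missed = !line->valid || line->tag != line_tag(addr);
    if (last_fetch_missed) {
        uint32_t base = addr - (addr % WORDS_PER_LINE);
        memcpy(line->words, &main_mem[base], sizeof line->words); /* burst of 4 */
        line->tag   = line_tag(addr);
        line->valid = true;
    }
    return line->words[addr % WORDS_PER_LINE];
}
```

Because a miss fills the whole line, a subsequent fetch from any of the other three words in that line is a hit and completes without delay.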
If there is spare space in the cache, the newly fetched instruction data 
is stored there. Otherwise, space must be cleared in the cache so that 
the new data can be retained. To accomplish this, a least-recently-used 
(LRU) algorithm identifies the cache entry that has gone longest without 
being accessed, and that entry is discarded to make way for the new 
data.
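The LRU replacement step can be illustrated with a small software model. A real cache tracks LRU state per set in hardware; the four-way set and access-counter scheme below are purely illustrative assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

/* Toy LRU victim selection within one cache set (four ways assumed). */
#define SET_WAYS 4

typedef struct {
    bool     valid;
    uint32_t tag;
    uint32_t last_used;  /* access counter; lowest = least recently used */
} way_t;

static way_t ways[SET_WAYS];
static uint32_t tick;

/* Return the way holding `tag`; on a miss, discard the least recently
 * used entry and fill that way with the new tag. */
int lookup(uint32_t tag)
{
    int victim = 0;
    for (int w = 0; w < SET_WAYS; w++) {
        if (ways[w].valid && ways[w].tag == tag) {
            ways[w].last_used = ++tick;   /* hit: mark as most recent */
            return w;
        }
        /* prefer an empty way; otherwise track the least recent one */
        if (!ways[victim].valid)
            continue;
        if (!ways[w].valid || ways[w].last_used < ways[victim].last_used)
            victim = w;
    }
    ways[victim].valid     = true;        /* miss: evict and refill */
    ways[victim].tag       = tag;
    ways[victim].last_used = ++tick;
    return victim;
}
```

Note that "oldest" here means least recently *accessed*, not first loaded: a hit refreshes an entry's standing, so frequently used instructions tend to stay cached.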
The MPC provides memory management support for high-end operating 
systems through separate instruction and data memory managers. These 
translate the logical address from the RISC core into a real address used to 
access external memory and peripherals. Separate instruction and data 
memory managers support the Harvard architecture of the CPU but their