Intel 80386
CACHE SUBSYSTEMS
ascending order (code accesses, for example), an access to the first byte of a block in main memory results in a lookahead block fetch. When memory locations are accessed in descending order, the block fetch is look-behind.
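The fetch behavior above can be sketched in a few lines. This is an illustrative model, not Intel code: the block size and addresses are assumed values, and the only point is that fetching an aligned block brings in neighboring bytes ahead of (or behind) the requested byte.

```python
# Hypothetical sketch: which aligned main-memory block is fetched for an
# access. BLOCK_SIZE is an assumed, illustrative value.
BLOCK_SIZE = 16  # bytes per block

def block_fetched(addr: int) -> range:
    """Return the address range of the aligned block containing addr."""
    start = addr - (addr % BLOCK_SIZE)
    return range(start, start + BLOCK_SIZE)

# Ascending access to the first byte of a block: the following bytes
# (0x101..0x10F) arrive with it -- lookahead.
ahead = block_fetched(0x100)
assert min(ahead) == 0x100 and max(ahead) == 0x10F

# Descending access to the last byte of a block: the preceding bytes
# (0xF0..0xFE) arrive with it -- look-behind.
behind = block_fetched(0xFF)
assert min(behind) == 0xF0 and max(behind) == 0xFF
```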
Block size is one of the most important parameters in the design of a cache memory system. If the block size is too small, the lookahead and look-behind are reduced, and therefore the hit rate is reduced, particularly for programs that do not contain many loops. However, too large a block size has the following disadvantages:
- Larger blocks reduce the number of blocks that fit into a cache. Because each block fetch overwrites older cache contents, a small number of blocks results in data being overwritten shortly after it is fetched.
- As a block becomes larger, each additional word is further from the requested word and therefore less likely to be needed by the processor (according to program locality).
- Large blocks tend to require a wider bus between the cache and the main memory, as well as more static and dynamic memory, resulting in increased cost.
As with all cache parameters, the block size must be determined by weighing performance (as estimated from simulation) against cost.
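The kind of simulation mentioned above can be sketched with a toy model. Everything here is assumed for illustration: a fixed total capacity, a FIFO replacement policy, and two synthetic address traces. The sketch only shows the trade-off's shape: sequential traces reward larger blocks (lookahead), while scattered traces suffer when fewer blocks fit in the cache.

```python
# Hypothetical block-size trade-off simulation (all parameters illustrative).
from collections import OrderedDict
import random

def hit_rate(trace, cache_bytes=256, block_size=16):
    num_blocks = cache_bytes // block_size   # larger blocks => fewer blocks fit
    cache = OrderedDict()                    # block number -> present
    hits = 0
    for addr in trace:
        blk = addr // block_size
        if blk in cache:
            hits += 1
        else:
            if len(cache) >= num_blocks:
                cache.popitem(last=False)    # evict oldest block (FIFO)
            cache[blk] = True
    return hits / len(trace)

random.seed(0)
sequential = list(range(512))                          # ascending accesses
scattered = [random.randrange(65536) for _ in range(512)]  # poor locality

for bs in (4, 16, 64):
    print(bs, hit_rate(sequential, block_size=bs),
              hit_rate(scattered, block_size=bs))
```

On the sequential trace, only the first access to each block misses, so the hit rate approaches 1 as the block size grows; on the scattered trace, larger blocks fetch bytes that are never used while crowding out other blocks.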
7.2 CACHE ORGANIZATIONS
7.2.1 Fully Associative Cache
Most programs make reference to code segments, subroutines, stacks, lists, and buffers located in different parts of the address space. An effective cache must therefore hold several noncontiguous blocks of data.

Ideally, a 128-block cache would hold the 128 blocks most likely to be used by the processor, regardless of the distance between these words in main memory. In such a cache, there would be no single relationship between all the addresses of these 128 blocks, so the cache would have to store the entire address of each block as well as the block itself. When the processor requested data from memory, the cache controller would compare the address of the requested data with each of the 128 addresses in the cache. If a match were found, the data for that address would be sent to the processor. This type of cache organization, depicted in Figure 7-2, is called fully associative.
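The lookup just described can be sketched as follows. This is an assumed software model, not a hardware description: the block size, data values, and replacement policy are illustrative, and the loop stands in for what a real controller does as 128 parallel address compares.

```python
# Hypothetical sketch of a fully associative lookup: the cache stores the
# full block address alongside each block, and a request is compared
# against every stored address.
BLOCK_SIZE = 16          # bytes per block (an assumed value)
NUM_BLOCKS = 128         # as in the 128-block example in the text

class FullyAssociativeCache:
    def __init__(self):
        # Any block from anywhere in main memory may occupy any entry,
        # so each entry must carry the full block address.
        self.entries = []                    # list of (block_addr, data)

    def lookup(self, addr):
        block_addr = addr // BLOCK_SIZE
        # Compare the request against all stored addresses; in hardware
        # these compares happen in parallel, not sequentially.
        for tag, data in self.entries:
            if tag == block_addr:
                return data                  # hit
        return None                          # miss

    def fill(self, addr, data):
        if len(self.entries) >= NUM_BLOCKS:
            self.entries.pop(0)              # evict oldest (assumed policy)
        self.entries.append((addr // BLOCK_SIZE, data))

cache = FullyAssociativeCache()
cache.fill(0x1000, b"block A")
cache.fill(0xFFFF0, b"block B")              # noncontiguous block, also held
assert cache.lookup(0x1004) == b"block A"    # same block as 0x1000
assert cache.lookup(0x2000) is None          # miss
```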
A fully associative cache provides the maximum flexibility in determining which blocks are stored in the cache at any time. In the previous example, up to 128 unrelated blocks could be stored in the cache. Unfortunately, a 128-address compare is usually unacceptably slow, expensive, or both. One of the basic issues of cache organization is how to minimize the restrictions on which words may be stored in the cache while limiting the number of required address comparisons.
