CACHE SUBSYSTEMS
In a system such as the one shown in Figure 7-3, a request for the data at address 12FFE9H in the main memory is handled as follows:
1. The cache controller determines the cache location from the 14 most significant bits of the index field (FFE8H).
2. The controller compares the tag field (12H) with the tag stored at location FFE8H in the cache.
3. If the tag matches, the processor reads the least significant byte from the data in the cache.
4. If the tag does not match, the controller fetches the 4-byte block at address 12FFE8H in the main memory and loads it into location FFE8H of the cache, replacing the current block. The controller must also change the tag stored at location FFE8H to 12H. The processor then reads the least significant byte from the new block (see the code sketch following this list).
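
These four steps can be written compactly in C. The sketch below is only illustrative: it assumes the field widths of the example (an 8-bit tag, a 16-bit index whose 14 most significant bits select one of 16,384 four-byte blocks, and a 2-bit byte offset), the names read_byte, cache_tags, cache_data, and main_memory are invented for this sketch, and valid bits are omitted for brevity.

#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE  4u                       /* bytes per cache block              */
#define NUM_BLOCKS  (1u << 14)               /* 14-bit block select: 16,384 blocks */

static uint8_t cache_tags[NUM_BLOCKS];               /* 8-bit tag per block        */
static uint8_t cache_data[NUM_BLOCKS][BLOCK_SIZE];   /* cached 4-byte blocks       */
static uint8_t main_memory[1u << 24];                /* 24-bit address space       */

uint8_t read_byte(uint32_t addr)                     /* e.g. addr = 0x12FFE9       */
{
    uint8_t  tag    = (addr >> 16) & 0xFFu;          /* tag field: 12H             */
    uint32_t index  = addr & 0xFFFCu;                /* block-aligned index: FFE8H */
    uint32_t block  = index >> 2;                    /* array slot for that index  */
    uint32_t offset = addr & (BLOCK_SIZE - 1u);      /* byte within the block      */

    if (cache_tags[block] != tag) {                  /* step 2: compare tags       */
        /* Step 4: miss - fetch the aligned block (12FFE8H) and update the tag.    */
        memcpy(cache_data[block], &main_memory[addr & ~0x3u], BLOCK_SIZE);
        cache_tags[block] = tag;
    }
    return cache_data[block][offset];                /* step 3: read the byte      */
}
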
Any address whose index field is FFE8H can be loaded into the cache only at location FFE8H; therefore, the cache controller makes only one comparison to determine if the requested word is in the cache. Note that the address comparison requires only the tag field of the address. The index field need not be compared because anything stored in cache location FFE8H has an index field of FFE8H. The direct mapped cache uses direct addressing to eliminate all but one comparison operation.

The direct mapped cache, however, is not without drawbacks. If the processor in the example above makes frequent requests for locations 12FFE8H and 44FFE8H, the controller must access the main memory frequently, because only one of these locations can be in the cache at a time. Fortunately, this sort of program behavior is infrequent enough that the direct mapped cache, although offering poorer performance than a fully associative cache, still provides acceptable performance at a much lower cost.
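
This thrashing is easy to reproduce with the read_byte() sketch above: 12FFE8H and 44FFE8H share the index field FFE8H but carry different tags (12H and 44H), so each access evicts the other's block. The driver below is purely illustrative and assumes the earlier sketch is in the same file.

/* Alternating accesses to two addresses with the same index field (FFE8H)
 * miss on every reference, because each refill overwrites the other block. */
int main(void)
{
    for (int i = 0; i < 8; i++) {
        (void)read_byte(0x12FFE8);   /* loads tag 12H, evicting tag 44H if cached     */
        (void)read_byte(0x44FFE8);   /* loads tag 44H, evicting the block just loaded */
    }
    return 0;
}
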
7.2.3 Set Associative Cache
The set associative cache compromises between the extremes of fully associative and direct mapped caches. This type of cache has several sets (or groups) of direct mapped blocks that operate as several direct mapped caches in parallel. For each cache index, there are several block locations allowed, one in each set. A block of data arriving from the main memory can go into a particular block location of any set. Figure 7-4 shows the organization for a 2-way set associative cache.

With the same amount of memory as the direct mapped cache of the previous example, the set associative cache contains half as many locations, but allows two blocks for each location. The index field is thus reduced to 15 bits, and the extra bit becomes part of the tag field.

Because the set associative cache has several places for blocks with the same cache index in their addresses, the excessive main memory traffic that is a drawback of a direct mapped cache is reduced and the hit rate increased. A set associative cache, therefore, performs more efficiently than a direct mapped cache.
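
A 2-way set associative lookup differs from the direct mapped sketch only in that two tags are compared on each access and a replacement choice must be made on a miss. The self-contained sketch below assumes the field widths given above (a 9-bit tag and a 15-bit index whose 13 most significant bits select one of 8,192 sets); the simple LRU replacement policy and all identifiers are assumptions made for illustration, not taken from the manual.

#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE  4u
#define NUM_WAYS    2u                       /* two blocks per set                 */
#define NUM_SETS    (1u << 13)               /* 8,192 sets: same 64 KB of data     */

static uint16_t sa_tags[NUM_SETS][NUM_WAYS];             /* 9-bit tag per block    */
static uint8_t  sa_data[NUM_SETS][NUM_WAYS][BLOCK_SIZE];
static uint8_t  sa_replace[NUM_SETS];                    /* way to replace next    */
static uint8_t  main_memory[1u << 24];                   /* 24-bit address space   */

uint8_t read_byte_2way(uint32_t addr)
{
    uint16_t tag    = (uint16_t)((addr >> 15) & 0x1FFu); /* 9-bit tag field        */
    uint32_t set    = (addr >> 2) & (NUM_SETS - 1u);     /* 13-bit set select      */
    uint32_t offset = addr & (BLOCK_SIZE - 1u);

    for (uint32_t way = 0; way < NUM_WAYS; way++) {      /* compare both tags      */
        if (sa_tags[set][way] == tag) {
            sa_replace[set] = (uint8_t)(way ^ 1u);       /* other way is now LRU   */
            return sa_data[set][way][offset];            /* hit                    */
        }
    }

    uint32_t victim = sa_replace[set];                   /* miss: refill LRU way   */
    memcpy(sa_data[set][victim], &main_memory[addr & ~0x3u], BLOCK_SIZE);
    sa_tags[set][victim] = tag;
    sa_replace[set] = (uint8_t)(victim ^ 1u);
    return sa_data[set][victim][offset];
}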