IA-32 Intel® Architecture Optimization
Conserve Bus Bandwidth
In a multi-threading environment, bus bandwidth may be shared by
memory traffic originating from multiple bus agents (these agents can
be several logical processors and/or several processor cores). Conserving
bus bandwidth can improve processor scaling performance. Effective bus
bandwidth also typically decreases in the presence of many large-stride
cache misses; reducing the number of large-stride cache misses (or
reducing DTLB misses) alleviates this bandwidth reduction.
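As an illustrative sketch (the array dimensions and function names below are hypothetical, not from the manual), the effect of access stride on cache-line utilization can be seen by comparing two traversal orders of a row-major C array:

```c
#include <stddef.h>

#define ROWS 1024
#define COLS 1024

/* Large-stride traversal: consecutive accesses are COLS * sizeof(double)
 * bytes apart, so nearly every access touches a new cache line (and, for
 * large arrays, can also miss in the DTLB). */
double sum_column_major(double a[ROWS][COLS]) {
    double sum = 0.0;
    for (size_t j = 0; j < COLS; j++)
        for (size_t i = 0; i < ROWS; i++)
            sum += a[i][j];
    return sum;
}

/* Unit-stride traversal: consecutive accesses stay within one cache line
 * until it is exhausted, so far fewer lines are fetched over the bus. */
double sum_row_major(double a[ROWS][COLS]) {
    double sum = 0.0;
    for (size_t i = 0; i < ROWS; i++)
        for (size_t j = 0; j < COLS; j++)
            sum += a[i][j];
    return sum;
}
```

Both functions compute the same result; only the traversal order, and therefore the amount of bus traffic, differs.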
One way to conserve available bus command bandwidth is to
improve the locality of code and data. Improving the locality of data
reduces the number of cache line evictions and requests to fetch data.
This technique also reduces the number of instruction fetches from
system memory.
User/Source Coding Rule 27. (M impact, H generality) Improve data and
code locality to conserve bus command bandwidth.
Using a compiler that supports profile-guided optimization can
improve code locality by keeping frequently used code paths in the
cache, which reduces instruction fetches. Loop blocking can also
improve data locality.
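As a sketch of loop blocking (the 1024x1024 transpose and the tile size of 64 are illustrative choices, not values from the manual):

```c
#define N 1024
#define BLOCK 64 /* tile size; chosen so one tile's working set fits in cache */

/* Blocked (tiled) transpose: processing BLOCK x BLOCK tiles keeps the
 * columns of b being read within a cache-resident footprint, so each
 * fetched cache line is reused before it is evicted. */
void transpose_blocked(double a[N][N], double b[N][N]) {
    for (int ii = 0; ii < N; ii += BLOCK)
        for (int jj = 0; jj < N; jj += BLOCK)
            for (int i = ii; i < ii + BLOCK; i++)
                for (int j = jj; j < jj + BLOCK; j++)
                    a[i][j] = b[j][i];
}
```

A non-blocked transpose reads `b[j][i]` with a stride of N doubles, touching a new cache line on almost every iteration; the blocked version performs the same assignments in a cache-friendly order.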
Other locality enhancement techniques (see “Memory Optimization
Using Prefetch” in Chapter 6) can also be applied in a multi-threading
environment to conserve bus bandwidth.
Because the system bus is shared between many bus agents (logical
processors or processor cores), software tuning should recognize
symptoms of the bus approaching saturation. One useful technique is to
examine the queue depth of bus read traffic (see “Workload
Characterization” in Appendix A). When the bus queue depth is high,
locality enhancement to improve cache utilization will benefit
performance more than other techniques, such as inserting more
software prefetches or masking memory latency with overlapping bus
reads.
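A minimal sketch of inserting a software prefetch (using the SSE `_mm_prefetch` intrinsic; the prefetch distance of 64 elements is a hypothetical tuning value, not a recommendation from the manual):

```c
#include <stddef.h>
#include <xmmintrin.h>   /* _mm_prefetch / _MM_HINT_T0, available with SSE */

#define PREFETCH_DIST 64 /* elements ahead; must be tuned per workload */

/* Sum an array while hinting the hardware to fetch the data that will be
 * needed PREFETCH_DIST iterations from now. As noted above, this helps
 * only while the bus is not already approaching saturation. */
float sum_with_prefetch(const float *p, size_t n) {
    float sum = 0.0f;
    for (size_t i = 0; i < n; i++) {
        if (i + PREFETCH_DIST < n)
            _mm_prefetch((const char *)&p[i + PREFETCH_DIST], _MM_HINT_T0);
        sum += p[i];
    }
    return sum;
}
```

The prefetch is a hint only: it changes when cache lines are fetched, not the result of the computation.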

Intel ARCHITECTURE IA-32 Specifications

General
Instruction Set: x86
Instruction Set Type: CISC
Memory Segmentation: Supported
Operating Modes: Real mode, Protected mode, Virtual 8086 mode
Max Physical Address Size: 36 bits (with PAE)
Max Virtual Address Size: 32 bits
Architecture: IA-32 (Intel Architecture 32-bit)
Addressable Memory: 4 GB (with Physical Address Extension up to 64 GB)
Floating Point Registers: 8 x 80-bit
MMX Registers: 8 x 64-bit
SSE Registers: 8 x 128-bit
Registers: General-purpose registers (EAX, EBX, ECX, EDX, ESI, EDI, ESP, EBP), Segment registers (CS, DS, SS, ES, FS, GS), Instruction pointer (EIP), Flags register (EFLAGS)
Floating Point Unit: Yes (x87)