Chapter 7. Java
To alleviate this impact, use the -Xcompressedrefs option. When this option is enabled, the
JVM uses 32-bit references to objects instead of 64-bit references wherever possible. Object
references are compressed and extracted as necessary at minimal cost. The need for
compression and decompression is determined by the overall heap size and the platform the
JVM is running on; smaller heaps can do without compression and decompression,
eliminating even this impact. To determine the compression and decompression impact for a
heap size on a particular platform, run the following command:
java -Xcompressedrefs -verbose:gc -version ...
The resulting output has the following content:
<attribute name="compressedRefsDisplacement" value="0x0" />
<attribute name="compressedRefsShift" value="0x0" />
Values of 0 for the named attributes essentially indicate that no work must be done to convert
between 32-bit and 64-bit references for the invocation. Under these circumstances, 64-bit
JVMs running with -Xcompressedrefs can reduce the impact of 64-bit addressing even more
and achieve better performance.
With -Xcompressedrefs, the maximum size of the heap is much smaller than the theoretical
maximum size allowed by a 64-bit JVM, although greater than the maximum heap under a
32-bit JVM. Currently, the maximum heap size with -Xcompressedrefs is around 31 GB on
both AIX and Linux.
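Conceptually, converting a compressed reference back to a native address is at most a shift and an add, driven by the two attributes shown in the verbose:gc output; when both are zero, the 32-bit value is used as-is. The following Java sketch is illustrative only, with hypothetical method names; it is not the JVM's internal code:

```java
// Illustrative sketch of compressed-reference conversion. The shift and
// displacement parameters mirror the compressedRefsShift and
// compressedRefsDisplacement attributes reported by -verbose:gc; the
// method names are hypothetical, not the JVM's internal API.
public class CompressedRefs {
    // Expand a 32-bit compressed reference into a 64-bit address.
    static long decompress(long ref, int shift, long displacement) {
        return (ref << shift) + displacement;
    }

    // Compress a 64-bit address into a 32-bit reference.
    static long compress(long address, int shift, long displacement) {
        return (address - displacement) >>> shift;
    }

    public static void main(String[] args) {
        // shift = 0, displacement = 0: the compressed reference already is
        // the address, so conversion costs nothing.
        System.out.println(decompress(0x10001000L, 0, 0) == 0x10001000L);

        // shift = 3: 8-byte object alignment lets a 32-bit reference span
        // 2^32 * 8 bytes = 32 GB, which is consistent with the roughly
        // 31 GB practical heap cap under -Xcompressedrefs.
        long addr = 0x7_0000_0000L; // an address around 28 GB
        System.out.println(decompress(compress(addr, 3, 0), 3, 0) == addr);
    }
}
```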
7.3.5 JIT code cache
JIT compilation is an important factor in optimizing performance. Because compilation is
carried out at run time, it is complicated to estimate the size of the program or the number of
compilations that are carried out. The JIT compiler has a cap on how much memory it can
allocate at run time to store compiled code, and for most applications the default cap is
more than sufficient.
However, certain programs, especially those that take advantage of language features such
as reflection, can produce a large number of compilations and use up the allowed amount of
code cache. After the limit of code cache is consumed, no more compilations are performed.
This situation can have a negative impact on performance if the program then begins to call
many interpreted methods that cannot be compiled. The -Xjit:codetotal=<nnn> option
(where nnn is a value in KB units) can be used to specify the cap of the JIT code cache. The
default is 64 MB for 32-bit JVMs and 128 MB for 64-bit JVMs.
Another consideration is how the code caches are allocated. If they are allocated far apart
from each other (more than 32 MB away), calls from one code cache to another carry higher
processing impact. The -Xcodecache<size> option can be used to specify how large each
allocation of code cache is. For example, -Xcodecache4m means 4 MB is allocated as code
cache each time the JIT compiler needs a new one, until the cap is reached. Typically, there
are multiple pieces (for example, 4) of code cache available at boot-up time to support
multiple compilation threads. It is important to alter the default code cache size only if it is
insufficient, as a large but empty code cache needlessly consumes resources.
Two techniques can be used to determine if the code cache allocation sizes or total limit must
be altered. First, a Java core file can be produced by running kill -3 <pid> after your
application reaches a steady state. The core file shows how many pieces of code cache are
allocated.
The active amount of code cache can be estimated by summing up all of the pieces.
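As a rough illustration of that summation step, the sketch below totals a list of code cache segment sizes. The hard-coded 4 MB segments are an assumption for illustration; this is not a parser for the actual javacore file format:

```java
import java.util.List;

// Rough sketch: estimate the active JIT code cache by summing the sizes of
// the allocated segments reported in a javacore file. The segment sizes are
// hard-coded here for illustration; a real javacore lists code cache
// segments (with start and end addresses) in its memory section.
public class CodeCacheEstimate {
    static long totalBytes(List<Long> segmentSizes) {
        return segmentSizes.stream().mapToLong(Long::longValue).sum();
    }

    public static void main(String[] args) {
        // Example: four 4 MB segments, as with -Xcodecache4m at startup.
        List<Long> segments = List.of(4L << 20, 4L << 20, 4L << 20, 4L << 20);
        System.out.println(totalBytes(segments) / (1L << 20) + " MB"); // prints "16 MB"
    }
}
```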