188 IBM eX5 Implementation Guide
5.8 Scalability
This section explains how the HX5 can be expanded to increase the number of processors
and the number of memory DIMMs.
The HX5 blade architecture allows a number of scalable configurations, including the use of a MAX5 memory expansion blade. The blade currently supports three configurations:
- A single HX5 server with two processor sockets. This server is a standard 30 mm blade, which is also known as a single-wide server or single-node server.
- Two HX5 servers connected to form a single-image 4-socket server. This server is a 60 mm blade, which is also known as a double-wide server or 2-node server.
- A single HX5 server with two processor sockets, plus a MAX5 memory expansion blade attached to it, resulting in a 60 mm blade configuration. This configuration is sometimes referred to as a 1-node+MAX5 configuration.
We describe each configuration in the following sections. We list the supported BladeCenter
chassis for each configuration in 5.3, “Chassis support” on page 182.
5.8.1 Single HX5 configuration
This is the base configuration: a single-wide 30 mm server with one or two processors installed.
When the server has two processors installed, ensure that the Speed Burst Card is also installed for maximum performance, as described in 5.6, “Speed Burst Card” on page 185. The card is not required, but it is strongly recommended.
5.8.2 Double-wide HX5 configuration
In the 2-node configuration, the two HX5 servers are physically connected, and a 2-node scalability card is attached to the side of the blades to provide the path for QPI scaling.
Each node can have one or two processors installed; that is, 2-node configurations with a total of two or four processors are supported. However, all installed processors must be identical.
The two servers are connected using a 2-node scalability card, as shown in Figure 5-8 on
page 189. The scalability card is immediately adjacent to the processors and provides a
direct connection between the processors in the two nodes.