■ CPU4/CPU5: NET0, NET2 (through virtual network devices)
■ CPU6/CPU7: NET2-3
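The virtual network devices mentioned above, and the physical links they are layered over, can be listed with the Oracle Solaris dladm utility. This is a minimal sketch; the link names reported on a given system will differ.

# List the virtual network devices and the physical links they are created over
dladm show-vnic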
10-GbE Client Access Network
Two PCI root complex pairs, and therefore two 10-GbE NICs, are associated with the Medium
Domain on the SPARC T5-8 server in the Full Rack. One port is used on each dual-ported
10-GbE NIC, and the two ports on the two separate 10-GbE NICs are part of one IPMP group.
The following 10-GbE NICs and ports are used for connection to the client access network for
the Medium Domains, depending on the CPUs that the Medium Domain is associated with (the
slot placement of each port can be confirmed with dladm, as sketched after this list):
■ CPU0/CPU1:
  ■ PCIe slot 1, port 0 (active)
  ■ PCIe slot 9, port 1 (standby)
■ CPU2/CPU3:
  ■ PCIe slot 2, port 0 (active)
  ■ PCIe slot 10, port 1 (standby)
■ CPU4/CPU5:
  ■ PCIe slot 5, port 0 (active)
  ■ PCIe slot 13, port 1 (standby)
■ CPU6/CPU7:
  ■ PCIe slot 6, port 0 (active)
  ■ PCIe slot 14, port 1 (standby)
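As a hedged illustration of how to confirm the mapping between datalink names and the PCIe slots listed above, Oracle Solaris 11 dladm can report each physical datalink together with its physical location; the link names in the output are system-specific.

# Show state and speed for each physical datalink
dladm show-phys
# Show the physical location (for example, the PCIe slot) of each datalink
dladm show-phys -L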
A single data address is used to access these two physical ports. That data address allows traffic
to continue flowing to the IPMP group even if the connection to one of its two ports fails.
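For illustration only, the following is a minimal sketch of how an active/standby IPMP group of this kind might be built with the Oracle Solaris 11 ipadm command. The datalink names (net4 and net12) and the data address (192.0.2.10/24) are placeholders, not values taken from this document.

# Create the IPMP group interface
ipadm create-ipmp ipmp0
# Create IP interfaces over the two 10-GbE datalinks (placeholder link names)
ipadm create-ip net4
ipadm create-ip net12
# Mark the interface on the second NIC as a standby
ipadm set-ifprop -p standby=on -m ip net12
# Place both interfaces in the IPMP group
ipadm add-ipmp -i net4 -i net12 ipmp0
# Assign the single data address to the IPMP group
ipadm create-addr -T static -a 192.0.2.10/24 ipmp0/v4

With this configuration, the data address fails over between the underlying interfaces automatically, which is what allows client traffic to survive the loss of one port.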
Note - If the number of 10-GbE connections that you can make to your 10-GbE network is
limited, you can connect just one port in each IPMP group rather than both. In that case,
however, you do not get the redundancy and increased bandwidth of the second port.
InfiniBand Network
The connections to the InfiniBand network vary, depending on the type of domain:
■ Database Domain: