HPE Apollo 2000 System User Guide

Abstract

This document is for the person who installs, administers, and troubleshoots servers and storage systems. Hewlett Packard Enterprise assumes you are qualified in the servicing of computer equipment and trained in recognizing hazards in products with hazardous energy levels.
DIMM slot locations
Fan locations
Drive bay numbering
    HPE Apollo r2200 Chassis drive bay numbering
    HPE Apollo r2600 Chassis drive bay numbering
    HPE Apollo r2800 Chassis drive bay numbering
M.2 SATA SSD bay numbering
Power capping
    Power capping modes
    Configuring a power cap
Drive bay mapping for the HPE Apollo r2800 Chassis
    Factory default configuration
    Mapping drive bays
Registering the server
Hardware options installation
Active Health System
iLO RESTful API support
Integrated Management Log
HPE Insight Remote Support
    HPE Insight Remote Support central connect
    HPE Insight Online direct connect
    Insight Online
Intelligent Provisioning
HPE Insight Diagnostics
Operating System Version Support
    Version control
    Operating systems and virtualization software support for ProLiant servers
HPE Technology Service Portfolio
    Change control and proactive notification
Troubleshooting
    Troubleshooting resources
System battery
HPE Apollo 2000 System Introduction The HPE Apollo 2000 System consists of a chassis and nodes. There are three chassis options with different storage configurations. The four server tray slots on the chassis must be populated with server nodes or node blanks.
• HPE Apollo r2800 Chassis

Front panel components:
Left bezel ear
SFF HPE SmartDrives
Right bezel ear
Chassis serial label pull tab
Non-removable bezel blank

Chassis front panel LEDs and buttons

Power On/Standby button and system power LED:
Solid green = System on
…

Health LED (Node 4):
Solid green = Normal
Flashing amber = System degraded
Flashing red = System critical

Power On/Standby button and system power LED (Node 4):
Solid green = System on
Flashing green = Performing power on sequence
Solid amber = System in standby
Off = No power present

UID button/LED: …
• Two 2U nodes

Rear panel components:
Node 3
RCM module (optional)
Power supply 2
Power supply 1
Node 1

Chassis rear panel LEDs

Power supply 2 LED:
Solid green = Normal
Off = One or more of the following conditions exists: …
Node rear panel components

• 1U node rear panel components
Node serial number and iLO label pull tab
SUV connector
USB 3.0 connector
Dedicated iLO port (optional)
NIC connector 1
NIC connector 2

• 2U node rear panel components
Node serial number and iLO label pull tab
…
Node rear panel LEDs and buttons

• 1U node

Power button/LED:
Solid green = System on
Flashing green = Performing power on sequence
Solid amber = System in standby
Off = No power present

UID button/LED:
Solid blue = Activated
…
Facility power is not present, power cord is not attached, no power supplies are installed, power supply failure has occurred, or the front I/O cable is disconnected. If the health LED indicates a degraded or critical state, review the system IML or use iLO to review the system health status.
8 flashes = Power backplane or storage backplane
9 flashes = Power supply

System board components

NOTE: HPE ProLiant XL170r and XL190r Gen9 Server Nodes share the same system board.

Bayonet board slot
DIMMs for processor 2
DIMMs for processor 1
…
IMPORTANT: Before using the S7 switch to change to Legacy BIOS Boot Mode, be sure the HPE Dynamic Smart Array B140i Controller is disabled. Do not use the B140i controller when the node is in Legacy BIOS Boot Mode.
("System board components" on page 15). For more information, see the Hewlett Packard Enterprise website (http://www.hpe.com/support/NMI-CrashDump). DIMM slot locations DIMM slots are numbered sequentially (1 through 8) for each processor. The supported AMP modes use the letter assignments for population guidelines.
("SATA and Mini-SAS cable options" on page 82). HPE Apollo r2200 Chassis drive bay numbering One 1U node corresponds to a maximum of three low-profile LFF hot-plug drives. • Node 1 corresponds to drive bays 1-1 through 1-3.
Node 3 corresponds to drive bays 3-1 through 4-6. If using the Dynamic Smart Array B140i Controller, HPE H240 Host Bus Adapter, or HPE P440 Smart Array Controller: one 2U node corresponds to a maximum of eight SFF SmartDrives. The remaining drive bays must be populated with drive blanks.
Packard Enterprise recommends installing an HPE H240 Host Bus Adapter or HPE P440 Smart Array Controller. For information on drive bay mapping in the HPE Apollo r2800 Chassis and the factory default configuration, see "Drive bay mapping for the HPE Apollo r2800 Chassis (on page 65)."...
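The drive bay IDs above follow a consistent "row-bay" pattern (for example, in the r2200, 1U node n owns low-profile LFF bays n-1 through n-3). As a purely illustrative sketch of that numbering scheme (this is not an HPE tool, and the function name is hypothetical):

```python
# Illustrative sketch of HPE Apollo r2200 drive bay numbering (not an HPE
# utility). Each 1U node n owns low-profile LFF bays n-1 through n-3.

def r2200_bays_for_node(node: int, bays_per_node: int = 3) -> list[str]:
    """Return the drive bay IDs owned by a 1U node in the r2200 chassis."""
    if not 1 <= node <= 4:
        raise ValueError("the chassis has four server tray slots (nodes 1-4)")
    return [f"{node}-{bay}" for bay in range(1, bays_per_node + 1)]

print(r2200_bays_for_node(1))  # ['1-1', '1-2', '1-3']
```

The same pattern extends to the SFF chassis, where a 2U node spans two rows of six bays each.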
• The Locate, Drive status, and Do not remove LEDs of the affected drives are disabled. Use BIOS/Platform Configuration (RBSU) in the UEFI System Utilities ("HPE UEFI System Utilities" on page 154) to enable or disable the B140i controller (System Configuration →...
Low-profile LFF hot-plug drive LED definitions

Fault/UID LED (amber/blue)
Online/Activity LED (green)

Online/Activity LED (green): On, off, or flashing
Fault/UID LED (amber/blue): Alternating amber and blue
One or more of the following conditions exist:
• The drive has failed.
• …
Accelerator numbering

One accelerator in a FlexibleLOM 2U node riser cage assembly:
Accelerator 1

Two accelerators in a three-slot riser cage assembly:
Accelerator 1
Accelerator 2

For more information, see "Accelerator options (on page 104)."
RCM module components

iLO connector
HPE APM 2.0 connector
iLO connector

For more information, see "RCM module (on page 70)."

RCM module LEDs

iLO activity LED: Green or flashing green = Network activity; Off = No network activity
iLO link LED: Green = Linked to network; Off = No network connection
iLO activity LED: Green or flashing green = Network activity; Off = No network activity

For more information, see "RCM module (on page 70)."

PCIe riser board slot definitions
• …
• Single-slot 1U right PCI riser cage assembly for Processor 2 (PN 798182-B21)

Form factor: Storage controller or low-profile PCIe NIC card
Slot description: PCIe3 x16 (16, 8, 4, 1) for Processor 2

For more information on installing a storage controller, see "Controller options (on page 96)."
• FlexibleLOM 1U node riser cage assembly (PN 798180-B21)

Form factor: FlexibleLOM
Slot description: FlexibleLOM slot, PCIe3 x8 for Processor 1

• Single-slot 2U node PCI riser cage assembly (PN 800293-B21)

Form factor: Storage controller or low-profile PCIe NIC card
Slot description: PCIe3 x16 (16, 8, 4, 1) for Processor 1
• FlexibleLOM 2U node riser cage assembly (PN 798184-B21)

Form factor: FlexibleLOM; Slot description: FlexibleLOM slot, PCIe3 x8 for Processor 1
Form factor: Storage controller or accelerator card; Slot description: PCIe3 x16 (16, 8, 4, 1) for Processor 1

For more information on installing a storage controller, see "Controller options (on page 96)." For more information on installing an accelerator, see "Accelerator options (on page 104)."
Form factor: Accelerator card; Slot description: PCIe3 x16 (16, 8, 4, 1) for Processor 2

For more information on installing a storage controller, see "Controller options (on page 96)." For more information on installing an accelerator, see "Accelerator options (on page 104)."

• Three-slot GPU-direct PCI riser cage assembly (PN 798188-B21)
…
• Three-slot GPU-direct with re-timer PCI riser cage assembly (PN 827353-B21)

Form factor: Accelerator card; Slot description: PCIe3 x16 (16, 8, 4, 1) for Processor 2
Form factor: Storage controller or low-profile PCIe NIC card; Slot description: PCIe3 x16 (8, 4, 1) for Processor 2
Form factor: Accelerator card; Slot description: PCIe3 x16 (16, 8, 4, 1) for …
When the node goes from the standby mode to the full power mode, the node power LED changes from amber to green. For more information about iLO, see the Hewlett Packard Enterprise website (http://www.hpe.com/info/ilo).

Power down the node

CAUTION: Before powering down the node, perform a backup of critical server data and programs.
CAUTION: To avoid damage to the node, always support the bottom of the node when removing it from the chassis. Power down the node (on page 32). Disconnect all peripheral cables from the node. Remove the node from the chassis: Loosen the thumbscrew.
Remove the RCM module

To remove the component:
Power down all nodes ("Power down the node" on page 32).
Access the product rear panel.
Disconnect all cables from the RCM module.
Remove the RCM module.

Remove the power supply

To remove the component:
Power down all nodes ("Power down the node"
Remove the power supply.

Remove the security bezel

To access the front panel components, unlock and then remove the security bezel.

Removing the drive

CAUTION: For proper cooling, do not operate the node without the access panel, baffles, expansion slot covers, or blanks installed. If the server supports hot-plug components, minimize the amount of time the access panel is open.
SFF SmartDrive
Low-profile LFF hot-plug drive

Remove the chassis access panel

Power down all nodes ("Power down the node" on page 32).
Disconnect all peripheral cables from the nodes and chassis.
Remove all nodes from the chassis ("Remove the node from the chassis"
Lift and remove the access panel.

Install the chassis access panel

Place the access panel, align the pin on the chassis, and slide the panel toward the front of the server.
Lock the access panel latch using the T-15 Torx screwdriver.
Install the chassis into the rack ("Installing the chassis into the rack"
Remove the chassis from the rack

WARNING: The chassis is very heavy. To reduce the risk of personal injury or damage to the equipment:
• Observe local occupational health and safety requirements and guidelines for manual material handling.
• Remove all installed components from the chassis before installing or moving the chassis.
• …
Remove the rear I/O blank

Power down the node (on page 32).
Disconnect all peripheral cables from the node.
Remove the node from the chassis (on page 32).
Place the node on a flat, level surface.
Remove the rear I/O blanks:
1U left rear I/O blank
1U right rear I/O blank
2U rear I/O blank CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots have either an expansion slot cover or an expansion board installed. Install the rear I/O blank Install the rear I/O blanks: 1U right rear I/O blank...
1U left rear I/O blank
2U rear I/O blank

Install the node into the chassis ("Installing a node into the chassis" on page 60).
Connect all peripheral cables to the node.
Power up the node ("Power up the nodes" on page 32).

Remove the air baffle

Power down the node (on page 32).
If installed in a 2U node, remove the three-slot riser cage assembly ("Three-slot riser cage assemblies" on page 52).
Remove the air baffle:
1U air baffle
2U air baffle

Install the air baffle

CAUTION: To prevent damage to the server, ensure that all DIMM latches are in the closed and locked position before installing the air baffle.
Align the air baffle over the DIMM slot latches and lower the air baffle. If a second processor and heatsink are installed, press down on the rear of the air baffle until it snaps into place on the heatsink. Install any removed PCI riser cage assemblies ("PCI riser cage assembly options"...
If installed in a 2U node, remove the FlexibleLOM 2U node riser cage assembly ("FlexibleLOM 2U node riser cage assembly" on page 52). If installed in a 2U node, remove the three-slot riser cage assembly ("Three-slot riser cage assemblies" on page 52). If an accelerator power cable is installed, disconnect it from the bayonet board.
1U bayonet board bracket
2U bayonet board bracket

Install the bayonet board assembly

Connect the SATA or mini-SAS cable to the bayonet board.
1U bayonet board

IMPORTANT: If connecting a SATA or Mini-SAS cable to the 2U bayonet board, route the cable under the padding before installing the 2U bayonet board bracket.

2U bayonet board

Install the bayonet board bracket onto the bayonet board.
1U bayonet board bracket
2U bayonet board bracket

Install the bayonet board assembly into the node:
1U bayonet board assembly
2U bayonet board assembly If any SATA or mini-SAS cables are installed, secure the cables under the thin plastic cover along the side of the node tray. If removed, connect the B140i SATA cable to the system board ("SATA and mini-SAS cabling"...
1U node
2U node

CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots have either an expansion slot cover or an expansion board installed.

Single-slot 1U node right PCI riser cage assemblies

NOTE: Single-slot 1U node right PCI riser cage assemblies feature different riser boards.
Do one of the following: Remove the 1U left rear I/O blank ("Remove the rear I/O blank" on page 39). Remove the single-slot left PCI riser cage assembly (on page 48). Remove the single-slot 1U node right PCI riser cage assembly. CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots have either an expansion slot cover or an expansion board installed.
Remove the FlexibleLOM 1U node riser cage assembly. CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots have either an expansion slot cover or an expansion board installed.
CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots have either an expansion slot cover or an expansion board installed. FlexibleLOM 2U node riser cage assembly To remove the component: Power down the node (on page 32).
Remove the three-slot riser cage assembly.

CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots have either an expansion slot cover or an expansion board installed.
Planning the installation Optional services Delivered by experienced, certified engineers, HPE support services help you keep your servers up and running with support packages tailored specifically for HPE ProLiant systems. HPE support services let you integrate both hardware and software support into a single package. A number of service level options are available to meet your business and IT needs.
CAUTION: To prevent improper cooling and damage to the equipment, do not block the ventilation openings. When vertical space in the rack is not filled by a server or rack component, the gaps between the components cause changes in airflow through the rack and across the servers. Cover all gaps with blanking panels to maintain proper airflow.
For more information on the hot-plug power supply and calculators to determine server power consumption in various system configurations, see the Hewlett Packard Enterprise Power Advisor website (http://www.hpe.com/info/poweradvisor/online).

Electrical grounding requirements

The server must be properly grounded for safe and reliable operation. In the United States, you must install the equipment in accordance with NFPA 70, 1999 Edition (National Electric Code), Article 250, as well as any local and regional building codes.
Flathead screwdriver (to remove the knockout on the dedicated iLO connector opening)
• Hardware options

Installation overview

To set up and install the HPE Apollo 2000 System:
Set up and install the rack. For more information, see the documentation that ships with the rack.
("Removing the drive" on page 35). NOTE: If planning to install the HPE Smart Storage Battery or redundant fan option, install these options into the chassis before installing the chassis into the rack. NOTE: Install the chassis into the rack before installing drives, power supplies, the RCM module, or nodes.
WARNING: The chassis is very heavy. To reduce the risk of personal injury or damage to the equipment: • Observe local occupational health and safety requirements and guidelines for manual material handling. • Remove all installed components from the chassis before installing or moving the chassis. •...
Chassis component installation

Installing a node into the chassis

• 1U node
• 2U node

Installing a drive

Remove the drive blank ("Removing a drive blank" on page 68).
Install the drives ("Installing a hot-plug drive" on page 68).

Installing the power supplies
HPE Advanced Power Manager (optional) HPE APM is a point of contact for system administration. To install, configure, and access APM, see the HPE Advanced Power Manager User Guide on the Hewlett Packard Enterprise website (http://www.hpe.com/support/APM_UG_en). Connecting the optional HPE APM module Connect the APM to the network (shown in red).
F10 key to access Intelligent Provisioning. For more information on automatic configuration, see the UEFI documentation on the Hewlett Packard Enterprise website (http://www.hpe.com/info/ProLiantUEFI/docs). Installing the operating system To operate properly, the node must have a supported operating system installed. For the latest information on operating system support, see the Hewlett Packard Enterprise website (http://www.hpe.com/info/supportos).
Power capping The HPE ProLiant XL family of products provides a power capping feature that operates at the server enclosure level. The capping feature can be activated with PPIC.EXE, a stand-alone utility that runs in the environment of one of the resident servers in the chassis to be power capped. After a power cap is set for the enclosure, all the resident servers in the enclosure will have the same uniform power cap applied to them until the cap is either modified or canceled.
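As described above, a cap set for the enclosure is applied uniformly to every resident server until it is modified or canceled. A minimal sketch of that behavior, assuming the enclosure cap is divided evenly across nodes (the split shown here is an illustration of the concept; the real work is done by the PPIC utility and the chassis firmware):

```python
# Illustrative model of a uniform enclosure-level power cap (assumption: the
# cap is split evenly across resident nodes; actual capping is performed by
# the PPIC utility and chassis firmware, not by this sketch).

def uniform_node_caps(enclosure_cap_w: float, node_count: int) -> list[float]:
    """Split an enclosure-level power cap evenly across the resident nodes."""
    if node_count < 1:
        raise ValueError("at least one resident node is required")
    per_node = enclosure_cap_w / node_count
    return [per_node] * node_count

print(uniform_node_caps(1600, 4))  # [400.0, 400.0, 400.0, 400.0]
```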
The "Power capping modes" table shows the valid values for mode. A power cap value is required when setting Power Control Configuration to User Configurable. For more information, see the ProLiant Power Interface Control (PPIC) Utility User Guide on the Hewlett Packard Enterprise website (http://www.hpe.com/support/PPIC_UG_en).

Setting the chassis power cap mode with HPE APM

Log in to APM:
Drive bay mapping configuration changes may be made from any server node and take effect after all server nodes in the HPE Apollo r2800 Chassis are turned off and the chassis firmware is able to reset the storage expander backplane. All nodes must remain powered off for at least 5 seconds after executing the configuration changes.
Smart Array Controller. For detailed information and examples on drive bay mapping configuration changes in the HPE Apollo r2800 Chassis, see the HPE iLO 4 Scripting and Command Line Guide on the Hewlett Packard Enterprise website (http://www.hpe.com/info/ilo/docs). To map drives in the HPE Apollo r2800 Chassis: Determine which drive bays to map to each node.
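When planning which drive bays to map to each node, every bay should be assigned to at most one node. A small illustrative checker for a mapping plan (the function and the sample mapping are hypothetical; actual mapping changes are made through the iLO scripting interface described in the HPE iLO 4 Scripting and Command Line Guide):

```python
# Illustrative validation of a planned r2800 drive bay mapping (hypothetical
# helper and data; real mapping changes go through the iLO scripting
# interface, not this sketch).

def validate_mapping(mapping: dict[int, list[str]]) -> None:
    """Raise ValueError if any drive bay is mapped to more than one node."""
    seen: dict[str, int] = {}
    for node, bays in mapping.items():
        for bay in bays:
            if bay in seen:
                raise ValueError(
                    f"bay {bay} is mapped to both node {seen[bay]} and node {node}"
                )
            seen[bay] = node

plan = {1: ["1-1", "1-2", "1-3"], 2: ["1-4", "1-5", "1-6"]}
validate_mapping(plan)  # no exception: each bay is assigned exactly once
```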
Hewlett Packard Enterprise Power Advisor website (http://www.hpe.com/info/poweradvisor/online). For more information about product features, specifications, options, configurations, and compatibility, see the product QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/info/qs).
Removing a drive blank If installed, remove the security bezel (on page 35). Remove the drive blank. Installing a hot-plug drive WARNING: To reduce the risk of injury from electric shock, do not install more than one drive carrier at a time. The chassis can support up to 12 drives in an LFF configuration and up to 24 drives in an SFF configuration.
("Security bezel option" on page 67). For information on drive bay mapping in the HPE Apollo r2800 Chassis and the factory default configuration, see "Drive bay mapping for the HPE Apollo r2800 Chassis (on page 65)." To configure arrays, see the HPE Smart Storage Administrator User Guide on the Hewlett Packard Enterprise website (http://www.hpe.com/info/smartstorage/docs).
• Install the node blank into the left side of the server chassis. • Install the node blank into the right side of the server chassis. RCM module Observe the following rules and limitations when installing an RCM module: • If a dedicated iLO management port module is installed in a node, the node cannot be accessed through the RCM module.
Having both ports connected at the same time results in a loopback condition. For more information about product features, specifications, options, configurations, and compatibility, see the product QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/info/qs). To install the component: Power down all nodes ("Power down the...
Secure the power cord in the strain relief strap. If two power supplies are installed, do the following: Install the RCM module onto the bottom power supply. Release the strain relief strap on the top power supply handle. Secure both power cords in the strain relief strap on the top power supply handle.
RCM module and the network. Multiple chassis can be connected to the same network. NOTE: Arrow indicates connection to the network. If an HPE APM is installed, connect the cables to the RCM module, the APM, and the network ("Connecting the optional HPE APM module"...
Connect the RCM 2.0 to 1.0 adapter cable to the RCM module. Connect the cables to the RCM module, the APM, and the network ("Connecting the optional HPE APM module" on page 61). Reconnect all power: Connect each power cord to the power source.
The minimum fan requirement for this server to power on is four fans (fans 1, 2, 3, and 4). Installing the fan option For more information about product features, specifications, options, configurations, and compatibility, see the product QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/info/qs). To install the component: Power down all nodes ("Power down the...
RDIMM, the information applies to that type only. All memory installed in the node must be the same type.

Memory and processor information

For the latest memory configuration information, see the product QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/info/qs).

DIMM type
Operating memory speed is a function of rated DIMM speed, the number of DIMMs installed per channel, processor model, and the speed selected in the BIOS/Platform Configuration (RBSU) of the UEFI System Utilities ("HPE UEFI System Utilities" on page 154). Populated DIMM speed - Intel Xeon E5-2600 v3 processor installed...
Maximum memory capacity

Maximum memory capacity is a function of DIMM capacity, number of installed DIMMs, memory type, and number of installed processors.

Maximum memory capacity - Intel Xeon E5-2600 v3 processor installed
Table columns: DIMM type, DIMM rank, Capacity (GB), Maximum capacity for one processor (GB), Maximum capacity for two processors (GB)
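The capacity figures behind such a table are simple multiplication: DIMM capacity times populated slots times processor count. As a worked example (the 32 GB DIMM size used here is an assumed illustration, not a statement of which DIMMs this server supports; see the QuickSpecs for supported parts), with 8 DIMM slots per processor:

```python
# Worked example of the maximum-capacity calculation. The 32 GB DIMM size is
# an assumed illustration; check the product QuickSpecs for supported DIMMs.

def max_memory_gb(dimm_gb: int, slots_per_processor: int, processors: int) -> int:
    """Maximum memory = DIMM capacity x populated slots x processor count."""
    return dimm_gb * slots_per_processor * processors

print(max_memory_gb(32, 8, 1))  # 256
print(max_memory_gb(32, 8, 2))  # 512
```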
This multi-channel architecture provides enhanced performance in Advanced ECC mode. This architecture also enables Online Spare Memory mode. DIMM slots in this server are identified by number and by letter. Letters identify the population order. Slot numbers indicate the DIMM slot ID for spare replacement. Single-, dual-, and quad-rank DIMMs To understand and configure memory protection modes properly, an understanding of single-, dual-, and quad-rank DIMMs is helpful.
AMP mode is not supported by the installed DIMM configuration, the node boots in Advanced ECC mode. For more information, see the HPE UEFI System Utilities User Guide for ProLiant Gen9 Servers on the Hewlett Packard Enterprise website (http://www.hpe.com/info/ProLiantUEFI/docs).
For DIMM spare replacement, install the DIMMs per slot number as instructed by the system software. For more information about node memory, see the Hewlett Packard Enterprise website (http://www.hpe.com/info/memory). Advanced ECC population guidelines For Advanced ECC mode configurations, observe the following guidelines: •...
32). SATA and Mini-SAS cable options IMPORTANT: The HPE Apollo r2800 Chassis does not support nodes using the HPE Dynamic Smart Array B140i Controller or the HPE P840 Smart Array Controller. Hewlett Packard Enterprise recommends installing an HPE H240 Host Bus Adapter or HPE P440 Smart Array Controller.
For more information on the riser board slot specifications, see "PCIe riser board slot definitions (on page 26)." For more information about product features, specifications, options, configurations, and compatibility, see the product QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/info/qs). To connect the cable option: Power down the node (on page 32).
For more information on the riser board slot specifications, see "PCIe riser board slot definitions (on page 26)." For more information about product features, specifications, options, configurations, and compatibility, see the product QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/info/qs). Single-slot left PCI riser cage assembly option To install the component: Power down the node (on page 32).
If you are installing an expansion board, remove the PCI blank. Install any optional expansion boards. Connect all necessary internal cabling to the expansion board. For more information on these cabling requirements, see the documentation that ships with the option. In a 1U node, install the single-slot left PCI riser cage assembly and then secure it with three T-10 screws.
Install the single-slot left PCI riser cage assembly and then secure it with two T-10 screws. Install the three-slot riser cage assembly and then secure it with six T-10 screws ("Three-slot riser cage assembly options" on page 93). CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots have either an expansion slot cover or an expansion board installed.
If you are installing an expansion board, remove the PCI blank. Install any optional expansion boards. Connect all necessary internal cabling to the expansion board. For more information on these cabling requirements, see the documentation that ships with the option. Install the single-slot 1U node right PCI riser cage assembly and then secure it with four T-10 screws.
Install the node into the chassis ("Installing a node into the chassis" on page 60). Connect all peripheral cables to the nodes. Power up the node ("Power up the nodes" on page 32). Single-slot 2U node PCI riser cage assembly option To install the component: Power down the node (on page 32).
Install the single-slot 2U node PCI riser cage assembly and secure it with two T-10 screws. Install the FlexibleLOM 2U node riser cage assembly and secure it with five T-10 screws ("FlexibleLOM 2U node riser cage assembly option" on page 91). CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots have either an expansion slot cover or an expansion board installed.
Remove the PCI blank. Install the FlexibleLOM adapter.
Install the FlexibleLOM riser cage assembly. Do one of the following: Install the 1U left rear I/O blank ("Install the rear I/O blank" on page 40). Install the single-slot left PCI riser cage assembly ("Single-slot left PCI riser cage assembly option"...
Remove the PCI blank. Install the FlexibleLOM adapter. Do the following: Install the single-slot 2U node PCI riser cage assembly and secure it with two T-10 screws.
Install the FlexibleLOM 2U node riser cage assembly and secure it with five T-10 screws. CAUTION: To prevent improper cooling and thermal damage, do not operate the node unless all PCI riser cages or rear I/O blanks are installed, and do not operate the node unless all PCI slots have either an expansion slot cover or an expansion board installed.
Install the single-slot left PCI riser cage assembly and then secure it with two T-10 screws. If installing an expansion board, do the following: Remove the riser cage bracket.
Select the appropriate PCIe slot and remove any PCI blanks. Install any optional expansion boards. Connect all necessary internal cables to the expansion board. For more information on these cabling requirements, see the documentation that ships with the option. Install the riser cage bracket.
The node ships with an embedded Dynamic Smart Array B140i Controller. This embedded controller is supported in UEFI Boot Mode only. For more information about the controller and its features, see the HPE Dynamic Smart Array B140i RAID Controller User Guide on the Hewlett Packard Enterprise website (http://www.hpe.com/info/smartstorage/docs).
Smart Array Controller. • The HPE P840 Smart Array Controller can only be installed in slot 2 of the FlexibleLOM 2U node riser cage assembly (PN 798184-B21). For more information on the riser board slot specifications, see "PCIe riser board slot definitions (on page 26)."
Connect the Smart Storage Battery cable to the power distribution board. Install the Smart Storage Battery holder into the chassis. IMPORTANT: Ensure that the battery cable is connected to the correct connector. For detailed cabling information, see "HPE Smart Storage Battery cabling (on page 139)."
Installing the storage controller and FBWC module options IMPORTANT: If planning to install a Smart Storage Battery, install it in the chassis before installing the storage controller and FBWC module in the node ("Installing the HPE Smart Storage Battery" on page 97).
If installed, remove the air scoop from the controller. Open the latch on the controller. Connect the cache module backup power cable to the module. Install the cache module on the storage controller. If you installed a cache module on the storage controller, connect the cache module backup power cable to the riser board ("FBWC module cabling"...
Connect all necessary internal cables to the storage controller. For internal cabling information, see "SATA and mini-SAS cabling (on page 140)." Install the storage controller into the riser cage assembly and secure it to the riser cage with one T-15 screw. Slot 1 of the single-slot left PCI riser cage assembly Slot 1 of the single-slot 2U node PCI riser cage assembly
• Slot 2 of the FlexibleLOM 2U riser cage assembly
• Slot 2 of a three-slot riser cage assembly

Connect the SATA or mini-SAS cable to the bayonet board.
1. Connect all peripheral cables to the nodes.
2. Power up the node ("Power up the nodes" on page 32).

For more information about the integrated storage controller and its features, select the relevant user documentation on the Hewlett Packard Enterprise website (http://www.hpe.com/info/smartstorage/docs).
To configure arrays, see the HPE Smart Storage Administrator User Guide on the Hewlett Packard Enterprise website (http://www.hpe.com/info/smartstorage/docs).

Accelerator options

This hardware option might require a power supply with a higher wattage rating. To accurately estimate the power consumption of your server and select the appropriate power supply and other system components, see the Hewlett Packard Enterprise Power Advisor website (http://www.hpe.com/info/poweradvisor/online).
For more information, see "Accelerator cabling (on page 144)."

Installing one accelerator in a FlexibleLOM 2U node riser cage assembly

For more information about product features, specifications, options, configurations, and compatibility, see the product QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/info/qs).
To install the component:
1. Power down the node (on page 32).
2. Disconnect all peripheral cables from the node.
3. Remove the node from the chassis (on page 32).
4. Place the node on a flat, level surface.
5. Remove the FlexibleLOM 2U node riser cage assembly ("FlexibleLOM 2U node riser cage assembly").
If installing an NVIDIA Tesla K40 GPU, install the front support bracket for Accelerator 1 with four M2.5 screws. Then install the accelerator into the PCI riser cage assembly (NVIDIA Tesla K40 GPU shown).
Installing two accelerators in a three-slot riser cage assembly

For more information about product features, specifications, options, configurations, and compatibility, see the product QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/info/qs).

To install the component:
1. Power down the node (on page 32).
2. Remove the riser cage bracket.
3. Remove the two top PCI blanks from the riser cage assembly.
4. Turn the riser cage assembly over and lay it along the bayonet board side of the node.
5. Remove the existing rear support bracket from Accelerator 1.
6. If installing an NVIDIA Tesla K40 GPU, install the front support bracket for Accelerator 1 with four M2.5 screws.
7. Install the rear support bracket for Accelerator 1 (NVIDIA K40, K80, or M60 GPU shown).
AMD FirePro S9150 GPU

IMPORTANT: If installing an Intel Xeon Phi Coprocessor 5110P, connect the power cable to the 2x4 connector only. Do not connect the power cable to the 2x3 connector.

Connect the Accelerator 1 power cable to Accelerator 1. For more information, see "Accelerator cabling (on page 144)."
AMD FirePro S7150 GPU

If installed, remove the existing front support bracket from Accelerator 2.

AMD FirePro S9150 GPU or AMD FirePro S7150 GPU

Install the rear and front support brackets onto Accelerator 2.

NVIDIA K40, K80, or M60 GPU

Secure the rear support bracket for Accelerator 2 with three T-10 screws.
Secure the front support bracket for Accelerator 2 with two M2.5 screws.

NVIDIA GRID K2 Reverse Air Flow GPU Accelerator

Secure the front support bracket for Accelerator 2 with four M2.5 screws. Secure the rear support bracket for Accelerator 2 with three T-10 screws.
Intel Xeon Phi Coprocessor 5110P

AMD FirePro S7150 GPU

Secure the front support bracket with four M2.5 screws. Reinstall the accelerator cover.
Secure the rear support bracket with four T-10 screws.

AMD FirePro S9150 GPU

Secure the front support bracket with four M2.5 screws. Reinstall the accelerator cover.
Secure the rear support bracket with four T-10 screws.

Install Accelerator 2 into slot 4 (NVIDIA K40, K80, or M60 GPU shown).
AMD FirePro S7150 GPU

AMD FirePro S9150 GPU

IMPORTANT: If installing an Intel Xeon Phi Coprocessor 5110P, connect the power cable to the 2x4 connector only. Do not connect the power cable to the 2x3 connector.

1. Connect the Accelerator 2 power cable to Accelerator 2.
2. Connect the Accelerator 1 power cable to the Accelerator 2 power cable.
(PN 798178-B21) or the single-slot 2U node PCI riser cage assembly (PN 800293-B21). For more information about product features, specifications, options, configurations, and compatibility, see the product QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/info/qs).
To install the component:
1. Power down the node (on page 32).
2. Disconnect all peripheral cables from the node.
3. Remove the node from the chassis (on page 32).
4. Place the node on a flat, level surface.
5. Do one of the following:
• Remove the single-slot left PCI riser cage assembly (on page 48).
Processor and heatsink

For more information about product features, specifications, options, configurations, and compatibility, see the product QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/info/qs).

To install the component:
1. Power down the node (on page 32).
1. Open each of the processor locking levers in the order indicated in the following illustration, and then open the processor retaining bracket.
2. Remove the clear processor socket cover. Retain the processor socket cover for future use.

CAUTION: THE PINS ON THE SYSTEM BOARD ARE VERY FRAGILE AND EASILY DAMAGED.
3. Install the processor. Verify that the processor is fully seated in the processor retaining bracket by visually inspecting the processor installation guides on either side of the processor.

CAUTION: THE PINS ON THE SYSTEM BOARD ARE VERY FRAGILE AND EASILY DAMAGED.

4. Close the processor retaining bracket.
5. Press and hold the processor retaining bracket in place, and then close each processor locking lever. Press only in the area indicated on the processor retaining bracket.

CAUTION: Always use a new heatsink when replacing processors. Failure to use new components can cause damage to the processor.
10/100 Mb/s or 10 Gb/s.

For more information about product features, specifications, options, configurations, and compatibility, see the product QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/info/qs).

To install the component:
1. Power down the node (on page 32).
2. Twist and pull to remove the knockout from the node.
3. Install the dedicated iLO management port card into the node.
4. If removed, install all rear I/O blanks ("Install the rear I/O blank" on page 40).
5. Install any removed PCI riser cage assemblies.
6. Install the node into the chassis ("Installing a node into the chassis").
HP Trusted Platform Module option

For more information about product features, specifications, options, configurations, and compatibility, see the product QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/info/qs).

Use these instructions to install and enable a TPM on a supported node. This procedure includes three sections, beginning with installing the Trusted Platform Module board (on page 133).
Installing the Trusted Platform Module board

WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the internal system components to cool before touching them.

1. Power down the node (on page 32).
2. Disconnect all peripheral cables from the node.
3. Remove the node from the chassis (on page 32).
For more information on firmware updates and hardware procedures, see the HP Trusted Platform Module Best Practices White Paper on the Hewlett Packard Enterprise Support Center website (http://www.hpe.com/support/hpesc). For more information on adjusting TPM usage in BitLocker, see the Microsoft website (http://technet.microsoft.com/en-us/library/cc732774.aspx).
Drive backplane power cabling

HPE Apollo r2600 Chassis:
• Power cable for Node 1 and Node 2
• Power cable for drives
• Power cable for Node 3 and Node 4
• PDB pass-through cable

HPE Apollo r2200 Chassis:
• Power cable for Node 1 and Node 2
• Power cable for Node 3 and Node 4
• PDB pass-through cable

HPE Apollo r2800 Chassis:
• Power cable for Node 1 and Node 2
• Power cable for Node 3 and Node 4
• Power cable for drives
• PDB pass-through cable

RCM 2.0 cabling
Fan power cabling

HPE Apollo r2200 Chassis and HPE Apollo r2600 Chassis

HPE Apollo r2800 Chassis:
• PDB to left fan cage power cable
• Storage expander card to right fan cage power cable
• PDB to storage expander card fan power cable
Fan module cabling

• Fan 1 cable
• Fan 2 cable
• Fan 3 cable
• Fan 4 cable
• Fan 5 cable
• Fan 6 cable
• Fan 7 cable
• Fan 8 cable

HPE Smart Storage Battery cabling
Node cabling

SATA and mini-SAS cabling

B140i 1U node SATA cabling

B140i 2U node SATA cabling

• SATA 1 cable: Mini-SAS connector 1 (SATA x4) on the system board to Port 1 on the bayonet board
• SATA 2 cable: Mini-SAS connector 2 (SATA x4) on the system board to Port 2 on the bayonet board
Mini-SAS P440/P840 cabling

HPE P440 Smart Array controller installed in a 1U node

HPE P840 Smart Array controller installed in a FlexibleLOM 2U node riser cage assembly

• Mini-SAS P440/P840 cable: Port 1 on the P840 Smart Array controller to Port 1 on the bayonet board
• HPE P440 Smart Array controller in a single-slot left PCI riser cage assembly
• HPE P440 Smart Array controller in a single-slot 2U node PCI riser cage assembly
• HPE P440 Smart Array controller in a single-slot 1U node right PCI riser cage assembly
NVIDIA Tesla K40 GPU or AMD FirePro S9150 GPU

NOTE: Depending on the accelerator model purchased, the accelerator and cabling might look slightly different than shown.

Intel Xeon Phi Coprocessor 5110P

IMPORTANT: If installing an Intel Xeon Phi Coprocessor 5110P, connect the power cable to the 2x4 connector only.
Single NVIDIA GRID K2 Reverse Air Flow GPU:
• Accelerator 2 power cable (PN 825635-001)
• Accelerator 1 power cable (PN 825634-001)

Dual NVIDIA Tesla K40 GPUs, NVIDIA GRID K2 Reverse Air Flow GPUs, AMD FirePro S9150 GPUs, or AMD FirePro S7150 GPUs:
• Accelerator 2 power cable (PN 825635-001)
Dual Intel Xeon Phi Coprocessor 5110P:
• Accelerator 2 power cable (PN 825635-001)
• Accelerator 1 power cable (PN 825634-001)

Single NVIDIA Tesla K80 GPU or NVIDIA Tesla M60 GPU:
• Accelerator 2 power cable (PN 825637-001)
• Accelerator 1 power cable (PN 825636-001)
QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/info/qs).

HPE iLO

iLO is a remote server management processor embedded on the system boards of HPE ProLiant and Synergy servers. iLO enables the monitoring and controlling of servers from remote locations. HPE iLO management is a powerful tool that provides multiple ways to configure, update, monitor, and repair servers remotely.
(http://www.hpe.com/info/intelligentprovisioning/docs)

iLO RESTful API support

HPE iLO 4 firmware version 2.00 and later includes the iLO RESTful API. The iLO RESTful API is a management interface that server management tools can use to perform configuration, inventory, and monitoring of the ProLiant server via iLO. The iLO RESTful API uses basic HTTPS operations (GET, PUT, POST, DELETE, and PATCH) to submit or return JSON-formatted data to and from the iLO web server.
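As a sketch of how a management tool might drive the iLO RESTful API, the following assembles the target URI, headers, and JSON body for a PATCH request. The iLO hostname and the example BIOS attribute are placeholders for illustration, and no network I/O is performed; a real tool would send the result over HTTPS with authentication and TLS verification per the iLO RESTful API documentation.

```python
import json

def build_patch_request(ilo_host, resource_path, settings):
    """Assemble the URI, headers, and JSON body for an iLO RESTful API
    PATCH. The caller is expected to send the result over HTTPS with
    appropriate credentials; no network I/O happens here."""
    uri = "https://{}{}".format(ilo_host, resource_path)
    headers = {"Content-Type": "application/json"}
    body = json.dumps(settings)
    return uri, headers, body

# Hypothetical example: queue a BIOS setting change. The hostname and
# attribute name are placeholders, not taken from this guide.
uri, headers, body = build_patch_request(
    "ilo.example.net",
    "/rest/v1/Systems/1/Bios/Settings",
    {"AdminName": "Ops Team"},
)
print(uri)
```

Changes PATCHed to the Bios/Settings resource are typically applied on the next reboot; consult the iLO RESTful API data model reference for the exact resource paths supported by your firmware version.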
HPE Insight Remote Support

Hewlett Packard Enterprise strongly recommends that you register your device for remote support to enable enhanced delivery of your Hewlett Packard Enterprise warranty, HPE support services, or Hewlett Packard Enterprise contractual support agreement. Insight Remote Support supplements your monitoring...
Insight Online

HPE Insight Online is a capability of the Support Center portal. Combined with Insight Remote Support central connect or Insight Online direct connect, it automatically aggregates device health, asset, and support information with contract and warranty information, and then secures it in a single, personalized dashboard that is viewable from anywhere at any time.
For more information or to download SPP, see one of the following pages on the Hewlett Packard Enterprise website:
• Service Pack for ProLiant download page (http://www.hpe.com/servers/spp/download)
• Launching other pre-boot environments such as the Embedded UEFI Shell and Intelligent Provisioning

For more information on the UEFI System Utilities, see the HPE UEFI System Utilities User Guide for HPE ProLiant Gen9 Servers on the Hewlett Packard Enterprise website (http://www.hpe.com/info/uefi/docs).
Use the User Defined Defaults feature in UEFI System Utilities to override the factory default settings. For more information, see the HPE UEFI System Utilities User Guide for HPE ProLiant Gen9 Servers on the Hewlett Packard Enterprise website (http://www.hpe.com/info/uefi/docs).

Restoring and customizing configuration settings

You can reset all configuration settings to the factory default settings, or you can restore system default configuration settings, which are used instead of the factory default settings.
Secure Boot and have an EFI boot loader signed with one of the authorized keys can boot when Secure Boot is enabled. For more information about supported operating systems, see the HPE UEFI System Utilities and Shell Release Notes for HPE ProLiant Gen9 Servers on the Hewlett Packard Enterprise website (http://www.hpe.com/info/uefi/docs).
Starting with ProLiant Gen8 servers, HPE SSA replaces ACU with an enhanced GUI and additional configuration features. The HPE SSA exists in three interface formats: the HPE SSA GUI, the HPE SSA CLI, and HPE SSA Scripting. Although all formats provide support for configuration tasks, some of the advanced tasks are available in only one format.
The pre-OS behavior of the USB ports is configurable in the UEFI System Utilities, so that the user can change the default operation of the USB ports. For more information, see the HPE UEFI System Utilities User Guide for HPE ProLiant Gen9 Servers on the Hewlett Packard Enterprise website (http://www.hpe.com/info/uefi/docs).
To download the flash components, see the Hewlett Packard Enterprise Support Center website (http://www.hpe.com/support/hpesc). For more information about the One-Time Boot Menu, see the HPE UEFI System Utilities User Guide for HPE ProLiant Gen9 Servers on the Hewlett Packard Enterprise website (http://www.hpe.com/info/uefi/docs).
To obtain the assigned file system volume for the USB key, enter Map -r. For more information about accessing a file system from the shell, see the HPE UEFI Shell User Guide for HPE ProLiant Gen9 Servers on the Hewlett Packard Enterprise website (http://www.hpe.com/info/uefi/docs).
To download the SPP, see the Hewlett Packard Enterprise website (http://www.hpe.com/servers/spp/download). To locate the drivers for a particular server, go to the Hewlett Packard Enterprise Support Center website (http://www.hpe.com/support/hpesc). Under Select your HPE product, enter the product name or number and click Go.
Hewlett Packard Enterprise offers Change Control and Proactive Notification to notify customers 30 to 60 days in advance of upcoming hardware and software changes on Hewlett Packard Enterprise commercial products. For more information, see the Hewlett Packard Enterprise website (http://www.hpe.com/info/pcn).
• Simplified Chinese (http://www.hpe.com/support/Gen9_TSG_zh_cn)

The HPE ProLiant Gen9 Troubleshooting Guide, Volume II: Error Messages provides a list of error messages and information to assist with interpreting and resolving error messages on ProLiant servers and server blades. To view the guide, select a language.
System battery If the node no longer automatically displays the correct date and time, then replace the battery that provides power to the real-time clock. Under normal use, battery life is 5 to 10 years. WARNING: The computer contains an internal lithium manganese dioxide, a vanadium pentoxide, or an alkaline battery pack.
Warranty and regulatory information

Warranty information
• HPE ProLiant and x86 Servers and Options (http://www.hpe.com/support/ProLiantServers-Warranties)
• HPE Enterprise Servers (http://www.hpe.com/support/EnterpriseServers-Warranties)
• HPE Storage Products (http://www.hpe.com/support/Storage-Warranties)
• HPE Networking Products (http://www.hpe.com/support/Networking-Warranties)

Regulatory information

Safety and regulatory compliance

For important safety, environmental, and regulatory information, see Safety and Compliance Information for Server, Storage, Power, Networking, and Rack Products, available at the Hewlett Packard Enterprise website (http://www.hpe.com/support/Safety-Compliance-EnterpriseProducts).
Local representative information

Kazakh:
• Russia:
• Belarus:
• Kazakhstan:

Manufacturing date: The manufacturing date is defined by the serial number.

CCSYWWZZZZ (serial number format for this product)

Valid date formats include:
• YWW, where Y indicates the year counting from within each new decade, with 2000 as the starting point;...
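The YWW rule above can be sketched as a small decoder. Because the serial number encodes only the year within a decade, the decade base year must be supplied by the caller; the serial below is made up for illustration and is not a real product serial.

```python
def decode_manufacturing_date(serial, decade_base):
    """Decode the YWW portion of a CCSYWWZZZZ serial number.

    serial      -- 10-character serial in CCSYWWZZZZ form
    decade_base -- first year of the decade the unit was built in
                   (e.g. 2010); Y encodes only the year within that
                   decade, so the decade must come from context
    """
    y = int(serial[3])       # Y: year within the decade
    ww = int(serial[4:6])    # WW: week of manufacture
    return decade_base + y, ww

# Hypothetical serial: year digit 5, week 32, assuming the 2010s
year, week = decode_manufacturing_date("CCS532ABCD", 2010)
print(year, week)  # 2015 32
```

The ZZZZ suffix (the remaining four characters) is a unit identifier and does not affect the date.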
Electrostatic discharge Preventing electrostatic discharge To prevent damaging the system, be aware of the precautions you must follow when setting up the system or handling parts. A discharge of static electricity from a finger or other conductor may damage system boards or other static-sensitive devices. This type of damage may reduce the life expectancy of the device.
410 ft) above 900 m (2953 ft) to a maximum of 3048 m (10,000 ft). The approved hardware configurations for this system are listed on the Hewlett Packard Enterprise website (http://www.hpe.com/servers/ASHRAE).

Mechanical specifications

HPE Apollo r2200 Chassis (12 LFF)
Depending on installed options, the node is configured with one of the following power supplies: • HPE 800W Flex Slot Titanium Hot Plug Power Supply Kit – 96% efficiency • HPE 800W Flex Slot Platinum Hot Plug Power Supply Kit – 94% efficiency...
• HPE 800W Flex Slot -48VDC Hot Plug Power Supply Kit – 94% efficiency
• HPE 1400W Flex Slot Platinum Plus Hot Plug Power Supply Kit – 94% efficiency

For more information about the power supply features, specifications, and compatibility, see the Hewlett Packard Enterprise website (http://www.hpe.com/servers/powersupplies).
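The efficiency ratings above translate directly into wall-power draw. As a rough illustration only (efficiency varies with load, so these are nameplate estimates, not measured values), input power is output power divided by efficiency:

```python
def input_power_watts(output_watts, efficiency):
    """Approximate input draw for a supply delivering output_watts
    at the given efficiency (expressed as a fraction, 0-1)."""
    return output_watts / efficiency

# 800 W Titanium (96%) vs Platinum (94%) supplies at full load
print(round(input_power_watts(800, 0.96), 1))  # 833.3
print(round(input_power_watts(800, 0.94), 1))  # 851.1
```

For sizing actual configurations, use the Hewlett Packard Enterprise Power Advisor rather than this simplification.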
Hewlett Packard Enterprise Support Center. You must have an HP Passport set up with relevant entitlements.

Websites
• Hewlett Packard Enterprise Information Library (http://www.hpe.com/info/enterprise/docs)
• Hewlett Packard Enterprise Support Center (http://www.hpe.com/support/hpesc)
• Contact Hewlett Packard Enterprise Worldwide (http://www.hpe.com/assistance)
• Single Point of Connectivity Knowledge (SPOCK) Storage compatibility matrix (http://www.hpe.com/storage/spock)
• Storage white papers and analyst reports (http://www.hpe.com/storage/whitepapers)

Customer Self Repair

Hewlett Packard Enterprise products are designed with many Customer Self Repair (CSR) parts to minimize repair time and allow for greater flexibility in performing defective parts replacement. If during...
The same Customer Self Repair notice is repeated in French, Italian, German, Spanish, Dutch, and Portuguese. In English: Hewlett Packard Enterprise products include many Customer Self Repair (CSR) parts, which keep repair time to a minimum and allow greater flexibility in replacing defective parts. For more information about the Hewlett Packard Enterprise CSR program, contact your local service provider. For the North American program, see the Hewlett Packard Enterprise website (http://www.hpe.com/support/selfrepair).
Hewlett Packard Enterprise, which will initiate a fast and accurate resolution based on your product's service level. Hewlett Packard Enterprise strongly recommends that you register your device for remote support. For more information and device support details, go to the Insight Remote Support website (http://www.hpe.com/info/insightremotesupport/docs).
Acronyms and abbreviations

ABEND: abnormal end
ACU: Array Configuration Utility
ADM: Advanced Data Mirroring
AMP: Advanced Memory Protection
ASHRAE: American Society of Heating, Refrigerating and Air-Conditioning Engineers
ASR: Automatic Server Recovery
CSA: Canadian Standards Association
CSR: Customer Self Repair
DDR: double data rate
DPC: DIMMs per channel
EAC: EuroAsian Economic Commission
FBWC: flash-backed write cache
GPU: graphics processing unit
HP SUM: HP Smart Update Manager
HPE APM: HPE Advanced Power Manager
HPE SIM: HPE Systems Insight Manager
HPE SSA: HPE Smart Storage Administrator
IEC: International Electrotechnical Commission
iLO: Integrated Lights-Out
IML: Integrated Management Log
ISO: International Organization for Standardization
LFF: large form factor
LOM: LAN on Motherboard
PDU: power distribution unit
POST: Power-On Self Test
RBSU: ROM-Based Setup Utility
RCM: Rack control management
RDIMM: registered dual in-line memory module
RDP: Remote Desktop Protocol
RoHS: Restriction of Hazardous Substances
RPS: redundant power supply
SAS: serial attached SCSI
SATA: serial ATA
SFF: small form factor
SIM: Systems Insight Manager
SPP: Service Pack for ProLiant
SUV: serial, USB, video
UEFI: Unified Extensible Firmware Interface
UID: unit identification
USB: universal serial bus
VCA: Version Control Agent
VCRM: Version Control Repository Manager
VM: Virtual Machine
Hewlett Packard Enterprise is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback (mailto:docsfeedback@hpe.com). When submitting your feedback, include the document title, part number, edition, and publication date located on the front cover of the document. For online help content, include the product name, product version, help edition, and publication date located on the legal notices page.