Abstract This document describes how to install and configure the HPE Hyper Converged 380 appliance and expansion nodes. This document is for the person who installs, administers, and troubleshoots servers and is skilled in network configuration and virtual environments. Hewlett Packard Enterprise assumes you are qualified in the servicing of computer equipment and trained in recognizing hazards in products with hazardous energy levels.
Contents
Introduction 7
Product introduction 7
Purpose of this guide 7
Supported versions 7
Before you begin 8
Before installing 8
Networking requirements 8
Preinstallation worksheets 14
Appliance networks 14
Remote vCenter 15
Settings
Completing the Settings screen 41
Completing the Review Configuration screen 42
Completing the Next Steps screen 43
Installing HC380 Management UI 44
Completing the initial HC380 Management UI configuration 46
Configuring the settings 46
Performing the initial HC380 Management UI setup 46
Configuring LDAP or Active Directory
Issues creating first management appliance 79
CloudSystem was not deployed successfully 80
Could not update the hpcs-data* distributed switch 80
Could not register vCenter with CloudSystem 80
Could not activate compute nodes 81
Could not create Tenant VLAN Segment Ranges 81
Could not create Tenant and Provider VLAN Networks
Remote support 120
Insight Remote Support installation 120
Acronyms and abbreviations 121
Documentation feedback 123
Index 124
Introduction
Product introduction
The Hyper Converged 380 system is a virtualization appliance that combines compute and storage resources in the same chassis. It is designed to be easy to deploy while managing a variety of virtualized workloads in medium-sized businesses and enterprises.
Before you begin Before installing The HC380 installation process is designed for an IT specialist who is familiar with computer hardware and software concepts and virtual machine networking. You are encouraged to read through this document before you begin installation and familiarize yourself with each of the steps. You need the following items to successfully complete the installation: •...
If IPv6 is not enabled, the HC380 nodes will not be discovered during the installation process, and the installation and deployment will not complete. IP address assignments Note the following points about the IP address assignments: • The appliance uses a private IPv4 network (192.168.42.0/24) for internal system communications. This subnet cannot be used by other devices sharing the same network.
Network addresses, purpose, example, count, and notes:
• Addresses: W.X.Y.n+2; Purpose: HC380 Management VM; Example: 172.28.0.3; Count: 1; Notes: This value is the starting IP address on the IP assignments screen of OneView InstantOn.
• Addresses: W.X.Y.n+3 -; Purpose: ESXi Nodes (16 nodes); Example: 172.28.0.4 - 172.28.0.19; Count: 1 - 16; Notes: These address values are automatically and sequentially...
Purpose and example addresses:
• HC380 Management UI VM: 172.28.0.1
• HC380 OneView VM: 172.28.0.2
• HC380 Management VM: 172.28.0.3*
• ESXi nodes (two nodes): 172.28.0.4* - 172.28.0.5*
• Reserved for expansion: 172.28.0.6* - 172.28.0.19* (Total: 19)
• CloudSystem Management & Compute VMs (optional): 172.28.0.20 - 172.28.0.44
• CloudSystem Console VIP (optional): 172.28.0.45 - 172.28.0.47 (Total: 47)
HPE UIM: port 3504 TCP (must be accessible from the Management VM of the system running OneView InstantOn)
NOTE: • For more information about these ports, see "Default port values" in the HP OneView for VMware vCenter Installation Guide found on the Hewlett Packard Enterprise website (http://www.hpe.com/info/ovvcenter/docs).
• Install VMware vCenter Server. Hewlett Packard Enterprise recommends that you always install the latest updates of the software.
• Install OneView for VMware vCenter and ensure that it has access to the networks that will be used for HC380 ESXi management and for HC380 storage.
•...
Preinstallation worksheets This section contains information lists for data needed during the appliance installation and configuration. If you are expanding an existing configuration, you will need to obtain data from the existing appliance and add any data for an expansion node. Appliance networks For each network type (ESXi, vSphere vMotion, Storage), you choose the first IP address in the range for that group.
Management VM ESX Connectivity area on that screen. These values are used to configure the Management VM so that HP OneView InstantOn can verify that the remote server can be accessed and that the remote instance of HPE OneView for VMware vCenter is installed at the correct, minimum version.
Item and value:
• DNS Server on the HC380 ESXi management network (same as DNS value from Settings screen)
Expansion
During an expansion, you are required to provide the password for the remote vCenter user name.
Remote vCenter server items and values:
• IP address: automatically populated from initial installation
• Port: automatically populated from initial installation
iLO addresses iLO IPv4 address requirements Each HC380 server has an iLO which is manually configured with an IPv4 address on the ESX management network. HC380 Management UI imports the iLO for each of the servers by IP address during its configuration process, so that it can then manage the iLOs. HC380 Management UI does not require that you enter the serial number, but it does associate each IP address to a serial number.
Appliance components The following diagrams are examples to help you understand important component locations. Because the HC380 is available with an array of options, storage, networking and power components will vary depending on your specific configuration. Front panel • Front panel with HDDs or SSDs in all three storage bays Item Description Bay 3, with 8 HDDs or SSDs (optional)
Rear panel components
General virtualization and VDI rear panel components
• 10 GbE appliance
Item descriptions:
• 10 GbE NIC Port 2
• 10 GbE NIC Port 1 (FlexLOM)
• NVIDIA graphics card (VDI only, optional)
• iLO connector
• 1 GbE RJ-45 port 1 (do not use during initial configuration)
• 1 GbE RJ-45 port 2 (for connection to a laptop or workstation for setup, on management VM)
• 1 GbE RJ-45 port 3 (not used during initial configuration;...
• iLO connector
• 1 GbE RJ-45 port 1 (do not use during initial configuration)
• 1 GbE RJ-45 port 2 (for connection to a laptop or workstation for setup, on management VM)
• 1 GbE RJ-45 port 3 (not used during initial configuration; available for customer network)
• 1 GbE RJ-45 port 4 (not used during initial configuration;...
Installing the HC380 appliance nodes This section provides instructions for installing the appliance nodes into a rack and cabling the network. Optimum environment When installing the server in a rack, select a location that meets the environmental standards described in this section.
Temperature requirements To ensure continued safe and reliable equipment operation, install or position the system in a well-ventilated, climate-controlled environment. The maximum recommended ambient operating temperature (TMRA) for most server products is 35°C (95°F). The temperature in the room where the rack is located must not exceed 35°C (95°F). CAUTION: To reduce the risk of damage to the equipment when installing third-party options: •...
WARNING: Remove power from the power supply before performing any installation steps or maintenance on the power supply. CAUTION: The server equipment connects the earthed conductor of the DC supply circuit to the earthing conductor at the equipment. For more information, see the documentation that ships with the power supply.
Installing the node into the rack CAUTION: Always plan the rack installation so that the heaviest item is on the bottom of the rack. Install the heaviest item first, and continue to populate the rack from the bottom to the top. Install the server and cable management arm into the rack.
Connect the power cord to the AC power source. WARNING: To reduce the risk of electric shock or damage to the equipment: • Do not disable the power cord grounding plug. The grounding plug is an important safety feature. • Plug the power cord into a grounded (earthed) electrical outlet that is easily accessible at all times.
General virtualization configuration (all 1 GbE appliance) The following cabling example shows the use of three 1 GbE switches with the all 1 GbE appliance. This example applies to the General Virtualization workload configuration only. Item Description Item Description 1 GbE Switch A (IPv6 enabled) Connect Node 2, Port 4 to Switch A (IPv6 enabled) Interconnect switch links...
General virtualization (10GbE appliance) and VDI configurations The following cabling example shows the use of two 10 GbE switches and one 1 GbE switch. This example applies to both the General Virtualization and VDI workload configurations. Though the rear components might vary for each configuration, the cabling for the two configurations is similar. Item Description Item...
CloudSystem configuration Item Description Item Description 10 GbE Switch A (IPv6 enabled) Connect Node 1, Port 4 to Switch A (IPv6 enabled) Interconnect switch links Connect Node 1, Port 6 to Switch A (IPv6 enabled) Interconnect switch links Connect Node 1, Port 5 to Switch B (IPv6 enabled) 10 GbE Switch B (IPv6 enabled) Connect Node 1, Port 3 to Switch B (IPv6...
Item Description Item Description 1 GbE RJ-45 port 2 (for connection to a Connect Node 2, iLO port to 1 GbE Switch laptop or workstation for setup) 1 GbE RJ-45 port 3 (not used during initial Connect Node 1, iLO port to 1 GbE Switch configuration;...
Configuring the system Configuring the HC380 system This section describes how to complete the appliance configuration steps. Before you begin, ensure that the switches and appliance nodes have been racked and cabled per the earlier instructions, and that you have completed the preinstallation worksheets (on page 14). Process overview: Configuring the network switches (on page 32) Powering on all nodes (on page 33)
• VSA iSCSI Storage network/iSCSI Storage Network (tagged network)
• VMware vMotion Network (tagged network)
• Cloud Management Network (tagged network)
• Consumer Access Network (tagged network)
• Object Proxy Network (tagged network)
• IPv6 must be enabled for the VLANs used by the HC380 (a brief switch-side sketch follows).
Separate subnets are required for all networks.
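For reference only, a minimal sketch of the switch-side VLAN prerequisites on an HPE Comware-based switch (the CLI family used in Appendix A). The VLAN IDs and names shown are placeholders and must match your preinstallation worksheets, and IPv6 link-local traffic must not be filtered on these VLANs:
system-view
vlan 201
name MGMT-VLAN
quit
vlan 202
name VMOTION-VLAN
quit
vlan 203
name ISCSI-VLAN
quit
save
The equivalent Cisco Nexus commands are shown in Appendix A.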
Use the following illustration to locate the correct port. Your rear components may vary from what is shown. Configure the laptop or workstation port to use the static IP address 192.168.42.99 with subnet mask 255.255.255.0 (a gateway address is not required). IMPORTANT: Do not configure a laptop or workstation with an IP address of 192.168.42.100 or greater.
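As an illustration, on a Windows laptop you can set this static address from an elevated command prompt; the adapter name "Ethernet" is an assumption and must be replaced with the name of the adapter that is cabled to the node:
netsh interface ip set address name="Ethernet" static 192.168.42.99 255.255.255.0
To return the adapter to DHCP after the installation, run:
netsh interface ip set address name="Ethernet" source=dhcp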
If your HC380 purchase includes NVIDIA Tesla M60 GPUs, you must visit the NVIDIA website (http://www.nvidia.com/object/nvidia-grid-buy.html) to register your product and download the following:
• 90-day trial license
• Preferred Solution Provider list
• NVIDIA GRID software
• GPU mode change utility
•...
# gpumodeswitch --gpumode graphics
Type y to confirm updating all graphics adapters.
Run the following command to reboot the vSphere host:
# reboot
Verify that the GPU mode is set to Graphics by running the following command:
# gpumodeswitch --listgpumodes
Installing NVIDIA GRID Manager Software on vSphere
Obtain the NVIDIA GRID Manager software by visiting the NVIDIA website (http://www.nvidia.com/object/nvidia-grid-buy.html).
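The installation commands are not listed on this page. As a general sketch only (not the verified HC380 procedure), host drivers of this kind are typically installed from the ESXi shell with the host in maintenance mode; the VIB file name below is a placeholder, and the exact file comes from the NVIDIA GRID package you downloaded:
# esxcli system maintenanceMode set --enable true
# esxcli software vib install -v /vmfs/volumes/datastore1/NVIDIA-GRID-vGPU-host-driver.vib
# reboot
After the host reboots, exit maintenance mode and confirm that the driver loaded:
# esxcli system maintenanceMode set --enable false
# nvidia-smi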
• To complete the steps in this section, use the information you entered in the preinstallation worksheets (on page 14). • After you successfully connect using a laptop ("Configuring a laptop or workstation to access the system" on page 33), OneView InstantOn will automatically start. If it does not, navigate to the desktop and click the OneView InstantOn shortcut to start the application.
Create a new datacenter and cluster:
i. Click New. The Create New vCenter Datastore/Cluster window opens.
ii. Enter a new datacenter and cluster name, and then click Create.
iii. Select this cluster name in the Cluster field on the vCenter screen by clicking the down arrow and selecting it from the drop-down list.
You can install up to 16 systems at the same time. By default, the system from which you are accessing the Management VM to complete deployment is selected. Select the systems you want to install. Click Next to continue to the IP assignments screen.
If you selected Remote on the vCenter screen ("Completing the vCenter screen" on page 37), the Starting IP address field is pre-populated with the IP address you entered on that screen. Click Next to continue to the Credentials screen. Completing the Credentials screen On the Credentials screen, enter the preferred StoreVirtual credentials.
NOTE: You can change these credentials later using the StoreVirtual Centralized Management Console. Click Next to continue to the Settings screen. Completing the Settings screen Use the information collected in the preinstallation worksheets (on page 14) to complete the fields of the Settings screen.
If you prefer to use a VSA Failover Manager, enter an invalid NFS share IP address, and OneView InstantOn will create a Virtual Manager that can be configured in the StoreVirtual CMC as a post-deployment step. For more information, see "Appendix G: Management group quorum consideration (on page 109)."...
If you need to make changes, go back using the links in the left navigation pane. When you are ready to proceed, click Deploy. Be aware that if you are deploying multiple systems, the time to complete deployment increases. IMPORTANT: Do not close OneView InstantOn during the deployment process. If the deployment does not complete or hangs, see "Troubleshooting (on page 73)."...
Copy and paste the information from the text editor into the Hewlett Packard Enterprise licensing portal. Under "Manage advanced StoreVirtual features" on the Next Steps screen, click Launch the HP StoreVirtual Centralized Management Console. Enter the license keys you obtained from the Hewlett Packard Enterprise licensing portal.
Following a successful configuration using OneView InstantOn, you are prompted to install HC380 Management UI. If you do not receive a pop-up window, check the task bar to see if the window appears minimized. If so, maximize the window. The HC380 Management UI deploys and configures OneView as a part of its first-time setup. It does not support using an already configured OneView appliance.
Completing the initial HC380 Management UI configuration Configuring the settings Process overview: Performing the initial HC380 Management UI setup (on page 46) Configuring LDAP or Active Directory (on page 47) Creating datastores Performing the initial HC380 Management UI setup Most of the setup values are populated during the configuration using the OneView InstantOn tool, and you can use the Settings area in HC380 Management UI to verify or update the values.
Configuring LDAP or Active Directory HC380 Management UI, when used in conjunction with LDAP or AD, can restrict users so that they only see their own VMs. If HC380 is not configured with LDAP or AD, this functionality is not available. If the HC380 is configured with LDAP or AD, use the following steps to connect to an LDAP or AD server: Click Connect in the Directory section.
Installing CloudSystem If you ordered CloudSystem, the CloudSystem installation utility is pre-installed on the system. Before you run the installation utility, you must have successfully completed the steps to configure the HC380 using OneView InstantOn. The CloudSystem installer assumes the following: •...
Running the installation utility
Overview
The CloudSystem installation process can take several hours to complete, so adequate preparation helps limit rework. This guide assumes that the user is familiar with the overall installation process described in the HPE Helion CloudSystem 9.0 Installation Guide and with the background information provided in the HPE Helion CloudSystem 9.0 Administrator Guide and the HPE Helion CloudSystem 9.0 Network Planning Guide.
Running the installation utility The CloudSystem installer can be launched from the management VM, by using a connected laptop or workstation. For more information, see "Configuring a laptop or workstation to access the system (on page 33)." Before launching the installation utility, you should have completed the preinstallation worksheet for CloudSystem ("CloudSystem"...
Click Next to proceed to the Core VLAN IDs screen. After installation, you have the opportunity to enter the license for the enterprise components. Completing the Core VLAN IDs screen On the Core VLAN IDs screen, provide networking VLAN IDs for each of the five CloudSystem networks.
Click Next to proceed to the Networks screen. Completing the Networks screen On the Networks screen, specify up to four Provider and four Tenant VLAN identifiers and the corresponding networking subnet range (in CIDR format). You can also specify the external networking VLAN identifier and the corresponding CIDR networking value for the external VLAN.
Click Next to proceed to the IP Assignments screen. Completing the IP Assignments screen On the IP Assignments screen, specify the range of IP addresses used for the CloudSystem management VMs on the Data Center Management (DCM) and Consumer Access Network (CAN) networks.
Click Next to proceed to the Credentials screen. Completing the Credentials screen On the Credentials screen, provide the vCenter administrator and VSA administrator credentials. Click Next to proceed to the Appliance Hostname screen. Installing CloudSystem 54...
Completing the Appliance Hostnames screen
On the Appliance Hostnames screen, specify the hostnames and IP addresses needed for the CloudSystem appliances. Each of the major CloudSystem consoles is hosted by an appliance, and this screen lets you specify the hostnames and IP addresses of these appliance VMs. The DCM network and CAN network values must be outside the range specified for the required IP addresses or specified for the DCM network in the OneView InstantOn tool.
When all items are denoted with a green square, click Deploy to launch the CloudSystem installer. The Deploy screen appears to provide feedback on the installation process. A CloudSystem Enterprise installation can take about 2 hours to complete. Details of the installation process are shown in the Details window.
Perform one or all of the following:
• View the CloudSystem 9 documentation
• Open the CloudSystem 9 Operations Console
• Open the OpenStack user portal
Click Finish. After you finish using the Next Steps screen, you must disable SSH on each ESXi server before using the product.
Select the specific cluster for which you want to create a datastore. The Summary tab for the selected cluster appears. Select the Manage tab, and then select HP Management. The Actions menu appears on the right side of the window.
From the Actions menu, select Create Datastore. The Create Datastore wizard appears. Alternatively, right-click the cluster name and select All HP Management Actions → Create Datastore. Select the default location, and then click Next. On the Select storage screen, select the applicable storage pool.
Press F2 to access the Customize System/View Logs menu.
Type the username and password.
Access the Troubleshooting Mode Options menu.
Disable SSH.
Press Esc to exit the Troubleshooting Mode Options menu.
Press Esc to close the Customize System/View Logs menu.
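As an alternative to the DCUI steps above, SSH can also be stopped from the host itself; a minimal sketch, assuming you are already logged in to the ESXi Shell or an existing SSH session on each host:
# vim-cmd hostsvc/disable_ssh
The same setting can also be changed per host in the vSphere Web Client under Manage > Settings > Security Profile.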
Upgrading CloudSystem
Prerequisites for upgrading cloud
You must meet the following prerequisites before beginning the update:
• You must download the file HC380-CloudSystem_9.01_Installer.zip.
• The environment variable CS9INSTALL must be set. The attributes for this variable are set during the initial installation.
Upgrading CloudSystem 9.0 to 9.01 and 9.02
On the management node (on which the installation utility was run), open PowerShell.
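Before launching the upgrade, you can confirm in the same PowerShell session that the variable is present; the path in the sample output is only an example:
PS C:\> $env:CS9INSTALL
C:\CS9Install
If nothing is returned, set the variable (or repeat the step of the initial installation that defines it) before continuing.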
Provider
Launch the CloudSystem Operations Console.
Select Menu > Networking > Provider Networks, and then select Add Provider Network.
Supply the network name (a suggested naming convention is Provider<VLAN>) and the Segmentation (VLAN) ID.
Use the default demo project unless the customer has defined a different project.
Add the subnet details (subnet name, CIDR, gateway, &...
Validating CloudSystem
Use the Monitoring Dashboard to check the status of the nodes in a trio.
Open a browser and point it to the first management appliance (ma1), for example http://10.100.3.10.
Accept the EULA and click UPDATE.
Log in to the CloudSystem Operations Console; the default user name is admin and the default password is unset.
Expand the Menu drop-down, then select General >...
• Do not run HP OneView InstantOn while performing an HPE LeftHand OS upgrade. • Tool tips and error information are available when you hover in a text box on a screen in HP OneView InstantOn. The information might take a few seconds to display.
of 16 appliances in the cluster. For instance, if you already have 3 systems in your existing cluster, you can expand by adding a further 13 systems. Select the systems you want to add to your existing cluster and then click Next. On the IP Assignments screen, enter the appropriate information that matches your existing cluster.
Click Next. On the Credentials screen, enter your existing StoreVirtual credentials, and then click Next. On the Review Configuration screen, ensure the information that you entered is correct. If you need to make changes, go back using the links in the left navigation pane. When you proceed, click Deploy.
Open the vCenter web client.
Select the cluster that was created during system deployment.
Select Manage > HP Management.
Next to the Actions menu, double-click the Refresh icon.
Click Yes when prompted to refresh data now.
Verify that the refresh completed by hovering over the Refresh icon to display the last time the HPE Storage data was updated. If prompted to refresh data, select No.
NOTE: If you need the licensing links again, restart OneView InstantOn, and the Next Steps screen automatically opens.
Expanding CloudSystem
Compute node expansion
When CloudSystem has been installed on an HC380, the default networking has been adjusted to suit the CloudSystem requirements. For compute nodes, the management network (Data Center Management, or DCM), which uses portgroup ESXcomp on the vSwitch1 standard switch in vCenter, has been assigned a VLAN ID and migrated to the vCenter distributed switch CS9_Compute_DVS and portgroup dvESXcomp (management hosts are migrated to CS9_Mgmt_DVS and portgroup dvESXmgmt).
Go to Network TCP/IP Settings > eth0 and provide the hostname, subnet mask (SN), and gateway (GW). Use the Tab key to navigate.
Log out of the session and close the console.
iii. Configure the new node in the CMC
Launch the CMC and log in as the admin user.
On the Toolbar, select Find >...
On the Dynamic Discovery tab Select Add… Provide the iSCSI Server IP. You can obtain this IP in two ways. — From the other nodes configured on vCenter — From the CMC, navigate to the VSA cluster then go to the iSCSI tab. The iSCSI Server IP is the value of the Virtual IP.
Click Next on Network Connectivity.
iii. Click Next on Virtual Machine Networking.
Click Finish.
Right-click the hpcs-storageXXXXXX distributed vSwitch and click Add Host…
Select the new node.
Click Yes to the Warning pop-up (since no physical adapter was selected).
Click Next on Network Connectivity.
iii.
You can still log in and use the vCenter web client. The certificate error will not affect use of the system. To resolve the certificate error: After logging in, go to the Home tab of the vCenter web client and select HP Management Administration.
• If there are IP address conflicts, ensure that the IP addresses that are validated on the IP Assignment screen are not in use. • If you are unable to resolve the issue or you discover another issue, contact Hewlett Packard Enterprise Support.
"The page cannot be displayed" error message appears Symptom When attempting to access the Online Help or the Software Depot link under Upgrades, the following error message appears: The page cannot be displayed. Action Configure the proxy server for Internet Explorer on the Management VM. OneView InstantOn hangs with error message "0:02 Adding SAN to vCenter"...
Troubleshooting CloudSystem Connection error to vCenter server Symptom The following error message appears during installation: Could not connect to vCenter server at … Initialization failed…cannot proceed. Action Perform one or both of the following actions: • Verify that the server and username values are correct. •...
Use OneView for vCenter to allocate a datastore, or use the following steps to create it manually: Launch the StoreVirtual Centralized Management Console. Log in to the VSA Cluster (as specified in OneView InstantOn, default is HP-HyperConv-XXX). Credentials were supplied during the OneView InstantOn setup.
Share the volume with all servers. Launch vSphere and connect to the vCenter server. Pick the CS9 management host and click Rescan All, then Add Storage (CS9MgmtDatastore). On the remaining hosts in compute clusters, either click Rescan All, or wait for the datastore to be visible.
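If you prefer the command line, the rescan can also be triggered from the ESXi shell of each host; a sketch only, not a replacement for the procedure above:
# esxcli storage core adapter rescan --all
# esxcli storage filesystem list
The second command lists the mounted datastores so that you can confirm the new datastore (for example, CS9MgmtDatastore) is visible.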
On the ESXi server that is the management VM, browse to the datastore1 datastore. Locate the cloud folder and download the Enterprise and Foundation zip files to a temporary location on the management appliance. Unzip the Enterprise and Foundation zip files. Use vSphere to upload the cs*.ova files (cs-mgmt, cs-cloud, cs-sdn, cs-monitoring, cs-update, cs-enterprise, cs-ovsvapp).
CloudSystem was not deployed successfully Symptom CloudSystem was not deployed successfully. Action See the HPE Helion CloudSystem 9.0 Troubleshooting Guide on the Hewlett Packard Enterprise website (http://www.hpe.com/info/CloudSystem/docs). Could not update the hpcs-data* distributed switch Symptom The hpcs-data* distributed switch could not be updated during the installation. Cause After CloudSystem is deployed, the installer tries to increase the number of uplinks to two on the hpcs-data* distributed switch for each ESXi compute host.
Could not activate compute nodes Symptom Could not activate compute nodes. Explanation The installer is trying to add any hosts in the compute cluster as activated nodes in CloudSystem. Potential problems could occur if vCenter is not registered or it fails to find an OVA image. Action Perform one or both of the following actions: •...
Action Running deploy a second time from the UI will usually fix most problems. For more information, see "Tenant and provider networking (on page 61)." Could not create Tenant and Provider VLAN Networks Symptom Could not create Tenant and Provider VLAN Networks. Explanation The installer is creating the Tenant and Provider VLAN networks specified in the UI.
• Perform a manual update by referring to HPE Helion CloudSystem 9.0 Installation Guide located on the Hewlett Packard Enterprise website (http://www.hpe.com/info/CloudSystem/docs). Could not update passwords Symptom Could not update the administrator password for CloudSystem. Action Hewlett Packard Enterprise recommends that you back up your system and take a snapshot of CloudSystem before changing.
Ensure that the original cluster name (e.g. HC-CLUS) that was specified during the OneView InstantOn deployment is in place, including the ESXi hosts in that cluster. Could not find the VSA VM on host <name> Symptom The installation utility cannot locate the VSA VM on the specified host. Explanation When trying to migrate ESXi hosts to the new compute cluster, the utility must be able to shut down the VSA VM on each respective host.
Appendix A: Network switch configuration
Hewlett Packard Enterprise switches
This section describes how to configure a 3-node HC380 appliance with a pair of HPE 5900AF-48XG-4QSFP+ switches.
Network cabling
The following table shows an example of 10GbE network connectivity to two HPE 5900AF-48XG-4QSFP+ switches configured with HPE Intelligent Resilient Framework (IRF).
Device Port Comment HC380 Node 1 HC380 Node 2 HC380 Node 3 Configuring the switches The following procedures describe an example of how to configure the two HPE 5900AF-48XG-4QSFP+ switches for use in an HC380 for the general virtualization configuration. Although two 10GbE based switches are used here in this example, the same steps can be leveraged for a 1GbE based HC380 solution using an HPE 5900AF-48G-4XG-2QSFP+ switch, taking into account the additional 1GbE connections that are required.
Press Ctrl+C or Ctrl+D to stop the automatic configuration. Whenever the instructions call for network configuration in the system view context, if you are at the <HPE> prompt, issue the system-view command to get to the [HPE] prompt.
Automatic configuration is aborted.
Line aux0 is available. Press ENTER to get started.
<HPE>
(To leave the existing filename unchanged, press the Enter key):
Validating file. Please wait...
Saved the current configuration to mainboard device successfully.
[HPE] quit
<HPE> reboot
Start to check configuration with next startup configuration file, please wait.........DONE!
This command will reboot the device. Continue? [Y/N]: y
Now rebooting, please wait...
Saved the current configuration to mainboard device successfully. Configure IRF priority Configure the domain and IRF parameters. The <<net_switch_domain_id>> value is an arbitrary number, but must be unique from other IRF domains. Switch prompts will not be displayed in the remaining switch configuration information in this section of the documentation.
Create the public keys on the switch
From system-view, run the following commands.
public-key local create rsa
Input the modulus length [default = 1024]: Enter
public-key local create dsa
Input the modulus length [default = 1024]: Enter
public-key local create ecdsa secp256r1
SSH to the switch using <<net_switch1_mgmt_ip>>, user name admin, and the password <<net_switch_admin_password>>.
description Switch1-IRF-Switch2-IRF-2/0/52
quit
interface FortyGigE 2/0/51
description Switch2-IRF-Switch1-IRF-1/0/51
quit
interface FortyGigE 2/0/52
description Switch2-IRF-Switch1-IRF-1/0/52
quit
save
Configure VLANs on HC380 nodes
To configure the VLANs for each HC380 node in the configuration, run the following commands.
interface range Ten-GigabitEthernet 1/0/1 to Ten-GigabitEthernet 1/0/3
port link-type trunk
undo port trunk permit vlan 1
port trunk pvid vlan <<mgmt_net_vlan>>
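The listing above is truncated at a page break. As a hedged sketch only of how such a Comware trunk configuration typically continues (the variables are the same placeholders used throughout this appendix), the tagged VLANs are then permitted on the same port range:
port trunk permit vlan <<vmotion_net_vlan>> <<iscsi_net_vlan>> <<vm_production_net_1_vlan>>
quit
save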
Parity: None
Stop bits: 1
Flow control: None
Emulation: VT100
Press OK to open the connection.
Cisco Nexus 56128P switch 1
To set up the initial configuration for the Cisco Nexus 56128P switch 1, complete the following steps. On initial boot and connection to the serial or console port of the switch, the NX-OS setup should automatically start and attempt to enter Power on Auto Provisioning (PoAP).
Abort Power on Auto Provisioning and continue with normal setup? (yes/no) [n]: yes
Enter the password for "admin": <<net_switch_admin_password>>
Confirm the password for "admin": <<net_switch_admin_password>>
Would you like to enter the basic configuration dialog (yes/no): yes
Create another login account (yes/no) [n]: Enter
Configure read-only SNMP community string (yes/no) [n]: Enter
Configure read-write SNMP community string (yes/no) [n]: Enter
Enter the switch name: <<net_switch2_hostname>>
Create VLANs
Complete this section on both Cisco Nexus 56128P switch 1 and Cisco Nexus 56128P switch 2. From the configuration terminal mode, run the following commands to create the needed VLANs for the solution.
vlan <<mgmt_net_vlan>>
name MGMT-VLAN
exit
vlan <<vmotion_net_vlan>>
From configure terminal, run the following commands to create port profiles which will be used to simplify ongoing network administration and configuration.
port-profile type ethernet HC380-Nodes
switchport mode trunk
switchport trunk native vlan <<mgmt_net_vlan>>
switchport trunk allowed vlan <<mgmt_net_vlan>>, <<vmotion_net_vlan>>, <<iscsi_net_vlan>>, <<vm_production_net_1_vlan>>
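For completeness, a brief sketch (not taken from this guide) of how such a profile is typically enabled and applied to the node-facing ports on NX-OS; the interface range is an assumption and must match your cabling:
port-profile type ethernet HC380-Nodes
state enabled
exit
interface Ethernet1/1-3
inherit port-profile HC380-Nodes
no shutdown
exit
On NX-OS, a port profile takes effect on inheriting interfaces only after state enabled is configured.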
C:\Program Files (x86)\VMware\VMware vSphere CLI>
Test connectivity after the ping command is successful by opening Internet Explorer. Ensure that you include the opening and closing brackets ([ ]) around the link-local IPv6 address. Repeat this section for the iLO of each HC380 node in the configuration.
Troubleshoot IPv6 connectivity to the iLO of each HC380 node
If you are not able to ping the link-local IPv6 address of an iLO, perform the following steps.
Reply from fe80::d27e:28ff:fecf:5b5b: time=2ms
Reply from fe80::d27e:28ff:fecf:5b5b: time=1ms
Ping statistics for fe80::d27e:28ff:fecf:5b5b:
    Packets: Sent = 4, Received = 2, Lost = 2 (50% loss),
Approximate round trip times in milli-seconds:
    Minimum = 1ms, Maximum = 2ms, Average = 1ms
C:\Program Files (x86)\VMware\VMware vSphere CLI>
If the ping is successful, then that network switch is most likely configured for IPv6.
Reply from fe80::250:56ff:fe6f:4049: time<1ms
Reply from fe80::250:56ff:fe6f:4049: time<1ms
Ping statistics for fe80::250:56ff:fe6f:4049:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms
C:\Program Files (x86)\VMware\VMware vSphere CLI>
Repeat this section for each HC380 node in the configuration.
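If a ping of a link-local address fails from a Windows management station even though IPv6 is enabled on the switch, the zone (interface) ID may simply be missing, because link-local addresses are only meaningful per interface. A hedged example, where 11 is the interface index reported by the command netsh interface ipv6 show interfaces:
C:\> ping fe80::d27e:28ff:fecf:5b5b%11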
Hewlett Packard Enterprise recommends using bridge aggregations (HPE) or virtual port channels (Cisco) to uplink the switches into the infrastructure. More information can be located in HP 5920 & 5900 Layer 2 - LAN Switching Configuration Guide or Cisco Nexus 5600 Series NX-OS Interfaces Configuration Guide, Release 7.x.
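As an illustration only (not part of the validated HC380 configuration), a dynamic bridge aggregation uplink on the HPE 5900-series switches could be sketched as follows; the aggregation ID and the uplink ports are placeholders:
interface Bridge-Aggregation 10
link-aggregation mode dynamic
quit
interface Ten-GigabitEthernet 1/0/48
port link-aggregation group 10
quit
interface Ten-GigabitEthernet 2/0/48
port link-aggregation group 10
quit
The Bridge-Aggregation interface is then configured as a trunk that permits the uplinked VLANs, following the same port trunk permit vlan syntax used earlier in this appendix.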
Description: Network Switch IRF or VPC Domain ID
Variable: <<net_switch_domain_id>>
Value:
Appendix D: CloudSystem 9.0 Consoles
While this document assumes you are familiar with CloudSystem 9.0, determining which console to access can sometimes become confusing. Refer to this list for clarification and default credentials.
• CS9 Operations Console (admin/unset), IP: Management Appliance DCM VIP
• MarketPlace Portal (consumer/cloud), IP: Enterprise Appliance CAN VIP
• CSA Management Console (admin/cloud), IP: Enterprise Appliance CAN VIP
• Operations Orchestration (administrator/unset), IP: Enterprise Appliance CAN VIP
Appendix E: CloudSystem Network Diagram The CloudSystem introduces several new networks that are required to have a set of VLAN configurations in your top of rack (TOR) switches. This section gives a brief overview, but you should review HPE Helion CloudSystem 9.0 Network Planning Guide available on the Hewlett Packard Enterprise website (http://www.hpe.com/info/CloudSystem/docs) before deploying CloudSystem.
Appendix F: Remote vCenter setup NOTE: A remote vCenter setup is not supported with CloudSystem. The HC380 node includes a built-in Management VM on which VMware vCenter Server is pre-installed. In OneView InstantOn, this is considered a local vCenter setup. With OneView InstantOn, you can deploy the system storage to an external instance of VMware vCenter Server that you provide (meaning, the software is not installed on the HC380 Management VM).
Item descriptions:
• Windows server running OneView for VMware vCenter
• vCenter Server Appliance (vCSA)
• 1 GbE or 10 GbE network
• HC380 cluster (with built-in Management VM)
Appendix G: Management group quorum consideration If you are deploying a two-node system, OneView InstantOn displays the Quorum Setting field on the Settings screen. You must enter an NFS file share as the Quorum Witness for the StoreVirtual management group. Within a management group, managers are storage systems that govern the activity of all the storage systems in the group.
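For illustration, a generic NFSv3 export on an existing Linux file server could provide such a witness share; the server, path, allowed subnet, and export options below are assumptions, and the authoritative share requirements are in the StoreVirtual quorum witness documentation. Add a line such as the following to /etc/exports on the file server:
/exports/hc380-quorum 172.28.0.0/20(rw,sync,no_root_squash)
Then apply and verify the export:
# exportfs -ra
# showmount -e localhost
The share is then entered in the Quorum Setting field, typically in host:/path form, for example 172.28.0.200:/exports/hc380-quorum.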
Appendix H: IP addresses for sample cluster
ESXi management network IP addresses worksheet
The following table shows the required IP addresses and sample values for the ESXi management network. Columns: Addresses, Purpose, Count, Example IP, Example CIDR, Example VLAN ID.
• HC380 Management UI VM: example IP 172.28.0.1, example CIDR 255.255.240.0, addresses W.X.Y.n...
Customer worksheet
The following table can be used by the customer to identify the IP addresses needed for the ESXi management network. Columns: Addresses, Purpose, Count, Customer IP, Customer CIDR, Customer VLAN.
• W.X.Y.n: HC380 Management UI VM
• W.X.Y.n+1: HC380 OneView VM
• W.X.Y.n+2: HC380 Management VM...
vSphere vMotion network IP addresses worksheet
The following table shows the required IP addresses and sample values for the vSphere vMotion network. Columns: Addresses, Purpose, Count, Example IP, Example CIDR, Example VLAN ID.
• Node 1 - Node 16 vSphere vMotion IP addresses*: example IP 172.28.20.1 - 172.28.20.16, example CIDR 255.255.255.0...
Storage network IP addresses worksheet
The following table shows the required IP addresses and sample values for the storage network. Columns: Addresses, Purpose, Count, Example IP, Example CIDR, Example VLAN ID.
• Node 1 - Node 16 storage IP addresses*: example IP 172.28.30.1 - 172.28.30.50, example CIDR 255.255.255.0
* Must be contiguous
Customer worksheet
CloudSystem network IP addresses worksheet
The following table shows the required IP addresses and sample values for the CloudSystem management network. If you are not planning to install CloudSystem, you do not need this worksheet. Columns: Purpose, Count, Example IP, Example CIDR, Example VLAN ID.
• Cloud Management Network
Columns: Customer VLAN, Purpose, Count, Customer IP, Customer CIDR.
• External Network
Warranty and regulatory information Warranty information HPE ProLiant and x86 Servers and Options (http://www.hpe.com/support/ProLiantServers-Warranties) HPE Enterprise Servers (http://www.hpe.com/support/EnterpriseServers-Warranties) HPE Storage Products (http://www.hpe.com/support/Storage-Warranties) HPE Networking Products (http://www.hpe.com/support/Networking-Warranties) Regulatory information Safety and regulatory compliance For important safety, environmental, and regulatory information, see Safety and Compliance Information for Server, Storage, Power, Networking, and Rack Products, available at the Hewlett Packard Enterprise website (http://www.hpe.com/support/Safety-Compliance-EnterpriseProducts).
Local representative information Kazakh: • Russia: • Belarus: • Kazakhstan: Manufacturing date: The manufacturing date is defined by the serial number. CCSYWWZZZZ (serial number format for this product) Valid date formats include: • YWW, where Y indicates the year counting from within each new decade, with 2000 as the starting point;...
Specifications HC380 specifications For environmental, mechanical, and power supply specifications for the server nodes, see HPE ProLiant DL380 Gen9 Server User Guide found at the Hewlett Packard Enterprise website (http://www.hpe.com/support/DL380Gen9/docs). For environmental, mechanical, and power supply specification for the server rack, see the server rack documentation.
Hewlett Packard Enterprise Support Center More Information on Access to Support Materials page (http://www.hpe.com/support/AccessToSupportMaterials). IMPORTANT: Access to some updates might require product entitlement when accessed through the Hewlett Packard Enterprise Support Center. You must have an HP Passport set up with relevant entitlements. Websites •...
Software Depot (http://www.hpe.com/support/softwaredepot) • Customer Self Repair (http://www.hpe.com/support/selfrepair) • Insight Remote Support (http://www.hpe.com/info/insightremotesupport/docs) • Serviceguard Solutions for HP-UX (http://www.hpe.com/info/hpux-serviceguard-docs) • Single Point of Connectivity Knowledge (SPOCK) Storage compatibility matrix (http://www.hpe.com/storage/spock) • Storage white papers and analyst reports (http://www.hpe.com/storage/whitepapers) Remote support Remote support is available with supported devices as part of your warranty or contractual support agreement.
Acronyms and abbreviations
CAN: consumer access network
CIDR: classless inter-domain routing
cloud management network
DCM: data center management
DNS: domain name system
DRS: Distributed Resource Scheduler
HA: high availability
HDD: hard disk drive or hard drive
iLO: Integrated Lights-Out
IRF: Intelligent Resilient Framework
LDAP: Lightweight Directory Access Protocol
LOM: LAN on Motherboard
NTP: network time protocol
SSD: solid-state device
SSO: single sign-on
TMRA: recommended ambient operating temperature
TOR: top of rack
VDI: virtual desktop infrastructure
VLAN: virtual local-area network
VM: Virtual Machine
VSA: VMware vSphere Storage Appliance
Documentation feedback Hewlett Packard Enterprise is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback (mailto:docsfeedback@hpe.com). When submitting your feedback, include the document title, part number, edition, and publication date located on the front cover of the document. For online help content, include the product name, product version, help edition, and publication date located on the legal notices page.