Dell SC5020 Setting Up Your System
Setting Up Your Dell SC5020 Storage System
1 Before You Begin
Warning! Before you set up and operate your Dell storage system, review the safety instructions that came with your storage system.

Unpack Storage Center Equipment
A Dell SC5020 storage system includes:
– Documentation
– Storage system
– Front bezel
– Rack rails
– Power cables (2)
– USB cables (2)

Develop a Configuration Plan
Before installing the storage hardware, develop a configuration plan where you can record host server information, switch information, and network information.

Record System Information
– System management IPv4 address for Storage Center
– IPv4 address of the MGMT port on each storage controller
– Domain name
– DNS server address
– Additional IPv4 addresses if the storage system has iSCSI I/O ports
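The checklist above can be captured in a small record before racking the hardware. A minimal sketch — the field names and addresses are illustrative placeholders, not part of any Dell tool:

```python
from dataclasses import dataclass, field

@dataclass
class ConfigurationPlan:
    """Pre-installation record for an SC5020 deployment (illustrative fields)."""
    storage_center_mgmt_ip: str        # system management IPv4 address
    controller_mgmt_ips: list          # MGMT port IPv4 address per storage controller
    domain_name: str
    dns_server: str
    iscsi_ips: list = field(default_factory=list)  # only if iSCSI I/O ports exist

# Example values (placeholders from the documentation-reserved 192.0.2.0/24 range).
plan = ConfigurationPlan(
    storage_center_mgmt_ip="192.0.2.10",
    controller_mgmt_ips=["192.0.2.11", "192.0.2.12"],
    domain_name="example.local",
    dns_server="192.0.2.53",
)
```

Filling in such a record first means every address the setup wizard asks for is already decided.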
Consider Plans for Multipath/Failover
Redundancy is provided by fault domains, which allow alternate paths if a path fails. Fault
domains are determined by the number of independent Fibre Channel fabrics or iSCSI networks.
Each switch carries a separate fault domain. If a port fails, any port within the same fault domain
takes over for the failed port. Dell recommends using multipathing, so that volumes are mapped
to ports in more than one fault domain.
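The failover behavior described above can be modeled in a few lines. This is a hypothetical sketch, not Dell software: ports are grouped by fault domain, and mapping volumes to ports in both domains keeps an active path when any single port — or a whole switch — fails:

```python
# Hypothetical model of fault-domain failover (illustrative port names).
# Each fault domain is an independent set of ports; if a port fails,
# another port in the same domain takes over for it.
fault_domains = {
    1: ["top-P1", "top-P3", "bottom-P1", "bottom-P3"],  # via switch 1
    2: ["top-P2", "top-P4", "bottom-P2", "bottom-P4"],  # via switch 2
}

def surviving_paths(failed_ports):
    """Return, per fault domain, the ports still available after failures."""
    return {dom: [p for p in ports if p not in failed_ports]
            for dom, ports in fault_domains.items()}

# Losing one port still leaves three paths in its own fault domain.
paths = surviving_paths({"top-P1"})
assert paths[1] == ["top-P3", "bottom-P1", "bottom-P3"]
# Even losing all of fault domain 1 leaves domain 2 fully intact.
assert surviving_paths(set(fault_domains[1]))[2] == fault_domains[2]
```

This is why Dell recommends multipathing across more than one fault domain: no single port, cable, or switch is a single point of failure.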
More Information
For operating system, host bus adapter (HBA), and switch requirements, refer to the Dell Storage Compatibility Matrix on the Dell TechCenter at http://en.community.dell.com/techcenter/storage.
4 Prepare the Host Servers
Refer to the Dell Storage Compatibility Matrix for a list of supported HBAs or iSCSI network adapters.
Windows and Linux Hosts
Install the HBAs or network adapters, install the drivers, and make sure that the latest supported BIOS is
installed.
Fibre Channel
1. Install the Fibre Channel HBAs in the host servers.
2. Install supported HBA drivers and make sure that HBAs have the latest supported firmware.
3. Use the Fibre Channel cabling diagrams to cable host servers to switches. Connecting host servers
directly to the storage system without using Fibre Channel switches is not supported.
iSCSI
1. Install the iSCSI HBAs or network adapters dedicated for iSCSI traffic in the host servers.
2. Install supported HBA drivers and make sure that HBAs have the latest supported firmware.
3. Use the iSCSI cabling diagrams to cable the host servers to switches. Connecting host servers directly
to the storage system without using Ethernet switches is not supported.
4. Assign IP addresses to each iSCSI port to match the subnets for each fault domain.
CAUTION:
Make sure to assign the correct IP addresses to the HBAs or network adapters. Assigning IPs
to the wrong ports can cause connectivity issues.
NOTE:
If using jumbo frames, enable and configure jumbo frames on all devices in the data path.
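Step 4 above — one subnet per fault domain — can be sanity-checked with Python's standard `ipaddress` module before cabling. The subnets and port addresses below are placeholders; substitute the values from your configuration plan:

```python
import ipaddress

# Placeholder subnets: one per fault domain (replace with your plan's values).
fault_domain_subnets = {
    1: ipaddress.ip_network("10.10.1.0/24"),
    2: ipaddress.ip_network("10.10.2.0/24"),
}

# Candidate IP assignments for the iSCSI ports on one host:
# (port name, fault domain) -> address.
port_assignments = {
    ("iscsi-port-1", 1): ipaddress.ip_address("10.10.1.21"),
    ("iscsi-port-2", 2): ipaddress.ip_address("10.10.2.21"),
}

# Verify every port's address falls inside its fault domain's subnet,
# catching the mis-assignment the CAUTION above warns about.
for (port, domain), addr in port_assignments.items():
    assert addr in fault_domain_subnets[domain], (
        f"{port}: {addr} is not in fault domain {domain}")
print("all iSCSI port addresses match their fault-domain subnets")
```

Running such a check against the configuration plan catches swapped addresses before they show up as connectivity issues.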
SAS
1. Install the SAS HBAs in the host servers.
2. Install supported HBA drivers and make sure that HBAs have the latest supported firmware.
3. Use the SAS cabling diagram to cable the host servers directly to the storage controllers.
VMware ESXi Hosts
Install the HBAs or network adapters and make sure that the latest supported BIOS is installed.
Fibre Channel
1. Install the Fibre Channel HBAs in the ESXi hosts.
2. Use the Fibre Channel cabling diagrams to cable ESXi hosts to switches. Connecting ESXi hosts directly
to the storage system without using Fibre Channel switches is not supported.
iSCSI
1. Install the iSCSI HBAs or network adapters dedicated for iSCSI traffic in the ESXi hosts.
2. If using network adapters, create a VMkernel port for each adapter.
3. Assign IP addresses for each adapter port to match the subnets for each fault domain.
CAUTION:
Make sure to assign the correct IP addresses to the HBAs or network adapters. Assigning IPs
to the wrong ports can cause connectivity issues.
NOTE:
If using jumbo frames, enable and configure jumbo frames on all devices in the data path: adapter
ports, switches, and storage system.
4. If using network adapters, configure Network Port Binding to add the VMkernel ports to the iSCSI
software initiator.
5. Use the iSCSI cabling diagrams to cable the ESXi hosts to switches. Connecting ESXi hosts directly to
the storage system without using Ethernet switches is not supported.
SAS
1. Install the SAS HBAs in the ESXi hosts.
2. Install the updated driver for 12 Gb SAS HBAs on the ESXi hosts. For more information, contact Dell
Technical Support.
3. Use the SAS cabling diagram to cable the ESXi hosts directly to the storage controllers.
NOTE:
Configure access to Storage Center one ESXi host at a time.
2 Mount the Chassis and Optional Enclosures
Warning!
The chassis is heavy. Do not attempt to lift
the chassis without assistance.
Use the racking instructions included with your package to mount the chassis.
Mount the storage system chassis and expansion enclosures in a manner that allows
for expansion in the rack and prevents the rack from becoming top-heavy. Secure the
storage system chassis to the rack using the mounting screws that are located behind
the latches on each chassis ear. Dell recommends mounting the storage system chassis
in the bottom of the rack.
5 Cable the Host Servers to the Storage System
The SC5020 storage system supports Fibre Channel, iSCSI, or SAS protocols to connect the Storage Center to host servers. Fault domains provide fault tolerance at the storage controller level. If you are using Fibre Channel,
incorporate your switch zoning strategy with the fault domains. Dell recommends using redundant cabling to avoid a single point of failure.
1. Identify the protocol being used to connect the host servers to the disk array.
2. Refer to the diagram below that corresponds to the proper protocol. These cabling guidelines ensure the configuration has redundancy and failover capability. For more information, contact Dell Technical Support.
Fibre Channel and iSCSI HBA Cabling
If the storage system includes Fibre Channel or iSCSI HBAs, connect the host servers and
storage system to the corresponding Fibre Channel or Ethernet switches.
Fibre Channel and iSCSI 4 Port Configurations
1. Connect each host server to both switches.
– Connections shown in orange belong to fault domain 1.
– Connections shown in blue belong to fault domain 2.
2. Connect fault domain 1 (in orange) to switch 1.
– Top storage controller: port 1 to switch 1
– Top storage controller: port 3 to switch 1
– Bottom storage controller: port 1 to switch 1
– Bottom storage controller: port 3 to switch 1
3. Connect fault domain 2 (in blue) to switch 2.
– Top storage controller: port 2 to switch 2
– Top storage controller: port 4 to switch 2
– Bottom storage controller: port 2 to switch 2
– Bottom storage controller: port 4 to switch 2
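The four-port pattern above follows one simple rule — odd-numbered ports go to switch 1 (fault domain 1), even-numbered ports to switch 2 (fault domain 2), on both controllers — which can be expressed as a quick cross-check:

```python
# Generate the 4-port cabling map: odd ports -> switch 1 (fault domain 1),
# even ports -> switch 2 (fault domain 2), on both storage controllers.
cabling = {
    (controller, port): (1 if port % 2 else 2)
    for controller in ("top", "bottom")
    for port in (1, 2, 3, 4)
}

# Matches the steps above: ports 1 and 3 land on switch 1, ports 2 and 4 on switch 2.
assert cabling[("top", 1)] == 1 and cabling[("top", 3)] == 1
assert cabling[("bottom", 2)] == 2 and cabling[("bottom", 4)] == 2
```

Each switch thus carries exactly four connections (two per controller), so either switch can fail without severing a controller from its hosts.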
Fibre Channel and iSCSI 2 Port Configurations
1. Connect each host server to both switches.
– Connections shown in orange belong to fault domain 1.
– Connections shown in blue belong to fault domain 2.
2. Connect fault domain 1 (in orange) to switch 1.
– Top storage controller: port 1 to switch 1
– Bottom storage controller: port 1 to switch 1
3. Connect fault domain 2 (in blue) to switch 2.
– Top storage controller: port 2 to switch 2
– Bottom storage controller: port 2 to switch 2
3 Install the Bezel
1. Hold the bezel with the logo upright.
2. Hook the right end of the bezel into the right side of the chassis.
3. Swing the left end of the bezel toward the left side of the chassis.
4. Press the bezel into place until the release latch closes.
5. Use the key to lock the front bezel.
iSCSI Mezzanine Card Cabling
If the storage system includes an iSCSI mezzanine card, connect the host servers and storage system to Ethernet switches.

iSCSI 4 Port Mezzanine Card Configuration
1. Connect each host server to both Ethernet switches.
– Connections shown in orange belong to fault domain 1.
– Connections shown in blue belong to fault domain 2.
2. Connect fault domain 1 (in orange) to switch 1.
– Top storage controller: port 1 to switch 1
– Top storage controller: port 3 to switch 1
– Bottom storage controller: port 1 to switch 1
– Bottom storage controller: port 3 to switch 1
3. Connect fault domain 2 (in blue) to switch 2.
– Top storage controller: port 2 to switch 2
– Top storage controller: port 4 to switch 2
– Bottom storage controller: port 2 to switch 2
– Bottom storage controller: port 4 to switch 2

SAS HBA Cabling
If the storage system includes a SAS HBA, directly connect the host servers to the storage system.

SAS 4 Port Configuration
1. Connect fault domain 1 (in orange) to server 1.
– Top storage controller: port 1 to port on server 1
– Bottom storage controller: port 1 to port on server 1
2. Connect fault domain 2 (in blue) to server 2.
– Top storage controller: port 2 to port on server 2
– Bottom storage controller: port 2 to port on server 2
3. Connect fault domain 3 (in gray) to server 3.
– Top storage controller: port 3 to port on server 3
– Bottom storage controller: port 3 to port on server 3
4. Connect fault domain 4 (in red) to server 4.
– Top storage controller: port 4 to port on server 4
– Bottom storage controller: port 4 to port on server 4

The Ethernet management interface of each storage controller must be connected to a management network. To add capacity to your storage system, you can connect up to sixteen SC400 or eight SC420 expansion enclosures to an SC5020 storage system. A maximum of 222 physical disks are supported in an SC5020 storage system.