Dell PowerEdge FE600W Hardware Installation And Troubleshooting Manual

Fibre Channel Storage Arrays With Microsoft Windows Server Failover Clusters


Dell|EMC CX3-series Fibre Channel Storage Arrays With Microsoft® Windows Server® Failover Clusters

Hardware Installation and Troubleshooting Guide

www.dell.com | support.dell.com



Summary of Contents for Dell PowerEdge FE600W

  • Page 1: Troubleshooting Guide

    Dell|EMC CX3-series Fibre Channel Storage Arrays With Microsoft® Windows Server® Failover Clusters Hardware Installation and Troubleshooting Guide www.dell.com | support.dell.com...
  • Page 2 Information in this document is subject to change without notice. © 2008 Dell Inc. All rights reserved. Reproduction in any manner whatsoever without the written permission of Dell Inc. is strictly forbidden. Trademarks used in this text: Dell, the DELL logo, PowerEdge, PowerVault, and Dell OpenManage are trademarks of Dell Inc.;...
  • Page 3: Table Of Contents

    Contents: Introduction; Cluster Solution; Cluster Hardware Requirements...
  • Page 4 Optional Storage Features; Updating a Dell|EMC Storage System for Clustering...
  • Page 5: Contents

    A Troubleshooting; B Cluster Data Form; C Zoning Configuration Form...
  • Page 6 Contents...
  • Page 7: Introduction

    Troubleshooting Guide located on the Dell Support website at support.dell.com. For a list of recommended operating systems, hardware components, and driver or firmware versions for your Dell Failover Cluster, see the Dell Cluster Configuration Support Matrix on the Dell High Availability website at www.dell.com/ha.
  • Page 8: Cluster Solution

    Cluster Solution Your cluster supports a minimum of two nodes and a maximum of either eight nodes (for Windows Server 2003) or sixteen nodes (for Windows Server 2008), and provides the following features: • 8-Gbps, 4-Gbps, and 2-Gbps Fibre Channel technology •...
  • Page 9: Cluster Nodes

    It is strongly recommended that you use hardware-based RAID or software-based disk-fault tolerance for the internal drives. NOTE: For more information about supported systems, HBAs and operating system variants, see the Dell Cluster Configuration Support Matrix on the Dell High Availability website at www.dell.com/ha. Introduction...
  • Page 10: Cluster Storage

    Table 1-2. Cluster Storage Requirements. Supported storage systems: one to four supported Dell|EMC storage systems (see Table 1-3 for specific storage system requirements). Cluster nodes: all nodes must be directly attached to a single storage system or attached to one or more storage systems through a SAN.
  • Page 11 (also called a management station) running EMC Navisphere Manager—a centralized storage management application used to configure Dell|EMC storage systems. Using a graphical user interface (GUI), you can select a specific view of your storage arrays, as shown in Table 1-4.
  • Page 12: Supported Cluster Configurations

    LUN. • EMC SAN Copy™ — Moves data between Dell|EMC storage systems without using host CPU cycles or local area network (LAN) bandwidth. For more information about Navisphere Manager, EMC Access Logix™, MirrorView, SnapView, and SAN Copy, see "Installing and Configuring the...
  • Page 13: San-Attached Cluster

    EMC PowerPath provides failover capabilities, multiple path detection, and dynamic load balancing between multiple ports on the same storage processor. However, direct-attached clusters supported by Dell connect to a single port on each storage processor in the storage system. Because of the single port limitation, PowerPath can provide only failover protection, not load balancing, in a direct-attached configuration.
  • Page 14: Other Documents You May Need

    NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com.
  • Page 15 • For more information on deploying your cluster with Windows Server 2008 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide. • The HBA documentation provides installation instructions for the HBAs. • Systems management software documentation describes the features, requirements, installation, and basic operation of the software.
  • Page 16 Introduction...
  • Page 17: Cabling Your Cluster Hardware

    Cabling Your Cluster Hardware NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com. Cabling the Mouse, Keyboard, and Monitor When installing a cluster configuration in a rack, you must include a switch box to connect the mouse, keyboard, and monitor to the nodes.
  • Page 18 Figure 2-1. Power Cabling Example With One Power Supply in the PowerEdge Systems (primary power supplies on one AC power strip [or on one AC PDU, not shown]; redundant power supplies on one AC power strip [or on one AC PDU, not shown]). NOTE: This illustration is intended only to demonstrate the power distribution of the components.
  • Page 19: Cabling Your Cluster For Public And Private Networks

    Table 2-1. NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com.
  • Page 20: Cabling The Public Network

    Table 2-1. Network Connections. Public network: all connections to the client LAN; at least one public network must be configured for Mixed mode for private network failover. Private network: a dedicated connection for sharing cluster health and status information only.
  • Page 21: Cabling The Private Network

    Cabling the Private Network The private network connection to the nodes is provided by a different network adapter in each node. This network is used for intra-cluster communications. Table 2-2 describes three possible private network configurations. Table 2-2. Private Network Hardware Components and Connections (Method / Hardware Components / Connection).
  • Page 22: Cabling The Storage Systems

    Cabling Storage for Your Direct-Attached Cluster A direct-attached cluster configuration consists of redundant Fibre Channel host bus adapter (HBA) ports cabled directly to a Dell|EMC storage system. Direct-attached configurations are self-contained and do not share any physical resources with other server or storage systems outside of the cluster.
  • Page 23 (LC) multimode connectors that attach to the HBA ports in the cluster nodes and the storage processor (SP) ports in the Dell|EMC storage system. These connectors consist of two individual Fibre optic connectors with indexed tabs that must be aligned properly into the HBA ports and SP ports.
  • Page 24 Figure 2-5. Cabling the Cluster Nodes to a CX3-10c Storage System (cluster node 1 and cluster node 2, each with two HBA ports, cabled to SP-A and SP-B on the CX3-10c storage system). Figure 2-6. Cabling the Cluster Nodes to a CX3-20 Storage System (cluster node 1 and cluster node 2, each with two HBA ports).
  • Page 25 (SP-A, CX3-40f storage system.) NOTE: If your cluster is attached to Dell|EMC CX3-10c, CX3-20/c, or CX3-40/c storage systems, you can configure two cluster nodes in a direct-attached configuration. With the CX3-40f or CX3-80 you can configure four cluster nodes, and with the CX3-20f you can configure six cluster nodes in a direct-attached configuration.
  • Page 26 (fourth Fibre Channel port). Cabling Two Clusters to a Dell|EMC Storage System: The four Fibre Channel ports per storage processor on Dell|EMC CX3-40f and CX3-80 storage systems allow you to connect two two-node clusters in a direct-attached configuration. Similarly, the six Fibre Channel ports per storage processor on the Dell|EMC CX3-20f storage system allow you to connect three two-node clusters in a direct-attached configuration.
  • Page 27: Cabling Storage For Your San-Attached Cluster

    3 In the second cluster, connect cluster node 1 to the storage system: Install a cable from cluster node 1 HBA port 0 to SP-A Fibre port 2 (third Fibre Channel port). Install a cable from cluster node 1 HBA port 1 to SP-B Fibre port 2 (third Fibre Channel port).
  • Page 28 Figure 2-8. Two-Node SAN-Attached Cluster (two cluster nodes connected by public and private networks, with redundant Fibre Channel connections through two Fibre Channel switches to the storage system).
  • Page 29 Figure 2-9. Eight-Node SAN-Attached Cluster (public and private networks; cluster nodes (2-8) connected through two Fibre Channel switches to the storage system).
  • Page 30 Dell|EMC storage system. NOTE: Dell|EMC CX3-20f, CX3-40f, and CX3-80 storage systems have more than two fibre-channel ports per SP. You can also connect the additional ports to the redundant fabrics to achieve higher availability.
  • Page 31 Cabling a SAN-Attached Cluster to a Dell|EMC CX3-10c, CX3-20/c, or CX3-40/c Storage System 1 Connect cluster node 1 to the SAN: Connect a cable from HBA port 0 to Fibre Channel switch 0 (sw0). Connect a cable from HBA port 1 to Fibre Channel switch 1 (sw1).
  • Page 32 Figure 2-11. Cabling a SAN-Attached Cluster to the CX3-40c SPE (cluster node 1 and cluster node 2, each with two HBA ports; SP-A and SP-B on the CX3-40c storage system). Cabling a SAN-Attached Cluster to the CX3-40 or CX3-80 Storage System: 1 Connect cluster node 1 to the SAN: Connect a cable from HBA port 0 to Fibre Channel switch 0 (sw0).
  • Page 33 Connect a cable from Fibre Channel switch 0 (sw0) to SP-B Fibre port 3 (fourth Fibre Channel port). Connect a cable from Fibre Channel switch 1 (sw1) to SP-A Fibre port 1 (second Fibre Channel port). Connect a cable from Fibre Channel switch 1 (sw1) to SP-A Fibre port 3 (fourth Fibre Channel port).
  • Page 34 Fibre Channel switches and then connect the Fibre Channel switches to the appropriate storage processors on the processor enclosure. For rules and guidelines for SAN-attached clusters, see the Dell Cluster Configuration Support Matrix on the Dell High Availability website at www.dell.com/ha.
  • Page 35 2 (third Fibre Channel port). NOTE: The Dell|EMC CX3-20f storage system can be cabled in a manner similar to the CX3-40f or CX3-80 storage system. The remaining Fibre Channel ports (4Fibre and 5Fibre) in the CX3-20f storage system can also be connected, depending upon the required level of redundancy.
  • Page 36 LUNs to the proper host systems." on page 53 for more information. Figure 2-13 provides an example of cabling the cluster nodes to four Dell|EMC storage systems. See "Implementing Zoning on a Fibre Channel Switched Fabric" on page 42 for more information. Cabling Your Cluster Hardware...
  • Page 37 Connecting a PowerEdge Cluster to a Tape Library To provide additional backup for your cluster, you can add tape backup devices to your cluster configuration. The Dell PowerVault™ tape libraries may contain an integrated Fibre Channel bridge or Storage Network Controller (SNC) that connects directly to your Dell|EMC Fibre Channel switch.
  • Page 38 Figure 2-14. Cabling a Storage System and a Tape Library (two cluster nodes on a private network, connected through two Fibre Channel switches to the tape library and the storage system). Obtaining More Information: See the storage and tape backup documentation for more information on configuring these components.
  • Page 39: Preparing Your Systems For Clustering

    NOTE: For more information on step 3 to step 7 and step 10 to step 13, see the "Preparing your systems for clustering" section of Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide or Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide located on the Dell Support website at support.dell.com.
  • Page 40 NOTE: You can configure the cluster nodes as Domain Controllers. For more information, see the “Selecting a Domain Model” section of Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide or Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide located on the Dell Support website at support.dell.com.
  • Page 41: Installation Overview

    Installation Overview Each node in your Dell Failover Cluster must be installed with the same release, edition, service pack, and processor architecture of the Windows Server operating system. For example, all nodes in your cluster may be configured with Windows Server 2003 R2, Enterprise x64 Edition.
  • Page 42: Installing The Fibre Channel Hbas

    Placing the adapters on separate buses improves availability and performance. For more information about your system's PCI bus configuration and supported HBAs, see the Dell Cluster Configuration Support Matrix on the Dell High Availability website at www.dell.com/ha. Installing the Fibre Channel HBA Drivers For more information, see the EMC documentation that is included with your HBA kit.
  • Page 43: Using Zoning In San Configurations Containing Multiple Hosts

    Zoning automatically and transparently restricts access so that devices can communicate only with the other devices in their zone. More than one PowerEdge cluster configuration can share Dell|EMC storage system(s) in a switched fabric by using Fibre Channel switch zoning and Access Logix™. By using Fibre Channel switches to implement zoning, you can segment the SAN to isolate heterogeneous servers and storage systems from each other.
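    The worldwide port name (WWPN) zoning described on page 43 can be sketched as a simple data structure. The following is a minimal illustration, assuming a single-initiator zoning scheme in which each zone contains exactly one HBA port plus the storage-processor ports it may reach; all WWPNs, zone names, and port counts are hypothetical placeholders, not values taken from this guide.

```python
# Illustrative single-initiator zoning map (all WWPNs are hypothetical placeholders).
# Each zone pairs exactly one HBA port (initiator) with the storage-processor
# ports (targets) that the node is allowed to reach through the fabric.

HBA_WWPNS = {
    "node1_hba0": "10:00:00:00:c9:aa:aa:00",
    "node1_hba1": "10:00:00:00:c9:aa:aa:01",
    "node2_hba0": "10:00:00:00:c9:bb:bb:00",
    "node2_hba1": "10:00:00:00:c9:bb:bb:01",
}
SP_WWPNS = {
    "sp_a_port0": "50:06:01:60:41:e0:00:01",
    "sp_b_port0": "50:06:01:68:41:e0:00:01",
}

# One zone per HBA port: the only members are that HBA port and its SP targets.
zones = {
    f"zone_{hba}": [wwpn, SP_WWPNS["sp_a_port0"], SP_WWPNS["sp_b_port0"]]
    for hba, wwpn in HBA_WWPNS.items()
}

for name, members in zones.items():
    initiators = [m for m in members if m in HBA_WWPNS.values()]
    assert len(initiators) == 1, f"{name} must contain exactly one HBA port"
    print(name, "->", members)
```

    The same zone names and WWPNs can be recorded on the Zoning Configuration Form in Appendix C.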
  • Page 44 If you are sharing a storage system with multiple clusters or a combination of clustered and nonclustered systems (hosts), you must enable EMC Access Logix and Access Control. Otherwise, you can only have one nonclustered system or one PowerEdge cluster attached to the Dell|EMC storage system. Preparing Your Systems for Clustering...
  • Page 45: Installing And Configuring The Shared Storage System

    Fibre Channel topologies allow multiple clusters and stand-alone systems to share a single storage system. However, if you cannot control access to the shared storage system, you can corrupt your data. To share your Dell|EMC storage system with multiple heterogeneous host systems and restrict access to the shared storage system, you can enable and configure the Access Logix software.
  • Page 46 Access Logix is enabled by configuring the Access Logix option on your storage system. The storage systems are managed through a management station—a local or remote system that communicates with Navisphere Manager and connects to the storage system through an IP address. Using Navisphere Manager, you can secure your storage data by partitioning your storage system arrays into LUNs, assign the LUNs to one or more storage groups, and then restrict access to the LUNs by assigning the storage groups to the appropriate host systems.
  • Page 47: Access Control

    Access Control Access Control is a feature of Access Logix that connects the host system to the storage system. Enabling Access Control prevents all host systems from accessing any data on the storage system until they are given explicit access to a LUN through a storage group.
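    A minimal sketch of the visibility rule that Access Control enforces follows, assuming hypothetical host, storage-group, and LUN names: a host can reach a LUN only if that LUN belongs to a storage group to which the host has been connected.

```python
# Hypothetical model of the Access Logix / Access Control visibility rule.
# All host, group, and LUN names below are placeholders, not values from this guide.

storage_groups = {
    "Cluster1_SG": {"hosts": {"node1", "node2"}, "luns": {0, 1, 2}},
    "Standalone_SG": {"hosts": {"backup-server"}, "luns": {3}},
}

def visible_luns(host: str) -> set[int]:
    """Return the LUNs a host can access: only LUNs in storage groups the host is connected to."""
    luns: set[int] = set()
    for group in storage_groups.values():
        if host in group["hosts"]:
            luns |= group["luns"]
    return luns

print(visible_luns("node1"))          # {0, 1, 2} -- shared cluster disks
print(visible_luns("backup-server"))  # {3}
print(visible_luns("rogue-host"))     # set() -- no access until added to a storage group
```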
  • Page 48 Table 3-3. Storage Group Properties Property Description Unique ID A unique identifier that is automatically assigned to the storage group that cannot be changed. Storage group name The name of the storage group. The default storage group name is formatted as Storage Group n, where n equals the existing number of storage groups plus one.
  • Page 49: Navisphere Manager

    EMC PowerPath PowerPath automatically reroutes Fibre Channel I/O traffic between the host system and a Dell|EMC CX-series storage system to any available path if a primary path fails for any reason. Additionally, PowerPath provides multiple-path load balancing, allowing you to balance the I/O traffic across multiple SP ports.
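    To confirm from a host that PowerPath actually sees redundant paths, you can wrap the PowerPath CLI in a short script. The sketch below assumes the powermt utility is installed and on the PATH and that its output lists path states such as "alive"; verify the exact output format against your PowerPath revision before relying on the parsing.

```python
# Rough sketch: count live paths reported by PowerPath (assumes powermt is installed).
import subprocess

def powerpath_paths() -> str:
    """Run 'powermt display dev=all' and return its raw output."""
    result = subprocess.run(
        ["powermt", "display", "dev=all"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    output = powerpath_paths()
    # Crude sanity check: each LUN should show at least two paths
    # (one per HBA port); adjust the parsing to your output format.
    alive = output.count("alive")
    print(f"paths reported alive: {alive}")
    if alive < 2:
        print("WARNING: fewer than two live paths; check cabling and zoning.")
```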
  • Page 50: Enabling Access Logix And Creating Storage Groups Using Navisphere

    Enabling Access Logix and Creating Storage Groups Using Navisphere 6.x The following subsection provides the required procedures for creating storage groups and connecting your storage systems to the host systems using the Access Logix software. NOTICE: Before enabling Access Control, ensure that no hosts are attempting to access the storage system.
  • Page 51: Configuring The Hard Drives On The Shared Storage System(S)

    10 Click OK. 11 Right-click the icon of your storage system and select Create Storage Group. The Create Storage Group dialog box appears. 12 In the Storage Group Name field, enter a name for the storage group. 13 Click Apply. 14 Add new LUNs to the storage group.
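    The same storage-group operations can be scripted instead of performed through the Navisphere Manager dialogs above, which is convenient when several clusters are being built. The sketch below shells out to the Navisphere CLI; it assumes naviseccli is installed and that the storagegroup subcommands and flags shown match your CLI revision, so treat the exact syntax as an assumption to verify against the EMC CLI documentation. The SP address, group name, host names, and LUN numbers are hypothetical.

```python
# Hedged sketch: create a storage group, add a LUN, and connect hosts via the
# Navisphere CLI. The SP address and all names are hypothetical placeholders;
# check the storagegroup flags against your naviseccli documentation.
import subprocess

SP_ADDRESS = "10.0.0.10"  # hypothetical SP management IP

def naviseccli(*args: str) -> None:
    subprocess.run(["naviseccli", "-h", SP_ADDRESS, *args], check=True)

# Create the storage group for the cluster.
naviseccli("storagegroup", "-create", "-gname", "Cluster1_SG")

# Add array LUN 10 to the group, presenting it to hosts as host LUN 0.
naviseccli("storagegroup", "-addhlu", "-gname", "Cluster1_SG", "-hlu", "0", "-alu", "10")

# Connect both cluster nodes (hosts must already be registered with the array).
for host in ("node1", "node2"):
    naviseccli("storagegroup", "-connecthost", "-host", host, "-gname", "Cluster1_SG")
```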
  • Page 52 Using the Windows Dynamic Disks and Volumes For more information on deploying your cluster with Windows Server 2003 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide located on the Dell Support website at support.dell.com.
  • Page 53 Naming and Formatting Drives on the Shared Storage System When the LUNs have completed the binding process, assign drive letters to the LUNs. Format the LUNs as NTFS drives and assign volume labels from the first cluster node. When completed, the remaining nodes will see the file systems and volume labels.
  • Page 54 Assign drive letters on each of the shared disks, even if the disk displays the drive letter correctly. For more information about the Navisphere Manager software, see your EMC documentation located on the Dell Support website at support.dell.com or the EMC support site located at www.emc.com. Preparing Your Systems for Clustering...
  • Page 55: Optional Storage Features

    Optional Storage Features Your Dell|EMC CX3-series storage array may be configured to provide optional features that can be used in conjunction with your cluster. These features include MirrorView, SnapView, and SAN Copy. MirrorView MirrorView automatically duplicates primary storage system data from a cluster or stand-alone system to a secondary storage system.
  • Page 56: Updating A Dell|Emc Storage System For Clustering

    Updating a Dell|EMC Storage System for Clustering If you are updating an existing Dell|EMC storage system to meet the cluster requirements for the shared storage subsystem, you may need to install additional Fibre Channel disk drives in the shared storage system. The size and number of drives you add depend on the RAID level you want to use and the number of Fibre Channel disk drives currently in your system.
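    As a rough sizing aid for the drives you add, usable capacity depends on the RAID level. The sketch below applies the standard rule of thumb only; it ignores array-specific overhead such as hot spares and the vault drives on the first enclosure.

```python
# Approximate usable capacity of a disk group by RAID level (rule-of-thumb only).
def usable_gb(drive_count: int, drive_size_gb: float, raid_level: str) -> float:
    if raid_level == "RAID 5":    # one drive's worth of capacity goes to parity
        return (drive_count - 1) * drive_size_gb
    if raid_level == "RAID 1/0":  # mirrored pairs: half the raw capacity
        return (drive_count // 2) * drive_size_gb
    if raid_level == "RAID 0":    # striping only, no redundancy
        return drive_count * drive_size_gb
    raise ValueError(f"unhandled RAID level: {raid_level}")

# Example: five 146-GB Fibre Channel drives in RAID 5 yield roughly 584 GB usable.
print(usable_gb(5, 146, "RAID 5"))  # 584
```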
  • Page 57: Troubleshooting

    Troubleshooting This appendix provides troubleshooting information for your cluster configuration. Table A-1 describes general cluster problems you may encounter and the probable causes and solutions for each problem. Table A-1. General Cluster Troubleshooting. Problem: The nodes cannot access the storage... Probable Cause: The storage system is... Corrective Action: Ensure that the cables are connected...
  • Page 58 Table A-1. General Cluster Troubleshooting (continued). Problem: One of the nodes takes a long time to join the cluster. Probable Cause: The node-to-node network has failed due to a cabling or hardware failure. Corrective Action: Check the network cabling. Ensure that the node-to-node interconnection and the public network are connected...
  • Page 59 Table A-1. General Cluster Troubleshooting (continued). Problem: Attempts to connect to a cluster using Cluster Administrator fail. Probable Cause: The Cluster Service has not been started; a cluster has not been formed. Corrective Action: Verify that the Cluster Service is running and that a cluster has been formed. Use the Event Viewer and look for the following events logged by the...
  • Page 60 IP Addresses to Cluster Resources and Components" of Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide or Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide. Probable Cause: The private... Corrective Action: Ensure that all systems are powered on...
  • Page 61 Table A-1. General Cluster Troubleshooting (continued). Problem: Unable to add a node to the cluster. Probable Cause: The new node cannot access the shared disks; the shared disks are... Corrective Action: Ensure that the new cluster node can enumerate the cluster disks using Windows Disk Administration. If the disks do not appear in Disk Administration, check the following:...
  • Page 62 Table A-1. General Cluster Troubleshooting (continued). Problem: Cluster Services does not operate correctly on a cluster running... Probable Cause: The Windows Internet Connection Firewall is enabled, which may conflict... Corrective Action: Perform the following steps: On the Windows desktop, right-click My Computer and click Manage. In the Computer Management...
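    When working through the Cluster Service rows in Table A-1, the first checks can be scripted. The sketch below assumes a Windows node where the Cluster Service is registered under its standard service name, clussvc; it queries the service state with the sc utility and points to the next steps from the table.

```python
# Quick first-pass check for the Cluster Service (assumes Windows and the
# standard service name "clussvc"; run on the affected node).
import subprocess

def cluster_service_state() -> str:
    result = subprocess.run(["sc", "query", "clussvc"], capture_output=True, text=True)
    if "RUNNING" in result.stdout:
        return "running"
    if "STOPPED" in result.stdout:
        return "stopped"
    return "unknown (is this node part of a cluster?)"

if __name__ == "__main__":
    state = cluster_service_state()
    print(f"Cluster Service is {state}")
    if state != "running":
        print("Check the System log and Cluster Service entries in Event Viewer,")
        print("then verify network cabling and shared-disk access as in Table A-1.")
```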
  • Page 63 Cluster Data Form: You can attach the following form in a convenient location near each cluster node or rack to record information about the cluster. Use the form when you call for technical support. Table B-1. Cluster Information (columns: Cluster Information; Cluster Solution). Cluster name and IP address...
  • Page 64 Additional Networks. Table B-3. Storage Array Information (columns: Array Number; Array xPE Type; Array Service Tag or World Wide Name Seed; Number of Attached DAEs).
  • Page 65 Zoning Configuration Form (columns: Node HBA WWPNs or Alias Names; Storage WWPNs or Alias Names; Zone Name; Zone Set for Configuration Name).
  • Page 66 Zoning Configuration Form...
  • Page 67 Index: Access Control: about, 47. Access Logix: about, 45. cable configurations... Dell|EMC CX3-20: cabling a two-node cluster, 23. Dell|EMC CX3-40: cabling a two-node cluster, 23. Dell|EMC CX3-80: cabling to two clusters, 26; cabling the cluster nodes, 23...
  • Page 68 Emulex HBAs: installing and configuring, 42; installing and configuring drivers, 42. HBA drivers: installing and configuring, 42. host bus adapter: configuring the Fibre Channel HBA, 42. mouse: cabling, 17. MSCS: installing and configuring, 56. Navisphere Agent: about, 49. Navisphere Manager: about, 11, 49; hardware view, 11; storage view, 11...
  • Page 69 private network: cabling, 19, 21; hardware components, 21; hardware components and connections, 21. public network: cabling, 19. RAID... storage management software: Access Control, 47; Access Logix, 45; Navisphere Agent, 49; Navisphere Manager, 49; PowerPath, 49. storage system: configuring and managing LUNs, 52; configuring drives on multiple shared storage systems, 53...
  • Page 70 warranty, 14. worldwide port name zoning, 43. zones: implementing on a Fibre Channel switched fabric, 42; in SAN configurations, 43; using worldwide port names, 43.

This manual is also suitable for:

Dell|EMC CX3 series
