Infortrend, shall be subject to the latest Standard Warranty Policy available on the Infortrend website: http://www.infortrend.com/global/Support/Warranty Infortrend may from time to time modify, update or upgrade the software, firmware or any accompanying user documentation without any prior notice. Infortrend will provide access to these new software, firmware or documentation releases from certain download sections of our website or through our service partners.
Contact your system vendor or visit the following support sites: EonStor DS Support, ESVA Support, EonNAS Support.
Headquarters: Infortrend Technology, Inc. (Taiwan), 8F, No. 102, Sec. 3, Jhongshan Rd., Jhonghe Dist., New Taipei City 235, Taiwan. Tel: +886-2-2226-0126, Fax: +886-2-2226-0020. Email, Technical Support, Website.
Japan: Infortrend Japan, Inc.
Infortrend, the Infortrend logo, SANWatch, ESVA, EonStorDS, EonNAS, and EonPath are registered trademarks of Infortrend Technology, Inc. Other names prefixed with “IFT” and “ES” are trademarks of Infortrend Technology, Inc. Windows is a registered trademark of Microsoft Corporation.
Safety Precautions Read these instructions carefully before you install, operate, or transport the EonStorDS RAID system and JBODs. Installation and Operation Install the rack cabinet and the associated equipment at a site where the ambient temperature (special room cooling equipment may be required) stays lower than: a.
The use of Infortrend certified components is strongly recommended to ensure compatibility, quality, and normal operation with your Infortrend products. Please contact your distributor for a list of Infortrend certified components (e.g., SFP, SFP+, HBA card, iSCSI cable, FC cable, memory module, etc.).
ESD Precautions Handle the modules by their retention screws, ejector levers, or the module’s metal frame/faceplate only. Avoid touching the PCB boards or connector pins. Use a grounded wrist strap and an anti-static work pad to discharge static electricity when installing or operating the enclosure.
About This Manual This manual introduces the hardware components of the EonStorDS 1000 Series RAID and JBOD systems and describes how to install, monitor, and maintain them. For non-serviceable components, please contact our support sites. Firmware operation: Consult the Firmware User Manual on the CD-ROM. SANWatch software: Consult the SANWatch User Manual on the CD-ROM.
Table of Contents
Legal Information
Contact Information
Copyright Notice
Safety Precautions
About This Manual
Table of Contents
Introduction
Product Overview
Model Naming Conventions
Model Variations
Battery Backup Unit Installation
Supercapacitor Battery & Flash Backup Module Installation
Flash Backup Module Installation
Installing the Supercapacitor Battery
Installing the RAID Controller
System Connection
General Considerations on Making Connections
Host-Side Topologies
Audible Alarms
I2C Bus
Restoring Default System Settings
Restoring Default Settings
System Maintenance
Replacing the Controller Module(s): Single / Dual / Simultaneous Upgrade
Replacing the Controller Host Board
Replacing the Memory Module on RAID Systems
Introduction Product Overview This manual introduces EonStorDS 1000 systems that support 3Gbps, 6Gbps SAS or 3Gbps (SATA-II) and 6Gbps (SATA-III) drive interfaces. The 2U/ 3U enclosure is designed to utilize 2.5” or 3.5” hard drives. Drive capacity can be expanded by attaching expansion hard drive enclosures (JBODs).
Model Variations The product line comprises RAID and JBOD models. RAID systems house the hard drives and control the entire storage system, while JBOD systems connect to a master RAID system and expand storage capacity by adding more hard drives. All systems are compatible with SAS-3 and SAS-6;...
Major Components NOTE Upon receiving your system, check the package contents against the included Unpacking List. If module(s) are missing, contact your system vendor immediately. The Cache Backup Module (CBM) is an optional feature in single controller systems. RAID Controller and Interface Each RAID controller comes with a pre-installed DIMM module.
The Rear Panel Dual Controller Models Dual controller systems are indicated by an “R” in their model number (please refer to Model Naming Conventions). For dual controller models, controller A is located on top and controller B at the bottom. In a dual controller configuration, if one controller fails, the second controller module takes over in a manner that is transparent to application servers.
Single-Controller Models Single-controller models are designated by a “G” or “S” in their model number. The second controller slot is filled with a dummy cage (D). The 2UL-C and 2UL models differ in the additional cooling module at the center-bottom position. Upgrading a Single Controller to a Dual Controller System If the model name of a single controller RAID/JBOD is designated with the letter “S”, it can be upgraded into a dual-controller configuration by adding another controller...
Chassis The RAID chassis is a rugged storage chassis divided into front and rear sections. The chassis is designed to be installed into a rack or cabinet. Front Panel Drive trays (1): Each drive tray is hot-swappable and holds a 3.5-inch hard drive.
Rear Panel: Controller A; Controller B or dummy cage; Power supply + cooling module. Controllers (1) / (2): Each RAID controller module contains a main circuit board and a pre-installed DIMM module. For single controllers, a dummy cage will be placed at the controller (2) position. The host port configurations will vary.
Internal Backplane An integrated backplane separates the front and rear sections of the chassis. This circuit board provides logic level signals and low voltage power paths. Thermal sensors and I2C devices are embedded to detect system temperatures and PSU/cooling module operating status. This board contains no user-serviceable components.
Front Panel Components LED Panel (RAID and JBOD) The LED panel on a JBOD storage expansion system is located on the chassis ear. The LED panel contains a Service LED (1), a power supply status LED (2), a cooling module status LED (3), a temperature sensor status LED (4), a System fault LED (5), a rotary ID switch (JBOD) (6), and a Mute Service button (7).
Drive Tray Bezel (2.5-inch and 3.5-inch) The drive tray is designed to accommodate separately purchased SAS, NL-SAS, or SATA interface hard disk drives. A rotary bezel lock (1) secures the drive tray in the chassis, while a release button (2) can be used when retrieving disk drives from the chassis.
Rear Panel Components Controller Module of RAID Models Designations: Host ports; Battery Backup Unit; SAS expansion port; Restore default button; Restore default LED; Controller status LED; Ethernet management port; Serial port. The controller on the dual controller models also features a Cache Backup Module (CBM), consisting of a Battery Backup Unit (BBU) and Flash Backup Module (FBM).
Controller Module of JBOD Models The expansion JBOD controller features SAS expansion ports (1), SAS expansion port status LEDs (2), controller status LEDs (3), extraction levers and retention screws (4). The expansion controller contains a circuit board within a metal canister, interfaced through hot-swap docking connectors at the back-end.
Cache Backup Module (CBM) The Cache Backup Module (CBM), located inside the controller, consists of a battery backup unit (BBU) (1) and flash backup module (FBM) (2). The CBM can sustain cache memory after a power failure. The use of a CBM is highly recommended in order to safeguard data integrity.
Supercapacitor Battery & Flash Backup Module The supercapacitor battery (1) and flash backup module (2) are located inside the controller and serve functions similar to the Cache Backup Module (CBM) described in the previous section. With the supercapacitor battery, cached data can be kept safe for an extended period of time. NOTE The supercapacitor battery is only partially charged when shipped.
PSU & Cooling Module Each of the two redundant, hot-swappable PSUs has a power socket (1), power switch (2), PSU status LED (3), cooling module (4), retention screw (5), and an extraction handle (6). The cooling modules can operate at three rotation speed settings. Under normal operating conditions, the cooling fans run at low speed.
System Monitoring Features There are a number of monitoring approaches that provide the operating status of individual components. Expansion Enclosure Support Monitoring: A managing RAID system is aware of the status of JBOD components including those of: Expander controller (presence, voltage and thermal readings) ...
JBOD Enclosure Status Monitoring: A RAID system, when connected with expansion JBODs, acquires the component status within other enclosures via a proprietary enclosure monitoring service using the in-band connectivity. No additional management connection is required. I2C bus The detection circuitry and temperature sensors are interfaced through a non-user-serviceable I2C bus.
Hot-swapping The system comes with a number of hot-swappable components that can be exchanged while the system is still online without affecting the operational integrity. These components should only be removed from the system when they are being replaced. The following components can be user-maintained and hot-swappable: ...
Hardware Installation This chapter describes how to install modular components, such as hard drives into the enclosure and CBM into the RAID controller enclosure. NOTE Enclosure installation into a rack or cabinet should occur BEFORE hard drives are installed into the system. Installation Prerequisites Static-free installation environment: The system must be installed in a static-free environment to minimize the possibility of electrostatic discharge (ESD) damage.
Make sure you are aware of the enclosure locations of each plug-in module and interface connector. Cables must be handled with care and must not be bent. To prevent emission interference within a rack system and accidental cable disconnection, the routing paths must be carefully planned.
Unpacking the System Compare the Unpacking List included in the shipping package against the actual package contents to confirm that all required materials have arrived. Box contents For detail content(s), please refer to the unpacking list that came with the system. The accessory items include a serial port cable, screws, Quick Installation Guide, a CD containing the SANWatch management software and its manual and Firmware Operation Manual, and a product utility CD containing the Installation and...
Installing Hard Drives Installation of hard drives should only occur after the enclosure has been rack-mounted! Hard Drive Installation Prerequisites Hard drives are purchased separately; when purchasing hard drives, consider the following factors: Capacity (MB/GB): Use drives with the same capacity. RAID arrays use a “least-common-denominator”...
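To make the “least-common-denominator” point concrete: the smallest member drive determines how much capacity every drive contributes to the array. A minimal sketch, in which the drive sizes, RAID level, and helper function are illustrative assumptions rather than values from this manual:

```python
# Illustration of the "least-common-denominator" capacity behavior:
# every member contributes only as much space as the smallest drive.
def usable_capacity_gb(member_sizes_gb, parity_drives=0):
    """Approximate usable array capacity in GB.

    parity_drives: 0 for RAID 0, 1 for RAID 5, 2 for RAID 6, etc.
    """
    smallest = min(member_sizes_gb)                 # least common denominator
    data_drives = len(member_sizes_gb) - parity_drives
    return smallest * data_drives

# Mixing a 900 GB drive with 1000 GB drives wastes 100 GB on each larger drive.
print(usable_capacity_gb([1000, 1000, 1000, 900], parity_drives=1))  # RAID 5 -> 2700
```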
MUX Board: As shown below, controller A (1) and controller B (2) are connected to the backplane (3). With a MUX board (4) paired to the hard drive (5), data signals can switch between the controller A and controller B signal ports (indicated by the blue arrow / dotted line). Under normal circumstances, the controller B signal port is in standby mode (6).
SAS Interface The SAS interface features a dual-ported connectivity with pins on both sides of its connector that include SAS primary links (1), power link (2) and underneath it, the SAS secondary links (3). The SATA drives have only one port that includes the SATA physical links (4) and the power link (5).
Hard Drive Designation The illustrations below show the system hard drive slot number designations. Please familiarize yourself with the designations to avoid withdrawing the wrong hard drive(s) from the enclosure. 2U systems / 3U systems / 4U systems...
Installing the Hard Drive into Drive Tray Open the bezel by pressing the release button and gently pull out the tray. Place the hard drive into the drive tray, making sure that the interface connector is facing the open side of the drive tray and its label side facing up. If you have a MUX board and you want to install a SAS drive, the MUX board should be removed.
Installing the Hard Drive Tray into the Enclosure Once the hard drives have been installed in the drive trays, install the drive trays into the system. WARNING Each drive slot must be populated with a tray even if it does not contain a hard drive. With a bay empty, air flow ventilation may be disrupted and the system will overheat.
Installing CBM for RAID Models The CBM consists of a battery backup unit (BBU) and flash backup module (FBM). The CBM can sustain cache memory in the event of a power failure or in the extremely unlikely event of both PSUs failing at the same time. The use of a CBM is highly recommended in order to safeguard data integrity.
Flash Backup Module & Battery Backup Unit Please read the following sections on how to install the battery backup module and flash backup module into the controller module. Always place the controller on a clean, static-free surface and hold the controller only by its metal canister and never touch the circuit board or connector pins.
Hold the SSD at a 45-degree angle, align the notch (indicated by the circle), insert the SSD gold fingers into the slot, and lower the other end of the SSD until its edges are clipped on.
Battery Backup Unit Installation 1. With the controller removed, remove the cover by removing the two screws shown in the illustration below. 2. Hold the battery cage at a 45-degree angle and make sure the cage sits flush and meets the screw holes. Secure it using the supplied flathead screws.
3. Secure the other end of the cage with flathead screws from underneath the controller. One on each side. 4. Gently insert the battery backup unit into the cage and secure it using the thumb screw. Reinstall the controller.
Supercapacitor Battery & Flash Backup Module Installation Please read the following sections on how to install the supercapacitor battery & flash backup module into the controller. Always place the controller on a clean, static-free surface and hold the controller only by its metal canister and never touch the circuit board or connector pins.
3. Make sure the screw holes meet while placing the SSD PCB board on the copper stands. Secure the PCB with screws provided. 4. Insert the SSD at a 45 degree angle, make sure the notch meets (indicated by blue circle).
Installing the Supercapacitor Battery 1. With the controller removed, insert the supercapacitor battery at a 45 degree angle, make sure the protrusion is inserted into the slot (indicated by the blue circle / arrow). 2. Secure the supercapacitor battery with a supplied screw at the end (indicated by blue circle).
3. Connect the four-pin molex connector to the SSD PCB board. Reinstall the controller.
Installing the RAID Controller After completing the battery backup unit and flash backup module installation, the RAID controller can be re-inserted into the enclosure: 1. Insert the controller slowly into the module slot. When you feel the contact resistance, use slightly more force and then push both of the ejection levers upwards (indicated by the blue arrows) to secure the controller into chassis.
System Connection This chapter outlines the general configuration rules you should follow when cabling a storage system and introduces basic information about topologies. You can use these topologies or refer to them as a guide for developing your own unique topologies.
• A spare drive should have a minimum capacity that is equivalent to the largest drive that it is expected to replace. If the capacity of the spare is less than the capacity of the drive it is expected to replace, the controller will not proceed with the failed drive rebuild.
Maximum Concurrent Host LUN Connection (“Nexus” in SCSI) The "Max Number of Concurrent Host-LUN Connection" menu option sets the maximum number of concurrent host-LUN connections. The maximum concurrent host LUN connection (nexus in SCSI) determines how controller internal resources are allocated for a given number of concurrent host nexuses.
Maximum Queued I/O Count The "Maximum Queued I/O Count" menu option enables you to configure the maximum number of I/O operations per host channel that can be accepted from servers. The predefined range is from 1 to 1024 I/O operations per host channel, or you can choose the "Auto"...
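A quick way to sanity-check the two firmware parameters above ("Max Number of Concurrent Host-LUN Connection" and "Maximum Queued I/O Count") is to estimate the demand your hosts actually generate. A minimal sketch; the host counts, LUN counts, and queue depths below are hypothetical example values, not values taken from this manual:

```python
# Rough sizing check for the two firmware limits described above.
hosts = 4                 # servers attached to the RAID system (example value)
luns_per_host = 8         # LUNs mapped to each server (example value)
hba_queue_depth = 32      # outstanding I/Os each host can issue per LUN (example value)

nexus_needed = hosts * luns_per_host                 # concurrent host-LUN connections
worst_case_queued_io = hosts * hba_queue_depth       # if all hosts share one channel

print(f"Concurrent host-LUN connections (nexus) needed: {nexus_needed}")
print(f"Worst-case queued I/O on a single host channel: {worst_case_queued_io}")
# Compare these against the configured "Max Number of Concurrent Host-LUN
# Connection" and "Maximum Queued I/O Count" (1 to 1024 per channel, or Auto).
```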
Fibre-Host RAID Connections The Fibre Channel standard allows optical connections. Optical cables can be used over longer distances and have been shown to be more reliable. Due to the demands of high transfer rates, optical cables are preferred for 16/8/4Gbps fiber connectivity. Optical cables are also less susceptible to EMI.
WARNING The SFP transceiver contains a Class 1 laser diode. To ensure continued safety, do not remove any covers or attempt to gain access to the inside of the product. Refer all servicing to qualified personnel. FC port dust plugs Each FC port comes with a dust plug.
Fibre-Host Topologies The Fibre Channel standard supports three (3) separate topologies. They are point-to-point, Fibre Channel Arbitrated Loop (FC-AL), and fabric switch topologies. Point-to-Point: Point-to-point topology is the simplest topology. It is a direct connection between two (2) Fibre Channel devices. ...
Fibre Cabling The following steps should be completed when cabling: Maintain a configuration plan. In addition to cabling topologies and a list of networking components, the plan can also include firmware and software maintenance details. Confirm that any loop in a Fibre Channel cable is 6 inches or longer. Ensure proper airflow and keep cables away from ventilation airflow outlets.
DAS (Direct-Attached) Connection NOTE If a logical drive can be accessed by different servers, file locking, FC switch zoning, port binding, and multipath access control will be necessary in order to avoid access contention. (Topology diagram: servers with HBA 0 / HBA 1 running EonPath.)
With more disk drives over the SAS expansion links, you can create more logical groups of drives. These logical drives can be made available using more host channel IDs or LUN numbers. If a server has multiple data paths to a RAID system, multipath software, e.g., the EonPath driver, is necessary.
Switched Fabric Connection (Dual-Controller) NOTE A logical partition presented through LUN Mapping can be seen by all servers across the SAN. Make sure you have access control such as file locking, switch zoning, port binding, etc., to avoid access contention. (Topology diagram: servers with HBA 0 / HBA 1.)
Data path connection: fault-tolerant paths. Host channel bandwidth: 6400 MB/s. Channel link bypass is provided on external FC switches. Each of the application servers shown in the diagram is equipped with two HBAs with FC links, via two FC switches, to the SFP ports on the individual RAID controllers. You can refer to the ID tags on the host links to see the related logical volume mapping and cable routing paths.
Each SFP port is connected to FC switches and then to host adapters. See the logical associations in the drawing for LUN mapping details. Use enclosure-specific spares to prevent a spare drive from participating in the rebuild of a logical drive on another enclosure. You can refer to the ID tags on the host links to see the related LUN mapping and cable routing paths.
SAS-Host RAID Connections NOTE Please consult your vendor or sales representative for compatible host link cables. One SFF-8088-to-SFF-8088 host link cable is included per controller. You can contact your vendor to purchase additional cables if you need more than the included one.
DAS (Direct-Attached Storage) Connection with Redundant Host Path (Topology diagram: servers with HBA 0 / HBA 1 running EonPath; host links labeled CH0 / CH1 with AID and BID channel IDs.)
(Topology diagram: server with HBA 0 / HBA 1 running EonPath; host links labeled CH0 AID / CH1 AID.) Single-controller models With more hard drives over the SAS expansion links, you can create more logical groups of drives. Make these logical partitions available using more LUN numbers. NOTE EonPath multipath software or Linux Device Mapper is necessary for controlling and optimizing the access to logical drives via multiple data paths.
DAS (Direct-Attached Storage) Connection to Two Servers (Topology diagram: two servers attached to the RAID system; host links labeled CH0 AID / CH1 AID.) NOTE If you would like a LUN (a logical partition) to be accessed by multiple hosts, file locking or multipath access control will be necessary.
iSCSI-Host RAID Connections Ethernet cable requirements: • Ethernet cables are user-supplied. Cat5e shielded STP type network cables or better performance types (important for meeting the requirements imposed by emission standards). • Straight-through Ethernet cables with RJ-45 plugs. • Use of cross-over cables can also be automatically detected and re-routed for a valid connection.
Network & Host Connection Topologies The iSCSI host ports connect to Ethernet network devices and iSCSI initiators that comply with the IETF iSCSI standard (RFC 3720). Network connection of the iSCSI ports is flexible. The use of network connecting devices, subnet, Name Servers, or iSCSI management software can vary from case to case.
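Before configuring initiators, it can help to confirm that each iSCSI host port is reachable on the standard iSCSI TCP port 3260. A minimal sketch; the portal addresses below are hypothetical placeholders, not addresses assigned by the system:

```python
import socket

# Hypothetical iSCSI portal addresses of the RAID system's host ports.
portals = ["192.168.10.10", "192.168.20.10"]

def portal_reachable(ip, port=3260, timeout=3.0):
    """Return True if a TCP connection to the iSCSI portal succeeds."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

for ip in portals:
    state = "reachable" if portal_reachable(ip) else "NOT reachable"
    print(f"iSCSI portal {ip}:3260 is {state}")
```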
High Availability IP SAN with Redundant RAID Controller (Topology diagram: servers running EonPath connected through VLAN 0 and VLAN 1 switches; logical drives LD 0 to LD 3 mapped to channel IDs such as CH0 AID, CH1 AID, CH2 BID.)
configuration using this configuration. For remote replication setup, please refer to “High Availability IP SAN (Remote Replication Enabled)” and “High Availability IP SAN with Port Trunk (Remote Replication Enabled)”. 4 logical drives (each has 4 member drives; for better performance, you can include drives from JBOD). LD0 mapped to CH0 AID and CH0 BID;...
identify alternate paths to the same logical drive. High Availability IP SAN (Recommended Cabling Method for Remote Replication) (Topology diagram: servers running EonPath connected through VLAN 0 and VLAN 1 switches; logical drives LD 0 and LD 1.)
failback and load balance. Use EonPath multipath software so that your operating system can identify alternate paths to the same logical drive. 2 logical drives (each has 8 member drives). More logical drives can be created from drives in JBOD. LD0 mapped to CH0 AID, CH1 BID, CH2 AID and CH3 BID;...
High Availability IP SAN with Port Trunk (Remote Replication Enabled) (Topology diagram: servers running EonPath connected through VLAN 0 and VLAN 1 switches with trunked ports; logical drives LD 0 and LD 1 mapped to Ch0 AID...)
logical drives can be created from drives in JBOD. LD0 mapped to CH0 AID and CH1 BID; the LD has to be assigned to both controllers A and B to enable remote replication. LD1 mapped to CH1 BID and CH0 AID; the LD has to be assigned to both controllers A and B to enable remote replication...
Hybrid Host Connections On hybrid systems that feature two additional iSCSI ports, these ports can be used for remote replication or for host LUN mapping if users wish to do so. Single Hybrid Unit Connected to FC/iSCSI Hosts (Topology diagram: servers with HBA 0 / HBA 1.)
Utilizing Hybrid iSCSI Ports for Data Replication (Topology diagram: four servers, each with HBA 0 / HBA 1 and EonPath, connected to the FC #1, FC #2, iSCSI #1, and iSCSI #2 networks.) The above diagram demonstrates how to utilize the iSCSI host ports for remote data replication.
JBOD Connections A SAS host link cable is included per JBOD. If you need to purchase other cables or if you need other cable(s) of different length, please contact your vendor. The cable features include: 28AWG x 8 pair, 100ohm, black, UL approved, lead-free, 50cm, 120cm, or 170cm cable lengths, and connectors that can be secured to chassis using thumb screws or latching mechanism.
SAS Expansion Links JBOD SAS Expansion Configuration The SAS expansion port connects to expansion JBOD enclosures. For dual-controller systems, each expansion port connects a RAID controller to a corresponding JBOD controller making fault-tolerant links to different SAS domains. The following principles apply to RAID and JBOD connections: •...
Configuration Rules Following are the rules for connecting SAS interfaces across RAID and JBOD enclosures: • Fault-tolerant links in dual-controller combinations: Corresponding to the SAS drives’ dual-ported interface, two physical links are available from each disk drive, routed across the backplane board, each through a SAS expander, and then interfaced through a 4x wide external SAS port.
offering high redundancy. • One expansion link connects JBODs from RAID to the nearest JBOD, and then to the farthest JBOD. Another expansion link connects to the farthest JBOD from the opposite direction and then to the nearest JBOD. • Each expander controller on the SAS JBOD controls a “SAS Domain”...
Dual Controller Expansion Connection • RAID system top SAS exp. -> 1st JBOD top SAS-IN • 1st JBOD top SAS-OUT -> 2nd JBOD top SAS-IN • 1st JBOD bottom SAS-IN port -> 2nd JBOD bottom SAS-OUT • RAID system bottom SAS exp. -> last JBOD bottom SAS-IN (Diagram: 1st JBOD, 2nd JBOD, last JBOD.)
Management Console Connections Serial port (for Telnet access): single controller: DB9 male-to-female serial cable; dual controller: Y-cable. Local area network (SANWatch / telnet console): CAT5e LAN cable. (Diagrams: dual controller management connection; single controller management connection.) Connecting the RAID system to external consoles: •
If the network environment is not running a DHCP server, the default IP address <10.10.1.1> can be used for first-time access.
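A quick reachability check of the management port can save troubleshooting time before opening a telnet or SANWatch session. A minimal sketch; it assumes the management station sits on the same subnet as the default address and that the firmware's telnet service listens on the standard TCP port 23 (confirm against your own configuration):

```python
import socket

MGMT_IP = "10.10.1.1"   # factory default management IP mentioned above
TELNET_PORT = 23        # standard telnet port; assumed, not stated in this manual

try:
    with socket.create_connection((MGMT_IP, TELNET_PORT), timeout=5.0):
        print(f"Management port {MGMT_IP}:{TELNET_PORT} is reachable.")
except OSError as err:
    print(f"Cannot reach {MGMT_IP}:{TELNET_PORT}: {err}")
    print("Check cabling, the management station's subnet, or DHCP-assigned addresses.")
```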
Power Connections Once all hard drives have been properly installed and the I/O ports or management interfaces have been connected, the system can be powered on. Checklist BEFORE powering on the system, please check the following: CBM: Make sure the CBM has been properly installed before powering-on the system.
Connecting Power Cords Use the included cables and connect them to the power sockets (in blue). Power On Procedure Before you power on the RAID system, please power on the expansion JBOD storage systems first if your network configuration has multiple arrays. To power on the system please follow the procedures below.
Power On Status Check As a general rule, once the system has been powered on, there should NOT be LED(s) that light up amber nor should you hear an audible alarm from the system. You should begin verifying system statuses via the following monitoring interfaces: ...
PSU & Cooling Module LEDs: PSU LED: On (green).
Power Off Procedure If you wish to power down the system, please follow these steps: NOTE If you wish to power down the system, please ensure that no time-consuming processes, like “Regenerate Logical Drive Parity” or a “Media Scan,” are taking place.
System Monitoring The EonStorDS series is equipped with a variety of self-monitoring features that help keep system managers aware of system operation statuses. Monitoring Features You may monitor the system through the following features: Firmware: The RAID controller in the system is managed by pre-installed firmware, which is accessible in a terminal program via the serial port.
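For scripted access to that serial console, a terminal emulator is the usual tool, but a short pyserial sketch can also open the port. The device path and the 38400-8-N-1 settings below are assumptions for illustration and must be confirmed against the Firmware User Manual:

```python
import serial  # pyserial; install with: pip install pyserial

# Assumed serial parameters: verify the actual settings in the Firmware User Manual.
console = serial.Serial(
    port="/dev/ttyUSB0",   # hypothetical device path of the DB9/Y-cable adapter
    baudrate=38400,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    timeout=2.0,
)

console.write(b"\r\n")      # wake the firmware terminal
banner = console.read(256)  # read whatever the menu prints back
print(banner.decode(errors="replace"))
console.close()
```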
SANWatch: The SANWatch management software manages the system over a TCP/IP network via the Ethernet management port. For more details, see the SANWatch manual on the CD-ROM. LEDs: LED indicators notify users of system status, events, and failures. LEDs are located on both the front and rear panels of the chassis.
LED Panel LEDs (RAID / JBOD)
1. Service (White): White indicates that the system is being serviced or requires service; OFF indicates that the system is not being serviced and does not require service.
2. Power supply status (Green): Green indicates that the system is powered properly.
4. Temperature sensor status: Amber indicates that the detected temperature has gone over the safety threshold.
5. System fault (Green/Amber): Green indicates that the system is operating normally; Amber indicates that the system has encountered abnormal conditions.
6. Rotary ID switch (JBOD): JBOD ID switch; allows users to set enclosure IDs when connected to JBOD expansion enclosure(s).
Drive Tray LED Two LED indicators are located on the right side of each drive tray. When notified by a drive failure message, you should check the drive tray indicators to find the correct location of the failed drive. Name Color Status Flashing Blue...
Controller LED Controller LED for RAID Models
1. Ctrl Status (Green/Amber): Green indicates that the RAID controller is operating healthily; Amber indicates that a component failure has occurred, or that inappropriate RAID configurations have caused system faults.
... in case of power loss. Blinking amber indicates that cached data is being transferred to the flash module after a power outage; once the transfer is done, all LEDs turn off. This signal is local to each controller. Amber indicates that the detected CPU/board/chassis temperature has exceeded the...
Controller LED for JBOD Models 1. SAS Link (Green): Steady green indicates that all 4 PHYs are validly linked to external devices; blinking green indicates that one of the 4 PHY links has failed; OFF indicates that all 4 PHYs are offline. Green indicates 6Gbps link speed.
iSCSI / Ethernet Management Port LEDs (Type I / Type II) 1. Speed status (Green): Green indicates a 1Gb connection is established; Off indicates a 10/100Mb connection or no connection. 2. Link / activity (Amber): Steady amber indicates that a connection has been established; flashing amber indicates data I/O.
10Gb iSCSI Host Port LEDs (Fibre) LED status (Green): Steady ON indicates a link has been established; flashing indicates an active link; Off indicates a link has not been established. 10Gb iSCSI Host Port LEDs (RJ45): Green...
8Gb Fibre-Host Port LEDs Each controller module houses Fibre Channel host ports. Each of these ports has two LEDs for displaying the operating status. 1. Link (Green): Green indicates an established link; Off means the link is broken. Green indicates an 8Gbps connection.
16Gb Fibre Channel Host Port LEDs Link Status LED: Green indicates a connection is established; flashing green indicates data activity; Off indicates no connection is established. Speed LED: Green indicates a 16Gb connection is established; yellow indicates an 8Gb connection is established; Off indicates a 4Gb or slower connection is established.
6G SAS-Host Port LEDs 1. SAS Link Status (Green): Steady green indicates that all 4 PHYs are validly linked to external devices; blinking indicates that fewer than 4 PHY links are connected (at least one of the 4 PHY links has failed); OFF indicates that all 4 PHY links are offline.
12G SAS-Host Port LEDs 1. SAS Link Status (Green): Steady green indicates that all 4 PHYs are validly linked to external devices; blinking indicates that fewer than 4 PHY links are connected (at least one of the 4 PHY links has failed); OFF indicates that all 4 PHY links are offline.
PSU / Cooling Module LEDs The PSU (Power Supply Unit) contains the LEDs for the PSU and cooling module statuses. When either unit fails, you need to replace the PSU as soon as possible. For details, please refer to Replacing the Power Supply Module.
Alarms and I2C Bus Other monitoring schemes include audible alarms and the I2C bus. Audible Alarms If any of the following components fails, the audible alarm will be triggered: cooling fan modules, PSU modules, the CBM module, hard disk drives, ...
Restoring Default System Settings NOTE Restoring default settings is a last-resort function. All configurations, such as parameters and host LUN mappings, will be erased. You may need to restore default settings in the following cases: When the firmware update procedure requires it. ...
10. Replace Controller A with Controller B (Controller B will be inserted into Controller A’s slot). While leaving the Controller B slot empty and with Controller B in slot A, perform steps 1 to 8 above to restore Controller B to default settings. 11.
System Maintenance WARNING Do not remove a failed component from the system until you have a replacement on hand. If you remove a failed component without immediate replacement, it will disrupt the internal airflow. Qualified engineers who are familiar with the system should be the only ones who make component replacements.
Replacing the Controller Module(s): Single / Dual / Simultaneous Upgrade WARNING Controller firmware MUST be identical for proper functionality. DO NOT mix controller modules from different models. Each controller has a unique ID which is applied to host port names. As a result, you may encounter SAN problems with identical port names on multiple systems.
3. Disconnect all cables that are connected to the controller module. 4. Loosen the screw that secures the controller module’s ejection levers. 5. Push the ejection levers downwards (indicated by the blue arrows). The controller module will automatically ease out of the controller module bay.
7. Reattach all the cables. 8. For single controller models or when replacing both controllers simultaneously, power up the system. Check system message on the LCD screen, SANWatch, or firmware menu-driven utility. When the replacement controller is successfully brought online, the Power On Status LEDs should turn on properly.
Replacing the Controller Host Board Power off the system (if you have a single controller system and it is in operation) and remove the controller from the enclosure. Remove the existing host board from the controller by loosening the three screws. Holding it only by the edges of the PCB, carefully remove the replacement host board from its antistatic package.
Replacing the Memory Module on RAID Systems The RAID controller comes with pre-installed DRAM module(s). You may upgrade it or replace it when the original module malfunctions (shown as the “NVRAM failure” event in SANWatch). WARNING If you are installing only one or replacing just one DRAM module, with the I/O ports pointing at you, always install to the DRAM slot on the right (blue slot).
(Illustrations: DRAM location; removing the module.) 4. Insert the replacement module. Make sure the side clips are in the open positions. Align the DIMM module with the socket and firmly push the DIMM module into the socket. The side clips will close automatically and secure the DIMM module into the socket.
Replacing the CBM for RAID Models Upgradeable / replaceable components are listed below: • Battery Backup Unit (BBU): In the event of a power failure, the BBU can help preserve cached data in the DRAM module for up to 72 hours, or longer with a supercapacitor BBU.
BBU Fault Conditions and Precautions If a BBU leaks, gives off a bad odor, generates heat, becomes discolored or deformed, or in any way appears abnormal during use, recharging or storage, immediately remove it from the system and stop using it. Here are some of the conditions that might trigger BBU fault.
Replacing the Battery Backup Unit (BBU) To replace the BBU, follow these steps: 1. Simply loosen the thumb screw (1). 2. Using the thumb screw as an anchor, gently pull out the BBU (2). 3. Install the replacement BBU module and fasten the thumb screw. NOTE A replacement BBU takes approximately twelve hours to charge to its full capacity.
Replacing the Flash Backup Module (FBM) To replace the flash backup module (FBM), please follow these steps: 1. Remove the controller module (refer to Replacing the Controller Module). 2. Locate the flash backup module 3. Flick the clips holding down the FBM in the direction of the arrow to release it and remove it while holding it on a 45 degree angle.
Replacing Supercapacitor Battery and Flash Backup Module Upgradeable / replaceable component is listed below: • Flash Backup Module (FBM): In the event of a power failure, the combination of supercapacitor + FBM (non-volatile flash storage) can store the data, indefinitely. WARNING Make sure you have the replacement module(s) on-hand before you attempt the replacement procedure.
Supercapacitor Battery Fault Conditions and Precautions If a supercapacitor battery leaks, gives off a bad odor, generates heat, becomes discolored or deformed, or in any way appears abnormal during use, recharging or storage, immediately remove it from the system and stop using it. Here are some of the conditions that might trigger supercapacitor battery fault.
Replacing the Supercapacitor Battery To replace the supercapacitor battery, follow these steps: 1. Remove the controller module (refer to Replacing the Controller Module). 2. Loosen the screw at the end of the supercapacitor battery. 3. Lift the supercapacitor battery from the screw end at a 45-degree angle. 4.
Replacing Flash Backup Module (coupled with SuperCap) To replace the flash backup module, follow these steps: 1. Remove the controller module (refer to Replacing the Controller Module). 2. Loosen the screw at the end of flash backup module and remove the flash backup module.
Replacing the Power Supply Module / Cooling Module The power supply units (PSU) are configured in a redundant configuration with each PSU housed in a robust steel canister. Detecting a Failed PSU If a PSU module fails, the system notifies you through the following indicators: ...
Replacing 2U / 3U Power Supply Unit A failed PSU should be replaced as soon as possible, but only when you have a replacement module in your hand. Contact your vendor for more details (refer to Contact Information). WARNING Although the system can operate with a failed PSU in a system, it is not recommended to run the system with a failed PSU for an extended period of time.
3. To remove the PSU module, pull the extraction handle downwards to disconnect the PSU from the backplane connectors. Once dislodged, gently pull the PSU out of the system. If the system is mounted in a rackmount rack, use another hand to support its weight while removing the module.
Replacing a Hard Drive WARNING Keep a replacement on hand before replacing the hard drive. Do not leave the drive tray open for an extended period of time or the internal airflow will be disrupted. Handle the hard drives with extreme care. Carry them only by the edges and avoid touching their circuit boards and interface connectors.
4. Remove the drive tray. Pull the tray one inch away from the enclosure. Wait for at least 30 seconds for the disk drive to spin down, and then gently withdraw the drive tray from the chassis. 5. Remove four retention screws (two on each side). The screws secure the hard drive to the drive tray.
8. Lock the drive tray. Turn the bezel lock to the vertical orientation (locked position) using a flat-blade screwdriver. Do not push the bezel lock while turning it, otherwise the spring handle will pop out again. NOTE Never leave the bezel lock unlocked; the RAID controller might consider the drive faulty.
RAID Configurations for RAID Models
RAID Levels: 0, 1(0 + 1), 3, 5, 6, 10, 30, 50, 60, and non-RAID disk spanning
Drive channels: All drive channels are pre-configured and cannot be changed
Cache Mode: Write-through, write-back, and adaptive write policy
Cache Memory: Pre-installed DRAM module with ECC, registered;...
Power Supply
Input Voltage: Dual controller model: 100VAC @ 10A / 240VAC @ 5A with PFC (auto-switching); Single controller model: 100VAC @ 10A / 240VAC @ 5A with PFC (auto-switching)
Frequency: 50 to 60Hz
Power rating: 460W
DC Output: 12.0V: 38A (Max.); 5.0VSB: 2A (Max.)
Input Frequency: 50 to 60Hz...
Environment
Humidity: 5 to 95% (non-condensing, operating and non-operating)
Temperature: Operating: a. With Battery Backup Module: 0º to 35ºC; b. Without Battery Backup Module: 0º to 40ºC; Non-operating: -40º to 60ºC
Altitude: Operating: sea level to 12,000 ft; Packaged: sea level to 40,000 ft
Shock (Half-sine): Operating: 5G, half-sine, 11 ms pulse width; Non-operating: 15G, half-sine, 11 ms pulse width...
Slide Rail Kit Installation Guide The table is categorized by model number in alphabetical / numeric order so users can quickly locate the corresponding slide rail kit for their respective enclosure. Slide Rail Kits If you are unable to locate clear instructions on installing your enclosure, please contact Technical Support! Enclosure Installation Prerequisites To ensure proper installation and functionality of the RAID system, please observe the...
Slide Rail Installation Guide Unpacking the System Use the “Unpacking List” to cross check all components have been received. The basic contents include one GUI CD pack, Quick Installation Guide and RAID Enclosure Installation Guide. For details on each slide rail kit contents, please refer to specific kit installation details in this manual.
Rackmount Slide Rail Kits Rack Ear Mount Kit The following list shows all accessories that came with the rack ear mount kit. Kit Contents: mounting bracket assembly (left-side); mounting bracket assembly (right-side); hexagon washer screws #6-32mm; truss head screws M5 x 9.0mm; M5 cage nuts; M5 x 25mm screws; M6 x 25mm screws.
Installation Procedure 1. The installation begins with determining the installation position and the M5 cage nut (5) insertion locations. (Illustration: front and rear rack posts, unit boundaries, and the 3U/4U and 2U M5 cage nut positions; M5 x 9.0mm screws.)
2. Install the fixed rack ear mount to the rear posts and secure it using M5 x 9.0mm truss head screws (4). 3. With another person holding the enclosure at the installation height, place four M5 x 25mm screws (6) at the front of the enclosure and eight #6-32 screws (3), four on each side, to secure the enclosure into the rack.
Slide Rail Kit The following list shows all accessories that came with the slide rail kit. Kit Contents: mounting bracket assembly (left-side); mounting bracket assembly (right-side); inner glides; flathead screws #6-32 L4; truss head screws M5 x 9.0mm; M5 cage nuts; M5 x 25mm screws; M6 x 25mm screws.
1. The installation begins with determining the installation position (front and rear rack positions) and the M5 cage nut (5) insertion locations. (Illustration: front and rear rack posts, unit boundaries, and the 3U/4U and 2U M5 cage nut positions; M5 x 9.0mm screws.)
2. Adjust the length by loosening the four screws on the slide rail. Secure the slide rails to the front and rear posts using truss head screws. Tighten the four screws on the slide rail to fix the length. (Illustration: inner glide rail; M5 x 9.0mm screws.) 3.
Once Mounted Once the enclosure has been mounted, you may refer to the Users Manual that came with your system for further instructions on completing the hardware installation process. The Users Manual will go on to explain details on installation / maintenance of hard drives, controllers, optional modules (Supercapacitor battery, CBM, etc.), cooling modules, power supplies, cable connections, topology configurations, etc.