Hitachi BladeSymphony 1000
Architecture White Paper

Summary of Contents for Hitachi BladeSymphony 1000

  • Page 1 BladeSymphony 1000® Architecture White Paper
  • Page 2: Table Of Contents

    Summary ............ 51
    For More Information ............ 51
  • Page 3: Introduction

    For organizations interested in reducing the cost, risk, and complexity of IT infrastructure, whether at the edge of the network, the application tier, the database tier, or all three, BladeSymphony 1000 is a system that CIOs can rely on.
  • Page 4 For example: • BladeSymphony 1000 can be deployed at the edge tier — similar to dual-socket blade and rack server offerings from Dell, HP, IBM, and others — but with far greater reliability and scalability than competitive systems.
  • Page 5 (PCI-X and PCI Express), providing flexibility and investment protection. The system is extremely expandable in terms of processor cores, I/O slots, memory, and other components. Data Center Applications With its enterprise-class features, BladeSymphony 1000 is an ideal platform for a wide range of data center scenarios, including: •...
  • Page 6: System Architecture Overview

    Intel Xeon Server Blade and Intel Itanium Server Blade. A 10 RU BladeSymphony 1000 server chassis can accommodate eight server blades of these types. It can also accommodate a mixture of server blades, as well as storage modules. In addition, multiple Intel Itanium Server Blades can be combined to build multiple Symmetric Multi Processor (SMP) configurations.
  • Page 7 Figure 3. Logical components of the BladeSymphony 1000. The following chapters detail the major components of BladeSymphony 1000, as well as management software and Virtage embedded virtualization technology. • "Intel Itanium Server Blade" on page 8 provides details on the Intel Itanium Server Blades and how they can be combined to create SMP systems of up to 16 cores and 256 GB of memory.
  • Page 8: Intel Itanium Server Blade

    Intel Itanium Server Blade The BladeSymphony 1000 can support up to eight blades for a total of up to 16 Itanium CPU sockets, or 32 cores, running Microsoft Windows or Linux. Up to four Intel Itanium Server Blades can be connected via the high-speed backplane to form a high-performance SMP server of up to 16 cores.
  • Page 9 Table 2: Main components of the Intel Itanium Server Blade
      Component               Manufacturer   Quantity     Description
      Processor               Intel          Maximum 2    Intel Itanium
      Node Controller (NDC)   Hitachi                     Node controller; controls each system
                              Hitachi                     Memory controller
      DIMM                                   Maximum 16   DDR2 SDRAM
  • Page 10: Intel Itanium Processor 9000 Series

    The processors deliver mainframe-class reliability, availability, and serviceability features with advanced error detection and correction and containment across all major data pathways and the cache subsystem. They also feature integrated, standards-based error handling across hardware, firmware, and the operating system.
  • Page 11 Intel Cache Safe Technology is an automatic cache recovery capability that allows the processor and server to continue normal operation in case of cache error. It automatically disables cache lines in the event of a cache memory error, providing higher levels of uptime.
  • Page 12: Hitachi Node Controller

    The Hitachi Node Controller controls various kinds of system busses, including the front side bus (FSB), a PCIe link, and the node link. The Hitachi Node Controller is equipped with three node link ports to combine up to four server blades. The server blades connect to each other through the node link, maintain cache coherence collectively, and can be combined to form a ccNUMA type multiprocessor configuration.
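Three node-link ports per Node Controller are exactly enough to give each of four server blades a direct link to every other blade. The sketch below is purely illustrative (the excerpt does not describe the actual link topology or routing); it simply enumerates a fully connected arrangement and confirms that no blade needs more than three ports.

```python
# Illustrative only: four blades wired as a full mesh over three node-link ports each.
from itertools import combinations

def full_mesh_links(num_blades):
    """Enumerate the point-to-point node links of a fully connected blade set."""
    return list(combinations(range(num_blades), 2))

if __name__ == "__main__":
    links = full_mesh_links(4)
    ports_used = {blade: sum(blade in link for link in links) for blade in range(4)}
    print(links)       # [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    print(ports_used)  # {0: 3, 1: 3, 2: 3, 3: 3} -- three node-link ports per blade
```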
  • Page 13: Baseboard Management Controller

    Table 3: Bus throughput from the Hitachi Node Controller. Connection between nodes: 400 MHz FSB = 4.8 GB/sec.; 667 MHz FSB = 5.3 GB/sec. Baseboard Management Controller: The Baseboard Management Controller (BMC) is the main controller for Intelligent Platform Management Interface (IPMI), a common interface to hardware and firmware used to monitor system health and manage the system.
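As a rough sanity check on the style of figure quoted in Table 3: peak throughput is the transfer rate multiplied by the bytes moved per transfer. Assuming, for illustration only, an 8-byte-wide data path (the excerpt does not state the width of the FSB or node-link paths), 667 million transfers per second works out to about 5.3 GB/sec.

```python
# Back-of-the-envelope bandwidth arithmetic. The 8-byte path width is an
# assumption made for illustration; the excerpt does not give the actual width.
def peak_bandwidth_gb_per_s(transfers_per_sec, bytes_per_transfer):
    """Peak throughput = transfers/sec x bytes per transfer, reported in GB/sec."""
    return transfers_per_sec * bytes_per_transfer / 1e9

if __name__ == "__main__":
    print(f"{peak_bandwidth_gb_per_s(667e6, 8):.1f} GB/sec")  # ~5.3 GB/sec
```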
  • Page 14: Smp Capabilities

    Leveraging its extensive mainframe design experience, Hitachi employs a number of advanced design techniques to create a blade-based SMP system, allowing the BladeSymphony 1000 to scale up to an eight-socket, 16-core system with as much as 256 GB of memory. The heart of the design is the Hitachi custom-designed Node Controller, which effectively breaks a large system into smaller, more flexible nodes or server blades in blade format.
  • Page 15 Figure 6. Hitachi Node Controller connects multiple server blades. By dividing the SMP system across several server blades, the memory bus contention problem is solved by virtue of the distributed design. A processor's access to its on-board memory incurs no penalty.
  • Page 16 SMP Configuration Options: BladeSymphony 1000 supports two-socket (four-core) Intel Itanium Server Blades that can be scaled to offer up to two 16-core servers in a single chassis, eight four-core servers, or a mixture of SMP and single-module systems, thus reducing footprint and power consumption while increasing utilization and flexibility.
  • Page 17 Figure 9. Examples of interleaving: local memory with no interleaving compared with 4-node interleaving across interleave boundaries.
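The two policies in Figure 9 can be sketched in a few lines. This is a conceptual model only: the cache-line-sized interleave unit and the per-blade memory capacity below are assumptions made for illustration, not values taken from the white paper. Without interleaving, each node owns one contiguous region (local memory); with 4-node interleaving, consecutive units rotate across the nodes.

```python
# Conceptual sketch of the Figure 9 policies; the interleave unit (128 bytes)
# and per-blade capacity (64 GB) are illustrative assumptions, not documented values.
INTERLEAVE_UNIT = 128      # bytes per interleave unit (assumed)
NODES = 4                  # 4-node SMP partition built from four server blades
NODE_MEM = 64 << 30        # assumed 64 GB per blade, 256 GB total

def home_node_local(addr):
    """No interleave: each node owns one contiguous slice of the address space."""
    return addr // NODE_MEM

def home_node_interleaved(addr):
    """4-node interleave: consecutive units rotate round-robin across the nodes."""
    return (addr // INTERLEAVE_UNIT) % NODES

if __name__ == "__main__":
    for addr in (0x000, 0x080, 0x100, 0x180, 0x200):
        print(f"addr {addr:#05x}: local -> node {home_node_local(addr)}, "
              f"interleaved -> node {home_node_interleaved(addr)}")
```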
  • Page 18: Intel Itanium I/O Expansion Module

    When one of the Intel Itanium processors needs to access memory, the requested address is broadcast by the Hitachi Node Controller. The other Node Controllers that are part of that partition (SMP) listen for (snoop) those broadcasts. The Node Controller keeps track of the memory addresses currently cached in each processor’s on-chip caches by...
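The snoop traffic described in this excerpt can be pictured with a toy model: each Node Controller records which lines its local processors have cached and answers address broadcasts from the other nodes in the same SMP partition. The class and function names below are invented for the illustration and do not correspond to the actual hardware design.

```python
# Toy model of snoop bookkeeping; illustrative only, all names are invented.
class NodeController:
    def __init__(self, node_id):
        self.node_id = node_id
        self.cached_lines = set()  # line addresses held in this node's CPU caches

    def record_fill(self, line_addr):
        """Remember that a local processor brought this line into its cache."""
        self.cached_lines.add(line_addr)

    def snoop(self, line_addr):
        """Answer a broadcast probe: is this line cached on this node?"""
        return line_addr in self.cached_lines

def broadcast_read(requester, peers, line_addr):
    """Requester broadcasts the address; peer nodes in the partition snoop it."""
    return [p.node_id for p in peers if p is not requester and p.snoop(line_addr)]

if __name__ == "__main__":
    partition = [NodeController(i) for i in range(4)]  # four-blade SMP partition
    partition[2].record_fill(0x1000)
    print(broadcast_read(partition[0], partition, 0x1000))  # -> [2]
```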
  • Page 19: Server Blade

    Figure 12. Intel Itanium I/O Expansion Module in type B chassis provides up to eight PCI slots per server blade.
  • Page 20: Intel Xeon Server Blade

    Chapter 4: Intel Xeon Server Blade. The eight-slot BladeSymphony 1000 can accommodate a total of up to eight Dual-Socket, Dual-Core or Quad-Core Intel Xeon Server Blades for up to 64 cores per system. Each Intel Xeon Server Blade supports up to four PCI slots, and provides the option of adding Fibre Channel or SCSI storage. Two on-board gigabit Ethernet ports are also provided, along with IP KVM for remote access, virtual media support, and front-side VGA and USB ports for direct access to the server blade.
  • Page 21: Intel Xeon 5200 Dual Core Processors

    Finally, I/OAT optimizes the TCP/IP protocol stack to take advantage of the high bandwidth of modern Intel processors, thus diminishing the computation load on the processor.
  • Page 22: Intel Xeon 5400 Quad Core Processors

    FBDIMM technology offers better RAS by extending the currently available ECC to include protection for commands and address data. Additionally, FBDIMM technology automatically retries when an error is detected, allowing for uninterrupted operation in case of transient errors.
  • Page 23 This function is enabled to prevent system downtime caused by a memory fault. BladeSymphony 1000 supports the online spare memory function in the ten patterns of memory configurations listed in Table 5. The shaded sections represent spare banks. Online spare memory excludes the use of the memory mirroring function.
  • Page 24 Accordingly, only half of the total capacity of the memory installed is displayed both in the memory test screen shown when the system is booted and in the total memory capacity shown when the system is running.
  • Page 25: On-Module Storage

    Intel Xeon Server Blades support up to four internal 2.5-inch SAS hard drives. The SAS architecture, with its SCSI command set, advanced command queuing, and verification/error correction, is ideal for business-critical applications running on BladeSymphony 1000 systems. Traditional SCSI devices share a common bus. At higher signaling rates, parallel SCSI introduces clock skew and signal degradation.
  • Page 26: I/O Sub System

    Hitachi engineers go to great lengths to design systems that provide high I/O throughput. BladeSymphony 1000 PCI I/O Modules deliver up to 160 Gb/sec. throughput by providing a total of up to 16 PCI slots (8 slots per I/O module). I/O modules accommodate industry-standard PCIe or PCI-X cards, supporting current and future technologies as well as helping to preserve investments in existing PCI cards.
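The headline number is simple arithmetic: 160 Gb/sec. spread over two I/O modules of eight slots each averages 10 Gb/sec. per slot. The snippet below just performs that division; it says nothing about the speed of any individual PCIe or PCI-X card.

```python
# Average per-slot share of the quoted aggregate I/O throughput.
TOTAL_IO_GBPS = 160        # aggregate figure quoted for the PCI I/O Modules
SLOTS = 2 * 8              # two I/O modules, eight slots each

print(f"{SLOTS} slots -> {TOTAL_IO_GBPS / SLOTS:.0f} Gb/sec per slot on average")
```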
  • Page 27 PCIe I/O Module Combo Card: A PCIe I/O Module Combo Card is available for the BladeSymphony 1000; it can be installed in the PCIe I/O Module and provides additional FC and gigabit Ethernet configurations. The block diagram is shown in Figure 17.
  • Page 28 Figure 19. Back view of Embedded Fibre Channel Switch Module with detail of the Fibre Channel switch. Callouts: (4) LAN LED Link speed (green), (5) LAN LED Link status (orange), (6) SFP port/SFP module.
  • Page 29 Figure 21. Another benefit is reduced latency on the data path. This dramatically reduces complexity, administration, and points of failure in FC environments. It also reduces the effort to install and/or reconfigure the storage infrastructure.
  • Page 30 FL_port, F_port and E_port, with the function (U_port) to self-detect port type. Switch expandability: full-fabric architecture configured by up to 239 switches. Interoperability: SilkWorm II, SilkWorm Express, and SilkWorm 2000 families. Performance: 4.250 Gb/sec. (full-duplex).
  • Page 31 Figure 22. FC-HBA + Gigabit Ethernet Combo Card block diagram. The card includes the following components: • One Intel PCIe to PCI-X bridge chip • One Intel Gigabit LAN Controller • One Hitachi FC Controller FC HBA (1 port)
  • Page 32 Management Software: Developed exclusively for BladeSymphony 1000, the BladeSymphony management software manages all of the hardware components of BladeSymphony 1000 in a unified manner, including the Embedded Fibre Channel Switch Module. In addition, Brocade management software is supported, allowing the Embedded Fibre Channel Switch Module to be managed using existing SAN management software.
  • Page 33: Embedded Gigabit Ethernet Switch

    The switch provides 12 (single) or 24 (redundant) gigabit Ethernet ports for connecting BladeSymphony 1000 Server Blades to other networked resources within the corporate networking structure. Eight of the ports connect through the backplane to server blades, and the remaining four ports are used to connect to external networks, as illustrated in Figure 23.
  • Page 34: Scsi Hard Drive Modules

    Interface extending MIB (RFC1573, RFC2233 compliant). SCSI Hard Drive Modules: The BladeSymphony 1000 supports two types of storage modules containing 73 or 146 GB 15K RPM Ultra320 SCSI drives in the type B chassis only. The 3x HDD Module can have up to three drives installed.
  • Page 35 #13 #12 #11 #10 #9 #8 #7 #6 #5 #4 #3 #2 #1 #0 I/O module #0 (type 1) I/O module #1 (type 2) SCSI or RAID Card Figure 25. Connection configuration for HDD Modules www.hitachi.com BladeSymphony 1000 Architecture White Paper...
  • Page 36: Chassis, Power, And Cooling

    Chapter 7 Chassis, Power, and Cooling The BladeSymphony 1000 chassis houses all of the modules previously discussed, as well as a passive backplane, Power Supply Modules, Cooling Fan Modules, and the Switch & Management Modules. The chassis and backplane provide a number of redundancy features including a one-to-one relationship between server blades and I/O modules, as well as duplicate paths to I/O and switches.
  • Page 37: Module Connections

    Up to four Power Modules are installable in a chassis. They are installed redundantly, and support hot swapping. The service processor (SVP) checks the power capacity when it starts up. If SVP detects redundant power capacity, it boots the system in the normal way. If SVP cannot detect...
  • Page 38: Redundant Cooling Fan Modules

    • Indicate the faulty location with LEDs • Built-in fuse. Figure 27. Top view and cooling fan module numbers (cooling fans 0 through 3, server blade slot numbers, and PCI slot numbers).
  • Page 39: Reliability And Serviceability Features

    The BladeSymphony 1000 is designed with features to help ensure the system does not crash due to a failure and to minimize the effects of a failure. These features are listed in Table 11.
  • Page 40: Serviceability Features

    The Switch & Management Module houses the gigabit Ethernet switch, to which the gigabit Ethernet port of each server blade is connected over the backplane. When two Switch & Management Modules are installed, each switch operates independently.
  • Page 41 Pwrctrl: Not supported. SVP manages the entire BladeSymphony 1000 device. It also provides a user interface for management via the SVP console. SVP provides the following functions: • Module configuration management (module type, installation location, etc.) within a server chassis •...
  • Page 42 • Management interface (SVP console (Telnet/CLI, RS232C/CLI), SNMP) • Assist function (E-mailing to the maintenance center) BladeSymphony 1000 implements the functions of the SVP card through software emulation by BMC and SVP over fast Ethernet and I2C connections, as shown in Figure 29.
  • Page 43 • Firmware updates (Intel Itanium Server Blade only), updating SAL, BMC, and SVP. Console Functions: BladeSymphony 1000 supports the following three types of consoles: • OS console for operating the OS and system firmware (only for the Itanium Blade) •...
  • Page 44 In the Intel Xeon Server Blade, a special KVM connector can be connected to the KVM port on the front of each server blade to connect the monitor, keyboard, and mouse.
  • Page 45: Management Software

    BladeSymphony Management Suite: BladeSymphony 1000 can be configured to operate across multiple chassis and racks, and this extended system can be managed centrally with BladeSymphony Management Suite software (shown in Figure 30). BladeSymphony Management Suite allows the various system components to be managed through a unified dashboard.
  • Page 46: Operations Management

    (node). However, Microsoft Cluster Server's cluster administrator can manage only clusters in the same domain, while the BladeSymphony 1000 enables centralized management of clusters from a remote location, regardless of domain. The following operations for cluster management are supported: –...
  • Page 47: Remote Management

    Rack Management In the BladeSymphony 1000, networks, server blades, and storage devices are installed as modules in a single rack. Rack Management provides graphical displays of the information about the devices, such as layout, amount of available free space, and error locations. It can also display detailed information about each device (such as type, IP address, and size) and alert information in the event of failure.
  • Page 48: Virtage

    Chapter 9 Virtage Virtage is a key technical differentiator for BladeSymphony 1000. It brings mainframe-class virtualization to blade computing. Leveraging Hitachi’s decades of development work on mainframe virtualization technology, Virtage delivers high-performance, extremely reliable, and transparent virtualization for Dual-Core Intel Itanium and Quad-Core Intel Xeon processor-based server blades.
  • Page 49: High I/O Performance

    The hardware assist feature simply modifies the memory addresses for the I/O requests. Also, because BladeSymphony 1000 can be configured with physical PCI slots, I/O can be assigned by the slot to any given partition. Therefore, any partition can be assigned any number of slots, and each partition can use any standard PCI interface cards.
  • Page 50: Fibre Channel Virtualization

    Fibre Channel Virtualization: Hitachi also offers Fibre Channel I/O virtualization for Virtage. This allows multiple logical partitions to access a storage device through a single Fibre Channel card, allowing fewer physical connections between server and storage and increasing the utilization rates of the storage connections. This is exclusive to the 4 Gb Hitachi FC card.
  • Page 51: Summary

    Virtage provides an alternative to third-party software solutions, enabling companies to decrease overhead costs while increasing manageability and performance. This powerful mix of flexibility, integration, and scalability makes BladeSymphony 1000 effective for any enterprise, but particularly for those running large custom applications or running high-growth applications.
  • Page 52 Xeon are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Linux is a registered trademark or trademark of Linus Torvalds in the United States and other countries. Windows is the registered trademark of Microsoft Corporation in the United States and other countries. Hitachi is a registered trademark of Hitachi, Ltd. and/or its affiliates. Blad-...
