BlueField-2 DPU

Ordering Part Numbers

The tables below list the ordering part numbers (OPNs) for the available BlueField-2 Ethernet DPUs in two form factors: Half-Height Half-Length (HHHL) DPUs and Full-Height Half-Length (FHHL) DPUs, respectively.
Full-Height Half-Length (FHHL) DPUs

[FHHL OPN table: for each OPN pair (e.g., 900-9D218-... / MBF2H512...), the table lists the series/lifecycle, number of cores, PCIe speed, ports and port speed, secure boot support, 1GbE support, on-board and external power options, eMMC, on-board memory, device ID, and PSID.]
If your target application for this crypto-enabled card will utilize 100Gb/s or higher bandwidth, where a substantial part of the bandwidth will be allocated for IPsec traffic, please refer to the NVIDIA BlueField-2 DPUs Product Release Notes document to learn about a potential bandwidth limitation. See Related Documents section for details on accessing the document.
This manual is intended for the installer and user of these cards. The manual assumes basic familiarity with Ethernet network and architecture specifications.

Technical Support

Customers who purchased NVIDIA products directly from NVIDIA are invited to contact us through the following methods:
• E-mail: Enterprisesupport@nvidia.com ...
NVIDIA® BlueField® DPU operating system (OS) is a reference Linux distribution based on the Ubuntu Server distribution, extended to include the DOCA runtime libraries, the BlueField DPU DOCA runtime stack for Arm, and a Linux kernel that supports various accelerations for storage, networking, and security. As such, customers can seamlessly run any Linux-based applications in the BlueField software environment.
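For instance, the Ubuntu base of the image can be confirmed from the DPU's own console; a quick sketch assuming the default BlueField Ubuntu image:

dpu# cat /etc/os-release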
Document Conventions

When discussing memory sizes, GB and GBytes are used in this document to mean size in gigabytes. The use of Gb or Gbits indicates the size in gigabits. In this document, PCIe is used to mean PCI Express.

Revision History

A list of the changes made to this document is provided in Document Revision...
HPC, and artificial intelligence (AI), from edge to core data centers and clouds, all while reducing the total cost of ownership. The NVIDIA DOCA software development kit (SDK) enables developers to rapidly create applications and services for the BlueField-2 DPU. The DOCA SDK makes it straightforward to leverage DPU hardware accelerators and CPU programmability for better application performance and security.
Operating System

BlueField-2 DPU is shipped with Ubuntu, a commercial Linux distribution, which includes the NVIDIA OFED stack (MLNX_OFED) and is capable of running all customer-based Linux applications seamlessly. BlueField-2 DPU also supports CentOS and has an out-of-band 1GbE management interface. For more information, please refer to the DOCA SDK documentation or the NVIDIA BlueField-2 Software User Manual. ...
Accessories Kit

The accessories kit should be ordered separately. Please refer to the table below and order the kit based on the desired DPU.

DPU OPN | Accessories Kit OPN | Contents
MBF2H516A-CEEOT, MBF2H516A-CENOT, MBF2M516A-CECOT, ... | MBF20-DKIT | 1x USB 2.0 Type-A to Mini USB Type-B cable; 1x 30-pin shrouded connector cable
PCI Express (PCIe): Uses PCIe Gen 4.0 (16GT/s) through an x8/x16 edge connector. Gen 1.1, 2.0, and 3.0 compatible.

Up to 200 Gigabit Ethernet: The BlueField-2 DPU complies with the following IEEE 802.3 standards: 200GbE / 100GbE / 50GbE / 40GbE / 25GbE / 10GbE. IEEE 802.3ck...
BlueField-2 DPU

The BlueField-2 DPU integrates eight 64-bit Armv8 A72 cores interconnected by a coherent mesh network, one DRAM controller, an RDMA intelligent network adapter supporting up to 200Gb/s, an embedded PCIe switch with endpoint and root complex functionality, and up to 16 lanes of PCIe Gen 4.0.
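As a quick sanity check, the core count and CPU model are visible from the DPU's Arm OS with standard Linux tools; a sketch assuming the default Ubuntu image:

dpu# lscpu

On a BlueField-2, the output should report 8 CPUs with a Cortex-A72 model name.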
T10-DIF Signature Handover

BlueField-2 DPU may operate as a co-processor, offloading specific storage tasks from the host, isolating part of the storage media from the host, or enabling abstraction of software-defined storage logic using the BlueField-2 Arm cores. On the storage initiator side, BlueField-2 DPU can provide an efficient solution for hyper-converged systems, enabling the host CPU to focus on computing while the entire storage interface is handled through the Arm cores.
NICs for remote access. The BMC node enables remote power cycling, board environment monitoring, BlueField-2 chip temperature monitoring, board power consumption monitoring, and individual interface resets. The BMC also supports the ability to push a bootstream to BlueField-2. Having a trusted on-board BMC that is fully isolated from the host server ensures the highest security for the DPU boards.
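For context, pushing a bootstream from the host side is typically done over the RShim interface; the following is a minimal sketch assuming the rshim driver is loaded on the host, with a placeholder image name (the BMC-based push uses its own mechanism, documented separately):

host# sudo sh -c 'cat <bfb-image>.bfb > /dev/rshim0/boot'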
host# lspci | grep BlueField

The list of identified devices should include a network controller for every physical (Ethernet) port and a DMA controller for DPU management. Expected output example:

b3:00.0 Ethernet controller: Mellanox Technologies MT42822 BlueField-2 integrated ConnectX-6 Dx network controller (rev 01)
b3:00.1 Ethernet controller: Mellanox Technologies MT42822 BlueField-2 integrated ConnectX-6 Dx network controller (rev 01)
b3:00.2 DMA controller: Mellanox Technologies MT42822 BlueField-2 SoC Management Interface (rev 01)

If an older DOCA software version is installed on your host, make sure to uninstall it before proceeding with the installation of the new version:
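If you are unsure which DOCA packages are present, the removal can be scripted; this is a minimal sketch for an Ubuntu host where DOCA was installed via apt (package names may differ on your system):

host# for pkg in $(dpkg --list | grep doca | awk '{print $2}'); do sudo apt-get remove --purge -y "$pkg"; done
host# sudo apt-get autoremove -y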
WARNING: Your password has expired. You must change your password now and login again!
Changing password for ubuntu.
Current password:
New password:

Upgrade the BlueField DPU's firmware:

dpu# sudo /opt/mellanox/mlnx-fw-updater/mlnx_fw_updater.pl --force-fw-update

Expected output example:

Device #1:
----------
Device Type: BlueField-2
[...]
Versions: Current Available
<Old_FW_ver> <New_FW_ver>
[...]
Done

Perform a DPU reset:

dpu# sudo mst start
dpu# sudo mlxfwreset -d /dev/mst/mt41686_pciconf0 --sync 1 -y reset

Verify that the BFB has been installed and the firmware has been upgraded successfully by accessing the DPU again. SSH to the BlueField DPU from the host:

ssh ubuntu@<oob-ip>...
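To double-check the result, the running firmware version can also be queried directly on the DPU; a sketch assuming the MFT tools are installed, reusing the MST device name from the reset step above:

dpu# sudo mst start
dpu# sudo flint -d /dev/mst/mt41686_pciconf0 query

The FW Version field in the output should match the <New_FW_ver> reported by the updater.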
Supported Interfaces

This section describes the DPU-supported interfaces. Each numbered interface referenced in the figures is described in the following table with a link to detailed information. The figures in the sections below are for illustration purposes only.

Interfaces of HHHL DPUs

Interfaces of MBF2H332A-AECOT, MBF2H332A-AEEOT, MBF2H332A-AENOT

Component Side
• DPU IC: 8 cores
• PCI Express Interface: PCIe Gen 4.0 through an x8 edge connector
• Networking Ports: Ethernet traffic is transmitted through the DPU SFP56 connectors. The SFP56 connectors allow for the use of optical modules and passive cable interconnect solutions. By default, the port cages of this group of OPNs are set to operate in SFP28 mode (default card firmware setting).
Component Side / Print Side

• DPU IC: 8 cores
• PCI Express Interface: PCIe Gen 4.0 through an x16 edge connector
• Networking Interface: Network traffic is transmitted through the DPU QSFP56 connector. The QSFP56 connector allows for the use of optical modules and passive cable interconnect solutions.
• Networking Ports LEDs Interface: One bi-color LED per port for the link and physical status
• DDR4 SDRAM On-Board Memory:
• NC-SI Management Interface: Connection for remote sideband management
• USB 4-pin vertical connector (default): Mounted on the DPU for OS image loading
• 1GbE OOB Management Interface: 1GbE BASE-T OOB management interface
• RTC Battery: Battery holder for RTC
• eMMC Interface: x8 NAND flash

Interfaces of FHHL DPUs

Interfaces of MBF2H512C-AECOT, MBF2H512C-AESOT, MBF2H512C-AEUOT, MBF2H532C-AECOT, MBF2H532C-AESOT
Component Side / Print Side

• DPU IC: 8 cores
• PCI Express Interface: PCIe Gen 4.0 through an x8 edge connector
• Networking Ports: The Ethernet traffic is transmitted through the DPU SFP56 connectors. The SFP56 connectors allow the use of optical modules and passive cable interconnect solutions.
• DDR4 SDRAM On-Board Memory: 8 units of SDRAM for a total of 16GB/32GB @ 3200MT/s; single DDR4 channel, 64-bit + 8-bit ECC, solder-down memory
• NC-SI Management Interface: Connectivity for remote sideband management (NC-SI over RBT). The NC-SI connector type differs per the product HW version: in the engineering samples, a 30-pin NC-SI connector is populated, whereas in HW versions TBD and up, a 20-pin NC-SI connector is populated.
• USB 4-pin Vertical Interface: Used for OS image loading
Component Side / Print Side

• DPU IC: 8 cores
• PCI Express Interface: PCIe Gen 4.0 through an x16 edge connector
• Networking Ports: The network traffic is transmitted through the DPU QSFP56 connectors. The QSFP56 connectors allow the use of optical modules and passive cable interconnect solutions.
• Networking Ports LEDs Interface: One bi-color I/O LED per port to indicate link and physical status
• DDR4 SDRAM On-Board Memory:
• External PCIe Power Supply Connector: An external 12V power connection through a 6-pin ATX connector. NOTE: This connector is present on FHHL P-Series DPUs only. It is not present on FHHL E-Series DPUs.
• RTC Battery: Battery holder for RTC
• eMMC Interface: x8 NAND flash

Interfaces of MBF2H516C-CECOT, MBF2H516C-CESOT, MBF2H516C-CEUOT, MBF2M516C-CECOT, MBF2M516C-CESOT, MBF2H536C-CECOT, MBF2H536C-CESOT,
• DPU IC: 8 cores
• PCI Express Interface: PCIe Gen 4.0 through an x16 edge connector
• Networking Ports: The network traffic is transmitted through the DPU QSFP56 connectors. The QSFP56 connectors allow the use of optical modules and passive cable interconnect solutions.
• Networking Ports LEDs Interface: One bi-color I/O LED per port to indicate link and physical status
• DDR4 SDRAM On-Board Memory:
At the heart of BlueField-2, the ConnectX-6 Dx network offload controller with RDMA and RDMA over Converged Ethernet (RoCE) technology delivers cutting-edge performance for networking and storage applications such as NVMe over Fabrics.
Traffic is transmitted through the cards' QSFP56/SFP56 connectors. By default, the port cages of this group of OPNs are set to operate in QSFP28/SFP28 mode (default card firmware setting). BlueField-2 DPUs support copper/optic and SR4 modules only.

Networking Ports LEDs Interface

There is one bi-color (yellow and green) I/O LED per port to indicate speed and link status.
Error behavior (bi-color LED, yellow/green):
• Over-current: Over-current condition of the networking ports. The LED blinks until the error is fixed.

Physical status:
• Physical Activity: Blinking green
• Link Up: Solid green

DDR4 SDRAM On-Board Memory

The DPU incorporates 16GB or 32GB @ 3200MT/s single DDR4 channel, 64-bit + 8-bit ECC, solder-down memory.

NC-SI Management Interface
NC-SI connector type on the DPU you have purchased.

NC-SI Connector Type: 30-pin
UART Interface Location and Connectivity: For DPUs with onboard BMC, the UART interface is that of the BlueField-2 device. For DPUs without onboard BMC, the UART interface is that of the NIC BMC device.
NC-SI Connector Type: 30-pin

NC-SI Connector Pin # | Signal on DPU without BMC | Signal on DPU with BMC
… | BF_UART0_RX | BMC_RX5
… | BF_UART0_TX | BMC_TX5

Please note the following: The UART interface is compliant with the TTL 3.3V voltage level. A USB-to-UART cable that supports TTL voltage levels should be used to connect the UART interface for Arm console access (see the example below). Please refer to UART Cable...
NC-SI Connector Type: 20-pin
UART Interface Location and Connectivity: For DPUs with onboard BMC hardware, the UART interface is that of the NIC BMC device.

NC-SI Connector Pin # | Signal on DPU with BMC
… | BMC_RX5
… | BMC_TX5

USB Interfaces

The USB interface is used to load operating system images. The following table lists the types of onboard USB interfaces and the DPU part numbers that use them.
Some DPUs incorporate a local NIC BMC (baseboard management controller) on the board. The BMC SoC (system on a chip) can utilize either shared or dedicated NICs for remote access. The BMC node enables remote power cycling, board environment monitoring, BlueField-2 chip temperature monitoring, board power consumption monitoring, and individual interface resets.
For DPUs with integrated BMC: 1GbE OOB management can be performed via the BlueField-2 device or the integrated BMC.

1GbE OOB Management LEDs Interface

There are two OOB management LEDs, one green and one amber/yellow. The following table describes LED behavior for DPUs with or without onboard BMC.
eMMC Interface

The DPU incorporates an eMMC interface on the card's print side. The eMMC is an x8 NAND flash used for Arm boot, operating system storage, and disk space. Memory size is either 64GB or 128GB, where 128GB is effectively 40GB with high durability.

External PCIe Power Supply Connector
The DPU PTP solution allows you to run any PTP stack on your host. For testing and measurement, selected NVIDIA DPUs allow you to use the PPS-out signal from the onboard MMCX RA connectors; the DPU also allows measuring PTP at scale with the PPS-in signal. The PTP hardware clock on the DPU is sampled on each PPS-in signal, and the timestamp is sent to the software.
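As an illustration of running a PTP stack on the host, here is a minimal sketch using the linuxptp package; the interface name enp3s0f0 is a hypothetical example, and the DPU's host-facing port is assumed to support hardware timestamping:

host# sudo apt-get install -y linuxptp
host# sudo ptp4l -i enp3s0f0 -m

ptp4l defaults to hardware timestamping when the NIC supports it; add -S to fall back to software timestamping.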
Hardware Installation

Installation and initialization of the BlueField-2 DPU require attention to the mechanical attributes, power specification, and precautions for electronic equipment.

Safety Warnings

Safety warnings are provided here in the English language. Please observe all safety warnings to avoid injury and prevent damage to system components. Note that not all warnings are relevant to all models.
CLASS 1 LASER PRODUCT and reference to the most recent laser standards: IEC 60825-1:1993 + A1:1997 + A2:2001 and EN 60825-1:1994 + A1:1996 + A2:2001

Installation Procedure Overview

The installation procedure of the BlueField-2 DPU involves the following steps:
• Check the system's requirements.
The operating environment should meet severity level G1 as per ISA 71.04 for gaseous contamination and ISO 14644-1 class 8 for cleanliness level. The BlueField-2 DPU is designed and validated for operation in data-center servers and other large environments that guarantee proper power supply and airflow conditions.
(Relevant for OPNs: MBF2H516A-CENOT and MBF2H516A-CEEOT)

Airflow Requirements

The BlueField-2 DPU is offered with one airflow pattern: from the heatsink to the network ports. It is prohibited to use port-to-heatsink airflow, as it may cause damage to the BlueField-2 DPU. Please refer to the Specifications section for airflow numbers per card model.
Turn off the power to the system, and disconnect the power cord. Refer to the system documentation for instructions. Before you install the BlueField-2 DPU, make sure that the system is disconnected from power. (Optional) Check the mounting bracket on the BlueField-2 DPU.
Due to the risk of damaging the EMI fingers, it is not recommended to replace the bracket more than three times.

Removing the Existing Bracket

Use a Torx bit #6 (T6) driver to remove the two bracket screws and save them for the installation step.
Please note that the following figures are for illustration purposes only.

FHHL Cards Installation

To power up the FHHL P-Series DPUs that have an x16 PCIe Gen 4 interface, you need to connect a PCIe external power cable to the on-board 6-pin ATX connector.
4. When the DPU is properly seated, the port connectors are aligned with the slot opening, and the DPU faceplate is visible against the system chassis.
Secure the DPU with the screw. For the FHHL P-Series DPUs requiring an external power cable: Connect the 6-pin ATX power connector from the power supply to the power connector on the top edge of the DPU. Note that the connector and socket on the card have a unique shape and connect in one way only. For further instructions, please refer to the cable vendor documentation.
Do not use excessive force when seating the card, as this may damage the system or the DPU. When the DPU is properly seated, the port connectors are aligned with the slot opening, and the DPU faceplate is visible against the system chassis. Secure the DPU with the screw.
Cables and Modules

Networking Cable Installation

All cables can be inserted or removed with the unit powered on. To insert a cable, press the connector into the port receptacle until the connector is firmly seated. Support the weight of the cable before connecting it to the DPU card, using a cable holder or by tying the cable to the rack. Determine the correct orientation of the connector to the card before inserting the connector.
UART Cable Installation

The UART console interface is located on the DPU. Connect the supplied USB 2.0 Type-A to 30-pin flat socket cable from this interface to the desired server for UART console capabilities. For more information on the NC-SI interface, please refer to UART Interface Connectivity.
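Once the cable is connected, the Arm console can be opened from the attached machine; a sketch assuming the USB-to-UART adapter enumerates as /dev/ttyUSB0 and the console runs at the common 115200 baud rate:

host# sudo screen /dev/ttyUSB0 115200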
Uninstalling the DPU

Safety Precautions

The DPU is installed in a system that operates with voltages that can be lethal. Before uninstalling the DPU, please observe the following precautions to avoid injury and prevent damage to system components:
• Remove any metallic objects from your hands and wrists.
• It is strongly recommended to use an ESD strap or other antistatic devices.
• Check that both the DPU and its link partner are set to the same speed and duplex settings (see the sketch below).
• Forgot the password needed to install/upgrade the BlueField-2 DPU image: refer to the latest version of the BlueField DPU SW Manual and follow the instructions under the "Upgrading NVIDIA BlueField DPU Software" section.
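To check the negotiated speed and duplex on the host side, a quick sketch using the standard ethtool utility (the interface name enp3s0f0 is a hypothetical example); compare the Speed and Duplex fields in the output against the link partner's settings:

host# ethtool enp3s0f0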
The BlueField-2 DPU is designed and validated for operation in data-center servers and other large environments that guarantee proper power supply and airflow conditions. The DPU is not intended for installation on a desktop or a workstation. Moreover, installing the DPU in any system without proper power and airflow levels can impact the DPU's functionality and potentially damage it.
DPU Power Consumption and Airflow:
• Voltage: 12V
• Maximum power available through SFP56 port: 2W
• Power and airflow specifications are provided in the NVIDIA BlueField-2 DPUs Power and Airflow Specifications document, which is available at NVOnline following login.

Temperature (Operational): 0°C to 55°C

Environmental
Power Supply, Consumption and Airflow:
• Voltage: 12V
• Maximum power available through QSFP56 port: 6W
• Power and airflow specifications are provided in the NVIDIA BlueField-2 DPUs Power and Airflow Specifications document, which is available at NVOnline following login.

Temperature (Operational): 0°C to 55°C

Environmental
Notes: The non-operational storage temperature specifications apply to the product without its package.
Power Supply, Consumption and Airflow:
• Voltage: 12V
• Maximum power available through SFP56 port: 1.5W (per port)
• Power and airflow specifications are provided in the NVIDIA BlueField-2 DPUs Power and Airflow Specifications document, which is available at NVOnline following login.

Temperature (Operational): 0°C to 55°C
DPU Power Supply, Consumption and Airflow:
• Voltage: 12V
• Maximum power available through SFP56 port: 1.5W (per port)
• Power and airflow specifications are provided in the NVIDIA BlueField-2 DPUs Power and Airflow Specifications document, which is available at NVOnline following login.

Temperature (Operational): 0°C to 55°C
The PCIe external power cable should be supplied by the customer (usually available with the server). Refer to External Power Supply Connector pinouts for pin descriptions. BlueField-2 P-Series - 8 Cores - 550MHz/2750MHz BlueField-2 SoC • MBF2H516A-CEEOT: Crypto Enabled, Secure Boot Enabled • MBF2H516A-CENOT: Crypto Disabled, Secure Boot Enabled Card Dimensions (FHHL): 4.53 in.
DPU Power Supply, Consumption and Airflow:
• Voltage: 12V
• Maximum power available through QSFP56 port: 2.5W (per port)
• Power and airflow specifications are provided in the NVIDIA BlueField-2 DPUs Power and Airflow Specifications document, which is available at NVOnline following login.

Temperature (Operational): 0°C to 55°C
DPU Power Supply, Consumption and Airflow:
• Voltage: 12V
• Maximum power available through QSFP56 port: 2.5W (per port)
• Power and airflow specifications are provided in the NVIDIA BlueField-2 DPUs Power and Airflow Specifications document, which is available at NVOnline following login.

Temperature (Operational): 0°C to 55°C
The PCIe external power cable should be supplied by the customer (usually available with the server). Refer to External Power Supply Connector pinouts for pin descriptions.

BlueField-2 P-Series - 8 Cores - 550MHz/2750MHz BlueField-2 SoC
• MBF2H516C-CECOT: Crypto Enabled, Secure Boot Enabled
• MBF2H516C-CESOT: Crypto Disabled, Secure Boot Enabled
• MBF2H516C-CEUOT: Crypto Disabled, Secure Boot Enabled with UEFI Disabled
• MBF2H536C-CECOT
If your target application for this crypto-enabled card will utilize 100Gb/s or higher bandwidth, where a substantial part of the bandwidth will be allocated for IPsec traffic, please refer to the NVIDIA BlueField-2 DPUs Product Release Notes document to learn about a potential bandwidth limitation. See Related Documents section for details on accessing the document.
(a) If your target application for this crypto-enabled card will utilize 100Gb/s or higher bandwidth, where a substantial part of the bandwidth will be allocated for IPsec traffic, please refer to the NVIDIA BlueField-2 DPUs Product Release Notes document to learn about a potential bandwidth limitation. See Related Documents section for details on accessing the document.
DPU Mechanical Drawing and Dimensions

All dimensions are in millimeters. The PCB mechanical tolerance is +/- 0.13mm. The diagrams may differ for different cards and are provided here for illustration purposes only.

HHHL DPUs

FHHL DPUs
The DPU SoC has a thermal shutdown safety mechanism that automatically shuts down the DPU in cases of high-temperature events, improper thermal coupling, or heatsink removal. Refer to the below table for heatsink details per card configuration. For the required airflow (LFM) per OPN, please refer to the NVIDIA BlueField-2 DPUs Power and Airflow Specifications document, available at NVOnline following login.
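For reference, the SoC temperature can also be read from the host at runtime; a minimal sketch assuming the MFT tools are installed, reusing the MST device name shown earlier in this manual. The command prints the current temperature in degrees Celsius:

host# sudo mst start
host# sudo mget_temp -d /dev/mst/mt41686_pciconf0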
The product revisions indicated on the labels in the following figures do not necessarily represent the latest revisions of the cards. BlueField-2 DPU labels contain four MAC addresses (Host, OOB, ECPF and MPF). • Host (Base MAC) •...
Supported Servers and Power Cords

Supported Servers

Server support depends on the particular setup being used. The following is a partial list of servers with which the DPUs have been tested. For more information, please contact your NVIDIA representative.

Vendor Name | Server Model
Supported Power Cords

Vendor Name | Part Number | Description
Cisco | 72-102163-01 | Cisco UCSC M6/7 1U power cable for NVIDIA BlueField-2 DPU
Cisco | 72-102164-01 | Cisco UCSC M6/7 2U power cable for NVIDIA BlueField-2 DPU
HP | 755742-001 | HP GPU power cable for HP DL380 Gen9
HP | 805123-001 | HP GPU power cable for HP DL380 Gen9
DPU PCI Express x8 Pin Description (continued)

Pin # | Signal Name | Description
… | PERN6 |
… | PETP7 |
… | PETN7 |
… | PERP7 |
… | PERN7 |
… | PRSNT2# | Mechanical Present

DPU PCI Express x16 Pin Description

Pin # | Signal Name | Description
… | PRSNT1# | Mechanical Present
… | JTAG | Not Connected
… | SMCLK | Host SMBus
… | JTAG | Not Connected
External Power Supply Connector

The table below provides the pins of the external power supply interfaces on the DPU. For further details, please refer to External PCIe Power Supply Connector. The mechanical pinout of the 6-pin external +12V power connector is shown below. The +12V connector is a GPU-power PCIe standard connector. Care should be taken to ensure the power is applied to the correct pins, as some 6-pin ATX-type connectors can have different pinouts.
Pin # | Signal
… | USB DN

NC-SI Management Interface

The tables below list the NC-SI management interface pinout descriptions per card type. Please follow the link to the table coinciding with the OPN you have purchased.

OPNs | HW Versions | Link
MBF2H332A-AECOT, MBF2H332A-AEEOT, MBF2H332A-AENOT | A1xx - A3xx | 30-pin Connector
Table A - NC-SI Connector Pins

Signal Name | Direction | Description | Comments
REF_CLK | Input | 50MHz REF CLK for the RBT NC-SI bus | Reference clock. Synchronous clock reference for the receive, transmit, and control interfaces. The clock shall have a typical frequency of 50MHz ±50 ppm. For baseboards, this pin shall be connected between the baseboard NC-SI over RBT PHY and the DPU cable connector.
Signal Name | Direction | Description | Comments
RX_D0 | Output | Receive data output from the SoC | Receive data. Data signals from the network controller to the BMC. For baseboards, this pin shall be connected between the baseboard NC-SI over RBT PHY and the connector. This signal requires a 100 kΩ pull-down resistor to GND on the baseboard between the BMC and the RBT isolator to prevent the signal from floating when no card is installed.
Signal Name | Direction | Description | Comments
TX_D0 | Input | Transmit data input to the SoC | Data signals from the BMC to the network controller. For baseboards, this pin shall be connected between the baseboard NC-SI over RBT PHY and the connector. This signal requires a 100 kΩ pull-down resistor to GND on the baseboard between the RBT isolator and the DPU cable connector to prevent the card-side signals from floating when the RBT signals are isolated.
Signal Name | Direction | Description | Comments
I2C_SDA | Bidirectional | I2C Serial Data | GW_ARM1
GND | | Ground |
I2C_SCL | Bidirectional | I2C Serial Clock | GW_ARM1
GND | | Ground |
GND | | Ground |
NC | | Not Connected |
UART_TX | Output | Transmit data output from the SoC | 3.3V UART TX signal from the baseboard.
UART_RX | Input | Receive data | 3.3V UART RX signal to the baseboard.

Table B - NC-SI Connector Pins
Pin # | Signal Name | Direction | Description
… | NCSI_TX_D0 | Input | Transmit Data In 0
… | GND | | Ground
… | NCSI_TX_D1 | Input | Transmit Data In 1
… | GND | | Ground
… | NCSI_TX_EN | Input | Transmit Enable
… | GND | | Ground
… | Reserved | | SOFT_RST#, connected to BlueField-2 device pin HOST_GPIO[7]
… | ARM_I2C1_SDA | Input/Output | Open-drain signal
… | ARM_NSRST# | Input/Output | Open-drain signal
… | ARM_I2C1_SCL | Input/Output | Open-drain signal
… | PACK_ID2 | Input | Connected to BlueField-2 device pin NIC_GPIO[46]
Pin # | Signal Name | Direction | Description
… | REF_CLK | Input | 50MHz REF CLK for the NC-SI bus
… | GND | | Ground
… | NCSI_ARB_IN | Input | NC-SI hardware arbitration input
… | PACK_ID0 | Input | Connected to BlueField-2 device pin NIC_GPIO[49]; see the note above the table
… | NCSI_ARB_OUT | Output | NC-SI hardware arbitration output
… | PACK_ID1 | | Connected to BlueField-2 device pin NIC_GPIO[47]
Pin # | Signal Name | Direction | Description
… | NIC_BMC_CTRL1 | Input/Output | Connected to BMC device pin W4 (open drain); open-drain signal
… | BMC_I2C2_SCL | Input/Output | Open-drain signal
… | PACK_ID2 | Input | Connected to BlueField-2 device pin NIC_GPIO[46]; see the description above the table
… | GND | | Ground
… | NIC_BMC_CTRL0 | | Connected to BMC device pin Y3 (open drain)
… | NC | | Not Connected
Pin# | Pin Name | Description/Comments
… | GND | Ground
… | GND | Ground
… | PACK_ID1 (connected to BlueField-2 NIC_GPIO[47]) | Input. See the description above the table regarding PACK_ID. Should be connected to the primary controller NC-SI PACK_ID pins to set the appropriate package ID. PACK_ID0 should be connected to the endpoint device GPIO associated with Package ID[0].
Pin# | Pin Name | Description/Comments
… | … | For DPUs, this pin should be connected between the connector and the RBT PHY. No external termination is required. 50MHz REF CLK for the NC-SI bus.
… | RBT_RXD1 | Output. Receive Data Out 1. Data signals from the network controller to the BMC. For baseboards, this pin should be connected between the baseboard NC-SI over RBT PHY and the connector.
Pin# | Pin Name | Description/Comments
… | PACK_ID0 (connected to BlueField-2 NIC_GPIO[49]) | Input. See the note above the table regarding NC-SI PACK_ID. Should be connected to the primary controller NC-SI PACK_ID pins to set the appropriate package ID. PACK_ID0 should be connected to the endpoint device GPIO associated with Package ID[0].
Pin# | Pin Name | Description/Comments
… | NCSI_TX_D1 | Input. Transmit Data In 1. Data signals from the BMC to the network controller. For baseboards, this pin should be connected between the baseboard NC-SI over RBT PHY and the connector. This signal requires a 100 kΩ pull-down resistor to GND on the baseboard between the RBT isolator and the DPU cable connector to prevent the card-side signals from floating when the RBT signals are isolated.
Pin # | Signal Name | Direction | Description
… | REF_CLK | Input | 50MHz REF CLK for the NC-SI bus
… | GND | | Ground
… | NCSI_ARB_IN | Input | NC-SI hardware arbitration input
… | PACK_ID0 | Input | Connected to BlueField-2 device pin NIC_GPIO[49]; see the note above the table
… | NCSI_ARB_OUT | Output | NC-SI hardware arbitration output
… | PACK_ID1 | | Connected to BlueField-2 device pin NIC_GPIO[47]
Pin # | Signal Name | Direction | Description
… | NIC_BMC_CTRL1 | Input/Output | Connected to BMC device pin W4 (open drain); open-drain signal
… | BMC_I2C2_SCL | Input/Output | Open-drain signal
… | PACK_ID2 | Input | Connected to BlueField-2 device pin NIC_GPIO[46]; see the note above the table
… | GND | | Ground
… | NIC_BMC_CTRL0 | | Connected to BMC device pin Y3 (open drain)
… | NC | | Not Connected
Document Revision History

Aug. 2023:
• Added step 3 to section Verifying DPU Connection and Setting Up Host Environment
• Added Monitoring

Jul. 2023: Removed PPS IN/OUT support for the following OPNs:
• MBF2H516C-CECOT
• MBF2H516C-CESOT
• MBF2H516C-CEUOT

Jun. 2023: Added important notes on selected OPNs in Ordering Part Numbers and the Specifications chapter
• Added mechanical diagrams (component and print sides, bracket) for MBF2M355A-VECOT, MBF2M355A-VESOT.
• Added in specification tables that power and airflow specifications are provided in the NVIDIA BlueField-2 DPUs Power and Airflow Specifications document, which is available through the customer portal following login.
NVIDIA product and may result in additional or different conditions and/or requirements beyond those contained in this document. NVIDIA accepts no liability related to any default, damage, costs, or problem which may be based on or attributable to: (i) the use of the NVIDIA product in any manner that is contrary to this document or (ii) customer product designs.
INDIRECT, SPECIAL, INCIDENTAL, PUNITIVE, OR CONSEQUENTIAL DAMAGES, HOWEVER CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY, ARISING OUT OF ANY USE OF THIS DOCUMENT, EVEN IF NVIDIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. Notwithstanding any damages that customer might incur for any reason whatsoever, NVIDIA’s aggregate and cumulative liability towards customer for the products described herein shall be limited in...