IBM Flex System x240 Compute Node
If you are installing a DIMM as a result of a DIMM failure, you might have to re-enable the DIMM. To re-enable the DIMM, complete the following steps:

1. Verify that the amount of installed memory is the expected amount. You can check this through the operating system, by watching the monitor as the compute node starts, by using the CMM sol command, or through IBM Flex System Manager management software (if installed). For more information, see the CMM and IBM Flex System Manager documentation.
2. Run the Setup utility to re-enable the DIMMs.
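For the verification step, a quick scripted check can help. The following is a minimal sketch, assuming a Linux operating system (so /proc/meminfo is available); EXPECTED_GB is a placeholder you would set for your own configuration:

```python
# Minimal sketch: compare the OS-visible memory total against the
# expected installed capacity after a DIMM replacement.
# Assumes Linux (/proc/meminfo); EXPECTED_GB is a placeholder value.

EXPECTED_GB = 64  # hypothetical: set to your installed DIMM total

def os_visible_memory_gb() -> float:
    """Read MemTotal from /proc/meminfo and convert kB to GB."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                kb = int(line.split()[1])
                return kb / (1024 * 1024)
    raise RuntimeError("MemTotal not found in /proc/meminfo")

if __name__ == "__main__":
    visible = os_visible_memory_gb()
    # The OS total is slightly below the installed total (firmware
    # reservations), so allow a tolerance rather than an exact match.
    if visible < EXPECTED_GB * 0.9:
        print(f"Only {visible:.1f} GB visible; a DIMM may still be disabled.")
    else:
        print(f"{visible:.1f} GB visible; memory total looks as expected.")
```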
The compute node has a total of 24 dual inline memory module (DIMM) connectors. It supports low-profile (LP) DDR3 DIMMs with error-correcting code (ECC) in 2 GB, 4 GB, 8 GB, 16 GB, and 32 GB capacities. The following illustration shows the system-board components, including the DIMM connectors.

Memory-channel configuration

  Microprocessor     Memory channel   DIMM connectors
  Microprocessor 1   Channel A        4, 5, and 6
                     Channel B        1, 2, and 3
                     Channel C        7, 8, and 9
                     Channel D        10, 11, and 12
  Microprocessor 2   Channel A        22, 23, and 24
                     Channel B        19, 20, and 21
                     Channel C        13, 14, and 15
                     Channel D        16, 17, and 18

Depending on the memory mode that is set in the Setup utility, the compute node can support a minimum of 2 GB and a maximum of 384 GB of system memory with one microprocessor. If two microprocessors are installed, the compute node can support a minimum of 4 GB and a maximum of 768 GB of system memory. There are three memory modes:

Independent-channel mode: Independent-channel mode provides a maximum of 384 GB of usable memory with one installed microprocessor, and 768 GB of usable memory with two installed microprocessors (using 32 GB DIMMs).

Rank-sparing mode: In rank-sparing mode, one memory DIMM rank serves as a spare of the other ranks on the same channel. The spare rank is held in reserve and is not used as active memory.
The spare rank must have identical or larger memory capacity than all the other active DIMM ranks on the same channel. After an error threshold is surpassed, the contents of the failing rank are copied to the spare rank. The failed rank of DIMMs is taken offline, and the spare rank is put online and used as active memory in place of the failed rank. Consider the following additional information when you select rank-sparing memory mode:

- Rank sparing on one channel is independent of the sparing on all other channels.
- You can use the Setup utility to determine the status of the DIMM ranks.

Mirrored-channel mode: In mirrored-channel mode, memory is installed in pairs. Each DIMM in a pair must be identical in size and architecture.
The channels are grouped in pairs, with each channel receiving the same data. One channel is used as a backup of the other, which provides redundancy. The memory contents on channel B are duplicated in channel C, and the memory contents of channel A are duplicated in channel D. The effective memory that is available to the system is only half of what is installed.

One DIMM for each microprocessor is the minimum requirement. However, for optimal performance, install DIMMs in sets of four so that you distribute memory equally across all four channels. If two microprocessors are installed, distribute memory across all channels and equally between the microprocessors.

DIMM population sequence for rank-sparing mode

  DIMM pair            2 DIMMs per channel           3 DIMMs per channel
  installation order   1 CPU        2 CPUs           1 CPU            2 CPUs
  1                    4 and 5      4 and 5          4, 5, and 6      4, 5, and 6
  2                    8 and 9      20 and 21        7, 8, and 9      19, 20, and 21
  3                    1 and 2      8 and 9          1, 2, and 3      7, 8, and 9
  4                    11 and 12    16 and 17        10, 11, and 12   16, 17, and 18
  5                    n/a          1 and 2          n/a              1, 2, and 3
  6                    n/a          23 and 24        n/a              22, 23, and 24
  7                    n/a          11 and 12        n/a              10, 11, and 12
  8                    n/a          13 and 14        n/a              13, 14, and 15

Install DIMMs in the order indicated in the following table for mirrored-channel mode.
DIMM population sequence for mirrored-channel mode

  DIMM pair   DIMM slot numbers          DIMM slot numbers
              (1 microprocessor)         (2 microprocessors)
  1           4 and 1                    4 and 1
  2           9 and 12                   21 and 24
  3           2 and 5                    9 and 12
  4           8 and 11                   13 and 16
  5           3 and 6                    2 and 5
  6           7 and 10                   20 and 23
  7           none                       8 and 11
  8           none                       14 and 17
  9           none                       3 and 6
  10          none                       19 and 22
  11          none                       7 and 10
  12          none                       15 and 18

For mirrored-channel mode, the DIMM pair must be identical in size, type, and rank count.

Attention: To avoid breaking the retaining clips or damaging the DIMM connector, handle the clips gently.

Press the DIMM into the DIMM connector. The retaining clips lock the DIMM into the connector. Make sure that the small tabs on the retaining clips engage the notches on the DIMM.
If there is a gap between the DIMM and the retaining clips, the DIMM has not been correctly installed. Press the DIMM firmly into the connector, and then press the retaining clips toward the DIMM until the tabs are fully seated. When the DIMM is correctly installed, the retaining clips are parallel to the sides of the DIMM.
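To make the channel layout and pairing rules above concrete, here is an illustrative sketch (not from the original guide) that encodes the slot-to-channel map from the memory-channel configuration table and sanity-checks a planned mirrored-channel population; the example plan at the bottom is hypothetical:

```python
# Illustrative sketch: encode the x240 slot-to-channel map and check a
# mirrored-channel population plan against the rules quoted above.

CHANNEL_MAP = {  # DIMM connector -> (microprocessor, channel)
    4: (1, "A"), 5: (1, "A"), 6: (1, "A"),
    1: (1, "B"), 2: (1, "B"), 3: (1, "B"),
    7: (1, "C"), 8: (1, "C"), 9: (1, "C"),
    10: (1, "D"), 11: (1, "D"), 12: (1, "D"),
    22: (2, "A"), 23: (2, "A"), 24: (2, "A"),
    19: (2, "B"), 20: (2, "B"), 21: (2, "B"),
    13: (2, "C"), 14: (2, "C"), 15: (2, "C"),
    16: (2, "D"), 17: (2, "D"), 18: (2, "D"),
}

def check_mirrored_pair(slot_a, slot_b, cap_a_gb, cap_b_gb):
    """A pair must span two channels of one microprocessor,
    and both DIMMs must be identical in size."""
    cpu_a, ch_a = CHANNEL_MAP[slot_a]
    cpu_b, ch_b = CHANNEL_MAP[slot_b]
    if cpu_a != cpu_b or ch_a == ch_b:
        raise ValueError(f"slots {slot_a} and {slot_b} do not form a valid pair")
    if cap_a_gb != cap_b_gb:
        raise ValueError("DIMMs in a pair must be identical in size")

# Hypothetical example: first two mirrored-channel pairs for one
# microprocessor, populated with 8 GB DIMMs.
check_mirrored_pair(4, 1, 8, 8)
check_mirrored_pair(9, 12, 8, 8)
print("pairs are consistent with the mirrored-channel rules")
```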
The Flex System™ x240 Compute Node is a high-performance Intel Xeon processor-based server that offers outstanding performance for virtualization with new levels of CPU performance and memory capacity, and flexible configuration options. It is part of Flex System, a new category of computing that integrates multiple server architectures, networking, storage, and system management capability into a single system that is easy to deploy and manage.
Flex System has full built-in virtualization support of servers, storage, and networking to speed provisioning and increase resiliency. In addition, it supports open industry standards, such as operating systems, networking and storage fabrics, virtualization, and system management protocols, to fit easily within existing and future data center environments. Flex System is scalable and extendable with multi-generation upgrades to protect and maximize IT investments.

The x240 Compute Node is an efficient server designed to run a broad range of workloads, armed with advanced management capabilities that allow you to manage your physical and virtual IT resources from a single pane of glass.

Suggested use: database, virtualization, enterprise applications, collaboration/email, streaming media, web, HPC, Microsoft RemoteFX, and cloud applications.

Note: This Product Guide describes the models of the x240 Compute Node with Intel Xeon E5-2600 processors.
For models with the Intel Xeon E5-2600 v2 processors, see the separate Product Guide for those models.

Figure 1 shows the Flex System x240 Compute Node.

Figure 1. The Flex System x240 Compute Node
The Flex System x240 Compute Node is a high-availability, scalable compute node optimized to support next-generation microprocessor technology, and it is ideally suited for medium and large businesses. The following specifications summarize the server.

Models: 8737-x1x and 8737-x2x (x-config); 8737-15X and 7863-10X (e-config).
Form factor: Standard-width compute node.
Chassis support: Flex System Enterprise Chassis.
Processor: Up to two Intel Xeon processor E5-2600 product family CPUs with eight cores (up to 2.9 GHz), six cores (up to 2.9 GHz), four cores (up to 3.3 GHz), or two cores (up to 3.0 GHz). Two QPI links, up to 8.0 GT/s each. Up to 1600 MHz memory speed. Up to 20 MB L3 cache.
Chipset: Intel C600 series.
Memory: Up to 24 DIMM sockets (12 DIMMs per processor) using Low Profile (LP) DDR3 DIMMs. RDIMMs, UDIMMs, and LRDIMMs supported. 1.5 V and low-voltage 1.35 V DIMMs supported.
Support for up to 1600 MHz memory speed, depending on the processor. Four memory channels per processor (three DIMMs per channel). Supports two DIMMs per channel operating at 1600 MHz (2 DPC @ 1600 MHz) with single- and dual-rank RDIMMs. Supports three DIMMs per channel operating at 1066 MHz (3 DPC @ 1066 MHz) with single- and dual-rank RDIMMs.
Memory maximums: With LRDIMMs: up to 768 GB with 24x 32 GB LRDIMMs and two processors. With RDIMMs: up to 384 GB with 24x 16 GB RDIMMs and two processors. With UDIMMs: up to 64 GB with 16x 4 GB UDIMMs and two processors.
Memory protection: ECC, Chipkill (for x4-based memory DIMMs), memory mirroring, and memory rank sparing.
Disk drive bays: Two 2.5-inch hot-swap SAS/SATA drive bays supporting SAS, SATA, and SSD drives. Optional support for up to eight 1.8-inch SSDs. Up to 12 additional 2.5-inch drive bays with the optional Storage Expansion Node.
Maximum internal storage: With two 2.5-inch hot-swap drives: up to 2 TB with 1 TB 2.5-inch NL SAS HDDs, up to 2.4 TB with 1.2 TB 2.5-inch SAS HDDs, up to 2 TB with 1 TB 2.5-inch SATA HDDs, or up to 3.2 TB with 1.6 TB 2.5-inch SATA SSDs.
An intermix of SAS and SATA HDDs and SSDs is supported. Alternatively, with 1.8-inch SSDs and a ServeRAID M5115 RAID adapter, up to 4 TB with eight 512 GB 1.8-inch SSDs.
Additional storage is available with an attached Flex System Storage Expansion Node.
RAID support: RAID 0 and 1 with the integrated LSI SAS2004 controller. Optional ServeRAID M5115 RAID controller with RAID 0, 1, 10, 5, 50 support and 1 GB cache. Supports up to eight 1.8-inch SSDs with expansion kits. Optional flash backup for cache, RAID 6/60, and SSD performance enabler.
Optical and tape bays: No internal bays; use an external USB drive.
Network interfaces: x2x models: two 10 Gb Ethernet ports with Embedded 10Gb Virtual Fabric Ethernet LAN-on-motherboard (LOM) controller; Emulex BE3 based. x1x models: none standard; optional 1 Gb or 10 Gb Ethernet adapters.
PCI expansion slots: Two I/O connectors for adapters, each with a PCI Express 3.0 x16 interface. Includes an expansion connector (PCIe 3.0 x16) to connect an expansion node such as the PCIe Expansion Node.
The PCIe Expansion Node supports two full-height PCIe adapters, two low-profile PCIe adapters, and two Flex System I/O adapters.
Ports: USB: one external port; two internal ports for embedded hypervisor with the optional USB Enablement Kit. Console breakout cable port providing local KVM and serial ports (cable standard with chassis; additional cables optional).
Systems management: UEFI, Integrated Management Module 2 (IMM2) with Renesas SH7757 controller, Predictive Failure Analysis, light path diagnostics panel, automatic server restart, and remote presence.
Support for IBM Flex System Manager, IBM Systems Director and Active Energy Manager, and Lenovo ServerGuide.
Security features: Power-on password, administrator's password, Trusted Platform Module 1.2.
Video: Matrox G200eR2 video core with 16 MB of video memory integrated into the IMM2. Maximum resolution is 1600x1200 at 75 Hz with 16M colors.
Limited warranty: 3-year customer-replaceable unit and onsite limited warranty with 9x5/NBD response.
Operating systems supported: Microsoft Windows Server, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, VMware ESXi. See the operating system support section for specifics.
Service and support: Optional service upgrades are available through ServicePacs: 4-hour or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, and remote technical support for Lenovo hardware and selected Lenovo and OEM software.
Dimensions: Width 215 mm (8.5 in.), height 51 mm (2.0 in.), depth 493 mm (19.4 in.).
Weight: Maximum configuration: 6.98 kg (15.4 lb).

The x240 servers are shipped with the following items: Statement of Limited Warranty, Important Notices, and a Documentation CD that contains the Installation and User's Guide.

The x240 is supported in the Flex System Enterprise Chassis. Up to 14 x240 Compute Nodes can be installed in the chassis; however, the actual number that can be installed depends on these factors:
- The TDP power rating for the processors that are installed in the x240
- The number of power supplies installed in the chassis
- The capacity of the power supplies installed (2100 W or 2500 W)
- The chassis power redundancy policy used (N+1 or N+N)

The following table provides guidelines about the number of x240 Compute Nodes that can be installed. For more guidance, use the Power Configurator. In the table:

- Green = No restriction to the number of x240 Compute Nodes that are installable
- Yellow = Some bays must be left empty in the chassis

Table 3. Maximum number of x240 Compute Nodes installable, based on power supplies installed and power redundancy policy used
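As a rough illustration of the underlying arithmetic only (not a substitute for the Power Configurator), the sketch below estimates how many nodes a given supply configuration can power; the per-node and overhead wattage figures are hypothetical placeholders:

```python
# Rough illustration of chassis power budgeting. The wattage constants
# are hypothetical; use the official Power Configurator for real planning.

def usable_chassis_watts(psu_watts: int, psu_count: int, policy: str) -> int:
    """Capacity that survives the redundancy policy.

    N+1: one supply can fail and the rest carry the load.
    N+N: half the supplies mirror the other half.
    """
    if policy == "N+1":
        return psu_watts * (psu_count - 1)
    if policy == "N+N":
        return psu_watts * (psu_count // 2)
    raise ValueError("policy must be 'N+1' or 'N+N'")

NODE_WATTS = 350        # hypothetical per-node draw; depends on CPU TDP
CHASSIS_OVERHEAD = 800  # hypothetical: fans, CMM, I/O modules

for policy in ("N+1", "N+N"):
    budget = usable_chassis_watts(2500, 6, policy) - CHASSIS_OVERHEAD
    print(f"{policy}: about {min(14, budget // NODE_WATTS)} nodes (14 bays max)")
```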
DDR3 memory is compatibility tested and tuned for optimal performance and throughput. Memory specifications are integrated into the light path diagnostics for immediate system performance feedback and optimum system uptime.
From a service and support standpoint, memory automatically assumes the Lenovo system warranty, and Lenovo provides service and support worldwide.

The following table lists the memory options available for the x240 server. DIMMs can be installed one at a time, but for performance reasons, install them in sets of four (one for each of the four memory channels).

Table 4. Memory options for the x240
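As a small illustration of the DIMMs-per-channel speed rule quoted in the specifications (2 DPC at 1600 MHz, 3 DPC at 1066 MHz with RDIMMs), the following sketch encodes it; treat it as a simplification, since the actual speed also depends on the processor model and DIMM voltage:

```python
# Simplified sketch of the x240 memory-speed rule for RDIMMs:
# up to 2 DIMMs per channel run at 1600 MHz; 3 DIMMs per channel at 1066 MHz.
# Real behavior also depends on processor model and DIMM voltage.

def rdimm_speed_mhz(dimms_per_channel: int) -> int:
    if dimms_per_channel not in (1, 2, 3):
        raise ValueError("the x240 has three DIMM slots per channel")
    return 1600 if dimms_per_channel <= 2 else 1066

for dpc in (1, 2, 3):
    print(f"{dpc} DIMM(s) per channel -> {rdimm_speed_mhz(dpc)} MHz")
```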
The x240 server has two 2.5-inch hot-swap drive bays, accessible from the front of the blade server (Figure 2). These bays connect to the integrated LSI SAS2004 6 Gbps SAS/SATA RAID-on-Chip (ROC) controller.

The integrated LSI SAS2004 ROC has the following features:

- Four-port LSI SAS2004 controller with 6 Gbps throughput per port
- PCIe x4 Gen 2 host interface
- Two SAS ports routed internally to the two hot-swap drive bays
- Supports RAID-0, RAID-1, and RAID-1E

The x240 also supports up to eight 1.8-inch drives with the addition of the ServeRAID M5115 controller and additional SSD tray hardware; these are described in the next section. Supported drives are listed in the drive options section.
The x240 supports up to eight 1.8-inch solid-state drives combined with a ServeRAID M5115 SAS/SATA controller (90Y4390). The M5115 attaches to the I/O adapter 1 connector and can be attached even if the Compute Node Fabric Connector is installed (used to route the Embedded 10Gb Virtual Fabric Adapter to bays 1 and 2, as discussed in 'I/O expansion options'). The ServeRAID M5115 cannot be installed if an adapter is installed in I/O adapter slot 1.

The ServeRAID M5115 supports the following combinations of 2.5-inch drives and 1.8-inch solid-state drives:

- Up to two 2.5-inch drives only
- Up to four 1.8-inch drives only
- Up to two 2.5-inch drives, plus up to four 1.8-inch solid-state drives
- Up to eight 1.8-inch solid-state drives

The ServeRAID M5115 SAS/SATA Controller (90Y4390) provides an advanced RAID controller supporting RAID 0, 1, 10, 5, 50, and optional 6 and 60.
It includes 1 GB of cache, which can be backed up to flash when attached to the supercapacitor included with the optional ServeRAID M5100 Series Enablement Kit (90Y4342). At least one hardware kit is required with the ServeRAID M5115 controller; the supported hardware kits enable specific drive support.

Table 7. ServeRAID M5115 and hardware kits

  Part number  Feature code  Description                                                     Maximum supported
  90Y4390      A2XW          ServeRAID M5115 SAS/SATA Controller for Flex System             1
  90Y4342      A2XX          ServeRAID M5100 Series Enablement Kit for Flex System x240      1
  90Y4341      A2XY          ServeRAID M5100 Series Flex System Flash Kit for x240           1
  47C8808      A47D          ServeRAID M5100 Series Flex System Flash Kit v2 for x240        1
  90Y4391      A2XZ          ServeRAID M5100 Series SSD Expansion Kit for Flex System x240   1

The hardware kits have the following features:

ServeRAID M5100 Series Enablement Kit for Flex System x240 (90Y4342) enables support for up to two 2.5-inch HDDs or SSDs in the hot-swap bays in the front of the server. It includes a CacheVault unit, which enables MegaRAID CacheVault flash cache protection.
This enablement kit replaces the standard two-bay backplane (which is attached via the planar to an onboard controller) with a new backplane that attaches via an included flex cable to the M5115 controller. It also includes an air baffle, which also serves as an attachment point for the CacheVault unit.

MegaRAID CacheVault flash cache protection uses NAND flash memory powered by a supercapacitor to protect data stored in the controller cache. This module eliminates the need for a lithium-ion battery, which is commonly used to protect DRAM cache memory on PCI RAID controllers.
To avoid the possibility of data loss or corruption during a power or server failure, CacheVault technology transfers the contents of the DRAM cache to NAND flash using power from the supercapacitor. After power is restored to the RAID controller, the saved data is transferred from the NAND flash back to the DRAM cache, which can then be flushed to disk.

Tip: The Enablement Kit is required only if 2.5-inch drives are to be used.
If you plan to install only four or eight 1.8-inch SSDs, this kit is not required.

ServeRAID M5100 Series Flex System Flash Kit for x240 (90Y4341) enables support for up to four 1.8-inch SSDs.
This kit replaces the standard two-bay backplane with a four-bay SSD backplane that attaches via an included flex cable to the M5115 controller. Because only SSDs are supported, a CacheVault unit is not required, and therefore this kit does not have a supercapacitor. The use of this kit limits which 1.8-inch solid-state drives can be used in the x240; use Flash Kit v2 instead.

ServeRAID M5100 Series Flex System Flash Kit v2 for x240 (47C8808) enables support for up to four 1.8-inch SSDs. This kit replaces the standard two-bay backplane with a four-bay SSD backplane that attaches via an included flex cable to the M5115 controller. Because only SSDs are supported, a CacheVault unit is not required, and therefore this kit does not have a supercapacitor.
This v2 kit provides support for the latest high-performance SSDs.

ServeRAID M5100 Series SSD Expansion Kit for Flex System x240 (90Y4391) enables support for up to four internal 1.8-inch SSDs.
This kit includes two air baffles, left and right, each with two 1.8-inch SSD attachment locations, plus flex cables for attachment of up to four 1.8-inch SSDs.

Note: The SSD Expansion Kit cannot be installed if the USB Enablement Kit (49Y8119) is already installed, because these kits occupy the same location in the server.

The following table shows the kits required for each combination of drives. For example, if you plan to install eight 1.8-inch SSDs, you need the M5115 controller, the Flash Kit, and the SSD Expansion Kit.

Table 8. ServeRAID M5115 hardware kits

  Drive combination                                Enablement Kit   Flash Kit   SSD Expansion Kit
  Up to two 2.5-inch drives                        Required
  Up to four 1.8-inch SSDs                                          Required
  Two 2.5-inch drives plus four 1.8-inch SSDs      Required                     Required
  Up to eight 1.8-inch SSDs                                         Required    Required

The following figure shows how the ServeRAID M5115 and the Enablement Kit are installed in the server to support two 2.5-inch drives with MegaRAID CacheVault flash cache protection (row 1 of the preceding table).

Figure 4. The ServeRAID M5115 and the Enablement Kit installed

The following figure shows how the ServeRAID M5115 and the Flash and SSD Expansion Kits are installed in the server to support eight 1.8-inch solid-state drives (row 4 of the preceding table).

Figure 5. The ServeRAID M5115 with the Flash and SSD Expansion Kits installed
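The kit rules in Table 8 reduce to a small lookup. Here is an illustrative sketch (the function and argument names are mine, not from the guide) that derives the required kits from a planned drive mix, following the kit descriptions above:

```python
# Illustrative sketch: derive required ServeRAID M5115 hardware kits from a
# planned drive mix. Front 2.5-inch drives need the Enablement Kit; front
# 1.8-inch SSDs need the Flash Kit; internal 1.8-inch SSDs need the SSD
# Expansion Kit (per the kit descriptions above).

def required_kits(front_25: int, ssd_18: int) -> set[str]:
    if front_25 > 2 or ssd_18 > 8 or (front_25 and ssd_18 > 4):
        raise ValueError("unsupported drive combination")
    kits = set()
    if front_25:
        kits.add("Enablement Kit (90Y4342)")  # 2.5-inch backplane + CacheVault
    if ssd_18 and not front_25:
        kits.add("Flash Kit v2 (47C8808)")    # four-bay front SSD backplane
    if ssd_18 > 4 or (front_25 and ssd_18):
        kits.add("SSD Expansion Kit (90Y4391)")  # up to four internal SSDs
    return kits

print(required_kits(front_25=0, ssd_18=8))  # Flash Kit + SSD Expansion Kit
print(required_kits(front_25=2, ssd_18=4))  # Enablement Kit + SSD Expansion Kit
```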
The ServeRAID M5115 controller supports the following Features on Demand upgrades.

  Part number  Feature code  Description                                                                                Maximum supported
  90Y4410      A2Y1          ServeRAID M5100 Series RAID 6 Upgrade for Flex System                                      1
  90Y4412      A2Y2          ServeRAID M5100 Series Performance Upgrade for Flex System (MegaRAID FastPath)             1
  90Y4447      A36G          ServeRAID M5100 Series SSD Caching Enabler for Flex System (MegaRAID CacheCade Pro 2.0)    1

These features are described as follows:

RAID 6 Upgrade (90Y4410): Adds support for RAID 6 and RAID 60. This is a Feature on Demand license.

Performance Upgrade (90Y4412): The Performance Upgrade for Flex System (implemented using the LSI MegaRAID FastPath software) provides high-performance I/O acceleration for SSD-based virtual drives by exploiting an extremely low-latency I/O path to increase the maximum I/O per second (IOPS) capability of the controller. This feature boosts the performance of applications with a highly random data storage access pattern, such as transactional databases.
Part number 90Y4412 is a Feature on Demand license.

SSD Caching Enabler for traditional hard drives (90Y4447): The SSD Caching Enabler for Flex System (implemented using LSI MegaRAID CacheCade Pro 2.0) is designed to accelerate the performance of hard disk drive (HDD) arrays with only an incremental investment in solid-state drive (SSD) technology. The feature enables the SSDs to be configured as a dedicated cache to help maximize I/O performance for transaction-intensive applications, such as databases and web serving. The feature tracks data storage access patterns and identifies the most frequently accessed data. The hot data is then automatically stored on the SSDs that are assigned as a dedicated cache pool on the ServeRAID controller. Part number 90Y4447 is a Feature on Demand license.
This feature requires that at least one SSD be installed.

The x240 supports the attachment of the Flex System Storage Expansion Node, which provides the ability to attach 12 additional hot-swap 2.5-inch HDDs or SSDs locally to the attached compute node. The Storage Expansion Node provides storage capacity for Network Attached Storage (NAS) workloads, providing flexible storage to match capacity, performance, and reliability needs.

Model 8737-HBx includes the Storage Expansion Node as standard. All other models support the Storage Expansion Node as an option.

The following figure shows the Flex System Storage Expansion Node attached to a compute node.

Figure 6. Flex System Storage Expansion Node (right) attached to a compute node (left)

The ordering information for the Storage Expansion Node is shown in the following table.
Table 12. Ordering part number and feature code

  Part number  Feature code  Description                          Maximum supported
  68Y8588      A3JF          Flex System Storage Expansion Node   1

Note: This feature cannot be configured with model 7863-10X in e-config.
Use model 8737-15X instead.

The Storage Expansion Node has the following features:

- Connects directly to supported compute nodes through a PCIe 3.0 interface to the compute node's expansion connector (see Figure 3)
- Support for 12 hot-swap 2.5-inch drives, accessible via a sliding tray
- Support for 6 Gbps SAS and SATA drives, both HDDs and SSDs
- Based on an LSI SAS2208 6 Gbps RAID-on-Chip (ROC) controller
- Supports RAID 0, 1, 5, 10, and 50 as standard; JBOD is also supported
- Optional RAID 6 and 60 with a Features on Demand upgrade
- Optional 512 MB or 1 GB cache with cache-to-flash supercapacitor offload

Note: The use of the Storage Expansion Node requires that the x240 Compute Node have both processors installed. For more information, see the Product Guide for the Flex System Storage Expansion Node.

Some models of the x240 include an Embedded 10Gb Virtual Fabric Adapter (VFA, also known as LAN on Motherboard or LOM) built into the system board; the models table earlier in this guide indicates which models include it. Each x240 model that includes the embedded 10Gb VFA also has the Compute Node Fabric Connector installed in I/O connector 1 (and physically screwed onto the system board) to provide connectivity to the Enterprise Chassis midplane. Figure 3 shows the location of the Fabric Connector.

The Fabric Connector enables port 1 on the embedded 10Gb VFA to be routed to I/O module bay 1 and port 2 to be routed to I/O module bay 2. The Fabric Connector can be unscrewed and removed, if required, to allow the installation of an I/O adapter on I/O connector 1.

The Embedded 10Gb VFA is based on the Emulex BladeEngine 3 (BE3), a single-chip, dual-port 10 Gigabit Ethernet (10GbE) controller.
These are some of the features of the Embedded 10Gb VFA:

- PCI Express Gen2 x8 host bus interface
- Supports connection to 10 Gb and 1 Gb Flex System Ethernet switches
- Supports multiple virtual NIC (vNIC) functions
- TCP/IP offload engine (TOE enabled)
- SR-IOV capable
- RDMA over TCP/IP capable
- iSCSI and FCoE upgrade offering via Features on Demand (FoD)

The following table lists the ordering information for the IBM Virtual Fabric Advanced Software Upgrade (LOM), which enables iSCSI and FCoE support on the Embedded 10Gb Virtual Fabric Adapter.

Table 13. Feature on Demand upgrade for FCoE and iSCSI support

The x240 has two I/O expansion connectors for attaching I/O adapter cards, plus a third expansion connector designed to connect an expansion node such as the PCIe Expansion Node. Each I/O expansion connector is a very high-density 216-pin PCIe connector. Installing I/O adapter cards allows the server to connect with switch modules in the Flex System Enterprise Chassis. Each slot has a PCI Express 3.0 x16 host interface, and both slots support the same form-factor adapters.

The following figure shows the location of the I/O expansion connectors.
Figure 7. Location of the I/O adapter slots in the Flex System x240 Compute Node

All I/O adapters are the same shape and can be used in any available slot. A compatible switch or pass-through module must be installed in the corresponding I/O bays in the chassis, as indicated in the following table. Installing two switches means that all ports of the adapter are enabled, which improves performance and network availability.

Table 14. Adapter-to-I/O-bay correspondence

  I/O adapter slot in the server   Port on the adapter          Corresponding I/O module bay in the chassis
  Slot 1                           Port 1                       Module bay 1
                                   Port 2                       Module bay 2
                                   Port 3 (for 4-port cards)    Module bay 1
                                   Port 4 (for 4-port cards)    Module bay 2
  Slot 2                           Port 1                       Module bay 3
                                   Port 2                       Module bay 4
                                   Port 3 (for 4-port cards)    Module bay 3
                                   Port 4 (for 4-port cards)    Module bay 4
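Table 14's mapping is regular enough to compute. The following illustrative sketch (the function name is mine) returns the chassis module bay for a given adapter slot and port:

```python
# Illustrative sketch of the Table 14 mapping: on each adapter, odd ports go
# to the first module bay of the slot's bay pair and even ports to the second.
# Slot 1 uses module bays 1 and 2; slot 2 uses module bays 3 and 4.

def module_bay(adapter_slot: int, port: int) -> int:
    if adapter_slot not in (1, 2) or port not in (1, 2, 3, 4):
        raise ValueError("slots are 1-2; ports are 1-4 (4-port cards)")
    bay_pair_base = 1 if adapter_slot == 1 else 3
    return bay_pair_base + (1 if port % 2 == 0 else 0)

for slot in (1, 2):
    for port in (1, 2, 3, 4):
        print(f"slot {slot} port {port} -> module bay {module_bay(slot, port)}")
```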
The following figure shows the location of the switch bays in the Flex System Enterprise Chassis.

Figure 8. Location of the switch bays in the Flex System Enterprise Chassis

The following figure shows how two-port adapters are connected to switches installed in the chassis.

Figure 9. Logical layout of the interconnects between I/O adapters and I/O modules

The x240 supports the attachment of the Flex System PCIe Expansion Node, which provides the ability to attach additional PCI Express cards, such as High IOPS SSD adapters, fabric mezzanine cards, and next-generation graphics processing units (GPUs), to supported Flex System compute nodes. This capability is ideal for many applications that require high-performance I/O, special telecommunications network interfaces, or hardware acceleration using a PCI Express card. The PCIe Expansion Node supports up to four PCIe 2.0 adapters and two additional Flex System expansion adapters.

The PCIe Expansion Node is attached to the x240 as shown in the following figure.
Figure 10. PCIe Expansion Node

The ordering information for the PCIe Expansion Node is shown in the following table.

Table 15. Ordering part number and feature code

  Part number  Feature code  Description                       Maximum supported
  81Y8983      A1BV          Flex System PCIe Expansion Node   1

Note: This feature cannot be configured with model 7863-10X in e-config.

As described in 'Embedded 10Gb Virtual Fabric Adapter,' certain models (those with a model number of the form x2x) have a 10Gb Ethernet controller on the system board, and its ports are routed to the midplane and switches installed in the chassis via a Compute Node Fabric Connector that takes the place of an adapter in I/O slot 1. Models without the Embedded 10Gb Virtual Fabric Adapter (those with a model number of the form x1x) do not include any other Ethernet connections to the Enterprise Chassis midplane as standard.
Therefore, for those models, an I/O adapter must be installed in either I/O connector 1 or I/O connector 2 to provide network connectivity between the server and the chassis midplane and, ultimately, to the network switches. The following table lists the supported network adapters and upgrades. Adapters can be installed in either slot.
However, compatible switches must be installed in the corresponding bays of the chassis. All adapters can also be installed in the PCIe Expansion Node. The 'Maximum supported' column indicates the number of adapters that can be installed in the server and in the PCIe Expansion Node (PEN).

Table 16. Adapters supported in the PCIe Expansion Node

  Part number  Feature code  Description                                                 Maximum supported
  94Y5960      A1R4          NVIDIA Tesla M2090 (full-height adapter)                    1 *
  47C2120      A4F1          NVIDIA GRID K1 for Flex System PCIe Expansion Node          1 †
  47C2121      A4F2          NVIDIA GRID K2 for Flex System PCIe Expansion Node          1 †
  47C2119      A4F3          NVIDIA Tesla K20 for Flex System PCIe Expansion Node        1 †
  47C2122      A4F4          Intel Xeon Phi 5110P for Flex System PCIe Expansion Node    1 †
  None         4809          4765 Crypto Card (full-height adapter)                      2 ‡

* When this double-wide adapter is installed in the PCIe Expansion Node, it occupies both full-height slots.
The low-profile slots and Flex System I/O expansion slots can still be used.
† If installed, only this adapter is supported in the system; no other PCIe adapters may be installed.
‡ Orderable as separate MTM 4765-001, feature 4809. Available via AAS (e-config) only.

The x240 supports the ESXi hypervisor on a USB memory key via the x240 USB Enablement Kit. This kit offers two internal USB ports.
The x240 USB Enablement Kit and the supported USB memory keys are listed in the following table.

There are two types of USB keys: preloaded keys and blank keys. Blank keys allow you to download a Lenovo-customized version of ESXi and load it onto the key. Preloaded keys are shipped with a specific version of ESXi already loaded. The x240 supports one or two keys installed, but only in certain combinations.

Supported combinations:

- One preloaded key
- One blank key
- One preloaded key and one blank key
- Two blank keys

Unsupported combinations:
- Two preloaded keys (installing two preloaded keys prevents ESXi from booting)

Having two keys installed provides a backup boot device. Both devices are listed in the boot menu, which allows you to boot from either device or to set one as a backup in case the first one gets corrupted.

Note: The x240 USB Enablement Kit and USB memory keys are not supported if the SSD Expansion Kit (90Y4391) is already installed, because these kits occupy the same location in the server.

Table 20. Virtualization options
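The combination rules above amount to a one-line check. Here is an illustrative sketch (the function and its argument names are mine, not from the guide):

```python
# Illustrative sketch: validate a planned USB key combination against the
# rules above. The only disallowed case is two preloaded keys; at most two
# keys fit in the USB Enablement Kit's internal ports.

def usb_keys_supported(preloaded: int, blank: int) -> bool:
    total = preloaded + blank
    if not 1 <= total <= 2:
        return False          # the kit offers exactly two internal ports
    return preloaded <= 1     # two preloaded keys prevent ESXi from booting

for combo in [(1, 0), (0, 1), (1, 1), (0, 2), (2, 0)]:
    print(combo, "supported" if usb_keys_supported(*combo) else "unsupported")
```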
For quick problem determination when you are physically at the server, the x240 offers a three-step guided path:

1. The Fault LED on the front panel
2. The light path diagnostics panel, shown in the following figure
3. LEDs next to key components on the system board

The x240 light path diagnostics panel is visible when you remove the server from the chassis. The panel is located on the top right-hand side of the compute node, as shown in the following figure.

Figure 11. Location of the x240 light path diagnostics panel

To illuminate the light path diagnostics LEDs, power off the compute node, slide it out of the chassis, and press the power button. The power button doubles as the light path diagnostics remind button when the server is removed from the chassis.

The meanings of the LEDs in the light path diagnostics panel are listed in the following table.

Table 21. Light path diagnostics panel LEDs
  LED     Meaning
  LP      The light path diagnostics panel is operational.
  S BRD   A system board error is detected.
  MIS     A mismatch has occurred between the processors, DIMMs, or HDDs within the configuration, as reported by POST.
  NMI     A non-maskable interrupt (NMI) has occurred.
  TEMP    An over-temperature condition occurred that was critical enough to shut down the server.
  MEM     A memory fault has occurred. The corresponding DIMM error LEDs on the system board are also lit.
  ADJ     A fault is detected in the adjacent expansion unit (if installed).

The server contains a Lenovo Integrated Management Module II (IMM2), which interfaces with the advanced management module in the chassis. The combination of these provides advanced service-processor control, monitoring, and alerting functions.
If an environmental condition exceeds a threshold or if a system component fails, LEDs on the system board are lit to help you diagnose the problem, the error is recorded in the event log, and you are alerted to the problem.

The Flex System x240 Compute Node has a three-year on-site warranty with 9x5 next-business-day terms. Lenovo offers warranty service upgrades through ServicePac, discussed in this section. ServicePac is a series of prepackaged warranty maintenance upgrades and post-warranty maintenance agreements with a well-defined scope of services, including service hours, response time, term of service, and service agreement terms and conditions.

ServicePac offerings are country-specific. That is, each country might have its own service types, service levels, response times, and terms and conditions. Not all types of ServicePac might be available in a particular country.
For more information about the Lenovo ServicePac offerings available in your country, see the ServicePac Product Selector. The following table explains warranty service definitions in more detail.

Table 22. Warranty service definitions

On-site repair (OR): A service technician will come to the server's location for equipment repair.

24x7x2 hour: A service technician is scheduled to arrive at your customer's location within two hours after remote problem determination is completed. We provide 24-hour service, every day, including Lenovo holidays.

24x7x4 hour: A service technician is scheduled to arrive at your customer's location within four hours after remote problem determination is completed.
We provide 24-hour service, every day, including Lenovo holidays.

9x5x4 hour: A service technician is scheduled to arrive at your customer's location within four business hours after remote problem determination is completed. We provide service from 8:00 a.m. to 5:00 p.m. in the customer's local time zone, Monday through Friday, excluding Lenovo holidays. If it is determined after 1:00 p.m. that on-site service is required, the customer can expect the service technician to arrive the morning of the following business day. For noncritical service requests, a service technician will arrive by the end of the following business day.

9x5 next business day: A service technician is scheduled to arrive at your customer's location on the business day after we receive your call, following remote problem determination.
We provide service from 8:00 a.m. to 5:00 p.m. in the customer's local time zone, Monday through Friday, excluding Lenovo holidays.

The server conforms to the following standards:

- ASHRAE Class A3
- FCC: verified to comply with Part 15 of the FCC Rules, Class A
- Canada ICES-004, issue 3, Class A
- UL/IEC 60950-1
- CSA C22.2 No. 60950-1
- NOM-019
- Argentina IEC 60950-1
- Japan VCCI, Class A
- IEC 60950-1 (CB Certificate and CB Test Report)
- China CCC (GB4943); (GB9254, Class A); (GB17625.1)
- Taiwan BSMI CNS13438, Class A; CNS14336
- Australia/New Zealand AS/NZS CISPR 22, Class A
- Korea KN22, Class A; KN24
- Russia/GOST ME01, IEC 60950-1, GOST R 51318.22, GOST R 51318.24, GOST R 51317.3.2, GOST R 51317.3.3
- IEC 60950-1 (CB Certificate and CB Test Report)
- CE Mark (EN55022 Class A, EN60950-1, EN55024, EN61000-3-2, EN61000-3-3)
- CISPR 22, Class A
- TUV-GS (EN60950-1/IEC 60950-1, EK1-ITB2000)
For more information, see the following resources:

- US Announcement Letter
- US Announcement Letter for x240 model 8737-15X in e-config
- Flex System x240 Compute Node product page
- Flex System Information Center
- Flex System x240 Compute Node Installation and Service Guide
- ServerProven for Flex System
- ServerProven hardware compatibility page for the x240
- ServerProven compatibility page for operating system support
- Flex System Interoperability Guide
- Configuration and Option Guide
- xREF - x86 Server Reference
- System x Support Portal
- System Storage® Interoperation Center

Trademarks: Lenovo and the Lenovo logo are trademarks or registered trademarks of Lenovo in the United States, other countries, or both.