
Abstract: As of 1 January 2017 the Host/Controllers/Switches section will no longer be updated; please use SSIC instead. The documents titled Supported Hardware List and Recommended Software Levels provide the operating systems, host adapters, SAN fabric elements, RAID controllers and selected other hardware that have been tested or qualified by IBM.

This page combines the previously separate Software and Hardware support matrices. For operating systems, we show the latest tested release levels and service packs. Only the listed operating systems are supported. For host adapters, SAN fabric elements and RAID controllers, we list our currently recommended firmware and/or BIOS levels and formally support only the hardware listed. While these levels are not mandatory levels for a customer to be supported by IBM, they are the recommended levels.

There may be known operational issues with older firmware and BIOS levels; in such cases, a customer working with the IBM Support center may be directed to upgrade a component to a recommended level. SVC supports, at any level, all applications that run against the standard block-level OS interface, including applications such as Oracle, HP Virtual Connect, etc.

Note: levels shown in italic text indicate previously recommended levels; the level shown in bold text indicates the latest recommended level. EOL (End of Life): where interoperability items have gone end of life (out of support) and are no longer supported by the vendor, either generally or by extended service contract, IBM will continue to support the environment on a best-can-do basis.

Where issues occur which are deemed by IBM Support to be directly related to items that are no longer generally supported by the vendor, IBM may direct customers to upgrade a component to a recommended level. While IBM recommends these levels based upon the most recent testing, the following levels were tested against previous versions of SVC, and IBM will support the use of these levels with SVC V7.4.x. If you have interoperability requirements which are not listed in this document, please contact your IBM Account Representative.

16Gbps Fibre Channel Node Connection: please see SSIC for supported 16Gbps Fibre Channel configurations with 16Gbps node hardware. Note that 16Gbps node hardware is supported only when connected to Brocade and Cisco 8Gbps or 16Gbps fabrics. Direct connection to a 2Gbps or 4Gbps SAN, or direct host attachment to 2Gbps or 4Gbps ports, is not supported. Other configured switches which are not directly connected to the 16Gbps node hardware can be any supported fabric switch as currently listed in SSIC.

IBM highly recommends the use of SDDDSM 2.4.7.1 for clustered environments.

With versions of SDDDSM earlier than 2.4.4.0 in clustered environments, please see the referenced advisory. QLogic Fibre Channel cards are not supported attached to Cisco or Brocade FCF switches.

Veritas Storage Foundation Settings: the default setting of VERITAS Storage Foundation 5.1 is SCSI-2. However, in configurations with SVC or V7000, IBM recommends the use of SCSI-3 for the native solution VCS + VERITAS Storage Foundation 5.1 (VSFW 5.1 HA) + DMPDSM, or when using VERITAS Storage Foundation 5.1 with SDDDSM. IBM highly recommends the use of SDDDSM 2.4.7.1 for clustered environments; with versions of SDDDSM earlier than 2.4.4.0 in clustered environments, please see the referenced advisory. The HBA NodeTimeOut on Emulex adapters under Windows should be set to 3 seconds.

HP 9000 Series Servers (PA-RISC 64-bit) & HP Itanium Servers: the following patches should be applied to the base OS: BUNDLE11i (B.3), HWEnable11i (B.064), FEATURE11i (B.063), QPK1123 (B.064). Set the PV (Physical Volume) timeout to 60 seconds when using PV Links, to avoid I/O timeouts during fabric maintenance and SAN Volume Controller CCLs (concurrent code loads). Recommended bundles: BUNDLE11i B.3 Required Patch Bundle for HP-UX 11i v2 (B.11.23); FEATURE11i B.070a Feature Enablement Patches for HP-UX 11i v2; HWEnable11i B.070 Hardware Enablement Patches for HP-UX 11i v2; OnlineDiag B.11.23.10.05 HP-UX 11.23 Support Tools Bundle; QPKAPPS B.072 Applications Patches for HP-UX 11i v2; QPKBASE B.072 Base Quality Pack Bundle for HP-UX 11i v2; HPUX11i-OE-MC B. HP-UX Mission Critical Operating Environment Component. SDD does not support path failover when applied to raw disk; the use of PV Links is recommended.

Supported OS versions: Solaris 11.0, Solaris 11.1, Solaris 11.2, Solaris 11.3, Oracle VM Server for SPARC version 3.0. For guidance on Sun Cluster in a SAN Volume Controller stretched cluster environment, please refer to the referenced document. An issue has been identified with certain Solaris versions that may cause I/O failures during a failover scenario.

The affected versions are S11.1 SRU20 and later, S11.2 and later, and S11.2SRU. The following IDRs resolve the issue and are available directly from Oracle: IDR1352.1 for Solaris 11.1.21.4.1; IDR1565.1 for Solaris 11.2.4.6.0; IDR1563.1 for Solaris 11.2.2.5.0, 11.2.2.7.0 and 11.2.2.8.0.

SAN Volume Controller supports server blades that meet all of the following criteria: 1. The server blade contains QLogic or Emulex generic chip sets running the IBM or equivalent supported drivers; and 2. These generic chip sets and drivers are present in the SAN Volume Controller support matrix for HBAs used in non-blade servers; and 3. The server blade is running an OS that is present in the SAN Volume Controller support matrix; and 4. Either a Fibre Channel pass-through module is used in the blade server chassis to connect the blades to an external FC network, or the Fibre Channel switch module used in the blade server chassis is equivalent (in the sense that it runs the same microcode on the same switch hardware base) to a discrete Fibre Channel switch that is already listed in the SAN Volume Controller support matrix.
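The four criteria above form a simple support checklist. As an illustration only (our own encoding, not an IBM tool), they can be expressed as a predicate in which each boolean argument answers one criterion; the function and parameter names are hypothetical:

```python
# Illustrative checklist of the blade-server support criteria.
# Criteria 1-3 must all hold, plus either half of criterion 4.

def blade_server_supported(chipset_has_supported_driver,   # criterion 1
                           chipset_in_hba_matrix,          # criterion 2
                           os_in_matrix,                   # criterion 3
                           fc_passthrough_used,            # criterion 4a
                           switch_module_equivalent):      # criterion 4b
    """Return True only when all mandatory criteria are satisfied."""
    return (chipset_has_supported_driver
            and chipset_in_hba_matrix
            and os_in_matrix
            and (fc_passthrough_used or switch_module_equivalent))

# Example: a blade with supported chip sets, drivers and OS, connected
# through a Fibre Channel pass-through module, satisfies the criteria.
result = blade_server_supported(True, True, True, True, False)
```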

Examples of specific BladeCenter configurations that IBM currently includes in SAN Volume Controller interoperability testing include IBM Flex V7000 and general Flex Systems support. IBM highly recommends the use of SDDDSM 2.4.7.1 for clustered environments; with versions of SDDDSM earlier than 2.4.4.0 in clustered environments, please see the referenced advisory. With 90Y3566 & 90Y3550+, the BNT 10 Port 10Gb Ethernet Switch Module (PN 46C7191) and QLogic Virtual Fabric Extension Module (PN 46M6172) are required components; PN 90Y3550 must be upgraded with PN 49Y4265 (Advanced Upgrade).

pNIC mode only when used with the IBM Brocade Converged 10GbE Switch Module (P/N 69Y1909); pNIC or vNIC1 mode when used with the IBM BNT 10 Port 10Gb Ethernet Switch Module (P/N 46C7191). SAN boot (IPL) is supported from SVC; however, the IPL process can fail because there is no multipath support during the boot process. Remove the ql2xfailover=0 parameter from the IBM modprobe.conf. Supported with Emulex 11000 and QLogic 2460.

Single-path SAN boot problem: SLES 11 Server x86_64 does not boot after OS installation on a SAN disk. The problem appears when the Adaptec RAID controller BIOS (local disk) and the Emulex HBA BIOS are both active.

Solution: the host boots from the SAN disk when the local disk controller BIOS is disabled.

Expand/shrink vdisk warning: data loss is possible!

VMware Native Multipathing Plugin (NMP) Controllers: please refer to the Configuration Guide or SAN Volume Controller Information Center, which details the recommended customer-configurable settings that should be applied for each storage controller type.

Multipathing modes:

Round Robin: I/O is distributed over multiple ports on the controller.

Mdisk Group Balanced: I/O is sent to one target port on the controller for each mdisk. The assignment of ports to mdisks is chosen to spread all the mdisks within an mdisk group across all of the active target ports as evenly as possible.

Single Port Active: all I/O is sent to a single port on the controller for all mdisks.

Controller Balanced: I/O is sent to one target port on the controller for each mdisk. The assignment of ports to mdisks is chosen to spread all the mdisks across all of the active target ports as evenly as possible.

IBM Storwize V3700: dynamic expansion of controller LUNs is not supported. Please refer to the referenced documents for the latest recommended DS4000 client software and RDAC levels, and for specific configuration guidelines, cabling requirements and restrictions when attaching EXP810 to DS4300 and DS4500. DS4000 copy services (FlashCopy, VolumeCopy, Metro Mirror) can be used with SAN Volume Controller. RAID-5 array size must be 4+1 or larger; RAID-6 array size must be 4+1+1 or larger. The DS4000 range maximum of 256 LUNs per partition limits the number of LUNs from a single DS4000 to a single SAN Volume Controller cluster to 256. Please see the following DS4000 flashes: if you have 07.36.08.xx code, you MUST contact DS4000 support prior to upgrading your DS4000 firmware.
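The "balanced" policies described above spread mdisks across a controller's active target ports so that port counts differ by at most one. A minimal sketch of that idea (our own illustration, not SVC source code; mdisk and port names are hypothetical):

```python
# Illustrative sketch of a balanced port-assignment policy: each mdisk
# gets exactly one target port, assigned round-robin across the
# controller's active ports so the spread is as even as possible.

def balance_mdisks_to_ports(mdisks, active_ports):
    """Map each mdisk to one active target port, round-robin."""
    if not active_ports:
        raise ValueError("no active target ports on controller")
    assignment = {}
    for i, mdisk in enumerate(mdisks):
        assignment[mdisk] = active_ports[i % len(active_ports)]
    return assignment

# Example: five mdisks over two active ports; the per-port counts
# end up differing by at most one.
mapping = balance_mdisks_to_ports(
    ["mdisk0", "mdisk1", "mdisk2", "mdisk3", "mdisk4"],
    ["port_a", "port_b"],
)
```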

The 07.36.08.xx release is no longer supported. Dynamic expansion of controller LUNs is not supported. Please refer to the referenced documents for the latest recommended DS5000 client software and RDAC levels, and for specific configuration guidelines, cabling requirements and restrictions when attaching EXP5000 and EXP810. DS5000 copy services (FlashCopy, VolumeCopy, Metro Mirror) can be used with SAN Volume Controller. The DS5000 range maximum of 256 LUNs per partition limits the number of LUNs from a single DS5000 to a SAN Volume Controller cluster to 256.

RAID-5 array size must be 4+1 or larger; RAID-6 array size must be 4+1+1 or larger. Your HDS support representative will perform hardware maintenance and firmware upgrade procedures; these operations are non-disruptive to SAN Volume Controller. Dynamic expansion of controller LUNs is not supported. A SAN Volume Controller configuration must include at least one storage controller which supports SAN Volume Controller quorum disks. Ensure that you wait until each volume is formatted before you present it to the SAN Volume Controller.
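The array-width rules stated above for DS4000 and DS5000 arrays behind SAN Volume Controller (RAID-5 at least 4+1, RAID-6 at least 4+1+1) reduce to a minimum total drive count. A hedged sketch (our own helper, not an IBM API):

```python
# Illustrative check of the minimum array widths stated in the text:
# RAID-5 needs >= 5 drives (4 data + 1 parity);
# RAID-6 needs >= 6 drives (4 data + 2 parity).

def array_width_ok(raid_level, total_drives):
    """Return True if the array meets the stated minimum width."""
    if raid_level == 5:
        return total_drives >= 5
    if raid_level == 6:
        return total_drives >= 6
    raise ValueError("rule is stated only for RAID-5 and RAID-6")

# Example: a 3+1 RAID-5 array (4 drives) is below the 4+1 minimum.
ok_5 = array_width_ok(5, 5)
too_small = array_width_ok(5, 4)
```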

USP V & USP VM advanced functions such as thin provisioning are not supported.

Switches: SAN Volume Controller supports NPIV products in the SAN, with the restriction that these products can only be used for host attachment.

An example of an NPIV product is the Brocade Access Gateway module for the BladeCenter. FCIP technology listed on the IBM SAN support pages can be used in SAN Volume Controller configurations for the following uses: 1) iSCSI host attachment (please refer to the iSCSI Host Attachment section for more information); 2) links between two SAN Volume Controller clusters in a Metro Mirror or Global Mirror configuration (please refer to the referenced document for details). Clustered FCoE hosts are not supported. QFabric is supported as an FCoE transit switch, and QFX3500 standalone is supported acting as an FCoE-to-FC gateway device above the Brocade SAN with Fibre Channel connections to SVC.

Direct attachment of QFabric or QFX3500 to SVC is not supported.

Other Hardware & Software: technologies for extending the distance between two SAN Volume Controller clusters can be broadly divided into two categories. Inter-cluster Fibre Channel extenders: Fibre Channel extenders simply extend a Fibre Channel link by transmitting Fibre Channel packets across long links without changing the contents of those packets. IBM has tested a number of such Fibre Channel extender technologies with SAN Volume Controller and will support Fibre Channel extenders of all types, provided that they are planned, installed and operated as described in the referenced document.

Inter-cluster SAN routers: SAN routers extend the scope of a SAN by providing 'virtual nPorts' on two or more SANs. The router ensures that traffic at one virtual nPort is propagated to the other virtual nPort, but the two Fibre Channel fabrics remain independent of one another; thus nPorts on each of the fabrics cannot directly log into each other. Distance restrictions are imposed due to latency; the amount of latency which can be tolerated depends on the type of copy services being used (Metro Mirror or Global Mirror). Details of the maximum latencies supported can be found in the referenced document.

Machine Code updates for Power Systems and System Storage are available for IBM machines that are under warranty or an IBM hardware maintenance service agreement. Some exceptions apply. For more information, including how to obtain access to Machine Code updates for machines outside of warranty that are not covered by an IBM hardware maintenance service agreement, please follow the link provided.

Code for operating systems or other software products is available only where entitled under the applicable software warranty, IBM software maintenance or Software Subscription and Support agreement. Some exceptions may apply. For a list of Fix Central Machine Code updates available for installation on select machine types that do not require the machine to be covered under warranty, an IBM hardware maintenance service agreement, or a Special Bid Agreement, please follow the link provided. All code (including Machine Code updates, samples, fixes or other software downloads) provided on the Fix Central website is subject to the terms of the applicable license agreements.

As previously announced, Lenovo has acquired IBM's System x business. Machine Code policies relating to System x machines will be established by Lenovo and may be different from the policies described herein.