IBM System Storage Ideas Portal


Status: Future consideration
Created by: Guest
Created on: Sep 22, 2023

Increase limits and remove restrictions for NVMe over Fibre Channel

Increase the limit of "NVMe over Fibre Channel hosts per I/O group = 16" (perhaps to double that), and

REMOVE Volume Mobility restriction #5: "SCSI only. Fibre Channel and iSCSI supported. NVMe not supported."

IBM has been touting the usefulness of moving to end-to-end NVMe-oF connectivity for years. But improvements to the SCSI protocol over the past couple of years (interrupt coalescing, or something along those lines?) have "lowered the usefulness bar" of doing so, and consequently further development in this area has been lagging, specifically with respect to the limitations above. (It was also mentioned to me that there is a limit on the number of NVMe volume mappings, but I could not find that limit specified in the configuration limits and restrictions document. Any limit that supports an insufficient number of NVMe-mapped volumes should be increased to be more on par with the supported SCSI volume mappings, or at least to a few hundred volumes.)
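For reference, a rough sketch of how the current counts could be checked from the Spectrum Virtualize CLI, assuming the protocol column of lshost and the lshostvdiskmap command available on recent code levels; the host name below is a placeholder, and exact columns and options should be verified against the CLI reference for your code level:

    # List hosts with their attributes, including protocol (scsi or nvme) and I/O group count
    lshost -delim ,

    # Show one host in detail to confirm its protocol and I/O group membership (placeholder name)
    lshost ESX_HOST_01

    # List the volume mappings for that host to gauge how close it is to any mapping limit
    lshostvdiskmap ESX_HOST_01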

The problem is, the SCSI improvements still do NOT deliver a large improvement in the single biggest advantage that has always been touted for end-to-end NVMe connectivity: reduced CPU utilization. VMware vSphere 7 adds support for NVMe over Fibre Channel and uses the High-Performance Plug-in (HPP) multipathing driver to support it. This end-to-end NVMe protocol reduces the CPU that VMware consumes handling SCSI interrupt requests. Another VMware storage improvement is available with the HPP: you can set all I/Os to bypass VMware's I/O scheduler, which can otherwise lose performance to ESX-internal I/O queueing. Additionally, the plugin detects newly NVMe-mapped volumes, and size changes to those volumes, without the 'rescan the bus' step that SCSI requires.
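For illustration, a minimal ESXi command-line sketch of the HPP behavior described above, assuming vSphere 7; the device identifier is a placeholder, and the exact option names, values, and units should be verified against the VMware documentation for your ESXi build:

    # Confirm that the NVMe-oF devices are being claimed by the High-Performance Plug-in (HPP)
    esxcli storage hpp device list

    # Placeholder device ID: set a latency-sensitive threshold so qualifying I/Os bypass the
    # ESXi I/O scheduler, as described above (threshold value and units per VMware docs)
    esxcli storage hpp device set -d eui.0123456789abcdef --latency-sensitive-threshold=10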

According to testing we have performed, our environment will benefit significantly from the reduced CPU utilization on our ESX hosts that results from implementing end-to-end NVMe over FC connectivity. The limits above, however, are concerning and make us hesitant to do so. Please do the development necessary to raise or remove them as noted above.

Idea priority: High
  • Admin
    Philip Clark
    Nov 24, 2023

    Thank you for submitting this enhancement request. We have reviewed it and believe it is a good candidate for Virtualize.


    Thanks,
    FlashSystem Team

    philipclark@ibm.com

1 MERGED

Expand the 16 host limit for NVMe over Fibre Channel hosts per I/O group

Merged
We have run into the "Fabric too large" situation at a customer that added more than 16 VMware hosts to their NVMe-oF zone towards an FS5200 (I see the same limit in the FS7300/FS9500). Is there a particular reason for the 16-host limit for NV...
7 months ago in IBM Spectrum Virtualize / FlashSystem 1 Future consideration