IBM System Storage Ideas Portal


This portal is for opening public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post your own idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find additional information about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)


Extend the number of independent filesets

For a file system used in a large research facility with a capacity beyond 20 PB, we would like to create at least 5000 independent filesets for projects that are active in processing and generating data. We initially expect a demand for 5000 indep...
almost 3 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration

Reflink/ficlone support

reflink/ficlone is a system call for making clones of files, where all data blocks are initially shared and changes are then made through copy-on-write/redirect-on-write. It can be invoked through "cp --reflink" or the IOCTL_FICLONERANGE system call. ...
over 3 years ago in Storage Scale (formerly known as GPFS) 1 Not under consideration
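
For readers unfamiliar with the mechanism being requested, here is a minimal sketch of a whole-file clone on a filesystem that already supports it (e.g. Btrfs or XFS); the FICLONE value is taken from <linux/fs.h>, the file names are placeholders, and on GPFS today this call would fail with EOPNOTSUPP, which is exactly the gap the idea describes:

```python
import fcntl

# FICLONE = _IOW(0x94, 9, int) from <linux/fs.h>: clone the entire source file.
FICLONE = 0x40049409

# Ask the kernel to let clone.dat share source.dat's data blocks; subsequent
# writes to either file are then handled via copy-on-write.
with open("source.dat", "rb") as src, open("clone.dat", "wb") as dst:
    fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())
```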

Add Support for Hardware Load Balancers like the F5

We are a large bank and a large enterprise customer. Not all open-source software is supported in our environment; HAProxy is one of those not supported. But we still need to load balance the GPFS GUI. This is a requirement when hosting Scale in OCP. We need to pla...
4 months ago in Storage Scale (formerly known as GPFS) 3 Not under consideration

Subnets Config Parameter in CNSA / Remote Cluster

With the MROT capability on Scale/Scale System (ESS) for Ethernet networks, the ability to use MROT instead of network bonds in OCP environments would be beneficial. In a recent lab test using an ESS3500 + OpenShift (Fusion), MROT was used initially; however...
about 1 year ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Request to add an "all" option to put filesystems in maintenance mode

In order to shut down/start up GPFS gracefully, it is recommended to put the filesystem in maintenance mode using the "mmchfs --maintenance-mode" command before shutdown. However, this command allows only one filesystem at a time, so if the customer has many fil...
over 3 years ago in Storage Scale (formerly known as GPFS) 1 Not under consideration
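
As a stopgap for the missing "all" option, a minimal sketch that loops over filesystems one at a time, assuming the documented mmchfs syntax and a device list supplied by the administrator:

```python
import subprocess

# Placeholder device names; in practice these would come from `mmlsfs all`.
filesystems = ["fs1", "fs2", "fs3"]

# mmchfs currently accepts a single device per invocation, so iterate.
for fs in filesystems:
    subprocess.run(["mmchfs", fs, "--maintenance-mode", "yes"], check=True)
```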

Shorten CES failover/failback timeout

The I/O pending time during a CES interface failure or CES node failure is 10s. The customer's expectation is 4-5s, which would improve the usage experience.
over 1 year ago in Storage Scale (formerly known as GPFS) 1 Not under consideration

Requests and limits for StatefulSet ibm-spectrum-scale-gui in Spectrum Scale container native should be configurable

Currently the CPU request for GUI pods is set to 500m. This request is too large for small clusters, where the pods typically use ~10m. It can prevent other workloads from being scheduled and is wasteful, especially in small multicluster scenarios.
over 1 year ago in Storage Scale (formerly known as GPFS) 3 Not under consideration
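
Until a supported configurable exists, a manual patch along these lines is the kind of override being asked for; a sketch using the official Kubernetes Python client, where the namespace, container name, and 20m value are assumptions, and the Scale operator may well reconcile the change away (which is why a first-class setting is requested):

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Strategic-merge patch lowering the GUI CPU request from 500m toward
# observed usage (~10m); the container name "liberty" is an assumption.
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {"name": "liberty",
                     "resources": {"requests": {"cpu": "20m"}}}
                ]
            }
        }
    }
}

apps.patch_namespaced_stateful_set(
    name="ibm-spectrum-scale-gui",
    namespace="ibm-spectrum-scale",  # CNSA default namespace, an assumption
    body=patch,
)
```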

CNSA / CSI add support for Access Mode ROX (ReadOnlyMany)

There are use cases where data should be shared between pods or among different clusters. To protect data and prevent inconsistency, it would be beneficial to enforce a read-only mode at the OpenShift PV level. Currently, ReadOnly can only be set in the ...
12 months ago in Storage Scale (formerly known as GPFS) 1 Not under consideration
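
To make the request concrete, this is the kind of claim the idea would like the Scale CSI driver to honor; a sketch using the Kubernetes Python client, where the claim name, namespace, and storage class name are placeholders:

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# A PVC requesting ReadOnlyMany: many pods may mount it, none may write,
# with read-only enforcement at the PV level rather than per pod spec.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "shared-data-ro"},
    "spec": {
        "accessModes": ["ReadOnlyMany"],
        "storageClassName": "spectrum-scale-fileset",  # placeholder class
        "resources": {"requests": {"storage": "10Gi"}},
    },
}
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```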

mmrestripefs % complete metric is near useless

Current behavior: mmrestripefs (and mmdeldisk) displays a % complete metric while it is running. Unfortunately, this seems to track the percentage of the inodes of the whole filesystem that it is currently working on. Deficiency of this: ...
about 3 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration

Limit NFS Exports to a CES Group

The following limitation for NFS exports is in place (see A8.9 in the Q&A section for Spectrum Scale 5.1.2): If a file system that was previously exported successfully by NFS on a CES node becomes unavailable, the NFS daemon exits and the CES node becomes...
about 2 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration