IBM System Storage Ideas Portal


This portal is to open public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 26 ideas

Backpressure for virtual disk I/O to prevent throughput and IOPS overload and avoid "Forcing kernel panic to clear hung I/O".

When a cluster reboots, the recovery process can generate a lot of I/O (high throughput and IOPS) which might hit the limits of the virtual disk. And unlike physical disks, when virtual disks get overwhelmed, we can see I/O hang for up to second...
about 2 months ago in Storage Scale (formerly known as GPFS) 1 Submitted
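The idea above asks for backpressure that caps throughput and IOPS before the virtual disk is overwhelmed. A minimal sketch of that kind of token-bucket throttling is shown below; the class name, the limit values, and the idea of applying it at the virtual-disk submission path are illustrative assumptions, not an existing Storage Scale interface.

```python
import time

class IoThrottle:
    """Token-bucket limiter for both IOPS and bytes per second.

    Illustrative sketch only: the limits and the point of application
    are assumptions, not an existing Storage Scale feature.
    """

    def __init__(self, max_iops: float, max_bytes_per_sec: float):
        self.max_iops = max_iops
        self.max_bps = max_bytes_per_sec
        self.io_tokens = max_iops            # start with full buckets
        self.byte_tokens = max_bytes_per_sec
        self.last = time.monotonic()

    def _refill(self) -> None:
        # Add tokens in proportion to elapsed time, capped at the maximum.
        now = time.monotonic()
        elapsed = now - self.last
        self.last = now
        self.io_tokens = min(self.max_iops, self.io_tokens + elapsed * self.max_iops)
        self.byte_tokens = min(self.max_bps, self.byte_tokens + elapsed * self.max_bps)

    def submit(self, size_bytes: int) -> None:
        """Block (apply backpressure) until one more I/O of this size fits under both limits."""
        while True:
            self._refill()
            if self.io_tokens >= 1 and self.byte_tokens >= size_bytes:
                self.io_tokens -= 1
                self.byte_tokens -= size_bytes
                return
            time.sleep(0.001)  # wait rather than overload the virtual disk
```

For example, a recovery worker could create `IoThrottle(max_iops=5000, max_bytes_per_sec=250_000_000)` and call `submit(len(block))` before each write, so recovery slows down instead of triggering hung-I/O handling.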

We need a way to avoid this NFS system I/O hang when any protocol node is restarted and Ganesha enters its grace period. I suggest enhancing the way Ganesha works so that clients connected to the other protocol nodes are not affected during this time, and only the clients connected to the rebooted node wait out the grace period until the VIP moves to another available node. We have a big cluster with 6 protocol nodes, and when one of them goes down, all banking sectors are affected for the length of the grace period. We hope a solution can be found for this long-standing design problem.

Not the whole system goes down: only the clients connected to the affected protocol node are disconnected, while the others keep working fine. All large projects that use Spectrum Scale need this to avoid a SPOF when one of the protocol ...
3 months ago in Storage Scale (formerly known as GPFS) 0 Submitted
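The idea above proposes that only the clients behind the failed protocol node's VIPs should have to wait out the Ganesha grace period while those VIPs fail over. The Python sketch below only models that client-to-VIP-to-node mapping; the node names, addresses, and failover choice are invented for illustration and do not reflect how CES actually assigns addresses.

```python
# Minimal model of the requested behaviour: each client is pinned to one
# protocol node's VIP, and when a node fails, only the clients behind
# that node's VIPs wait for the grace period while the VIPs move.
# Node names, addresses, and the failover choice are hypothetical.

vip_to_node = {
    "10.0.0.11": "prot1",
    "10.0.0.12": "prot2",
    "10.0.0.13": "prot3",
}
client_vip = {
    "clientA": "10.0.0.11",
    "clientB": "10.0.0.12",
    "clientC": "10.0.0.13",
}

def fail_node(failed: str) -> set:
    """Reassign the failed node's VIPs and return the clients that must
    wait for the grace period; all other clients keep working."""
    survivors = sorted(n for n in set(vip_to_node.values()) if n != failed)
    affected = set()
    for vip, node in list(vip_to_node.items()):
        if node == failed:
            vip_to_node[vip] = survivors[0]  # simplistic failover choice
            affected |= {c for c, v in client_vip.items() if v == vip}
    return affected

print(fail_node("prot2"))  # only clientB waits out the grace window
```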

Support user- or node-level isolation mechanisms, such as sub-directory mounts and the ability to collect and examine usage statistics at the job level.

We have a large RFP where the proposed file system must have user- or node-level isolation mechanisms, such as sub-directory mounts and the ability to collect and examine usage statistics at the job level. We have strong competition from DDN, so ...
over 1 year ago in Storage Scale (formerly known as GPFS) 0 Submitted

Spectrum Scale - Stability & Observability issues

We had numerous issues with DataStage, while there were no issues I could see from any CNSA Kubernetes objects, CSI drivers, cluster, remote cluster, filesystem, operator logs, or events in any IBM namespace. I could see that some DataStage pods were...
3 months ago in Storage Scale (formerly known as GPFS) 0 Submitted

Improve the performance of small I/O for large files when AFM is enabled

I have two Storage Scale clusters with AFM enabled: cluster A as primary for production and cluster B as secondary for backup. My production cluster's performance degrades seriously when using small I/O for large files (in most cases, there is up to a thirtyfold d...
about 1 year ago in Storage Scale (formerly known as GPFS) 0 Submitted

Ability to alter the audit log retention period

Hello! We recently came across the need to reduce the audit log retention period of a Storage Scale system to save some space, and we realized that the only way to do that is to disable and re-enable the audit log feature altogether. As an end user...
about 1 year ago in Storage Scale (formerly known as GPFS) 0 Submitted