IBM System Storage Ideas Portal


This portal is for opening public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 487 of 5231

Add quota logging capability to keep overall quota accounting consistent

For performance, most quota information is nowadays stored and updated in memory and synced back to disk asynchronously. However, there is no recovery mechanism for GPFS quota in abnormal situations. These can mak...
about 2 years ago in Storage Scale (formerly known as GPFS) 3 Future consideration
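
The truncated request above amounts to asking for journaled (write-ahead logged) quota accounting, so that in-memory quota deltas can be replayed after an abnormal shutdown. The sketch below only illustrates that general technique in Python; QuotaLog, the journal path, and the record format are hypothetical and are not GPFS interfaces.

    # Illustrative write-ahead logging of in-memory quota deltas.
    # QuotaLog, the journal path, and the record layout are hypothetical
    # stand-ins for the requested capability, not GPFS code.
    import json, os

    class QuotaLog:
        def __init__(self, path="quota.journal"):
            self.path = path
            self.usage = {}      # in-memory accounting: user -> bytes used
            self._replay()       # recover whatever a crash left behind

        def _replay(self):
            if not os.path.exists(self.path):
                return
            with open(self.path) as f:
                for line in f:
                    rec = json.loads(line)
                    self.usage[rec["user"]] = self.usage.get(rec["user"], 0) + rec["delta"]

        def apply(self, user, delta):
            # Persist the delta before updating the in-memory copy, so the
            # overall accounting stays recoverable in abnormal situations.
            with open(self.path, "a") as f:
                f.write(json.dumps({"user": user, "delta": delta}) + "\n")
                f.flush()
                os.fsync(f.fileno())
            self.usage[user] = self.usage.get(user, 0) + delta

        def checkpoint(self):
            # After the accounting has been synced back to its on-disk home,
            # the journal can be truncated.
            open(self.path, "w").close()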

Increase the maximum number of independent filesets per filesystem

Currently we are hovering just under 1000 independent filesets within the user file system. Even taking into account filesets that are removed when users leave the site, it is very possible that we will hit the limit within a year.
almost 6 years ago in Storage Scale (formerly known as GPFS) 2 Planned for future release

Scale S3/Nooba tiering to tape

With the new S3/Nooba capability replacing S3/Swift, we would use the Scale ILM capability to move buckets/filesets from disk to tape thanks to the Storage Archive integration.
3 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Ability to change NSD names

We would like the ability to change the name of existing NSDs. This change could be done while the entire GPFS NSD server cluster is down, similar to when you change the NSD ServerList.
over 3 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration

Process and automation to simplify the patching and software update of large node count clusters

Managing the currency of Spectrum/Storage Scale systems is difficult when you have over 1000 nodes within the cluster in question. The request is for a process and automation for both the local repository and cluster endpoints where upon r...
2 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Improvement to Distributed Lock Manager (DLM) for shared files

Request for an improved DLM mechanism when leveraging a scale-out protocol node approach, to better handle lock-broadcasting overhead for "shared to all" files and folders.
almost 8 years ago in Storage Scale (formerly known as GPFS) 3 Not under consideration

Performance monitoring for AIX LPAR node by GPFS GUI with ZImon

Performance monitoring for AIX LPAR node by GPFS GUI with ZImon
over 3 years ago in Storage Scale (formerly known as GPFS) 1 Not under consideration

IBM Spectrum Scale CNSA & OpenShift w/two network adapters

The client has an existing, large Spectrum Scale infrastructure (not containerized) with hundreds of clients connected from different environments. Spectrum Scale is using a dedicated, non-routed VLAN for the daemon-interface network, following IBM best practice. Ther...
about 2 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration

Spectrum Scale - automatic audit log files rotation

The File Audit Logging functionality from Spectrum Scale captures file operations on a file system and logs them to a retention-enabled fileset. Depending on the number of operations on the Spectrum Scale file system, it can generate lots of audit ...
about 4 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration
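
As an illustration only of the kind of age-based rotation being requested (the idea is for Scale to do this natively), the Python sketch below removes audit files older than a cutoff; the fileset path and the 90-day window are hypothetical, and no Scale command or API is used.

    # Age-based cleanup of audit log files, as a hypothetical illustration.
    # AUDIT_DIR and MAX_AGE_DAYS are made-up examples, not Scale defaults.
    import os, time

    AUDIT_DIR = "/gpfs/fs1/.audit_log"
    MAX_AGE_DAYS = 90

    cutoff = time.time() - MAX_AGE_DAYS * 86400
    for name in os.listdir(AUDIT_DIR):
        path = os.path.join(AUDIT_DIR, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)      # in the request, Scale itself would handle this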

HPO to support InfiniBand

In order to implement a high-performance and reliable infrastructure, HPO support for deployment on InfiniBand infrastructure is required.
about 1 year ago in Storage Scale (formerly known as GPFS) 0 Future consideration