IBM System Storage Ideas Portal


This portal is for opening public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find additional information about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)


Improve system memory hard limit for GPFS program startup to adapt to AI system

We hit the following issue on Spectrum Scale 5.1.6.1 when GPFS is starting up:
2023-12-13_21:22:28.648+0800: [I] Verifying minimum system memory configurations.
2023-12-13_21:22:28.648+0800: [I] The system memory configuration is 2063930 MiB
2023-12...
5 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration
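
For illustration only, here is a minimal sketch of the kind of startup verification the log above describes: reading total system memory from /proc/meminfo and checking it against configurable bounds. The threshold values and function names are assumptions made for this sketch, not Storage Scale's actual implementation.

```python
# Hypothetical illustration of a startup memory check similar to the one logged above.
# The bounds and names are assumptions, not Storage Scale's actual logic.

def total_memory_mib(meminfo_path="/proc/meminfo"):
    """Return total system memory in MiB, read from /proc/meminfo (Linux)."""
    with open(meminfo_path) as f:
        for line in f:
            if line.startswith("MemTotal:"):
                kib = int(line.split()[1])   # value is reported in kiB
                return kib // 1024
    raise RuntimeError("MemTotal not found")

def verify_memory(min_mib=4096, max_mib=1_048_576):
    """Refuse to start if memory falls outside the supported window."""
    mem = total_memory_mib()
    print(f"[I] The system memory configuration is {mem} MiB")
    if mem < min_mib:
        raise SystemExit(f"[E] Less than the required minimum of {min_mib} MiB")
    if mem > max_mib:
        # This is the kind of hard upper limit the idea asks to relax for
        # AI systems with multi-TiB memory configurations.
        raise SystemExit(f"[E] Exceeds the supported maximum of {max_mib} MiB")

if __name__ == "__main__":
    verify_memory()
```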

Graceful failure of AFM recall timeouts when using AFM-NSD + Support for offline file awareness with AFM (related issues)

Any client using AFM-NSD Caching with a home storage repository that uses DMAPI-aware Hierarchical Storage Management is at risk. AFM-NSD has no ability to gracefully respond when an AFM Cache request to recall files from home times out. AFM-NFS u...
5 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration
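
As a rough illustration of the "graceful failure" behaviour being requested (and not AFM's actual code path), the sketch below bounds a hypothetical recall call with a timeout and surfaces an explicit error to the caller instead of blocking indefinitely. All names and timings here are invented for the example.

```python
# Hypothetical sketch of graceful timeout handling for a cache recall; this is an
# illustration of the requested behaviour, not AFM's actual implementation.
import concurrent.futures
import time

def recall_from_home(path):
    """Stand-in for recalling an offline (HSM-migrated) file from the home site."""
    time.sleep(5)            # simulate a slow tape recall at home
    return path

def recall_with_timeout(path, timeout_s=2):
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(recall_from_home, path)
    try:
        return future.result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        # Surface a clear error to the caller instead of hanging the cache.
        raise IOError(f"recall of {path} timed out after {timeout_s}s; file left offline")
    finally:
        pool.shutdown(wait=False)

if __name__ == "__main__":
    try:
        recall_with_timeout("/cache/fileset/bigfile.dat")
    except IOError as err:
        print(f"[W] {err}")
```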

Support for Rocky Linux

Client is requesting Rocky Linux support for their client nodes.
6 months ago in Storage Scale (formerly known as GPFS) 2 Not under consideration

Scale GUI Dashboard Nodes by Latency

Add the ability to "cherry pick" the top nodes by latency and display them in the Scale GUI. This could be on the statistics widget or a new widget. Add the granularity of client nodes, NSD nodes, nodes by nodeclass XYZ, or custom-picked nodes. Additio...
6 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Scale GUI Statistics Save, Generate report, and Forward Selections

Provide the ability in the Scale GUI Statistics monitoring page to save custom settings for easy re-use, and to forward these selections to the Dashboard page for use in a monitoring widget. There are pre-set statistics monitoring selection...
6 months ago in Storage Scale (formerly known as GPFS) 1 Not under consideration

Storage Scale - Safeguarded snapshots - allow expiration - let the client set its own expiration parameter in the GUI snapshot rule

Setting "Allow Expiration" for snapshot rules in the GUI keeps the snapshot only until the next execution, not for the whole retention period, and it does not allow the client to set its own retention. Based on the IBM Documentation "https://www.ib...
6 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Storage Scale GUI command line does not show when Expiration (Safeguarded snapshot) is enabled

Client enabled the option "Allow Expiration" in the GUI. In the GUI panel, we see the Expiration column changed from False to True. However, when checking with the /usr/lpp/mmfs/gui/cli/lssnaprule command, it does not have the Expiration column, so i...
6 months ago in Storage Scale (formerly known as GPFS) 0 Planned for future release

Add performance monitoring for fileset and storage pool per client node

When a storage pool or fileset is close to filling up, performance monitoring should make it possible to identify which client node is exerting the most write pressure, so that the relevant business unit can be notified to make adjustments.
6 months ago in Storage Scale (formerly known as GPFS) 1 Future consideration
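
To make the request concrete, here is a small sketch of what "write pressure per client node and fileset" could look like once such metrics exist. The sample records are invented for the example; Storage Scale does not currently expose this breakdown, which is exactly what the idea asks for.

```python
# Hypothetical illustration: aggregate write bytes per (fileset, client node) and
# report the heaviest writers. The sample records below are made up.
from collections import defaultdict

samples = [
    # (fileset, client_node, bytes_written)
    ("projA", "node01", 120 * 2**30),
    ("projA", "node02", 15 * 2**30),
    ("projB", "node03", 80 * 2**30),
    ("projA", "node01", 60 * 2**30),
]

def top_writers(records, fileset, n=3):
    totals = defaultdict(int)
    for fs, node, nbytes in records:
        if fs == fileset:
            totals[node] += nbytes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

for node, nbytes in top_writers(samples, "projA"):
    print(f"{node}: {nbytes / 2**30:.1f} GiB written to fileset projA")
```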

Auto expansion of inode to trigger once usage reaches around 95%

Today, Scale alerts and inode auto expansion are triggered when inode usage is very close to 100%, which typically means the filesystem in use is very active and busy, and that is not optimal for performance. Would like to see the auto expansion of ...
6 months ago in Storage Scale (formerly known as GPFS) 1 Functionality already exists
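
A minimal sketch of the earlier trigger the idea asks for, using the standard statvfs inode counters to compute usage and flag a path once it crosses 95%. The threshold and the alerting are assumptions for this sketch, not Scale's built-in behaviour.

```python
# Hypothetical sketch: compute inode usage for a path and flag it once usage
# crosses a configurable threshold (95% here), well before it reaches 100%.
import os

def inode_usage_percent(path):
    st = os.statvfs(path)
    used = st.f_files - st.f_ffree          # allocated inodes
    return 100.0 * used / st.f_files if st.f_files else 0.0

def check(path, threshold=95.0):
    usage = inode_usage_percent(path)
    if usage >= threshold:
        print(f"[W] {path}: inode usage {usage:.1f}% >= {threshold}% - expand inode space now")
    else:
        print(f"[I] {path}: inode usage {usage:.1f}%")

if __name__ == "__main__":
    check("/")   # replace with a fileset junction path, e.g. a GPFS fileset mount
```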

Support BSD groups semantics

Customer is migrating from Isilon, which supports configuring BSD group semantics, meaning that new files inherit the group id of the parent directory. Some file systems on Linux support the "grpid" or "bsdgroups" mount option to enable the same seman...
6 months ago in Storage Scale (formerly known as GPFS) 1 Future consideration
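
For reference, a short sketch of what the requested semantics mean in practice: with BSD/grpid behaviour (or a setgid parent directory on Linux), a new file takes the group of the parent directory rather than the creating process's primary group. The directory path used here is only an example.

```python
# Illustration of BSD group semantics: a new file's group should match the parent
# directory's group. On Linux this is what the "grpid"/"bsdgroups" mount option
# (or a setgid bit on the directory) provides. Paths are examples only.
import grp
import os

def show_group_inheritance(directory):
    parent_gid = os.stat(directory).st_gid
    test_file = os.path.join(directory, "new_file.tmp")
    with open(test_file, "w") as f:
        f.write("test")
    file_gid = os.stat(test_file).st_gid
    os.unlink(test_file)
    print(f"parent dir group: {grp.getgrgid(parent_gid).gr_name}")
    print(f"new file group:   {grp.getgrgid(file_gid).gr_name}")
    print("BSD semantics in effect" if file_gid == parent_gid
          else "SysV semantics: file took the creating process's group")

if __name__ == "__main__":
    show_group_inheritance("/tmp")
```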