IBM System Storage Ideas Portal


This portal is for opening public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post your own idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 540 ideas

Spectrum Scale Container Storage Interface Driver compatibility with RHEL 7.4

My environment is as follows: GPFS version: Spectrum_Scale_Data_Access-5.1.0.2-x86_64-Linux-install; CSI version: IBM Spectrum Scale Container Storage Interface Driver 2.1.0; Kubernetes version: 1.19.3. https://www.ibm.com/support/knowledgecenter/STXKQY_C...
almost 4 years ago in Storage Scale (formerly known as GPFS) 1 Not under consideration

NSD client IO should have a time-out and a retry tunable per filesystem (JUMP Trading LLC)

The current Spectrum Scale configuration does not give the customer the ability to set timeout or retry values for NSD client IO. Since these tuning parameters do not exist, this can result in hung IOs or unexpected IO errors when the NSD ser...
almost 8 years ago in Storage Scale (formerly known as GPFS) 4 Not under consideration
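The tunables this idea asks for amount to a bounded-retry I/O wrapper: each attempt gets a time budget, and the operation is re-dispatched a limited number of times before surfacing an error instead of hanging. A minimal sketch of those semantics in Python; the function and parameter names here are hypothetical illustrations, not actual Spectrum Scale configuration parameters:

```python
import time

class IOTimeoutError(Exception):
    """Raised when an I/O attempt does not complete within its budget."""

def io_with_retries(op, timeout_s, retries, sleep_s=0.0):
    """Run op() up to `retries + 1` times, failing fast instead of hanging.

    `timeout_s` bounds each attempt and `retries` bounds re-dispatch --
    the kind of per-filesystem tunables the idea above requests.
    """
    last_err = None
    for attempt in range(retries + 1):
        start = time.monotonic()
        try:
            result = op()
            if time.monotonic() - start > timeout_s:
                raise IOTimeoutError(f"attempt {attempt} exceeded {timeout_s}s")
            return result
        except (IOTimeoutError, OSError) as err:
            last_err = err
            time.sleep(sleep_s)
    raise IOTimeoutError(f"gave up after {retries + 1} attempts") from last_err
```

With a short timeout and bounded retries, a failed NSD server would surface as a prompt error the client can react to, rather than an indefinitely hung I/O.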

File sync tools and Spectrum Scale permissions

Copying data in Spectrum Scale, especially from external sources, can be problematic at times due to the way permissions work. This is particularly a problem when data needs to be synced through time in preparation for cutting over from a new exte...
almost 6 years ago in Storage Scale (formerly known as GPFS) 3 Not under consideration

CES/NFS/SMB Function Enhancements

1) CES FO/FB timeout: shorten the IO pending time on failure of a CES node, a GPFS node, or one FG in replication mode. The IO pending time should be less than 4-5 s. 2) AFM/DR RPO interval: the minimal RPO interval is 60 minu...
almost 2 years ago in Storage Scale (formerly known as GPFS) 1 Not under consideration

Spectrum Scale ECE - Need quota feature to support limiting the resource usage on different storage pools

In more and more projects, we are delivering GPFS file systems with multiple storage pools for tiering. And our customers keep asking for quota limits per user/group/fileset on different storage pools, because different pools have differen...
about 2 years ago in Storage Scale (formerly known as GPFS) 1 Not under consideration

Prioritization of mmrestripefs -r block rebuilds

When an FPO mmrestripefs -r kicks off, it should prioritize rebuilding blocks that have two duplicates missing over blocks that have only one duplicate missing.
about 6 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration
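The prioritization requested above is a sort by degradation: with three-way replication, a block missing two replicas is one failure away from data loss, so it should be rebuilt before blocks missing only one. A sketch of that ordering; the block/replica representation here is hypothetical, not a GPFS internal structure:

```python
def rebuild_order(blocks):
    """Order block ids most-degraded-first for rebuild.

    `blocks` maps a block id to its count of missing replicas; blocks with
    more replicas missing are closer to data loss, so they sort first.
    Fully intact blocks (zero missing) need no rebuild and are skipped.
    """
    return [bid for bid, missing in
            sorted(blocks.items(), key=lambda kv: -kv[1])
            if missing > 0]
```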

Media read error to trigger auto rewrite

When a media read error is reported by a device, a read from another replica should be followed by a rewrite to the original device, allowing the device's LBA to be remapped to a different part of the device.
about 6 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration
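The behavior requested above is the classic read-repair pattern: on a media error, satisfy the read from a surviving replica, then rewrite the block to the failing device so its firmware can remap the bad LBA. A sketch under hypothetical device objects (nothing here is a GPFS API):

```python
def read_with_repair(devices, block_id):
    """Read block_id from the first healthy replica, repairing failed ones.

    Each device is assumed to expose read(block_id) and write(block_id,
    data). A read that raises OSError marks that replica for rewrite once
    good data is found from another replica (read-repair).
    """
    failed = []
    data = None
    for dev in devices:
        try:
            data = dev.read(block_id)
            break
        except OSError:
            failed.append(dev)
    if data is None:
        raise OSError(f"all replicas failed for block {block_id}")
    for dev in failed:
        dev.write(block_id, data)  # rewrite lets the device remap the LBA
    return data
```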

Option to avoid down drives

Option: when a device is down, allow writes to blocks on the device to go to a different device rather than creating a missed update. This protects the file system from large numbers of missed-update blocks and retains the full replica count.
about 6 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration

Allow for separation of client services from storage services

Allow for the optional separation of the gpfs mmfsd process into a client process mmfscd independent of the gpfs storage process mmfssd. When something goes wrong with the I/O subsystem on one node, the applications on that node should still be able to...
about 6 years ago in Storage Scale (formerly known as GPFS) 4 Not under consideration

Make mmclone permissions work like cp

Currently, a file clone created with mmclone always has its group owner set to the primary gid of the creating process, without regard to the setgid bit of the containing directory, and without regard to the group owner of the original file. Other...
about 2 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration
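The cp-like behavior this idea asks for follows the standard POSIX group-inheritance rule: when the containing directory has the setgid bit set, a newly created file inherits the directory's group; otherwise it gets the creating process's primary group. A small sketch of that decision as a pure simulation, not of how mmclone currently behaves:

```python
import stat

def clone_gid(dir_mode, dir_gid, process_gid):
    """Return the gid a newly created file should carry under POSIX rules.

    A setgid directory (mode bit S_ISGID) propagates its own group to new
    files; otherwise the creator's primary gid applies -- the behavior the
    idea above asks mmclone to adopt instead of always using the creator's
    primary gid.
    """
    if dir_mode & stat.S_ISGID:
        return dir_gid
    return process_gid
```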