IBM System Storage Ideas Portal


This portal is to open public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post your own idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 524 of 5454

AD with autoID mode to support NFSv4 Kerberos authentication

Need cross-platform access (NFS and SMB) in an AD domain where the AD-LDS is not RFC2307 compliant. We need to be able to use AD with autoID mode to support NFSv4 Kerberos authentication. Using the Kerberos ticket for IDMAP instead of UID is a use...
almost 9 years ago in Storage Scale (formerly known as GPFS) 4 Not under consideration

mmaddnode error message enhancement

mmaddnode failed with a "node already belongs to GPFS cluster" error. But the problem ended up being that a reverse DNS lookup of the IP address of the node being added returned the name of a node that already belonged to the cluster. Since ...
almost 9 years ago in Storage Scale (formerly known as GPFS) 3 Not under consideration
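For context, the confusing failure arises when the node being added forward-resolves to an IP whose PTR record still points at an existing cluster member. A minimal sketch of the forward/reverse consistency check a clearer error message could perform; the hostnames, IPs, and lookup tables below are hypothetical stand-ins for real DNS responses, not GPFS internals:

```python
# Hypothetical DNS state: the new node's IP has a stale PTR record
# that still names an existing cluster member.
FORWARD = {"node5.example.com": "10.0.0.5"}   # hostname -> A record
REVERSE = {"10.0.0.5": "node1.example.com"}   # IP -> PTR record

def check_dns(hostname: str) -> str:
    """Compare forward and reverse lookups and report a mismatch."""
    ip = FORWARD[hostname]
    ptr = REVERSE[ip]
    if ptr != hostname:
        return (f"warning: {hostname} resolves to {ip}, but {ip} "
                f"reverse-resolves to {ptr} (possible stale PTR record)")
    return "ok"

print(check_dns("node5.example.com"))
```

An error message along these lines would point administrators at the DNS misconfiguration instead of the misleading "node already belongs to GPFS cluster" text.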

Add the ability for protocol replication to support many:1 replication for shared DR cluster

The request is for support of many:1 replication for protocol cluster replication rather than only the present 1:1 model.
almost 9 years ago in Storage Scale (formerly known as GPFS) 3 Not under consideration

Remove requirement to manually remove / re-add file authentication with mmcesdr

When executing failover, restore, or failback, the instructions in Section 16 of the document “IBM Spectrum Scale 4.2: Advanced Administration Guide” clearly state that one must remove and then re-create file authentication on the relevant cluster...
almost 9 years ago in Storage Scale (formerly known as GPFS) 3 Not under consideration

Support for NFSv4 as AFM transport protocol

AFM currently uses NFSv3. NFSv3 is insecure, so mount point access could be gained by spoofing the IP address and a UID. We would like to request the option to alternatively use NFSv4, primarily to get AFM through internal audits. This is a medi...
almost 9 years ago in Storage Scale (formerly known as GPFS) 4 Not under consideration

inode number limit

Need to restrict inode numbers to values lower than 2^32. If an inode number is greater than that, 32-bit applications fail with file-not-found errors.
almost 5 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration
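To illustrate the failure mode this request describes: a 32-bit ino_t keeps only the low 32 bits of the inode number, so a large inode either triggers an overflow error in stat() or silently collides with a different inode, which applications may surface as the file being unopenable. A minimal sketch modeling the truncation; the inode values are hypothetical:

```python
# A 32-bit ino_t cannot represent inode numbers >= 2^32; only the
# low-order 32 bits survive, so two distinct inodes can collide.
INO32_MAX = 2**32 - 1

def truncate_to_32bit(inode: int) -> int:
    """Model the low-order truncation a 32-bit ino_t would perform."""
    return inode & 0xFFFFFFFF

big_inode = 2**32 + 42            # hypothetical inode beyond 32-bit range
print(truncate_to_32bit(big_inode))  # 42 -- collides with real inode 42
```

A file-system-level cap on allocated inode numbers, as requested, would keep every inode representable in 32 bits and avoid this class of failure.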

Option to mount the file system with its root at a subdirectory

Control over where GPFS mounts the 'root' (or perhaps fakeroot) of a GPFS file system in order to preserve file paths across different topologies.
over 10 years ago in Storage Scale (formerly known as GPFS) 5 Not under consideration

mmfsck -P poolname

Checking a file system with multiple pools currently results in a check of the entire metadata for the file system each time a pool is checked. This is not optimal; it should be possible to check individual pools instead.
over 10 years ago in Storage Scale (formerly known as GPFS) 5 Not under consideration

Pool placement based on path

Ability to place data into a pool based on the directory location of the data.
over 10 years ago in Storage Scale (formerly known as GPFS) 5 Not under consideration

Ability to flush / clear the cache (not to disk)

It should be possible to clear the cache on a node without unmounting and remounting the file system.
over 10 years ago in Storage Scale (formerly known as GPFS) 5 Not under consideration