IBM System Storage Ideas Portal


This portal is to open public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 125 ideas

License Data via RestAPI

Scale will soon be licensed only by TB and no longer by sockets. This means we need to update how we keep control of our license status, and as part of that we also want to move away from client/agent-based license tools to a more web-based license tool...
almost 6 years ago in Storage Scale (formerly known as GPFS) 2 Delivered

Spectrum Scale Proxy Support for PCI compliance

The client uses Scale as object storage for IBM FileNet P8 to store auto insurance claim objects. The project is to archive objects using TCT and iCOS
almost 6 years ago in Storage Scale (formerly known as GPFS) 1 Delivered

Spectrum Scale GPFS support for Splunk

GPFS is not a file system supported by the vendor Splunk
almost 6 years ago in Storage Scale (formerly known as GPFS) 3 Delivered

user/role based permission control in Spectrum Scale

So far, in Spectrum Scale, all user management is done by the OS, e.g. via /etc/passwd, AD, or LDAP. However, for administration there is only one role: the root user or a common user with sudo permission. Whether it is the root user or the...
about 6 years ago in Storage Scale (formerly known as GPFS) 2 Delivered

support hole punching

We're seeing requests from customers running Commvault for hole-punching support to free up space in already allocated files. Please support deallocating space in files through the FALLOC_FL_PUNCH_HOLE and FALLOC_FL_KEEP_SIZE flags o...
over 6 years ago in Storage Scale (formerly known as GPFS) 4 Delivered

Improve performance in mmrestripe

The mmrestripe operation encounters slow performance on ESS, tracked under PMR 85092-227-000. The action plan is to increase the pitWorkerThreadsPerNode value manually. This RFE is to improve the performance of restripe. The development team suggests t...
over 6 years ago in Storage Scale (formerly known as GPFS) 1 Delivered
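pitWorkerThreadsPerNode is the documented Scale tunable for parallel inode traversal work that the action plan refers to. The manual workaround looks roughly like the following sketch; the value 16 and the file system name fs1 are illustrative, not taken from the PMR:

```shell
# Inspect the current parallel-inode-traversal worker setting (0 = auto).
mmlsconfig pitWorkerThreadsPerNode

# Raise it manually before a restripe; -i applies the change immediately
# and permanently. The value 16 is illustrative.
mmchconfig pitWorkerThreadsPerNode=16 -i

# Re-run the restripe, e.g. rebalancing file system fs1 across its disks.
mmrestripefs fs1 -b
```

The RFE's point is that administrators should not need this manual tuning at all; the delivered improvement was to make restripe performance acceptable out of the box.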

Allow later kernel gpfs.gplbin to be installed without taking down GPFS

Kernel upgrades are done by installing the new kernel while the current system is running. When the server is rebooted, it picks up the new kernel. The new level of gpfs.gplbin that matches that new kernel cannot be added while Scale is up, even ...
almost 7 years ago in Storage Scale (formerly known as GPFS) 2 Delivered

REST API Endpoints for Usage Feedback of Filesystems and Filesets

We are currently developing a self-service portal for Spectrum Scale, based mainly on the REST API. We would like to feed back filesystem and fileset usage (used capacity and used inodes), as the GUI already does. However, currently the REST API...
almost 7 years ago in Storage Scale (formerly known as GPFS) 2 Delivered
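A portal like the one described would consume the Scale management REST API. The sketch below shows the parsing side of such a feed-back loop; the endpoint path in the comment, the field names (filesetName, usage, usedBytes, usedInodes), and the sample payload are all assumptions modeled on the GUI's usage view, not confirmed API output.

```python
def fileset_usage(response: dict) -> list:
    """Extract (fileset name, used bytes, used inodes) triples from a
    management-API fileset listing.  Field names here are assumptions."""
    out = []
    for fs in response.get("filesets", []):
        usage = fs.get("usage", {})
        out.append((fs["filesetName"],
                    usage.get("usedBytes", 0),
                    usage.get("usedInodes", 0)))
    return out

# Illustrative payload shaped like a v2 management-API reply.  A real call
# would be something like (hypothetical host and credentials):
#   requests.get("https://gui-node/scalemgmt/v2/filesystems/fs1/filesets",
#                auth=("admin", "..."), verify=False)
sample = {"filesets": [
    {"filesetName": "projects",
     "usage": {"usedBytes": 104857600, "usedInodes": 2048}},
]}
print(fileset_usage(sample))
```

Since this idea is marked Delivered, later Scale releases expose such usage fields through the REST API; the exact names should be checked against the API reference for the installed release.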

Reduce time to remove a failed disk in GPFS

Please look at reducing the time it takes to remove a disk from a GPFS file system in the particular scenario where the disk has already failed (and so is suspended or down) and the data on it has been evacuated onto other nodes as restripeOn...
about 10 years ago in Storage Scale (formerly known as GPFS) 4 Delivered

Increase Quorum Node Maximum limit from 8 to 9

Required for HSBC: Scale 5.1.x on Linux on x86, with a stretch cluster across 3 physical sites. The client requires the Scale stretch cluster to tolerate a single site failure AND a single quorum node f...
about 2 years ago in Storage Scale (formerly known as GPFS) 2 Delivered