IBM System Storage Ideas Portal


This portal is to open public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 535 ideas

Allow Spectrum Scale Management API to handle POSIX ACLs

The Spectrum Scale Management API currently cannot manage POSIX ACLs, only NFSv4 ACLs. We'd like to be able to manage both kinds of ACLs in our system.
over 5 years ago in Storage Scale (formerly known as GPFS) 1 Not under consideration
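
For context, the sketch below shows how such a request might look if the management API's existing ACL endpoint were extended to accept POSIX entries. The host, credentials, filesystem name, and the "posix" type field are all illustrative assumptions, not a documented API.

    import requests

    # Hypothetical sketch: the acl endpoint serves NFSv4 ACLs today; the
    # type="posix" payload below is the *requested* extension, not a
    # documented API. Host, credentials, and paths are made up.
    GUI_HOST = "https://scale-gui.example.com:443"
    session = requests.Session()
    session.auth = ("admin", "secret")
    session.verify = False  # lab sketch only; verify certificates in production

    acl = {
        "type": "posix",  # requested extension; today only NFSv4 is accepted
        "entries": [
            {"type": "user", "who": "alice", "permissions": "rw-"},
            {"type": "group", "who": "staff", "permissions": "r--"},
        ],
    }

    # PUT the ACL onto a path inside filesystem gpfs01 (path URL-encoded)
    resp = session.put(
        GUI_HOST + "/scalemgmt/v2/filesystems/gpfs01/acl/data%2Fproject",
        json=acl,
    )
    resp.raise_for_status()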

CES Group failover, within the locality, when using a stretched cluster

A client runs an active-active Spectrum Scale stretched cluster with protocol nodes, six in total. Today, when one node fails, its CES IP can fail over to any node, either in the same site or at the remote site. This...
almost 2 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Spectrum Scale VMWare ESX V7 Support

Hello, we plan to migrate our SAS Grid cluster to a VMware-based platform. The platform is installed with ESX version 7.0 U2, but Scale currently supports only v6.x. This gap is blocking our migration plans. With the introduction of version 7 E...
over 3 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Auto-cleanup /tmp/mmfs

Over time /tmp/mmfs fills up with diagnostic data from gpfs.snap. Provide a facility to clean up those files automatically based on age.
almost 5 years ago in Storage Scale (formerly known as GPFS) 4 Not under consideration
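
Until such a facility exists, here is a minimal cron-able sketch of the requested behavior, assuming age-based deletion is acceptable; the 14-day threshold is an arbitrary illustration.

    #!/usr/bin/env python3
    # Sketch of the requested auto-cleanup: delete gpfs.snap debris under
    # /tmp/mmfs older than AGE_DAYS. Path and threshold are assumptions.
    import os
    import time

    SNAP_DIR = "/tmp/mmfs"
    AGE_DAYS = 14
    cutoff = time.time() - AGE_DAYS * 86400

    for root, dirs, files in os.walk(SNAP_DIR, topdown=False):
        for name in files:
            path = os.path.join(root, name)
            try:
                if os.path.getmtime(path) < cutoff:
                    os.remove(path)
            except OSError:
                pass  # file vanished or is in use; skip it
        # remove directories left empty by the deletions above
        for name in dirs:
            try:
                os.rmdir(os.path.join(root, name))
            except OSError:
                pass  # not empty; keep it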

Distribution via Package Repository

Today Spectrum Scale ships as self-extracting images. These are large, usually only 25% of the image content is used, and a special install procedure is needed. These images are quickly superseded by new PTF versions, and sometimes certain versions are wi...
over 4 years ago in Storage Scale (formerly known as GPFS) 3 Not under consideration
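
To make the request concrete, here is what a repository definition could look like on a RHEL-family system. No such official repository exists today; the URL below is invented purely for illustration.

    # /etc/yum.repos.d/storage-scale.repo -- hypothetical, for illustration only
    [storage-scale]
    name=IBM Storage Scale (hypothetical repository)
    baseurl=https://example.com/storage-scale/rhel/$releasever/$basearch/
    enabled=1
    gpgcheck=1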

AFM immutable file support

We need to be able to make files immutable in a home fileset for AFM in order to protect data from ransomware. Data will be created at the cache site (single-writer) and transferred to the home site using AFM; data are to be retained at the home site...
about 5 years ago in Storage Scale (formerly known as GPFS) 1 Planned for future release

Shorten the pending IO time to 5-30s during node expel

When a node in an ECE cluster is expelled from the cluster, the IO pending time is as long as 1-3 minutes. Some applications fail due to the long pending time, especially OLTP or AI training jobs. Competitors using traditional dual-controller mode c...
about 1 year ago in Storage Scale (formerly known as GPFS) 2 Future consideration

Scale GUI - Need to improve pmcollector service with large amount of performance data

Problem: the "pmcollector" service failed to start when there was a large amount of performance data (about 10 GB for 6 months). Performance data cannot be shown in the GUI, and restarting the GUI did not help. ZIMon cannot process large amounts of per...
about 3 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Samba "hide unreadable = root_only" to boost performance

In most cases we need Samba's "hide unreadable" only on the root directory of an SMB export. A switch that enables "hide unreadable" only for the export directory's content, and not for the whole subtree, should significantly improve performance.
about 1 year ago in Storage Scale (formerly known as GPFS) 1 Not under consideration
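
For context, a sketch of what the switch might look like in smb.conf. "hide unreadable = yes" is the existing Samba parameter; the "root_only" value is the behavior this idea proposes, not an existing option.

    [export]
        path = /gpfs/fs1/export
        # existing behavior: readability is evaluated for every file in the subtree
        hide unreadable = yes
        # proposed (hypothetical) value: only filter the export's top-level directory
        ; hide unreadable = root_only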

CNSA / CSI add support for Access Mode ROX (ReadOnlyMany)

There are use cases where data should be shared between pods or among different clusters. To protect data and prevent inconsistency, it would be beneficial to enforce a read-only mode at the OpenShift PV level. Currently, ReadOnly can only be set in the ...
over 1 year ago in Storage Scale (formerly known as GPFS) 1 Not under consideration
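
To illustrate, a sketch of the PersistentVolumeClaim this idea would enable, assuming the Spectrum Scale CSI provisioner accepted ReadOnlyMany; the storage class name and size are made up.

    # Hypothetical PVC: ReadOnlyMany is the access mode this idea requests;
    # storage class name and size are illustrative.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-data-ro
    spec:
      accessModes:
        - ReadOnlyMany
      resources:
        requests:
          storage: 10Gi
      storageClassName: ibm-spectrum-scale-csi-fileset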