IBM System Storage Ideas Portal


This portal is for opening public enhancement requests against IBM System Storage products. To view all of the ideas you have submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 491 ideas

API enhancement for File System Size and Utilization

In the current version (5.1.8), the API does not provide a usable file system endpoint that reports the current file system size (data/system) or the file system utilization (size / % used).
9 months ago in Storage Scale (formerly known as GPFS) 2 Future consideration
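
Until such an endpoint exists, one stopgap is to parse the machine-readable output of the mmdf command. The minimal Python sketch below assumes `mmdf -Y` support and takes its section and field names ('fsTotal', 'fsSize', 'freeBlocks') from the HEADER rows that mmdf itself emits; treat those names, the placeholder file system name "gpfs0", and the reporting units as assumptions to verify on your release.

```python
#!/usr/bin/env python3
"""Stopgap sketch: report file system size and utilization by parsing
`mmdf <fs> -Y` (colon-delimited) output. Section/field names such as
'fsTotal', 'fsSize', and 'freeBlocks' are assumptions taken from the
HEADER rows mmdf emits; verify them against your Scale release."""
import subprocess
import sys


def mmdf_sections(filesystem):
    """Return {section: [row_dict, ...]} parsed from `mmdf -Y` output."""
    out = subprocess.run(["mmdf", filesystem, "-Y"],
                         check=True, capture_output=True, text=True).stdout
    headers, sections = {}, {}
    for line in out.splitlines():
        parts = line.strip().split(":")
        if len(parts) < 4 or parts[0] != "mmdf":
            continue
        section = parts[1]
        if parts[2] == "HEADER":
            headers[section] = parts[3:]          # field names for this section
        else:
            row = dict(zip(headers.get(section, []), parts[3:]))
            sections.setdefault(section, []).append(row)
    return sections


if __name__ == "__main__":
    fs = sys.argv[1] if len(sys.argv) > 1 else "gpfs0"   # hypothetical device name
    for row in mmdf_sections(fs).get("fsTotal", []):      # whole file system totals
        size, free = row.get("fsSize"), row.get("freeBlocks")
        if size and free and int(size) > 0:
            used_pct = 100.0 * (int(size) - int(free)) / int(size)
            # Units are whatever mmdf reports (typically KiB); verify per release.
            print(f"{fs}: size={size} free={free} used={used_pct:.1f}%")
```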

Performance monitoring for AIX LPAR nodes by the GPFS GUI with ZIMon

Performance monitoring for AIX LPAR nodes by the GPFS GUI with ZIMon
over 3 years ago in Storage Scale (formerly known as GPFS) 1 Not under consideration

Shorten the pending IO time to 5-30s during node expel

When a node in an ECE cluster is expelled from the cluster, the I/O pending time is as long as 1-3 minutes. Some applications fail because of the long pending time, especially OLTP or AI training jobs. Competitors using traditional dual-controller mode c...
5 months ago in Storage Scale (formerly known as GPFS) 2 Future consideration

Seamlessly convert a subtree into an (independent) fileset

Add a command to convert a subtree into an independent fileset seamlessly (without interrupting user access) and, as a bonus, vice versa.
about 3 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration

Samba "hide unreadable = root_only" to boost performance

In most cases we need Samba's "hide unreadable" only on the root directory of an SMB export. A switch that enables "hide unreadable" only for the export directory's content, and not for the whole subtree, should significantly improve performance.
5 months ago in Storage Scale (formerly known as GPFS) 1 Not under consideration

Accelerate re-balance actions when adding new ECE nodes into an existing RG

A partner has opened two cases about expanding ECE nodes in an existing RG. The re-balance progress was very slow: the action lasted 3 months when adding 3 new ECE nodes to an existing 5-node ECE topology. This is not an acceptable solution to cu...
3 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Add Broadcom SAS 3808/3908 HBAs in Scale ECE

One OEM partner is planning to replace existing SAS 3108 HBAs with SAS 3808/3908 HBAs, but we did not find the SAS 3808/3908 HBAs in the support list of ECE 5.1.9. The partner hopes IBM can support SAS 3808/3908 HBAs in the next release. They can pr...
3 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Check pool usage in units of "fileset"

My client company is a customer who uses "fileset" in a wide variety of ways. The customer is using "ess3200" and "ess5000" divided into a "system pool" and a "data1 pool". The customer has transferred the data from the "system pool" in a "fileset" to the "data1 pool". Ho...
5 months ago in Storage Scale (formerly known as GPFS) 1 Not under consideration
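
As a partial workaround today, per-fileset block usage (not yet broken down by storage pool, which is what this idea asks for) can be read from `mmlsfileset <fs> -d -Y`. The minimal Python sketch below assumes the usual colon-delimited -Y/HEADER output convention; the field names it looks for ('filesetName' and any field containing 'data') and the placeholder file system name are assumptions to verify against the HEADER row on your release.

```python
#!/usr/bin/env python3
"""Sketch: report per-fileset data usage from `mmlsfileset <fs> -d -Y`.
A pool-level breakdown per fileset (the actual request) is not available
from this command; field names are assumptions taken from the HEADER row."""
import subprocess
import sys

fs = sys.argv[1] if len(sys.argv) > 1 else "gpfs0"          # hypothetical device name
out = subprocess.run(["mmlsfileset", fs, "-d", "-Y"],
                     check=True, capture_output=True, text=True).stdout

header = []
for line in out.splitlines():
    parts = line.strip().split(":")
    if len(parts) < 4 or parts[0] != "mmlsfileset":
        continue
    if parts[2] == "HEADER":
        header = parts[3:]                                   # field names for data rows
        continue
    row = dict(zip(header, parts[3:]))
    name = row.get("filesetName", "?")
    # Print whichever usage fields this release exposes (e.g. data in KB).
    usage = {k: v for k, v in row.items() if "data" in k.lower() and v}
    print(name, usage)
```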

GUI snapshot deletion from oldest to newest

Due to inode manipulation, it is a best practice, and less effort for GPFS, to delete the oldest snapshot first. However, when the client deletes snapshots manually using the GUI, the deletion should also be ordered from oldest to newest. This w...
about 5 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration
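
Until the GUI orders deletions this way, the same oldest-first ordering can be scripted against the CLI. The minimal Python sketch below lists global snapshots with `mmlssnapshot <fs> -Y`, sorts them by ascending snapshot ID as a proxy for age, and prints the corresponding `mmdelsnapshot` commands in that order; the field names 'snapID' and 'directory' (the snapshot name) are assumptions taken from the HEADER row, and the sketch only prints the commands rather than running them.

```python
#!/usr/bin/env python3
"""Sketch: order snapshot deletions oldest-first from the CLI.
Assumes `mmlssnapshot -Y` colon-delimited output with a HEADER row;
field names ('snapID', 'directory') are assumptions to verify."""
import subprocess
import sys

fs = sys.argv[1] if len(sys.argv) > 1 else "gpfs0"       # hypothetical device name
out = subprocess.run(["mmlssnapshot", fs, "-Y"],
                     check=True, capture_output=True, text=True).stdout

header, snapshots = [], []
for line in out.splitlines():
    parts = line.strip().split(":")
    if len(parts) < 4 or parts[0] != "mmlssnapshot":
        continue
    if parts[2] == "HEADER":
        header = parts[3:]                                 # field names for data rows
    else:
        snapshots.append(dict(zip(header, parts[3:])))

# Lower snapshot IDs were created earlier, so ascending order is oldest first.
snapshots.sort(key=lambda s: int(s.get("snapID", "0") or 0))

for snap in snapshots:
    name = snap.get("directory", "")                       # snapshot name field (assumed)
    if name:
        # Print instead of executing; run each command manually once verified.
        print(f"mmdelsnapshot {fs} {name}")
```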

Spectrum Scale GPFS client running on z/OS

Customer desires to speed up time to insights from their systems of record on z/OS.
over 2 years ago in Storage Scale (formerly known as GPFS) 1 Not under consideration