IBM System Storage Ideas Portal


This portal is to open public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post your own idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find more information about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 527 ideas

Enhance information provided in case of errors flushing file data from pagepool to persistent storage

If flushing fails in that area, file data may be corrupted. Reporting which files are affected would eliminate the effort of identifying them manually through expensive scans of the file system.
about 2 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
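
For context, the "expensive scans" mentioned above are typically run with the Scale policy engine. A minimal sketch of such a manual scan, assuming a hypothetical file system gpfs0 and treating recently modified files as the suspects (both the name and the one-day window are illustrative assumptions):

    # Write a policy that lists files modified in the last day.
    cat > /tmp/suspects.pol <<'EOF'
    RULE 'recent' LIST 'suspects'
      WHERE (CURRENT_TIMESTAMP - MODIFICATION_TIME) < INTERVAL '1' DAYS
    EOF
    # -I defer evaluates the rules without acting on the files;
    # matching paths are written to /tmp/scan.list.suspects.
    mmapplypolicy gpfs0 -P /tmp/suspects.pol -f /tmp/scan -I defer

Even this narrowed scan only yields candidates; confirming actual corruption still means reading every candidate file, which is the cost this idea wants to avoid.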

GPFS client support larger memory address space

Currently the GPFS client can use up to 1 TB of address space, but GPU servers are already configured with 2 TB of memory, and 3 TB and 4 TB will follow. The address space below 1 TB might be used by VFIO for NIC and GPU passthrough, so the GPFS client has very little ...
over 1 year ago in Storage Scale (formerly known as GPFS) 0 Planned for future release

Lessen Spectrum Scale CNSA pods' dependency on running as super user (root)

As of Scale version 5.1.1.4, some of the Container Native Storage Access (CNSA) containers for running Scale on OpenShift need to run as super-user (root). Although it's understandable from the multi-decade history of Scale itself a...
almost 3 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration

GPFS command conflict list

Our client uses IBM Spectrum Scale for NAS. Scale has many GPFS commands with the "mm" prefix, and some of them conflict with each other. We want a list of the conflicting commands in the Knowledge Center for their daily operations.
over 3 years ago in Storage Scale (formerly known as GPFS) 0 Not under consideration

Command to check compressed usage size for each fileset quickly

We perform IBM Spectrum Scale compression on each fileset on a daily basis through GPFS policies. The df command does not show accurate usage for each fileset; only du shows the actual usage. But since du usually takes a long time to pr...
over 3 years ago in Storage Scale (formerly known as GPFS) 5 Not under consideration
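
Until such a command exists, a faster approximation than du is the policy engine, which walks metadata in parallel. A minimal sketch, assuming a hypothetical file system gpfs0 and fileset fileset1:

    # For each file, show logical size (bytes) next to allocated size (KB);
    # comparing the two columns exposes the compression savings.
    cat > /tmp/compusage.pol <<'EOF'
    RULE 'compusage' LIST 'sizes'
      SHOW(VARCHAR(FILE_SIZE) || ' ' || VARCHAR(KB_ALLOCATED))
      FOR FILESET('fileset1')
    EOF
    mmapplypolicy gpfs0 -P /tmp/compusage.pol -f /tmp/comp -I defer

Summing the two columns of /tmp/comp.list.sizes gives per-fileset logical versus allocated totals far faster than a du walk, though it is still a scan rather than the instant lookup the idea requests.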

Improve AFM to handle hard links correctly

As per https://www.ibm.com/docs/en/spectrum-scale/5.1.2?topic=limitations-afm-afm-dr support for hard links in AFM is very constrained. This is especially problematic for the AFM data migration use case outlined here: https://www.ibm.com/docs/en/s...
almost 3 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Option Needed to Disable Implicit WRITE_ACL Permission for Owner on NFSv4 ACLs

According to https://www.ibm.com/support/knowledgecenter/STXKQY_5.1.0/com.ibm.spectrum.scale.v5r10.doc/bl1adm_admglnv4.htm Spectrum Scale grants implicit WRITE_ACL permission to the owner of files and directories in conflict with the NFSv4 specifi...
almost 4 years ago in Storage Scale (formerly known as GPFS) 0 Planned for future release
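
The behavior described above is straightforward to verify. A minimal reproduction sketch, assuming a hypothetical file /gpfs/fs1/aclfile on a file system using NFSv4 ACLs:

    touch /gpfs/fs1/aclfile
    mmgetacl -k nfs4 /gpfs/fs1/aclfile > /tmp/acl.txt   # dump the NFSv4 ACL
    # Edit /tmp/acl.txt to drop WRITE_ACL from the special:owner@ entry,
    # then write the ACL back.
    mmputacl -i /tmp/acl.txt /gpfs/fs1/aclfile
    # Running as the (non-root) owner, rewriting the ACL still succeeds,
    # because Scale grants the owner WRITE_ACL implicitly.
    mmputacl -i /tmp/acl.txt /gpfs/fs1/aclfile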

When a server node fails, the client should hang instead of reporting "stale file handle".

over 2 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration

Client memory should shrink automatically.

When clients open a lot of files, mmfsd takes up a lot of memory. When the files are closed, the memory should be freed automatically.
over 2 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration
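
The growth is easy to observe with mmdiag. A minimal sketch, assuming a hypothetical path /gpfs/fs1/data containing many files:

    mmdiag --memory     # record mmfsd heap usage before
    # Open and close a large number of files.
    find /gpfs/fs1/data -type f -exec head -c1 {} \; >/dev/null
    mmdiag --memory     # compare after; this idea asks for that
                        # usage to shrink back on its own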

Support RHEL6, SLES11, and Debian in Scale 5.0

Add support for RHEL6, Debian and SLES11 to Scale 5.0.
almost 7 years ago in Storage Scale (formerly known as GPFS) 3 Not under consideration