IBM System Storage Ideas Portal


This portal is for opening public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 17 ideas

extend mmapplypolicy --scope

I have a filesystem with >1300 independent filesets and ~540 million files. Running mmapplypolicy with --scope filesystem takes a long time because all inodes need to be scanned. For some mmapplypolicy runs I only need to process one fileset so ...
3 days ago in Storage Scale (formerly known as GPFS) 0 Under review
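
A minimal sketch for context, using the documented mmapplypolicy options (the file system, fileset path, and policy file names are hypothetical); the idea asks for a scope that avoids scanning all inodes when only one fileset matters:

  # Today: --scope filesystem considers every inode in the file system
  mmapplypolicy fs1 -P list_rules.pol --scope filesystem -I defer

  # Existing narrower scopes take a directory (e.g. a fileset junction path);
  # the request is to extend this so that only the target fileset's inodes
  # are scanned at all
  mmapplypolicy /gpfs/fs1/fset01 -P list_rules.pol --scope inodespace -I defer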

FSSTRUCT and checksum events in /var/adm/ras need to be logged as ERRORS and not Warning or Info

FSSTRUCT and checksum events in /var/adm/ras need to be logged as ERRORS rather than Warning or Info. These items can cause a cluster-wide outage and should be logged as ERRORS. If they were logged as errors, we could pull them into our Kibana dashboard...
4 days ago in Storage Scale (formerly known as GPFS) 1 Under review
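
Until the severity changes, one hedged workaround is to filter the daemon log directly rather than rely on severity levels; the log path is the standard Scale location, and the pattern list is an assumption:

  # Surface FSSTRUCT / checksum events regardless of logged severity
  grep -Ei 'fsstruct|checksum' /var/adm/ras/mmfs.log.latest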

Enhance mmshutdown confirm-shutdown logic to make it Ctrl-C resistant.

We had a maintenance window and wanted to update a storage cluster that owns 20 file systems for native K8s and OCP clusters. If an additional mmshutdown happens and the admin presses Ctrl-C at the confirmation prompt, the shutdown processes keep running alongside each other and ...
11 days ago in Storage Scale (formerly known as GPFS) 1 Under review
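
A sketch of the requested behavior as a plain wrapper script (hypothetical, not an existing Scale command): once shutdown starts, SIGINT is ignored so an interrupted prompt cannot leave two shutdown sequences interleaved.

  #!/bin/sh
  trap '' INT          # ignore Ctrl-C for this script and its children
  mmshutdown -a        # shut down the GPFS daemons on all nodes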

GUI : capacity information consistency

Different information / a mismatch between ‘used capacity’ and ‘total capacity’ (Monitoring -> Files -> Filesets -> View details -> Used capacity). Due to the fact that two file copies are made on the filesystem on the ESS storage cluster, th...
11 days ago in Storage Scale (formerly known as GPFS) 0 Under review
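
A worked example of the mismatch being described, assuming two copies of each file (data replication of 2); the figures are illustrative:

  logical data written by users:      500 GiB
  copies kept with 2-way replication: x 2
  blocks consumed on disk:            1000 GiB

If the GUI reports consumed blocks as ‘used capacity’, it shows twice the logical usage, which cannot be compared directly with totals expressed in logical terms.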

Storage Scale - Rest API for file system defragmentation

With the rapid growth in demand for AIGC and autonomous driving, our GPFS file systems are getting larger and larger. We know that disk fragmentation within a file system is an unavoidable condition. As time goes by, the space occupied by file sys...
13 days ago in Storage Scale (formerly known as GPFS) 1 Under review
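
For context, defragmentation exists today only as a CLI command; the idea asks for a REST equivalent. A sketch, where mmdefragfs is the existing command and the endpoint shown is invented here purely for illustration:

  # Existing CLI: report fragmentation, then defragment toward a goal
  mmdefragfs fs1 -i         # query only, no changes
  mmdefragfs fs1 -u 99      # reduce fragmentation, 99% utilization goal

  # Hypothetical REST equivalent (not in the Scale management API today)
  POST /scalemgmt/v2/filesystems/fs1/defragment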

Allow multiple targets in an AFM-fileset (AFM-S3 only perhaps)

It would be good if we could have data coming from multiple sources in an AFM fileset. Perhaps a client has data that needs to be presented from NFS and S3. The application would have to talk to two different filesets to get data. The current use ...
24 days ago in Storage Scale (formerly known as GPFS) 1 Under review
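
Today an AFM fileset is bound to a single target at creation time. A sketch of the current single-target form (fileset and target names hypothetical); the multi-target form the idea asks for does not exist:

  # Current: one fileset, one target
  mmcrfileset fs1 afmFset1 --inode-space new \
      -p afmMode=ro -p afmTarget=nfs://filer1/export/data

  # Requested: one fileset fed from several sources (e.g. NFS plus S3),
  # so applications see a single namespace instead of two filesets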

Lustre allows 8182 users to be added to the ACL of a single file for R/W access, while the latest Scale release (v5.5.2.x) only allows 1913 because a hidden ACL file is limited to 15KB.

Users in large environments like Sanger (a UK biomedical research lab) can continue to implement their in-house user access policies, resulting in $10 million in sales for IBM. Assuming there are 4000 users created via script (useradd was used...
26 days ago in Storage Scale (formerly known as GPFS) 0 Under review
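
The 1913-entry figure follows from the size cap if each entry has a roughly fixed cost; the ~8-byte per-entry size below is inferred from the numbers given, not stated in the source:

  15 KB ACL file limit = 15 x 1024 = 15360 bytes
  15360 bytes / 1913 entries ≈ 8 bytes per entry

Raising or removing the 15 KB cap is what would let Scale approach the 8182 entries cited for Lustre.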

Implement upstream NooBaa Changes that support recent Amazon S3 server/client changes

Recent client-side changes to the aws cli utility as well as the Amazon boto3 Python libraries have broken functionality when using the S3 gateway. From Amazon: In AWS CLI v2.23.0, we released changes to the S3 client that adopts new default integri...
27 days ago in Storage Scale (formerly known as GPFS) 0 Under review
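
A commonly cited client-side workaround while the server side catches up is to compute and validate the new checksums only when an operation requires them; the setting names below should be verified against current AWS documentation, and the bucket and endpoint are hypothetical:

  export AWS_REQUEST_CHECKSUM_CALCULATION=when_required
  export AWS_RESPONSE_CHECKSUM_VALIDATION=when_required
  aws s3 ls s3://mybucket --endpoint-url https://ces-s3.example.com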

GUI: change the units in which quotas are set up

Because we need this feature for better daily operation of capacities...
27 days ago in Storage Scale (formerly known as GPFS) 0 Under review

CES-S3 should support multiple groups per S3 account

Currently there's only support for a single group ID for each S3 account. This doesn't fit well with normal file system permissions, where the same account might need access to several groups of different roles. For example, /gpfs/fs1/department might all...
28 days ago in Storage Scale (formerly known as GPFS) 0 Under review
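
A sketch of the mismatch in plain POSIX terms (paths, users, and group names hypothetical): a POSIX user can belong to several groups at once, but an S3 account mapped to exactly one group ID can satisfy only one group-permission check.

  # Trees owned by different groups
  drwxrwx---  root  finance   /gpfs/fs1/finance
  drwxrwx---  root  research  /gpfs/fs1/research

  # A POSIX user can be in both groups...
  id alice   # uid=2001(alice) gid=100(staff) groups=finance,research

  # ...but an S3 account carrying a single GID can access only one tree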