IBM System Storage Ideas Portal


This portal is to open public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 487 ideas

zstd compression & dictionary support

If zstd support is added to the product, it would be very useful to allow a few dictionaries to be used to improve (de)compression speed and compression ratio. This is ideal in situations where ILM policies can be written to target either ...
11 days ago in Storage Scale (formerly known as GPFS) 0 Future consideration
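
To illustrate what the idea is asking for, here is a minimal sketch of dictionary-based zstd compression using the python-zstandard package. The sample path, sample count, and dictionary size are hypothetical; Storage Scale does not expose any of this today.

    # Minimal sketch of dictionary-based zstd compression, using the
    # python-zstandard package (pip install zstandard). The sample path
    # and dictionary size are hypothetical; this only illustrates the
    # concept the idea asks Storage Scale to support natively.
    import glob
    import zstandard as zstd

    # Train a shared dictionary on many similar small files, e.g. the
    # files an ILM policy rule would target. Training generally needs
    # a large sample set to be effective.
    paths = glob.glob("/gpfs/fs1/logs/*.json")[:1000]
    samples = [open(p, "rb").read() for p in paths]
    dictionary = zstd.train_dictionary(112_640, samples)  # ~110 KiB dict

    # Compressing with the dictionary typically improves both ratio and
    # speed on many small, similar files versus dictionary-less zstd.
    cctx = zstd.ZstdCompressor(level=3, dict_data=dictionary)
    dctx = zstd.ZstdDecompressor(dict_data=dictionary)

    data = open(paths[0], "rb").read()
    frame = cctx.compress(data)
    assert dctx.decompress(frame) == data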

Parallelization/enhancement of pool migrations of individual large files

When running migrations between different pools, the tail latency of the ILM run often appears to be dictated by a small number of large files. This seems to be due to the pool migration of any individual file being a single-threaded activity that i...
11 days ago in Storage Scale (formerly known as GPFS) 2 Needs more information
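
The parallelism being requested can be pictured with a user-space sketch: split one large file into byte ranges and copy the ranges concurrently. The paths, chunk size, and worker count below are assumptions, and real pool migration happens inside GPFS rather than through user-space copies like this.

    # Conceptual sketch only: copy one large file as several concurrent
    # byte-range workers instead of a single thread. Paths, chunk size,
    # and worker count are hypothetical; actual pool migration is done
    # inside GPFS, not by user-space copies.
    import os
    from concurrent.futures import ThreadPoolExecutor

    CHUNK = 256 * 1024 * 1024  # 256 MiB per byte-range task

    def copy_range(src, dst, offset, length):
        with open(src, "rb") as fin, open(dst, "r+b") as fout:
            fin.seek(offset)
            fout.seek(offset)
            while length > 0:
                buf = fin.read(min(8 * 1024 * 1024, length))
                if not buf:
                    break
                fout.write(buf)
                length -= len(buf)

    def parallel_copy(src, dst, workers=8):
        size = os.path.getsize(src)
        with open(dst, "wb") as f:
            f.truncate(size)  # pre-size so ranges can be written independently
        with ThreadPoolExecutor(max_workers=workers) as pool:
            for off in range(0, size, CHUNK):
                pool.submit(copy_range, src, dst, off, min(CHUNK, size - off))

    parallel_copy("/gpfs/fs1/huge.dat", "/gpfs/fs1/huge.dat.migrated")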

Enhance AFM to transfer NFSv4 ACLs from NFS servers like Isilons and store the ACLs correctly as GPFS ACLs

Storage Scale and Storage Scale System are great products. But in many cases, another storage system is already in use and there is no easy way to migrate PBytes of data - except perhaps with AFM? Almost all installations use e.g. AD integrations and NFS...
2 months ago in Storage Scale (formerly known as GPFS) 1 Functionality already exists

Allow the IBM Spectrum Protect Client to be installed directly on the ESS Appliance Family

When performing backup operations on ESS using mmbackup and IBM Spectrum Protect, the current architecture requires a Backup Proxy in order to configure IBM Spectrum Protect and mmbackup. In this configuration, mmbackup currently forces t...
over 1 year ago in Storage Scale (formerly known as GPFS) 1 Future consideration

List Fileset Capacity of Data in Different Pools

Using the mmlsfileset command, the GUI, or both, add the ability to see the capacity of the fileset on different pools. If a filesystem has different pools, i.e. system, tier1, tier2, etc., using the mmlsfileset command get the percentage of the f...
3 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration
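
A common interim approach is an mmapplypolicy LIST rule whose SHOW() clause emits allocation, pool, and fileset for each file, then aggregating the resulting list file. The sketch below assumes such a rule and a particular list-line layout; both the rule and the format are assumptions, not documented product output.

    # Aggregates a list file assumed to come from an mmapplypolicy LIST
    # rule with SHOW(varchar(KB_ALLOCATED) || ' ' || POOL_NAME || ' ' ||
    # FILESET_NAME). The line layout below is an assumption; the result
    # is roughly the per-pool fileset breakdown the idea asks
    # mmlsfileset or the GUI to report natively.
    from collections import defaultdict

    usage = defaultdict(lambda: defaultdict(int))  # fileset -> pool -> KiB

    with open("list.files") as fh:
        for line in fh:
            # Assumed layout: "<inode> <gen> <snap> <KB> <pool> <fileset> -- <path>"
            fields = line.split(" -- ", 1)[0].split()
            kb, pool, fileset = int(fields[3]), fields[4], fields[5]
            usage[fileset][pool] += kb

    for fileset, pools in sorted(usage.items()):
        total = sum(pools.values()) or 1
        for pool, kb in sorted(pools.items()):
            print(f"{fileset:20s} {pool:10s} {kb:>12d} KiB ({kb / total:.1%})")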

Enable multi-protocol CES SMB / NFS rolling upgrades when the CTDB upgrade is within a minor version.

When a multi-protocol SMB and NFS CES cluster is upgraded, a short full stop of all CES protocols is required because CTDB does not allow mixed major or minor versions. This can often cause issues with NFS mounts if it is not possible to unmount the...
18 days ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Auto expansion of inodes to trigger once usage reaches around 95%

Today, Scale alerts and inode auto expansion are triggered when inode usage is very close to 100%, which typically means the filesystem is very active and busy, and this is not optimal for performance. Would like to see the auto expansion of ...
6 months ago in Storage Scale (formerly known as GPFS) 1 Functionality already exists
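
Here is a sketch of the proactive behavior being requested. statvfs() reports inode totals for any mounted filesystem; the mount point, device name, 95% threshold, 25% growth factor, and the decision to call mmchfs from a script are all illustrative assumptions.

    # Sketch of the proactive expansion the idea asks Scale to perform
    # itself: act at ~95% inode usage instead of near 100%. The mount
    # point, device name, threshold, and growth factor are assumptions.
    import os
    import subprocess

    MOUNT = "/gpfs/fs1"   # hypothetical mount point
    DEVICE = "fs1"        # hypothetical device name
    THRESHOLD = 0.95

    st = os.statvfs(MOUNT)
    used = st.f_files - st.f_ffree
    usage = used / st.f_files

    if usage >= THRESHOLD:
        # Grow the limit by 25% while the filesystem still has headroom,
        # rather than expanding at the 100% wall under heavy load.
        new_limit = int(st.f_files * 1.25)
        subprocess.run(["mmchfs", DEVICE, "--inode-limit", str(new_limit)],
                       check=True)
        print(f"inode usage {usage:.1%}; raised inode limit to {new_limit}")
    else:
        print(f"inode usage {usage:.1%}; no action needed")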

Enhance mmfindinode Performance and Provide SDK API for Path Query by Inode

This RFE requests enhancements to the mmfindinode utility in the GPFS/Spectrum Scale filesystem to significantly improve its performance. Additionally, it proposes the development of an SDK API that allows querying file paths based on inode number...
about 2 months ago in Storage Scale (formerly known as GPFS) 2 Under review
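
For context, a path-by-inode lookup done purely in user space has to walk the whole tree and compare st_ino, which is the cost both a faster mmfindinode and a proper API would avoid. A naive sketch follows; the root path and inode number are hypothetical.

    # Naive inode -> path reverse lookup: walk the tree and compare
    # st_ino. This scan touches every file, which is exactly what the
    # idea wants to avoid via a faster mmfindinode and a path-by-inode
    # API. The root path and inode number are hypothetical.
    import os

    def paths_for_inode(root, inode):
        """Yield every path under root whose inode number matches."""
        for dirpath, dirnames, filenames in os.walk(root):
            for name in dirnames + filenames:
                p = os.path.join(dirpath, name)
                try:
                    if os.lstat(p).st_ino == inode:
                        yield p
                except OSError:
                    pass  # vanished file or permission denied

    for path in paths_for_inode("/gpfs/fs1", 123456):
        print(path)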

HPO to support InfiniBand

In order to implement a highly performant and reliable infrastructure, HPO support for deployment on InfiniBand infrastructure is required.
about 1 year ago in Storage Scale (formerly known as GPFS) 0 Future consideration

HPO to support AFM managed data along with data transfer to AWS S3

In order to provide high-performance hybrid-cloud object storage for Analytics & AI within HPO, it is required to process S3 object data via AFM both on-prem AND on AWS S3. To place/move objects intelligently on/to AWS S3, HPO AFM needs to supp...
about 1 year ago in Storage Scale (formerly known as GPFS) 0 Future consideration