IBM System Storage Ideas Portal


Use this portal to open public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find more information about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or to request help from IBM with submitting your ideas.

Storage Scale (formerly known as GPFS)

Showing 531 ideas

HPO to support AFM managed data along with data transfer to AWS S3

To provide high-performance hybrid-cloud object storage for analytics and AI within HPO, S3 object data must be processed via AFM both on premises AND on AWS S3. To place or move objects intelligently on AWS S3, HPO AFM needs to supp...
over 1 year ago in Storage Scale (formerly known as GPFS) 0 Future consideration

CNSA on OpenShift on bare-metal along with InfiniBand

To implement a highly performant and reliable infrastructure, CNSA must be supported on bare-metal OpenShift with InfiniBand by June 2023. The workaround of running CNSA on VMware with InfiniBand is not supported, because OpenShift does not supp...
over 1 year ago in Storage Scale (formerly known as GPFS) 0 Future consideration

eliminate lack of I/O on mmdelsnapshot start

When deletion of a bunch of snapshots starts, we see a lack of I/O for about three minutes. NFS clients see a huge I/O delay; related applications hang for this time, and user connections run into timeouts (e.g. HTTP connections for apps that stor...
over 4 years ago in Storage Scale (formerly known as GPFS) 4 Planned for future release
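
While an idea like this is open, one mitigation is to serialize snapshot deletions and space them apart, so each quiesce window is bounded to a single snapshot rather than a whole batch. A minimal sketch, assuming a file system named fs1 and illustrative snapshot names:

    #!/bin/bash
    # Spread snapshot deletions apart to soften the I/O stall described above.
    # fs1 and the snapshot names are placeholders; adjust for your cluster.
    FS=fs1
    SNAPSHOTS="snap_2024_01 snap_2024_02 snap_2024_03"

    for snap in $SNAPSHOTS; do
        # mmdelsnapshot quiesces per invocation, so delete one at a time.
        /usr/lpp/mmfs/bin/mmdelsnapshot "$FS" "$snap"
        sleep 300   # let clients recover between quiesce windows
    done

This does not remove the stall itself; it only limits each pause to one snapshot's quiesce instead of one long outage.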

Introduce effective quality management

During the last year, we saw many new ISS code levels, most of them with critical errors that could lead to data corruption or data loss. Fortunately, many customers were spared from data corruption, but they had to update Scale nearly monthly to ...
over 7 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration

Quota calculation and capacity presentation based on files size

User quota and fileset quota are calculated based on allocated capacity on GPFS (kb_allocated). If files are moved to an external tier, e.g. by Transparent Cloud Tiering, the file system utilisation will decrease. Storage admins should have t...
almost 6 years ago in Storage Scale (formerly known as GPFS) 3 Not under consideration
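
The gap described here is visible directly in file metadata: the logical size (st_size) of a migrated file stays constant, while its allocated blocks shrink to a stub. A small sketch with GNU coreutils, using a hypothetical fileset path:

    # %s = apparent (logical) size in bytes, %b = allocated 512-byte blocks.
    stat -c '%n logical=%s bytes, allocated=%b blocks' /gpfs/fs1/fileset1/somefile

    # Aggregate over a tree: what a size-based quota would charge vs. today.
    du -sh --apparent-size /gpfs/fs1/fileset1   # logical (file size) view
    du -sh /gpfs/fs1/fileset1                   # allocated (kb_allocated) view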

Add quota logging capability to keep overall quota accounting consistent

For performance, most quota information is nowadays stored and updated in memory, then synced back to disk asynchronously later. However, there is no recovery mechanism for GPFS quota in abnormal situations. These can mak...
over 2 years ago in Storage Scale (formerly known as GPFS) 3 Future consideration
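
For context, the existing manual recovery path after a suspected inconsistency is a full re-scan with mmcheckquota, which is exactly the expensive step a quota log would avoid. A sketch of that workaround, assuming a file system named fs1:

    # Recompute quota usage from the inode data after an abnormal shutdown.
    # This scans the file system and can take a long time on large clusters.
    /usr/lpp/mmfs/bin/mmcheckquota fs1

    # Report the corrected accounting afterwards.
    /usr/lpp/mmfs/bin/mmrepquota -u fs1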

Remove Node Feature in Spectrum Scale CNSA

Currently in the CNSA implementation there is no way to shrink a defined CNSA cluster by one or more nodes, like an mmdelnode step. The operator has no function to perform this necessary step of reducing a cluster on request.
almost 3 years ago in Storage Scale (formerly known as GPFS) 3 Future consideration

Capacity quota support for HSM managed file systems

We have two Storage Scale file systems which are HSM-managed using Spectrum Protect for Space Management. In total we are storing ~40 PB of user data on tape here. Each group has been granted a number of files and an amount of capacity across all our file systems. ...
over 1 year ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Enable multi-protocol CES SMB / NFS rolling upgrades when the CTDB upgrade is within a minor version.

When a multi-protocol SMB and NFS CES cluster is upgraded, a short full stop of all CES protocols is required because CTDB does not allow mixed major or minor versions. This can often cause issues with NFS mounts, if it is not possible to unmount the...
7 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration

To simplify the deployment of workloads in the cloud, IBM must improve the performance of the Scale command "mmaddnode" to allow for faster spin up/down of resources.

Nodes in the cloud need to be spun up quickly as needed. Currently mmaddnode takes too long to add a significant number of nodes as it sequentially adds nodes to the cluster; not only that, but other commands like mmchlicense, mmchnodeclass, and m...
almost 2 years ago in Storage Scale (formerly known as GPFS) 1 Planned for future release
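
Until the command itself is parallelized, the per-invocation overhead can at least be amortized, since mmaddnode accepts a node descriptor file and mmchlicense accepts a node list. A minimal sketch with placeholder hostnames:

    # Add several nodes in one mmaddnode call instead of looping per node.
    cat > /tmp/newnodes <<'EOF'
    cloudnode01:quorum
    cloudnode02
    cloudnode03
    EOF
    /usr/lpp/mmfs/bin/mmaddnode -N /tmp/newnodes

    # License the new nodes in one batched call as well.
    /usr/lpp/mmfs/bin/mmchlicense client --accept -N cloudnode01,cloudnode02,cloudnode03

This amortizes the cluster configuration commits across the batch, though the internal processing remains sequential, which is what this idea asks IBM to change.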