IBM System Storage Ideas Portal


This portal is for opening public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post your own idea:

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 535

To simplify the deployment of workloads in the cloud, IBM must improve the performance of the Scale command "mmaddnode" to allow for faster spin up/down of resources.

Nodes in the cloud need to be spun up quickly as needed. Currently mmaddnode takes too long to add a significant number of nodes as it sequentially adds nodes to the cluster; not only that, but other commands like mmchlicense, mmchnodeclass, and m...
almost 2 years ago in Storage Scale (formerly known as GPFS) 1 Planned for future release
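
As context for this request, here is a minimal sketch of how a batch of new nodes is typically added today with a single mmaddnode invocation, followed by license assignment in one pass; the hostnames, node file path, and license class are hypothetical, and even this batched form is what the idea reports as scaling poorly, since nodes are still processed sequentially inside the command.

    # Hedged sketch: batch-add hypothetical cloud nodes with one mmaddnode call,
    # then accept licenses for them in one pass. Hostnames, the node file path,
    # and the license class ("client") are all made-up examples.
    import subprocess

    new_nodes = ["cloudnode%02d" % i for i in range(1, 11)]   # hypothetical hostnames
    node_file = "/tmp/newnodes.lst"                           # hypothetical NodeFile

    with open(node_file, "w") as f:
        f.write("\n".join(new_nodes) + "\n")

    # One invocation with a node file rather than one mmaddnode call per node.
    subprocess.run(["mmaddnode", "-N", node_file], check=True)

    # License assignment for the same nodes in a single command.
    subprocess.run(["mmchlicense", "client", "--accept",
                    "-N", ",".join(new_nodes)], check=True)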

Enable multi-protocol CES SMB/NFS rolling upgrades when the CTDB upgrade is within a minor version.

When a multi-protocol SMB and NFS CES cluster is upgraded, a short full stop of all CES protocols is required because CTDB does not allow mixed major or minor versions. This can often cause issues with NFS mounts if it is not possible to unmount the...
8 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Add official support for S3A semantics in the CES-S3 protocol

Our entire big data workload is going to use the S3A connector to store data in S3 buckets. We see no official support statement from IBM. Please add S3A to the supported list of functions. NooBaa as the base supports this, so why does mms3 not...
about 2 months ago in Storage Scale (formerly known as GPFS) 0 Under review
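
For readers unfamiliar with S3A: it is the Hadoop connector for S3-compatible object stores, so a typical big data job would point it at the CES S3 endpoint roughly as sketched below. The endpoint URL, credentials, and bucket names are placeholders rather than an IBM-documented configuration; the idea is precisely to get an official support statement for this access pattern against mms3.

    # Hedged sketch: point the Hadoop S3A connector at an S3-compatible endpoint
    # from PySpark. Endpoint URL, credentials, and bucket names are placeholders.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("s3a-against-ces-s3")
        .config("spark.hadoop.fs.s3a.endpoint", "https://ces-s3.example.com")  # hypothetical endpoint
        .config("spark.hadoop.fs.s3a.access.key", "ACCESS_KEY_PLACEHOLDER")
        .config("spark.hadoop.fs.s3a.secret.key", "SECRET_KEY_PLACEHOLDER")
        .config("spark.hadoop.fs.s3a.path.style.access", "true")
        .getOrCreate()
    )

    # Read and write through s3a:// URIs; bucket and prefix are made up.
    df = spark.read.parquet("s3a://example-bucket/input/")
    df.write.mode("overwrite").parquet("s3a://example-bucket/output/")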

Enhance AFM to transfer NFSv4 ACLs from NFS servers such as Isilon and store them correctly as GPFS ACLs

Storage Scale and Storage Scale System are great products. But in many cases another storage system is already in use, and there is no easy way to migrate petabytes of data - except perhaps with AFM? Almost all installations use, for example, AD integration and NFS...
10 months ago in Storage Scale (formerly known as GPFS) 1 Functionality already exists
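
For background, AFM-based migration usually starts by pointing a fileset at the NFS export of the legacy system, roughly as sketched below; the file system, fileset, and export names are invented, and the parameter spelling should be checked against the AFM documentation. The gap this idea describes is that NFSv4 ACLs on the source are not carried over as GPFS ACLs by such a migration.

    # Hedged sketch: create a read-only AFM fileset that caches an existing
    # NFS export (e.g. on an Isilon). All names are hypothetical and the
    # parameter spelling should be verified against the AFM documentation.
    import subprocess

    fs_name      = "fs1"                                    # hypothetical file system
    fileset_name = "migrated_data"                          # hypothetical fileset
    afm_target   = "nfs://isilon.example.com/ifs/export1"   # hypothetical NFS export

    subprocess.run([
        "mmcrfileset", fs_name, fileset_name,
        "-p", "afmMode=ro,afmTarget=" + afm_target,
        "--inode-space", "new",
    ], check=True)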

Monitor quota usage per fileset (capacity usage, not inodes)

Hi team, we found a gap in quota monitoring of Spectrum Scale filesets: we have the capability to monitor inode usage per fileset, but not quota consumption in terms of capacity. Because of that, we urgently need a feature that could alert us when...
over 1 year ago in Storage Scale (formerly known as GPFS) 1 Functionality already exists
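
Until such an alert exists, per-fileset capacity usage can be scraped from the CLI; a rough sketch follows, assuming mmrepquota's machine-readable -Y output and field names along the lines of name, blockUsage, and blockLimit, which need to be verified against the actual header line.

    # Hedged sketch: warn when a fileset's block (capacity) usage crosses a
    # threshold of its hard limit, parsed from "mmrepquota -j -Y <fs>".
    # The -Y field names used here (name, blockUsage, blockLimit) are
    # assumptions; verify them against the HEADER line of the real output.
    import subprocess

    FILESYSTEM = "fs1"      # hypothetical file system name
    THRESHOLD  = 0.90       # warn at 90% of the block hard limit

    out = subprocess.run(["mmrepquota", "-j", "-Y", FILESYSTEM],
                         capture_output=True, text=True, check=True).stdout

    header_line = next((l for l in out.splitlines() if ":HEADER:" in l), None)
    if header_line is None:
        raise SystemExit("no HEADER line found in mmrepquota -Y output")
    header = header_line.split(":")
    rows = [l.split(":") for l in out.splitlines()
            if l.strip() and ":HEADER:" not in l]

    needed = ("name", "blockUsage", "blockLimit")
    if not all(n in header for n in needed):
        raise SystemExit("unexpected -Y header, adjust field names: %s" % header)
    idx = {n: header.index(n) for n in needed}

    for row in rows:
        if len(row) <= max(idx.values()):
            continue
        if not (row[idx["blockUsage"]].isdigit() and row[idx["blockLimit"]].isdigit()):
            continue
        usage, limit = float(row[idx["blockUsage"]]), float(row[idx["blockLimit"]])
        if limit > 0 and usage / limit >= THRESHOLD:
            print("WARNING: fileset %s is at %.0f%% of its capacity quota"
                  % (row[idx["name"]], 100.0 * usage / limit))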

Support TLS 1.3 for use with IBM SKLM

The industry requires stronger, up-to-date encryption levels.
over 1 year ago in Storage Scale (formerly known as GPFS) 0 Future consideration

NFS protocol nodes upgrade without disruption

The customer has NFS authenticated by MS AD. In this case the SMB service needs to be installed, and according to the documentation an NFS outage occurs when upgrading protocol nodes. The customer is requesting a non-disruptive NFS protocol node upgrade a...
over 1 year ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Auto expansion of inodes to trigger once usage reaches around 95%

Today the Scale alerts and auto expansion are triggered when inode usage is very close to 100%; typically that happens when the filesystem is very active and busy, which is not optimal for performance. We would like to see the auto expansion of ...
about 1 year ago in Storage Scale (formerly known as GPFS) 1 Functionality already exists
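
As a point of comparison, the earlier trigger this idea asks for can be approximated outside of Scale; a rough sketch follows, assuming mmlsfileset -i -Y exposes fields named roughly filesetName, maxInodes, and usedInodes (to be verified) and using mmchfileset --inode-limit to raise the limit.

    # Hedged sketch: raise a fileset's inode limit once usage crosses ~95%,
    # approximating externally what the idea asks Scale to do earlier and
    # automatically. The -Y field names (filesetName, maxInodes, usedInodes)
    # are assumptions; verify them against the HEADER line of the real output.
    import subprocess

    FILESYSTEM = "fs1"      # hypothetical file system
    THRESHOLD  = 0.95       # trigger at ~95% inode usage
    GROWTH     = 1.5        # grow the limit by 50% when triggered

    out = subprocess.run(["mmlsfileset", FILESYSTEM, "-i", "-Y"],
                         capture_output=True, text=True, check=True).stdout

    header_line = next((l for l in out.splitlines() if ":HEADER:" in l), None)
    if header_line is None:
        raise SystemExit("no HEADER line found in mmlsfileset -Y output")
    header = header_line.split(":")
    rows = [l.split(":") for l in out.splitlines()
            if l.strip() and ":HEADER:" not in l]

    needed = ("filesetName", "maxInodes", "usedInodes")
    if not all(n in header for n in needed):
        raise SystemExit("unexpected -Y header, adjust field names: %s" % header)
    idx = {n: header.index(n) for n in needed}

    for row in rows:
        if len(row) <= max(idx.values()):
            continue
        if not (row[idx["maxInodes"]].isdigit() and row[idx["usedInodes"]].isdigit()):
            continue
        max_inodes, used_inodes = int(row[idx["maxInodes"]]), int(row[idx["usedInodes"]])
        if max_inodes and used_inodes / max_inodes >= THRESHOLD:
            new_limit = int(max_inodes * GROWTH)
            subprocess.run(["mmchfileset", FILESYSTEM, row[idx["filesetName"]],
                            "--inode-limit", str(new_limit)], check=True)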

Add support for NooBaa metrics with integration into Zimon sensors

Currently the existing mmces protocols have their metrics integrated into the Zimon sensor framework, but the newest one, mms3 (noobaa-core), does not. We use the current zimonGrafanaBridge to visualize these metrics in Grafana. The currently available func...
about 2 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration
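
Until a Zimon sensor exists for mms3, the noobaa-core side does expose Prometheus-style metrics that can be scraped directly as a stopgap; a rough sketch follows, where the host and port of the metrics endpoint are assumptions to be checked against the local deployment.

    # Hedged sketch: scrape Prometheus-style metrics directly from noobaa-core
    # as a stopgap until they are available through Zimon/zimonGrafanaBridge.
    # The metrics URL below is an assumption; check the local noobaa-core/mms3
    # deployment for the real host and port.
    import urllib.request

    METRICS_URL = "http://localhost:7004/metrics"   # assumed endpoint, verify locally

    with urllib.request.urlopen(METRICS_URL, timeout=5) as resp:
        text = resp.read().decode("utf-8", errors="replace")

    # Print only NooBaa metric samples, skipping comment/help lines.
    for line in text.splitlines():
        if line and not line.startswith("#") and line.lower().startswith("noobaa"):
            print(line)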

CNSA Support for Per-PV Encryption

Add support for Scale CNSA to provide per-PV encryption, allowing each PV to use its own encryption key from GKLM or HashiCorp Vault. This would be configured as part of the encryption config CR for the PVC. Allow each fileset associated wi...
7 months ago in Storage Scale (formerly known as GPFS) 1 Future consideration