IBM System Storage Ideas Portal


This portal is for opening public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 548 of 5612

Ability to change NSD names

We would like the ability to change the name of existing NSDs. This change could be done while the entire GPFS NSD server cluster is down, similar to how the NSD server list is changed.
over 4 years ago in Storage Scale (formerly known as GPFS) 2 Future consideration

CNSA Support for Per-PV Encryption

Add support for Scale CNSA to provide per-PV encryption, allowing each PV to use its own encryption key from GKLM or HashiCorp Vault. This would be configured as part of the encryption config CR for the PVC. Allow for each fileset associated wi...
11 months ago in Storage Scale (formerly known as GPFS) 1 Future consideration

AFM Azure Blob direct support

For data exchange, data access from multiple sites, and access to data from within Azure, we would like to use Spectrum Scale AFM connected to an Azure blob, similar to how it connects to S3 storage. Currently this would require an S3 / Azure converter l...
about 3 years ago in Storage Scale (formerly known as GPFS) 2 Future consideration

Prometheus Exporter for zimon metrics

For our compute and storage systems we run a monitoring setup based on Prometheus in combination with the Prometheus node exporter and Alertmanager. To monitor Spectrum Scale health we have some self-written scripts collecting metrics. Bu...
over 2 years ago in Storage Scale (formerly known as GPFS) 0 Planned for future release
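
Until such an exporter exists in the product, here is a minimal sketch of the idea, assuming the Python prometheus_client library; the query_zimon() helper and the metric names are placeholder assumptions that would have to be backed by "mmperfmon query" or the ZIMon collector interface.

# Sketch of a Prometheus exporter for Spectrum Scale / ZIMon metrics (assumptions noted above).
import time
from prometheus_client import start_http_server
from prometheus_client.core import GaugeMetricFamily, REGISTRY

def query_zimon():
    """Hypothetical helper: return {metric_name: {node: value}} gathered from ZIMon."""
    # Placeholder data; a real exporter would parse mmperfmon / ZIMon output here.
    return {"gpfs_ns_read_ops": {"node1": 1234.0}, "gpfs_ns_write_ops": {"node1": 567.0}}

class ZimonCollector:
    def collect(self):
        for metric, per_node in query_zimon().items():
            family = GaugeMetricFamily(metric, "ZIMon metric " + metric, labels=["node"])
            for node, value in per_node.items():
                family.add_metric([node], value)
            yield family

if __name__ == "__main__":
    REGISTRY.register(ZimonCollector())
    start_http_server(9700)  # scrape port chosen arbitrarily
    while True:
        time.sleep(60)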

more useful output from mmrestripefs failures

It is possible, in a number of scenarios, to have an mmrestripefs operation fail when many disks are suspended and the filesystem is being restriped to bring the disks to the empty state. This is not an uncommon operation, especially when data is m...
11 months ago in Storage Scale (formerly known as GPFS) 1 Future consideration

zstd compression & dictionary support

If zstd support is added to the product, it would be very interesting to allow a few dictionaries to be used to improve (de)compression speed and compression ratio. This is ideal in situations where ILM policies can be written to target either ...
11 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration
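
To illustrate what dictionary support would buy, here is a small sketch using the Python zstandard bindings outside of Scale itself; the sample data, dictionary size, and compression level are arbitrary assumptions.

# Train a shared zstd dictionary from small, similarly structured records and use it
# for compression and decompression; illustrative only, not a Scale interface.
import zstandard as zstd

samples = [
    ("host=node%d user=u%d op=read path=/data/file%d.dat bytes=%d\n" % (i % 32, i % 500, i, i * 37)).encode()
    for i in range(2000)
]
dictionary = zstd.train_dictionary(4096, samples)

compressor = zstd.ZstdCompressor(level=3, dict_data=dictionary)
decompressor = zstd.ZstdDecompressor(dict_data=dictionary)

payload = b"host=node7 user=u42 op=read path=/data/file42.dat bytes=1554\n"
blob = compressor.compress(payload)
assert decompressor.decompress(blob) == payload
print("%d bytes -> %d bytes using the trained dictionary" % (len(payload), len(blob)))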

Ability to limit io per user for gpfs fs for a node/server

The client needs the ability to impose a restriction on the IO that a user can issue on a compute server, via cgroups or any other method. They currently have CPU and RAM limits per user, but users tend to overload the system with IO requests, which degrades system responsiveness.
6 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration
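
As a rough illustration of the cgroup route mentioned above, here is a sketch of a per-user block-IO cap using the cgroup v2 io controller; note that this throttles block devices and would not by itself cover GPFS client IO (which is why an equivalent limit inside Scale is being requested), and the device number and limits below are placeholders.

# Apply a per-user block-IO limit via the cgroup v2 "io" controller (run as root).
from pathlib import Path

UID = 1000                        # target user (placeholder)
DEVICE = "8:0"                    # major:minor of the block device to throttle (placeholder)
LIMIT = DEVICE + " rbps=104857600 wbps=104857600 riops=2000 wiops=2000"

slice_dir = Path("/sys/fs/cgroup/user.slice/user-%d.slice" % UID)

# The "io" controller must first be enabled for the slice, e.g.:
#   echo "+io" > /sys/fs/cgroup/user.slice/cgroup.subtree_control
(slice_dir / "io.max").write_text(LIMIT + "\n")
print("Applied io.max for uid %d: %s" % (UID, LIMIT))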

Monitoring fileset quota (block/files)

The customer wants to monitor via SNMP all the alerts that the system can generate, such as: fileset quota block (soft/hard), fileset quota files (soft/hard), Storage Pool metadata blocks/inodes, Storage Pool data blocks/inodes, and all other events that the system is...
6 months ago in Storage Scale (formerly known as GPFS) 0 Not under consideration

Support BSD groups semantics

The customer is migrating from Isilon, which supports configuring BSD group semantics, meaning that new files inherit the group ID of the parent directory. Some file systems on Linux support the "grpid" or "bsdgroups" mount option to enable the same seman...
over 1 year ago in Storage Scale (formerly known as GPFS) 1 Future consideration
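
A small check of the behaviour being requested: with grpid/bsdgroups semantics, a newly created file takes the group of its parent directory rather than the creating user's primary group. The directory below is a placeholder taken from the command line.

# Compare the gid of a newly created file with the gid of its parent directory
# and the creating user's primary gid, to show which inheritance rule is in effect.
import os
import sys

parent = sys.argv[1] if len(sys.argv) > 1 else "."
testfile = os.path.join(parent, "grpid_check.tmp")

with open(testfile, "w") as f:
    f.write("test\n")

dir_gid = os.stat(parent).st_gid
file_gid = os.stat(testfile).st_gid
primary_gid = os.getgid()
os.unlink(testfile)

# With grpid/bsdgroups semantics (or a setgid parent directory) file_gid == dir_gid;
# with the default SysV semantics file_gid == primary_gid.
print("parent dir gid=%d, new file gid=%d, creating user's primary gid=%d" % (dir_gid, file_gid, primary_gid))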

Storage scale performance statistics for SMB shares

The EMS GUI (Monitoring > Statistics) only provides SMB performance statistics at the cluster or node level. Having statistics for each individual SMB share would be very useful; NFS shares already have such statistics.
12 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration