IBM System Storage Ideas Portal


This portal is to open public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 528 ideas

Storage Scale - A GPFS node with multiple NICs (IP addresses) should not be able to rejoin the cluster when the daemon interface is down

Our storage nodes have multiple NICs and IP addresses (RoCE network). When the NIC carrying the daemon IP is down (dev bond0), the node still keeps trying to rejoin the cluster again and again, because its other NIC (dev bond1) is still working. But...
6 months ago in Storage Scale (formerly known as GPFS) 1 Future consideration

GUI snapshot deletion from oldest to newest

Due to inode manipulation, it is a best practice, and less work for GPFS, to delete the oldest snapshot first. However, when snapshots are deleted manually from the client through the GUI, the GUI should also order the deletions from oldest to newest. This w...
over 5 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration
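
As a rough illustration of the oldest-first order this idea asks the GUI to follow, here is a minimal Python sketch that scripts the deletion outside the GUI. The filesystem name gpfs0 is a placeholder, and the parsing assumes mmlssnapshot lists snapshots one per line, oldest first, after a two-line header.

```python
import subprocess

FS = "gpfs0"  # placeholder filesystem name

# List existing snapshots; assumes two header lines followed by one snapshot
# per line, already ordered oldest to newest (adjust parsing for your output).
out = subprocess.run(["/usr/lpp/mmfs/bin/mmlssnapshot", FS],
                     capture_output=True, text=True, check=True).stdout
snapshots = [line.split()[0] for line in out.splitlines()[2:] if line.strip()]

# Delete oldest first, which is the order the GUI is being asked to use.
for name in snapshots:
    subprocess.run(["/usr/lpp/mmfs/bin/mmdelsnapshot", FS, name], check=True)
```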

Storage Scale GUI command line does not show when Expiration (Safeguarded snapshot) is enabled

The client enabled the "Allow Expiration" option in the GUI. In the GUI panel, we see the Expiration column change from False to True. However, when checking with the /usr/lpp/mmfs/gui/cli/lssnaprule command, the output does not include the Expiration column, so i...
12 months ago in Storage Scale (formerly known as GPFS) 0 Planned for future release
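
A quick way to reproduce what the request describes is to check the header of the GUI CLI output for the missing column. The command path is taken from the request above; the column label "Expiration" is an assumption about how the requested field would appear.

```python
import subprocess

# Run the GUI CLI command mentioned in the request and look for an
# "Expiration" column in its header line (the field the idea asks to expose).
out = subprocess.run(["/usr/lpp/mmfs/gui/cli/lssnaprule"],
                     capture_output=True, text=True, check=True).stdout
lines = out.splitlines()
header = lines[0] if lines else ""
print("Expiration column present:", "Expiration" in header)
```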

GNR needs to be supported outside ESS

Since Spectrum Scale is a software-defined storage product, GNR needs to be supported outside of the Elastic Storage Server.
about 8 years ago in Storage Scale (formerly known as GPFS) 4 Not under consideration

Spectrum Scale GPFS client running on z/OS

Customer desires to speed up time to insights from their systems of record on z/OS.
almost 3 years ago in Storage Scale (formerly known as GPFS) 1 Not under consideration

Allow group owners to also query filesets

We have independent user filesets that are owned by root, but the group is actually the user's. Unfortunately, Spectrum Scale seems not to take group ownership into account, and only the owner can query the fileset (mmlsfileset), which seems overly re...
about 1 year ago in Storage Scale (formerly known as GPFS) 1 Not under consideration

Add GPFS Fileset quota metrics/monitoring

Product: Spectrum Scale quota monitoring (especially for filesets). We use GPFS fileset quotas. These fileset quotas are only monitored internally by the GPFS GUI, if the GPFS GUI is installed. If a fileset quota is exceeded, the system will give you n...
almost 3 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration
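
Until fileset quota metrics are exposed for external monitoring, a check along these lines can be scripted against mmrepquota -j. This is only a sketch: the filesystem name and threshold are placeholders, and the column positions are assumptions about the default report layout.

```python
import subprocess

FS = "gpfs0"        # placeholder filesystem name
THRESHOLD = 0.90    # warn when block usage reaches 90% of the hard limit

# mmrepquota -j reports per-fileset quotas; the column positions used below
# (fileset name, FILESET marker, KB in use, soft quota, hard limit) are assumptions.
out = subprocess.run(["/usr/lpp/mmfs/bin/mmrepquota", "-j", FS],
                     capture_output=True, text=True, check=True).stdout

for line in out.splitlines():
    fields = line.split()
    if len(fields) < 5 or fields[1] != "FILESET":
        continue                          # skip headers and non-data lines
    name, used, hard = fields[0], int(fields[2]), int(fields[4])
    if hard and used / hard >= THRESHOLD:
        print(f"fileset {name}: {used} of {hard} KB used")
```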

lowInodes Callback for filesets

The noDiskSpace callback is triggered on several occasions, one being an inode space running out of inodes. At that point in time the file space is already full and users see "no space on device" errors. We'd need a similar callback that triggers ear...
over 3 years ago in Storage Scale (formerly known as GPFS) 2 Future consideration
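
For context, the existing event can already be wired to a script with mmaddcallback; the sketch below registers a handler for noDiskSpace. The callback identifier, handler script path, and parameter list are illustrative only, and the earlier-firing "low inodes" trigger this idea asks for is precisely what is still missing.

```python
import subprocess

# Register a handler for the existing noDiskSpace event. Identifier, script
# path, and parameters are illustrative; this event fires only once space or
# inodes are already exhausted, which is why an earlier trigger is requested.
subprocess.run([
    "/usr/lpp/mmfs/bin/mmaddcallback", "noSpaceHandler",
    "--command", "/usr/local/sbin/no_space_alert.sh",  # hypothetical script
    "--event", "noDiskSpace",
    "--parms", "%eventName %fsName",
], check=True)
```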

Support 256 or more cluster watches per cluster.

Only 25 cluster watches are supported per cluster, but we need at least 256.
about 1 year ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Document privateSubnetOverride and make it switchable without shutting down the whole cluster

This helps when adding additional high-performance networks to existing clusters.
over 1 year ago in Storage Scale (formerly known as GPFS) 0 Future consideration