IBM System Storage Ideas Portal


This portal is to open public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 172 ideas

Recover a failed Ganesha node (marked with the F flag) without a reboot

Hi, we had an NFS Ganesha node that failed, but we were not able to recover it; only a reboot solved it. Sysmon attempted to remove the F flag three times, but all attempts to acquire the lock failed. As a result, the node remained in a failed state desp...
5 months ago in Storage Scale (formerly known as GPFS) 1 Future consideration
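For context, a rough sketch of the manual checks an administrator might try before falling back to a reboot; the commands below exist in Storage Scale, but this flow is only an illustration of what the idea asks to make work reliably without rebooting:

    # Inspect CES node state and health events (the failed node shows the F flag)
    mmces node list
    mmhealth node show CES -v

    # Restart the system health monitor so it retries clearing the failed state;
    # in the case described above this still could not acquire the lock
    mmsysmoncontrol restart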

mmrestripefs % complete metric is nearly useless

Current behavior: mmrestripefs (and mmdeldisk) displays a % complete metric while it is running. Unfortunately, this seems to track the percentage of the inodes of the whole filesystem that it is currently working through. Deficiency of this--...
almost 4 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration
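For reference, the operations this idea refers to; the "% complete" figure they print today is what the requester finds misleading. A sketch with an illustrative filesystem and disk name:

    # Rebalance data across all disks; progress is reported as a % complete value
    mmrestripefs fs1 -b

    # Deleting a disk triggers a restripe and shows the same style of progress metric
    mmdeldisk fs1 disk1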

Storage Scale - A GPFS node with multiple NICs (IP addresses) should not be able to rejoin the cluster when the daemon interface is down

Our storage nodes have multiple NICs and IP addresses (RoCE network). When the NIC for the daemon IP is down (dev bond0), the node still keeps trying to rejoin the cluster again and again, because another NIC (dev bond1) is still working. But...
11 months ago in Storage Scale (formerly known as GPFS) 1 Future consideration
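Until such a check exists in the daemon itself, one can picture gating startup on the state of the daemon interface; this is only an illustrative stopgap sketch (bond0/bond1 are taken from the idea text):

    # Start GPFS only if the daemon interface (bond0) is actually up, instead of
    # letting the node rejoin over the other NIC (bond1)
    if [ "$(cat /sys/class/net/bond0/operstate)" = "up" ]; then
        mmstartup
    else
        echo "daemon interface bond0 is down; not starting GPFS" >&2
    fi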

Spectrum Scale Toolkit support for separate admin and daemon networks

For security reasons, some clients use different IPs for the daemon network and the admin network in a Spectrum Scale cluster, and the current version of the Toolkit does not support separate networks.
about 4 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
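For comparison, the base CLI already accepts separate daemon and admin hostnames in the node descriptor (DaemonNodeName:NodeDesignations:AdminNodeName); a minimal sketch, with hypothetical hostnames, of what the installation toolkit would need to be able to express:

    # nodes.txt -- daemon hostname, designations, admin hostname (hypothetical names)
    #   node1-daemon:quorum-manager:node1-admin
    #   node2-daemon::node2-admin
    mmcrcluster -N nodes.txt -C scale-cluster.example.com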

Expand the number of supported Protocol Clusters that can be configured on an IBM Spectrum Scale Storage Cluster

Raise the current limit of one storage cluster with up to five protocol clusters to one storage cluster with up to ten protocol clusters.
over 6 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration

Storage Scale - Safeguarded snapshots - allow expiration - let the client set its own expiration parameter in the GUI snapshot rule

Setting "Allow Expiration" for snapshot rules in the GUI keeps the snapshot only until the next execution, not for the whole retention period, nor does it allow the client to set its own retention. Based on the IBM Documentation "https://www.ib...
over 1 year ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Automate Spectrum Scale AFM Role Reversal Process for Failover

Request to automate the IBM Spectrum Scale filesystem's AFM role reversal process used during a failover event.
over 3 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
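For context, a rough sketch of the manual steps such automation would wrap, assuming an AFM-DR primary/secondary fileset pair and the mmafmctl failover subcommands; treat the exact invocations as an assumption rather than a recipe:

    # Promote the secondary fileset when the primary site fails (AFM-DR)
    mmafmctl fs1 failoverToSecondary -j dr_fileset

    # Once the original primary is back, reverse the roles again
    # (failback typically runs in start/stop phases; flags omitted here)
    mmafmctl fs1 failbackToPrimary -j dr_fileset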

Ability to specify the protocol -- RDMA/TCP -- along with the subnet/cluster reachability

Consider the following use case: there are two clusters, each with multiple nodes. The intra-cluster communication (i.e., among nodes within a cluster) happens via TCP and inter-cluster communication (i.e., among nodes in different clusters) happens v...
almost 2 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
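Today the related knobs are cluster-wide rather than per subnet or per remote cluster, which is the gap this idea describes; roughly, using real mmchconfig parameters with illustrative values:

    # RDMA (verbs) is enabled globally, and subnets only controls reachability;
    # there is no way to pin TCP to one subnet and RDMA to another
    mmchconfig verbsRdma=enable
    mmchconfig verbsPorts="mlx5_0/1"
    mmchconfig subnets="192.168.10.0/remotecluster.example.com"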

Use remote clients as helper nodes for policy scans and migration processing

Usually, IBM recommends pooling all ESS storage servers in a so-called storage cluster and then connecting one or multiple remote client (so-called compute) clusters to the ESS. In fact, that has meanwhile become a common best-practice cluster topology. O...
almost 2 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
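The relevant knob today is the helper-node list of mmapplypolicy; the idea is to allow nodes from remote compute clusters in that list. A sketch with hypothetical names:

    # Run the policy scan with helper nodes; currently these must be nodes of the
    # local (storage) cluster, not members of a remote compute cluster
    mmapplypolicy fs1 -P migrate.pol -N helper1,helper2 -g /fs1/tmp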

Add Support for NooBaa Metrics with Integration into Zimon Sensors

Currently, the existing mmces protocols have their metrics integrated into the Zimon sensor framework, but the newest one, mms3 (noobaa-core), does not. We use the current zimonGrafanaBridge to visualize these metrics in Grafana. The current available func...
6 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration
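For illustration only: existing protocol sensors are declared in the perfmon sensor configuration (viewable with mmperfmon config show); a hypothetical stanza of the kind this idea asks for, so that mms3/noobaa-core metrics would plug into the same Zimon framework:

    # Hypothetical sensor entry -- no such sensor ships today
    {
        name = "NooBaaS3"
        period = 10
    }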