IBM System Storage Ideas Portal


This portal is for opening public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post your own idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find additional information about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 11 ideas

Get SOBAR supported for Spectrum Archive Disaster Recovery

We use SOBAR to back up the metadata of all data that goes to Storage Archive. This is the only way to recover data that was migrated to Storage Archive when a disaster happens. In our use case we have an active arch...
about 1 month ago in Storage Scale (formerly known as GPFS) 0 Under review
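
For readers unfamiliar with the mechanism: SOBAR (Scale Out Backup and Restore) is driven by the mmimgbackup and mmimgrestore commands, which capture and replay a metadata image of the file system. A minimal sketch, assuming a file system gpfs0 and a placeholder work directory:

    # Capture a metadata image of file system gpfs0 (paths are placeholders).
    mmimgbackup gpfs0 -g /backup/sobar
    # After a disaster, re-create the file system and replay the image:
    #   mmimgrestore gpfs0 -g /backup/sobar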

Allow RDMA device recognition at mmfsd run time

Sometimes, when mmfsd starts up, not all RDMA devices are available (mostly due to temporarily down links). When such a link comes up later, mmfsd does not detect it and consequently ignores the port for the current run. Only when recycled, mmfs...
16 days ago in Storage Scale (formerly known as GPFS) 0 Under review
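
Until mmfsd can pick up late-arriving ports on its own, one stopgap is to watch the port state in sysfs and recycle GPFS on the node once the link is back. A rough sketch, assuming a placeholder device mlx5_0, port 1; note that recycling is disruptive:

    #!/bin/bash
    # Wait for the RDMA port to become ACTIVE, then recycle GPFS on this
    # node so mmfsd re-scans its configured verbsPorts.
    state=/sys/class/infiniband/mlx5_0/ports/1/state
    until grep -q ACTIVE "$state"; do sleep 10; done
    mmshutdown && mmstartup   # disruptive: unmounts GPFS on this node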

Allow mmrepquota -j to filter fileset(s)

The synopsis of mmrepquota looks like this:

    mmrepquota [-u] [-g] [-e] [-n] [-v] [--block-size {BlockSize | auto} | -Y] {-a | Device:Fileset ...}
    mmrepquota -j [-e] [-q] [-n] [-v] [-t] [--block-size {BlockSize | auto}] {-a | Device...} o...
25 days ago in Storage Scale (formerly known as GPFS) 0 Under review
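
Until a fileset filter exists, a common workaround is to post-process the per-fileset report, for example with awk. A sketch, assuming device gpfs0 and a fileset named projects (column positions may differ between releases):

    # Report all fileset quotas for gpfs0, keep only the "projects" row.
    mmrepquota -j gpfs0 | awk '$1 == "projects"'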

GPFS: enhance NFS performance in the chip design domain

We expect GPFS to enhance NFS performance to accommodate small-file scenarios in the chip design domain and to beat NetApp.
24 days ago in Storage Scale (formerly known as GPFS) 0 Under review

Fileset QoS: distinguish read and write

We expect fileset QoS to distinguish read from write operations, to make QoS more flexible; we also expect QoS support for metadata.
24 days ago in Storage Scale (formerly known as GPFS) 0 Under review
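
For context, today's mmchqos throttles are set per storage pool and QoS class (maintenance and other) and apply to reads and writes combined. A sketch of the current interface, with placeholder device, pool, and values:

    # One combined IOPS cap per class and pool; no read/write split today.
    mmchqos gpfs0 --enable pool=system,maintenance=500IOPS,other=unlimited
    mmlsqos gpfs0   # show measured IOPS against the configured caps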

RHEL 9.x and GPFS GUI daemon failure bug?

The function checkPortIsAlreadyUsed in "/usr/lpp/mmfs/gui/bin-sudo/check4iptables" (line 67) can act unintentionally when searching for port 443; the same check also misbehaves on line 72 of "/...
15 days ago in Storage Scale (formerly known as GPFS) 0 Under review
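
The behavior described is typical of an unanchored port match. A hypothetical illustration of the bug class, not the actual script contents:

    # Unanchored: matches a listener on 443, but also on 4430, 8443, ...
    ss -tln | grep 443
    # Anchored on the local-address field: matches only port 443 itself.
    ss -tln | awk '$4 ~ /:443$/'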

GPFS: support mounting a fileset or directory

I hope that GPFS (General Parallel File System) will support mounting filesets or directories. Since each fileset or directory is used by different individuals, supporting mounts of filesets and directories would achieve the effect of data ...
about 1 month ago in Storage Scale (formerly known as GPFS) 0 Under review
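
Until native support arrives, a common workaround is to bind-mount the fileset's junction path, or to export only that path over NFS, so users see just their own subtree. A sketch with placeholder paths:

    # Expose a single fileset to its users; the full file system remains
    # mounted underneath at /gpfs/fs0.
    mount --bind /gpfs/fs0/team-a /mnt/team-a
    # Alternatively, export only that subtree over NFS (/etc/exports):
    #   /gpfs/fs0/team-a  *(rw,sync)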

A command is needed to manually drop the FilesToCache and StatCache entries of a client node with GPFS mounted, to release the node's tokens on the server

Same as the title.
18 days ago in Storage Scale (formerly known as GPFS) 0 Under review
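
A partial workaround today is to shrink the caches temporarily with mmchconfig, which pressures the client into releasing cached entries and their tokens. A sketch with placeholder node name and values; note that on some releases these parameters only take effect after a daemon restart:

    # Shrink the caches on client1, then restore the original sizes.
    mmchconfig maxFilesToCache=1000,maxStatCache=512 -i -N client1
    mmchconfig maxFilesToCache=100000,maxStatCache=10000 -i -N client1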

Storage Scale - A GPFS node with multiple NICs (IP addresses) should not be able to rejoin the cluster when the daemon interface is down

Our storage nodes have multiple NICs and IP addresses (RoCE network). When the NIC for the daemon IP (dev bond0) is down, the node still keeps trying to rejoin the cluster again and again, because its other NIC (dev bond1) is still working. But...
about 1 month ago in Storage Scale (formerly known as GPFS) 1 Under review
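
One way to approximate this today is a user callback that checks the daemon interface whenever GPFS starts and takes the node back down while the interface is broken. A rough sketch using mmaddcallback; the script path, interface name, and event choice are assumptions:

    # Register a hypothetical guard script to run at daemon startup.
    mmaddcallback daemonIfGuard --command /usr/local/bin/check-bond0.sh \
        --event preStartup
    # check-bond0.sh could contain something like:
    #   ip link show bond0 | grep -q 'state UP' || mmshutdown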

Enhance the Ansible Toolkit with exclude-node support for install and deploy

In a heterogeneous world of Scale clusters we have more and more the requirement to install new nodes with higher versions of Scale because of OS kernel-level dependencies (ABI changes, for example). We want the same function as the exclude suppo...
about 1 month ago in Storage Scale (formerly known as GPFS) 0 Under review
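
For comparison, plain Ansible already offers this through --limit with an exclusion pattern; an equivalent flag in the installation toolkit is what is being requested. A sketch with placeholder playbook and host names:

    # Deploy to all cluster hosts except two nodes that must stay on the
    # old Scale level (standard Ansible host-pattern syntax).
    ansible-playbook -i hosts deploy_scale.yml --limit 'all:!node7:!node8'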