IBM System Storage Ideas Portal


This portal is for opening public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 551 ideas

Spectrum Scale maintenance integration with RHEL "yum"

When patching is performed on RHEL (with yum update), binaries that match the kernel must be available concurrently from an IBM public repo site. 1) Patching of RHEL cannot cause a mismatch of the binaries between RHEL and Scale, 2) compilers canno...
about 6 years ago in Storage Scale (formerly known as GPFS) 5 Not under consideration

mmchconfig and mmvdisk configure need to be more transparent

Today, there is no association between mmvdisk configure and mmchconfig. The RFE is to have one command for setting both the GPFS parameters and the mmvdisk configuration, and for mmvdisk to have a way to preserve any previously customized parameters.
about 4 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration

adding back CLOSE_WRITE to watchfolder and mmwatch

As a customer, we need to be informed when a file gets closed, but only if the file was changed; we are not interested in CLOSE events when the file content was not changed. To sort out millions of false positives from these unchanged CLOSE cal...
about 2 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
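
The CLOSE_WRITE event this idea refers to has the same meaning as Linux inotify's IN_CLOSE_WRITE: it fires only when a file that was opened for writing is closed. As a rough illustration of that filtering (a minimal plain-Linux inotify sketch, not the Storage Scale watch folder or mmwatch interface; the watched path is a placeholder), subscribing to CLOSE_WRITE alone means read-only closes never produce events in the first place:

```python
# Minimal sketch of CLOSE_WRITE semantics using plain Linux inotify via ctypes.
# This is NOT the Storage Scale watch folder / mmwatch interface; it only
# illustrates the event filtering the idea asks for. The path is a placeholder.
import ctypes
import ctypes.util
import os
import struct

IN_CLOSE_WRITE = 0x00000008    # a file opened for writing was closed
IN_CLOSE_NOWRITE = 0x00000010  # a file opened read-only was closed (the noise to avoid)

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
fd = libc.inotify_init()
# Subscribe to CLOSE_WRITE only, so closes of unchanged (read-only) files are
# filtered at the source instead of producing millions of false positives.
libc.inotify_add_watch(fd, b"/path/to/watched/dir", IN_CLOSE_WRITE)

EVENT_HEADER = struct.Struct("iIII")  # wd, mask, cookie, length of the name field
while True:
    buf = os.read(fd, 4096)
    offset = 0
    while offset < len(buf):
        wd, mask, cookie, name_len = EVENT_HEADER.unpack_from(buf, offset)
        start = offset + EVENT_HEADER.size
        name = buf[start:start + name_len].rstrip(b"\0")
        if mask & IN_CLOSE_WRITE:
            print("closed after write:", name.decode())
        offset += EVENT_HEADER.size + name_len
```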

Add Storage Class Attribute to Data Pushed to S3

When pushing data to S3 via the Cloudgateway service, it would be helpful to be able to specify the storage class. If you don't specify the storage class, the data will be stored in S3 Standard. You can create a Lifecycle policy to push the data to ...
about 4 years ago in Storage Scale (formerly known as GPFS) 2 Future consideration
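
For context, the underlying S3 API already lets a client choose the storage class per upload, which is the knob this idea asks the Cloudgateway service to expose. A minimal boto3 sketch (the bucket name and object key are placeholders, and this is the raw S3 call, not the Storage Scale cloud gateway code path):

```python
# The S3 PutObject API accepts a StorageClass parameter, so an object can land
# directly in e.g. STANDARD_IA instead of S3 Standard plus a later lifecycle
# transition. Bucket and key are placeholders for illustration only.
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="example-bucket",        # placeholder bucket name
    Key="fileset/data/file.bin",    # placeholder object key
    Body=b"example payload",
    StorageClass="STANDARD_IA",
)
```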

RHEL 9.x and GPFS GUI daemon fail bug?

checkPortIsAlreadyUsed, line 67 of "/usr/lpp/mmfs/gui/bin-sudo/check4iptables": this function can act unintentionally when searching for port 443. "checkPortIsAlreadyUsed" can act unintentionally when searching for port 443 on the 72nd line of "/...
9 months ago in Storage Scale (formerly known as GPFS) 1 Is a defect

A command is needed to manually drop the FilesToCache and StatCache of a client node with GPFS mounted, in order to release the tokens held on the server

same as the topic
9 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Isolate filesets per node, similar to how NFS mounts can be isolated by directory.

As of now, there are 2 workarounds: 1. Manage ACLs for PBs of storage, which gets increasingly complex to manage. 2. Create multiple small file systems of a few terabytes. Creating small file systems just to mount them differently makes it very ineffici...
about 5 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration

AFM-COS configuration option for proxy servers

At the moment you cannot configure AFM-COS to use an explicit http(s) proxy server to connect to an external S3 storage. Please also add proxy support to AFM-COS as it is already available for the TCT or call home features.
over 3 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
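
To illustrate what an explicit proxy setting looks like at the S3-client level (a generic boto3/botocore sketch with placeholder proxy and endpoint addresses, not an AFM-COS configuration option, which is exactly what the idea says is missing):

```python
# Generic sketch of an S3 client routed through an explicit HTTP(S) proxy.
# Proxy and endpoint URLs are placeholders; AFM-COS itself offers no
# equivalent setting today, which is what this idea requests.
import boto3
from botocore.config import Config

proxied = Config(proxies={"https": "http://proxy.example.com:3128"})  # placeholder proxy
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.com",  # placeholder external S3/COS endpoint
    config=proxied,
)
s3.list_buckets()  # all requests now go through the configured proxy
```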

Allow Windows Server to re-export Spectrum Scale mount and serve SMB/CIFS

Allow a Windows NSD client (server licensed) to share the Spectrum Scale mount it has from a Windows NSD Server.
almost 8 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration

GPFS client support for Rocky Linux

We expect the GPFS client to support more operating systems to meet the needs of most users, for example Rocky Linux.
9 months ago in Storage Scale (formerly known as GPFS) 1 Not under consideration