IBM System Storage Ideas Portal


This portal is for opening public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 535 ideas

mmchconfig and mmvdisk configure need to be more transparent

Today, there is no association between mmvdisk configure and mmchconfig. The RFE asks for a single command that sets both GPFS parameters and mmvdisk parameters, and for mmvdisk to have a way to preserve any previously customized parameters.
almost 4 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
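
For context, a sketch of the two disjoint code paths this idea wants unified, assuming the current command forms (the node class name and pagepool value are hypothetical):

    mmchconfig pagepool=16G -N eceNodes            # hand-tuned GPFS parameter
    mmvdisk server configure --node-class ess_nc1  # may overwrite such tuning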

Disable NFS export

When performing maintenance on larger file systems, it is advisable to first do a clean shutdown and then enable maintenance mode with mmchfs --maintenance-mode, to avoid anyone mounting the file system or processes interfering with it. This is quite problema...
over 1 year ago in Storage Scale (formerly known as GPFS) 1 Future consideration
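
A sketch of the workflow the idea builds on, assuming a file system named fs1 and the yes/no form of the maintenance flag:

    mmumount fs1 -a                      # clean unmount on all nodes
    mmchfs fs1 --maintenance-mode yes    # block mounts during maintenance
    # ... maintenance work ...
    mmchfs fs1 --maintenance-mode no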

Spectrum Scale maintenance integration with RHEL "yum"

When patching is performed on RHEL (with yum update), binaries that match the kernel must be available concurrently from an IBM public repo site. 1) Patching of RHEL cannot cause a mismatch of the binaries between RHEL and Scale; 2) compilers canno...
almost 6 years ago in Storage Scale (formerly known as GPFS) 5 Not under consideration
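
Until such a repo exists, a common mitigation is to hold kernel updates back or rebuild the portability layer after patching; a minimal sketch:

    # hold kernel updates back until matching Scale packages are available
    echo "exclude=kernel*" >> /etc/yum.conf
    yum -y update
    # or, after a kernel update, rebuild the GPFS portability layer locally
    /usr/lpp/mmfs/bin/mmbuildgpl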

Make CSI upgrade easier for disconnected clusters

At present, when you want to upgrade Spectrum Scale's CSI, there is no clear documentation: I go to the target version section and then to the 'upgrade' part, but there is nothing written for a disconnected cluster or for RHEL servers. To ha...
over 1 year ago in Storage Scale (formerly known as GPFS) 0 Future consideration
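
For disconnected clusters, the usual pattern is to mirror images into a local registry first; a sketch with skopeo (image name, tag, and registry host are illustrative, not taken from IBM documentation):

    skopeo copy \
      docker://quay.io/ibm-spectrum-scale/ibm-spectrum-scale-csi-driver:v2.10.0 \
      docker://registry.example.local/ibm-spectrum-scale-csi-driver:v2.10.0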

Add Storage Class Attribute to Data Pushed to S3

When pushing data to S3 via the Cloudgateway service, it would be helpful to be able to specify the storage class. If you don't specify the storage class, the data will be stored in S3 Standard. You can create a Lifecycle policy to push the data to ...
about 4 years ago in Storage Scale (formerly known as GPFS) 2 Future consideration
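
The lifecycle workaround mentioned above, as an AWS CLI sketch (bucket name is hypothetical; note S3 requires objects to be at least 30 days old before transitioning to STANDARD_IA):

    aws s3api put-bucket-lifecycle-configuration --bucket my-tct-bucket \
      --lifecycle-configuration '{"Rules":[{"ID":"to-ia","Status":"Enabled","Filter":{"Prefix":""},"Transitions":[{"Days":30,"StorageClass":"STANDARD_IA"}]}]}'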

AFM-COS configuration option for proxy servers

At the moment you cannot configure AFM-COS to use an explicit http(s) proxy server to connect to an external S3 storage. Please also add proxy support to AFM-COS, as it is already available for the TCT and call home features.
over 3 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
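
A hypothetical form the request might take, modeled on common proxy flags (the --proxy-* options shown do not exist today; mmafmcosconfig with its endpoint option is the current starting point):

    mmafmcosconfig fs1 cosFileset --endpoint https://s3.example.com \
      --proxy-server proxy.example.com --proxy-port 3128   # proposed, not real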

Isolate filesets per node, similar to how NFS mounts can be isolated by directory.

As of now, there are two workarounds. 1. Manage ACLs for PBs of storage, which gets increasingly complex to manage. 2. Create multiple small file systems of a few terabytes. Creating small file systems just to mount them differently makes it very ineffici...
almost 5 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration
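
For comparison, the NFS-style isolation the title refers to scopes each export to specific clients in /etc/exports (paths and host names are illustrative):

    /data/projA  nodeA(rw,sync,no_root_squash)
    /data/projB  nodeB(rw,sync)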

Parallelization/enhancement of pool migrations of individual large files

When running migrations between different pools, the tail latency of the ILM run often appears to be dictated by a small number of large files. This seems to be due to the pool migration of any individual file being a single-threaded activity that i...
8 months ago in Storage Scale (formerly known as GPFS) 2 Future consideration
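
To illustrate the current limits: mmapplypolicy can spread a migration over nodes and threads, but each individual file is still handled by one thread (file system, node, and pool names are hypothetical):

    # migrate.pol contains:  RULE 'drain' MIGRATE FROM POOL 'fast' TO POOL 'slow'
    mmapplypolicy fs1 -P migrate.pol -N helper1,helper2 -m 24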

User unable to query fileset if permissions are root as owner and user as group

It is not unusual for user directory permissions to be set with root as the owner and the user as the group, for operational reasons. However, this leads to a situation where users have a degraded level of service, as they cannot query their file...
over 3 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
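
A minimal reproduction of the scenario, with hypothetical user, fileset, and file system names:

    chown root:alice /gpfs/fs1/home/alice   # root owns, user's group on the dir
    chmod 2770 /gpfs/fs1/home/alice
    # run as alice: querying her own fileset quota is degraded/denied
    mmlsquota -j home_alice fs1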

mmhealth node show should explicitly show the cluster status, too

Currently mmhealth node show can report the node as healthy while there are problems elsewhere in the cluster, e.g. in the filesystem component:
    # mmhealth node show
    Node name:     somenode.somedomain
    Node status:   HEALTHY
    Status Change: 13 days ag...
almost 3 years ago in Storage Scale (formerly known as GPFS) 1 Planned for future release
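
Today the cluster-wide view has to be queried separately; the idea asks for the node view to surface it as well:

    mmhealth node show      # node-local components: can report HEALTHY
    mmhealth cluster show   # cluster-wide view: may reveal DEGRADED components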