IBM System Storage Ideas Portal


This portal is to open public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 177

Process and automation to simplify the patching and software update of large node count clusters

Managing the currency of Spectrum/Storage Scale systems is difficult when you have over 1000 nodes within the cluster in question. The request is for process and automation for both the local repository and cluster endpoints where upon r...
over 1 year ago in Storage Scale (formerly known as GPFS) 0 Future consideration
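
A minimal sketch of the kind of node-by-node rolling update the request above describes. mmshutdown, mmstartup, mmgetstate, and mmbuildgpl are existing Storage Scale commands; the node names, the dnf-based package update step, and the use of a local repository are assumptions made only to illustrate the flow, not a confirmed procedure:

  import subprocess
  import time

  NODES = ["node001", "node002"]   # placeholder node names

  def run(cmd):
      # Run a command and stop the rollout if it fails.
      print("+", " ".join(cmd))
      subprocess.run(cmd, check=True)

  def wait_until_active(node, timeout=600):
      # Poll mmgetstate until the node reports "active" again.
      deadline = time.time() + timeout
      while time.time() < deadline:
          out = subprocess.run(["mmgetstate", "-N", node],
                               capture_output=True, text=True).stdout
          if "active" in out:
              return
          time.sleep(10)
      raise RuntimeError(node + " did not return to the active state")

  for node in NODES:
      run(["mmshutdown", "-N", node])                       # stop GPFS on this node only
      run(["ssh", node, "dnf", "-y", "update", "gpfs*"])    # assumed update step from a local repository
      run(["ssh", node, "/usr/lpp/mmfs/bin/mmbuildgpl"])    # rebuild the portability layer
      run(["mmstartup", "-N", node])                        # bring the node back into the cluster
      wait_until_active(node)                               # wait before moving to the next node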

Data tiering of block ranges within a file

A file needs to be able to be located partially on multiple storage tiers, depending on various specific conditions like policy rules, selectable by ranges in bytes or #blocks, rules for file names, etc. The goal is that data within one single file...
7 months ago in Storage Scale (formerly known as GPFS) 2 Future consideration
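
Storage Scale ILM policy rules today place and migrate whole files between pools; the entry above asks for sub-file granularity. A purely hypothetical sketch, using invented names (RangeRule, placement) only to make the requested rule shape concrete, with byte ranges, file-name globs, and target pools taken from the wording of the request:

  import fnmatch
  from dataclasses import dataclass

  @dataclass
  class RangeRule:
      # Hypothetical rule: place one byte range of matching files on a given pool.
      name_glob: str      # e.g. "*.bam"
      start_byte: int     # first byte of the range (inclusive)
      end_byte: int       # end of the range (exclusive)
      target_pool: str    # e.g. "nvme", "capacity"

  RULES = [
      RangeRule("*.bam", 0, 64 * 1024**2, "nvme"),          # keep headers/indices on the fast tier
      RangeRule("*.bam", 64 * 1024**2, 2**63, "capacity"),  # bulk of the file on the cheap tier
  ]

  def placement(filename, size):
      # Which hypothetical pool each matching rule would assign to its byte range.
      return [(r.start_byte, min(r.end_byte, size), r.target_pool)
              for r in RULES
              if fnmatch.fnmatch(filename, r.name_glob) and r.start_byte < size]

  print(placement("sample.bam", 10 * 1024**3))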

CES Auth - SSSD

All clients use Windows systems connecting to the Scale cluster export services with Kerberos through Samba. Allow a user-defined nsswitch configuration for Spectrum Scale.
over 3 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
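
For context, the standard Linux way to route identity lookups through SSSD is a couple of lines in /etc/nsswitch.conf; the request above is for the CES authentication setup to accept such a user-defined configuration. This is a generic Linux fragment, not Scale-specific syntax:

  # /etc/nsswitch.conf fragment: identity lookups via SSSD
  passwd: files sss
  group:  files sss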

CES NFS - extended attributes

Storage Scale (GPFS) does support extended attributes for users, for example:
> getfattr -d stephan.txt
> md5sum stephan.txt
041ad89a0dc0772f384f8d1bf4ddff4b stephan.txt
> setfattr -n user.md5 -v 041ad89a0dc0772f384f8d1bf4ddff4b stephan.txt
...
6 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration
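
A minimal sketch of the same user-attribute round trip from Python; os.setxattr and os.getxattr are standard Linux calls, and the file name is a placeholder. The point of the request above is that such user.* attributes should also be reachable through the CES NFS export, which this purely local example does not demonstrate:

  import hashlib
  import os

  path = "stephan.txt"   # placeholder file on a Scale file system

  # Compute the checksum and store it as a user extended attribute.
  with open(path, "rb") as f:
      digest = hashlib.md5(f.read()).hexdigest()
  os.setxattr(path, "user.md5", digest.encode())

  # Read it back, as "getfattr -d" would show it.
  print(os.getxattr(path, "user.md5").decode())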

Support Scale Client running on VMware ESXi 8.x

According to the support page, VMware ESX 7.x is supported but not ESXi 8.x (Table 72, VMware support matrix on VM guest, https://www.ibm.com/docs/en/STXKQY/gpfsclustersfaq.html#virtual). Also please provide GDS (GPUDirect Storage) support informatio...
over 1 year ago in Storage Scale (formerly known as GPFS) 0 Future consideration

IO Performance Reporting/Tracking of Specific Applications / HPC Jobs

Add the ability to provide application-/job-specific performance monitoring and reporting (mmperfmon and GUI). Many HPC customers would like the ability to trace IOs from the filesystem perspective with the granularity of the application/job. Thi...
about 3 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration
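
mmperfmon already reports file-system level counters; a rough sketch of snapshotting it from Python around a job run shows why per-job attribution currently has to be correlated externally. The query arguments and metric names below are assumptions for illustration only, not confirmed mmperfmon syntax:

  import subprocess
  import time

  # Assumed query; check "mmperfmon query" documentation for the real syntax and metric names.
  QUERY = ["mmperfmon", "query", "gpfs_fs_bytes_read,gpfs_fs_bytes_written"]

  def snapshot():
      # Grab one raw mmperfmon report (cluster/file-system scope, not per job).
      return subprocess.run(QUERY, capture_output=True, text=True).stdout

  before = snapshot()
  # ... the HPC job of interest runs here ...
  time.sleep(60)
  after = snapshot()

  # Today, deciding that "this delta belongs to job X" has to be done by hand,
  # e.g. by matching scheduler start/stop times; the idea above asks for that
  # attribution to happen inside mmperfmon and the GUI.
  print(before)
  print(after)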

CES S3 - subnet restrictions in S3 bucket policies

One of our ESS clusters stores highly sensitive data for around two thousand projects that needs to be kept strictly separate between projects, so we need the possibility to add subnet requirements to buckets for proper data separation, like we c...
6 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration
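
For reference, AWS-style bucket policies express subnet restrictions with an IP-address condition on aws:SourceIp; the request above is for CES S3 to honour the same condition. A sketch using boto3 against a hypothetical CES S3 endpoint, with placeholder bucket name, subnet, and credentials:

  import json
  import boto3

  # Endpoint, credentials, and bucket name are placeholders.
  s3 = boto3.client("s3", endpoint_url="https://ces-s3.example.com",
                    aws_access_key_id="PROJECT_KEY",
                    aws_secret_access_key="PROJECT_SECRET")

  policy = {
      "Version": "2012-10-17",
      "Statement": [{
          "Sid": "ProjectSubnetOnly",
          "Effect": "Deny",
          "Principal": "*",
          "Action": "s3:*",
          "Resource": ["arn:aws:s3:::project-a-bucket",
                       "arn:aws:s3:::project-a-bucket/*"],
          # Deny every request that does NOT come from the project's subnet.
          "Condition": {"NotIpAddress": {"aws:SourceIp": "10.20.30.0/24"}}
      }]
  }

  s3.put_bucket_policy(Bucket="project-a-bucket", Policy=json.dumps(policy))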

Support more than 15 remote clusters with fileset access control

Customer needs strict separation of filesets accessible on different compute clusters, and wants to utilize RFAC to control what view each of the compute clusters has access to in the file system. Customer was then surprised when they hit the limit...
over 1 year ago in Storage Scale (formerly known as GPFS) 2 Future consideration

CES S3 - REST API for mms3 commands

Currently a REST API call to create a new S3 user account is missing. This is needed for several client applications. The AWS REST API and client provide this function, which is currently used by a client application... workaround to create user a...
about 1 year ago in Storage Scale (formerly known as GPFS) 1 Future consideration
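
To make the gap concrete, here is a sketch of what such a call might look like against the Storage Scale management REST API. The endpoint path, payload fields, port, and credentials below are entirely hypothetical, since the point of the idea is that no such endpoint exists today and accounts currently have to be created with the mms3 CLI:

  import requests

  # Hypothetical endpoint and payload -- not an existing API.
  url = "https://gui-node.example.com:443/scalemgmt/v2/ces/s3/accounts"
  payload = {"accountName": "project-a", "uid": 5001, "gid": 5001,
             "newBucketsPath": "/gpfs/fs1/project-a/buckets"}

  resp = requests.post(url, json=payload,
                       auth=("admin", "ADMIN_PASSWORD"),   # placeholder credentials
                       verify="/path/to/ca.pem")
  resp.raise_for_status()
  print(resp.json())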

Support for "directory swapping" (RENAME_EXCHANGE) on Linux

I propose adding support for the RENAME_EXCHANGE flag for directory renames in Storage Scale, to enable atomic directory swaps. Around kernel v3.15, Linux added the renameat2 syscall, which supports the RENAME_EXCHANGE flag to atomically swap two di...
6 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration
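
A minimal sketch of the call the idea above refers to, invoking glibc's renameat2 wrapper (available since glibc 2.28) via ctypes. The directory names are placeholders; on file systems that do not implement RENAME_EXCHANGE the call fails with EINVAL, which is exactly the gap the request describes for Storage Scale:

  import ctypes
  import os

  libc = ctypes.CDLL("libc.so.6", use_errno=True)

  AT_FDCWD = -100          # resolve paths relative to the current working directory
  RENAME_EXCHANGE = 2      # atomically swap source and target

  def exchange(path_a, path_b):
      # Atomically swap two directories (or files) in one syscall.
      ret = libc.renameat2(AT_FDCWD, path_a.encode(),
                           AT_FDCWD, path_b.encode(),
                           RENAME_EXCHANGE)
      if ret != 0:
          err = ctypes.get_errno()
          raise OSError(err, os.strerror(err))

  # Placeholder directories; this is the operation the idea asks Storage Scale to support.
  exchange("current", "staging")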