IBM System Storage Ideas Portal


This portal is for opening public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 178 ideas

CES IP - bulk add of mmces addresses in node-affinity

We have found that for deployments of 16 or more CES nodes, node-affinity mode is the most predictable and stable; the challenge is deployment time. In balanced mode all of the addresses can be deployed in 1-2 minutes. With 32 CES nodes runn...
about 2 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
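
A minimal sketch of the gap, assuming the usual mmces syntax (IP addresses, node names, and the node-ip-map.txt mapping file are placeholders): in node-affinity mode each address has to be pinned with its own mmces call, while balanced mode takes the whole list at once.

    # node-affinity today: one mmces invocation per pinned address
    while read node ip; do
        mmces address add --ces-node "$node" --ces-ip "$ip"
    done < node-ip-map.txt

    # balanced mode, by contrast, deploys the whole list in one call
    mmces address add --ces-ip 10.0.0.1,10.0.0.2,10.0.0.3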

add --auto-inode-limit for independent filesets

In Storage Scale 5.1.4, mmcrfs/mmchfs introduced the --(no)auto-inode-limit switches. Unfortunately, these only work for whole file systems and cannot be fine-tuned for individual (independent) filesets. There are situations where you have a mixed env...
about 2 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration
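
What exists today versus what is being requested, as a hedged sketch; the second command is hypothetical and only illustrates the idea, it is not a real flag.

    # since 5.1.4: automatic inode-limit growth for the whole file system
    mmchfs fs1 --auto-inode-limit

    # requested (hypothetical, does not exist today): per-fileset control
    # mmchfileset fs1 fileset1 --auto-inode-limit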

CES NFS - disable NFS export (but keep in config)

When performing maintenance on larger file systems, it's advisable to first do a clean shutdown and then enable maintenance mode with mmchfs --maintenance-mode, to keep anyone from mounting the file system and processes from messing with it. This is quite problema...
about 2 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration
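
A sketch of today's workaround that this idea would replace, assuming an NFS export at /gpfs/fs1: the definition must be saved externally, removed for the maintenance window, and re-created by hand afterwards.

    # save the current export definitions for later re-creation
    mmnfs export list > /root/nfs-exports-backup.txt

    # remove the export so nothing can mount it during maintenance
    mmnfs export remove /gpfs/fs1

    # afterwards: re-create it manually from the saved definition
    # mmnfs export add /gpfs/fs1 --client "..."

A simple disable/enable switch would keep the export configuration in place across the maintenance window.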

API enhancement for File System Size and Utilization

In the current version (5.1.8), the API does not have a usable file system endpoint that shows the current file system size (data/system) or the file system utilisation (size / % used).
over 2 years ago in Storage Scale (formerly known as GPFS) 2 Future consideration
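
A hedged illustration of the gap (host name, credentials, and file system name are placeholders): the generic file system endpoint can be queried, but callers still end up parsing CLI output such as mmdf to get size and utilisation.

    # query the file system object over the Scale REST API (v2)
    curl -k -u admin:PASSWORD https://gui-node:443/scalemgmt/v2/filesystems/fs1

    # today's fallback: parse mmdf output on a cluster node instead
    mmdf fs1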

Capacity quota support for HSM managed file systems

We have two Storage Scale file systems which are HSM managed using Spectrum Protect for Space Management. In total we are storing ~40 PB of user data on tape here. Each group has been granted a number of files and an amount of capacity across all our file systems. ...
over 2 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
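
For context, a hedged sketch with placeholder group names and limits: block quotas can be set per group today, but once Spectrum Protect for Space Management migrates a file to tape, the remaining stub occupies almost no blocks, so migrated capacity escapes the quota.

    # set a capacity and file-count quota for a group on fs1
    mmsetquota fs1 --group projectA --block 100T:110T --files 10M:11M

    # report usage per group; tape-resident (migrated) data is not counted
    mmrepquota -g fs1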

Feature Enhancement to support multiple InfiniBand partition keys

Increase security against eavesdropping by limiting access to resources using InfiniBand partitions, so that cross-talk and malicious attacks can be eliminated.
over 2 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration

document privateSubnetOverride and make it switchable without shutting down the whole cluster

This helps when adding (additional) high-performance networks to existing clusters.
over 2 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
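
A hedged sketch of the mechanism involved; privateSubnetOverride is undocumented, the value shown is an assumption, and today the change is understood to take effect only after a full mmshutdown/mmstartup cycle, which is exactly what this idea wants lifted.

    # define the additional high-performance network as a GPFS subnet
    mmchconfig subnets="10.10.0.0"

    # undocumented parameter; currently requires a full cluster restart
    mmchconfig privateSubnetOverride=1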

Add zstd as a compression option

Although the existing compression options are okay, it would sometimes be nice to have more options where speed and decent compression ratios are achievable. Zstd has been gaining traction in the Linux world and is already on some IBM tape d...
over 2 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration
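
For context, a hedged sketch of how a compression algorithm is chosen per file today; the zstd value in the second command is the requested option, not an existing one.

    # today: pick one of the shipped algorithms (e.g. lz4 or z) per file
    mmchattr --compression lz4 /gpfs/fs1/data/file1

    # requested (hypothetical): zstd as an additional algorithm
    # mmchattr --compression zstd /gpfs/fs1/data/file1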

On the Storage Scale System (SSS), formerly ESS, the "mmlsnsd" command should show the NSD primary/secondary server relationship, as it previously did and still does for Spectrum Scale on SANs and Lenovo DSS.

Having "mmlsnsd" display the NSD disk data path ownership (primary/secondary server relationships) allows customers to balance disk assignments. In addition, Lenovo DSS does this already and customers migrating from Lenovo , like MorganStanley, ha...
over 2 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
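
The commands in question, for reference: on SAN-attached Spectrum Scale clusters and Lenovo DSS the server list printed here reflects the primary/secondary ordering, which is what this idea asks to restore on SSS/ESS.

    # list NSDs with their server lists (order implies primary/secondary)
    mmlsnsd

    # extended view, including local device and node information
    mmlsnsd -X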

CNSA / Rest API - Restrict GUI User access to a single Filesystem

Today, each CNSA instance requires two GUI users with the access roles ContainerOperator and CsiAdmin. These GUI roles have permissions to connect to and manage all existing file systems on the remote Scale cluster. That means: an OpenShift admin ...
over 2 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration
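
For context, a hedged sketch of how the two GUI users are created on the remote storage cluster today (user names and password are placeholders); each user lands in a group whose role spans every file system, which is the scope this idea wants narrowed.

    # on a GUI node of the remote storage cluster
    /usr/lpp/mmfs/gui/cli/mkuser cnsa_operator -p PASSWORD -g ContainerOperator
    /usr/lpp/mmfs/gui/cli/mkuser cnsa_csiadmin -p PASSWORD -g CsiAdmin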