IBM System Storage Ideas Portal


This portal is for opening public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post your own idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 6 ideas

Enhance AFM to transfer NFSv4 ACLs from NFS servers such as Isilon and store them correctly as GPFS ACLs

Storage Scale and Storage Scale System are great products. But in many cases another storage system is already in use, and there is no easy way to migrate petabytes of data - except perhaps with AFM? Almost all installations use e.g. AD integrations and NFS...
9 months ago in Storage Scale (formerly known as GPFS) 1 Functionality already exists

Auto expansion of inodes should trigger once usage reaches around 95%

Today, Scale alerts and auto expansion are triggered when inode usage is very close to 100%. Typically that happens when the filesystem in use is very active and busy, which is not optimal for performance. We would like to see the auto expansion of ...
about 1 year ago in Storage Scale (formerly known as GPFS) 1 Functionality already exists
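Until such a configurable threshold exists in the product, the logic this idea asks for can be sketched as an external check run from cron. A minimal illustration - the 95% threshold and the inode counts below are hypothetical, and in a real deployment the numbers would be parsed from the output of `mmlsfileset <fs> -L -i`:

```python
# Sketch of an external inode-usage check that could trigger expansion
# well before usage reaches ~100%. The figures used here are made up;
# in practice they would come from `mmlsfileset <fs> -L -i`.

THRESHOLD = 0.95  # hypothetical: expand once 95% of allocated inodes are used

def needs_expansion(used_inodes: int, allocated_inodes: int,
                    threshold: float = THRESHOLD) -> bool:
    """Return True when inode usage has crossed the expansion threshold."""
    if allocated_inodes <= 0:
        return False
    return used_inodes / allocated_inodes >= threshold

# 9,600,000 used of 10,000,000 allocated -> 96% -> expansion needed
print(needs_expansion(9_600_000, 10_000_000))  # True
print(needs_expansion(5_000_000, 10_000_000))  # False
```

A wrapper script could then call `mmchfileset` to raise the inode limit whenever the check returns True, instead of waiting for the built-in near-100% trigger.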

Monitoring quota usage per fileset (not inodes, capacity usage)

Hi team, we found a gap in quota monitoring of Spectrum Scale filesets: we have the capability to monitor inode usage per fileset, but not quota consumption in terms of capacity. Because of that, we urgently need a feature that could alert us when...
over 1 year ago in Storage Scale (formerly known as GPFS) 1 Functionality already exists
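In the meantime, the kind of capacity alert this idea requests can be approximated externally. A minimal sketch, assuming usage and limit figures (in KB) have already been parsed from `mmrepquota -j <fs>` output - the fileset names and the 90% threshold below are hypothetical:

```python
# Sketch of a per-fileset capacity (block quota) alert. The sample data
# is made up; in practice (fileset, used_kb, limit_kb) tuples would be
# parsed from `mmrepquota -j <fs>`, which reports block usage in KB.

ALERT_PCT = 90.0  # hypothetical alert threshold, percent of block limit

def capacity_alerts(filesets, alert_pct=ALERT_PCT):
    """Yield (fileset, pct_used) for filesets over the alert threshold."""
    for name, used_kb, limit_kb in filesets:
        if limit_kb <= 0:        # no hard limit configured -> nothing to check
            continue
        pct = 100.0 * used_kb / limit_kb
        if pct >= alert_pct:
            yield name, round(pct, 1)

sample = [("projects", 950_000, 1_000_000),  # 95.0% -> alert
          ("scratch",  100_000, 1_000_000),  # 10.0% -> ok
          ("archive",  500_000, 0)]          # no limit -> skipped
print(list(capacity_alerts(sample)))  # [('projects', 95.0)]
```

Hooked into a monitoring system, the yielded pairs would become the capacity alerts the fileset inode monitoring already provides for inodes.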

ECE edition 5.1.9 (LTS) should support 256 storage nodes in one cluster

As the title says, we need the 5.1.9 (LTS) version to support 256 storage nodes in one cluster.
about 2 months ago in Storage Scale (formerly known as GPFS) 0 Functionality already exists

Drive letter definition per Windows client.

Currently, Windows client mount drive letters can only be defined when a cluster is created. This is inflexible if a drive letter was not defined on a cluster and Windows clients need to be added later. The only workaround is to create a remote cluster t...
4 months ago in Storage Scale (formerly known as GPFS) 0 Functionality already exists

Allow mapping a CES address to a fixed interface

The current design aliases the CES IP to one of the interfaces, or requires the interface to have a fixed IP in the same subnet. This is not flexible and wastes fixed IPs. Customers would like support for static CES IP mapping that can assign a CES IP to a dedicated etherne...
over 1 year ago in Storage Scale (formerly known as GPFS) 1 Functionality already exists