IBM System Storage Ideas Portal


This portal is for opening public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


My ideas: Storage Scale (formerly known as GPFS)


Ability to limit I/O per user for a GPFS file system on a node/server

The client needs the ability to impose a restriction, via cgroups or another method, on the I/O a user can generate on a compute server. They already enforce per-user CPU and RAM limits, but users can still overload the system with I/O requests, which degrades responsiveness. A cgroup-based sketch follows this entry.
16 days ago in Storage Scale (formerly known as GPFS) · 0 votes · Under review
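A minimal sketch of one possible host-side workaround, assuming cgroup v2 is mounted at /sys/fs/cgroup, the data disk is block device 8:0, and "alice" is the user to throttle (all three are assumptions, not Storage Scale features):

```python
import os

CGROUP_ROOT = "/sys/fs/cgroup"   # typical cgroup v2 mountpoint (assumption)
DEVICE = "8:0"                   # major:minor of the disk to throttle (assumption)
READ_BPS = 100 * 1024 * 1024     # 100 MiB/s read ceiling
WRITE_BPS = 50 * 1024 * 1024     # 50 MiB/s write ceiling

def make_iolimit_cgroup(name: str) -> str:
    """Create a cgroup and cap its block-I/O bandwidth via io.max."""
    cg = os.path.join(CGROUP_ROOT, f"iolimit-{name}")
    os.makedirs(cg, exist_ok=True)
    # io.max takes "MAJOR:MINOR rbps=N wbps=N" per device (cgroup v2 interface)
    with open(os.path.join(cg, "io.max"), "w") as f:
        f.write(f"{DEVICE} rbps={READ_BPS} wbps={WRITE_BPS}\n")
    return cg

def attach_pid(cg: str, pid: int) -> None:
    """Move a process (e.g. the user's login shell) into the cgroup."""
    with open(os.path.join(cg, "cgroup.procs"), "w") as f:
        f.write(str(pid))

if __name__ == "__main__":
    cg = make_iolimit_cgroup("alice")   # placeholder user name
    attach_pid(cg, os.getpid())         # demo: throttle this very process
```

Note that io.max throttles local block devices only; it cannot see GPFS-level operations or network traffic to NSD servers, which is precisely the gap a native per-user QoS in Storage Scale would close.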

Network resiliency enhancement: a mechanism for prioritizing control traffic over data traffic via QoS/PFC when RDMA is not enabled

In heavy-workload situations, some clients are expelled from the cluster because peer nodes cannot communicate: heartbeat/control traffic gets stuck behind data traffic. A dedicated VLAN (or VLANs) for control and data traffic should also be explored. A DSCP-marking sketch follows this entry.
about 1 month ago in Storage Scale (formerly known as GPFS) · 0 votes · Under review
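A minimal sketch of the kind of marking the idea asks for, assuming an IPv4 TCP control channel on Linux and the conventional CS6 DSCP value for control traffic (both assumptions; Storage Scale does not expose this knob today, and PFC still requires the switches to map DSCP to priority queues):

```python
import socket

DSCP_CS6 = 48          # Class Selector 6, conventionally used for network control
TOS = DSCP_CS6 << 2    # DSCP occupies the upper 6 bits of the ToS byte

def control_socket() -> socket.socket:
    """Open a TCP socket whose packets carry a high-priority DSCP marking."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # IP_TOS sets the ToS/DSCP byte on outgoing IPv4 packets (Linux)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS)
    return s

if __name__ == "__main__":
    s = control_socket()
    print("ToS byte set to:", s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
```

If the daemon marked its heartbeat sockets this way, the network could service them from a higher-priority queue even when data traffic saturates the link.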

Storage Scale ECE 5.1.9: support 256 storage nodes in one cluster

As the title says, we need the 5.1.9 (LTS) release to support 256 storage nodes in one cluster; this is necessary for us.
about 1 month ago in Storage Scale (formerly known as GPFS) · 0 votes · Under review

Add enhanced usage monitoring at the fileset-quota level

In our company the GPFS structure is based on three or four file systems, such as /production and /test, and we have created millions of filesets based on project names. Unfortunately, some of these filesets reach their maximum quota (100%) and the application goes down bec... A monitoring sketch follows this entry.
about 2 months ago in Storage Scale (formerly known as GPFS) · 0 votes · Under review
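A minimal polling sketch for the monitoring gap described here. It shells out to mmrepquota with -j (fileset quotas) and -Y (colon-delimited, machine-readable output) and reads the HEADER row to locate columns; the field names used (name, blockUsage, blockLimit) and the file system name "prodfs" are assumptions to verify against a live cluster:

```python
import subprocess

THRESHOLD = 0.90  # warn when a fileset uses >= 90% of its hard block limit

def filesets_near_limit(fs: str, threshold: float = THRESHOLD):
    """Return (fileset, usage, limit) tuples for filesets near their quota."""
    out = subprocess.run(["mmrepquota", "-j", "-Y", fs],
                         capture_output=True, text=True, check=True).stdout
    header, hits = {}, []
    for line in out.splitlines():
        fields = line.split(":")
        if "HEADER" in fields:
            # map field names from the header row to column positions
            header = {name: i for i, name in enumerate(fields)}
            continue
        if not header:
            continue
        try:
            name = fields[header["name"]]
            usage = int(fields[header["blockUsage"]])
            limit = int(fields[header["blockLimit"]])
        except (KeyError, IndexError, ValueError):
            continue  # skip rows that do not match the assumed layout
        if limit and usage / limit >= threshold:
            hits.append((name, usage, limit))
    return hits

if __name__ == "__main__":
    for name, usage, limit in filesets_near_limit("prodfs"):
        print(f"ALERT: fileset {name} at {usage}/{limit} KB")
```

Run from cron, this gives early warning before a project fileset hits 100% and takes its application down, which is the failure mode the idea describes.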

Scale ECE: optimize the efficiency of swapd processing and ensure consistency in file access

Current issue: a directory A is created on GPFS Client 1. Then, on Client 2, directory A is accessed and a subdirectory B is created inside it. If directory A is subsequently deleted and recreated on GPFS Client 1, attempting to create subdirectory... A two-node reproduction sketch follows this entry.
3 months ago in Storage Scale (formerly known as GPFS) · 0 votes · Under review
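A minimal two-node reproducer for the scenario as far as the description goes, assuming a shared GPFS mount at /gpfs/fs1 (a placeholder path). On a consistent cluster the final mkdir should succeed even though Client 2 still holds cached state for the old incarnation of directory A:

```python
import os
import shutil
import sys

MOUNT = "/gpfs/fs1"                 # placeholder GPFS mount point (assumption)
A = os.path.join(MOUNT, "A")

def step1():
    """Run on Client 1: create directory A."""
    os.makedirs(A, exist_ok=True)

def step2():
    """Run on Client 2: access A so it is cached, then create A/B."""
    os.listdir(A)
    os.makedirs(os.path.join(A, "B"), exist_ok=True)

def step3():
    """Run on Client 1: delete and recreate A, then create a subdirectory.

    This mkdir is where the idea reports inconsistent behavior when another
    client still references the deleted incarnation of A.
    """
    shutil.rmtree(A)
    os.makedirs(A)
    os.makedirs(os.path.join(A, "C"))

if __name__ == "__main__":
    {"step1": step1, "step2": step2, "step3": step3}[sys.argv[1]]()
```

Usage: run `python3 repro.py step1` on Client 1, then `step2` on Client 2, then `step3` back on Client 1, and observe whether the final mkdir fails.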