IBM System Storage Ideas Portal


This portal is for opening public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post your own idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 535 ideas

GPFS Windows support for zimon interfaces

Please provide the capability for performance monitoring of GPFS file systems on Windows. Currently, GPFS on Windows doesn't support the zimon interfaces.
almost 6 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration

expose pagepool statistics

There seems to be no supported way, and in fact no way at all that I can find, to determine how effective the pagepool cache is. The official ways to determine an optimum pagepool size are 1) "try it and find out", and 2) "pay one of our consultan...
about 5 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration
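
As background to the "try it and find out" complaint above: pagepool sizing today is a re-run-your-workload exercise driven by mmchconfig. Below is a minimal sketch of that loop in Python, assuming the script runs with administrative rights on a cluster node and that /usr/local/bin/run_workload.sh is a hypothetical placeholder for whatever benchmark you re-measure; there is no supported hit-rate statistic to consult instead.

```python
import subprocess

def set_pagepool(size: str) -> None:
    # mmchconfig pagepool=<size> -i changes the value and applies it
    # immediately to the running daemons (cluster-wide by default).
    subprocess.run(["mmchconfig", f"pagepool={size}", "-i"], check=True)

# "Try it and find out": step through candidate sizes and re-measure the
# workload each time, since no supported pagepool effectiveness metric exists.
for size in ("4G", "8G", "16G"):
    set_pagepool(size)
    subprocess.run(["/usr/local/bin/run_workload.sh"], check=True)  # hypothetical benchmark
```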

CES event callback support

There are no events for CES that can be used with `mmaddcallback`. If I run `mmhealth node eventlog`, I see lots of events, such as nfs_active, nfs_in_grace, handle_network_problem_info, nodestatechange_info, ces_network_ips_down, etc. It would be...
about 5 years ago in Storage Scale (formerly known as GPFS) 3 Not under consideration
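
To make the request concrete: mmaddcallback can already attach a script to core daemon events such as nodeLeave, but not to the CES events that mmhealth reports. Here is a rough sketch of how a callback registration looks today, with /usr/local/bin/notify_admin.sh as a hypothetical notification script; the idea asks for the same hook to accept events like nfs_active or ces_network_ips_down.

```python
import subprocess

# Register a callback for an event GPFS already exposes (nodeLeave).
# CES events such as nfs_active or ces_network_ips_down cannot be hooked
# this way today, which is what the idea above asks for.
subprocess.run(
    [
        "mmaddcallback", "notifyOnNodeLeave",
        "--command", "/usr/local/bin/notify_admin.sh",  # hypothetical script
        "--event", "nodeLeave",
        "--parms", "%eventNode",                        # node that triggered the event
    ],
    check=True,
)
```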

Increase number of dependent filesets

Guardant Health is using Kubernetes to control their workflow. They will have thousands of jobs and will need thousands of PVCs. The projection is that they will exceed the 10,000 fileset limit in 2022. The target for 2022 would be at least ...
over 3 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
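
For context, the Kubernetes pattern described above maps each PVC to its own dependent fileset, created and linked per volume; multiplying that by thousands of jobs is what runs into the fileset ceiling. A minimal per-PVC sketch, assuming a filesystem device named fs1 and a hypothetical junction path:

```python
import subprocess

def create_dependent_fileset(device: str, name: str, junction: str) -> None:
    # Without --inode-space the new fileset is dependent, i.e. it shares
    # the inode space of the fileset containing its junction.
    subprocess.run(["mmcrfileset", device, name], check=True)
    subprocess.run(["mmlinkfileset", device, name, "-J", junction], check=True)

# One fileset per Kubernetes PVC; thousands of PVCs approach the current
# 10,000 fileset limit the idea asks to raise.
create_dependent_fileset("fs1", "pvc-1234", "/gpfs/fs1/pvcs/pvc-1234")
```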

Allow mapping a CES address to a fixed interface

The current design aliases the CES IP to one of the interfaces, or the interface has a fixed IP in the same subnet. This is not flexible and wastes fixed IPs. The customer hopes for support for static CES IP mapping that can assign a CES IP to a dedicated etherne...
almost 2 years ago in Storage Scale (formerly known as GPFS) 1 Functionality already exists

Decrease memory requirement in CES.

The Scale FAQ lists 128GB of memory as the requirement for a CES node. That is huge for most SMB scenarios. Most customers have 50/100/200 SMB/NFS connections per CES node pair. Customers hope for support for 16G/32G/64G/128G memory models on CES nodes. If it's no, custo...
almost 2 years ago in Storage Scale (formerly known as GPFS) 1 Not under consideration

When using protocol nodes, the export IPs should be the only ones that allow mounting shares

With the use of protocol nodes, CES requires IPs for exporting shares. However, even with IPs dedicated to shares, the internal IPs (node IPs), which are used for administration only, also allow NFS mounts. The request is to isolate the n...
almost 9 years ago in Storage Scale (formerly known as GPFS) 3 Not under consideration

Spectrum Scale can't prioritize small I/O over Serial data block

Spectrum Scale can't prioritize small I/O over serial data block transfers. A data transfer performed by one customer runs fast, but another user can't even ls specific directories.
over 3 years ago in Storage Scale (formerly known as GPFS) 1 Not under consideration

Improve support for remote CES Protocol Clusters sharing only NFS exports, to support access to the same remote filesystem concurrently

The customer currently has more than 32,000 NFS exports across 5,000 paths, which takes a significant amount of time to perform an export definition refresh (any time a new export is added); this is generally in excess of 5 minutes for the export defin...
almost 2 years ago in Storage Scale (formerly known as GPFS) 1 Not under consideration
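
For scale context, each NFS export is defined individually (for example with mmnfs export add), and the refresh cost described above is paid every time one of those definitions changes. A minimal sketch, assuming a hypothetical export path and client subnet:

```python
import subprocess

# Add a single CES NFS export. With tens of thousands of exports defined,
# the idea reports that the export-definition refresh triggered by each
# such change can exceed five minutes.
subprocess.run(
    [
        "mmnfs", "export", "add", "/gpfs/fs1/projects/team-a",  # hypothetical path
        "--client", "10.0.0.0/24(Access_Type=RW,Squash=no_root_squash)",
    ],
    check=True,
)
```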

mmchconfig maxblocksize is not dynamically changeable

PMR TS001057386. Spectrum Scale Version 5 has a new maxblocksize default, but to change it on old clusters, all daemons must be down. The request here: mmchconfig maxblocksize should be dynamically changeable without co...
about 6 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration
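
As a point of reference, the current procedure is an offline one: the daemon must be down cluster-wide before maxblocksize can be changed, which is exactly the outage the idea asks to eliminate. A rough sketch of today's workflow, assuming administrative access to the cluster:

```python
import subprocess

def change_maxblocksize(new_value: str) -> None:
    # maxblocksize can currently only be changed with GPFS stopped
    # everywhere, so the whole cluster takes an outage for one parameter.
    subprocess.run(["mmshutdown", "-a"], check=True)                        # stop daemons on all nodes
    subprocess.run(["mmchconfig", f"maxblocksize={new_value}"], check=True)
    subprocess.run(["mmstartup", "-a"], check=True)                         # bring the cluster back up

change_maxblocksize("16M")
```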