IBM System Storage Ideas Portal


This portal is to open public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post your own:

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 167 ideas

Subnets Config Parameter in CNSA / Remote Cluster

With the MROT (Multi-Rail over TCP) capability on Scale/Scale System (ESS) for Ethernet networks, the ability to use it instead of network bonds in OCP environments would be beneficial. In a recent lab test using an ESS3500 + OpenShift (Fusion), MROT was used initially, however...
over 1 year ago in Storage Scale (formerly known as GPFS) 0 Future consideration
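For reference, on a classic (non-CNSA) cluster the daemon traffic can already be steered onto a high-speed network through the `subnets` configuration attribute; this idea is essentially to expose the same knob for CNSA / remote clusters. A minimal sketch, where the cluster name `ess3500.example.com` and the 192.168.10.0 data network are illustrative:

```shell
# Tell the Scale daemon to prefer the high-speed Ethernet subnet for
# node-to-node traffic. The optional "/clustername" suffix scopes the
# subnet to a specific (remote) cluster.
mmchconfig subnets="192.168.10.0/ess3500.example.com"

# Verify the setting; it takes effect as daemons are restarted.
mmlsconfig subnets
```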

Limit NFS Exports to a CES Group

The following limitation for NFS exports is in place (see A8.9 in the Q&A section for Spectrum Scale 5.1.2): if a file system that was previously exported successfully by NFS on a CES node becomes unavailable, the NFS daemon exits and the CES node becomes...
over 2 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
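Today, CES groups can already constrain where CES addresses are hosted, but an NFS export itself cannot be limited to a group, which is what this idea asks for. A hedged sketch of the existing pieces, with node names, the IP, and the export path all illustrative:

```shell
# Assign protocol nodes to a CES group.
mmchnode --ces-group siteA -N prot01,prot02

# Bind a CES IP to that group so it fails over only within the group.
mmces address add --ces-ip 10.0.0.50 --ces-group siteA

# Create an NFS export. Today this applies cluster-wide and cannot be
# restricted to the CES group above.
mmnfs export add /gpfs/fs1/data --client "10.0.0.0/24(Access_Type=RO)"
```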

Expand the number of supported Protocol Clusters that can be configured on an IBM Spectrum Scale Storage Cluster

Expand support from the current limit of one storage cluster with up to five protocol clusters to one storage cluster with up to ten protocol clusters.
almost 6 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration

Shorten the pending IO time to 5-30s during node expel

When a node in an ECE cluster is expelled from the cluster, the IO pending time is as long as 1-3 minutes. Some applications fail due to the long pending time, especially OLTP or AI training jobs. Competitors using traditional dual-controller mode c...
11 months ago in Storage Scale (formerly known as GPFS) 2 Future consideration

CES Group failover, within the locality, when using a stretched cluster

A client runs an active-active Spectrum Scale stretched cluster using protocol nodes, with 6 protocol nodes in total. Today, in case of a single node failure, the CES IP can fail over to any node, either in the same site or the remote site. This...
almost 2 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Support ILM pool migrations in stretched cluster without killing WAN

Currently, when migrating files between replicated storage pools in a stretched cluster, intense load is experienced on the WAN. This is because the migration is essentially a copy of the files (hopefully respecting readReplicaPolicy), and the wri...
over 3 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
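For context, the read side of a stretched cluster can already be kept site-local; it is the rewrite during pool migration that crosses the WAN. A sketch of the relevant knobs and a minimal migration rule, with the pool names `fast` and `capacity` and the file system name `fs1` illustrative:

```shell
# Prefer replicas on locally attached or same-subnet disks for reads.
mmchconfig readReplicaPolicy=local

# Illustrative ILM rule: migrate files not accessed for 30 days.
# Today the rewrite still replicates to both sites over the WAN,
# which is the behavior this idea asks to improve.
cat > policy.rules <<'EOF'
RULE 'cold' MIGRATE FROM POOL 'fast' TO POOL 'capacity'
  WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 30
EOF
mmapplypolicy fs1 -P policy.rules -I defer
```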

Enhance Ansible Toolkit with exclude node support on Install and deploy

In the heterogeneous world of Scale clusters, we increasingly have the requirement to install new nodes with higher versions of Scale because of dependencies on OS kernel levels (ABI changes, for example). We want the same function as the exclude suppo...
6 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Spectrum Scale VMware ESX v7 Support

Hello, we plan to migrate our SAS Grid cluster to a VMware-based platform. This platform is installed with ESX version 7.0 U2, but currently Scale supports only v6.x. This gap currently stops our migration plans. With the introduction of version 7 E...
over 3 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Scale GUI - Need to improve the pmcollector service with large amounts of performance data

Problem: the "pmcollector" service failed to start when there was a large amount of performance data (about 10 GB for 6 months). Performance data cannot be shown on the GUI, and restarting the GUI did not help. ZIMon cannot process large amounts of per...
almost 3 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Scale GUI Dashboard Nodes by Latency

Add the ability to "cherry pick" and display in the Scale GUI the top nodes by latency. This could be on the statistics widget or a new widget. Add the granularity of client nodes, NSD nodes, nodes by nodeclass XYZ, or custom-picked nodes. Additio...
12 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration