IBM System Storage Ideas Portal


This portal is for opening public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 546 of 5589

When tape data is encrypted but Spectrum Scale recall cache and stub files are not, how do we encrypt those without recalling the data off tape?

Using both Storage Archive and a Storage Scale system (ESS), we need to be able to encrypt the data on ESS without impacting or recalling the tape data (which is already encrypted via the TS4500 library). The client will be migrating from Gen2 ESS to...
over 1 year ago in Storage Scale (formerly known as GPFS) 1 Not under consideration
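As background on what a policy-based approach would look like: Scale file encryption is normally applied through mmchpolicy rules, and such rules only affect files created after the policy is in place, which is exactly why already-migrated stubs are hard to cover without rewriting them. The Python sketch below is hypothetical: the file system name gpfs0, the key '1', and the RKM stanza RKM_1 are placeholders, and a real policy would also have to carry over any existing placement rules, since mmchpolicy replaces the whole policy.

    import subprocess
    import tempfile

    # Hypothetical sketch: install a file-encryption policy on a Scale file system.
    # 'gpfs0', key '1' and RKM stanza 'RKM_1' are placeholders; existing placement
    # rules would need to be merged in because mmchpolicy replaces the whole policy.
    POLICY = """
    RULE 'EncSpec' ENCRYPTION 'E1' IS
        ALGO 'DEFAULTNISTSP800131A'
        KEYS('1:RKM_1')
    RULE 'EncryptAll' SET ENCRYPTION 'E1' WHERE NAME LIKE '%'
    """

    def apply_encryption_policy(filesystem="gpfs0"):
        # Write the policy text to a temporary file and hand it to mmchpolicy.
        with tempfile.NamedTemporaryFile("w", suffix=".pol", delete=False) as f:
            f.write(POLICY)
        subprocess.run(["mmchpolicy", filesystem, f.name], check=True)

    if __name__ == "__main__":
        apply_encryption_policy()

Even with such a rule in place, data already represented only by stubs would not be re-encrypted in place, which is the gap this idea asks IBM to close.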

Process and automation to simplify the patching and software update of large node count clusters

Managing the currency of Spectrum/Storage Scale systems is difficult when you have over 1,000 nodes in the cluster in question. The request is for a process and automation for both the local repository and the cluster endpoints where upon r...
12 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration
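For illustration only, here is a minimal Python sketch of the kind of rolling, batched update loop such automation would need, built around the standard mmshutdown/mmstartup commands. The node names, batch size, and package-update command are hypothetical placeholders; a production workflow would also drain workloads, verify quorum, and check mmhealth between waves.

    import subprocess

    NODES = [f"node{i:04d}" for i in range(1, 1001)]   # placeholder node list
    BATCH = 16                                         # nodes updated per wave (placeholder)

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def update_node(node):
        run(["mmshutdown", "-N", node])                     # stop GPFS on the node
        run(["ssh", node, "dnf", "-y", "update", "gpfs*"])  # placeholder package update
        run(["mmstartup", "-N", node])                      # bring GPFS back up

    for i in range(0, len(NODES), BATCH):
        for node in NODES[i:i + BATCH]:
            update_node(node)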

Virtual Host Style addressing support for CES S3

Virtual Host Style addressing (https://bucket.my-s3-server.com) is the preferred S3 addressing option for AWS; see https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html. Scale CES S3 does not support this out of the box yet. Wi...
8 months ago in Storage Scale (formerly known as GPFS) 3 Future consideration
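For reference, virtual-hosted-style addressing is also a client-side setting in the AWS SDKs. The boto3 sketch below shows what a client would request once the CES S3 server and wildcard DNS supported it; the endpoint, bucket, and credentials are placeholders.

    import boto3
    from botocore.config import Config

    # Placeholder endpoint and credentials; requires the server (and DNS) to accept
    # https://<bucket>.my-s3-server.com style requests.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://my-s3-server.com",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
        config=Config(s3={"addressing_style": "virtual"}),  # bucket.my-s3-server.com
    )

    # With path-style (the current CES S3 behaviour) the same request would go to
    # https://my-s3-server.com/bucket/<key> instead.
    s3.list_objects_v2(Bucket="bucket")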

Spectrum Scale ECE - Support 512 storage nodes in a single ECE cluster

With the rapid growth in demand for AIGC and autonomous driving, our customers are expecting a single GPFS cluster and file system to scale to a larger size. We need a single GPFS cluster to surpass the current limit of 256 storage nodes, reaching...
about 1 month ago in Storage Scale (formerly known as GPFS) 1 Under review

Allow use of a directory other than /tmp for GPFS temporary files.

We use GPFS / Spectrum Storage on our High Performance Computing cluster as its primary file system and use various mm* commands to monitor the health of the cluster and alert us if GPFS goes offline on a node. If a user's HPC compute job inadver...
about 1 month ago in Storage Scale (formerly known as GPFS) 0 Not under consideration
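For context, here is a minimal sketch of the kind of health check the description refers to, parsing the machine-readable (-Y) output of mmgetstate; the alert hook is a placeholder. A wrapper like this cannot change where the mm* commands place their scratch files, which is why the idea asks for a configurable directory instead of /tmp.

    import subprocess

    def gpfs_state():
        # Parse the colon-delimited (-Y) output of mmgetstate for the local node.
        out = subprocess.run(["mmgetstate", "-Y"], capture_output=True,
                             text=True, check=True).stdout
        header, data = None, None
        for line in out.splitlines():
            fields = line.split(":")
            if "HEADER" in fields:
                header = fields
            elif header and len(fields) >= len(header):
                data = fields
        return data[header.index("state")] if header and data else "unknown"

    state = gpfs_state()
    if state != "active":
        print("ALERT: GPFS is not active on this node, state =", state)  # placeholder alert hook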

Allow IBM Spectrum Protect Client to be installed directly on the ESS Appliance Family

When performing backup operations on ESS using mmbackup and IBM Spectrum Protect, the current architecture needs a backup proxy in order to configure IBM Spectrum Protect and mmbackup. In this configuration, mmbackup currently forces t...
about 2 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration

Support Scale Client running on VMware ESXi 8.x

According to the support page, VMware ESXi 7.x is supported but not ESXi 8.x (Table 72, VMware support matrix on VM guest: https://www.ibm.com/docs/en/STXKQY/gpfsclustersfaq.html#virtual). Also, please provide GDS (GPUDirect Storage) support informatio...
11 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Support more than 15 remote clusters with fileset access control

The customer needs strict separation of the filesets accessible on different compute clusters, and wants to utilize RFAC to control which view of the file system each of the compute clusters has access to. The customer was then surprised when they hit the limit...
about 1 year ago in Storage Scale (formerly known as GPFS) 2 Future consideration

Efficient Deletion of Fileset backup data based on mmbackup per Fileset

We are running multi-petabyte Scale file systems in our HPC environment. User data is organized per project using independent filesets. The backup of user data is performed using mmbackup per fileset. For new projects, a new fileset is created. Wh...
28 days ago in Storage Scale (formerly known as GPFS) 0 Under review
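To illustrate the setup described (not the requested deletion feature), a hedged sketch of per-fileset incremental backups with mmbackup follows; the junction paths and the Spectrum Protect server name are placeholders.

    import subprocess

    PROJECTS = ["project_a", "project_b"]              # placeholder fileset junction names

    for project in PROJECTS:
        subprocess.run(
            ["mmbackup", f"/gpfs/fs1/{project}",       # fileset junction path (placeholder)
             "--scope", "inodespace",                  # limit the run to this independent fileset
             "-t", "incremental",
             "--tsm-servers", "TSMSERVER1"],           # placeholder Spectrum Protect server
            check=True,
        )

What the idea asks for is the missing counterpart: an efficient way to expire a fileset's backup data once its project is retired.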

Enable AWS STS support

Currently, generating a temporary token is associated with an object or file. In the case of many thousands of files, we would prefer STS, which would allow us to create temporary sessions associated with a bucket that can be used to upload, via S3, many ...
9 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration
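As a sketch of the requested flow (not an existing CES S3 capability), the boto3 example below obtains short-lived credentials once via STS and then reuses the session for many uploads to a single bucket. The endpoint, credentials, and bucket are placeholders, and it assumes an STS-compatible endpoint would be exposed by the service.

    import boto3

    # Step 1: trade long-lived credentials for a temporary session token (placeholder endpoint).
    sts = boto3.client("sts",
                       endpoint_url="https://my-s3-server.com",
                       aws_access_key_id="ACCESS_KEY",
                       aws_secret_access_key="SECRET_KEY")
    creds = sts.get_session_token(DurationSeconds=3600)["Credentials"]

    # Step 2: reuse the temporary credentials for many object uploads to one bucket.
    s3 = boto3.client("s3",
                      endpoint_url="https://my-s3-server.com",
                      aws_access_key_id=creds["AccessKeyId"],
                      aws_secret_access_key=creds["SecretAccessKey"],
                      aws_session_token=creds["SessionToken"])

    for i in range(3):                                  # thousands of files in practice
        s3.put_object(Bucket="bucket", Key=f"file-{i}", Body=b"data")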