IBM System Storage Ideas Portal


This portal is for opening public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 529 ideas

Recover failed Ganesha node - which is marked with the F flag - without reboot.

Hi, we had an NFS Ganesha node that failed, but we were not able to recover it; only a reboot solved it. Sysmon attempted to remove the F flag three times, but all attempts to acquire the lock failed. As a result, the node remained in a failed state desp...
11 days ago in Storage Scale (formerly known as GPFS) 0 Submitted

REST API for mms3 - enhancement: create S3 user

Currently a REST API call to create a new S3 user account is missing. This is needed for several client applications. The AWS REST API and client provide this function, which is currently used by a client application... workaround to create user a...
about 1 month ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Scale S3/Nooba tiering to tape

With the new capability S3/Nooba replacing S3/Swift, we would use the Scale ILM capability to move buckets/filesets from disk to tape, thanks to Storage Archive integration.
10 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Process and automation to simplify the patching and software update of large node count clusters

Managing the currency of Spectrum/Storage Scale systems is difficult when you have over 1000 nodes within the cluster in question. The request is for the process and automation for both the local repository and cluster endpoints where upon r...
9 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Virtual Host Style addressing support for CES S3

Virtual Host Style addressing (https://bucket.my-s3-server.com) is the preferred S3 addressing option for AWS - see https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html Scale CES S3 does not support this out-of-the box yet. Wi...
4 months ago in Storage Scale (formerly known as GPFS) 2 Future consideration
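The difference between the two addressing styles can be sketched with a short Python snippet. The endpoint and bucket names below are hypothetical placeholders, not actual CES S3 defaults:

```python
from urllib.parse import urlparse

# Hypothetical endpoint and bucket, for illustration only.
endpoint = "my-s3-server.com"
bucket = "bucket"

# Path-style: the bucket name is part of the URL path.
path_style = f"https://{endpoint}/{bucket}/object.txt"

# Virtual-hosted style: the bucket name becomes a subdomain, so the
# server resolves the bucket from the Host header. This typically
# requires a wildcard DNS record (*.my-s3-server.com) and a TLS
# certificate that covers the subdomains.
virtual_style = f"https://{bucket}.{endpoint}/object.txt"

print(urlparse(path_style).hostname)     # my-s3-server.com
print(urlparse(virtual_style).hostname)  # bucket.my-s3-server.com
```

The DNS and certificate requirements are one reason virtual-hosted style needs explicit server-side support rather than working out of the box.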

When tape data is encrypted, but Spectrum Scale recall cache and Stub files are not, how do we encrypt those without recalling the data off tape?

Using both the Storage Archive and Storage Scale system (ESS), we need to be able to encrypt the data on ESS without impacting or recalling the tape data (which is already encrypted via the TS4500 Library). The client will be migrating from Gen2 ESS to...
about 1 year ago in Storage Scale (formerly known as GPFS) 1 Not under consideration

'mmdf Device -h' for Human Readable Output

I'm used to running 'df -h' on Linux operating systems to get human-readable output. With 'mmdf' I have to use 'mmdf Device --block-size=auto' for human-readable output, which is a bit unhandy for me. My suggestion is to introduce 'mmdf Device -h' for hu...
about 2 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration
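The idea amounts to a shorthand for the existing long option; a quick sketch of today's invocation versus the requested one (the device name 'gpfs01' is a placeholder):

```shell
# Existing syntax for human-readable capacity output:
mmdf gpfs01 --block-size=auto

# Requested shorthand (does not exist today; this is what the idea asks for):
mmdf gpfs01 -h
```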

Enable mmbackup to run into remote mounted file systems.

Many customers prefer using specific, dedicated networks to perform backups with IBM Spectrum Protect with Spectrum Scale. Today this is not possible using mmbackup because the tool can only be used on the local cluster. This request h...
almost 2 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Support Scale Client running on VMware ESXi 8.x

According to the support page, VMware ESX 7.x is supported but not ESXi 8.x. Table 72. VMware support matrix on VM guest https://www.ibm.com/docs/en/STXKQY/gpfsclustersfaq.html#virtual Also, please provide GDS (GPUDirect Storage) support informatio...
8 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Ability to limit io per user for gpfs fs for a node/server

The client needs the ability to impose restrictions on the IO that a user can issue on a compute server, via cgroups or another method. They currently have CPU and RAM limits per user, but users tend to overload the system with IO requests, which degrades responsiveness.
about 1 month ago in Storage Scale (formerly known as GPFS) 0 Under review
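For context, one mechanism such a limit might build on is cgroup v2 I/O throttling. Below is a hypothetical fragment; the device major:minor numbers, slice path, and limit values are all made up for illustration, and block-layer throttling would only cover I/O against local devices, not GPFS traffic sent over the network:

```shell
# Cap a user's read/write bandwidth (100 MiB/s) and IOPS (1000) on block
# device 8:16 via the cgroup v2 "io" controller. Path and values are
# illustrative only.
echo "8:16 rbps=104857600 wbps=104857600 riops=1000 wiops=1000" \
  > /sys/fs/cgroup/user.slice/user-1000.slice/io.max
```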