IBM System Storage Ideas Portal


This portal is to open public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Ideas


AFM Caching from NFS Server on Non-Standard Port

Add the ability for an NFS server/export to be used as a "home" on a non-standard, configurable port. This capability should also be available in the AFM parallel data transfer mappings.
3 months ago in Storage Scale (formerly known as GPFS) 0 Under review
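
A minimal sketch of what the requested syntax might look like, driving mmcrfileset from Python. The ":2050" port suffix in the afmTarget URL is hypothetical; that suffix is exactly the enhancement this idea asks for, and the filesystem and host names are placeholders:

    import subprocess

    # Hypothetical: mmcrfileset accepts an afmTarget such as
    # nfs://homehost/export today; the ":2050" port suffix below does
    # NOT exist yet and is the enhancement this idea requests.
    def create_afm_fileset(fs, fileset, home_host, export, port=2049):
        target = f"nfs://{home_host}:{port}{export}"  # hypothetical port syntax
        subprocess.run(
            ["mmcrfileset", fs, fileset,
             "-p", f"afmTarget={target}",
             "-p", "afmMode=single-writer",
             "--inode-space", "new"],
            check=True,
        )

    create_afm_fileset("gpfs01", "cache01", "nfshome.example.com",
                       "/export/home", port=2050)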

Allow multiple targets in an AFM-fileset (AFM-S3 only perhaps)

It would be good if we could have data coming from multiple sources in an AFM fileset. Perhaps a client has data that needs to be presented from NFS and S3. The application would have to talk to two different filesets to get data. The current use ...
10 months ago in Storage Scale (formerly known as GPFS) 3 Under review
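
A purely hypothetical sketch of how multiple targets might be declared; AFM accepts only a single afmTarget per fileset today, so the repeated "-p afmTarget=" arguments below are imagined syntax, and all names are placeholders:

    import subprocess

    # Imagined syntax: one cache fileset backed by both an NFS home and
    # an S3 bucket. Repeating afmTarget is NOT supported in any release.
    def create_multi_target_fileset(fs, fileset, targets):
        args = ["mmcrfileset", fs, fileset,
                "-p", "afmMode=independent-writer", "--inode-space", "new"]
        for target in targets:
            args += ["-p", f"afmTarget={target}"]  # hypothetical repetition
        subprocess.run(args, check=True)

    create_multi_target_fileset(
        "gpfs01", "multi01",
        ["nfs://homehost/export/data", "https://s3.example.com/bucket01"],
    )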

Auto expansion of inode to trigger once usage reaches around 95%

Today, Scale alerts and inode auto-expansion are triggered only when inode usage is very close to 100%. By then the filesystem is typically very active and busy, which is not optimal for performance. Would like to see the auto expansion of ...
about 2 years ago in Storage Scale (formerly known as GPFS) 1 Functionality already exists
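
Until the trigger threshold is configurable, a cron-style monitor can approximate the requested behavior. A minimal sketch, assuming the mmlsfileset -i column layout noted in the comment (it may differ between releases) and placeholder names:

    import subprocess

    # Poll inode usage with mmlsfileset -i and raise the limit via
    # mmchfileset once usage crosses 95%, instead of waiting for ~100%.
    THRESHOLD = 0.95
    GROWTH = 1.5  # grow the inode limit by 50% when the threshold is hit

    def check_and_expand(fs):
        out = subprocess.run(["mmlsfileset", fs, "-i"],
                             capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            parts = line.split()
            # assumed layout: Name Status ... MaxInodes AllocInodes UsedInodes
            if len(parts) < 4 or not (parts[-1].isdigit() and parts[-3].isdigit()):
                continue  # skip headers and malformed lines
            name, max_inodes, used = parts[0], int(parts[-3]), int(parts[-1])
            if max_inodes and used / max_inodes >= THRESHOLD:
                subprocess.run(["mmchfileset", fs, name, "--inode-limit",
                                str(int(max_inodes * GROWTH))], check=True)

    check_and_expand("gpfs01")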

Avoid cluster-wide NFS I/O hangs during the Ganesha grace period when a single protocol node restarts. We suggest enhancing the way Ganesha works so that clients connected to the other protocol nodes are not affected by the grace period; only the clients connected to the rebooted node should wait until its VIP moves to another available node. We have a big cluster with 6 protocol nodes, and when one of them goes down, all banking sectors are affected for the duration of the grace period. We hope for a solution to this long-standing design problem.

Not the whole system goes down: only the clients connected to the affected protocol node should disconnect, while the others keep working fine. This matters for all mega projects that use Spectrum Scale and need to avoid a SPOF; when one of the protocol ...
3 months ago in Storage Scale (formerly known as GPFS) 0 Submitted

AFM Azure Blob direct support

For data exchange, data access from multiple sites, and access to data from within Azure, we would like to use Spectrum Scale AFM connected to an Azure blob, similar to how it connects to S3 storage. Currently this requires an S3 / Azure converter l...
almost 4 years ago in Storage Scale (formerly known as GPFS) 2 Future consideration
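
A one-call sketch of what direct support might look like; the azure:// afmTarget scheme below is invented purely for illustration and exists in no Scale release, and the account and container names are placeholders:

    import subprocess

    # Invented azure:// scheme illustrating direct Blob support, by
    # analogy with today's S3-style object targets.
    subprocess.run(
        ["mmcrfileset", "gpfs01", "azcache",
         "-p", "afmTarget=azure://myaccount.blob.core.windows.net/container01",
         "-p", "afmMode=independent-writer",
         "--inode-space", "new"],
        check=True,
    )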

CNSA / Rest API - Restrict GUI User access to a single Filesystem

Today, each CNSA instance requires two GUI users with the access roles ContainerOperator and CsiAdmin. These GUI roles have permissions to connect to and manage all existing filesystems on the remote Scale cluster. That means: an Openshift Admin ...
over 2 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration

Protocols - SMB Multichannel

SMB Multichannel enables file servers to use multiple network connections simultaneously. It facilitates aggregation of network bandwidth and network fault tolerance when multiple paths (e.g. multiple network adapters configured for LACP on clien...
almost 3 years ago in Storage Scale (formerly known as GPFS) 0 Planned for future release
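
For reference, upstream Samba gates this feature behind a single smb.conf option. A small sketch, assuming the protocol nodes run Samba with testparm on the PATH (how CES SMB will eventually expose the option is not defined here), that reports whether it is in effect on a node:

    import subprocess

    # "server multi channel support" is the upstream Samba option for
    # this feature; testparm -s -v dumps the effective configuration
    # including defaults, so we can look for it in the output.
    def multichannel_enabled():
        out = subprocess.run(["testparm", "-s", "-v"],
                             capture_output=True, text=True).stdout
        return "server multi channel support = yes" in out.lower()

    print("SMB multichannel enabled:", multichannel_enabled())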

Restore more than 1 VM at a time with the TDPVE web GUI

Restore multiple VMs at once from the web GUI from the same job in IBM Spectrum Protect for Virtual Environments.
almost 4 years ago in Storage Protect Family / Product functionality 1 Future consideration

Customize At Risk intervals to reflect real environment

In order to have the Operations Center correctly reflect the risk status of Spectrum Protect nodes, we should be able to indicate accurately when the backups really run. Some clients, for instance, have nodes that run backups only on weekdays, so th...
almost 7 years ago in Storage Protect Family / Product functionality 2 Future consideration
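
A partial server-side workaround exists today: SET NODEATRISKINTERVAL can stretch the at-risk window per node. A sketch driving it through dsmadmc for weekday-only nodes; the node names and credentials are placeholders:

    import subprocess

    # Give weekday-only nodes a custom 72-hour at-risk interval so a
    # quiet weekend does not flag them as at risk in Operations Center.
    WEEKDAY_NODES = ["FILESRV01", "FILESRV02"]  # hypothetical node names

    for node in WEEKDAY_NODES:
        subprocess.run(
            ["dsmadmc", "-id=admin", "-password=secret", "-dataonly=yes",
             f"set nodeatriskinterval {node} type=custom interval=72"],
            check=True,
        )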

GUI snapshot deletion from oldest to newest

Due to inode manipulation, it is a best practice, and less effort for GPFS, to delete the oldest snapshot first. However, when doing snapshot deletion manually in the client using the GUI, it should also be ordered from oldest to newest. This w...
over 6 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration
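
Until the GUI orders deletions itself, the same best practice can be scripted. A minimal sketch, assuming mmlssnapshot prints the snapshot name first and a ctime-style timestamp in the last five fields (the layout may vary between releases); the filesystem name is a placeholder:

    import subprocess
    from datetime import datetime

    # List snapshots, sort by creation time, and delete from the oldest
    # onward with mmdelsnapshot, keeping the newest `keep` snapshots.
    def delete_oldest_snapshots(fs, keep=7):  # keep must be >= 1
        out = subprocess.run(["mmlssnapshot", fs],
                             capture_output=True, text=True, check=True).stdout
        snaps = []
        for line in out.splitlines():
            parts = line.split()
            if len(parts) < 6:
                continue
            try:  # assumed timestamp form, e.g. "Fri Oct 17 08:16:32 2014"
                created = datetime.strptime(" ".join(parts[-5:]),
                                            "%a %b %d %H:%M:%S %Y")
            except ValueError:
                continue  # header or otherwise unparsable line
            snaps.append((created, parts[0]))
        for _, name in sorted(snaps)[:-keep]:  # oldest first
            subprocess.run(["mmdelsnapshot", fs, name], check=True)

    delete_oldest_snapshots("gpfs01", keep=7)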