IBM System Storage Ideas Portal


This portal is for opening public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, to create and manage groups of ideas, or to create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 172 ideas

Provide a changelog for Scale releases

Please provide a changelog identifying the APARs fixed in each Scale Mod, Release, or Version change.
almost 5 years ago in Storage Scale (formerly known as GPFS) 2 Future consideration
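
Until such a changelog exists, one rough stopgap (a sketch, not an IBM-provided tool; the file names are illustrative assumptions) is to scrape APAR identifiers out of the release-note text shipped with each level, since Scale APARs follow the IJnnnnn pattern:

    # Hypothetical helper: extract APAR IDs (e.g. IJ12345) from two releases'
    # release-note text files and show what is new in the later one.
    grep -Eoh 'IJ[0-9]{5}' README-5.1.9.txt | sort -u > apars-5.1.9.txt
    grep -Eoh 'IJ[0-9]{5}' README-5.2.0.txt | sort -u > apars-5.2.0.txt
    comm -13 apars-5.1.9.txt apars-5.2.0.txt   # APARs present only in 5.2.0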

Ability to limit I/O per user for a GPFS file system on a node/server

The client needs the ability to impose restrictions on the I/O a user can generate on a compute server, via cgroups or any other method. They currently enforce CPU and RAM limits per user, but users tend to overload the system with I/O requests, which degrades system responsiveness.
7 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration
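
For local block devices, per-user I/O caps are what the cgroup v2 io controller already provides; the sketch below (group name, device numbers, and limits are illustrative assumptions) shows that existing mechanism, which notably does not govern GPFS client traffic, hence this request:

    # cgroup v2 sketch: cap one user's I/O on a local block device (major:minor 8:0).
    # Assumes the io controller is enabled on the parent cgroup. This does NOT
    # apply to GPFS I/O, which bypasses the local block layer on client nodes.
    mkdir -p /sys/fs/cgroup/user-jdoe
    echo "8:0 rbps=104857600 wbps=104857600 riops=2000 wiops=2000" \
        > /sys/fs/cgroup/user-jdoe/io.max
    echo "$PID" > /sys/fs/cgroup/user-jdoe/cgroup.procs   # move the user's shell in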

Adding SMB support on ppc64LE / SLES for SpectrumScale

We need SMB support on ppc64LE / SLES. SLES has a market share of 99% in HANA on Power. HANA will be the only database for SAP. To use SpectrumScale in HANA environments we need SLES support for SMB.
over 5 years ago in Storage Scale (formerly known as GPFS) 3 Future consideration

Report a Spectrum Scale event if the Ganesha server's open-file count exceeds the FD hard-limit threshold.

The Ganesha NFS server has a limit on the maximum number of open files, calculated as 80% of the maxFilesToCache parameter. If the Ganesha server reaches this limit, it stops working and all NFS clients lose access to data. In curren...
over 2 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
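
A minimal monitoring sketch of the requested check, assuming maxFilesToCache can be read with mmlsconfig and that Ganesha's open descriptors are countable under /proc (the 80% factor comes from the description above; verify the parsing against your release's output):

    # Sketch: warn when Ganesha's open FDs approach 80% of maxFilesToCache.
    mftc=$(/usr/lpp/mmfs/bin/mmlsconfig maxFilesToCache | awk '{print $2; exit}')
    limit=$(( mftc * 80 / 100 ))
    pid=$(pgrep -x ganesha.nfsd)
    open=$(ls /proc/"$pid"/fd | wc -l)
    [ "$open" -ge "$limit" ] && echo "WARN: ganesha FDs $open >= threshold $limit"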

Ability to change NSD names

We would like the ability to change the name of existing NSDs. This change could be done while the entire GPFS NSD server cluster is down, similar to changing the NSD server list.
over 4 years ago in Storage Scale (formerly known as GPFS) 2 Future consideration
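
For comparison, the existing server-list change referenced above is stanza-driven, roughly as sketched below (NSD and server names are illustrative); a rename would presumably need a similar offline, stanza-driven flow:

    # Existing procedure for changing an NSD's server list:
    cat > nsd.stanza <<'EOF'
    %nsd: nsd=nsd001
      servers=nsdserver2,nsdserver1
    EOF
    /usr/lpp/mmfs/bin/mmchnsd -F nsd.stanza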

Manual ticket creation by mmcallhome/GUI

mmcallhome is capable of automatically creating hardware tickets in the IBM support portal for failing hardware. Sometimes it would be helpful to create a ticket manually. The credential/system information is directly available. Also any snaps and trace data ...
8 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration
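
If memory serves, current releases already allow uploading data to an existing case via mmcallhome; treat the exact flags below as an assumption and check your release's documentation. The missing counterpart this idea asks for is creating the ticket itself:

    # Assumed syntax (verify against your release): upload a snap to an existing case.
    # The case ID is a placeholder.
    /usr/lpp/mmfs/bin/mmcallhome run SendFile --file /tmp/gpfs.snap.tar.gz --pmr TS0123456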

Enable SMB Multichannel

SMB Multichannel enables file servers to use multiple network connections simultaneously. It facilitates aggregation of network bandwidth and network fault tolerance when multiple paths exist (e.g. multiple network adapters configured for LACP on clien...
over 2 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
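
In stock Samba this is a single smb.conf switch; the sketch below shows the plain-Samba setting, not a supported Scale CES knob today, which is exactly what this idea asks IBM to provide:

    # Plain Samba (not CES-managed) sketch; assumes Samba >= 4.4 on Linux.
    [global]
        server multi channel support = yes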

Expediting file system recovery for very large environments

Enterprise Scale environments typically comprise a large quantity of inodes and/or high file counts, with footprints of a few hundred PB in a single filesystem. When an unexpected event occurs, such as fs log corruption, running mmfsck to recove...
5 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration
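
For reference, the offline check that becomes painful at this scale is typically run in read-only mode first; the flags below are the commonly documented ones, and the device name is illustrative:

    # Read-only inspection first (-n reports but does not repair), then repair (-y).
    /usr/lpp/mmfs/bin/mmfsck fs1 -n -v
    /usr/lpp/mmfs/bin/mmfsck fs1 -y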

AFM Azure Blob direct support

For data exchange, data access from multiple sites, and also access to data from within Azure, we would like to use Spectrum Scale AFM connected to an Azure blob, similar to how it connects to S3 storage. Currently this would require an S3 / Azure converter l...
about 3 years ago in Storage Scale (formerly known as GPFS) 2 Future consideration
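
Purely as an illustration of the requested interface (this does not exist today, and the account, container, and fileset names below are hypothetical), an Azure target might mirror the existing S3-style AFM target declaration:

    # Hypothetical syntax sketch; Azure Blob is NOT a supported AFM target today.
    /usr/lpp/mmfs/bin/mmcrfileset fs1 azcache --inode-space new \
        -p afmmode=iw -p afmtarget=https://myaccount.blob.core.windows.net/mycontainer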

List and stop long running commands

Some Spectrum Scale commands can take a long time to run, such as mmapplypolicy, mmrestripefs, or mmcheckquota. These might be run manually, or started by cron, a batch system, or a callback. But sometimes, once started, there is a need to stop the c...
about 4 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
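
Lacking a built-in lister, a rough workaround (standard Linux tooling only; the one-hour threshold is illustrative) is to scan for long-lived mm* processes:

    # List Scale commands that have been running for more than an hour.
    ps -eo pid,etimes,args | awk '$3 ~ /^\/usr\/lpp\/mmfs\/bin\/mm/ && $2 > 3600'
    # Stopping them safely is the hard part this idea asks IBM to solve:
    # a bare "kill" can leave work such as a restripe incomplete.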