IBM System Storage Ideas Portal


This portal is for opening public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site for additional information about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 178 ideas

Add Storage Class Attribute to Data Pushed to S3

When pushing data to S3 via the Cloudgateway service, it would be helpful to be able to specify the storage class. If you don't specify the storage class, the data is stored in S3 Standard. You can create a lifecycle policy to push the data to ...
over 4 years ago in Storage Scale (formerly known as GPFS) 2 Future consideration
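As context for this request, here is a minimal sketch of the lifecycle-policy workaround the idea alludes to, expressed with the standard AWS CLI rather than the Scale Cloudgateway service (which does not expose a storage-class option today). The bucket name, rule ID, 30-day window, and STANDARD_IA target are all hypothetical examples, not values from the idea.

```shell
# Build a lifecycle rule that transitions objects out of S3 Standard after 30 days.
# All names and values below are illustrative.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "transition-to-ia",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" }
      ]
    }
  ]
}
EOF

# Sanity-check the JSON before applying it.
python3 -m json.tool lifecycle.json > /dev/null && echo "lifecycle.json OK"

# Applying it to a bucket would then be (requires AWS credentials; not run here):
#   aws s3api put-bucket-lifecycle-configuration \
#     --bucket example-bucket --lifecycle-configuration file://lifecycle.json
```

The drawback the idea points out is that this only reclassifies data after upload; a storage-class attribute on the push itself would avoid the interim Standard-class storage cost.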

Improve 'mmvdisk filesystem delete' behavior

When deleting a vdisk set from a filesystem, the command is: mmvdisk filesystem delete --file-system udata --vdisk-set VS2. It responds with: "mmvdisk: This will run the GPFS mmdeldisk command on file system 'udata'", but it does NOT say what it is going ...
over 4 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Ability to change NSD names

We would like the ability to change the name of existing NSDs. This change could be done while the entire GPFS NSD server cluster is down, similar to changing the NSD server list.
over 4 years ago in Storage Scale (formerly known as GPFS) 2 Future consideration

Tools or self tune mechanisms for troubleshooting

Customers would like to have tools that would allow them to analyze our diagnostic information so they could more quickly determine what was wrong with the system and try to correct it (e.g. self-tuning based on the latency and throughput of the net...
over 4 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration

GPFS AFM gateways must handle migrated/offline files in asynchronous fashion

As per the case above, we have experienced many issues in the gateway (AFM) performance, stability and global usability as a result of long waiters arising from the use of offline files in the backend of the GPFS cache and home relationship. These...
over 4 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration

Support Soft-RoCE drivers

As 100/200 Gigabit RoCE technologies mature, not all clients are able to leverage the same "hard" RoCE HBAs and associated drivers such as Mellanox, which is the currently supported device driver. This request recognizes there are some HBAs that m...
over 4 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration

Confusing status message 'mmlspdisk'

My client submits the following: after carrier release the status is even more confusing:
[DIARSS root@ems1-xcat ~]# mmlspdisk all --not-ok
pdisk: replacementPriority = 6.77
       name = "e1d2s02"
       device = "//essio32.ib.rsshpc1/dev/sdpe(notEnabled/closed),...
over 4 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration

Allow acknowledging of host problems on Spectrum Scale GUI

System: Spectrum Scale GUI. Actor: admins monitoring the system. The GUI is our primary way of monitoring the health and performance of the system. When a worker node goes down, which is common in a large HPC system, the health of the filesystems sho...
over 4 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration

Allow more than 20 threshold rules in the GUI (mmhealth)

You can only create a maximum of 20 threshold rules, either through the Spectrum Scale GUI or directly with the mmhealth command. This limit should be increased (e.g. to a minimum of 50).
almost 5 years ago in Storage Scale (formerly known as GPFS) 2 Future consideration

Provide a changelog for Scale releases

Please provide a changelog identifying the APARs fixed in each Scale Mod, Release, or Version change.
almost 5 years ago in Storage Scale (formerly known as GPFS) 2 Future consideration