IBM System Storage Ideas Portal


This portal is for opening public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site for additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or to request help from IBM with submitting your ideas.


Storage Scale (formerly known as GPFS)

Showing 524 ideas

Option to avoid down drives

Option: when a device is down, allow writes to blocks on that device to be redirected to a different device rather than recorded as missed updates. This protects the file system from accumulating large numbers of missed-update blocks and retains the full replica count.
almost 6 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration

Allow for separation of client services from storage services

Allow for the optional separation of the GPFS mmfsd process into a client process (mmfscd) independent of the GPFS storage process (mmfssd). When something goes wrong with the I/O subsystem on one node, the applications on that node should still be able to...
almost 6 years ago in Storage Scale (formerly known as GPFS) 4 Not under consideration

Describe how to install/upgrade purchased DAE/DME license files when first using Spectrum Scale Developer Edition

The client wants to install Spectrum Scale Developer Edition today while waiting to receive their DME licenses. Requesting documentation that describes how to install/upgrade purchased DAE/DME license files when Spectrum Scale Developer Edition is cu...
almost 4 years ago in Storage Scale (formerly known as GPFS) 0 Not under consideration
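
For reference, on RPM-based systems the edition currently in effect can be inferred from the installed license package; a minimal check (the exact upgrade steps are what this idea asks IBM to document, and package naming may vary by release):

    # List whichever gpfs.license package is installed; the suffix
    # indicates the edition (e.g. gpfs.license.dm for Data Management):
    rpm -qa | grep gpfs.license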

Improve 'mmvdisk filesystem delete' behavior

When deleting a vdisk set from a filesystem, the command is:

    mmvdisk filesystem delete --file-system udata --vdisk-set VS2

It responds with:

    mmvdisk: This will run the GPFS mmdeldisk command on file system 'udata'

But it does NOT say what it is going ...
almost 4 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
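
A sketch of the behavior this idea requests; the confirmation text and NSD names below are hypothetical, not output of any shipped mmvdisk release:

    # Current prompt (from the idea text):
    $ mmvdisk filesystem delete --file-system udata --vdisk-set VS2
    mmvdisk: This will run the GPFS mmdeldisk command on file system 'udata'

    # Requested prompt: enumerate the affected member NSDs first
    # (hypothetical output; NSD names are placeholders):
    $ mmvdisk filesystem delete --file-system udata --vdisk-set VS2
    mmvdisk: This will run mmdeldisk on file system 'udata' for the
    mmvdisk: following member NSDs of vdisk set 'VS2': RG001VS2, RG002VS2
    mmvdisk: Do you wish to continue? (yes/no)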

Spectrum Scale Config Changes in GUI

Provide the ability for the Spectrum Scale GUI to show and change GPFS configuration parameters, and to provide recommendations for nodes and node classes in the GUI. A tuning page would check the configuration of nodes and node classes and allow f...
almost 2 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
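
Today these changes are made from the CLI; a minimal sketch of the commands such a GUI page would wrap (the node class name and pagepool value are illustrative):

    # Show the current configuration, including per-node overrides:
    mmlsconfig

    # Apply a tuning change to a node class ('nsdNodes' and the value
    # are placeholders):
    mmchconfig pagepool=16G -N nsdNodes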

Provide a button or tab that exposes the API calls that would have produced the preceding GUI activity.

The command line actions used are shown when a GUI action is completed, but this does not always map to a specific API call or sequence of API calls. It would be helpful for those trying to move to automated workflows to see what the API call woul...
almost 2 years ago in Storage Scale (formerly known as GPFS) 3 Future consideration
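
In the meantime, the management REST API can be exercised directly; a minimal sketch (host name and credentials are placeholders; /scalemgmt/v2 is the API base path):

    # Query the file systems known to the management API:
    curl -k -u admin:password \
      "https://gui-node.example.com:443/scalemgmt/v2/filesystems"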

Tools or self tune mechanisms for troubleshooting

Customers would like tools that would allow them to analyze our diagnostic information so they could more quickly determine what was wrong with the system and try to correct it (e.g., self-tune based on the latency and throughput of the net...
almost 4 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
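
Such a tool would presumably build on the diagnostics Storage Scale already ships; a sketch of existing commands that could feed automated analysis (run on any cluster node):

    mmhealth cluster show    # overall component health across the cluster
    mmdiag --network         # per-connection network statistics
    mmnetverify ping         # basic node-to-node network verification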

Overflow data placement

When creating a new filesystem with more than one pool, GPFS will guess where to put the file data. It would be nice if it were also capable of spilling over to other pools when the pool it guessed runs full: RULE 'default' SET POOL 'data' LIMI...
almost 6 years ago in Storage Scale (formerly known as GPFS) 4 Not under consideration
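
The placement-policy language already allows a fall-through of this shape once a policy is installed; a minimal sketch (file system name, pool names, and threshold are illustrative):

    # Place new files in 'data' until it is 95% full, then fall through
    # to the next rule:
    cat > /tmp/placement.pol <<'EOF'
    RULE 'default' SET POOL 'data' LIMIT(95)
    RULE 'overflow' SET POOL 'capacity'
    EOF
    mmchpolicy udata /tmp/placement.pol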

Use enterprise SATA drive in ECE

In our configuration, in addition to SAS HDD/SSD and NVMe, we also need to use enterprise SATA HDD/SSD as pdisks in ECE. These SATA drives can pass the ECE precheck for DpoFua and SCT ERC. The attachment contains the list of drives.
about 2 years ago in Storage Scale (formerly known as GPFS) 0 Not under consideration
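
For context, SCT ERC support of the kind the precheck tests can be inspected with smartmontools; a sketch (device path is a placeholder; the DpoFua bit lives in the SCSI mode parameter header, and how it is reported varies by tool and version):

    # Show the SCT Error Recovery Control read/write timers, if supported:
    smartctl -l scterc /dev/sdX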

support filesystem metadata migration to 4K-based NSDs

Provide a transparent (i.e., no downtime) mechanism to introduce NSDv2 4K-aligned metadata NSDs into filesystems created with NSDv1-format NSDs.
about 8 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration
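
Whether an existing file system is already 4K aligned can be checked today; a minimal sketch (file system name is a placeholder):

    # Report the 4K-alignment attribute of a file system:
    mmlsfs udata --is4KAligned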