IBM System Storage Ideas Portal


This portal is to open public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post your own idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 172 ideas

About Alarms

The client's filesystem was 93% full, so GPFS sent a pool-data_high_error alarm through the GUI SNMP trap on October 20. The alarm was relayed to the client in time through their alarm platform, but the administrator on duty did not deal with it...
over 1 year ago in Storage Scale (formerly known as GPFS) 0 Future consideration

GPFS should support the dirsync mount option

Our business architecture is FuseClient --> FileServer (GPFS Native Client) --> GPFS. IBM GPFS does not support the dirsync mount option on Linux, so the GPFS Native Client keeps directory metadata changes (create/mknod/link/unlink/rename...) in its cache. The...
over 1 year ago in Storage Scale (formerly known as GPFS) 0 Future consideration
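For context on what the dirsync request above would remove the need for: without synchronous directory updates, the portable workaround on Linux is to fsync the parent directory by hand after each metadata change. A minimal sketch (standard POSIX behavior, not GPFS-specific; paths are illustrative):

```python
import os
import tempfile

# Without a dirsync-style guarantee, directory metadata changes
# (create/mknod/link/unlink/rename/...) may sit in cache. The portable
# workaround is to fsync the parent directory after each change.
d = tempfile.mkdtemp()
path = os.path.join(d, "entry")
open(path, "w").close()              # create a new directory entry
dfd = os.open(d, os.O_DIRECTORY)     # open the parent directory itself
os.fsync(dfd)                        # force the directory update to stable storage
os.close(dfd)
```

A dirsync mount option would make this flush implicit for every directory operation on the mount, which is what the idea asks GPFS to honor.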

API enhancements for invoicing/billing

In the Scale web GUI you can easily see the accounts and used storage under the "Object / Accounts" button, but there is no simple way to retrieve this information via the REST API. (There is not even a scheduled report function for it.) This is...
almost 3 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Enhancements to the Spectrum Scale installation toolkit to support remote cluster configurations and updates

Currently there is a gap in this toolkit: it cannot define remote cluster configurations. We know the code is already available in the open-source tooling but not in the official release of the installer. It should be added to simplify the current manually s...
about 3 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration

Allow for bulk add of mmces addresses in node-affinity

We have found that for deployments of 16 or more CES nodes, node-affinity mode is the most predictable and stable; the challenge is deployment time. In balanced mode all of the addresses can be deployed in 1-2 minutes. With 32 CES nodes runn...
over 1 year ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Spectrum Scale 4.2.3 - Monitoring I/O processes

We have in our production environment a GPFS cluster, version 4.2.3, used by the SAS system version 9.4 M3. The cluster consists of 11 servers running Red Hat Linux 6.7 and 7.3, 5 physical and 6 virtual. It has a dedicated dec...
over 6 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration

On the Storage Scale Server (SSS), formerly ESS, the "mmlsnsd" command should show the NSD disk primary/secondary server relationship as it previously did, and as it still does for Spectrum Scale on SANs and Lenovo DSS.

Having "mmlsnsd" display the NSD disk data path ownership (primary/secondary server relationships) allows customers to balance disk assignments. In addition, Lenovo DSS already does this, and customers migrating from Lenovo, like Morgan Stanley, ha...
almost 2 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Improve 'mmvdisk filesystem delete' behavior

When deleting a vdisk set from a filesystem, the command is: mmvdisk filesystem delete --file-system udata --vdisk-set VS2. It responds with: "mmvdisk: This will run the GPFS mmdeldisk command on file system 'udata'". But it does NOT say what it is going ...
over 4 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Tools or self-tuning mechanisms for troubleshooting

Customers would like to have tools that would allow them to analyze our diagnostic information so they could more quickly determine what was wrong with the system and try to correct it (i.e., self-tune based on the latency and throughput of the net...
over 4 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Make mmclone permissions work like cp

Currently, a file clone created with mmclone always has its group owner set to the primary gid of the creating process, without regard to the setgid bit of the containing directory, and without regard to the group owner of the original file. Other...
about 2 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration
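The cp-like behavior the idea above asks mmclone to follow is standard POSIX group-inheritance semantics. A minimal sketch of that behavior on Linux (nothing here is GPFS-specific; the path names are illustrative):

```python
import os
import tempfile

# Standard Linux behavior the idea asks mmclone to respect: when a
# directory has the setgid bit set, files created inside it inherit
# the directory's group rather than the creating process's primary gid.
d = tempfile.mkdtemp()
os.chmod(d, 0o2775)                  # set the setgid bit on the directory
f = os.path.join(d, "clone-target")
open(f, "w").close()                 # create a file inside the setgid directory
inherited = os.stat(f).st_gid == os.stat(d).st_gid
```

Under these rules a clone created inside a setgid directory would get that directory's group, and otherwise the clone would keep the group of the original file, matching what cp does.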