IBM System Storage Ideas Portal


This portal is to open public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or to request help from IBM with submitting your ideas.


Storage Scale (formerly known as GPFS)

Showing 535 ideas

Quota management delegation

I was discussing removal of root use for commands with IBM development in Germany, and this came up. We need the ability to delegate quota management to different groups of users on large-scale file systems. You can have different admins for different teams resp...
over 5 years ago in Storage Scale (formerly known as GPFS) 3 Not under consideration
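
As a rough illustration of the kind of delegation this idea requests, here is a minimal sketch of a sudo-invoked wrapper a site could deploy today: it checks the caller's group before shelling out to mmsetquota. mmsetquota is the real Storage Scale command, but the team-to-fileset mapping, the wrapper name, and the exact argument syntax below are assumptions to verify against your release.

    #!/usr/bin/env python3
    """Hypothetical quota-delegation wrapper (sketch only).

    Invoked via sudo so team admins can set quotas on *their* fileset
    without full root; the mapping and names below are assumptions.
    """
    import grp, os, pwd, subprocess, sys

    # Assumed mapping: which admin group may manage which fileset.
    DELEGATION = {
        "physics-admins": ("fs1", "physics"),
        "biology-admins": ("fs1", "biology"),
    }

    def caller_groups():
        user = pwd.getpwuid(os.getuid()).pw_name
        # sudo preserves the invoking user in SUDO_USER
        user = os.environ.get("SUDO_USER", user)
        # Note: gr_mem does not include a user's primary group;
        # good enough for a sketch, not for production.
        return user, {g.gr_name for g in grp.getgrall() if user in g.gr_mem}

    def main():
        if len(sys.argv) != 3:
            sys.exit("usage: teamquota <user> <soft:hard>, e.g. teamquota bob 10G:12G")
        target_user, limits = sys.argv[1], sys.argv[2]
        caller, groups = caller_groups()
        for admin_group, (fs, fileset) in DELEGATION.items():
            if admin_group in groups:
                # mmsetquota is the real command; verify syntax on your release.
                cmd = ["/usr/lpp/mmfs/bin/mmsetquota", f"{fs}:{fileset}",
                       "--user", target_user, "--block", limits]
                sys.exit(subprocess.call(cmd))
        sys.exit(f"{caller} is not in any delegated admin group")

    if __name__ == "__main__":
        main()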

About Alarms

The client's filesystem space is 93% used. GPFS sent a pool-data_high_error alarm via an SNMP trap from the GUI on October 20, which was relayed to the client in time through the client's alarm platform, but the current administrator did not deal with it...
about 1 year ago in Storage Scale (formerly known as GPFS) 0 Future consideration
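
The failure mode described here is a single trap that gets missed. One mitigation, sketched below, is a watchdog that polls usage and keeps re-alerting until the condition clears or an operator acknowledges it. shutil.disk_usage is a filesystem-level stand-in used so the sketch is self-contained; a real Storage Scale deployment would parse per-pool output from mmdf instead, and the mountpoint, threshold, and ack-file path are all assumptions.

    #!/usr/bin/env python3
    """Sketch of a re-notifying space watchdog (assumptions marked below)."""
    import os, shutil, time

    MOUNTPOINT = "/gpfs/fs1"               # assumed mountpoint
    THRESHOLD = 0.90                       # alert above 90% used
    ACK_FILE = "/var/run/space-alert.ack"  # operator touches this to silence
    REPEAT_SECONDS = 1800                  # re-alert every 30 minutes

    def used_fraction(path):
        du = shutil.disk_usage(path)       # stand-in for per-pool mmdf data
        return du.used / du.total

    def send_alert(frac):
        # Stand-in for SNMP trap / pager / ticketing integration.
        print(f"ALERT: {MOUNTPOINT} is {frac:.0%} full")

    while True:
        frac = used_fraction(MOUNTPOINT)
        if frac >= THRESHOLD and not os.path.exists(ACK_FILE):
            send_alert(frac)               # repeat until acked or cleared
        time.sleep(REPEAT_SECONDS)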

Implement end-to-end checksums for non GNR environments

Implement data block checksums in GPFS to protect against silent data corruption. This is particularly important in an FPO environment where there are no RAID controllers to protect against individual SATA drives returning bad data to the applicat...
almost 9 years ago in Storage Scale (formerly known as GPFS) 5 Not under consideration
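
Until something like this exists in the filesystem, the protection has to live in the application. The sketch below shows the general end-to-end technique at the application layer: every write stores a SHA-256 sidecar file, and every read re-verifies it, so a drive silently returning bad data is at least detected. This is a workaround, not the requested in-filesystem feature; the sidecar naming convention is an assumption.

    #!/usr/bin/env python3
    """Application-level end-to-end checksum sketch."""
    import hashlib
    from pathlib import Path

    def checked_write(path: Path, data: bytes) -> None:
        path.write_bytes(data)
        # Store the checksum in a sidecar next to the data file.
        path.with_suffix(path.suffix + ".sha256").write_text(
            hashlib.sha256(data).hexdigest())

    def checked_read(path: Path) -> bytes:
        data = path.read_bytes()
        expected = path.with_suffix(path.suffix + ".sha256").read_text().strip()
        if hashlib.sha256(data).hexdigest() != expected:
            raise IOError(f"silent corruption detected in {path}")
        return data

    if __name__ == "__main__":
        p = Path("/tmp/demo.bin")          # any path; /gpfs/... in practice
        checked_write(p, b"payload")
        assert checked_read(p) == b"payload"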

Spectrum Scale automatic kernel modules

The technology is there to make this happen, especially if using enterprise distributions such as Red Hat or SUSE. You can also throw Ubuntu into the mix to some extent, as they don't apply kAPI-breaking changes very often. Our customer was successfu...
almost 8 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration
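
A crude version of the requested automation can be approximated today with a boot-time check like the sketch below: if no portability-layer module exists for the running kernel, rebuild it before GPFS starts. mmbuildgpl is the real Storage Scale rebuild command; the module name (mmfslinux) and the install path checked here are assumptions that should be verified against your installation.

    #!/usr/bin/env python3
    """Sketch of an automatic portability-layer rebuild check."""
    import os, platform, subprocess, sys

    kernel = platform.release()
    # Assumed location of the built portability-layer module for this kernel.
    module = f"/lib/modules/{kernel}/extra/mmfslinux.ko"

    if os.path.exists(module):
        print(f"portability layer already built for {kernel}")
        sys.exit(0)

    print(f"no portability layer for {kernel}; rebuilding")
    # Run this from a boot-time unit *before* starting GPFS.
    sys.exit(subprocess.call(["/usr/lpp/mmfs/bin/mmbuildgpl"]))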

LDAP enhancements related to RFC2307 mappings on Spectrum Scale/GPFS Windows

Currently, GPFS Windows has functional, performance, and HA limitations related to the LDAP AD lookup. The details are documented in IBM Case Number: TS005919437. https://www.ibm.com/mysupport/s/case/5003p00002YlqdkAAB/kosmos-gpfs-windows-filesyst...
over 3 years ago in Storage Scale (formerly known as GPFS) 1 Not under consideration

Support LibFabrics

As more high-speed networks become available (IB, OmniPath, RoCE, and other RDMA-capable networks), the OpenFabrics organization has defined a more generic interface than libverbs (which is very network-specific) to make it easier to take advantage of n...
over 5 years ago in Storage Scale (formerly known as GPFS) 3 Not under consideration

GPFS supports the dirsync mount option

Our business architecture is FuseClient --> FileServer (GPFS Native Client) --> GPFS. IBM GPFS does not support the dirsync mount option on Linux. The GPFS Native Client saves directory metadata changes (create/mknod/link/unlink/rename...) in cache. The...
about 1 year ago in Storage Scale (formerly known as GPFS) 0 Future consideration
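
In the absence of a dirsync mount option, an application can obtain dirsync-like durability per operation with the standard POSIX technique sketched below: after each metadata change, open the parent directory and fsync() its file descriptor, which asks the filesystem to flush that directory's metadata updates. Whether a given filesystem honors directory fsync this way should be verified, so treat this as a sketch rather than a guaranteed equivalent.

    #!/usr/bin/env python3
    """Per-operation alternative to the dirsync mount option."""
    import os

    def create_durably(dirpath: str, name: str, data: bytes) -> None:
        path = os.path.join(dirpath, name)
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o644)
        try:
            os.write(fd, data)
            os.fsync(fd)                     # flush the file's own data
        finally:
            os.close(fd)
        dfd = os.open(dirpath, os.O_RDONLY)  # now flush the directory entry
        try:
            os.fsync(dfd)
        finally:
            os.close(dfd)

    if __name__ == "__main__":
        create_durably("/tmp", "durable-demo.txt", b"hello")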

Changing dynamic pagepool can cause the system to hang

The proposed changes to prevent the system hang are as follows:
  1. The code checks for available memory; if the available memory is less than the requested value, it returns a value users can change.
  2. As it takes time to allocate memory, the code re...
over 4 years ago in Storage Scale (formerly known as GPFS) 2 Is a defect
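
The pre-check proposed in step 1 can be sketched outside the product, as below: compare MemAvailable from /proc/meminfo against the requested pagepool size before calling mmchconfig. pagepool is a real Storage Scale parameter; the 10% safety margin and the use of the immediate-change flag here are assumptions to adapt.

    #!/usr/bin/env python3
    """Sketch of a pagepool pre-check along the lines proposed above."""
    import subprocess, sys

    def mem_available_bytes() -> int:
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemAvailable:"):
                    return int(line.split()[1]) * 1024  # value is in kB
        raise RuntimeError("MemAvailable not found")

    def parse_size(s: str) -> int:
        units = {"M": 2**20, "G": 2**30}
        return int(s[:-1]) * units[s[-1].upper()]

    if __name__ == "__main__":
        requested = sys.argv[1] if len(sys.argv) > 1 else "4G"
        avail = mem_available_bytes()
        if parse_size(requested) > avail * 0.9:   # assumed 10% safety margin
            sys.exit(f"refusing: pagepool {requested} exceeds 90% of "
                     f"available memory ({avail // 2**20} MiB)")
        subprocess.check_call(["/usr/lpp/mmfs/bin/mmchconfig",
                               f"pagepool={requested}", "-i"])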

Rolling upgrade of CES protocol node with object services

Current operation: Because object operations interact with all nodes, all object nodes must be at the same level. Therefore, object services need to be stopped across the cluster for the duration of the upgrade. This RFE is for allowing rolling up...
over 4 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration

Support for Windows Client nodes in AFM Cache Cluster

We require support for Spectrum Scale Windows client nodes in AFM cache clusters.
almost 7 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration