IBM System Storage Ideas Portal


Use this portal to open public enhancement requests against IBM System Storage products. To view all of the ideas you have submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)


Improvement to Distributed Lock Manager (DLM) for shared files.

Request for an improved DLM mechanism when leveraging a scale-out protocol node approach, to better handle lock-broadcasting overhead for "shared to all" files and folders.
over 9 years ago in Storage Scale (formerly known as GPFS) 3 Not under consideration

Add support for exporting remotely mounted GPFS filesystems with ganesha via CES

The CES Ganesha NFS server "allows" a remotely mounted GPFS filesystem to be exported, but doing so results in incorrect cache behavior. Ganesha maintains its own inode cache, but cache invalidation doesn't work correctly for remotely mounted filesystems....
almost 10 years ago in Storage Scale (formerly known as GPFS) 3 Not under consideration
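The failure mode described above can be sketched with a toy attribute-revalidated cache (Python; the class and its layout are invented for illustration, not Ganesha's actual mdcache). An NFS server typically revalidates cached data by comparing a cached change time against a fresh stat; if the backing filesystem is mounted remotely and change notifications or ctime updates are not propagated promptly, the check passes and stale data is served:

```python
import os

class InodeCache:
    """Minimal stand-in for an NFS server's inode/data cache (hypothetical)."""

    def __init__(self):
        self._cache = {}  # path -> (ctime_ns, data)

    def read(self, path):
        st = os.stat(path)
        entry = self._cache.get(path)
        if entry is not None and entry[0] == st.st_ctime_ns:
            # Revalidated by ctime: if a remote cluster changed the file
            # without the locally visible ctime being refreshed, the stale
            # cached copy is returned here.
            return entry[1]
        with open(path, "rb") as f:
            data = f.read()
        self._cache[path] = (st.st_ctime_ns, data)
        return data
```

On a local filesystem the ctime check works, which is why the problem only surfaces for remotely mounted filesystems.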

Docker utilization for Spectrum Scale processes

Scale best practices recommend using separate servers for Scale server nodes and Protocol Nodes. In ESS it is also not supported to run Protocol Nodes on the ESS nodes themselves, due to GNR resource contention. This implies having almost four servers for...
almost 10 years ago in Storage Scale (formerly known as GPFS) 3 Not under consideration

Increase independent filespace limit

AFM, fileset snapshots, and some other features can only be used with independent filesets, so GPFS customers are advised to prefer them over dependent filesets. But GPFS has a hardcoded limit of 1000 independent filesets per filesystem, which is t...
almost 10 years ago in Storage Scale (formerly known as GPFS) 5 Not under consideration

Deliver a free GPFS client for each server

GPFS, aka Spectrum Scale, is great. To reach a broader customer set, I suggest that we deliver the GPFS client / NSD driver for free!
almost 10 years ago in Storage Scale (formerly known as GPFS) 3 Not under consideration

Use transactional write model and issue periodic disk cache flushes

GPFS (at least on Linux) does not attempt to flush the cache of the block devices to which it is writing. It relies on O_DIRECT to bypass the host write cache, but this does not guarantee data has been flushed to stable media. This RFE requests that GP...
almost 10 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration
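The distinction this request turns on can be shown in a small sketch (Python; the helper name is invented): O_DIRECT only bypasses the host page cache, while it is an explicit fsync()/fdatasync() that makes the kernel issue a cache-flush command (FLUSH/FUA) to the drive's volatile write cache:

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write data and force it to stable media (hypothetical helper).

    A plain write() lands in the kernel page cache; O_DIRECT would bypass
    that cache but still leaves data in the device's volatile write cache.
    Only the explicit fsync below causes the kernel to flush the device
    cache, which is the behavior this RFE asks GPFS to add at transaction
    commit points.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # flush page cache AND the device write cache
    finally:
        os.close(fd)
```

Under a transactional write model, such flushes would be batched and issued once per commit rather than per write, keeping the durability guarantee without paying the flush cost on every I/O.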

Support backup and restore of metadata

Allow administrators to back up and restore GPFS filesystem metadata without losing data stored on dataOnly or dataAndMetadata NSDs.
almost 10 years ago in Storage Scale (formerly known as GPFS) 3 Not under consideration

Implement end-to-end checksums for non-GNR environments

Implement data block checksums in GPFS to protect against silent data corruption. This is particularly important in an FPO environment where there are no RAID controllers to protect against individual SATA drives returning bad data to the applicat...
almost 10 years ago in Storage Scale (formerly known as GPFS) 5 Not under consideration
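A minimal illustration of the idea (Python; the 4 KiB block size and the on-disk layout of checksum-plus-block are invented for this sketch): a per-block checksum computed at write time and verified at read time catches a drive silently returning bad data before it reaches the application:

```python
import zlib

BLOCK_SIZE = 4096  # hypothetical data block size for this sketch

def seal(block: bytes) -> bytes:
    """Prepend a CRC32 so corruption can be detected end to end."""
    return zlib.crc32(block).to_bytes(4, "big") + block

def unseal(stored: bytes) -> bytes:
    """Verify the checksum before handing data to the application."""
    crc = int.from_bytes(stored[:4], "big")
    block = stored[4:]
    if zlib.crc32(block) != crc:
        raise IOError("silent data corruption detected")
    return block
```

GNR (GPFS Native RAID) already does this in hardware-assisted form; the request is to offer the same protection in FPO and other non-GNR deployments where no RAID controller sits between GPFS and the drives.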

When using Protocol Nodes, the export IPs should be the only ones that allow mounting shares

With the use of Protocol Nodes, CES requires IPs for exporting shares. However, even with IPs dedicated to shares, the internal node IPs, which are used for administration only, also allow NFS mounts. The request is to isolate the n...
almost 10 years ago in Storage Scale (formerly known as GPFS) 3 Not under consideration

Address GPFS SNMP scaling limitations

When the GPFS SNMP agent is run in an environment with a large number of NSDs, it is unable to process the performance data from the NSDs due to internal buffer exhaustion. The GPFS SNMP agent then becomes unresponsive and will never start up and r...
almost 10 years ago in Storage Scale (formerly known as GPFS) 4 Not under consideration