IBM System Storage Ideas Portal


This portal is to open public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 140 ideas

Enable opt-in flush-on-close feature in GPFS

Provide a mechanism by which a GPFS client may opt-in to enable implicit flush whenever close is called on a file opened for writing. This provides some additional safety against mmfsd crashes and node crashes impacting data which users believe wa...
almost 9 years ago in Storage Scale (formerly known as GPFS) 3 Delivered
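What the requested opt-in behaviour amounts to can be sketched from the application side with plain POSIX calls, flushing and fsync-ing before close so that buffered writes survive an mmfsd or node crash. This is only an illustration of the semantics, not a GPFS API, and the helper name and path are made up:

    import os
    from contextlib import contextmanager

    @contextmanager
    def flush_on_close(path, mode="w"):
        """Open a file and push its data to stable storage when it is closed.

        An application-side stand-in for the requested opt-in behaviour,
        which the GPFS client would apply implicitly to files opened for
        writing.
        """
        f = open(path, mode)
        try:
            yield f
        finally:
            f.flush()              # user-space buffers -> kernel
            os.fsync(f.fileno())   # kernel -> stable storage
            f.close()

    # Writes made here are durable even if the node crashes right after close.
    with flush_on_close("/gpfs/fs1/results.txt") as f:
        f.write("checkpoint complete\n")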

Ability for protocol replication to co-exist with other non-protocol AFM-based DR instances

We need protocol replication to be able to co-exist with other non-protocol AFM-based DR instances. On our advanced edition GPFS clusters, we wish to have both: AFM-based DR for normal filesets (managed via mmafmctl), and protocol DR for filesets wi...
about 9 years ago in Storage Scale (formerly known as GPFS) 3 Delivered

A new option to support FQDN Name in mmgetstate

This customer has a general policy for hostnames: each hostname includes a .(dot), and the characters after the .(dot) act as the main separator. A sample example is below: host1.poc1, host1.poc2, ..., host1.pocn. So when we add these to a cluster, mmgetstate just sho...
over 9 years ago in Storage Scale (formerly known as GPFS) 3 Delivered
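The ambiguity this idea describes is easy to demonstrate: with the customer's naming scheme, keeping only the part before the first dot (roughly what a short-name display does) collapses distinct nodes onto one label. A tiny sketch with illustrative host names:

    # FQDNs following the customer's naming policy (illustrative values).
    fqdns = ["host1.poc1", "host1.poc2", "host1.pocn"]

    # Truncating at the first dot makes three different nodes indistinguishable,
    # which is the problem the idea raises.
    short = {name.split(".", 1)[0] for name in fqdns}
    print(short)   # {'host1'}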

Treat all independent filesets identically to the global namespace operations

Independent filesets do not raise callbacks on lowSpace or noSpace. Therefore it is very easy for an independent fileset to become full, instantly affecting users.
over 10 years ago in Storage Scale (formerly known as GPFS) 4 Delivered
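Until such callbacks exist, a site can only poll: check usage at each fileset junction and warn near fullness. A rough sketch with illustrative paths and threshold; whether the numbers reflect the fileset itself or the whole file system depends on how block quotas are configured:

    import shutil

    JUNCTIONS = ["/gpfs/fs1/projects", "/gpfs/fs1/scratch"]   # illustrative
    THRESHOLD = 0.90                                          # warn at 90% full

    for path in JUNCTIONS:
        usage = shutil.disk_usage(path)
        fraction = usage.used / usage.total
        if fraction >= THRESHOLD:
            print(f"WARNING: {path} is {fraction:.0%} full")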

Use the running policy engine to SetXattr for all files and folders under a PATH_NAME

Ability to automatically tag the Xattr of data with defined attributes such as a project name.
over 10 years ago in Storage Scale (formerly known as GPFS) 6 Delivered
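What the idea asks the policy engine to automate can be approximated today by walking the tree and tagging every file and directory with the attribute. A minimal sketch using Linux extended attributes from Python; the path, attribute name, and value are illustrative:

    import os

    ROOT = "/gpfs/fs1/projects/alpha"   # PATH_NAME to tag (illustrative)
    ATTR = "user.project"               # xattr name (illustrative)
    VALUE = b"alpha"

    # Tag every folder and file under ROOT.
    for dirpath, dirnames, filenames in os.walk(ROOT):
        targets = [dirpath] + [os.path.join(dirpath, f) for f in filenames]
        for target in targets:
            try:
                os.setxattr(target, ATTR, VALUE)
            except OSError as exc:
                print(f"skipped {target}: {exc}")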

Override the sort routine within policy processing

Ability to specify extra arguments to sort (e.g., a natural sort for numerical sequences) or to use a user-specified program to perform the sort with a different algorithm.
over 10 years ago in Storage Scale (formerly known as GPFS) 3 Delivered
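The natural-sort ordering the idea mentions is straightforward to express as a key function; the request is for the policy engine to accept something equivalent in place of its built-in sort. A small sketch with illustrative file names:

    import re

    def natural_key(name):
        """Split into text and integer chunks so 'file10' sorts after 'file2'."""
        return [int(part) if part.isdigit() else part
                for part in re.split(r"(\d+)", name)]

    files = ["file10.dat", "file2.dat", "file1.dat"]
    print(sorted(files))                    # lexicographic: file1, file10, file2
    print(sorted(files, key=natural_key))   # natural: file1, file2, file10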

Metablocks to require less context switching

Metablocks to require less context switching (i.e., increased performance).
over 10 years ago in Storage Scale (formerly known as GPFS) 5 Delivered

Permit GPFS commands to be executed based on failure group

We have a large GPFS FPO cluster (560 nodes). When performing maintenance it is important that we can manage according to failure group; for example, we may need to upgrade all nodes in one failure group. Hence, if we have 3 copies of the data, we a...
over 10 years ago in Storage Scale (formerly known as GPFS) 4 Delivered
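A rough sketch of the kind of workaround such sites run today, assuming the node-to-failure-group mapping is kept outside GPFS (a hard-coded dictionary here) and that nodes are reachable over ssh; the mapping, group number, and command are all illustrative:

    import subprocess

    # Site-maintained mapping of nodes to failure groups (illustrative values).
    FAILURE_GROUPS = {
        "node001": 1, "node002": 1,
        "node101": 2, "node102": 2,
        "node201": 3, "node202": 3,
    }

    def run_on_failure_group(group, command):
        """Run a shell command on every node in one failure group via ssh."""
        for node, fg in sorted(FAILURE_GROUPS.items()):
            if fg == group:
                print(f"--- {node} ---")
                subprocess.run(["ssh", node, command], check=False)

    # Example: check kernel versions across failure group 2 before an upgrade.
    run_on_failure_group(2, "uname -r")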

Improve ability to detect BROKEN files

Currently it is not possible to easily identify BROKEN files in a GPFS FPO cluster following the failure of a disk. This can arise when there is a single copy of the data (which is on the failed disk). IBM advises running the following command: mmf...
over 10 years ago in Storage Scale (formerly known as GPFS) 4 Delivered
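The advised command is cut off above; as a crude, unofficial stop-gap, an administrator can at least list files that cannot be read at all by probing each one and recording I/O errors. This is no substitute for a proper metadata scan, and the path is illustrative:

    import os

    ROOT = "/gpfs/fs1"        # subtree to check (illustrative)
    CHUNK = 1024 * 1024       # read 1 MiB per file as a cheap readability probe

    broken = []
    for dirpath, dirnames, filenames in os.walk(ROOT):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    f.read(CHUNK)
            except OSError:
                broken.append(path)

    print(f"{len(broken)} unreadable file(s)")
    for path in broken:
        print(path)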

Add rpm dependency on m4 package

GPFS has a dependency on the m4 package; in particular, it is required by the mmchpolicy command. Consequently, please could you add it as a dependency in the GPFS RPMs? Thanks, Chris
almost 11 years ago in Storage Scale (formerly known as GPFS) 4 Delivered
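Until the packages declare the dependency, a small preflight check avoids the surprise of mmchpolicy failing on a host without m4; a sketch assuming a standard Linux node:

    import shutil
    import sys

    # mmchpolicy relies on the m4 macro processor; fail early if it is missing.
    if shutil.which("m4") is None:
        sys.exit("m4 is not installed -- install it before running mmchpolicy")
    print("m4 found at", shutil.which("m4"))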