IBM System Storage Ideas Portal


This portal is to open public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post your own idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 528 ideas

mmhealth incorrectly reports Windows nodes as FAILED

Scale 4.2.3. NSD nodes on x86, RHEL 7.3; client nodes on x86, RHEL and Windows 2012 R2. When executed on an NSD node, mmhealth incorrectly shows the Windows node as FAILED:
# mmhealth cluster show node
Component Node Status Reasons
----------------------------...
almost 7 years ago in Storage Scale (formerly known as GPFS) 2 Is a defect
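
For context, the reported symptom quoted above would look something like the transcript below; the node names and column values are illustrative, not taken from the original report:

  # mmhealth cluster show node
  Component  Node              Status   Reasons
  ------------------------------------------------
  NODE       nsd1.example.com  HEALTHY  -
  NODE       win1.example.com  FAILED   -

Here win1.example.com stands for a healthy Windows 2012 R2 client that mmhealth nevertheless reports as FAILED.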

Avoid HSM recalls into GPFS snapshots when files are deleted in the live FS

When a migrated file is part of a GPFS snapshot and the file is deleted, the Tivoli HSM client is sent a DMAPI event and recalls the file into the snapshot. This behaviour is undesired, especially in SOBAR usage scenarios (see Use Case below). Inste...
over 10 years ago in Storage Scale (formerly known as GPFS) 4 Not under consideration
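
The sequence that triggers the unwanted recall can be sketched as a shell transcript; the file system, snapshot, and file names are illustrative, and dsmmigrate is the Tivoli HSM migrate command:

  # dsmmigrate /gpfs/fs1/data/bigfile      (file becomes a stub in GPFS)
  # mmcrsnapshot fs1 snap1                 (snapshot now references the stub)
  # rm /gpfs/fs1/data/bigfile              (delete in the live file system)

On the delete, the HSM client receives a DMAPI event and recalls the full file into the snapshot, rather than preserving only the stub.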

Reliability option for disk suspension on disk outages

IBM: Currently, when allocating disk space, GPFS only considers the disk status (ready, suspended/to-be-emptied, emptied, replacing, replacement) but not its "availability" (up, down, recovering, unrecovered), as illustrated below. The availability impacts whether GPF...
almost 7 years ago in Storage Scale (formerly known as GPFS) 3 Not under consideration
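
For reference, mmlsdisk reports both fields side by side; a trimmed, illustrative listing (disk names invented):

  # mmlsdisk fs1
  disk         driver sector failure holds    holds status  availability
  name         type   size   group   metadata data
  ------------ ------ ------ ------- -------- ----- ------- ------------
  nsd001       nsd    512    1       Yes      Yes   ready   up
  nsd002       nsd    512    2       Yes      Yes   ready   down

The request is that allocation should also avoid nsd002, whose availability is down, even though its status is still ready.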

mmunlinkfileset -f reports a mismatched error message

[root@pnsd1 ~]# mmunlinkfileset aggr3 user_scratch -f
Unable to quiesce fileset at all nodes.
Fileset user_scratch has open files. Specify -f to force unlink.
From the above output we can see the error does not accurately reflect the customer's issue. They h...
about 7 years ago in Storage Scale (formerly known as GPFS) 2 Is a defect

GUI vs. waiters

The GUI thread fork()s mmlsfs, but when the cluster is hanging anyway this does not make any sense; maybe the GUI can be optimized to check the cluster state/waiters first (sketched below). If a cluster hangs (has waiters), adding more monitoring processes that hang on top...
over 7 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration
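
A hedged sketch of such a pre-check, wrapping an invented guard around the existing mmlsfs poll (the logic is illustrative, not the GUI's actual code):

  # Skip expensive mm-commands while the cluster has waiters
  if mmdiag --waiters | grep -q waiting; then
      echo "cluster has waiters, skipping mmlsfs poll"
  else
      mmlsfs all
  fi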

Spectrum Scale (ESS) request for SMI-S CIM Agent for 3rd party SRM monitoring support

Require an SMI-S CIM Agent to be developed for Scale / ESS to allow 3rd party SRM tool discovery and support (NetApp OnCommand, SolarWinds, etc.) for capacity and performance monitoring. It is understood that Spectrum Control can be used with the...
over 7 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration

Need an approved process to create a new secondary for protocol-cluster AFM-based DR

With AFM-based DR, I am familiar with the process for creating a new secondary when the original secondary is lost (mmafmctl <filesystem> changeSecondary). What is the process for creating a new secondary when mmcesdr is being used to manage...
over 7 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration
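
For reference, the known (non-mmcesdr) step mentioned in this idea would look roughly like the following; the device, fileset, and target names are illustrative:

  # mmafmctl fs1 changeSecondary -j fileset1 --new-target nfs://newsec/gpfs/fs1/fileset1

The open question in the idea is the equivalent, supported procedure when mmcesdr owns the protocol-cluster DR configuration.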

Need a configurable timeout value for IBM Spectrum Scale replication

When a V7000 disk failure occurred, the whole RAID5 array couldn't complete I/O requests from the AIX OS at all for about 100 seconds. After 100 seconds, the failed drive was removed from the array, then the V7000 returned to normal operation and responded to A...
over 7 years ago in Storage Scale (formerly known as GPFS) 3 Not under consideration
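
What is being requested might look like the following knob; note that replicaIoTimeout is a hypothetical parameter name invented here for illustration, and no such mmchconfig option is implied to exist:

  # mmchconfig replicaIoTimeout=100 -i
  (hypothetical: give up on a replica write after 100 seconds and mark that
  disk down instead of stalling all I/O; mmchconfig and its -i flag are real,
  the parameter name is invented)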

GPFS as a gateway for different storage systems

Michael Daubman and Lisa visited CMA (China Meteorological Administration). In the meeting, CMA raised this requirement: CMA has multiple GPFS clusters, and they also have different NAS storage systems from different vendors. Usually, they will set up ...
almost 8 years ago in Storage Scale (formerly known as GPFS) 2 Not under consideration

RFE for support of "mmafmctl <filesystem> failoverToSecondary -j <fileset> --norestore" within mmcesdr

AFM-based DR supports a "--norestore" option when invoking:
# mmafmctl <filesystem> failoverToSecondary -j <fileset> --norestore
e.g.
secondary# mmafmctl kyc2 failoverToSecondary -j sec02 --norestore
Primary Id (afmPrimaryId) 149423358914...
about 8 years ago in Storage Scale (formerly known as GPFS) 4 Not under consideration