IBM System Storage Ideas Portal


This portal is for opening public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 6 ideas

Ganesha - on "Event 1 could not be processed for fd 5, rc I/O error", report the client that abuses Ganesha, monitor the error, and take action

When the Ganesha server hits this error, the NFS server simply hangs; there is no IP failover and no restart of the NFS server: epoch 001103c9 : gpfs.ganesha.nfsd-1492239[fsal_up_0.48] GPFSFSAL_UP_Thread :FSAL_UP :WARN :Event 1 could not be processed for f...
8 months ago in Storage Scale (formerly known as GPFS) 1 Is a defect
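
A minimal watchdog sketch of the detect-and-act behavior this idea asks for, assuming the default log path /var/log/ganesha/ganesha.log and a systemd-managed nfs-ganesha service (both are assumptions, not details from the report):

  # Watch the Ganesha log; when the FSAL_UP warning appears, record it
  # and restart the service instead of leaving the NFS server hung
  # (log path and service name are assumptions).
  tail -Fn0 /var/log/ganesha/ganesha.log | while read -r line; do
      case "$line" in
          *"Event 1 could not be processed"*)
              echo "$(date): $line" >> /var/log/ganesha-watchdog.log
              systemctl restart nfs-ganesha
              ;;
      esac
  done

In a CES cluster the restart and IP failover would normally be driven by the Scale monitoring framework; the sketch only illustrates the requested behavior.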

Support for running VMs in GPFS (see TS003291090)

See TS003291090. A KVM host runs the GPFS client and mounts a file system from a GPFS cluster. The KVM host runs a VM which uses a container file stored in the same file system, under an independent fileset named kvm01 which is under the root fileset. ...
about 4 years ago in Storage Scale (formerly known as GPFS) 2 Is a defect
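
A sketch of the layout the report describes, assuming a file system named gpfs01 mounted at /gpfs/gpfs01 (both names are placeholders):

  # Create an independent fileset kvm01 under the root fileset, link
  # it, and place a VM disk image in it for the KVM guest to use.
  mmcrfileset gpfs01 kvm01 --inode-space new
  mmlinkfileset gpfs01 kvm01 -J /gpfs/gpfs01/kvm01
  qemu-img create -f qcow2 /gpfs/gpfs01/kvm01/vm01.qcow2 50G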

Changing dynamic pagepool can cause the system to hang

The proposed changes to prevent the system hang are as follows:
  1. The code checks for available memory, and if the available memory is less than the requested value, it returns a value users can change.
  2. As it takes time to allocate memory, the code re...
almost 4 years ago in Storage Scale (formerly known as GPFS) 2 Is a defect
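
A hedged sketch of the pre-check proposed in item 1 above, assuming the requested pagepool size is known in MiB (the exact threshold and the fallback behavior are assumptions):

  # Refuse the change when less memory is available than requested,
  # instead of letting the allocation hang the node.
  REQUESTED_MB=16384
  AVAILABLE_MB=$(awk '/MemAvailable/ {print int($2/1024)}' /proc/meminfo)
  if [ "$AVAILABLE_MB" -lt "$REQUESTED_MB" ]; then
      echo "Only ${AVAILABLE_MB} MiB available; not setting pagepool to ${REQUESTED_MB} MiB"
  else
      mmchconfig pagepool=${REQUESTED_MB}M -i
  fi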

mmnetverify fails ungracefully when --target-node is a Windows node

When run from a Scale node on RHEL 7.3 (x86), mmnetverify fails ungracefully when --target-node is a Windows node. Moreover, it fails inconsistently when used with the node's "Scale" interface versus the Admin node name, adding to the confusion.
over 6 years ago in Storage Scale (formerly known as GPFS) 2 Is a defect
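
A reproduction sketch using the flag named in the report; the node names are placeholders, and the exact operation arguments are omitted (an assumption, since the report does not show the full command line):

  # Run from the RHEL 7.3 node against the Windows node under both of
  # its names to show the inconsistent failure.
  mmnetverify --target-node win1-scale
  mmnetverify --target-node win1-admin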

mmhealth incorrectly reports Windows nodes as FAILED

Scale 4.2.3. NSD nodes on x86, RHEL 7.3; client nodes on x86, RHEL and Windows 2012 R2. When executed on an NSD node, mmhealth incorrectly shows the Windows node as FAILED:
  # mmhealth cluster show node
  Component  Node  Status  Reasons
  ----------------------------...
over 6 years ago in Storage Scale (formerly known as GPFS) 2 Is a defect
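
A reproduction sketch built around the command shown in the report; the Windows node name is a placeholder, and the follow-up query is an assumption about how one would compare states:

  # From an NSD node, show cluster-wide health, then query the Windows
  # node directly to compare its self-reported state.
  mmhealth cluster show node
  mmhealth node show -N win2012r2-01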

mmunlinkfileset -f reports a mismatched error message

  [root@pnsd1 ~]# mmunlinkfileset aggr3 user_scratch -f
  Unable to quiesce fileset at all nodes.
  Fileset user_scratch has open files. Specify -f to force unlink.
From the above output we can see the error message does not accurately reflect the customer's issue. They h...
over 6 years ago in Storage Scale (formerly known as GPFS) 2 Is a defect
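
One way to check the message's "open files" claim directly, as a hedged sketch (the fileset junction path is a placeholder):

  # Run the unlink with -f, then check whether any process actually
  # holds files open under the fileset the message blames.
  mmunlinkfileset aggr3 user_scratch -f
  lsof +D /gpfs/aggr3/user_scratch 2>/dev/null | head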