IBM System Storage Ideas Portal


This portal is to open public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Storage Scale (formerly known as GPFS)

Showing 165 ideas

File Audit Logging x Object/S3

With Scale we have FAL for NFS and CIFS; for Object/S3 we can only map the SWIFT API user, not the specific user or application that is submitting the request (put, get, etc.). Our client is asking the c...
over 2 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration

more control over hiding snapshot dirs / extend mmsnapdir

There seems to be no way to get rid of the .snapshots subdirectory in the root of independent snapshots. This is a problem when mirroring data using rsync: because we do not want to rsync snapshots, we need to exclude them. But for correct mirroring we a...
over 2 years ago in Storage Scale (formerly known as GPFS) 2 Future consideration

Spectrum Scale GUI - Increase the size of the login banner field

Many sites are required to present a legal warning when a connection to a system is initiated. Currently the Spectrum Scale login banner field is restricted to ~44 characters, which is not large enough to hold our required legal warning...
over 2 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration

The GUI should work with multiple proxies (Proxy-HA)

The support of several proxy servers (proxy-HA) is a widely used method to ensure reliability along the route Client -> Hardware Loadbalancer -> Proxy server -> Loadbalancer -> GUI-HA.
over 2 years ago in Storage Scale (formerly known as GPFS) 1 Future consideration

Lessen Spectrum Scale CNSA's pods dependency on running as super user (root)

As of Scale version 5.1.1.4, some of the Container Native Storage Access (CNSA) containers for running Scale on OpenShift need to run as super-user (root). Although it's understandable from the multi-decade history of Scale itself a...
almost 3 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration

check if underlying file system is mounted when creating a NFS share

When a new NFS share is created, the mmnfs command should check if the underlying file system is mounted on all CES nodes and fail if this precondition is not fulfilled.
over 3 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
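The requested precondition could look like the following shell sketch; `create_share` is a hypothetical wrapper, the `mmnfs` invocation is only echoed, and `mountpoint` stands in for the per-CES-node mount check that mmnfs itself would need to perform:

```shell
# Hedged sketch of the requested check: refuse to create an NFS share unless
# the underlying path is actually a mounted filesystem. A real implementation
# would verify the mount on every CES node, not just locally.
create_share() {
  local path=$1
  if ! mountpoint -q "$path"; then
    echo "error: $path is not a mounted filesystem" >&2
    return 1
  fi
  echo "would run: mmnfs export add $path"
}
```

Calling `create_share /` succeeds (the root is always a mountpoint), while calling it on a plain directory fails, which is the behavior the idea asks mmnfs to adopt.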

add a default editor to the mmeditacl command

Currently there is no built-in editor for the command mmeditacl. However, there is a built-in editor for the command mmedquota. This is a request to add a built-in editor to the mmeditacl command.
over 3 years ago in Storage Scale (formerly known as GPFS) 0 Future consideration
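Today mmeditacl picks up the editor from the EDITOR environment variable, which must contain a complete path name; a built-in default would remove this setup step. A minimal sketch of the current workaround (the file path in the comment is hypothetical):

```shell
# Until mmeditacl gains a built-in default, point EDITOR at a full editor path
export EDITOR=/usr/bin/vi
echo "EDITOR=$EDITOR"
# mmeditacl /gpfs/fs1/somefile   # would now open that file's ACL in vi
```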

network access control for mmsysmon REST API

Currently mmsysmon REST API allows all inbound connections. Our cybersecurity penetration team has successfully sent network packets that caused REST API to malfunction, leaving hundreds of connections in CLOSE_WAIT state. In this scenario CherryP...
3 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration

GPFS: support mounting a fileset or directory

I hope that GPFS (General Parallel File System) will support mounting individual filesets or directories. Since each fileset or directory is used by different individuals, supporting this would achieve the effect of data ...
4 months ago in Storage Scale (formerly known as GPFS) 0 Future consideration

Storage Scale - A GPFS node with multiple NICs (IP addresses) should not be able to rejoin the cluster when the daemon interface is down

Our storage nodes have multiple NICs and IP addresses (RoCE network). When the NIC for the daemon IP is down (dev bond0), the node keeps trying to rejoin the cluster again and again, because another NIC (dev bond1) is still working. But...
4 months ago in Storage Scale (formerly known as GPFS) 1 Future consideration