IBM System Storage Ideas Portal


Use this portal to open public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.


Ideas

Showing 634

IBM CSI now issues jobs to the GUI, but the GUI has only 16 threads to handle them; under some conditions this is very slow. The GUI should support more threads to handle CSI tasks.

The GUI should support more threads to handle CSI tasks.
4 days ago in Storage Scale (formerly known as GPFS) 0 Under review

Enhance the Policy Engine to process newline chars (0x0A) in path names.

We run GHI over Storage Scale for user file systems and have to cope with all sorts of path names. GHI makes use of the policy engine for data movement (migration, purging). We think that any path name working in (core) Storage Scale has also to b...
5 days ago in Storage Scale (formerly known as GPFS) 0 Under review
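The constraint behind this request can be shown outside of Storage Scale or GHI: POSIX permits newline (0x0A) in path names but forbids NUL (0x00), so any file list that separates entries with newlines is ambiguous, while a NUL-delimited list round-trips safely. A minimal Python illustration (not Storage Scale policy-engine code):

```python
import os
import tempfile

# POSIX allows 0x0A in file names but never 0x00, so NUL is the only
# delimiter that cannot collide with a path name.
with tempfile.TemporaryDirectory() as root:
    tricky = os.path.join(root, "report\n2024.txt")  # newline inside the name
    open(tricky, "w").close()

    names = os.listdir(root)  # one real entry

    # Joining with "\n" splits the single file into two bogus entries.
    broken = "\n".join(names).split("\n")
    # Joining with "\0" round-trips correctly.
    safe = "\0".join(names).split("\0")

    print(len(broken), len(safe))  # 2 1
```

Any tool that passes newline-delimited file lists between stages (as policy-engine helpers commonly do) will miscount such paths, which is why the request asks for explicit 0x0A handling.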

Enable remote Clusters to connect at all local interfaces

For security or performance reasons the network is divided into different zones, either to isolate different tenants or to separate different types of network traffic. To enable multitenancy, a Storage Scale Cluster should be able to serve multipl...
8 days ago in Storage Scale (formerly known as GPFS) 0 Under review

Storage Scale Native Driver for Mainframe z/OS

We are trying to get access to the mainframe storage on the midrange end to allow Data and AI tools like Content Aware Storage and Watson Knowledge Catalog. Following are the gaps: 1. There is no native driver for zOS. Current support is limited t...
10 days ago in Storage Scale (formerly known as GPFS) 0 Under review

CNSA - support Kubernetes on Azure Linux

IBM Storage Scale will be able to provide a high-performance, scalable, and highly available storage subsystem to meet the I/O requirements of containerized AI workloads running on large GPU clusters with Azure Linux.
16 days ago in Storage Scale (formerly known as GPFS) 0 Under review

Storage Scale Recompilation Requirement Post-RHEL Kernel Upgrade

Request IBM to remove the need for binary kernel module recompilation after RHEL kernel updates for IBM Storage Scale. This requirement adds operational overhead and complexity to patching and automation processes. While RHEL provides stable ABIs ...
17 days ago in Storage Scale (formerly known as GPFS) 0 Under review

Ability to grant access on a virtual path level within a bucket

A topic we discussed with the client was the ability to grant access on a virtual path level within a bucket. This can be achieved with bucket policies. However, in Scale S3 we do not support the Condition clause of a bucket policy. The condition ...
18 days ago in Storage Scale (formerly known as GPFS) 0 Under review
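For context, this is the kind of policy the Condition clause enables in the AWS-style bucket-policy grammar: listing is restricted to one key prefix (a "virtual path") via the `s3:prefix` condition key, while object access is scoped by the Resource ARN. Per the idea, Scale S3 does not yet accept the Condition block; the bucket and principal names below are illustrative only.

```python
import json

# Example AWS-style bucket policy: user "alice" may list only keys under
# "projectA/" in the shared bucket, and read/write only those objects.
# The Condition clause on s3:ListBucket is the part the idea asks for.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListOnlyProjectA",
            "Effect": "Allow",
            "Principal": {"AWS": ["arn:aws:iam:::user/alice"]},
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::shared-bucket"],
            "Condition": {"StringLike": {"s3:prefix": ["projectA/*"]}},
        },
        {
            "Sid": "ReadWriteProjectAObjects",
            "Effect": "Allow",
            "Principal": {"AWS": ["arn:aws:iam:::user/alice"]},
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": ["arn:aws:s3:::shared-bucket/projectA/*"],
        },
    ],
}

# Serializes cleanly, so it can be fed to an S3 put-bucket-policy call.
document = json.dumps(policy, indent=2)
```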

Provide helper to determine metadata size

We have a GNR cluster with > 650 million files, distributed over two filesystems. All disks are DataAndMetadata. Policy runs and mmbackup are taking very long. One speedup suggestion of the GPFS support was to use explicit metadata storage. Unf...
24 days ago in Storage Scale (formerly known as GPFS) 0 Submitted
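Until such a helper exists, a back-of-envelope estimate is possible from the inode count. The sketch below assumes 4 KiB inodes, metadata replication of 2, and a 1.5x allowance for directory and indirect blocks; none of these figures come from the cluster in the idea, so substitute the values your own mmlsfs output reports.

```python
# Rough planning estimate for dedicated metadata capacity.
# All defaults below are assumptions, not measured values.
def estimate_metadata_bytes(n_files, inode_size=4096,
                            replicas=2, overhead_factor=1.5):
    """Inodes x size x replication, padded for directory/indirect blocks."""
    return int(n_files * inode_size * replicas * overhead_factor)

tib = estimate_metadata_bytes(650_000_000) / 2**40
print(f"{tib:.1f} TiB")  # roughly 7.3 TiB under these assumptions
```

An estimate like this only bounds the dedicated metadata NSDs to provision; actual usage depends on extended attributes, snapshot metadata, and directory sizes, which is why a built-in helper reporting real metadata consumption would be valuable.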

use sort from rust-coreutils to speedup mmbackup/mmapplypolicy

For large filesystems mmbackup and mmapplypolicy spend long times with sorting data. This idea is about improving those times considerably. https://github.com/uutils/coreutils is a re-implementation of the GNU coreutils. The sort command is much q...
25 days ago in Storage Scale (formerly known as GPFS) 0 Submitted
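Pending any change in mmbackup/mmapplypolicy themselves, the substitution can be tried externally. The sketch below prefers the uutils multicall binary (installed as `coreutils`, invoked as `coreutils sort`) when present and falls back to the system sort; the speedup claim is the idea author's and is not measured here. Byte order is pinned with LC_ALL=C so both implementations produce identical output.

```python
import os
import shutil
import subprocess

def sort_lines(lines):
    """Sort lines with uutils sort if installed, else the system sort."""
    if shutil.which("coreutils"):
        cmd = ["coreutils", "sort"]  # uutils/rust-coreutils multicall form
    else:
        cmd = ["sort"]  # fall back to GNU sort
    out = subprocess.run(
        cmd,
        input="\n".join(lines) + "\n",
        capture_output=True, text=True, check=True,
        # Pin the collation so GNU and uutils sort agree byte-for-byte.
        env={**os.environ, "LC_ALL": "C"},
    )
    return out.stdout.splitlines()

print(sort_lines(["b", "a", "c"]))  # ['a', 'b', 'c']
```

Identical output under LC_ALL=C is the prerequisite for a drop-in swap inside mmbackup's sort phase, which is presumably what the idea is asking IBM to validate and ship.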

Prevent attribute changes by non-root users

We are running a big filesystem that controls file access by (nfsv3) ACLs. The ACLs are carefully crafted to ensure the desired file accesses for the data. As ACLs are not directly visible via ls (there's only the tiny + character) users tend to m...
25 days ago in Storage Scale (formerly known as GPFS) 0 Submitted