IBM System Storage Ideas Portal


This portal is to open public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

Status Needs more information
Created by Guest
Created on Sep 12, 2025

cesSharedRoot and cnfsSharedRoot efficiency improvements

The cesSharedRoot and cnfsSharedRoot requirements made economic sense in the past, before mmvdisk began enforcing a minimum of 4 vdisks per declustered array (DA) on SSD-based Recovery Groups (RGs).

Currently, in a single ESS Building Block, depending on pdisk size, nearly 1TB of space is allocated per node for IP addresses, locks, and other metadata that actually consume only about a dozen GB of data. This results in extremely poor cost efficiency. When replication or multiple Building Blocks are deployed, this waste multiplies significantly.

For example, our current CES filesystem shows only 7.2GB used out of 9.7TB allocated (1% utilization): `CES 9.7T  7.2G  9.7T   1% /gpfs/CES`. We've mitigated this by sharing one CES filesystem across multiple CES clusters. However, if we were required to provision one filesystem per CES cluster, we would need 5 such filesystems on each backend, dramatically increasing waste.
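
To illustrate the sharing approach, each CES cluster can point its cesSharedRoot at its own directory inside the one filesystem; the paths below are hypothetical, and GPFS must be down on the affected CES nodes while the attribute is changed:

# Hypothetical per-cluster directories within a single shared CES filesystem
mkdir /gpfs/CES/cluster1 /gpfs/CES/cluster2

# Run from each protocol cluster, with GPFS down on its CES nodes
mmchconfig cesSharedRoot=/gpfs/CES/cluster1   # CES cluster 1
mmchconfig cesSharedRoot=/gpfs/CES/cluster2   # CES cluster 2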

Since CCR already uses the Paxos consensus protocol and the GUI already stores data using it, we recommend implementing a solution that eliminates the need for a dedicated filesystem. This could be achieved through CCR on the CES cluster itself (not the backend), etcd or a similar distributed key-value store, or any other lightweight solution that doesn't require provisioning multi-TB filesystems for minimal data storage needs.
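
As a rough sketch of how little state is involved, an etcd-style key-value store (one of the options named above) could hold it in a handful of entries; the key names and values below are invented purely for illustration:

# Invented keys and values, for illustration only
etcdctl put /ces/address/10.0.0.101 "node=ces1 state=assigned"
etcdctl put /ces/lock/nfs-grace "holder=ces2"
etcdctl get --prefix /ces/          # the entire CES state is a few KB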

Alternatively, this could be a logical volume (LV) on each CES node, used to build a replica-3 filesystem on the CES cluster itself.
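
A minimal sketch of that variant, assuming three CES nodes (ces1 to ces3) and invented LV/NSD names, might look like this:

# Carve a small LV on each CES node (run on ces1, ces2, ces3)
lvcreate -L 20G -n cesroot_lv rootvg

# nsd.stanza: one NSD per node, each in its own failure group, for example:
#   %nsd: device=/dev/rootvg/cesroot_lv nsd=cesnsd1 servers=ces1 usage=dataAndMetadata failureGroup=1
#   %nsd: device=/dev/rootvg/cesroot_lv nsd=cesnsd2 servers=ces2 usage=dataAndMetadata failureGroup=2
#   %nsd: device=/dev/rootvg/cesroot_lv nsd=cesnsd3 servers=ces3 usage=dataAndMetadata failureGroup=3
mmcrnsd -F nsd.stanza

# Small 3-way replicated filesystem dedicated to cesSharedRoot
mmcrfs cesroot -F nsd.stanza -m 3 -M 3 -r 3 -R 3 -T /gpfs/cesroot
mmchconfig cesSharedRoot=/gpfs/cesroot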
 

 

Idea priority High
  • Admin
    Ulf Troppens
    Sep 22, 2025

    Based on the last comment, it seems that you are looking for:

    a) short-term: a different means to provision the filesystem for the cesSharedRoot directory, and

    b) long-term: replacement of cesSharedRoot by a different consistency mechanism.


    Is this a correct understanding?

  • Guest
    Sep 17, 2025

    PLEASE RE-OPEN THIS RFE FOR FURTHER CONSIDERATION AS OUTLINED BELOW

    From: Bolinches, Luis (WorldQuant) <Luis.Bolinches@worldquant.com>
    Sent: Wednesday, September 17, 2025 11:15 AM
    To: DAVID BECHTOLD; Wesley JONES; Sumit Kumar
    Subject: RE: [EXTERNAL] Fw:  Idea received: cesSharedRoot and cnfsSharedRoot efficiency improvements

    Adding Wes Jones and Sumit Kumar, as the short-term utility-node part for the CES shared root is for the CES team.

    Luis Bolinches

    WorldQuant Aligned Team 

    From: Bolinches, Luis (WorldQuant) 
    Sent: Wednesday, September 17, 2025 5:14 PM
    To: DAVID BECHTOLD <dkbechto@us.ibm.com>
    Subject: RE: [EXTERNAL] Fw: Idea received: cesSharedRoot and cnfsSharedRoot efficiency improvements

    Hi

    I'm writing this email to outline our concerns and hopefully avoid the need for a call.

    Regarding Replica 3: The first issue is that this isn't the intended vdisk configuration—it's meant for loghome and similar uses, not for data storage. Additionally, this approach won't address our storage efficiency problem since we'd need to scale it across all ESS systems in either the filesystem or at minimum the entire site. This would only reduce our capacity from 90TB to 70TB, which is far from adequate.

    Filesystem Convergence Option: While converging into other filesystems has been suggested, best practices recommend keeping them separated. We encountered a critical issue last week when our chosen filesystem—which also contains production data—entered the same problematic state. This risk is unacceptable for our environment.

    Critical Request: Please ensure this issue is escalated to the development team directly—not through presales or former lab services channels. The incident we experienced was critical and has significant implications for our production environment. We need substantial improvements in both the filesystem manager code and the underlying architecture before we can trust this solution in production.

    Proposed Solution: The end goal is to eliminate the cesroot requirement entirely. In the interim, we need a method to provision cesSharedRoot on the new utility nodes. This could be accomplished through:

    • Logical volumes (LV)
    • New qcow2 disk allocation
    • Other alternative approaches
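
    As a rough sketch of the qcow2 option, with an invented image path and guest name:

    # Create a small qcow2 image and attach it to the utility node guest
    qemu-img create -f qcow2 /var/lib/libvirt/images/cesroot.qcow2 20G
    virsh attach-disk ces-utility-1 /var/lib/libvirt/images/cesroot.qcow2 vdb --persistent --subdriver qcow2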

    Luis Bolinches

    WorldQuant Aligned Team

     

     

  • Guest
    Sep 17, 2025
    In current Storage Scale, cesSharedRoot can be placed on any available file system. It is recommended to be on a separate file system, but it does not have to be. CES cannot run without this file system.

    The configuration allows you to define the path. From the mmchconfig documentation page:
    cesSharedRoot
    Specifies a directory in a GPFS file system to be used by the Cluster Export Services (CES) subsystem. For the CES shared root, the recommended value is a dedicated file system, but it is not enforced. The CES shared root can also be a part of an existing GPFS file system. In any case, cesSharedRoot must reside on GPFS and must be available when it is configured through mmchconfig.
    GPFS must be down on all CES nodes in the cluster when changing the cesSharedRoot attribute.

    Note that you must specify separate folders for separate CES clusters (i.e., in a remote mount scenario).
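
    For example, changing cesSharedRoot to a directory on an existing file system could look like this (the path is illustrative; cesNodes is the CES node class):

    # GPFS must be down on all CES nodes before the change
    mmshutdown -N cesNodes
    mmchconfig cesSharedRoot=/gpfs/fs1/ces/clusterA
    mmstartup -N cesNodes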

    If this does not meet your requirement, then please provide more details on what is missing and reopen the IDEA.
  • Guest
    Sep 16, 2025

    Using 3WayReplication instead of 8+2p wastes a lot less capacity:

     

    # mmvdisk vs define --vdisk-set cesSharedvs1 --set-size 10G --code 3WayReplication --block-size 1m --recovery-group rg_ess3500 --DA DA1
    mmvdisk: Vdisk set 'cesSharedvs1' has been defined.
    mmvdisk: Recovery group 'rg_ess3500' has been defined in vdisk set 'cesSharedvs1'.
    
                         member vdisks
    vdisk set       count   size   raw size  created  file system and attributes
    --------------  ----- -------- --------  -------  --------------------------
    cesSharedvs1        4 8065 MiB   24 GiB  no       -, DA1, 3WayReplication, 1 MiB, dataAndMetadata, system
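
    A possible follow-up, reusing the names from the define step above (the file system name and mount point are illustrative), would be to create the vdisk set and build a small dedicated filesystem from it:

    # mmvdisk vs create --vdisk-set cesSharedvs1
    # mmvdisk filesystem create --file-system cesroot --vdisk-set cesSharedvs1 --mmcrfs -T /gpfs/CES
    # mmchconfig cesSharedRoot=/gpfs/CES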