IBM System Storage Ideas Portal



Status: Not under consideration
Created by: Guest
Created on: Jan 23, 2018

Usability and performance enhancements needed for very large filespace replication

There are currently three major problems with replication of very large filespaces.
1) The initial and ongoing replication of data for nodes with very large filespaces (100 million+ objects) and a large amount of data (10+ TB) works very well as long as the data on the source server is located in a disk pool.
Performance, however, drops by an order of magnitude or more if the data resides on tape. We even found that it was significantly faster overall to first allocate a massive disk pool, run a MOVE NODEDATA operation from the tape pool to the disk pool, and then run the initial replication to the target server. This is unacceptable and shows that the code was simply not optimized for the tape use case. A rough sketch of that workaround is shown below.
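
For illustration only (the node and pool names are placeholders, and exact parameters would depend on the environment), the workaround amounted to the following administrative command sequence:

    MOVE NODEDATA bignode FROMSTGPOOL=TAPEPOOL TOSTGPOOL=DISKPOOL
    REPLICATE NODE bignode WAIT=YES
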
2) Once replication is established, if REMOVE REPLNODE needs to be run for the node, the command runs only in the foreground and appears to hang for very large nodes.
We believe REMOVE REPLNODE should run as a server process (with the option of WAIT=YES) and report statistics on how many objects out of the total have been removed.
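
To make the proposal concrete: today the command blocks the administrative session for the entire operation with no progress output. The syntax below is hypothetical, modeled on the WAIT parameter that background commands such as REPLICATE NODE already accept:

    REMOVE REPLNODE bignode WAIT=NO

This would start a background process visible in QUERY PROCESS, reporting progress such as "n of m objects removed".
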
3) If a replication sync needs to be performed between the source and target servers for the large node, there is a huge performance AND usability problem.
The time required simply to sync/confirm objects on the target seems to take many times longer than even the initial replication. In addition, the replication sync process reports NO STATISTICS showing its progress or estimated time to completion.
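
For reference, the sync we are describing is the full reconciliation pass; on servers at 7.1.1 or later, our understanding is that it is forced with the FORCERECONCILE parameter (node name is again a placeholder):

    REPLICATE NODE bignode FORCERECONCILE=YES WAIT=YES

While this runs, QUERY PROCESS shows the process but gives no object counts and no completion estimate, which is the usability gap described above.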

Idea priority: High
  • Guest, Jan 26, 2018

    This request may not be delivered within the release currently under development, but the theme is aligned with the current multi-year strategy. IBM may consider and evaluate any RFE Community feedback for this request through activities such as voting. IBM will update this request in the future.