IBM System Storage Ideas Portal



Status: Not under consideration
Created by: Guest
Created on: May 17, 2017

Need approved process to create new secondary for protocol cluster AFM-based DR

With AFM-based DR, I am familiar with the process for creating a new secondary when the original secondary is lost (mmafmctl <filesystem> changeSecondary). What is the process for creating a new secondary when mmcesdr is being used to manage AFM-based DR replication for protocol clusters?
No such process is listed in the “Protocol cluster disaster recovery” documentation: http://www.ibm.com/support/knowledgecenter/STXKQY_4.2.1/com.ibm.spectrum.scale.v4r21.doc/bl1adv_prodr.htm
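
For reference, in the non-protocol case the procedure I am familiar with is roughly the following (the fileset name and target are placeholders, and the exact options may vary by release):

primary# mmafmctl fs1 changeSecondary -j <fileset> --new-target <newSecondaryHost>:/<newSecondaryExportPath> --inband
primary# mmafmctl fs1 getstate -j <fileset>     # check that the fileset is replicating to the new secondary again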

My attempts to use “mmcesdr primary config” and “mmcesdr secondary config” (as per the section “Creating the inband disaster recovery setup” at the above URL) together with “mmafmctl fs1 changeSecondary” did result in replication successfully occurring. However, subsequent failover / failback for the pre-existing filesets that this process was used on did not succeed: the exports could not be created at the secondary site on failover, presumably because the metadata for these filesets is inconsistent. For example, are additional steps required using “mmcesdr primary update” when creating a new secondary after the original secondary has been lost?
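
For completeness, the rough sequence I attempted was as follows (directories, fileset names, and the RPO/IP values are placeholders, and the mmcesdr options are from my notes, so they may not be exact for every release):

primary#   mmcesdr primary config --output-file-path /tmp/dr --ip-list "<CES IPs of new secondary>" --rpo 15 --inband
           (copy the generated DR_Config file from /tmp/dr to the new secondary cluster)
secondary# mmcesdr secondary config --input-file-path /tmp/dr
primary#   mmafmctl fs1 changeSecondary -j <fileset> --new-target <new secondary target> --inband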

Note that filesets that were newly created between the primary and the new secondary (i.e. had never previously been replicated) failed over and back OK, so the issue relates only to pre-existing primary filesets for which I tried to create a new secondary, not to net-new replication groups.

Again, I am not asking for help troubleshooting what I did wrong; I need the formal and complete process for creating a new secondary with protocol cluster replication when the original secondary is lost, so that I can test it.

secondary# mmcesdr secondary failover
/ces/.async_dr/saved_dr_config_dir/DR_Config
Performing step 1/4, saving current NFS configuration to restore after failback.
Successfully completed step 1/4, saving current NFS configuration to restore after failback.
Performing step 2/4, failover of secondary filesets to primary filesets.
Error failing over independent fileset ces:async_dr to become primary, rc: 1
Error failing over 1 independent fileset(s). Those filesets will have to be failed over manually.
Completed with errors step 2/4, failover of secondary filesets to primary filesets.
Performing step 3/4, protocol configuration/exports restore.
mmcesdr: mmnfs export add: [E] Error creating new export object, invalid data entered (return code: 1).
mmcesdr: Error creating NFS export /fs1/nfs-cac1, failed with exit code: 1
mmcesdr: NFS export could not be created, rc = 5. Create NFS export associated with path /fs1/nfs-cac1 manually.
mmcesdr: Error creating NFS export /fs1/nfs-cac2, failed with exit code: 1
mmcesdr: NFS export could not be created, rc = 5. Create NFS export associated with path /fs1/nfs-cac2 manually.
.
.
mmcesdr: Error creating NFS export /fs1/nfs-cac99, failed with exit code: 1
mmcesdr: NFS export could not be created, rc = 5. Create NFS export associated with path /fs1/nfs-cac99 manually.
Error creating 101 export(s). Those exports will have to be created manually.
Completed with errors step 3/4, protocol configuration/exports restore.
Performing step 4/4, create/verify NFS AFM DR transport exports.
mmcesdr: The check for an existing NFS export /fs1/nfs-cac1 returned an error or the export does not exist.
mmcesdr: The check for an existing NFS export /fs1/nfs-cac10 returned an error or the export does not exist.
.
.
mmcesdr: The check for an existing NFS export /fs1/nfs-cac98 returned an error or the export does not exist.
mmcesdr: The check for an existing NFS export /fs1/nfs-cac99 returned an error or the export does not exist.
Unable to verify 101 NFS AFM DR transport export(s).
Completed with errors step 4/4, create/verify NFS AFM DR transport exports.
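
For the independent fileset that step 2/4 could not fail over (ces:async_dr above), my working assumption is that the manual fallback is the plain AFM DR failover command, although I do not know whether running it against the internal configuration fileset is actually appropriate:

secondary# mmafmctl ces failoverToSecondary -j async_dr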


mmcesdr.log shows, for each export:

2016-11-23-11-01-43:18227:protocolClusterFailover:8132: Re-creating SMB and NFS exports for failed over device:fileset: fs1:nfs-cac51
2016-11-23-11-01-43:18227:protocolClusterFailover:8157: Recreating NFS export with command: recreateNFSExportWithClientString /fs1/nfs-cac51 --add-to-clients E8C3%2DDL360G7%2D09%2Enam%2Ensroot%2Enet%2CE8C3%2DDL360G7%2D10%2Enam%2Ensroot%2Enet%28Access%5FType%3DRW%2CProtocols%3D3%3A4%2CSquash%3Dno%5Froot%5Fsquash%29 E8C3%2DDL360G7%2D09%2Enam%2Ensroot%2Enet%2CE8C3%2DDL360G7%2D10%2Enam%2Ensroot%2Enet --force
2016-11-23-11-01-43:18227:recreateNFSExportWithClientString:1946: Force re-creation if needed, therefore removeAndRecreateIfNeeded is: true
2016-11-23-11-01-43:18227:recreateNFSExportWithClientString:2053: Removing NFS export with path: /fs1/nfs-cac51
The NFS export was deleted successfully.
2016-11-23-11-01-50:18227:recreateNFSExportWithClientString:2131: Command to run: mmnfs export add /fs1/nfs-cac51 -c "E8C3-DL360G7-09.nam.nsroot.net,E8C3-DL360G7-10.nam.nsroot.net(Access_Type=RW,Protocols=3:4,Squash=no_root_squash);*(Delegations=none,Access_Type=RW,Protocols=3:4,Transports=TCP,Squash=NO_ROOT_SQ
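
If the exports really do have to be created manually, I assume the command is simply the one the log shows being attempted, e.g. for this export (only the first client entry is reproduced here, since the rest of the client string is truncated in the log line above):

secondary# mmnfs export add /fs1/nfs-cac51 -c "E8C3-DL360G7-09.nam.nsroot.net,E8C3-DL360G7-10.nam.nsroot.net(Access_Type=RW,Protocols=3:4,Squash=no_root_squash)"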

Idea priority: High
  • Guest
    Sep 30, 2020

    Due to processing by IBM, this request was reassigned to have the following updated attributes:
    Brand - Servers and Systems Software
    Product family - IBM Spectrum Scale
    Product - Spectrum Scale (formerly known as GPFS) - Public RFEs
    Component - Product functionality

    For record keeping, the previous attributes were:
    Brand - Servers and Systems Software
    Product family - IBM Spectrum Scale
    Product - Spectrum Scale (formerly known as GPFS) - Public RFEs
    Component - V4 Product functionality

  • Guest
    Nov 21, 2019

    This is not currently within our strategic roadmap.