IBM System Storage Ideas Portal



Status Not under consideration
Workspace Spectrum Discover
Created by Guest
Created on Feb 14, 2023

Limitation in the connmgr pod in OpenShift

We discussed a limitation of the connmgr pod in OpenShift with Joseph Dain in Hamburg. Due to cgroup settings in OpenShift, a pod is limited to 1024 PIDs. Since every scan creates roughly 100 or more threads, we reach this threshold quickly, causing the whole connmgr pod to become unreliable. A quick workaround we came up with together with Joseph is to increase the number of connmgr pods, distributing the scans across several pods and several nodes.
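The workaround above can be sketched as follows. This is only an illustration, not Spectrum Discover code: the replica names, the hashing scheme, and the helper functions are assumptions; the ~100-threads-per-scan and 1024-PID figures come from the discussion above.

```python
# Hypothetical sketch: spreading qtree scans over several connmgr
# replicas so no single pod approaches the per-pod PID cgroup limit.
import hashlib

PODS = ["connmgr-0", "connmgr-1", "connmgr-2"]  # assumed replica names
THREADS_PER_SCAN = 100   # rough figure from the discussion
PID_LIMIT = 1024         # OpenShift per-pod pids cgroup limit

def pod_for_qtree(qtree: str) -> str:
    """Deterministically map a qtree to one of the connmgr replicas."""
    digest = int(hashlib.sha256(qtree.encode()).hexdigest(), 16)
    return PODS[digest % len(PODS)]

def max_concurrent_scans_per_pod() -> int:
    """Upper bound on simultaneous scans before the PID limit is hit."""
    return PID_LIMIT // THREADS_PER_SCAN
```

With these figures a single pod can run at most about 10 scans at once, which is why distributing scans over several replicas (and nodes) relieves the pressure.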


There still seems to be a synchronization issue when more than one connmgr pod is running, but at least the scanned data gets ingested into the DB.


We also discussed an improved architecture. Our proposal is to add an orchestrator in OpenShift that, every time a scan of a qtree is initiated:

  • creates a new pod

  • mounts the qtree into the pod so the pod doesn't have to mount it itself; this could possibly remove the need for some of the high privileges in the SCC in OpenShift

  • puts the result into the DB once the pod finishes the scan of the qtree, and then removes the pod

  • keeps track of the status of each scan

This approach would also solve another problem, the possible exhaustion of file handles: in the current connmgr pod every qtree stays mounted, and if a new data connection cannot be added the connmgr pod creates a directory under /nfs-scanner/ anyway. Given more than 30000 qtrees and possibly several failed attempts to create a new data connection, we may easily end up with a huge number of mounts and empty/useless directories under /nfs-scanner.
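The orchestrator proposed above could be modeled as follows. This is a minimal sketch of the control logic only: the class and method names are hypothetical, and a real implementation would create and delete pods through the Kubernetes API (e.g. as short-lived Jobs) rather than manipulating in-memory dictionaries.

```python
# Hypothetical sketch of the proposed orchestrator: one short-lived pod
# per qtree scan, created with the qtree pre-mounted, tracked, and
# removed after its result is ingested into the DB.

class ScanOrchestrator:
    def __init__(self):
        self.status = {}   # qtree -> "running" | "done"
        self.pods = {}     # qtree -> pod name
        self.db = []       # stand-in for the Spectrum Discover DB

    def start_scan(self, qtree: str) -> str:
        """Create a scan pod for one qtree, with the mount prepared."""
        pod = f"scan-{qtree.replace('/', '-')}"
        # The orchestrator, not the pod, mounts the qtree; this is the
        # step that could relax the privileged SCC requirement.
        self.pods[qtree] = pod
        self.status[qtree] = "running"
        return pod

    def finish_scan(self, qtree: str, result: dict) -> None:
        """Ingest the scan result, then remove the pod and its mount."""
        self.db.append(result)
        del self.pods[qtree]   # pod removal also releases the mount,
                               # so no stale /nfs-scanner/ directories
        self.status[qtree] = "done"
```

Because each pod (and its single mount) is removed as soon as its scan finishes, the number of concurrent mounts tracks the number of running scans instead of growing with the total number of qtrees.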


Idea priority High
  • Admin
    THOMAS O'BRIEN
    Mar 15, 2023

    duplicate of SPD-I-41