We discussed a limitation of the connmgr pod in OpenShift with Joseph Dain in Hamburg. Due to the cgroup settings in OpenShift, a pod can only have 1024 PIDs. Since every scan creates ~100 or more threads, we reach this threshold quickly, causing the whole connmgr pod to become unreliable. A quick solution we came up with together with Joseph is to increase the number of connmgr pods so the scans are distributed across several pods and several nodes.
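As a stopgap under the current architecture, the PID pressure could be eased either by scaling out connmgr or by raising the per-pod PID limit through a KubeletConfig. A sketch of both options, assuming a deployment named "connmgr" in a namespace "spectrum-discover" (both names are assumptions, not taken from the actual product):

```shell
# Hypothetical resource names: deployment "connmgr", namespace "spectrum-discover".
# Option 1: scale out connmgr so scans spread across more pods/nodes:
oc scale deployment/connmgr -n spectrum-discover --replicas=4

# Option 2: raise the per-pod PID limit (OpenShift default: 1024) on worker nodes:
cat <<'EOF' | oc apply -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: worker-pids-limit
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    podPidsLimit: 4096
EOF
```

Note that raising `podPidsLimit` triggers a rolling reboot of the worker machine config pool, so scaling out the deployment is the less disruptive of the two.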
There still seems to be an issue with synchronization when there is more than one connmgr pod, but at least the scanned data gets ingested into the DB.
We also discussed an improved architecture. Our proposal is to add an orchestrator in OpenShift. Every time a scan of a qtree is initiated, the orchestrator:
- creates a new pod
- mounts the qtree into the pod so the pod doesn't have to mount it itself; this could possibly remove the need for some of the high privileges in the SCC in OpenShift
- puts the result into the DB once the pod finishes the scan of the qtree, and then removes the pod
- keeps track of the status of each scan
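The bookkeeping side of the orchestrator described above could be sketched like this. This is a minimal simulation of the status tracking only; pod creation, qtree mounting, and DB ingest are stand-ins, and every name here is hypothetical, not part of the actual product:

```python
from dataclasses import dataclass, field
from enum import Enum


class ScanState(Enum):
    RUNNING = "running"
    DONE = "done"


@dataclass
class Orchestrator:
    """Tracks one short-lived scanner pod per qtree (simulation only)."""
    scans: dict = field(default_factory=dict)   # qtree -> ScanState
    pods: set = field(default_factory=set)      # pods currently alive

    def start_scan(self, qtree: str) -> str:
        # In the real design this would create a pod and mount the qtree
        # into it, so the pod itself needs no mount privileges in its SCC.
        pod = f"scanner-{qtree}"
        self.pods.add(pod)
        self.scans[qtree] = ScanState.RUNNING
        return pod

    def finish_scan(self, qtree: str, result: dict, db: list) -> None:
        # Ingest the scan result into the DB, then remove the pod again.
        db.append({"qtree": qtree, **result})
        self.pods.discard(f"scanner-{qtree}")
        self.scans[qtree] = ScanState.DONE


db: list = []
orch = Orchestrator()
orch.start_scan("qtree0001")
orch.finish_scan("qtree0001", {"files": 12345}, db)
print(orch.scans["qtree0001"].value, len(orch.pods))  # done 0
```

Because each pod lives only for the duration of one qtree scan, the ~100 scan threads are confined to that pod's own PID cgroup instead of accumulating in a single long-lived connmgr pod.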
Another problem that can be solved with this approach is the possible exhaustion of file handles:
In the current connmgr pod every qtree stays mounted, and if a new data connection cannot be added, the connmgr pod creates a directory under /nfs-scanner/ anyway. With more than 30000 qtrees and possibly several failed attempts to create a new data connection, we can easily end up with a huge number of mounts and empty/useless directories under /nfs-scanner.
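One way to avoid accumulating stale mounts and empty directories is to create the per-qtree directory only after the data connection succeeds, and to tear it down again when the scan finishes. A minimal sketch under those assumptions (the scanner root, the `connect` callback, and the function name are all hypothetical):

```python
import os
import tempfile


def scan_qtree(base: str, qtree: str, connect) -> bool:
    """Create the mount directory only if the data connection succeeds,
    so a failed attempt leaves nothing behind under the scanner root."""
    mount_point = os.path.join(base, qtree)
    if not connect(qtree):          # e.g. the NFS data connection attempt
        return False                # no directory, no stale mount
    os.makedirs(mount_point, exist_ok=True)
    try:
        # ... mount the qtree and run the scan here ...
        return True
    finally:
        # Unmount and remove the directory once the scan is done, instead
        # of keeping all 30000+ qtrees mounted permanently.
        os.rmdir(mount_point)


base = tempfile.mkdtemp()           # stands in for /nfs-scanner
ok = scan_qtree(base, "qtree0001", lambda q: True)
bad = scan_qtree(base, "qtree0002", lambda q: False)
print(ok, bad, os.listdir(base))    # True False []
```

With the orchestrator architecture this bookkeeping disappears entirely, since each mount lives and dies with its short-lived scanner pod.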
Idea priority: High
Duplicate of SPD-I-41