@IBM
As discussed with IBM, please close this request.
Hi
No, we were not aware of “directory container pools”.
Here are a few numbers/facts about our environment:
- The disk pool we are talking about is our buffer pool for DB2 logs before they go to tape.
- Our DB2 instance size is 370 TB, distributed over 20 LPARs (client nodes).
- DB2 is compressed, and so are the logs sent to SP (DB2-internal compression).
(From the end of 2021, DB2 logs will be archived NX842-compressed.)
- DB2 sends an average of 17 TB of DB2 logs per day to the SP server.
- Frequent peaks of 25+ TB of DB2 logs per day.
(In fact, these log files are NOT archived evenly over 24 h, but within 8-12 h; at peak we receive 8 TB in 4 hours.)
- The SP random-access disk pool size is 5 TB.
- In summary, the pool is "filled and migrated" to the tape pool 4-5 times per day.
- As soon as the 5 TB pool is 20% full, we start migration to tape.
- If we start too late, DB2 can fill the disk pool to 100%, because archiving is faster than migration.
- Every hour we run a BACKUP STGPOOL from the disk pool and the tape pool to a copy tape pool.
- In case of a restore, we can retrieve the DB2 logs for all 20 LPARs in parallel with 20 tape drives via the LAN-free path.
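To make the timing pressure concrete, here is a rough back-of-the-envelope sketch. The pool size, trigger threshold, and peak ingest rate are the figures quoted above; the migration throughput is a purely hypothetical assumption for illustration:

```python
# Rough check of the disk pool headroom described above.
# From the environment: 5 TB pool, migration triggered at 20% full,
# peak ingest of 8 TB in 4 hours. The migration rate below is an
# ASSUMPTION, not a measured value.

POOL_TB = 5.0
TRIGGER_PCT = 0.20
PEAK_INGEST_TB_PER_H = 8.0 / 4.0       # 2 TB/h at peak
ASSUMED_MIGRATION_TB_PER_H = 1.5       # hypothetical tape migration rate

def hours_until_full(pool_tb, trigger_pct, ingest_rate, migration_rate):
    """Hours from the migration trigger until the pool hits 100%,
    assuming ingest and migration run at constant rates."""
    net_fill = ingest_rate - migration_rate
    if net_fill <= 0:
        return float("inf")            # migration keeps up; pool never fills
    headroom_tb = pool_tb * (1 - trigger_pct)
    return headroom_tb / net_fill

t = hours_until_full(POOL_TB, TRIGGER_PCT,
                     PEAK_INGEST_TB_PER_H, ASSUMED_MIGRATION_TB_PER_H)
print(f"Pool full {t:.1f} h after the migration trigger")  # 8.0 h
```

With these numbers the pool fills about 8 hours after the trigger; any slowdown in tape migration shortens that window, which is exactly the "start too late" scenario described above.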
I have the following questions:
1) Is it possible to have a directory container pool without deduplication?
2) What deduplication factor can be expected with NX842-compressed log files (experience for an average DB2)?
3) We keep 30 days of log files (= 500+ TB). What amount of "unique dedup blocks" is to be expected (experience for an average DB2)?
4) Where are those unique dedup blocks stored?
5) What is the CPU requirement to deduplicate up to 25 TB per day?
6) Can container pool volumes be "node collocated"?
7) Does tiering happen "per container pool volume" or "per client node over all volumes"?
8) How fast can tiering to tape be (parallelism of tiering processes)?
9) Does tiering to tape follow the "node collocation" of the destination tape pool?
10) Can I define the maximum amount of space used at which tiering has to start?
11) Does protect stgp + protect stgp give the same functionality as we have now with ba stg?
12) Can a restore from tape that has to reassemble the DB2 log files from dedup blocks be as fast as a 20-tape-drive parallel LAN-free restore of normal files?
Kind regards
Ruedi Hartmeier
Hi team,
my customer points out that this RFE is still in status Submitted, and for nearly 4 years nothing has happened!
The status shows 'Updated: 29.11.2019', but it is not clear what exactly the update was...
We have been waiting for a response for nearly 4 years now!
When can we expect an answer?
Please can you update this RFE ASAP?
Many thanks and BR, Lars.