IBM System Storage Ideas Portal



Status Not under consideration
Created by Guest
Created on Jun 14, 2017

TSM does not start Migrations properly

The TSM server does not start any new migrations if one migration from the previous "reaching HiMig" cycle is still running.
Example (a sketch of the observed vs. requested trigger logic follows the list):
1. HiMig reached = 8 migrations started for the diskpool
2. Migrations running and decreasing the utilisation of the pool
3. LoMig reached, migrations ending, except one
4. Client sends a lot of data, utilisation reaches HiMig again.
5. The ONE single migration left over from before is still running
6. TSM server DOES NOT start new migrations up to MigPr=8
7. That one single, still-running migration has no chance against the client filling up the diskpool
8. Diskpool reaches 100%
9. Client (DB2) cannot archive its DB2 logs anymore
10. DB2 log space fills up until DB2 crashes
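
The behaviour above can be summarised as trigger logic. Below is a minimal Python sketch, assuming the HiMig/MigPr semantics described in this idea; the function and parameter names are illustrative only, and this is not actual TSM server code:

# Minimal sketch of the migration-trigger logic (illustrative only, not TSM code).
# himig/migpr stand for the pool's HiMig and MigPr settings described above.

def observed_behaviour(pool_util_pct, himig, running_migrations, migpr):
    # Observed: a new set of migration processes is only started when HiMig is
    # reached AND no migration from the previous cycle is still running.
    if pool_util_pct >= himig and running_migrations == 0:
        return migpr   # start MigPr migration processes
    return 0           # one leftover migration blocks the new cycle

def requested_behaviour(pool_util_pct, himig, running_migrations, migpr):
    # Requested: whenever HiMig is reached, top the pool up to MigPr processes,
    # even if one migration from the previous cycle is still running.
    if pool_util_pct >= himig:
        return max(0, migpr - running_migrations)
    return 0

With MigPr=8 and one leftover migration, the observed logic starts 0 new processes when HiMig is reached again, while the requested logic would start 7.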

Idea priority High
  • Guest | Jul 15, 2021

    @IBM
    As discussed with IBM, please close this request

  • Guest | Apr 21, 2021

    Hi

    No, we did not know about “directory container pools”.

    Maybe you should know a few numbers/facts about our environment.
    - The diskpool we are talking about is our bufferpool for DB2-logs before they go to tape.
    - Our DB2 instance size is 370 TB and it is distributed over 20 LPARs (client nodes).
    - DB2 is compressed and so are the logs sent to SP (DB2-internal compression).
    (As of the end of 2021, DB2 logs will be archived NX842-compressed.)
    - DB2 sends an average of 17 TB of DB2 logs per day to the SP server.
    - Frequent peaks of 25+ TB of DB2 logs per day.
    (In fact, these log files are NOT archived evenly over 24 h but within 8-12 h, and at peak we get 8 TB in 4 hrs.)
    - The SP random disk pool size is 5 TB.
    - So, summarized, the pool is "filled and migrated" to the tape pool 4-5x per day.
    - As soon as the 5 TB pool is 20% filled, we start migration to tape.
    - If we start too late, it could happen that DB2 fills up the diskpool to 100% because archiving is faster than migration (a rough timing sketch follows this list).
    - Every hour we do a BackupStg from the diskpool and the tapepool to a copy tapepool.
    - In case of a restore we can get the DB2 logs for all 20 LPARs in parallel with 20 tape drives via the LAN-free path.
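
    Here is a rough back-of-the-envelope timing sketch in Python for the peak case above (8 TB of DB2 logs within 4 hours into the 5 TB random disk pool). The migration drain rate below is an assumed figure for illustration only, not a measured value, and the pool is assumed to start empty:

    # Rough timing sketch for the peak load described above (illustrative only).
    pool_size_tb = 5.0                   # SP random disk pool size
    peak_ingest_tb_per_h = 8.0 / 4.0     # 8 TB of DB2 logs within 4 h -> 2 TB/h
    assumed_drain_tb_per_h = 1.0         # ASSUMED migration throughput to tape (not measured)

    net_fill_tb_per_h = peak_ingest_tb_per_h - assumed_drain_tb_per_h
    if net_fill_tb_per_h > 0:
        hours_until_full = pool_size_tb / net_fill_tb_per_h
        print(f"Pool full after roughly {hours_until_full:.1f} h at peak load")  # ~5.0 h here
    else:
        print("Migration keeps up with the peak ingest rate")

    With these (assumed) numbers the pool fills in about 5 hours at peak, which is why migration has to start early and run with the full number of processes.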

    I have the following questions:
    1) Is it possible to have that directory container pool without deduplication?
    2) What deduplication factor can be expected with NX842-compressed log files (experience for an average DB2)?
    3) We keep 30 days of log files (= 500+ TB). What amount of "unique dedup blocks" is to be expected (experience for an average DB2)?
    4) Where are those unique dedup blocks stored?
    5) What is the CPU requirement to deduplicate up to 25 TB per day?
    6) Can container pool volumes be "node collocated"?
    7) Does tiering happen "per container pool volume" or "per client node over all volumes"?
    8) How fast can tiering to tape be (parallelism of tiering processes)?
    9) Does tiering to tape follow the “node collocation” of the destination tape pool?
    10) Can I define the maximum amount of space used at which tiering has to be started?
    11) Does protect stgp + protect stgp give the same functionality as we have now with ba stg?
    12) Can a restore from tape, which has to reassemble the DB2 log files from dedup blocks, be as fast as a 20-tape-drive parallel LAN-free restore of normal files?

    Kind regards
    Ruedi Hartmeier

  • Guest | Apr 15, 2021

    Hi team,
    my customer points out that this RFE is still in status Submitted and that nothing has happened for nearly 4 years!
    Its status says 'Updated: 29.11.2019', but it is not clear what exactly the update was...

    We have been waiting for a response for nearly 4 years now!

    When can we expect an answer?

    Please can you update this RFE ASAP?

    Many thanks and BR, Lars.