IBM System Storage Ideas Portal


This portal is to open public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).



Status Not under consideration
Created by Guest
Created on Mar 1, 2017

Fewer tape commits in server processes such as Backup Storage Pool, Migration, and Reclamation to improve throughput

TSM does a tape commit (and thus a time-consuming backhitch on the tape drive) every time the backup date, the node name, the filespace, or even the management class within a single filespace of a single node changes.

This happens regardless of, and takes precedence over, the parameters you set: txnbytelimit on the client, and txngroupmax, movebatchsize, and movesizethresh on the server.

Since TSM does not order the objects in a pool by these criteria when moving data to tape, you can end up with far more time-consuming backhitches on your tape drives than expected.
For example, if 10 nodes with 10 filespaces each, using 2 management classes, back up a total of 10 GB of data, I would expect this to be written to the tape drive in a single pass; a tape drive writing at 360 MB/s would finish those 10 GB in under 30 seconds.
Instead you get at least 10 x 10 x 2 = 200 tape commits, each taking 0.5 to 4 seconds depending on your tape drives, so the 10 GB take 30 seconds + (200 * 0.5 seconds) = 130 seconds, or about 80 MB/s.
And if the data is "interleaved" in the pool because the clients backed up simultaneously and/or with many threads, you can easily end up with far, far more tape commits.
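The arithmetic above can be reproduced with a short model. This is purely illustrative, using the numbers from the example and an assumed fixed penalty of 0.5 seconds per tape commit:

```python
# Illustrative throughput model for the example above.
# Assumptions: 10 GB total, 360 MB/s streaming drive speed,
# 0.5 s backhitch penalty per tape commit (the low end of the 0.5-4 s range).

def effective_throughput_mb_s(data_gb, drive_mb_s, commits, commit_penalty_s):
    """Effective MB/s once per-commit backhitch time is added to streaming time."""
    data_mb = data_gb * 1000
    streaming_time = data_mb / drive_mb_s            # time spent actually writing
    total_time = streaming_time + commits * commit_penalty_s
    return data_mb / total_time

# 10 nodes x 10 filespaces x 2 management classes = 200 commits
print(effective_throughput_mb_s(10, 360, 200, 0.5))  # ~78 MB/s, matching the example
print(effective_throughput_mb_s(10, 360, 1, 0.5))    # near full drive speed with one commit
```

With 4-second commits instead of 0.5, the same 200 commits would push effective throughput down to roughly 12 MB/s, which is why interleaved data makes the problem so much worse.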

When transferring data within a TSM server, instead of doing tape commits based on the structure of the data (backup date, node name, filespace, management class), TSM should do tape commits every x seconds or every x GB. This would result in fewer tape commits, fewer tape-buffer underruns and backhitches, and consequently much higher throughput.

Idea priority High
  • Guest, Jun 23, 2017

    IBM has evaluated the request and has determined that it cannot be implemented at this time or does not align with the current multi-year strategy.

    IBM recommends that customers perform backups of small objects to disk first and then migrate them to tape.