This also appears to be a duplicate of http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=86101
Just to clarify, while the performance guide(s) for SP/TSM say this:
"Only one producer session per file system compares attributes for incremental backup. Incremental backup throughput does not improve for a single file system with a small amount of changed data"
Selective backups also seem to use only one producer thread, even with the current 8.1.4 client, but my test system is not large enough to say this with certainty.
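For example (paths here are just placeholders, and this assumes resourceutilization is set high enough), as I understand it a command like the one below does get parallel producer sessions across the separate file systems, but a single large file system still scans with one:

dsmc incremental /fs1 /fs2 /fs3 -resourceutilization=10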
I find that, in conjunction with flash, you are often limited by the TSM client disabling TCP/IP autotuning by default, which gives abysmal per-session network speeds. Multiple sessions are a workaround in those cases, but fixing the broken network settings is preferable when possible.
See http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=42266 for related RFE (and workaround).
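For reference, the workaround I use in dsm.sys looks roughly like this (option names are the standard client options; the values are just what worked here, not a recommendation, and if I remember correctly a window size of 0 tells the client to leave the TCP window to the OS so autotuning still works):

* use the OS default (autotuned) TCP window instead of a fixed size
TCPWINDOWSIZE 0
* allow additional parallel producer/consumer sessions
RESOURCEUTILIZATION 10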
The best way to do this would likely be to add directories to the same queue that file systems are added to, instead of traversing them inline. That way, the performance monitor thread could still manage producers and consumers in the same way.
That might require MEMORYEFFICIENT=BIGDISK so that a second queue file is created for directories.
We would still have problems with a single huge directory (getdents / getdirent doesn't run in parallel), but it should work fairly well for large numbers of small directories (no fork/free and no TCP drop/reconnect).
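Roughly what I mean, as a hypothetical sketch in plain Python (nothing to do with the actual client code): directories found while scanning go back onto the same shared queue the file-system roots start on, so any idle producer thread can pick them up.

import os
import queue
import threading

work = queue.Queue()  # shared queue of paths to scan: file-system roots and directories

def producer(q):
    # simplistic termination: a producer gives up after the queue has been empty for a second
    while True:
        try:
            path = q.get(timeout=1)
        except queue.Empty:
            return
        try:
            with os.scandir(path) as entries:
                for entry in entries:
                    if entry.is_dir(follow_symlinks=False):
                        # directories go back onto the same queue instead of being walked inline
                        q.put(entry.path)
                    else:
                        # here the real client would compare attributes and hand changed
                        # files off to a consumer/send session
                        pass
        except OSError:
            pass  # unreadable path; skip it in this sketch

# seed the queue with the file-system roots, then let N producers drain it
for root in ("/fs1", "/fs2"):
    work.put(root)

threads = [threading.Thread(target=producer, args=(work,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()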
Do you know about the solution from my previous post?
http://www.general-storage.com/PRODUCTS/dsmISI-MAGS/Concat_Flyer_TSM-Isilon_dsmISI_MAGS_EN.pdf
This will be especially useful in today's environments, where flash storage and 10 Gbps and faster networks are common. This automated multiple-session capability should apply not only to backups but to restores as well. I have two customers here in Australia who are asking for the same thing.