IBM System Storage Ideas Portal


This portal is for opening public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

Status: Delivered
Created by: Guest
Created on: Nov 7, 2019

Unexpected very high bandwidth utilization with large block size (16M) on GPFS file system

We see unexpectedly high bandwidth utilization on a file system with a large (16 MB) block size. The performance graph shows 30-50 GBps of bandwidth when a user runs an app/job against the 16 MB block size file system. When the same app/job is run against a 4 MB block size file system, bandwidth stays in the range of 100 MBps to 1 GBps at most.
The I/O is usually small, but strided. Increasing the pagepool to 32 GB gives some relief, but it does not rectify the problem completely, and we still see high bandwidth with some workloads/jobs.
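
If prefetch operates in whole-block units, each small strided read can pull in an entire 16 MB block, amplifying back-end traffic by roughly 256x for a 64 KiB read, which would be consistent with the gap observed above. The following minimal C sketch reproduces the kind of access pattern described; it is illustrative only, and the file path, read size, and stride are assumptions rather than values from the original report.

    /* Small reads at a large stride, the pattern described above.
     * Path, read size, and stride are invented for illustration. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define READ_SIZE (64 * 1024)        /* small application read       */
    #define STRIDE    (4 * 1024 * 1024)  /* distance between read starts */

    int main(void)
    {
        char *buf = malloc(READ_SIZE);
        int fd = open("/gpfs/fs16m/datafile", O_RDONLY); /* hypothetical path */
        if (fd < 0 || buf == NULL) {
            perror("setup");
            return 1;
        }

        /* Touch READ_SIZE bytes out of every STRIDE bytes until EOF.
         * Whole-block prefetch would fetch far more than is consumed. */
        for (off_t off = 0; ; off += STRIDE) {
            ssize_t n = pread(fd, buf, READ_SIZE, off);
            if (n <= 0)
                break; /* EOF or error */
        }

        close(fd);
        free(buf);
        return 0;
    }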

It seems that prefetch is not being done optimally.

Data is being flushed out of the cache and then re-read.

For larger block sizes like 16 MB, it would be good if prefetch could be done on a smaller-than-block-size basis.

There needs to be some way to tune prefetching in this manner, and I think the existing variables are not sufficient. One idea would be to make prefetching somewhat dynamic: a simple heuristic could compare the amount of data recently prefetched with the amount of data actually consumed by the application. If we are prefetching far more data than we are consuming, we are likely prefetching in larger-than-optimal chunks.
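
A minimal C sketch of such a heuristic is below. All names, thresholds, and window handling are invented for illustration; this is not the GPFS/Spectrum Scale prefetch implementation, just one way the idea could work.

    /* Hypothetical adaptive prefetch: shrink the prefetch unit when far
     * more data is prefetched than consumed, grow it back when prefetch
     * is actually being used. Names and thresholds are assumptions. */
    #include <stdint.h>

    #define MIN_PREFETCH (256 * 1024)       /* assumed floor: 256 KiB */
    #define MAX_PREFETCH (16 * 1024 * 1024) /* one full 16 MiB block  */

    struct prefetch_state {
        uint64_t bytes_prefetched; /* fetched ahead in the current window  */
        uint64_t bytes_consumed;   /* actually read by the application     */
        uint64_t chunk;            /* current prefetch unit, <= block size */
    };

    static void adapt_prefetch(struct prefetch_state *ps)
    {
        if (ps->bytes_consumed == 0)
            return; /* no signal in this window */

        /* Waste ratio: bytes prefetched per byte actually consumed. */
        uint64_t ratio = ps->bytes_prefetched / ps->bytes_consumed;

        if (ratio > 4 && ps->chunk > MIN_PREFETCH)
            ps->chunk /= 2;  /* mostly wasted: prefetch smaller pieces  */
        else if (ratio <= 1 && ps->chunk < MAX_PREFETCH)
            ps->chunk *= 2;  /* prefetch is consumed: grow toward block */

        /* Reset the window so the ratio tracks recent behavior. */
        ps->bytes_prefetched = 0;
        ps->bytes_consumed = 0;
    }

Because the chunk can drop below the file-system block size, this would also cover the smaller-than-block prefetch suggested above.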

Idea priority: High
  • Admin
    THOMAS O'BRIEN
    Dec 1, 2023

    Delivered in Scale 5.1.2

  • Guest
    Sep 30, 2020

    Due to processing by IBM, this request was reassigned to have the following updated attributes:
    Brand - Servers and Systems Software
    Product family - IBM Spectrum Scale
    Product - Spectrum Scale (formerly known as GPFS) - Public RFEs
    Component - Product functionality

For record keeping, the previous attributes were:
    Brand - Servers and Systems Software
    Product family - IBM Spectrum Scale
    Product - Spectrum Scale (formerly known as GPFS) - Public RFEs
    Component - Technical Foundation

  • Guest
    Sep 17, 2020

Work is under way to improve this issue, with some improvements anticipated for 5.1.1. It is being addressed as a defect, so I am closing out the RFE.

  • Guest
    Feb 28, 2020

Any updates on this? It is degrading performance and causing outages for users.

  • Guest
    Nov 20, 2019

This is now posing a major problem. The file system suffers performance issues when bandwidth shoots up to 400-500 Gbps, mainly due to this prefetch.