IBM System Storage Ideas Portal

Status: Not under consideration
Created by: Guest
Created on: Jul 30, 2014

Multiple block sizes at the pool level

Pools could be configured with different block sizes (not metablocks) based on workload requirements.

Idea priority: Urgent

  • Guest | Sep 30, 2020

    Due to processing by IBM, this request was reassigned to have the following updated attributes:
    Brand - Servers and Systems Software
    Product family - IBM Spectrum Scale
    Product - Spectrum Scale (formerly known as GPFS) - Private RFEs
    Component - Product functionality

    For record keeping, the previous attributes were:
    Brand - Servers and Systems Software
    Product family - IBM Spectrum Scale
    Product - Spectrum Scale (formerly known as GPFS) - Private RFEs
    Component - V3 Product functionality

  • Guest | Jun 4, 2020

    Duplicate of another RFE, so closing this one.

  • Guest | Oct 22, 2014

    This is a requirement that has been around for a long time. In particular, once support for different data and metadata block sizes was introduced, it seemed logical to take this one step further and allow different block sizes for different data pools. Unfortunately, that extra step, while logical, is substantially harder to take, because it introduces a new class of problems.

    Data and metadata can never be mixed within a single object: an object is either data or metadata, so there is no scenario where a migration between the two could happen. Different block sizes for different data pools are another matter. The whole point of having multiple data pools is the ability to migrate files between them, and such a migration cannot be done atomically for a file with more than a single block. GPFS would therefore need to be able to describe a file whose blocks reside in two (or more) pools with different block sizes, and if a migration between pools is interrupted and later restarted, yet another pool may become involved. With the way data addressing is currently implemented, describing data that resides in different pools within the same file is quite hard. (A toy sketch of the resulting bookkeeping appears after the comment thread below.)

    The provocative question that usually gets asked in discussions about this item is: why do users feel the need to use different block sizes in the first place? Wouldn't it be better if a single block size worked equally well for all workloads? Why not use 16M for everything? The two reasons usually cited are (a) performance and (b) disk space utilization.

    While reason (a) was very pressing some years ago, the performance picture has been changing, and the issue should not be as pressing now. In GPFS 4.1 the granularity of IO is largely independent of the block size: one can use 16M blocks with a small-record workload, and GPFS will read and write only the relevant parts of blocks, down to 4K granularity. Some work remains to handle very large blocks with full efficiency, though.

    Reason (b) is a real problem. Since the smallest unit of allocation in GPFS is a subblock, which is currently fixed at 1/32nd of a full block, a small file that does not fit in the inode occupies at least one subblock. Support for 4K inodes and data-in-inode has ameliorated the issue for very small files and directories, but the problem still exists. One way to address it is to allow more than 32 subblocks per block, which is a work item under consideration; a worked example of the space-utilization arithmetic appears after the comment thread below. If both (a) and (b) were addressed, that could be a credible alternative to using multiple block sizes.

  • Guest | Sep 19, 2014

    Due to processing by IBM, this request was reassigned to have the following updated attributes:
    Brand - Servers and Systems Software
    Product family - General Parallel File System
    Product - GPFS
    Component - V3 Product functionality
    Operating system - Multiple

    For record keeping, the previous attributes were:
    Brand - Servers and Systems Software
    Product family - General Parallel File System
    Product - GPFS
    Component - V3
    Operating system - Multiple

  • Guest | Jul 31, 2014

    Creating a new RFE based on Community RFE #56776 in product GPFS.
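
The Oct 22, 2014 comment above argues that the hard part of per-pool block sizes is describing a single file whose blocks, part-way through a migration, sit in pools with different block sizes. The toy Python sketch below is purely illustrative: it is not GPFS's actual data addressing, and every name in it is made up. It only shows the kind of bookkeeping a per-file block map would need once offset lookups can no longer use a single block size and instead have to walk extents that each carry their own pool and block size.

    from dataclasses import dataclass

    @dataclass
    class Extent:
        # One contiguous run of file data, described relative to the pool it lives in.
        pool: str          # storage pool holding these blocks (hypothetical names)
        block_size: int    # block size of that pool, in bytes
        start_block: int   # first pool block of the run
        num_blocks: int    # number of pool blocks in the run

    @dataclass
    class FileBlockMap:
        # With one block size, offset -> block is simple division. With mixed
        # block sizes, the lookup has to walk the extent list and convert
        # offsets per extent.
        extents: list

        def locate(self, offset: int):
            """Return (pool, block_size, block_index, offset_in_block) for a file offset."""
            file_pos = 0
            for ext in self.extents:
                ext_bytes = ext.num_blocks * ext.block_size
                if offset < file_pos + ext_bytes:
                    rel = offset - file_pos
                    return (ext.pool,
                            ext.block_size,
                            ext.start_block + rel // ext.block_size,
                            rel % ext.block_size)
                file_pos += ext_bytes
            raise ValueError("offset beyond end of file")

    # A 12 MiB file caught half-way through a migration from a pool with 1 MiB
    # blocks to a pool with 4 MiB blocks: the first 8 MiB have moved, the rest
    # have not.
    MiB = 2**20
    f = FileBlockMap(extents=[
        Extent(pool="fast", block_size=4 * MiB, start_block=100, num_blocks=2),
        Extent(pool="slow", block_size=1 * MiB, start_block=900, num_blocks=4),
    ])
    print(f.locate(9 * MiB))   # ('slow', 1048576, 901, 0)

An interrupted and restarted migration could add a third pool with yet another block size to the list, which is exactly the class of state the comment says the current addressing scheme cannot easily describe.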
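
Reason (b) in the same comment concerns disk space utilization: with the allocation unit fixed at 1/32nd of a block, a small file that does not fit in the inode wastes most of its subblock on a large-block file system, and allowing more subblocks per block is named as a possible fix. The short Python calculation below works through that arithmetic; the 5 KiB file size, the block sizes, and the 1024-subblocks-per-block figure are illustrative assumptions, not values taken from the comment.

    import math

    MiB = 2**20
    KiB = 2**10

    def allocated_bytes(file_size, block_size, subblocks_per_block):
        """Space a file occupies when allocation is rounded up to whole subblocks."""
        subblock = block_size // subblocks_per_block
        return math.ceil(file_size / subblock) * subblock

    for block_size in (1 * MiB, 16 * MiB):
        for subblocks in (32, 1024):
            used = allocated_bytes(5 * KiB, block_size, subblocks)
            print(f"5 KiB file, {block_size // MiB:>2} MiB blocks, "
                  f"{subblocks:>4} subblocks/block -> allocates {used // KiB} KiB")

    # Output:
    #   5 KiB file,  1 MiB blocks,   32 subblocks/block -> allocates 32 KiB
    #   5 KiB file,  1 MiB blocks, 1024 subblocks/block -> allocates 5 KiB
    #   5 KiB file, 16 MiB blocks,   32 subblocks/block -> allocates 512 KiB
    #   5 KiB file, 16 MiB blocks, 1024 subblocks/block -> allocates 16 KiB

As the last two lines show, the 16M-blocks-for-everything approach the comment mentions only becomes space-efficient for small files if the subblock count grows well beyond 32.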