This portal is for opening public enhancement requests against IBM System Storage products. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).
We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:
Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for,
Post an idea.
Get feedback from the IBM team and other customers to refine your idea.
Follow the idea through the IBM Ideas process.
Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find additional information and details about the IBM Ideas process and statuses.
IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.
ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.
Due to processing by IBM, this request was reassigned to have the following updated attributes:
Brand - Servers and Systems Software
Product family - IBM Spectrum Scale
Product - Spectrum Scale (formerly known as GPFS) - Public RFEs
Component - Product functionality
For record keeping, the previous attributes were:
Brand - Servers and Systems Software
Product family - IBM Spectrum Scale
Product - Spectrum Scale (formerly known as GPFS) - Public RFEs
Component - Technical Foundation
Not in roadmap for foreseeable future
It seems quite common to have a system pool on SSD/Flash for *mainly* metadata, and then another "data" pool on NL-SAS or similar. In such a situation we often have far more capacity in the system pool than needed (metadata sizing is difficult to get right), and it is useful to keep the option open of later putting policies in place that use this SSD/Flash pool for other types of files as well. Therefore it is convenient to assign usage=dataAndMetadata to the system pool NSDs.
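A minimal sketch of such a setup, as an NSD stanza of the kind passed to mmcrnsd/mmcrfs (the NSD and server names here are hypothetical):

```
# Flash NSD kept open for both data and metadata,
# even though it will initially hold mostly metadata
%nsd: nsd=flash01
  servers=nsdserver1
  usage=dataAndMetadata
  pool=system
```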
GPFS will then automatically put data on the "data" pool by some automated logic, without any placement policy being installed. For the customer/user it will look like filesystem capacity = system pool + data pool, and they will get confused if the filesystem suddenly starts giving out-of-space errors when only the dataOnly pool is exhausted.
Therefore it would be nice if the automated placement logic were smart enough to use *any* data pool for data when no explicit placement policy is installed, e.g. by having an implicit placement policy covering all data pools, with a 99% limit or similar.
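The kind of implicit default the idea suggests might look like the following policy rules (pool names are assumptions; LIMIT(99) stops a rule from applying once the target pool is 99% full):

```
/* Fill the NL-SAS data pool up to 99% occupancy, then
   spill new files over to the flash system pool instead
   of returning out-of-space errors */
RULE 'data99' SET POOL 'data' LIMIT(99)
RULE 'spill'  SET POOL 'system'
```

Rules are evaluated in order, so the second rule only takes effect once the first no longer applies.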
No customer is behind this request; it is just a common source of confusion during Spectrum Scale Advanced Hands-on Workshop training.
Can you please explain the scenario in more detail? This does not seem to be a conventional way to use pools.
Also, is there a customer behind this request? If so, please provide their details.