Sunday, December 4, 2016

Quota handling across OpenStack projects

Quotas

Quotas in OpenStack are used to prevent system capacities from being exhausted. If a Service has a quota for a Resource, it means that:
  • It is possible to set a Limit for the Resource.
  • It is possible to check the Usage of the Resource during allocation.


 Let's define quotas:
  1. A quota belongs to a Resource of a Service.
  2. A quota is a kind of restriction for the consumers of the Resource.
  3. A Consumer of the resource can be defined as a Project or as a scope (User-Project or User-Domain).
  4. The Service performs quota enforcement when a Consumer allocates some Resource.
  5. Enforcement is a check that usage is less than the limit.
  6. If the resource was successfully allocated, the Service increases the Usage.
  7. Each service has to have default limits. Limits can be customized by an admin.
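The definitions above can be sketched in a few lines of Python (illustrative names, not actual OpenStack code):

```python
class OverQuota(Exception):
    """Raised when an allocation would exceed the limit."""


# Each service ships default limits; admins can override them.
DEFAULT_LIMITS = {"instances": 10, "cores": 20}


class QuotaDriver:
    def __init__(self, configured_limits=None):
        # Configured limits override the service defaults.
        self.limits = dict(DEFAULT_LIMITS, **(configured_limits or {}))
        self.usage = {}  # (project_id, resource) -> current usage

    def allocate(self, project_id, resource, requested=1):
        # Enforcement: usage + requested must not exceed the limit.
        used = self.usage.get((project_id, resource), 0)
        if used + requested > self.limits[resource]:
            raise OverQuota("%s: %d + %d exceeds limit %d"
                            % (resource, used, requested,
                               self.limits[resource]))
        # Allocation succeeded, so the service increases the usage.
        self.usage[(project_id, resource)] = used + requested
```

This is only a sketch of the model; real drivers (e.g. Nova's) also handle reservations, rollbacks, and concurrent updates.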

Resources

To understand what kinds of resources can be limited, let's take a look at the "Project overview" page in Horizon:

Each Resource belongs to some Service (Nova, Neutron, Cinder). Let's take a look at what Resources each Service has.

Nova:


Neutron:


Cinder:


So, how does Horizon know what resources, limits, and defaults a service has?
Let's take a look at the code.
Horizon code openstack_dashboard/usage/quotas.py:


Nova code nova/quota.py:



Horizon has a hardcoded list of resources, Nova has a hardcoded list of resources, and Horizon has a hardcoded mapping of resources to services.
This is a problem!
Problem #1
The names of the resources and the resource-service mapping are hardcoded. They are hardcoded in the service code (Nova, for example) and have to be hardcoded again in the client code (Horizon, for example).
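To illustrate the duplication, here is an abbreviated sketch of what both sides keep hardcoded (simplified; the real lists in openstack_dashboard/usage/quotas.py and nova/quota.py are much longer):

```python
# In Nova (service side): the service's own resource list.
NOVA_RESOURCES = ["instances", "cores", "ram", "key_pairs"]

# In Horizon (client side): the same names again, per service,
# plus a hand-maintained resource -> service mapping.
NOVA_QUOTA_FIELDS = ["instances", "cores", "ram", "key_pairs"]
NEUTRON_QUOTA_FIELDS = ["network", "subnet", "port", "router", "floatingip"]
CINDER_QUOTA_FIELDS = ["volumes", "snapshots", "gigabytes"]

QUOTA_FIELDS = NOVA_QUOTA_FIELDS + NEUTRON_QUOTA_FIELDS + CINDER_QUOTA_FIELDS

# Any new resource added in a service requires a matching client change,
# or Horizon simply will not know the resource exists.
```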

Limits

There are two types of limits:
  1. Default limits
  2. Configured limits
Limits can be modified via Horizon or via a service API. To display limits per project, Horizon has to ask all services (and, first of all, decide which services to ask) for limits info.
Problem #2
There is no centralized quota management for OpenStack projects.
A quota is basically a limit and a usage. Usage is not manageable; it is a counter updated by the Service. So when we talk about quota management, we really mean limits management.

Hierarchical projects

OpenStack supports a projects hierarchy. A projects tree can be built to provide a more manageable model and to organize infrastructure according to the divide-and-conquer paradigm.
From this, it logically follows that quota management and enforcement tools should also be aware of the projects hierarchy.
For nested projects, limits enforcement should be done in such a way that not only can a project's usage not exceed the project limit, but the sum of the project's usage and its subprojects' usages cannot exceed the project limit either.
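This rule can be sketched as a check that walks up the projects tree: at every ancestor, the subtree's total usage plus the new request must fit within that ancestor's limit. An illustrative sketch, not actual service code:

```python
def subtree_usage(project, usage, children):
    """Usage of a project plus all of its descendants."""
    total = usage.get(project, 0)
    for child in children.get(project, ()):
        total += subtree_usage(child, usage, children)
    return total


def can_allocate(project, requested, limits, usage, children, parent):
    """Walk up the tree: the request must fit at every ancestor."""
    node = project
    while node is not None:
        if subtree_usage(node, usage, children) + requested > limits[node]:
            return False
        node = parent.get(node)
    return True
```

With the tree from the examples below (Prj_0_a with limit 10 and subprojects Prj_1_a, Prj_1_b), allocating 7 items in Prj_1_a leaves room for at most 3 in Prj_1_b.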
There are two ways of having quotas for hierarchical projects:
  • With overbooking
  • Without overbooking
Overbooking in this context means that the sum of the subnodes' limit values can be greater than the parent limit value. For example, it allows more active projects to consume more resources while still respecting the upper limit in the parent project.
Let's take a look at examples for better understanding.
Nested quotas without overbooking
There is a projects tree. Prj_0_a has two subprojects: Prj_1_a and Prj_1_b.

The sum of the subprojects' limits is less than the parent project limit.
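In the no-overbooking model the constraint can be enforced when limits are set, rather than at allocation time: a child's new limit plus its siblings' limits may not exceed the parent's limit. A sketch of that validation (hypothetical helper names):

```python
def validate_child_limit(project, new_limit, limits, parent, children):
    """Check a proposed limit against the no-overbooking rule."""
    p = parent.get(project)
    if p is None:
        return True  # root project: no parent constraint
    # Sum the limits already granted to the other subprojects.
    siblings = sum(limits.get(c, 0) for c in children[p] if c != project)
    return siblings + new_limit <= limits[p]
```

For the tree above (parent limit 10, Prj_1_a already limited to 7), a limit of 3 for Prj_1_b passes, while 4 is rejected.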
Nested quotas with overbooking
There is a projects tree. Prj_0_a has two subprojects: Prj_1_a and Prj_1_b.
A user can allocate 7 items of the resource for Prj_1_a, and then the user will be able to allocate only 3 items of the resource for Prj_1_b, due to the parent project limit of 10.
But if the usage of Prj_1_a is 0, then the user is able to allocate all 10 items for Prj_1_b.
For more information, read the proposed spec for hierarchical quotas in Nova:
https://review.openstack.org/#/c/394422

Currently, each OpenStack service manages quotas by itself. This leads to
differences in what is supported:
  • Cinder supports hierarchical quotas without overbooking and simple per-project quotas
  • Nova supports per-user quotas and simple per-project quotas
  • Neutron supports only simple per-project quotas

Problem #3

Cinder, Nova, and Neutron support (or are going to support) hierarchical quotas in different ways.
Solution for all problems
One possible solution is to store limits and default limits in Keystone. That means:
  1. Store limits and default limits in the Keystone database
  2. Manage limits and default limits via the Keystone API
  3. Include limits in the token (like the service catalog)

Limits in Keystone concept 

Backend 

To store limits in the Keystone database, two tables should be created:

Keystone should have CRUD API for the resources and limits.
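One possible shape for those two tables, sketched with the stdlib sqlite3 module (the schema and column names are illustrative, not the schema from the spec):

```python
import sqlite3

# Sketch: a table of resources with their per-service defaults, and a
# table of per-project limit overrides.
SCHEMA = """
CREATE TABLE resource (
    id TEXT PRIMARY KEY,
    service_id TEXT NOT NULL,         -- which service owns the resource
    name TEXT NOT NULL,               -- e.g. 'instances'
    default_limit INTEGER NOT NULL    -- service-wide default
);
CREATE TABLE project_limit (
    id TEXT PRIMARY KEY,
    resource_id TEXT NOT NULL REFERENCES resource(id),
    project_id TEXT NOT NULL,         -- the consumer scope
    resource_limit INTEGER NOT NULL   -- configured override
);
"""

db = sqlite3.connect(":memory:")
db.executescript(SCHEMA)
db.execute("INSERT INTO resource VALUES ('r1', 'compute', 'instances', 10)")
db.execute("INSERT INTO project_limit VALUES ('l1', 'r1', 'prj', 20)")

# Effective limit: the configured override if present, else the default.
row = db.execute(
    "SELECT COALESCE(pl.resource_limit, r.default_limit) "
    "FROM resource r "
    "LEFT JOIN project_limit pl "
    "  ON pl.resource_id = r.id AND pl.project_id = 'prj' "
    "WHERE r.name = 'instances'"
).fetchone()
```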

Include limits in the Keystone token
Let's take a look, step by step, at how Horizon is able, for example, to show limits (the Project overview page):
  • Horizon gets a token from Keystone:
DEBUG keystone.middleware.auth Token: 
'token': {
    'is_domain': False, 
    'methods': ['token', 'password'], 
    'roles': [{'id': u'0f3168d8bff04eeb9d72c36437667078', 'name': u'admin'}], 
    'is_admin_project': False, 
    'project': {
        'domain': {'id': u'default', 'name': u'Default'}, 
        'id': u'e5507883ac1241648c1d0f83a47861a3', 
        'name': u'Prj_0_a'
    }, 
    'catalog': [...], 
    'expires_at': '2016-12-04T10:22:55.000000Z', 
    'audit_ids': [u'EQw8tGbmRdatSRL6dyxQdg', u'CXQ4mdPUQG-UtWbnKP-8mw'], 
    'issued_at': '2016-12-04T09:22:56.000000Z', 
    'user': {
        'domain': {'id': u'default', 'name': u'Default'}, 
        'password_expires_at': None, 
        'name': u'admin', 
        'id': u'63e911bc286e41b2b554e0cd607db57a'
    }
}
  • Horizon makes a request to Nova. Horizon uses novaclient and puts the token id into the request.
  • Nova gets the request from Horizon:
INFO [admin Prj_0_a] 192.168.1.22 "GET /v2.1/os-simple-tenant-usage/e5507883ac1241648c1d0f83a47861a3?start=2016-12-03T00:00:00&end=2016-12-04T23:59:59 HTTP/1.1" status: 200 len: 352 time: 0.0834420                                                                                                                                                                                    
INFO [admin Prj_0_a] GET /v2.1/limits?reserved=1 HTTP/1.0
  • Nova uses keystonemiddleware to verify the token.
  • Keystonemiddleware uses keystoneclient to validate the token:


  • Keystoneclient sends a GET request to Keystone to get the token data:

  • Keystone populates the token data and sends a response with it to keystoneclient:


  • Keystoneclient parses the token data (with the access.AccessInfo class) and sends it to keystonemiddleware:

  • Keystonemiddleware also processes the token data from the keystoneclient response and creates a response for Nova with the necessary token data in headers:


  • Nova creates a response for Horizon and populates it with limits information from its own database.
  • Horizon gets the response from Nova and processes the limits information to display it.
Now, with this understanding of the workflow, the changes can be described by component:
  • Keystone
  1. Add a 'nolimits' flag to the token request.
  2. Add an include_limits argument to the issue_token() method of the token providers.
  3. Add an include_limits argument to the get_token_data() method of the token provider helpers.
  4. Token provider helpers should include limits data in the token if include_limits=True.
  • Keystoneclient
  1. Add an include_limits argument to the validate() and get_token_data() methods of tokens.TokenManager.
  2. Add a new limits attribute to access.AccessInfo.
  3. Create a new Limits class to process limits data from the token.
  • Keystonemiddleware
  1. Add a new include_limits option to the auth_token config to control whether the X-Limits header is included in the response.
  2. Add the ability to set the X-Limits header and populate it with limits data.
  3. Add an include_limits argument to the verify_token() methods of IdentityServer and the request strategy classes.
As a result of these changes, Nova, for example, will have the following headers in the response from keystonemiddleware:

Accept: application/json
Accept-Encoding: gzip, deflate
Connection: keep-alive
Content-Type: text/plain
Host: 192.168.1.22:8774
User-Agent: python-novaclient
X-Auth-Project-Id: e5507883ac1241648c1d0f83a47861a3
X-Auth-Token: gAAAAABYQ...
X-Domain-Id: None
X-Domain-Name: None
X-Identity-Status: Confirmed
X-Is-Admin-Project: False
X-Limits: {
    "resource": "instances",
    "limit": 10,
    "region_id": "RegionOne",
    "service": "compute"
}
X-Project-Domain-Id: default
X-Project-Domain-Name: Default
X-Project-Id: e5507883ac1241648c1d0f83a47861a3
X-Project-Name: Prj_0_a
X-Role: admin
X-Roles: admin
X-Service-Catalog: [...]
X-Tenant: Prj_0_a
X-Tenant-Id: e5507883ac1241648c1d0f83a47861a3
X-Tenant-Name: Prj_0_a
X-User: admin
X-User-Domain-Id: default
X-User-Domain-Name: Default
X-User-Id: 63e911bc286e41b2b554e0cd607db57a
X-User-Name: admin
And the Keystone token will have a new limits field:
'token': {
    'limits': {
        'resource': 'instances', 
        'limit': 10, 
        'region_id': 'RegionOne', 
        'service': 'compute'
    },
    ...
}
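With limits in the token and the X-Limits header in place, a service could read its limits straight from the request headers instead of querying its own database. A minimal sketch (limits_from_headers is a hypothetical helper; it assumes the keystonemiddleware change above is in place):

```python
import json


def limits_from_headers(headers):
    """Parse the proposed X-Limits header set by keystonemiddleware."""
    raw = headers.get("X-Limits")
    return json.loads(raw) if raw else None


# Headers shaped like the keystonemiddleware response shown above.
headers = {
    "X-Project-Id": "e5507883ac1241648c1d0f83a47861a3",
    "X-Limits": '{"resource": "instances", "limit": 10, '
                '"region_id": "RegionOne", "service": "compute"}',
}
limits = limits_from_headers(headers)
```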
Quotas in Keystone spec: https://review.openstack.org/#/c/363765
Changes to support limits in token:
Keystone https://review.openstack.org/#/c/403588/
Keystonemiddleware https://review.openstack.org/#/c/403586/
Keystoneclient https://review.openstack.org/#/c/403578/