
Defining OpenStack Identity Service Endpoints

Defining Service Endpoints

Once the Identity service has been started, we must define service endpoints so that it can track which OpenStack services are installed and where they are located on the network.

For this, you must register each service in your OpenStack installation. To register a service, run these commands:

  • keystone service-create. Describes the service.
  • keystone endpoint-create. Associates API endpoints with the service. An endpoint in Keystone is simply a URL that can be used to access a service within OpenStack; in other words, it is the point of contact that you, the user, use to reach that service.

Each of the services in our cloud environment runs on a particular URL and port; these are the service endpoint addresses for our services. When a client communicates with an OpenStack environment that runs the OpenStack Identity service, it is this service that returns the endpoint URLs the user can then use to reach the other OpenStack services. To enable this feature, we must define these endpoints. In a cloud environment, we can define multiple regions. Regions can be thought of as different datacenters, which would imply that they have different URLs or IP addresses. Under the OpenStack Identity service, we can define these URL endpoints separately for each region. As we only have a single environment, we will refer to it as RegionOne.
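As a rough illustration, once the endpoints in this recipe have been defined and a user exists, the keystone client can ask the Identity service for the endpoints it advertises for a given service. The username, password, and tenant below are placeholders for credentials in your own environment, and the catalog subcommand is assumed to be available in your version of python-keystoneclient:

# Ask the Identity service which compute endpoints it advertises to an
# authenticated user (credentials here are placeholder values)
keystone --os-username admin --os-password openstack \
  --os-tenant-name cookbook \
  --os-auth-url http://172.16.0.200:5000/v2.0 \
  catalog --service compute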

Getting started

To begin with, ensure that you are logged in to our OpenStack Controller host, where the OpenStack Identity service has been installed, or to an appropriate Ubuntu client that has access to the OpenStack Identity service.

To log on to our OpenStack Controller host that was created using Vagrant, issue the following command:

vagrant ssh controller

If the keystone client tool isn't available, it can be installed on an Ubuntu client, so that we can manage our OpenStack Identity service, by issuing the following commands:

sudo apt-get update

sudo apt-get -y install python-keystoneclient

Ensure that we have our environment set correctly to access our OpenStack environment for administrative purposes:

export ENDPOINT=172.16.0.200
export SERVICE_TOKEN=ADMIN
export SERVICE_ENDPOINT=http://${ENDPOINT}:35357/v2.0
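Before moving on, it is worth confirming that these settings reach the Identity service's admin API. A minimal check, assuming port 35357 on the controller is reachable from this host, is to list the currently registered services (the list will be empty or very short at this stage):

# Quick sanity check: this should return a (possibly empty) table rather
# than an authentication or connection error
keystone service-list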

How to achieve it…

Defining the services and service endpoints in OpenStack Identity service involves running the keystone client command to specify the different services and the URLs that they run from. Although we might not have all services currently running in our environment, we will be configuring them within an OpenStack Identity service for future use. To define endpoints for services in our OpenStack environment, carry out the following steps:

1) We can now define the actual services that OpenStack Identity service needs to know about in our environment:

# OpenStack Compute Nova API Endpoint
keystone service-create \
  --name nova \
  --type compute \
  --description 'OpenStack Compute Service'

# OpenStack Compute EC2 API Endpoint
keystone service-create \
  --name ec2 \
  --type ec2 \
  --description 'EC2 Service'

# Glance Image Service Endpoint
keystone service-create \
  --name glance \
  --type image \
  --description 'OpenStack Image Service'

# Keystone Identity Service Endpoint
keystone service-create \
  --name keystone \
  --type identity \
  --description 'OpenStack Identity Service'

# Cinder Block Storage Endpoint
keystone service-create \
  --name volume \
  --type volume \
  --description 'Volume Service'

2) After finishing this, we can add in the service endpoint URLs that these services run on. To accomplish this, we need the ID that was returned for each of the services created in the previous step. The ID is then used as a parameter when specifying the endpoint URLs for that service.

Note:

OpenStack Identity service can be configured to service requests on three URLs:

  • a public-facing URL (the one that end users use),
  • an administrative URL (a separate, restricted URL that only users with administrative access use),
  • and an internal URL (appropriate when presenting the services on the internal side of a firewall, separate from the public URL).

For the following services, we will configure the public and internal service URLs to be the same, which is appropriate for our environment:

# OpenStack Compute Nova API
NOVA_SERVICE_ID=$(keystone service-list \
  | awk '/ nova / {print $2}')

PUBLIC="http://$ENDPOINT:8774/v2/\$(tenant_id)s"
ADMIN=$PUBLIC
INTERNAL=$PUBLIC

keystone endpoint-create \
  --region RegionOne \
  --service_id $NOVA_SERVICE_ID \
  --publicurl $PUBLIC \
  --adminurl $ADMIN \
  --internalurl $INTERNAL

This command will output the details of the newly created endpoint, including its ID, region, and the three URLs.


3) We continue to define the rest of our service endpoints as shown in the following steps:

# OpenStack Compute EC2 API
EC2_SERVICE_ID=$(keystone service-list \
  | awk '/ ec2 / {print $2}')

PUBLIC="http://$ENDPOINT:8773/services/Cloud"
ADMIN="http://$ENDPOINT:8773/services/Admin"
INTERNAL=$PUBLIC

keystone endpoint-create \
  --region RegionOne \
  --service_id $EC2_SERVICE_ID \
  --publicurl $PUBLIC \
  --adminurl $ADMIN \
  --internalurl $INTERNAL

# Glance Image Service
GLANCE_SERVICE_ID=$(keystone service-list \
  | awk '/ glance / {print $2}')

PUBLIC="http://$ENDPOINT:9292/v1"
ADMIN=$PUBLIC
INTERNAL=$PUBLIC

keystone endpoint-create \
  --region RegionOne \
  --service_id $GLANCE_SERVICE_ID \
  --publicurl $PUBLIC \
  --adminurl $ADMIN \
  --internalurl $INTERNAL

# Keystone OpenStack Identity Service
KEYSTONE_SERVICE_ID=$(keystone service-list \
  | awk '/ keystone / {print $2}')

PUBLIC="http://$ENDPOINT:5000/v2.0"
ADMIN="http://$ENDPOINT:35357/v2.0"
INTERNAL=$PUBLIC

keystone endpoint-create \
  --region RegionOne \
  --service_id $KEYSTONE_SERVICE_ID \
  --publicurl $PUBLIC \
  --adminurl $ADMIN \
  --internalurl $INTERNAL

# Cinder Block Storage Service
CINDER_SERVICE_ID=$(keystone service-list \
  | awk '/ volume / {print $2}')

PUBLIC="http://$ENDPOINT:8776/v1/%(tenant_id)s"
ADMIN=$PUBLIC
INTERNAL=$PUBLIC

keystone endpoint-create \
  --region RegionOne \
  --service_id $CINDER_SERVICE_ID \
  --publicurl $PUBLIC \
  --adminurl $ADMIN \
  --internalurl $INTERNAL
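With the five services and their endpoints registered, a quick sanity check is to ask the Identity service what it now knows about. This is only a verification sketch and relies on the same SERVICE_TOKEN and SERVICE_ENDPOINT variables exported earlier:

# List the registered services; nova, ec2, glance, keystone, and volume
# should all appear
keystone service-list

# List the endpoints; each service should show public, admin, and internal
# URLs in RegionOne
keystone endpoint-list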

How it works…

Configuring the services and endpoints within OpenStack Identity service is done with the keystone client command.

We first add the service definitions, by using the keystone client and the service-create option with the following syntax:

keystone service-create \
  --name service_name \
  --type service_type \
  --description 'description'

service_name is an arbitrary name or label that identifies our service of a particular type. We refer to this name when defining the endpoint, in order to fetch the ID of the service.

The type option describes the kind of service being registered; the values used in this recipe are compute, ec2, image, identity, and volume. Note that we haven't configured the OpenStack Object Storage service (type object-store) at this stage, as it is covered in later recipes in the post.

The description field is again an arbitrary field describing the service.

Once we have added our service definitions, we can tell the OpenStack Identity service where those services run by defining the endpoints, using the keystone client and the endpoint-create option with the following syntax:

keystone endpoint-create \
  --region region_name \
  --service_id service_id \
  --publicurl public_url \
  --adminurl admin_url \
  --internalurl internal_url

Here, service_id is the ID of the service that was created in the first step. The list of our services and their IDs can be obtained by running the following command:

keystone service-list

As OpenStack is designed for global deployments, a region defines a physical datacenter or a geographical area comprising multiple connected datacenters. For our purposes, we define just a single region: RegionOne. This is an arbitrary name that we can reference when specifying what runs in which datacenter or area, and we carry it through to when we configure our clients for use with these regions.
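As a small sketch of how the region name carries through to the client side, most OpenStack command-line clients honour an OS_REGION_NAME environment variable that selects which region's endpoints to use; the value below simply matches the single region defined here:

# Tell the OpenStack clients which region's endpoints to use
export OS_REGION_NAME=RegionOne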

All of our services can be configured to run on three different URLs, as follows, depending on how we want to configure our OpenStack cloud environment:

  • The public_url parameter is the URL that end users connect to. In a public cloud environment, this would be a public URL that resolves to a public IP address (see the example after this list).
  • The admin_url parameter is a restricted address for conducting administration. In a public deployment, you would keep this separate from the public_url by presenting the service you are configuring on a different, restricted URL. Some services use a different URL for their admin API, and this attribute is where it is configured.
  • The internal_url parameter would be the IP address or URL that exists only within the private local area network. This allows you to connect to services internally without crossing a public IP address space, which could incur data charges for traversing the Internet. It is also potentially more secure and less complex to do so.
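To make the distinction concrete, an end user authenticates against the public Identity URL on port 5000, while the administrative exports earlier in this recipe point at port 35357. The credentials below are placeholders for a user and tenant that exist in your environment; this is only a sketch of which URL each audience talks to:

# End user: request a token from the public Identity URL (port 5000);
# the username, password, and tenant are placeholder values
keystone --os-username demo --os-password secret \
  --os-tenant-name cookbook \
  --os-auth-url http://172.16.0.200:5000/v2.0 \
  token-get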

Tip

Once the initial keystone database has been set up, by running the initial keystone-manage db_sync command on the OpenStack Identity service server, administration can be carried out remotely using the keystone client.
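For example, the same admin-token style of access used in this recipe works from a remote Ubuntu client that has python-keystoneclient installed; the address and token below are the values assumed throughout this environment:

# On a remote management host, point the keystone client at the
# controller's admin endpoint and reuse the service token
export SERVICE_TOKEN=ADMIN
export SERVICE_ENDPOINT=http://172.16.0.200:35357/v2.0

# Administration now works remotely, for example listing users
keystone user-list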
