Published: Thu 23 May 2024
Updated: Thu 23 May 2024
By Jack Hodgkiss
In Deployment.
tags: kayobe openstack training
Introduction
OpenStack is an open-source cloud platform composed of more than two dozen components, each specialising in its own area such as Keystone for authentication, Nova handling compute services, or Cinder for block storage.
Fortunately, by utilising Kayobe, which is built upon Kolla-Ansible, we can configure and deploy these services with relative ease.
At StackHPC, there has been a desire to enable the easy and reproducible deployment of OpenStack in a multinode configuration for testing and development.
A multinode configuration of OpenStack is more complex than something such as an all-in-one Kayobe environment.
This is due to there being multiple controllers, which brings with it concepts such as high availability of services and databases.
The motivation for such a capability is that it offers an accessible way of testing new containers and releases, validating configuration changes before deployment to production, and providing an environment for anyone to learn how to deploy, configure, and use OpenStack risk-free.
Overview
The ability to deploy an OpenStack multinode environment is built on top of Terraform to deploy all nodes, Ansible playbooks to configure the hosts, and a bespoke Kayobe config environment.
These components work together to ensure that an engineer will have an environment suitable for their needs, provided underlying compute resources are available.
These resources are provided by an existing OpenStack deployment, using either baremetal instances or virtualised compute.
The process of deploying a multinode environment starts with the Terraform configuration, which is designed to deploy the required number of instances in an appropriate configuration.
For example, most nodes will only have a single root volume whereas the Ceph storage nodes have additional volumes attached to act as OSDs.
A typical multinode configuration generated by this Terraform is composed of ten separate instances, which breaks down as follows:
3 Controllers
3 Storage (OSDs, Mons, Mgrs)
2 Compute
1 Seed
1 Ansible Control Host
At this point small adjustments can be made to the configuration of the instances that will be deployed.
This is done by providing a terraform.tfvars file containing configuration such as the OpenStack network to use, the instance flavors, and disk sizes. A typical terraform.tfvars file looks as follows:
```
prefix = "jph"
ansible_control_vm_flavor = "general.v1.small"
ansible_control_vm_name = "ansible-control"
ansible_control_disk_size = 100
seed_vm_flavor = "general.v1.small"
seed_disk_size = 100
multinode_flavor = "general.v1.medium"
multinode_image = "Ubuntu-22.04-lvm"
multinode_keypair = "id_multinode"
multinode_vm_network = "ipv4-vlan"
multinode_vm_subnet = "ipv4-vlan"
compute_count = "2"
controller_count = "3"
compute_disk_size = 100
controller_disk_size = 100
ssh_public_key = "~/.ssh/smslab.pub"
ssh_user = "ubuntu"
storage_count = "3"
storage_flavor = "general.v1.small"
storage_disk_size = 100
```
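With the variables file populated, bringing up the instances follows the standard Terraform workflow. The sketch below assumes you are working in the directory containing the multinode Terraform configuration:

```shell
# Fetch the OpenStack provider and initialise state (first run only)
terraform init
# Review the instances and volumes that will be created
terraform plan
# Create them; terraform.tfvars is picked up automatically
terraform apply
```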
Once Terraform has been applied, the next step is to configure the instances using Ansible playbooks. These playbooks perform some basic adjustments, such as populating /etc/hosts for the public and internal APIs and resizing the logical volumes of the seed node. The playbooks are vital in ensuring that all nodes can be reached and that the control host is equipped with everything it requires to proceed with deployment: the private key for SSH between the various nodes, plus the inventory containing each host deployed by Terraform and its IP address.
With the previous step complete, all that remains is to SSH into the control host and run ./deploy-openstack.sh.
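In practice this final step amounts to something like the following (the address placeholder is hypothetical; the Ansible control host's IP comes from the Terraform output):

```shell
# SSH to the Ansible control host created by Terraform
ssh ubuntu@<ansible-control-host-ip>
# Kick off the full deployment; expect this to take some time
./deploy-openstack.sh
```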
This shell script will then proceed to deploy OpenStack across the various nodes based upon the custom environment within StackHPC Kayobe Config.
The script will:
Configure various aspects of the hosts (LVM, users & groups, network interfaces…)
Deploy a Ceph cluster using Cephadm
Deploy HashiCorp Vault as a certificate authority for internal and backend TLS
Deploy OpenStack services using kayobe overcloud service deploy
Configure the new OpenStack deployment using OpenStack Configuration based upon the multinode-environment
Run Tempest against the newly deployed and configured OpenStack environment
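The Kayobe-driven steps above correspond roughly to the following commands. This is an illustrative outline only; the actual script in StackHPC Kayobe Config wraps these with environment set-up and the additional Ceph, Vault, and Tempest stages:

```shell
# Configure LVM, users & groups, and network interfaces on all hosts
kayobe overcloud host configure
# Deploy the OpenStack service containers across the overcloud
kayobe overcloud service deploy
```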
During the deployment of the multinode environment, the hosts are configured with the goal of replicating real world configurations.
However, this is not always possible due to limitations and complexity that accompany such a goal.
One example would be how the nodes within a multinode environment are connected together through the use of a VXLAN overlay.
The VXLAN overlay was chosen for its ability to create a virtual layer 2 network on top of existing layer 3 networks without changes to underlying infrastructure.
This offers the ability to create a portable and flexible network configuration for the multinode environment that can be deployed within any OpenStack cloud.
There are two things to be mindful of. The first is the additional overhead imposed by VXLAN encapsulation, which means a 50-byte reduction in MTU (Maximum Transmission Unit) for each interface inside the overlay. The second is ensuring that, if multiple multinode environments are operating within the same cloud, their VXLAN Network Identifiers (VNIs) do not match, to avoid collisions. This should rarely be a problem, as a VNI is a 24-bit field whereas VLAN IDs are limited to 12 bits, i.e. 16,777,216 vs 4,096 possible IDs.
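The arithmetic behind both caveats is straightforward; the sketch below assumes a standard 1500-byte physical MTU:

```shell
#!/bin/sh
# VXLAN encapsulation adds roughly 50 bytes of headers, so interfaces
# inside the overlay need a correspondingly smaller MTU.
physical_mtu=1500
vxlan_overhead=50
overlay_mtu=$((physical_mtu - vxlan_overhead))
echo "overlay MTU: $overlay_mtu"   # overlay MTU: 1450

# The VNI is a 24-bit field, giving a far larger ID space than VLANs.
vni_space=$((1 << 24))             # 16777216 possible VNIs
vlan_space=$((1 << 12))            # 4096 possible VLAN IDs
echo "VNIs: $vni_space, VLAN IDs: $vlan_space"
```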
Motivations and Benefits
Developing the ability to quickly and easily deploy multinode environments serves several key purposes:
Training and Development : Providing newcomers with dedicated training environments helps them understand OpenStack's complexity and gain hands-on experience without impacting production systems. Similarly, developers benefit from reliable environments for coding, testing, and debugging, accelerating the development cycle.
Testing and Risk Mitigation : Multinode deployments facilitate realistic testing scenarios, enabling organisations to simulate upgrade paths, test compatibility with new versions, and identify potential issues before applying changes in production. This approach also reduces the risk of disruptions to live systems.
Cloud Environment Flexibility : Organisations without dedicated staging environments can deploy, validate, and iterate on changes in a controlled environment, reducing the risk of disruptions to live systems. Additionally, multinode deployments can accurately replicate production-like environments, providing trainees with a realistic learning experience.
Challenges and Disadvantages
There are some challenges and disadvantages to the multinode environment:
Resource Intensive : as a typical multinode environment can require up to ten VMs capable of performing the various assigned tasks such as controllers, storage or compute, the resources required for running such a deployment could become costly from a hardware perspective (and by extension a financial one).
Not Always Fit for Purpose : multinode environments are not always the best way to test new features or learn OpenStack, and relying on them for every task could be needlessly expensive and time-consuming.
Time to Deploy : the time taken to deploy a multinode environment can vary due to many different factors, such as the number of nodes, network congestion, and contention for resources with other users. This delays work that requires access to a multinode environment.
Conclusion
This multinode deployment capability, powered by Terraform and Kayobe, makes OpenStack training, development, and testing activities easy to perform.
By providing realistic environments for learning, development, and experimentation, this solution allows engineers to use the full potential of OpenStack while minimising risk and maximising efficiency.
Whether it is training new starters, debugging code, testing upgrades, or simulating production environments, the multinode environment acts as a valuable tool.
Get in touch
If you would like to get in touch we would love to hear from you. Reach out to us via Twitter, LinkedIn or directly via our contact page.