How to Create a Kubernetes Cluster Using EO-Lab OpenStack Magnum

In this tutorial, you will start with an empty Horizon screen and end up running a full Kubernetes cluster.

What We Are Going To Cover

  • Creating a new Kubernetes cluster using one of the default cluster templates

  • Visual interpretation of created networks and Kubernetes cluster nodes

Prerequisites

No. 1 Hosting

You need an EO-Lab hosting account with access to the Horizon interface: https://cloud.fra1-1.cloudferro.com/auth/login/?next=/.

The resources that you create and use will be charged to your account wallet. Check your account statistics at https://tenant-manager.eo-lab.org/login and, if you are not going to use the cluster any more, remove it altogether to avoid unnecessary costs.

A Magnum cluster is bound to the user who created it through an impersonation token. If that user is removed from the project, the cluster loses authentication to the OpenStack API and becomes non-operational. A typical scenario would be for the tenant manager to create user accounts and let them create Kubernetes clusters; later on, once the cluster is operational, the user would be removed from the project. The cluster would still be present, but anything requiring API access would fail: say, new clusters could not be created, persistent volume claims would become dysfunctional, and so on.

Therefore, a good practice when creating new Kubernetes clusters is to use a service account dedicated to the Magnum cluster. In essence, devote one account to one Kubernetes cluster, nothing more and nothing less.

No. 2 Private and public keys

An SSH key pair created in the OpenStack dashboard is required. To create it, follow the article How to Create Key Pair in OpenStack Dashboard on EO-Lab.

The key pair created in that article is called “sshkey”. You will use it as one of the parameters for creation of the Kubernetes cluster.
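If you prefer working from a terminal, the key pair can also be checked or created with the OpenStack CLI. The following is a minimal sketch, assuming the openstack client is installed and your cloud credentials (RC file) are already sourced; the public key path is only an example:

    # Check whether a key pair called "sshkey" already exists
    openstack keypair list

    # If not, create it from an existing public key (path is an example)
    openstack keypair create --public-key ~/.ssh/id_rsa.pub sshkey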

Step 1 Create New Cluster Screen

Click on Container Infra and then on Clusters.

../_images/clusters_command.png

There are no clusters yet, so click the + Create Cluster button on the right side of the screen.

../_images/create_new_cluster.png

On the left side, in blue, are the main options – screens into which you will enter data for the cluster. The three with asterisks, Details, Size, and Network, are mandatory; you must visit each of them and either enter new values or confirm the offered defaults. When all the values are entered, the Submit button in the lower right corner will become active.

Cluster Name

This is your first cluster, so name it simply Kubernetes.

../_images/cluster_name_filled_in.png

Cluster name cannot contain spaces. Using a name such as XYZ k8s Production will result in an error message, while a name such as XYZ-k8s-Production won’t.

Cluster Template

A cluster template is a blueprint for the base configuration of the cluster; the version number in its name reflects the Kubernetes version used.

Select k8s-stable-1.23.5, the highest version available in the EO-Lab cloud.

../_images/eolab_templates_to_select.png

You immediately see how the cluster template is applied:

../_images/cluster_template_detail2.png
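The same templates can be inspected from the OpenStack CLI. A minimal sketch, assuming the openstack client with the Magnum plugin (python-magnumclient) is installed and credentials are sourced:

    # List all cluster templates available in this cloud
    openstack coe cluster template list

    # Show the details of the template selected above
    openstack coe cluster template show k8s-stable-1.23.5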

Availability Zone

nova is the name of the OpenStack compute module and is the only availability zone offered here.

Keypair

Assuming you have used Prerequisite No. 2, choose sshkey.

../_images/white_keypair_select.png

Addon Software - Enable Access to EO Data

This field is specific to OpenStack clouds operated by the CloudFerro hosting company. EODATA here stands for Earth Observation Data and refers to data gathered from scientific satellites monitoring the Earth.

Checking this field will attach an additional network with access to the downloaded satellite data.

If you are just trying to learn about Kubernetes on OpenStack, leave this option unchecked. And vice versa: if you want to go into production and use satellite data, turn it on.

Note

There is a cluster template label called eodata_access_enabled=true which – if set – has the same effect of creating a network connected to EODATA.
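For illustration, this is how that label could be passed when creating a cluster from the CLI instead of Horizon. A sketch only; apart from the template and key pair names used in this tutorial, all other options are left at their template defaults:

    # Sketch: request EODATA access via a label at creation time
    # (--merge-labels, available on newer clients, keeps the template's
    # default labels instead of replacing them)
    openstack coe cluster create \
        --cluster-template k8s-stable-1.23.5 \
        --keypair sshkey \
        --labels eodata_access_enabled=true \
        --merge-labels \
        kubernetes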

This is what the screen looks like when all the data have been entered:

../_images/create_new_cluster_filled_in2.png

Click the Next button in the lower right, or the Size option in the left main menu, to proceed to the next step of defining the Kubernetes cluster.

Step 2 Define Master and Worker Nodes

In general terms, master nodes are used to host the internal infrastructure of the cluster, while the worker nodes are used to host the K8s applications.

This is how this window looks before entering the data:

../_images/cluster_size_new.png

If there are any fields with default values, such as Flavor of Master Nodes and Flavor of Worker Nodes, these values were predefined in the cluster template.

Number of Master Nodes

../_images/number_of_master_nodes_filled_in.png

A Kubernetes cluster has master and worker nodes. In real applications, a typical setup would run 3 master nodes to ensure High Availability of the cluster’s infrastructure. Here, you are creating your first cluster in a new environment, so settle for just 1 master node.

Flavor of Master Nodes

../_images/flavor2_master2.png

Select eo1.large for master node flavor.

Number of Worker Nodes

../_images/worker_nodes_number.png

Enter 3. This is for introductory purposes only; in real life the cluster can consist of many more worker nodes. Cluster sizing guidelines are beyond the scope of this article.

Flavor of Worker Nodes

Again, choose eo1.large.

Auto Scaling

../_images/auto_scaling_filled_in.png

When there is a lot of demand for the workers’ services, the Kubernetes system can scale out to more worker nodes. Our sample setting is a minimum of 2 and a maximum of 4 worker nodes. With this setting, the number of worker nodes will be dynamically adjusted between these values, based on the ongoing load (the number and resource requests of pods running K8s applications on the cluster).

Here is what the screen Size looks like when all the data are entered:

../_images/size_screen_filled.png
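For reference, the same Size choices can be expressed through the OpenStack CLI. A hedged sketch: the flags below belong to the standard openstack coe cluster create command, and the autoscaling label names (auto_scaling_enabled, min_node_count, max_node_count) follow the usual Magnum conventions, which the EO-Lab templates are assumed to support:

    # Sketch: 1 master, 3 workers, eo1.large flavors, autoscaling 2..4 workers
    openstack coe cluster create \
        --cluster-template k8s-stable-1.23.5 \
        --keypair sshkey \
        --master-count 1 \
        --node-count 3 \
        --master-flavor eo1.large \
        --flavor eo1.large \
        --labels auto_scaling_enabled=true,min_node_count=2,max_node_count=4 \
        --merge-labels \
        kubernetes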

To proceed, click the Next button in the lower right or the Network option in the left main menu.

Step 3 Defining Network and LoadBalancer

This is the last of the mandatory screens, and the blue Submit button in the lower right corner is now active. (If it is not, use the Back button to fix values in the previous screens.)

../_images/network_option.png

Enable Load Balancer for Master Nodes

This option will be automatically checked if you selected more than one master node. Using multiple master nodes ensures High Availability of the cluster infrastructure, and in that case a Load Balancer is necessary to distribute the traffic between the masters.

If you selected only one master node, which might be relevant in non-production scenarios such as testing, you still have the option to either add or skip the Load Balancer. Note that using a Load Balancer with one master node is still a relevant option, as it allows you to access the cluster API from outside of the cluster network. Without it, you will need to rely on SSH access to the master.

Create New Network

This box comes turned on, meaning that the system will create a network just for this cluster. Since Kubernetes clusters need subnets for inter-node communication, a related subnetwork will first be created and then used further down the road.

It is strongly recommended to use automatic creation of network when creating a new cluster.

However, turning the checkbox off reveals an option to use an existing network instead.

Use an Existing Network

Using an existing network is a more advanced option. You would first need to create a network dedicated to this cluster in OpenStack, along with the necessary adjustments. Creation of such a custom network is beyond the scope of this article. Note that you should not use the network of another cluster, the project network, or the EODATA network.

If you have an existing network and would like to proceed, choose the network and the subnet from the dropdowns below:

../_images/use_an_existing_network.png

Both fields are marked with an asterisk, meaning you must specify a concrete value in each of the two fields.

Cluster API

The setting “Available on public internet” implies that floating IPs will be assigned to both master and worker nodes. This option is usually redundant and raises security concerns. Unless you have a specific requirement, leave this option on the “private” setting; you can always assign floating IPs to the required nodes later from the “Compute” section in Horizon.
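Assigning a floating IP later can also be done from the CLI. A minimal sketch; the external network name and the node name below are placeholders, so check yours first with openstack network list --external and openstack server list:

    # Allocate a floating IP from the external network (name is an assumption)
    openstack floating ip create external

    # Attach it to a chosen node VM (server name and IP are placeholders)
    openstack server add floating ip kubernetes-xyz-master-0 203.0.113.10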

Ingress Controller

Use of ingress is a more advanced feature, related to load balancing the traffic to the Kubernetes applications.

If you are just starting with Kubernetes, you will likely not require this feature immediately, so you can leave this option out.

Step 4 Advanced options

Option Management

../_images/management.png

There is just one option in this window, Auto Healing, with its field Automatically Repair Unhealthy Nodes.

A node is a basic unit of a Kubernetes cluster, and the Kubernetes system software will automatically poll the state of each node; if a node is not ready or not available, the system will replace it with a healthy one – provided, of course, that this field is checked.

If this is your first time forming Kubernetes clusters, auto healing may not be of interest to you. In production, however, auto healing should always be on.
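In Magnum terms, this switch corresponds to the auto_healing_enabled label (a standard Magnum label name, assumed here). You can inspect which labels a template or a cluster actually carries from the CLI:

    # Default labels baked into the template
    openstack coe cluster template show k8s-stable-1.23.5 -c labels

    # Labels in effect on the created cluster
    openstack coe cluster show kubernetes -c labels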

Option Advanced

../_images/advanced_option.png

Option Advanced allows you to enter so-called labels, which are named parameters for the Kubernetes system. Normally, you don’t have to enter anything here.

Labels can change how cluster creation is performed. There is a set of labels, called the Template and Workflow Labels, that the system sets up by default. If this checkbox is left as is, that is, unchecked, the default labels will be used unchanged. That guarantees that the cluster will be formed with all of the essential parameters in order. Even if you add your own labels, as shown in the image above, everything will still function.

If you turn on the field I do want to override Template and Workflow Labels and if you use any of the Template and Workflow Labels by name, they will be set up the way you specified. Use this option very rarely, if at all, and only if you are sure of what you are doing.

Step 5 Forming of the Cluster

Once you click the Submit button, OpenStack will start creating the Kubernetes cluster for you. A message with a green background will appear in the upper right corner of the window, stating that creation of the cluster has started.

Cluster generation usually takes 10 to 15 minutes. The attempt will be automatically abandoned if it takes longer than 60 minutes.
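You can follow the progress from the CLI as well; the status field moves from CREATE_IN_PROGRESS to CREATE_COMPLETE on success. A minimal sketch:

    # Overall list of clusters and their statuses
    openstack coe cluster list

    # Status of this particular cluster, with the reason for its current state
    openstack coe cluster show kubernetes -c status -c status_reason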

If there is any problem with creation of the cluster, the system will signal it in various ways. You may see a message in the upper right corner, with a red background, like this:

../_images/unable_to_create_a_cluster.png

Just repeat the process; in most cases, you will then proceed to the following screen:

../_images/cluster_forming.png

Click on the name of the cluster, Kubernetes, to see what it looks like when everything goes well.

../_images/creation_in_progress2.png

Step 6 Review cluster state

Here is what OpenStack Magnum created for you as the result of filling in the data in those three screens:

  • A new network called Kubernetes, complete with subnet, ready to connect further.

  • New instances – virtual machines that serve as nodes.

  • A new external router.

  • New security groups, and of course

  • A fully functioning Kubernetes cluster on top of all these other elements.
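These resources can also be cross-checked from the CLI; the new items will carry the cluster name. A minimal sketch, assuming credentials are sourced:

    # The cluster's network, router and security groups should appear here
    openstack network list
    openstack router list
    openstack security group list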

You can observe that the number of nodes in the cluster was initially 3, but after a while the cluster auto-scaled down to 2. This is expected: the autoscaler detected that our cluster is still mostly idle in terms of application load.
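To verify the node count from the cluster itself, you can generate a kubeconfig with the Magnum CLI and query the cluster with kubectl. A minimal sketch, assuming kubectl is installed:

    # Write a kubeconfig file named "config" into the current directory
    openstack coe cluster config kubernetes
    export KUBECONFIG=$(pwd)/config

    # The list of nodes should match what Horizon shows
    kubectl get nodes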

There is another way in which we can view our cluster setup and inspect any deviations from the required state. Click on Network in the main menu and then on Network Topology. You will see a real-time graphical representation of the network. As soon as one of the cluster elements is added, it will be shown on screen.

../_images/network_topology_with_labels.png

Also, in Horizon’s “Compute” panel you can see the virtual machines that were created for the master and worker nodes:

../_images/new_instances2.png

Node names start with kubernetes because that is the name of the cluster in lower case.
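A quick CLI cross-check of the node VMs; the filter below matches servers whose names contain the cluster name:

    # List the virtual machines created for the cluster
    openstack server list --name kubernetes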

Resources tied up by one attempt at creating a cluster are not automatically reclaimed when you attempt to create a new cluster. Therefore, several attempts in a row can lead to a stalemate in which no cluster can be formed until all of the tied-up resources are freed.

What To Do Next

You now have a fully operational Kubernetes cluster. You can

  • use ready-made Docker images to automate installation of apps,

  • activate the Kubernetes dashboard and watch the state of the cluster online,

  • access EODATA through a cluster

and so on.

Here are some relevant articles:

Read more about ingress here: Using Kubernetes Ingress on EO-Lab FRA1-1 OpenStack Magnum

Article How To Use Command Line Interface for Kubernetes Clusters On EO-Lab OpenStack Magnum shows how to use the command line interface to create Kubernetes clusters.

To access your newly created cluster from the command line, see article How To Access Kubernetes Cluster Post Deployment Using Kubectl On EO-Lab OpenStack Magnum.

To work with EODATA from a Kubernetes cluster, see article Accessing EODATA from Kubernetes Pods in EO-Lab FRA1-1 Cloud using boto3.