Installing OpenShift 3.11 on AWS
In this post, we will walk through the installation of OpenShift Origin (OKD) 3.11 on AWS infrastructure. This setup is good enough to run most production applications without breaking the bank.
The architecture and costs
To begin with, let’s see what infrastructure we will be provisioning for the cluster. We will start by creating a VPC which will house all the cluster artifacts. This is a standard security practice: only your OKD web console and router will be exposed to the outside world, and everything else will be accessible only within the VPC. The cluster consists of:
1) A dedicated master node which will have the OpenShift API server, controller and console
2) A dedicated infra node which will run the router and any infrastructure-related components (e.g. the metrics server)
3) One or more compute nodes which will run the applications
4) A bastion host which will be used to provision the cluster using Ansible playbooks, and later to access the running cluster for housekeeping.
The router and console each need a DNS entry, which in this example is managed using DNSimple, but this can be adapted to any other DNS provider like Route 53, Cloudflare or DigitalOcean. All of this is codified and managed using a Terraform module.
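As a rough sketch, the Terraform entry point for such a setup might look like the following. The module name, variable names and provider wiring here are assumptions based on this example, not the exact interface of the module used in this post:

```hcl
# Hypothetical Terraform 0.12 entry point; the real module interface may differ.
provider "aws" {
  region = var.region
}

# DNS records for the console and router; swap for your own DNS provider.
provider "dnsimple" {}

module "openshift_aws" {
  source       = "./openshift_aws"
  cluster_name = var.cluster_name
  master_size  = var.master_size
  infra_size   = var.infra_size
  node_sizes   = var.node_sizes
  key_name     = var.key_name
  tld          = var.tld
}
```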
Download code used in this post.
At the time of writing this, OKD 4.x was in preview stage, hence we’re sticking to the latest stable version of 3.11. This is managed using Ansible playbooks and runs Kubernetes 1.11 under the hood.
We will be using t3.large instances for all the VMs, except the bastion host, which will be a t2.small instance. This works out to an approximate cost of 220 USD per month for a 3-node cluster. This cost can be lowered by around 50% by using reserved instances. Figuring out how many reserved instances you might need is both an art and a science. YMMV.
The OS used will be CentOS 7, which is one of the operating systems recommended by Red Hat for installing OpenShift.
Prerequisites
1) Terraform 0.12.x to provision the infrastructure.
2) You need to have a domain name managed via a DNS provider.
3) A pair of SSH keys to use exclusively with the cluster.
4) An AWS key pair for use with Terraform.
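The SSH key pair from step 3 can be generated with ssh-keygen. The file name key is an assumption chosen to match the commands used later in this post:

```shell
# Generate a dedicated RSA key pair for the cluster.
# -N "" creates it without a passphrase; add one if your policy requires it.
ssh-keygen -t rsa -b 4096 -N "" -f key -C "openshift-cluster"

# The contents of key.pub go into the public_key variable in terraform.tfvars.
cat key.pub
```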
Preparation and the Terraform stage
Terraform stores the current state of the infrastructure, which can later be used to scale the cluster by adding or removing compute nodes. The recommended way to preserve this state is to use a Terraform backend. The quick, hacky way is to save the terraform.tfstate file as an artifact after every successful Terraform operation. In this post, we will use the latter method.
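For reference, the recommended backend approach would look something like this (the bucket and key names are placeholders; the bucket must exist before terraform init is run):

```hcl
# Hypothetical S3 backend configuration; note that backend blocks
# cannot reference variables, so values must be hardcoded.
terraform {
  backend "s3" {
    bucket = "my-okd-terraform-state"
    key    = "okd/terraform.tfstate"
    region = "ap-south-1"
  }
}
```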
Let’s prepare Terraform for creating the infrastructure. The terraform.tfvars file is a blueprint of your OKD cluster. Here’s a minimal configuration:
```hcl
public_key   = "ssh-rsa XXXX"
region       = "ap-south-1"
master_size  = "t3.large"
infra_size   = "t3.large"
node_sizes   = ["t3.large"]
domain       = "openshift"
key_name     = "openshift-key"
cluster_name = "my-openshift-cluster"
cluster_id   = "my-openshift-cluster"
tld          = "example.com"
```
The public key is the one from the SSH key pair generated for the cluster.
```shell
$ export AWS_ACCESS_KEY_ID=XXXXX
$ export AWS_SECRET_ACCESS_KEY=abCD123+w4e
$ # This might vary depending upon your DNS provider; this example uses DNSimple.
$ export DNSIMPLE_TOKEN=XXX
$ export DNSIMPLE_ACCOUNT=123
$ terraform init
Initializing modules...
- openshift_aws in openshift_aws

Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "dnsimple" (terraform-providers/dnsimple) 0.3.0...
- Downloading plugin for provider "random" (hashicorp/random) 2.2.1...
- Downloading plugin for provider "aws" (hashicorp/aws) 2.53.0...
- Downloading plugin for provider "local" (hashicorp/local) 1.4.0...
- Downloading plugin for provider "template" (hashicorp/template) 2.1.2...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.aws: version = "~> 2.53"
* provider.dnsimple: version = "~> 0.3"
* provider.local: version = "~> 1.4"
* provider.random: version = "~> 2.2"
* provider.template: version = "~> 2.1"

Terraform has been successfully initialized!
```
The infrastructure does not get created until we “apply” the configuration.
```shell
$ terraform apply
...
Outputs:

admin_password = a%2ad.1nUphRW1p1wnQbvLBQwepQqTuO
bastion_ip_address = 188.8.131.52
master_domain = https://console.aws-okd-1.example.com:8443/
```
terraform apply will prompt for your approval before actually creating the resources.
This step will create the infrastructure and do a few other things:
1) Generate an inventory based on the infrastructure created, which can be consumed by Ansible.
2) Print the bastion instance IP (which we will use to install OKD on the infrastructure), the console URL and the login credentials.
The terraform apply step might need to be run again after about 5 minutes to update the private DNS names and IP addresses of the EC2 instances.
The bastion instance runs Amazon Linux. We will install the official OpenShift Ansible playbooks and the associated tools (Ansible and its dependencies) on this instance and run the playbooks from here.
This quick script can be used to set up the bastion instance.
```shell
set -x
# Elevate privileges, retaining the environment.
sudo -E su
# Install dev tools.
yum install -y "@Development Tools" python2-pip openssl-devel python-devel gcc libffi-devel httpd-tools
```
We set up the playbooks and the prerequisites.
```shell
$ # Get the OKD 3.11 installer.
$ git clone -b release-3.11 https://github.com/openshift/openshift-ansible --depth=1
```
We then install Ansible and its dependencies.
```shell
$ chown -R ec2-user:ec2-user openshift-ansible
$ chmod -R a+w openshift-ansible/inventory
$ cd openshift-ansible
$ pip install -r requirements.txt
```
Before we run the Ansible playbooks, we have to copy over the private key file from the SSH key pair we generated previously. Ansible relies on SSH to run commands and provision machines remotely, rather than the agent-based architecture used by Chef or Puppet.
```shell
$ scp -i key -o IdentitiesOnly=yes key email@example.com:/home/ec2-user/openshift-ansible/key
```
We also need the inventory file generated by the Terraform run. This file tells Ansible where to install OpenShift and carries other information, such as the cluster login credentials, which VM to tag as the master node, etc.
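An OKD 3.11 inventory typically looks something like the following. This is an illustrative sketch, not the exact output of the Terraform module; the host names reuse the ones shown later in this post:

```ini
# Illustrative OKD 3.11 inventory; the real file is generated by Terraform.
[OSEv3:children]
masters
etcd
nodes

[OSEv3:vars]
ansible_ssh_user=centos
ansible_become=true
openshift_deployment_type=origin
openshift_release=v3.11

[masters]
ip-10-0-1-84.ap-south-1.compute.internal

[etcd]
ip-10-0-1-84.ap-south-1.compute.internal

[nodes]
ip-10-0-1-84.ap-south-1.compute.internal openshift_node_group_name='node-config-master'
ip-10-0-1-4.ap-south-1.compute.internal openshift_node_group_name='node-config-infra'
ip-10-0-1-191.ap-south-1.compute.internal openshift_node_group_name='node-config-compute'
```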
```shell
$ scp -i key -o IdentitiesOnly=yes inventory.cfg firstname.lastname@example.org:/home/ec2-user/openshift-ansible/inventory.openshift
```
Finally, we update the ansible.cfg file, specifying the inventory file to use, SSH options, etc.
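A minimal ansible.cfg for this setup might look like the following. The paths and the centos remote user are assumptions based on the CentOS 7 AMIs used here:

```ini
# Illustrative ansible.cfg; adjust paths to match your layout.
[defaults]
inventory = /home/ec2-user/openshift-ansible/inventory.openshift
private_key_file = /home/ec2-user/openshift-ansible/key
remote_user = centos
host_key_checking = False

[privilege_escalation]
become = True
```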
```shell
$ scp -i key -o IdentitiesOnly=yes ansible.cfg email@example.com:/home/ec2-user/openshift-ansible/
```
and copy over a pre-install.yml playbook to the bastion machine.
```shell
$ scp -i key -o IdentitiesOnly=yes pre-install.yml firstname.lastname@example.org:/home/ec2-user/openshift-ansible
```
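The pre-install.yml playbook is not part of openshift-ansible itself. As a sketch of what such a playbook might contain (the package names are assumptions based on the usual OKD 3.11 prerequisites, not the actual playbook used here):

```yaml
# Hypothetical pre-install playbook; the actual one is supplied with
# the Terraform code used in this post.
- hosts: nodes
  become: true
  tasks:
    - name: Install packages required by OKD on each cluster VM
      yum:
        name:
          - docker-1.13.1
          - NetworkManager
        state: present

    - name: Enable and start required services
      service:
        name: "{{ item }}"
        state: started
        enabled: true
      loop:
        - docker
        - NetworkManager
```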
Now, we ssh into the bastion machine to start the installation.
The next step is to run the pre-install playbook, which will set up the dependencies on the cluster VMs.
```shell
$ ansible-playbook pre-install.yml
```
Then, we will do a quick prerequisite check.
```shell
$ ansible-playbook playbooks/prerequisites.yml
```
And ultimately, install the cluster.
```shell
$ ansible-playbook playbooks/deploy_cluster.yml
```
This step takes around 20 minutes.
Post installation steps
First, we load the web console using the URL we configured in the DNS. We use the credentials we got from the terraform run output to login to the web console.
We then SSH into the master node from the bastion machine and then run,
```shell
$ ssh -i key email@example.com
$ oc get nodes
NAME                                        STATUS    ROLES     AGE       VERSION
ip-10-0-1-191.ap-south-1.compute.internal   Ready     compute   4m        v1.11.0+d4cacc0
ip-10-0-1-4.ap-south-1.compute.internal     Ready     infra     4m        v1.11.0+d4cacc0
ip-10-0-1-84.ap-south-1.compute.internal    Ready     master    7m        v1.11.0+d4cacc0
```
Let’s quickly deploy a LAMP application which uses persistent storage.
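As a sketch, a PHP + MySQL application backed by a persistent volume can be deployed from the master node along these lines. The Git URL and resource names are illustrative; mysql-persistent is one of the templates that ships in the openshift namespace on OKD 3.11:

```shell
# Create a project for the demo app.
oc new-project lamp-demo

# Deploy MySQL backed by a PersistentVolumeClaim, using the
# mysql-persistent template shipped with OKD.
oc new-app mysql-persistent -p MYSQL_USER=lamp -p MYSQL_DATABASE=lampdb

# Build and deploy a PHP app from source (illustrative repository URL).
oc new-app php~https://github.com/sclorg/cakephp-ex.git --name=lamp-web

# Expose the app through the router.
oc expose svc/lamp-web
```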
To wind down all the resources used for this cluster, first delete the sample app and its associated artifacts, like the database and the persistent volume claim.
Then, on the shell, run terraform destroy to gracefully remove all the resources created for the cluster.
```shell
$ terraform destroy
```
This will prompt the user to confirm removal of all resources.
OpenShift vs EKS
How does OpenShift compare with the managed Kubernetes offered by AWS?
First off, we have to understand the difference between OpenShift and Kubernetes in general. OpenShift is more of a finished product which uses Kubernetes as the orchestrator. It is a CNCF-certified Kubernetes distribution, which means it is consistent with any other Kubernetes installation. Besides this, OpenShift comes with a lot of batteries included. Some of these are:
➡ An OAuth server with a built-in Docker registry. This is similar to using AWS ECR with EKS, so that access control is tightly bound to the cluster.
➡ A web-based UI/console, which is far more evolved than the default dashboard Kubernetes provides.
➡ Stricter security policies for running containers (which can be overridden, but that isn’t recommended).
➡ A few additional Kubernetes constructs to provide a more complete developer experience, like BuildConfig and a built-in HAProxy router.
There are a few disadvantages, depending on how it affects your context.
➡ OpenShift is highly opinionated, which gives a uniform platform for your team/organization, but sometimes this might prove rigid.
➡ OKD 3.11 uses Kubernetes 1.11, which lacks the newer features of EKS (which runs Kubernetes 1.15 at the time of writing).
➡ There is more community and support around Kubernetes in general than around OpenShift.
You can evaluate and find out what is a good fit for you by downloading this free guide.
ShapeBlock helps you do all this automatically and comes with a lot of other stuff, like readymade templates for various application stacks, cluster scaling, automatic TLS certificates and Github & Gitlab integration. Sign up for free today to try it out on your cloud.