
Elastigroup & Kubernetes Operations (kops)

Using kops to leverage Spot Instances

I am very excited to write this blog post. After several weeks of intensive work, we are pleased to announce an integration with Kubernetes Operations (kops) – the easiest way to get a production-grade Kubernetes cluster up and running.

Starting today, Spotinst is an official ‘cloud provider’ in Kubernetes, and users can now create highly-available and highly-economical Kubernetes clusters (pay less than $0.01 per CPU core hour!) with kops via KOPS_CLOUD_PROVIDER="spotinst" on AWS, Google Cloud, and Microsoft Azure in about 30 seconds.
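
As a quick taste, here is a minimal sketch of what that looks like on AWS (the cluster name, zones, and state-store bucket below are placeholders):

# Tell kops to use Spotinst as the cloud provider
export KOPS_CLOUD_PROVIDER="spotinst"
export KOPS_STATE_STORE="s3://my-kops-state-store"   # placeholder bucket

# Create a cluster spanning two AZs (placeholder name and zones)
kops create cluster \
  --zones us-east-1a,us-east-1b \
  --name mycluster.example.com \
  --yes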

Features

  • Automates the provisioning of Kubernetes clusters
  • Automatic utilization of Amazon EC2 Spot Instances, Google Preemptible VMs and Microsoft Low-priority VMs as k8s worker nodes
  • Deploys Highly Available (HA) Kubernetes Masters
  • Supports upgrading from kube-up
  • Command line autocompletion
  • Community supported!
  • UPDATE: Support for Gossip-based clusters!
    • This makes setting up a Kubernetes cluster with kops DNS-free; the only change you need to make is to add the .k8s.local suffix to your cluster name (see the example below)
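
For example (a sketch; the cluster name is a placeholder):

# No DNS configuration needed – the .k8s.local suffix switches kops to gossip mode
kops create cluster --zones us-east-1a --name mycluster.k8s.local --yes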

No more talking – let’s go straight to work.

Installation

  1. Important! Download the proper kops binary for your operating system: linux_amd64 or darwin_amd64.
  2. Download the proper kubectl binary (see the example below).
  3. Make sure the binary files are executable, then move them into your PATH:
$ chmod +x kops kubectl
$ mv kops /usr/local/bin/kops ; mv kubectl /usr/local/bin/kubectl
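
For example, on linux_amd64 you can fetch kubectl pinned to the cluster version used later in this post:

$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/linux/amd64/kubectl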

Set up your environment

  1. Generate a Spotinst API token
  2. Set up an AWS IAM user
  3. Configure DNS
  4. Test your DNS setup
  5. Create an S3/GS cluster state store (steps 1, 2, and 5 are sketched below)
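
A minimal sketch of steps 1, 2, and 5 from the command line (the bucket name is a placeholder, and the token variable name is an assumption):

# 1. Spotinst API token (generated in the Spotinst console)
export SPOTINST_TOKEN="<your-spotinst-api-token>"

# 2. IAM user for kops (attach the Route53/S3/IAM/VPC policies as well, per the kops docs)
aws iam create-group --group-name kops
aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
aws iam create-user --user-name kops
aws iam add-user-to-group --user-name kops --group-name kops

# 5. S3 bucket for the cluster state store
aws s3api create-bucket --bucket my-kops-state-store --region us-east-1
export KOPS_STATE_STORE="s3://my-kops-state-store"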

Usage

I’ve prepared six shell scripts for your convenience to help you get started within just a few minutes.

  1. 00-env.sh – Sets up environment variables on your local machine (must be modified by you; a sketch of its contents appears below)
  2. 01-create.sh – Creates the k8s cluster
  3. 02-validate.sh – Validates that the cluster is up and running (master & nodes)
  4. 03-dashboard-install.sh – Installs the UI dashboard package; the dashboard will be available at https://master-ip/ui
  5. 04-get-password.sh – Generates an admin password for connecting to the k8s cluster
  6. 05-delete.sh – Deletes the k8s cluster

I’ve packed these files into a tar file; download it here.
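
For reference, here is a sketch of what 00-env.sh might contain (the NODEUP_URL and PROTOKUBE_IMAGE values are taken from the create log below; the rest are placeholders or assumptions):

#!/usr/bin/env bash
# Hypothetical contents of 00-env.sh – adjust to your own account and domain
export KOPS_CLOUD_PROVIDER="spotinst"
export SPOTINST_TOKEN="<your-spotinst-api-token>"   # assumption: variable name
export NAME="amiram.ek8s.com"                       # cluster name
export KOPS_STATE_STORE="s3://<your-state-store>"   # cluster state store

# Spotinst builds of nodeup/protokube (these URLs appear in the create log below)
export NODEUP_URL="http://spotinst-public.s3.amazonaws.com/integrations/kubernetes/kops/v1.7.0/nodeup/linux/amd64/nodeup"
export PROTOKUBE_IMAGE="http://spotinst-public.s3.amazonaws.com/integrations/kubernetes/kops/v1.7.0/protokube/images/protokube.tar.gz"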

Live Example

I’ve configured my DNS as a subdomain hosted zone, amiram.ek8s.com, under my Route53 parent domain ek8s.com.
To do so, I ran the following command:

ID=$(uuidgen) && aws route53 create-hosted-zone --name amiram.ek8s.com --caller-reference $ID | \
    jq .DelegationSet.NameServers

Using this output, I’ve created a file called subdomain.json

$ cat subdomain.json
{
  "Comment": "Create a subdomain NS record in the parent domain",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "amiram.ek8s.com",
        "Type": "NS",
        "TTL": 300,
        "ResourceRecords": [
          {
            "Value": "ns-x.awsdns-16.com"
          },
          {
            "Value": "ns-y.awsdns-53.org"
          },
          {
            "Value": "ns-z.awsdns-35.co.uk"
          },
          {
            "Value": "ns-k.awsdns-39.net"
          }
        ]
      }
    }
  ]
}

Then, I got my PARENT hosted zone ID:

# Note: This example assumes you have jq installed locally.
aws route53 list-hosted-zones | jq '.HostedZones[] | select(.Name=="ek8s.com.") | .Id'

Now, let’s make sure that traffic to *.amiram.ek8s.com will be routed to the correct subdomain hosted zone in Route53 using the following command:

aws route53 change-resource-record-sets \
 --hosted-zone-id <parent-hosted-zone-id> \
 --change-batch file://subdomain.json
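
You can verify the delegation before moving on (this is the "Test your DNS setup" step from the list above):

# Should return the four NS records assigned to the subdomain hosted zone
dig ns amiram.ek8s.com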

Set the environment variables

$ ./00-env.sh

Create the cluster

$ ./01-create.sh
I0816 18:03:22.666556   53583 create_cluster.go:866] Using SSH public key: /Users/amirams/.ssh/id_rsa.pub
I0816 18:03:23.254765   53583 subnets.go:183] Assigned CIDR 172.20.32.0/19 to subnet us-east-1a
I0816 18:03:23.254786   53583 subnets.go:183] Assigned CIDR 172.20.64.0/19 to subnet us-east-1b
W0816 18:03:28.754446   53583 urls.go:66] Using nodeup location from NODEUP_URL env var: "http://spotinst-public.s3.amazonaws.com/integrations/kubernetes/kops/v1.7.0/nodeup/linux/amd64/nodeup"
I0816 18:03:30.375772   53583 executor.go:91] Tasks: 0 done / 63 total; 34 can run
I0816 18:03:32.033606   53583 vfs_castore.go:422] Issuing new certificate: "master"
I0816 18:03:32.305329   53583 vfs_castore.go:422] Issuing new certificate: "kube-scheduler"
I0816 18:03:32.547222   53583 vfs_castore.go:422] Issuing new certificate: "kubecfg"
I0816 18:03:32.577315   53583 vfs_castore.go:422] Issuing new certificate: "kops"
I0816 18:03:32.635891   53583 vfs_castore.go:422] Issuing new certificate: "kubelet"
I0816 18:03:32.794796   53583 vfs_castore.go:422] Issuing new certificate: "kube-controller-manager"
I0816 18:03:33.077381   53583 vfs_castore.go:422] Issuing new certificate: "kube-proxy"
I0816 18:03:35.074153   53583 executor.go:91] Tasks: 34 done / 63 total; 13 can run
I0816 18:03:36.044340   53583 executor.go:91] Tasks: 47 done / 63 total; 16 can run
I0816 18:03:36.954705   53583 apply_cluster.go:679] Adding docker image: http://spotinst-public.s3.amazonaws.com/integrations/kubernetes/release/v1.7.0/bin/linux/amd64/kube-proxy.tar
I0816 18:03:36.990869   53583 apply_cluster.go:679] Adding docker image: http://spotinst-public.s3.amazonaws.com/integrations/kubernetes/release/v1.7.0/bin/linux/amd64/kube-proxy.tar
W0816 18:03:37.035124   53583 urls.go:87] Using protokube location from PROTOKUBE_IMAGE env var: "http://spotinst-public.s3.amazonaws.com/integrations/kubernetes/kops/v1.7.0/protokube/images/protokube.tar.gz"
I0816 18:03:37.115178   53583 apply_cluster.go:679] Adding docker image: http://spotinst-public.s3.amazonaws.com/integrations/kubernetes/release/v1.7.0/bin/linux/amd64/kube-apiserver.tar
I0816 18:03:37.232327   53583 apply_cluster.go:679] Adding docker image: http://spotinst-public.s3.amazonaws.com/integrations/kubernetes/release/v1.7.0/bin/linux/amd64/kube-controller-manager.tar
I0816 18:03:37.311352   53583 apply_cluster.go:679] Adding docker image: http://spotinst-public.s3.amazonaws.com/integrations/kubernetes/release/v1.7.0/bin/linux/amd64/kube-scheduler.tar
I0816 18:03:53.698849   53583 executor.go:91] Tasks: 63 done / 63 total; 0 can run
I0816 18:03:53.699967   53583 dns.go:152] Pre-creating DNS records
I0816 18:03:54.858970   53583 update_cluster.go:247] Exporting kubecfg for cluster
Kops has set your kubectl context to amiram.ek8s.com

Cluster is starting.  It should be ready in a few minutes.

Suggestions:
 * validate cluster: kops validate cluster
 * list nodes: kubectl get nodes --show-labels
 * ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.amiram.ek8s.com
The admin user is specific to Debian. If not using Debian please use the appropriate user based on your OS.
 * read about installing addons: https://github.com/kubernetes/kops/blob/master/docs/addons.md
Wow – in under 33 seconds, we have a highly-available k8s cluster up and running!
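
For reference, 01-create.sh presumably wraps a kops create cluster invocation along these lines (flags inferred from the output above – treat it as a sketch):

kops create cluster \
  --name ${NAME} \
  --master-zones us-east-1a \
  --master-size m4.large \
  --zones us-east-1a,us-east-1b \
  --node-count 2 \
  --yes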

In parallel, we can observe that two Elastigroups have been created: one for the master and one for the k8s nodes.

We can also observe that the k8s nodes are spread across multiple instance types (c3.large, c4.large, m4.large, m3.large) and different Availability Zones (us-east-1a, us-east-1b).

Plus, it costs us less than $0.05 per core hour!

Validate the cluster:

$ ./02-validate.sh
Using cluster from kubectl context: amiram.ek8s.com

Validating cluster amiram.ek8s.com

INSTANCE GROUPS
NAME			ROLE	MACHINETYPE				MIN	MAX	SUBNETS
master-us-east-1a	Master	m4.large				1	1	us-east-1a
nodes			Node	m4.large,m3.large,c4.large,c3.large	2	2	us-east-1a,us-east-1b

NODE STATUS
NAME				ROLE	READY
ip-172-20-49-241.ec2.internal	node	True
ip-172-20-53-69.ec2.internal	master	True
ip-172-20-69-234.ec2.internal	node	True

Your cluster amiram.ek8s.com is ready
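
For reference, 02-validate.sh is presumably just a thin wrapper around the following (an assumption):

$ kops validate cluster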

Install the k8s UI dashboard and get access to the k8s cluster

$ ./03-dashboard-install.sh
serviceaccount "kubernetes-dashboard" created
clusterrolebinding "kubernetes-dashboard" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
$ ./04-get-password.sh
Using cluster from kubectl context: amiram.ek8s.com

KcYyrzzzZZZZzzzzZZZZZoKx9fy
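
Under the hood, 04-get-password.sh likely reads the admin password from the kops secret store – something like this (the exact command is an assumption):

$ kops get secrets kube --type secret -oplaintext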



That’s it. I hope you find it useful.

The next post in the series will cover splitting your k8s cluster across AWS and Google Cloud.

Amiram Shachar
Spotinst, Founder and CEO
