Scaling Kubernetes nodes efficiently has always been difficult. Today we are announcing a new integration with Cluster Autoscaler that allows our customers to scale Kubernetes workloads economically, based on Spot prices and trends in aggregated node utilization.
What is Kubernetes Cluster Autoscaler?
Cluster Autoscaler is an open source project (available on GitHub) that automatically scales clusters up and down so that all scheduled pods have enough resources. If the cluster doesn’t have enough capacity, a new node is added; underutilization is handled as well. If all pods are removed from a node, the node becomes underutilized and will eventually be terminated.
How Cluster Autoscaler works
Cluster Autoscaler periodically checks whether any pods are waiting to be scheduled and whether the cluster has enough capacity for them. If there isn’t enough capacity, a scale-up event is triggered.
Cluster Autoscaler also monitors the utilization of all nodes. If a node has been underutilized for an extended period and is no longer needed, its pods are rescheduled elsewhere and the node is terminated.
The user can set the minimum and maximum allowed number of nodes, so the number of running nodes always stays within that range.
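To see when a scale-up would be triggered, you can look for unschedulable pods yourself. A quick sketch, assuming a configured kubectl context (the grep pattern is the standard scheduler event name):

```shell
# List pods stuck in Pending, i.e. waiting for capacity
kubectl get pods --all-namespaces --field-selector=status.phase=Pending

# Inspect recent events; a pod that cannot be scheduled shows a
# "FailedScheduling" event with a reason such as "Insufficient cpu"
kubectl get events --all-namespaces | grep FailedScheduling
```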
Please note: all replicated pod services should be able to tolerate moving between nodes and brief disruption.
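One way to make that disruption tolerance explicit is a PodDisruptionBudget, which limits how many replicas may be evicted at once while a node is drained. A minimal sketch in the same style as the manifest below; the name, label, and minAvailable value are illustrative:

```yaml
apiVersion: policy/v1beta1   # policy/v1 on Kubernetes >= 1.21
kind: PodDisruptionBudget
metadata:
  name: web-pdb              # hypothetical name
spec:
  minAvailable: 2            # keep at least 2 replicas running during evictions
  selector:
    matchLabels:
      app: web               # hypothetical label; match your Deployment's pods
```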
Spotinst’s Elastigroup integration with Kubernetes allows you to save up to 80% of your compute costs by running heterogeneous Kubernetes minion (worker) nodes across multiple availability zones and instance types, minimizing disruptions in the Spot market.
Preemptive Instance Replacement and Cleanup
Elastigroup ensures cluster availability using a prediction algorithm and dedicated monitoring services. In the case of a node failure, a new node is preemptively launched, and the failing instance is reported to the master and marked ‘unschedulable’. All pods and services are then rescheduled to run on different nodes.
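The ‘unschedulable’ marking and rescheduling described above corresponds to Kubernetes’ standard cordon/drain flow, which you can also run manually. A sketch with an illustrative node name:

```shell
# Mark the node unschedulable so no new pods land on it
kubectl cordon ip-10-0-1-23.ec2.internal

# Evict the node's pods so they are rescheduled on other nodes
kubectl drain ip-10-0-1-23.ec2.internal --ignore-daemonsets --delete-local-data

# The instance can now be terminated safely
```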
We have added Kubernetes Cluster Autoscaler support so you can scale your cluster to meet the high demands of your users and save additional money in the process.
How To Get Started
Autoscaler integration configuration:
- Save the following manifest as a YAML file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-system-config
  namespace: kube-system
data:
  spotinst.token: <SPOTINST_TOKEN> # Generate here: https://goo.gl/EUF6cG
  spotinst.account: <SPOTINST_ACCOUNT_ID> # Optional
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      containers:
        - image: spotinst/kubernetes-cluster-autoscaler:1.0.0
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 300Mi
          command:
            - ./cluster-autoscaler
            - --v=4
            - --stderrthreshold=info
            - --cloud-provider=spotinst
            - --skip-nodes-with-local-storage=false
            - --nodes=1:10:<SPOTINST_GROUP_ID> # e.g. --nodes=1:10:sig-566aceae
          env:
            - name: SPOTINST_TOKEN
              valueFrom:
                configMapKeyRef:
                  name: kube-system-config
                  key: spotinst.token
            - name: SPOTINST_ACCOUNT
              valueFrom:
                configMapKeyRef:
                  name: kube-system-config
                  key: spotinst.account
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt
              readOnly: true
          imagePullPolicy: "Always"
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-certificates.crt"
- Generate a new token for the Spotinst API. Once generated, update the spotinst.token value in the manifest’s ConfigMap.
- Create a new Elastigroup from scratch or import an existing Auto Scaling group. Please ensure your user data is set up properly: it should install, configure, and start the Kubernetes services at instance launch time.
- Copy the Elastigroup ID and update the --nodes value in the manifest accordingly (replacing <SPOTINST_GROUP_ID>).
- Run kubectl apply -f /path/to/cluster-autoscaler.yaml to start the Cluster Autoscaler.
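To confirm the autoscaler is up after applying the manifest, you can check the Deployment’s pod and follow its logs; a quick sketch using the app=cluster-autoscaler label from the manifest:

```shell
# The cluster-autoscaler pod should be Running in kube-system
kubectl get pods -n kube-system -l app=cluster-autoscaler

# Follow the autoscaler's logs to watch its scaling decisions
kubectl logs -n kube-system -l app=cluster-autoscaler -f
```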