Gitlab Kubernetes runner using EKS and spot instances

Sam

September 25, 2020



Gitlab offers a Kubernetes runner integration that lets you create and monitor a Kubernetes cluster from Gitlab by following a few instructions on their site. We wanted to use spot instances for our Kubernetes cluster to help keep costs down, so we did things a bit differently.


Essentially, we set up an EKS cluster, added an auto-scaling node group that uses spot instances, and then connected it to Gitlab and installed their integration.

EKS

AWS has a pretty thorough walkthrough of how to create an EKS cluster with spot instances (see Links below).

If you follow their guide, you will end up with an EKS cluster running on spot instances. Here are some things we discovered while building our cluster that you should consider:

  1. You can only create a nodegroup with spot instances via eksctl, and you can only add nodegroups to an EKS cluster that was created with eksctl (AWS says it intends to change this in the future).
  2. Instead of using the command in the walkthrough, you can tell eksctl to use a config file in which you specify the region, VPC, subnets and SSH keys for your EC2 instances (useful if you want to keep the length of the command down; see the example file below).
    Note: if you do not specify a VPC, eksctl will create its own.
    Then run this command from the directory containing the file: eksctl create cluster -f <your-config-filename>.yaml


(An example eksctl create cluster yaml file)
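Our original file is not reproduced here, but a minimal cluster config along these lines should give you the idea. Everything in it (cluster name, region, VPC and subnet IDs, instance type and key pair name) is a placeholder you would swap for your own values:

    # Illustrative sketch only; all names and IDs are placeholders
    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig

    metadata:
      name: gitlab-runner-cluster     # placeholder cluster name
      region: eu-west-1               # placeholder region

    vpc:
      id: vpc-0123456789abcdef0       # omit this whole block to let eksctl create its own VPC
      subnets:
        public:
          eu-west-1a: { id: subnet-0123456789abcdef0 }
          eu-west-1b: { id: subnet-0123456789abcdef1 }

    nodeGroups:
      - name: ng-initial              # the 'normal' nodegroup we later delete
        instanceType: m5.large
        desiredCapacity: 2
        ssh:
          publicKeyName: my-ec2-keypair   # key pair used to SSH into the instances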


  3. We found it easier to create a ‘normal’ nodegroup first, following the instructions from AWS, then add a spot instance nodegroup to the cluster and delete the first nodegroup (see the example config after this list).
    This way the CloudFormation stack creates IAM roles and security groups for the nodegroups that won't be created if you skip this step.
    Note: in the spot instance config file, make sure to add the SSH key if you want to be able to SSH into your instances.

(example of spot instance nodegroup config file)
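The spot instance file follows the same format. A rough sketch, based on the instancesDistribution settings shown in the AWS walkthrough (instance types, sizes and the key pair name are all placeholders):

    # Illustrative sketch only; values are placeholders
    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig

    metadata:
      name: gitlab-runner-cluster     # must match the existing cluster
      region: eu-west-1

    nodeGroups:
      - name: ng-spot
        minSize: 1
        maxSize: 5
        desiredCapacity: 2
        instancesDistribution:
          instanceTypes: ["m5.large", "m5a.large", "m4.large"]   # several types improves spot availability
          onDemandBaseCapacity: 0
          onDemandPercentageAboveBaseCapacity: 0                 # 100% spot
          spotInstancePools: 3
        labels:
          lifecycle: Ec2Spot
        iam:
          withAddonPolicies:
            autoScaler: true          # lets the cluster autoscaler manage this group
        ssh:
          publicKeyName: my-ec2-keypair   # add this if you want SSH access

Create the nodegroup with: eksctl create nodegroup --config-file=<your-config-filename>.yaml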


If you followed the guide from AWS, you now have an EKS cluster with spot instances that has the node termination handler and the cluster autoscaler installed.


You can delete the 'normal' nodegroup with: eksctl delete nodegroup --cluster=<clustername> --name=<nodegroupname>


Gitlab

Next, we followed Gitlab's instructions for adding an ‘Existing Kubernetes cluster’ to our organisation. After it had joined, which can take some time, we installed the ‘Gitlab Runner’ and, for monitoring, ‘Prometheus’ from the ‘Applications’ tab, and there you have it!


From here, all that's needed is to tell your builds to use the new runner, as in the snippet below.
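If the runner was registered with a tag (the ‘kubernetes’ tag below is just an assumption; use whatever tag your runner actually has), a job in .gitlab-ci.yml can target it like this:

    # Hypothetical job definition; the tag must match your runner's tag
    build:
      stage: build
      image: alpine:latest
      tags:
        - kubernetes
      script:
        - echo "Running on the EKS spot instance runner"

Untagged jobs may also pick up the runner, depending on how it is configured.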

Conclusion

We found the process pretty easy to set up, and using config files means we can keep a record of our setup and rebuild it if we need to. We also connected our cluster to Rancher, which is a great tool for monitoring a Kubernetes cluster.

If you are building Docker images, I suggest having a look at Kaniko, a tool from Google for building images from Dockerfiles inside a Kubernetes cluster. Gitlab has some good documentation on how to implement it.
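As a starting point, a build job along the lines of Gitlab's Kaniko documentation looks roughly like this (the job name and image tag are our own choices):

    # Sketch adapted from Gitlab's Kaniko docs
    build-image:
      stage: build
      image:
        name: gcr.io/kaniko-project/executor:debug
        entrypoint: [""]
      script:
        # write registry credentials so Kaniko can push to the Gitlab registry
        - mkdir -p /kaniko/.docker
        - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
        # build the Dockerfile in the repo root and push it tagged with the commit SHA
        - /kaniko/executor --context "$CI_PROJECT_DIR" --dockerfile "$CI_PROJECT_DIR/Dockerfile" --destination "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"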


Links

AWS spot instance EKS:

https://aws.amazon.com/getting-started/hands-on/amazon-eks-with-spot-instances/


Gitlab Kubernetes instructions:

https://gitlab.com/help/user/project/clusters/add_remove_clusters#add-existing-cluster


Eksctl documentation:

https://eksctl.io/


Gitlab and Kaniko:

https://docs.gitlab.com/ee/ci/docker/using_kaniko.html



Sam

DevOps Engineer for nerd.vision and other teams. When I am not doing that, I like traveling and playing drums.