Running Serverless Deployments with Kubernetes and Jenkins (Part 1/2)

Adrian Lee Xinhan
8 min read · Feb 18, 2020


Serverless functions such as AWS Lambda and Azure Functions have become increasingly popular among developers. The primary reason is that serverless platforms let developers execute code in the language they need without having to maintain their own infrastructure estate.

However, while AWS and Azure are fantastic platforms for running your serverless code, note that you are still tied to the infrastructure you are using. If you have written your application as a Lambda function in AWS and want to port it over to Azure in the future, then depending on how complex your code is, it may be difficult to run your function on an entirely different cloud platform.

What if we could instead run our functions on a serverless platform that is agnostic of the underlying cloud provider?

In part 1 of this article, we will examine the use case of running serverless functions in Kubernetes (running on AWS). I have chosen Kubernetes as my platform because, in the future, if I wish to run my serverless platform on a Kubernetes environment on any other cloud provider, I can do so without the lock-in.

Some pre-requisites before we begin; this assumes you have the following set up before we continue.

  1. You would need a Kubernetes environment. You can use any Kubernetes environment (be it EKS, AKS, PKS, GKE, or even your own bootstrapped environment). For the sake of this article, I will be using Kubernetes on AWS, which I have bootstrapped using kubeadm. I have used the AWS cloud provider and, as such, it is integrated with my AWS Elastic Load Balancer and other AWS services.
  2. You would also need a Jenkins server (you won't need this until part 2).
  3. You would also need a PostgreSQL server, which you can install on Kubernetes (https://github.com/helm/charts/tree/master/stable/postgresql) using the Bitnami image (https://hub.docker.com/r/bitnami/postgresql/), install on a VM, or even run as an RDS service in AWS. I'm leaving the choice open to whichever you prefer; one option is sketched right after this list.
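
For example, installing PostgreSQL through the Helm chart from the first link might look like the sketch below. This is a minimal sketch assuming Helm 2 syntax (matching the tiller setup used later in this article; on Helm 3 the release name becomes a positional argument), and the release name, database name, and password are placeholders of my own choosing.

# Placeholder release/database/password values; adjust to your own setup
helm install --name my-postgres stable/postgresql \
  --set postgresqlDatabase=demo \
  --set postgresqlPassword=mysecretpassword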

For my choice of serverless platform, I am using OpenFaaS as a demonstration. Alex Ellis, the founder of OpenFaaS, has done a great job of explaining how to run OpenFaaS, and I highly encourage you to take a look at his awesome website: https://docs.openfaas.com/

For the TL;DR version, we are now going to deploy OpenFaaS on Kubernetes using Helm charts: https://github.com/openfaas/faas-netes/blob/master/HELM.md.

ubuntu@ip-172-31-18-70:~$ curl -sSLf https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
ubuntu@ip-172-31-18-70:~$ kubectl -n kube-system create sa tiller \
> && kubectl create clusterrolebinding tiller \
> --clusterrole cluster-admin \
> --serviceaccount=kube-system:tiller
ubuntu@ip-172-31-18-70:~$ helm init --skip-refresh --upgrade --service-account tiller
ubuntu@ip-172-31-18-70:~$ helm repo add openfaas https://openfaas.github.io/faas-netes/
ubuntu@ip-172-31-18-70:~$ helm repo update \
> && helm upgrade openfaas --install openfaas/openfaas \
> --namespace openfaas \
> --set functionNamespace=openfaas-fn \
> --set generateBasicAuth=true
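
Before moving on, it can be worth waiting for the gateway deployment to finish rolling out. A quick check, assuming the chart's default deployment name of gateway:

# Blocks until the gateway deployment reports ready
kubectl rollout status -n openfaas deploy/gateway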

Now we will get the pods running in our OpenFaaS namespace:

ubuntu@ip-172-31-18-70:~$ kubectl get pods -n openfaas
NAME                                READY   STATUS    RESTARTS   AGE
alertmanager-644fd768c6-77lcx       1/1     Running   0          25d
basic-auth-plugin-f7c47768c-ztwf5   1/1     Running   0          22d
faas-idler-5d9979ff55-q89ql         1/1     Running   0          25d
gateway-9c746f75d-vvh2r             2/2     Running   0          22d
nats-67565877db-6tdnr               1/1     Running   0          25d
prometheus-787d76647c-5w6hz         1/1     Running   0          22d
queue-worker-db4cf7446-nvctg        1/1     Running   1          26d

We will also check the OpenFaaS services to ensure everything is running fine. I have purposely adjusted the OpenFaaS gateway to run as a NodePort. You can do the same by editing the service and changing its type to NodePort.
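
One way to make that change, assuming the service is named gateway-external as in the listing below, is a one-line patch:

# Switch the externally-facing gateway service to a NodePort
kubectl -n openfaas patch svc gateway-external -p '{"spec": {"type": "NodePort"}}'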

ubuntu@ip-172-31-18-70:~$ kubectl get svc -n openfaas
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
alertmanager        ClusterIP   10.110.6.105     <none>        9093/TCP         26d
basic-auth-plugin   ClusterIP   10.104.3.31      <none>        8080/TCP         26d
gateway             ClusterIP   10.105.119.146   <none>        8080/TCP         26d
gateway-external    NodePort    10.110.191.37    <none>        8080:31112/TCP   26d
nats                ClusterIP   10.107.47.171    <none>        4222/TCP         26d
prometheus          ClusterIP   10.99.243.0      <none>        9090/TCP         26d

Let us also get the password for our OpenFaaS portal by doing:

PASSWORD=$(kubectl -n openfaas get secret basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode) && \
echo "OpenFaaS admin password: $PASSWORD"
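
With the password in hand, you can also log the faas-cli in against the gateway. A small sketch, assuming the default admin username and the NodePort from earlier (replace the placeholder host with your own node's address):

# Authenticate the CLI against the OpenFaaS gateway
faas-cli login --gateway http://xxxxxxxx:31112 --username admin --password $PASSWORD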

So it looks like all is well. Let us browse to our OpenFaaS portal, and you should see the screen below.

Now let's get to the fun and interesting part: developing our serverless functions. An overview of our architecture is shown below.

For this, I am going to run a PostgreSQL database, as explained in the pre-requisites. We have a table called device, whose schema looks like the one below, and we have inserted some data into it.

Schema for the PostgreSQL device table
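
The schema screenshot does not carry over here, so purely as a hypothetical illustration, a device table that the GET function later in this article could query might be created like this (every column name below is my own invention, not taken from the original schema):

# Hypothetical schema only; substitute your actual device table
psql -U postgres -d demo -c "CREATE TABLE device (
    id SERIAL PRIMARY KEY,
    name TEXT,
    location TEXT
);"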

We are going to use Visual Studio Code to develop our serverless functions, and we will start off with a GET function that fetches the device data. To begin, we will create a directory and run:

$ faas-cli new --lang python3 get-device-data

This creates 3 files

get-device-data/handler.py 
get-device-data/requirements.txt
get-device-data.yml

Before we begin writing our code, we will first go to Docker Hub and create a repo. Mine is shown below. Remember your repo name (in my case it's leexha/get-device-data), as we will be putting it into our get-device-data.yml file later.

For now, we will be editing get-device-data/handler.py. Copy and paste the code below, filling in the relevant dbname, user, password, and host of your PostgreSQL server.

It is pretty self-explanatory: we connect to our database, pull out the relevant information, and print it from our handler. I am outputting it as JSON so that my web application can easily consume it.

import requests
import json
import psycopg2

# Connection parameters: fill in your own database details
params = {
    'dbname': 'demo',
    'user': 'postgres',
    'password': 'sssss',
    'host': 'xxx.xxx.xxx.xxx',
    'port': portnumber  # placeholder: your PostgreSQL port (e.g. 5432)
}

def connection():
    # Open a connection to PostgreSQL, returning None on failure
    conn = None
    try:
        conn = psycopg2.connect(**params)
    except Exception as e:
        print("[!] ", e)
    else:
        return conn

def query_db(query, args=(), one=False):
    # Run the query and map each row to a dict keyed by column name
    cur = connection().cursor()
    cur.execute(query, args)
    r = [dict((cur.description[i][0], value)
              for i, value in enumerate(row)) for row in cur.fetchall()]
    cur.connection.close()
    return (r[0] if r else None) if one else r

def handle(req):
    # OpenFaaS entry point: req is the raw request body
    if req == 'getalldevices':
        my_query = query_db("SELECT * FROM device")
        output = json.dumps(my_query, indent=4, sort_keys=True, default=str)
        print(output)
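
Before building the image, you can smoke-test the handler locally, assuming you run this from inside the get-device-data directory, psycopg2 is installed on your machine, and the database is reachable from it:

# Calls the handler directly, outside of OpenFaaS
python3 -c "import handler; handler.handle('getalldevices')"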

In your requirements.txt file, put the items below:

requests
psycopg2

Finally, in your get-device-data.yml template:

version: 1.0
provider:
  name: openfaas
  gateway: http://xxxxxxxx:31112
functions:
  get-device-data:
    lang: python3
    handler: ./get-device-data
    image: leexha/get-device-data:latest

Let us now try to build our image using the command:

faas-cli build -f ./get-device-data.yml

and… wait… we ran into an error:

Collecting aniso8601==1.3.0 (from -r requirements.txt (line 1))
Downloading aniso8601-1.3.0.tar.gz (57kB)
Collecting click==6.7 (from -r requirements.txt (line 2))
Downloading click-6.7-py2.py3-none-any.whl (71kB)
Collecting Flask==0.12.2 (from -r requirements.txt (line 3))
Downloading Flask-0.12.2-py2.py3-none-any.whl (83kB)
Collecting Flask-RESTful==0.3.6 (from -r requirements.txt (line 4))
Downloading Flask_RESTful-0.3.6-py2.py3-none-any.whl
Collecting Flask-SQLAlchemy==2.3.2 (from -r requirements.txt (line 5))
Downloading Flask_SQLAlchemy-2.3.2-py2.py3-none-any.whl
Collecting itsdangerous==0.24 (from -r requirements.txt (line 6))
Downloading itsdangerous-0.24.tar.gz (46kB)
Collecting Jinja2==2.9.6 (from -r requirements.txt (line 7))
Downloading Jinja2-2.9.6-py2.py3-none-any.whl (340kB)
Collecting MarkupSafe==1.0 (from -r requirements.txt (line 8))
Downloading MarkupSafe-1.0.tar.gz
Collecting psycopg2==2.7.3.1 (from -r requirements.txt (line 9))
Downloading psycopg2-2.7.3.1.tar.gz (425kB)
Complete output from command python setup.py egg_info:
running egg_info
creating pip-egg-info/psycopg2.egg-info
writing pip-egg-info/psycopg2.egg-info/PKG-INFO
writing dependency_links to pip-egg-info/psycopg2.egg-info/dependency_links.txt
writing top-level names to pip-egg-info/psycopg2.egg-info/top_level.txt
writing manifest file 'pip-egg-info/psycopg2.egg-info/SOURCES.txt'
Error: pg_config executable not found.

Please add the directory containing pg_config to the PATH
or specify the full executable path with the option:

python setup.py build_ext --pg-config /path/to/pg_config build ...

or with the pg_config option in 'setup.cfg'.

----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-01lf5grh/psycopg2/
ERROR: Service 'api' failed to build: The command '/bin/sh -c pip install -r requirements.txt' returned a non-zero code: 1

So what's happening here? psycopg2 is the library I am using to connect to PostgreSQL easily. In order for it to install, pip needs to build psycopg2 from source, which requires the gcc, musl-dev, and postgresql-dev packages, as well as the pg_config executable.

To fix this, go to template/python3/Dockerfile and paste in the line below:

 RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev

Now let's try to rebuild:

$ faas-cli build -f ./get-device-data.yml
...
Successfully tagged get-device-data:latest
Image: get-device-data built.

Yes!!!! It has been built successfully. Note that this is a hack; the proper way of doing it is to write your own template file, which Alex has detailed on his website.

Now, let's push to Docker Hub:

faas-cli push -f ./get-device-data.yml
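
If the push is rejected with an authentication error, you may simply need to log in to Docker Hub first (assuming your credentials are not already cached):

docker login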

And you should see it in your Docker Hub.

Great. Now we are going to deploy it to OpenFaaS. For this, let's use the command below:

$ faas-cli deploy -f ./get-device-data.yml 
Deploying: get-device-data
No existing service to remove
Deployed.
200 OK
URL: http://xxxxxxx:31112/function/get-device-data

Let us try to invoke it from our OpenFaaS portal.
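
You can also invoke it from the command line, either with curl against the URL from the deploy output (the request body becomes the req argument of our handler) or through faas-cli. Note the -n flag on echo: a trailing newline would break the getalldevices comparison in the handler.

# Invoke the function over HTTP
curl -d 'getalldevices' http://xxxxxxx:31112/function/get-device-data

# Or invoke it through the CLI
echo -n 'getalldevices' | faas-cli invoke get-device-data --gateway http://xxxxxxx:31112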

Yes!!! Success. Now go grab a beer.

Now, a question you may have: what if my serverless functions become unavailable, perhaps due to an AWS outage such as an AZ failure?

No worries. We can simply spin up a new Kubernetes environment on a different cloud infrastructure, such as VMware, Azure, or GCP, and redeploy our serverless functions.

I will see you in part 2, where we will integrate this with Jenkins, a very popular CI/CD tool.
