Author: Rajat Jindal

kops rolling update: pods' unclean exit

As we on-boarded more and more services to our Kubernetes cluster, we gained deeper insight into how things work, or don't.

One of the things we realized while doing rolling updates of our Kubernetes cluster was that pods were not getting enough time to terminate gracefully, and of course our users were not thrilled about it. One of our users was running a Kafka stack on the cluster and complained that his Kafka indexes were getting corrupted because of this.

When he brought this to our notice, we immediately suspected it had something to do with how kops does the rolling update, because graceful termination is a well-known Kubernetes capability.

When we started investigating, we found that kops has a feature flag to turn on draining of nodes. We thought we had found our culprit, but it turns out its default value has been true since this PR was merged.
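
For reference, kops feature flags are toggled through the KOPS_FEATURE_FLAGS environment variable. Assuming the flag in question was the DrainAndValidateRollingUpdate flag of that era (an assumption on my part, not stated in the post), enabling it explicitly would look something like:

export KOPS_FEATURE_FLAGS="+DrainAndValidateRollingUpdate"
kops rolling-update cluster --yes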

So we dug a little deeper, and that's when we found our issue.

kubectl, the CLI for Kubernetes, has a drain command that takes care of deleting the pods on a given node, thus draining it. One of the options exposed by that command is GracefulTerminationInSeconds, which is of type int (whose zero value is 0). The flag is configured with a default value of -1 in that command using Cobra, and per the kubectl documentation the behavior for different values is:

  • -1 use the graceful termination seconds specified by the pod
  • 0 delete the pod immediately
  • > 0 use this as graceful termination seconds

Kubernetes also exposes these commands as exported package functions that can be used as an SDK, and kops happens to make use of that. However, kops did not set any value for the GracefulTerminationInSeconds parameter, and therefore ended up running the drain command with GracefulTerminationInSeconds = 0, i.e. deleting pods immediately.
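
For context, the same setting is exposed on the kubectl command line as the --grace-period flag, which likewise defaults to -1. For illustration (the node name is a placeholder):

# drain a node using each pod's own termination grace period (the default, -1)
kubectl drain your-node-name --ignore-daemonsets --grace-period=-1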

As you can imagine, the fix was as easy as configuring the drain command with the default value of -1.

We opened the PR with this fix here, and currently it is waiting for the four golden words from one of the kubernetes/kops members: lgtm

We will update the post here once the PR is merged.

Update: the PR has been accepted and merged.

External-dns with Kubernetes

One of the things I've done often enough at work is getting a CNAME associated with any new web service I am deploying. The process was straightforward:

1. decide on a CNAME
2. open a ticket for the team that manages LAB DNS
3. someone from the team checks the ticket and creates a new DNS entry

This all looks reasonable until you use external-dns, a Kubernetes incubator project, to start managing your domains. When deploying a load balancer or ingress, just add an annotation and external-dns will update your DNS entry automatically, in the DNS provider of your choice!

---
apiVersion: v1
kind: Service
metadata:
  name: your-service
  labels:
    app: your-service
  annotations:
    external-dns.alpha.kubernetes.io/hostname: your-service.your-domain.com
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: your-arn-address-for-cert
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
spec:
  type: LoadBalancer
  ports:
    - port: 8443
      targetPort: 8443
      name: https
  selector:
    app: your-service

Observe the annotation external-dns.alpha.kubernetes.io/hostname.

To get this functionality, you need to run the external-dns pod, whose typical deployment looks like this:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: external-dns
  name: external-dns
spec:
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      containers:
        - name: external-dns
          image: registry.opensource.zalan.do/teapot/external-dns:v0.4.8
          imagePullPolicy: IfNotPresent
          args:
            - --log-level=info
            - --domain-filter=your-domain.com
            - --policy=sync
            - --provider=aws
            - --source=service
            - --source=ingress
            - --registry=txt
            - --txt-owner-id=any-identifier-string
      serviceAccountName: external-dns

  • --domain-filter specifies the parent domain whose subdomains will be managed
  • --source specifies which sources to consider when scraping for annotations
  • --policy specifies whether you want to keep DNS records in sync or only add new records
  • --provider specifies which DNS provider to use
  • --registry=txt and --txt-owner-id=any-identifier-string, if specified, add a TXT record alongside every record external-dns creates, identifying external-dns as the owner of that record
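
With the TXT registry enabled, external-dns writes an ownership record next to each record it manages. On AWS the resulting record set looks roughly like this (the exact format may vary by version):

your-service.your-domain.com  TXT  "heritage=external-dns,external-dns/owner=any-identifier-string"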

Once you start using it, it's addictive. No more remembering those ugly load balancer names or IP addresses. Power to the #Kubernetes community.

Helm charts: identifying the last element of a list in range

If you are reading this, then most likely you are struggling to identify the last element of a list in Helm templates. Read on for the answer.

Helm template:


---
[{{- $root := . -}}
{{- $lastIndex := sub (len $root.Values.config.hosts) 1}}
{{- range $i, $host := $root.Values.config.hosts }}
  {
    "host" : {{ $host }},
    "index" : {{ $i }}
  }{{- if ne $i $lastIndex -}}, {{ end }}
{{- end }}
]

When provided the following values:


---
config:
  hosts: ["host1", "host2"]

results in:


[
  {
    "host" : "host1",
    "index" : "0"
  },
  {
    "host" : "host2",
    "index" : "1"
  }
]

Observe that there is no comma after the second object.
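
As an aside, if your Helm version ships Sprig's dict and Helm's toJson, you can avoid tracking the last index altogether by printing the comma before every element except the first. A sketch under that assumption (not from the original chart; note that index renders as a number here rather than a quoted string):

[{{- range $i, $host := .Values.config.hosts }}{{ if $i }},{{ end }}
  {{ toJson (dict "host" $host "index" $i) }}
{{- end }}
]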

Building a Docker image on a Raspberry Pi 3 with Ubuntu Core 16.04

While trying to build Docker images on my Raspberry Pi 3 running Ubuntu Core 16.04, I kept getting the following error:

error checking context: 'can't stat '/home/rajatjindal83/dockerfiles''.

My Dockerfile was pretty simple, so it was not obvious what could be wrong.

FROM armhf/ubuntu
RUN apt-get update && apt-get -qy install git nano curl wget build-essential

I searched Google but couldn't find anything that would fix this issue. With no tried-and-tested solution available, I started to think from a software engineering perspective about what could be causing it.

Apparently, Ubuntu Core uses snaps for installing packages, and by default snaps cannot access 'stuff' they are not supposed to. By the same principle, Docker (which is just another snap package) did not have access to my home directory.

To fix this, I had to run the following command:

snap connect docker:home core:home
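
To verify that the interface is actually connected, the snap CLI can list a snap's interface connections; on my setup the check was along these lines:

snap interfaces docker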

References:
  • Stack Overflow
  • Snap Interfaces

Once I did this, I was able to successfully build my Docker image:

root@localhost:~/dockerfiles# docker build .
Sending build context to Docker daemon 2.048kB
Step 1/2 : FROM armhf/ubuntu
 ---> fa40ea71de37
Step 2/2 : RUN apt-get update && apt-get -qy install git nano curl wget build-essential
 ---> Using cache
 ---> eef28b5573c8
Successfully built eef28b5573c8
root@localhost:~/dockerfiles#

Write WordPress posts on the go using the WordPress mobile app

If you, like me, spend a lot of time browsing on your mobile phone, you are gonna love this app.

Recently I came across the WordPress app on the App Store. Having used a similar app earlier, I was not very hopeful, but oh boy, was I surprised.

The super easy interface of this app totally blew me away, AND you can use it to manage any of your WordPress sites. That's right, it does not have to be hosted by WordPress.

1. Search for WordPress in the App Store and install it
2. Open the app and tap the login button
3. Tap "Log into your site …"
4. Enter the base URL of your WordPress website
5. Enter your username and password

If successful, it will show your profile, and your site will be added.

If you tap on your site, it will take you to a nice interface where you can manage your posts, pages, media, etc., pretty much everything!

Once you tap on Posts, you can use the dropdown at the top to toggle between published, draft, scheduled, and trashed posts. You can add new posts or edit existing ones. This is pretty amazing.

Once done editing, you can add tags to the post or publish it right from your mobile phone.

Try it out, and let us know what you think about this cool app.

How to use a code syntax highlighter in WordPress

One of the things you absolutely need for a technical blog is some kind of syntax highlighter to make code examples easy to read.

When I started this blog, I found myself missing a syntax highlighter in my very first post.

I started searching for one. My selection criteria were:

  • Has to support multiple languages
  • Should be easy to set up
  • Should let me highlight code with just a CSS class

I tried installing a few WordPress plugins, but none worked as I expected.

So I started searching for a highlighter that did not have to be a WordPress plugin, and I found highlight.js, which looked promising. It worked perfectly when I tried it with a few languages:

  • YAML
  • Dockerfile
  • Go

To embed this in my WordPress installation, I had to modify the header.php file as follows:

<link rel="stylesheet" href="https://rajatjindal.com/highlight-js/styles/default.css">
<script src="https://rajatjindal.com/highlight-js/highlight.pack.js"></script>
<script>hljs.initHighlightingOnLoad();</script>

Once I did this, highlighting code was as easy as:

<pre><code class="yaml">---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: k8s-awesome-service</code></pre>

which will be displayed as follows:

---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: k8s-awesome-service

I do plan to use this extensively for other programming languages as I learn and write more.

I will update this post after a few weeks, once I have used it more.

Kubernetes service type LoadBalancer with AWS

When you deploy a service of type LoadBalancer using Kubernetes on AWS, it automatically creates a load balancer for you.

However, by default it creates the load balancer with TCP ports only.

If you want to use SSL with your application, you need HTTPS instead.

To enable that, you need to make sure that the annotation service.beta.kubernetes.io/aws-load-balancer-ssl-ports contains a comma-separated list of the port numbers (spec.ports[*].port) or port names (spec.ports[*].name) that you want to serve over HTTPS. Your spec file will look something like this:


---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: k8s-awesome-service
  name: k8s-awesome-service
  namespace: default
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:iam::USER_ID:server-certificate/CERTIFICATE_NAME"
spec:
  ports:
  - port: 443
    targetPort: 4000
    name: https
  selector:
    k8s-app: k8s-awesome-service
  type: LoadBalancer
    

Observe how the name of the port is used as the value of the annotation service.beta.kubernetes.io/aws-load-balancer-ssl-ports.
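
Since the annotation also accepts port numbers, an equivalent variant of the spec above would reference the port by its number instead of its name:

  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"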