Tag: kubernetes

kops rolling update pods unclean exit

As we on-boarded more and more services onto our Kubernetes cluster, we kept gaining deeper insight into how things work, or don't.

One of the things we realized while doing rolling updates of our Kubernetes cluster was that pods were not getting enough time to terminate gracefully, and of course our users were not thrilled about it. One of our users was running a Kafka stack on the cluster and complained that his Kafka indexes were getting corrupted because of this.

When he brought this to our notice, we immediately knew it had something to do with how kops does the rolling update, because graceful termination is a well-known Kubernetes capability.

So when we started investigating, we found that kops has a feature flag to turn on draining of nodes. We thought we had found our culprit, but it turns out its default value has been true since this PR was merged.

So we dug a little deeper, and that's when we found our issue.

kubectl, the CLI for Kubernetes, has a drain command that takes care of deleting the pods on a given node, thus draining it. One of the parameters exposed by that command is GracefulTerminationInSeconds, which is of type int (whose zero value in Go is 0). The command configures that flag with a default value of -1 using Cobra, and as per the kubectl documentation, the behavior for the different values is:

  • -1: use the graceful termination seconds specified by the pod
  • 0: delete the pod immediately
  • > 0: use this value as the graceful termination seconds

Kubernetes also exposes these commands as exported package functions that can be used as an SDK, and kops happens to make use of that. However, nothing on that path sets a default value for the GracefulTerminationInSeconds parameter, so kops ended up running the drain command with GracefulTerminationInSeconds = 0, i.e. deleting pods immediately.
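The gap is easy to see in a minimal Go sketch; the names below are illustrative, not the actual kubectl/kops identifiers:

package main

import (
    "fmt"

    "github.com/spf13/cobra"
)

// drainOptions mirrors the shape of the problem: a plain struct whose
// zero value for the grace period is 0, i.e. "delete pods immediately".
type drainOptions struct {
    GracefulTerminationInSeconds int
}

func main() {
    // Path 1: the CLI. Cobra registers the flag with a default of -1,
    // so running the command without the flag means "use the grace
    // period specified by the pod".
    cliOpts := drainOptions{}
    cmd := &cobra.Command{
        Use: "drain",
        Run: func(cmd *cobra.Command, args []string) {
            fmt.Println("grace period via CLI:", cliOpts.GracefulTerminationInSeconds) // -1
        },
    }
    cmd.Flags().IntVar(&cliOpts.GracefulTerminationInSeconds, "grace-period", -1,
        "seconds each pod gets to terminate gracefully")
    cmd.Execute()

    // Path 2: calling the exported package function directly, as kops
    // did. Cobra never runs, no default is applied, and the struct's
    // zero value silently means "kill pods immediately".
    sdkOpts := drainOptions{}
    fmt.Println("grace period via SDK:", sdkOpts.GracefulTerminationInSeconds) // 0
}

The CLI path prints -1 because Cobra applies the flag default; the SDK path prints 0 because nothing ever touches the struct field.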

As you can imagine, the fix was as simple as configuring the drain command with a default value of -1.

We opened the PR with this fix here, and currently it's waiting for the four golden words from one of the kubernetes/kops members: lgtm

We will update the post here once the PR is merged.

Update: the PR has been accepted and merged.

External-dns with Kubernetes

One of the things I've done plenty of times at work is getting a CNAME associated with a new web service I'm deploying. The process was straightforward:

1. decide on a CNAME
2. open a ticket for the team that manages LAB DNS
3. someone from that team checks the ticket and creates the new DNS entry

This all looks reasonable until you use external-dns, a Kubernetes incubator project, to start managing your domains. When deploying a load balancer or ingress, just add an annotation and external-dns will update your DNS entry automatically, in the DNS provider of your choice!

---
apiVersion: v1
kind: Service
metadata:
  name: your-service
  labels:
    app: your-service
  annotations:
    external-dns.alpha.kubernetes.io/hostname: your-service.your-domain.com
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: your-arn-address-for-cert
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
spec:
  type: LoadBalancer
  ports:
    - port: 8443
      targetPort: 8443
      name: https
  selector:
    app: your-service

Observe the annotation external-dns.alpha.kubernetes.io/hostname.
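The annotation covers Services; since the deployment below also passes --source=ingress, external-dns can likewise pick hostnames up straight from an Ingress host field, with no annotation needed. A minimal sketch, assuming the extensions/v1beta1 Ingress API that was current at the time:

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: your-service
spec:
  rules:
    - host: your-service.your-domain.com
      http:
        paths:
          - backend:
              serviceName: your-service
              servicePort: 8443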

To get this functionality, you need to run the external-dns pod, whose typical deployment looks like the one below:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: external-dns
  name: external-dns
spec:
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      containers:
        - name: external-dns
          image: registry.opensource.zalan.do/teapot/external-dns:v0.4.8
          imagePullPolicy: IfNotPresent
          args:
            - --log-level=info
            - --domain-filter=your-domain.com
            - --policy=sync
            - --provider=aws
            - --source=service
            - --source=ingress
            - --registry=txt
            - --txt-owner-id=any-identifier-string
      serviceAccountName: external-dns

  • --domain-filter: the parent domain whose subdomains external-dns will manage
  • --source: which sources (services, ingresses) to scrape for annotations
  • --policy: whether to keep DNS records fully in sync or only add new records
  • --provider: which DNS provider to use
  • --registry=txt and --txt-owner-id=any-identifier-string: if specified, external-dns adds a TXT record alongside every record it creates, identifying itself as the owner of that record
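For illustration, the ownership TXT record that external-dns maintains next to each managed record has a value that looks roughly like this (the exact format can vary between versions):

your-service.your-domain.com.  TXT  "heritage=external-dns,external-dns/owner=any-identifier-string"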

Once you start using it, it's addictive. No more remembering those ugly load balancer names or IP addresses. Power to the #Kubernetes community.

Kubernetes service type loadbalancer with AWS

When you deploy a Kubernetes service of type LoadBalancer, Kubernetes automatically creates a load balancer for you in AWS.

However, by default the load balancer is created with TCP ports only. If you want to use SSL with your application, you need HTTPS instead.

To enable that, make sure the annotation service.beta.kubernetes.io/aws-load-balancer-ssl-ports contains a comma-separated list of the port numbers (spec.ports[index].port) or port names (spec.ports[index].name) that you want to serve over HTTPS. Your spec file will look something like this:


---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: k8s-awesome-service
  name: k8s-awesome-service
  namespace: default
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:iam::USER_ID:server-certificate/CERTIFICATE_NAME"
spec:
  ports:
  - port: 443
    targetPort: 4000
    name: https
  selector:
    k8s-app: k8s-awesome-service
  type: LoadBalancer
    

Observe how the name of the port is used as the value of the annotation service.beta.kubernetes.io/aws-load-balancer-ssl-ports.
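As noted above, the annotation also accepts the port number (spec.ports[index].port) instead of the name, so with the spec above this variant would behave the same; note the quotes, since annotation values must be strings:

service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"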