Setting up Linkerd

Posted June 5, 2019 by Thomas Kooi ‐ 5 min read

Installing Linkerd2 into an existing Kubernetes Cluster

During KubeCon EU 2019, I had the opportunity to take a closer look at an impressive tool: Linkerd 2. Its simplicity and potential are quite promising, and this article outlines my journey of setting it up and shares the experiences I had along the way.

To begin with, Linkerd’s Getting Started guide provides a solid foundation for installation and setup. The insights shared in this post are based on my experience with Linkerd version 2.3.1.

Getting the CLI tool

You will want to install the linkerd CLI tool. The project provides an easy-to-use script for doing so:

curl https://run.linkerd.io/install | sh

Alternatively, go to their GitHub Releases page and download the linkerd2-cli binary for your system. I’d recommend the stable version.
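A rough sketch of the manual route (the exact asset name varies per release and platform, so treat this URL as illustrative):

# Download the CLI binary from the GitHub releases page (asset name is illustrative)
curl -sL -o linkerd https://github.com/linkerd/linkerd2/releases/download/stable-2.3.1/linkerd2-cli-stable-2.3.1-linux
chmod +x linkerd
sudo mv linkerd /usr/local/bin/linkerd

# Verify the CLI works
linkerd version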

If you use Brew, you can also run

brew install linkerd

Awesome pre-install check

The Linkerd CLI tool has a check command to validate whether your cluster is ready for installation:

linkerd check --pre

In essence, it performs three core checks:

  1. validating the Kubernetes version,
  2. ensuring that the necessary RBAC permissions are in place within your cluster for creating required resources,
  3. and verifying that there won’t be conflicts with Pod Security Policy.

Installing

During the installation process, enabling the auto-inject feature will likely be beneficial. As far as I understand, this feature will be enabled by default in future releases, and so far I haven’t encountered any issues with it. Note that even with auto-inject enabled, injection is opt-in: you must explicitly annotate a namespace, deployment, or statefulset for its pods to be added to the mesh.

linkerd install --proxy-auto-inject | kubectl apply -f -

Running the linkerd install command will set up the control plane for the mesh in its own namespace (linkerd by default).

Once completed, you can use linkerd check to validate that the installation has succeeded.

linkerd check

One of the issues I had with this is that it’s not yet possible to configure affinity or nodeSelectors for the Linkerd control plane. I deployed Linkerd into a staging cluster as well, but before doing so, I manually modified the spec of each control-plane deployment to include my desired nodeSelectors.
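Since there’s no flag for this yet, the workaround is to render the manifests to a file, edit the deployments by hand, and then apply them. A rough sketch (the node label below is a hypothetical placeholder):

# Render the control-plane manifests instead of applying them directly
linkerd install --proxy-auto-inject > linkerd-control-plane.yaml

# Edit each Deployment in the file to add the desired scheduling constraints,
# e.g. under spec.template.spec:
#   nodeSelector:
#     node-role/infra: "true"   # hypothetical label; use your own

kubectl apply -f linkerd-control-plane.yaml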

Joining the mesh

Joining a service into the mesh is pretty easy; it just needs an annotation. You can enable it at the namespace level (kubectl annotate namespace <namespace> linkerd.io/inject=enabled) or add it to a pod spec, for example at spec.template.metadata.annotations in a deployment with linkerd.io/inject: enabled, as shown below.
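A trimmed deployment manifest with that annotation could look like this (the name, image, and port are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                        # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        linkerd.io/inject: enabled    # opt this workload into the mesh
    spec:
      containers:
      - name: my-app
        image: example/my-app:1.0.0   # placeholder image
        ports:
        - containerPort: 8080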

Note that existing pods need to be recreated before they join the mesh; any newly created pods will start up with a sidecar container, the linkerd-proxy.

After running with it

So here are some things I ran into when installing Linkerd into a couple of staging clusters:

The right interface

Due to how linkerd-proxy works, your process must listen on the loopback interface (127.0.0.1) or on all interfaces (0.0.0.0). If a service binds exclusively to the pod’s private IP, the proxy cannot forward traffic to it and it becomes unreachable. I had to adjust a few services that were bound only to the pod’s IP to address this issue.
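One way to check what address a service is actually bound to (assuming the container image ships a shell and the ss utility; the pod and container names are placeholders):

# Inspect the listening sockets inside the application container
kubectl exec -it my-app-6d4b75cb6d-abcde -c my-app -- ss -tlnp

# 0.0.0.0:<port> or 127.0.0.1:<port> works behind linkerd-proxy;
# a socket bound only to the pod IP (e.g. 10.42.0.12:<port>) will not be reachable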

Health checks

Health checks and livenessProbes keep working with Linkerd installed. However, to prevent them from going through the linkerd-proxy and skewing the metrics, I set host: 127.0.0.1 on the probes. Bear in mind that this may not be applicable or effective in all scenarios. It’s also unclear how Linkerd will handle mTLS and health checks once the option for enforcement becomes available, so bypassing the proxy appeared to be a prudent choice for future compatibility as well.
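A sketch of how that probe adjustment looks on a container spec (the path and port are placeholders, and as noted above this may not fit every setup):

        livenessProbe:
          httpGet:
            path: /healthz        # placeholder health endpoint
            port: 8080            # placeholder container port
            host: 127.0.0.1       # probe via loopback instead of the pod IP
          initialDelaySeconds: 5
          periodSeconds: 10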

Network Policies

If you are running with Network Policies, you will want to configure some rules for Linkerd. Here are the ones I used; they could probably be refined a bit:

---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-linkerd-identity-access
  namespace: example
spec:
  podSelector:
    matchLabels:
      linkerd.io/control-plane-ns: linkerd
  policyTypes:
  - Egress
  egress:
  - ports:
    - port: 8080
    to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          linkerd.io/control-plane-component: identity
          linkerd.io/control-plane-ns: linkerd

---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-linkerd-prometheus-access
  namespace: example
spec:
  podSelector:
    matchLabels:
      linkerd.io/control-plane-ns: linkerd
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - port: 4191
    from:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          linkerd.io/control-plane-component: prometheus
          linkerd.io/control-plane-ns: linkerd
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-linkerd-egress-access
  namespace: example
spec:
  podSelector:
    matchLabels:
      linkerd.io/control-plane-ns: linkerd
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          linkerd-namespace-label: 'true'
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-linkerd-ingress-access
  namespace: example
spec:
  podSelector:
    matchLabels:
      linkerd.io/control-plane-ns: linkerd
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - port: 4143
    - port: 4190
    - port: 4191
    from:
    - namespaceSelector:
        matchLabels:
          linkerd-namespace-label: 'true'
---

Note: label the linkerd namespace with linkerd-namespace-label=true. Feel free to pick your own label name.
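For example, assuming the control plane lives in the default linkerd namespace:

kubectl label namespace linkerd linkerd-namespace-label=true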

These network policies do the following:

  • allow-linkerd-identity-access: This policy allows pods with the label linkerd.io/control-plane-ns: linkerd within the example namespace to make egress connections on port 8080 to pods labeled with linkerd.io/control-plane-component: identity and linkerd.io/control-plane-ns: linkerd in any namespace.
  • allow-linkerd-prometheus-access: This policy permits ingress connections to pods labeled linkerd.io/control-plane-ns: linkerd in the example namespace on port 4191. The sources of these connections can be pods with labels linkerd.io/control-plane-component: prometheus and linkerd.io/control-plane-ns: linkerd from any namespace.
  • allow-linkerd-egress-access: This policy enables pods labeled linkerd.io/control-plane-ns: linkerd in the example namespace to make egress connections to any port on pods in any namespace labeled with linkerd-namespace-label: 'true'.
  • allow-linkerd-ingress-access: This policy allows ingress connections on ports 4143, 4190, and 4191 to pods labeled linkerd.io/control-plane-ns: linkerd in the example namespace. The sources of these connections can be pods in any namespace labeled with linkerd-namespace-label: 'true'.

Impact

Putting anything between your users and backend services will obviously come at some cost: higher CPU usage, added latency, and so on. You will probably want to check out this post about a Linkerd performance benchmark.

Conclusion

In summary, Linkerd 2 looks very promising. Its capabilities, especially around metrics and monitoring, are robust, and the setup process is commendably simple.

When selecting a solution for production use, operational simplicity carries significant weight: it streamlines management, reduces errors, and improves efficiency. Linkerd delivers on this front, making it a strong contender for teams that prioritize ease of use and streamlined operations.