
How It Works: Cluster Log Shipper as a DaemonSet

Kubernetes Logging - This article is part of a series.
Part 1: How It Works: Cluster Log Shipper as a DaemonSet (This Article)

Logs are essential for almost all programs. They provide valuable insights into application behavior, offer troubleshooting clues, and can even be transformed into metrics if needed.

Collecting logs for containers on a Kubernetes Worker Node is not much different from collecting them on a regular VM. This post explains how it’s done.

General log collecting pipeline

Although different log shippers have different functionalities, the concept is the same.

I mostly use fluentbit and Vector, so I will take these as examples.

flowchart LR
    input([Input])
    transform[Transform]
    output([Output])
    input --> transform
    transform --> output

Input

Input (or source) is the phase where you decide where and how you would like to collect logs. It can be syslog, stdin, HTTP, Docker, files, etc. It can even be other instances of fluentbit or Vector if you have a more complex architecture.

One of the most common use cases is reading from files. Modern log shippers know which line to start reading from, handle file rotation, scan for new files periodically, etc.

You can check fluentbit’s Tail Input and Vector’s kubernetes_logs source (which is like file with appended Kubernetes metadata).
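To make this concrete, here is a rough sketch of what a file-reading input looks like in both tools. The tag and paths are only illustrative, and fluentbit’s YAML config format (available in recent versions) is used instead of the classic one; check each project’s docs for the exact options.

# vector.yaml — the source name "k8s_logs" is arbitrary
sources:
  k8s_logs:
    type: kubernetes_logs   # tails /var/log/pods/... and attaches Kubernetes metadata

# fluent-bit.yaml — tail input reading the per-container log files
pipeline:
  inputs:
    - name: tail
      path: /var/log/containers/*.log
      tag: kube.*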

Transform

Transform (or “filter” in fluentbit¹) is the phase where you filter, enrich, or modify logs. This phase is not mandatory: you can have zero or more transform steps; it’s up to you.

When deploying on edge devices, you very likely don’t want to spend computing power on processing logs before sending them. A common pattern is to just ship them and process them afterward.

Check all the interesting things you can do with your log shipper!
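For example, here is a hedged sketch of a transform phase in fluentbit that enriches records with Pod metadata and drops noisy lines; the match patterns and the grep field/pattern are only illustrative.

# fluent-bit.yaml — transform phase, continuing the tail input above
pipeline:
  filters:
    - name: kubernetes       # enrich each record with Pod labels, annotations, etc.
      match: kube.*
    - name: grep             # drop health-check noise; field and pattern are illustrative
      match: kube.*
      exclude: log healthz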

Output

Output is fairly self-explanatory: it is where you send the logs.

You can send to one or more destinations like stdout, CloudWatch Logs, S3, etc.

And just as the input phase can receive logs from other shippers, you can, of course, send logs to other shippers as well.
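As a hedged sketch (the region, log group, and stream prefix below are placeholders), a fluentbit pipeline can fan out to several outputs at once:

# fluent-bit.yaml — output phase
pipeline:
  outputs:
    - name: stdout              # handy while debugging the pipeline
      match: '*'
    - name: cloudwatch_logs     # ship application logs to CloudWatch Logs
      match: kube.*
      region: ap-northeast-1
      log_group_name: /k8s/app-logs
      log_stream_prefix: node-
      auto_create_group: true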

So, how do we collect container logs from…containers?

Before we talk about how to collect container logs of a Pod on a Worker Node, we need to know where logs are stored.

flowchart LR
    output(["Destination"])
    subgraph "Worker Node"
        subgraph "User Pod"
            container
        end
        container_runtime["CRI Container Runtime"]
        log_files["/var/log/pods/..."]
        subgraph "Log Shipper Pod"
            log_shipper_container["Log Shipper Container"]
        end
        container -. "stdout\nstderr" .-> container_runtime
        container_runtime -. write .-> log_files
        log_files -. "mount\n(hostPath)" .-> log_shipper_container
    end
    log_shipper_container -. send .-> output

Logs that containers print to stdout or stderr are written to files by the CRI container runtime².

The logs are stored in /var/log/pods/.... You might have seen log shippers using /var/log/containers, which contains symbolic links to the files in /var/log/pods/....
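In other words, each container’s log file generally follows this naming pattern (the restart count increments every time the container restarts):

/var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container-name>/<restart-count>.log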

$ find /var/log/containers/ -type l -ls
53413099    0 lrwxrwxrwx   1 root     root          100 Apr 29 22:12 /var/log/containers/node-local-dns-wqck9_kube-system_node-cache-a5a5ceca32db68b482fe60d55651c9febe87fd5d38421bc58whatever.log -> /var/log/pods/kube-system_node-local-dns-wqck9_86ac731a-f21e-4204-aa75-whatever/node-cache/0.log
53413100    0 lrwxrwxrwx   1 root     root           96 Apr 29 22:12 /var/log/containers/kube-proxy-64zbn_kube-system_kube-proxy-920cbeebd8986fe9ccad62485fe4e62d23fe15076c617a913f5ewhatever.log -> /var/log/pods/kube-system_kube-proxy-64zbn_ccc12731-5d28-483f-bacc-whatever/kube-proxy/0.log
53413107    0 lrwxrwxrwx   1 root     root          100 Apr 29 22:12 /var/log/containers/aws-node-6vmbk_kube-system_aws-vpc-cni-init-3697d865dd42180b82d3ed46a0e4d194c9e983f701a1126d92f49bwhatever.log -> /var/log/pods/kube-system_aws-node-6vmbk_9d1b9e7f-9547-4a72-aac1-whatever/aws-vpc-cni-init/0.log
53494133    0 lrwxrwxrwx   1 root     root           92 Apr 29 22:12 /var/log/containers/aws-node-6vmbk_kube-system_aws-node-740124e68e5ff82b236565d394be894f9516bf1ad4619264bwhatever.log -> /var/log/pods/kube-system_aws-node-6vmbk_9d1b9e7f-9547-4a72-aac1-whatever/aws-node/0.log
# ...omitted

Now that we know where the logs are, let’s take a look at how fluentbit’s Helm chart makes use of /var/log.

# ...omitted
daemonSetVolumes:
  - name: varlog
    hostPath:
      path: /var/log
  - name: varlibdockercontainers
    hostPath:
      # comment from blog: you won't have this unless you are using Docker as container runtime
      path: /var/lib/docker/containers
# ...omitted

And the Vector one:

volumes:
  # ...omitted
  - name: var-log
    hostPath:
      path: "/var/log/"
# ...omitted

The log shipper container can then use a hostPath volume to mount the folders we found above. It only reads from them, so a best practice here is to set readOnly: true.
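A minimal sketch of the matching container mounts, reusing the volume names from the fluentbit values above (daemonSetVolumeMounts should be the counterpart of daemonSetVolumes in that chart; double-check your chart version):

daemonSetVolumeMounts:
  - name: varlog
    mountPath: /var/log
    readOnly: true
  - name: varlibdockercontainers
    mountPath: /var/lib/docker/containers
    readOnly: true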

From there, the log shipper can start consuming the logs using the input phase described above.

By combining these parts, you can assemble a log collecting pipeline that does exactly what your cluster needs.

Preventing data loss is the top priority

However, our log shipping journey has just begun. Just because your pipeline starts working doesn’t mean it will always work like a charm. One of the things that gives admins headaches is data loss.

If logs are ingested too fast (e.g., during high service traffic), you should use a backpressure mechanism to make sure the shipper itself won’t suddenly grow its memory usage to an unreasonable level.

This is usually done with a disk buffer backed by a hostPath volume (again) in order to “persist” the data on the Worker Node. It lets the log shipper pick up and finish sending the unsent data after a restart (e.g., a crash) or a rolling update (e.g., a new deployment), and it keeps memory usage stable.

You should prevent data loss at all costs. Admins can only live happily ever after when these settings are thoroughly considered (and when machines don’t just break).

Check how your log shipper exposes these buffering settings and tune them for your workload.
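For instance, here is a hedged sketch of a Vector sink with a disk buffer; the sink type, names, and sizes are illustrative, and data_dir must point at a volume that actually survives Pod restarts (e.g., a hostPath mount):

# vector.yaml — sketch only
data_dir: /vector-data-dir        # back this with a hostPath volume so buffered data survives restarts
sinks:
  cloudwatch:
    type: aws_cloudwatch_logs
    inputs: ["k8s_logs"]
    region: ap-northeast-1
    group_name: /k8s/app-logs
    stream_name: "{{ kubernetes.pod_name }}"
    encoding:
      codec: json
    buffer:
      type: disk                  # buffer to disk instead of growing memory without bound
      max_size: 1073741824        # ~1 GiB
      when_full: block            # apply backpressure upstream when the buffer fills up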

How about log rotation? Should I take care of it?

Before we migrated the container runtime from Docker to containerd because of the Dockershim deprecation, I was actually a bit worried, since we used Docker’s config to limit the log size.

The answer is that the kubelet does it for you. Check containerLogMaxSize and containerLogMaxFiles in the Kubelet Configuration (v1beta1) document.

At the time of writing, the defaults are 10Mi for containerLogMaxSize and 5 for containerLogMaxFiles.

You can always change these values by Setting Kubelet parameters via a config file.
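A hedged example of what that looks like in a kubelet config file (the path and values below are illustrative):

# /etc/kubernetes/kubelet-config.yaml — location depends on how your nodes are provisioned
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: 50Mi     # rotate a container's log file once it reaches this size
containerLogMaxFiles: 3       # keep at most this many log files per container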

Can I deploy as a system daemon instead of K8s DaemonSet?

I’ve heard this question before, and of course you can. These are merely “log files on the Worker Node,” as mentioned above.

However, I don’t see many reasons to do it. If you are going to run one shipper per Worker Node anyway, you can use Kubernetes’ native DaemonSet. Besides, it lets the shipper communicate with the API server to “enrich” container logs, and it makes scheduling and updates easier.
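For completeness, here is a bare-bones DaemonSet sketch tying the pieces together; the image, names, and service account are placeholders, and a real chart (like the fluentbit or Vector ones above) handles far more details:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-shipper
  namespace: logging
spec:
  selector:
    matchLabels:
      app: log-shipper
  template:
    metadata:
      labels:
        app: log-shipper
    spec:
      serviceAccountName: log-shipper    # needed if the shipper asks the API server for Pod metadata
      tolerations:
        - operator: Exists               # run on every Worker Node, including tainted ones
      containers:
        - name: shipper
          image: example/log-shipper:latest
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log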

Further readings

Cover: https://unsplash.com/photos/8Zs5H6CnYJo


  1. It of course does more than “filter”. It can enrich or modify data as well. Check filters for more information. ↩︎

  2. See the Container Runtime for more information. ↩︎

