DaemonSet
An API object that ensures a copy of a Pod runs on every node of the cluster (or on a subset of nodes selected with a node selector).
- Daemon Pods are deployed on every node; a node selector can restrict deployment to a subset of the nodes.
- When nodes join or leave the cluster, the DaemonSet automatically adds or removes the Pod on those nodes.
- By default, DaemonSets attempt to run on all nodes that are not tainted to prevent scheduling, such as control plane nodes.
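To run a daemon on tainted control plane nodes as well, a toleration can be added to the Pod template. A minimal sketch, assuming the standard control-plane taint key used by kubeadm clusters:

```yaml
spec:
  template:
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane  # standard kubeadm taint
        operator: Exists
        effect: NoSchedule
```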
Feature
Common use cases
- kube-proxy, an agent used to set up Service forwarding on each node.
- calico, an agent for the networking solution.
DaemonSet controller
- The DaemonSet finds the Pods it manages through its label selector.
- When you add a Node to the cluster, the DaemonSet controller creates a new Pod and associates it with that Node.
- When you remove a Node, the controller deletes the Pod object associated with it.
- If someone deletes a daemon Pod, the controller immediately recreates it.
- If an additional Pod appears, for example, if you create a Pod that matches the label selector in the DaemonSet, the controller immediately deletes it.
The DaemonSet controller watches DaemonSet, Pod, and Node objects.
Use a ReplicaSet directly only for niche cases (e.g., custom controllers, teaching); most users should use Deployments.
Labels the controller adds to each daemon Pod:
- pod-template-generation: the generation of the DaemonSet's Pod template the Pod was created from.
- controller-revision-hash: the hash of the ControllerRevision the Pod belongs to (used during rolling updates).
Scheduling
- By default, a DaemonSet deploys Pods to all nodes that don't have taints that the Pod doesn't tolerate.
- The DaemonSet controller sets a spec.nodeAffinity field on each Pod to pin it to a specific Node; the scheduler then schedules the Pod to that node.
- Use the spec.template.spec.nodeSelector field to restrict deployment to a subset of nodes; for example, a nodeSelector of gpu: cuda deploys the daemon Pod only to nodes labeled gpu: cuda.
Priority
- By default, Pods deployed via a DaemonSet are no more important than Pods deployed via Deployments or StatefulSets.
- Priority is represented by the PriorityClass object.
The priorityClassName field specifies which priority class a Pod belongs to:
spec:
  template:
    spec:
      priorityClassName: system-node-critical
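Besides the built-in classes such as system-node-critical, a custom PriorityClass can be defined. A sketch; the name agent-critical and the value are made up for illustration:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: agent-critical   # hypothetical name
value: 1000000           # higher value = higher scheduling priority
globalDefault: false
description: "Priority class for node agent DaemonSets."
```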
| Command | Description |
|---|---|
| kubectl explain ds | See field docs (spec, updateStrategy, etc.). |
| kubectl get ds -A | List all DaemonSets across namespaces. |
| kubectl get ds | List DaemonSets in the current namespace. |
| kubectl get ds ds_name -n ns_name | Show a DaemonSet. |
| kubectl get ds ds_name -n ns_name -o wide | Show a DaemonSet with nodes, images, and selectors. |
| kubectl describe ds ds_name -n ns_name | Detailed spec and events (great for scheduling/taint issues). |
| kubectl apply -f ds.yaml | Create/update from a manifest file. |
| kubectl edit ds ds_name -n ns_name | Edit the DaemonSet live (opens your editor). |
| kubectl delete ds ds_name -n ns_name | Delete the DaemonSet (Pods will be removed). |
| Command | Description |
|---|---|
| kubectl label ds/ds_name key=value -n ns_name | Add or change labels. |
| kubectl set image ds/ds_name con_name=image:tag -n ns_name | Update a container image (triggers a rolling update). |
| kubectl set env ds/ds_name KEY=VALUE --containers=<ctr> -n ns_name | Add/change env vars. |
| kubectl annotate ds/ds_name key=value -n ns_name | Add or change annotations. |
| kubectl patch ds/ds_name -n ns_name -p '<json/strategic merge>' | Quick, targeted spec changes. |
| kubectl rollout status ds/ds_name -n ns_name | Watch rollout progress. |
| kubectl rollout history ds/ds_name -n ns_name | See past revisions. |
| kubectl rollout undo ds/ds_name -n ns_name [--to-revision=N] | Roll back to a prior revision. |
| kubectl rollout restart ds/ds_name -n ns_name | Restart Pods managed by the DaemonSet. |
| kubectl get pods -l <selector> -n ns_name -o wide | List the DaemonSet's Pods (one per node). |
| kubectl logs -l <selector> -n ns_name --all-containers | Stream logs from all matching Pods. |
| kubectl drain <node> --ignore-daemonsets | Drain a node without evicting DaemonSet Pods (node maintenance). |
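The rollout commands above act on the DaemonSet's update strategy. A sketch of an explicit RollingUpdate configuration; the maxUnavailable value is just an example:

```yaml
spec:
  updateStrategy:
    type: RollingUpdate      # default; the other option is OnDelete
    rollingUpdate:
      maxUnavailable: 1      # how many nodes may lack a ready Pod during the update
```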
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-daemon
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: monitoring-agent
        image: monitoring-agent
# list all ds
kubectl get ds -A
# NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
# kube-system kube-proxy 1 1 1 1 1 kubernetes.io/os=linux 16d
# get a specific ds in the kube-system ns
kubectl get ds kube-proxy -n kube-system
# NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
# kube-proxy 1 1 1 1 1 kubernetes.io/os=linux 16d
# with containers, images, and selector
kubectl get ds kube-proxy -n kube-system -o wide
# NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
# kube-proxy 1 1 1 1 1 kubernetes.io/os=linux 16d kube-proxy registry.k8s.io/kube-proxy:v1.34.1 k8s-app=kube-proxy
# show details
kubectl describe ds kube-proxy -n kube-system
# Name: kube-proxy
# Namespace: kube-system
# Selector: k8s-app=kube-proxy
# Node-Selector: kubernetes.io/os=linux
# Labels: k8s-app=kube-proxy
# Annotations: deprecated.daemonset.template.generation: 1
# Desired Number of Nodes Scheduled: 1
# Current Number of Nodes Scheduled: 1
# Number of Nodes Scheduled with Up-to-date Pods: 1
# Number of Nodes Scheduled with Available Pods: 1
# Number of Nodes Misscheduled: 0
# Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed
# show tolerate
kubectl get ds kube-proxy -n kube-system -o yaml
# spec:
#   template:
#     spec:
#       tolerations:
#       - operator: Exists
# demo-ds.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: demo-ds
spec:
  selector:
    matchLabels:
      app: monitor
  template:
    metadata:
      labels:
        app: monitor
    spec:
      containers:
      - name: monitor
        image: busybox
        command:
        - sleep
        - infinity
kubectl get node
# NAME STATUS ROLES AGE VERSION
# controlplane Ready control-plane 34d v1.33.6
# node01 Ready <none> 34d v1.33.6
# node02 Ready <none> 34d v1.33.6
# get the existing ds
kubectl get ds -A
# NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
# kube-flannel kube-flannel-ds 3 3 3 3 3 <none> 34d
# kube-system kube-proxy 3 3 3 3 3 kubernetes.io/os=linux 34d
kubectl apply -f demo-ds.yaml
# daemonset.apps/demo-ds created
kubectl get ds
# NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
# demo-ds 2 2 2 2 2 <none> 16s
kubectl get ds -o wide
# NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
# demo-ds 2 2 2 2 2 <none> 27s monitor busybox app=monitor
kubectl describe ds demo-ds
# Name: demo-ds
# Namespace: default
# Selector: app=monitor
# Node-Selector: <none>
# Labels: <none>
# Annotations: deprecated.daemonset.template.generation: 1
# Desired Number of Nodes Scheduled: 2
# Current Number of Nodes Scheduled: 2
# Number of Nodes Scheduled with Up-to-date Pods: 2
# Number of Nodes Scheduled with Available Pods: 2
# Number of Nodes Misscheduled: 0
# Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed
# Pod Template:
# Labels: app=monitor
# Containers:
# monitor:
# Image: busybox
# Port: <none>
# Host Port: <none>
# Command:
# sleep
# infinity
# Environment: <none>
# Mounts: <none>
# Volumes: <none>
# Node-Selectors: <none>
# Tolerations: <none>
# Events:
# Type Reason Age From Message
# ---- ------ ---- ---- -------
# Normal SuccessfulCreate 86s daemonset-controller Created pod: demo-ds-rqgbp
# Normal SuccessfulCreate 86s daemonset-controller Created pod: demo-ds-vwll7
kubectl get ds demo-ds -o yaml
# status:
# currentNumberScheduled: 2
# desiredNumberScheduled: 2
# numberAvailable: 2
# numberMisscheduled: 0
# numberReady: 2
# observedGeneration: 1
# updatedNumberScheduled: 2
kubectl get pod
# NAME READY STATUS RESTARTS AGE
# demo-ds-rqgbp 1/1 Running 0 51s
# demo-ds-vwll7 1/1 Running 0 51s
# confirm: the ds pods carry the labels controller-revision-hash and pod-template-generation
kubectl describe pod demo-ds-rqgbp
# Node: node02/192.168.10.152
# Start Time: Thu, 01 Jan 2026 18:40:12 -0500
# Labels: app=monitor
# controller-revision-hash=696f9dcc75
# pod-template-generation=1
# Controlled By: DaemonSet/demo-ds
# Node-Selectors: <none>
# Tolerations: node.kubernetes.io/disk-pressure:NoSchedule op=Exists
# node.kubernetes.io/memory-pressure:NoSchedule op=Exists
# node.kubernetes.io/not-ready:NoExecute op=Exists
# node.kubernetes.io/pid-pressure:NoSchedule op=Exists
# node.kubernetes.io/unreachable:NoExecute op=Exists
# node.kubernetes.io/unschedulable:NoSchedule op=Exists
# confirm affinity
kubectl get pod demo-ds-rqgbp -o yaml
# spec:
# affinity:
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchFields:
# - key: metadata.name
# operator: In
# values:
# - node02
# tolerations:
# - effect: NoExecute
# key: node.kubernetes.io/not-ready
# operator: Exists
# - effect: NoExecute
# key: node.kubernetes.io/unreachable
# operator: Exists
# - effect: NoSchedule
# key: node.kubernetes.io/disk-pressure
# operator: Exists
# - effect: NoSchedule
# key: node.kubernetes.io/memory-pressure
# operator: Exists
# - effect: NoSchedule
# key: node.kubernetes.io/pid-pressure
# operator: Exists
# - effect: NoSchedule
# key: node.kubernetes.io/unschedulable
# operator: Exists
# demo-ds-nodeselector.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: demo-ds
spec:
  selector:
    matchLabels:
      app: monitor
  template:
    metadata:
      labels:
        app: monitor
    spec:
      nodeSelector:
        node-role: front-end
      containers:
      - name: monitor
        image: busybox
        command:
        - sleep
        - infinity
kubectl apply -f demo-ds-nodeselector.yaml
# daemonset.apps/demo-ds created
kubectl get node -L node-role
# NAME STATUS ROLES AGE VERSION NODE-ROLE
# controlplane Ready control-plane 37d v1.33.6
# node01 Ready <none> 37d v1.33.6 front-end
# node02 Ready <none> 37d v1.33.6
kubectl get ds
# NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
# demo-ds 1 1 1 1 1 node-role=front-end 30s
kubectl get pod -o wide
# NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
# demo-ds-gj4hj 1/1 Running 0 2m36s 10.244.1.18 node01 <none> <none>
kubectl label node node02 node-role=front-end
# node/node02 labeled
# confirm the ds gained one Pod
kubectl get ds
# NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
# demo-ds 2 2 2 2 2 node-role=front-end 8m38s
# confirm pod
kubectl get pod -o wide
# NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
# demo-ds-gj4hj 1/1 Running 0 9m13s 10.244.1.18 node01 <none> <none>
# demo-ds-r8pqz 1/1 Running 0 61s 10.244.2.20 node02 <none> <none>
kubectl label node node01 node-role-
# node/node01 unlabeled
# confirm the ds lost one Pod
kubectl get ds
# NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
# demo-ds 1 1 1 1 1 node-role=front-end 10m
# confirm pod
kubectl get pod -o wide
# NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
# demo-ds-r8pqz 1/1 Running 0 2m22s 10.244.2.20 node02 <none> <none>
# update the nodeSelector in demo-ds-nodeselector.yaml:
#     spec:
#       nodeSelector:
#         kubernetes.io/os: linux
kubectl apply -f demo-ds-nodeselector.yaml
# daemonset.apps/demo-ds configured
kubectl get ds
# NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
# demo-ds 2 2 2 1 2 kubernetes.io/os=linux 18m
kubectl get pod -o wide
# NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
# demo-ds-fgqkh 1/1 Running 0 24s 10.244.2.21 node02 <none> <none>
# demo-ds-wqqvw 1/1 Running 0 59s 10.244.1.19 node01 <none> <none>
kubectl patch ds demo-ds --type='json' -p='[{ "op": "remove", "path": "/spec/template/spec/nodeSelector"}]'
# daemonset.apps/demo-ds patched
kubectl get ds
# NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
# demo-ds 2 2 2 0 2 <none> 26m
Giving containers access to the OS kernel
- Node agents and daemons typically require greater access to the node than regular workloads.
- A privileged container is given full access to the kernel.
Example:
# kube-proxy
spec:
  template:
    spec:
      containers:
      - name: kube-proxy
        securityContext:
          privileged: true
Giving a container access to specific capabilities
- A node agent or daemon typically needs only a subset of the system calls provided by the kernel.
- Instead of making the container privileged, grant it only the capabilities covering the system calls it needs to do its job.
Example:
# kubectl get ds kube-flannel-ds -n kube-flannel -o yaml
spec:
  template:
    spec:
      containers:
      - name: kube-flannel
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
          privileged: false
NET_RAW capability:
- allows the container to use special socket types and bind to any address.
NET_ADMIN capability:
- allows various privileged network-related operations such as interface configuration, firewall management, changing routing tables, and so on.
Both help in setting up the networking for all other Pods on a Node.
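Following the least-privilege idea above, a common hardening pattern is to drop all default capabilities and add back only the ones the agent needs. A sketch, not taken from the flannel or kube-proxy manifests:

```yaml
securityContext:
  capabilities:
    drop:
    - ALL          # remove every default capability first
    add:
    - NET_ADMIN    # interface, routing, and firewall configuration
    - NET_RAW      # raw sockets, binding to any address
  privileged: false
```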
Accessing the node's file system
- A node agent or daemon may need to access the host node's file system.
- For example, a node agent deployed through a DaemonSet could be used to install software packages on all cluster nodes.
- Use a hostPath volume.
Example:
# kubectl get ds kube-proxy -n kube-system -o yaml
spec:
  template:
    spec:
      volumes:
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock
      - hostPath:
          path: /lib/modules
          type: ""
        name: lib-modules
hostPath.path: /run/xtables.lock:
- allows the process in the kube-proxy daemon Pod to access the node's xtables.lock file, which is used by the iptables or nftables tools that the process uses to manipulate the node's IP packet filtering.
hostPath.path: /lib/modules:
- allows the process to access the kernel modules that are installed on the node.
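The volumes above must also be mounted into the container via matching volumeMounts. A sketch with mount paths mirroring typical kubeadm-deployed kube-proxy manifests:

```yaml
spec:
  template:
    spec:
      containers:
      - name: kube-proxy
        volumeMounts:
        - name: xtables-lock          # matches the hostPath volume name
          mountPath: /run/xtables.lock
        - name: lib-modules
          mountPath: /lib/modules
          readOnly: true              # kernel modules only need to be read
```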
Using the node's network and other namespaces:
- template.spec.hostNetwork=true: the Pod uses the node's network namespace instead of its own.
- hostIPC and hostPID: the Pod uses the node's IPC and PID namespaces.
# kubectl get ds kube-proxy -n kube-system -o yaml
spec:
  template:
    spec:
      dnsPolicy: ClusterFirst
      hostNetwork: true # access to host network
# get node ip: 192.168.10.150
ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:81:e1:09 brd ff:ff:ff:ff:ff:ff
altname enp2s1
inet 192.168.10.150/24 brd 192.168.10.255 scope global ens33
valid_lft forever preferred_lft forever
inet 192.168.10.107/24 metric 100 brd 192.168.10.255 scope global secondary dynamic ens33
valid_lft 1590sec preferred_lft 1590sec
inet6 fe80::20c:29ff:fe81:e109/64 scope link
valid_lft forever preferred_lft forever
# confirm kube-proxy ip = host ip
kubectl -n kube-system get po -o wide
# NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
# kube-proxy-8rr2r 1/1 Running 5 (4d3h ago) 37d 192.168.10.150 controlplane <none> <none>
hostPort method
Use the ds.spec.template.spec.containers.ports.hostPort field:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  template:
    spec:
      containers:
      - name: node-agent
        image: luksa/node-agent:0.1
        args:
        - --listen-address
        - :80
        ports:
        - name: http
          containerPort: 80
          hostPort: 11559
- The container within the node-agent Pod listens on port 80, which is bound to hostPort 11559 on the node.
- Traffic received by the host node on port 11559 is forwarded to port 80 within the node-agent container.
- You can test the daemon Pod with curl node_ip:11559.

NodePort Service:
- the nodePort forwards to a random Pod matched by the service's selector.
hostPort on a DaemonSet Pod:
- the port forwards only to the local daemon Pod; if that daemon Pod fails, the connection fails.
Pointing the Kiada application to the agent via the Node's IP address
- In the previous section, a daemon Pod is bound to a node port.
- How does an app Pod find its local daemon Pod? By using the downward API to inject the IP of the node where the individual Pod is scheduled.
Example:
kind: Deployment
spec:
  template:
    spec:
      containers:
      - env:
        - name: NODE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: NODE_AGENT_URL
          value: http://$(NODE_IP):11559
The app can use NODE_AGENT_URL to communicate with the local daemon Pod.
hostNetwork method
A similar approach to the previous section is for the agent Pod to directly use the Node's network environment instead of having its own.
Example
kind: DaemonSet
spec:
  template:
    spec:
      hostNetwork: true # use the node's network interface(s)
      containers:
      - name: node-agent
        ports:
        - name: http
          containerPort: 11559 # bound directly to the node's port 11559
        readinessProbe:
          failureThreshold: 1
          httpGet:
            port: 11559
            scheme: HTTP
Limitation of the previous node-IP + port approach: an alternative is a node-local Service.
svc.spec.internalTrafficPolicy = Local:
- configures the Service to forward traffic only to endpoints on the same node.
- If the DaemonSet through which agent Pods are deployed uses a node selector, some nodes may not have an agent running; a client's connection to such a Service on those nodes will fail.
apiVersion: v1
kind: Service
metadata:
  name: node-agent
  labels:
    app: node-agent
spec:
  internalTrafficPolicy: Local # a node-local service
  selector:
    app: node-agent # only selects the ds pods
  ports:
  - name: http
    port: 80 # match the ds pod port
kind: Deployment
spec:
  template:
    spec:
      containers:
      - env:
        - name: NODE_AGENT_URL
          value: http://node-agent # match the service name
Ways to reach a node-local daemon:
- local Service (internalTrafficPolicy: Local)
- hostPort or hostNetwork