Technical notes.
List all pods that are not in the default namespace, and save the output to /opt/non-default.txt
kubectl get pods -A --field-selector metadata.namespace!=default > /opt/non-default.txt
List all images in the kube-system namespace and output to /opt/image.txt
kubectl get pods -n kube-system -o jsonpath="{..image}" > /opt/image.txt
# alternative, one image per line:
kubectl get pods -n kube-system -o jsonpath="{range .items[*].spec.containers[*]}{.image}{'\n'}{end}" > /opt/image.txt
Task: Monitor the logs of pod foo and: extract the log lines corresponding to the error No such file or directory; write those log lines to /tmp/KUTR00101/foo
# env setup
kubectl run foo --image=busybox -- cat /tmp/msg
# pod/foo created
kubectl exec -it foo -- ulimit -n
# a variant of this task asks for the error RLIMIT_NOFILE instead:
kubectl logs foo | grep "RLIMIT_NOFILE" > /opt/KUTR00101/foo
k run fnf --image=busybox -- cat /tmp/msg
k get pod fnf
k logs fnf | grep "No such file or directory" > /opt/cka/answers/sorted_log.log
Task: Schedule a Pod as follows: name: kucc8; app containers: 2; container names/images: nginx and redis (see the YAML below).
kubectl run kucc8 --image=nginx --dry-run=client -o yaml > multi-con.yaml
vi multi-con.yaml
# apiVersion: v1
# kind: Pod
# metadata:
#   name: kucc8
# spec:
#   containers:
#   - name: nginx
#     image: nginx
#   - name: redis
#     image: redis
kubectl apply -f multi-con.yaml
# pod/kucc8 created
# confirm
kubectl get pod kucc8
CKA EXAM OBJECTIVE: Understand the primitives used to create robust, self-healing application deployments. Task:
# pod-multiple01.yaml
apiVersion: v1
kind: Pod
metadata:
  name: multicontainer
spec:
  containers:
  - name: redis
    image: redis:6.2.6
  - name: nginx
    image: nginx:1.21.6
kubectl apply -f pod-multiple01.yaml
# pod/multicontainer created
k describe pod multicontainer
# Containers:
# nginx:
# Container ID: containerd://b275f940b9804b275dbfe06b14eb6f714ab26aaf5c7074834796748d6cccb6b6
# Image: nginx:1.21.6
# redis:
# Container ID: containerd://5d18eaf6b0424a3c1eb7765005deb0424bee00e81bf114520d2ca33699edd233
# Image: redis:6.2.6
Set up the environment: [candidate@node-1] $ kubectl config use-context k8s
Context: Integrate an existing Pod into Kubernetes's built-in logging architecture (e.g. kubectl logs). Adding a streaming sidecar container is a good way to accomplish this requirement.
Task: Using the busybox image, add a sidecar container named sidecar to the existing Pod 11-factor-app. The new sidecar container must run the following command: /bin/sh -c "tail -n+1 -f /var/log/11-factor-app.log". Use a Volume mounted at /var/log to make the log file 11-factor-app.log available to the sidecar container. Do not change the spec of the existing container other than adding the required volume mount.
tee task-sidecar01-env.yaml<<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: 11-factor-app
spec:
  containers:
  - name: count
    image: busybox
    command:
    - "/bin/sh"
    - "-c"
    args:
    - |
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/11-factor-app.log;
        i=$((i+1));
        sleep 1;
      done
EOF
kubectl apply -f task-sidecar01-env.yaml
# pod/11-factor-app created
kubectl get pod
# NAME READY STATUS RESTARTS AGE
# 11-factor-app 1/1 Running 0 3m54s
kubectl exec -it 11-factor-app -- cat /var/log/11-factor-app.log
# 0: Thu Jan 8 20:45:44 UTC 2026
# 1: Thu Jan 8 20:45:45 UTC 2026
# 2: Thu Jan 8 20:45:46 UTC 2026
# 3: Thu Jan 8 20:45:47 UTC 2026
# 4: Thu Jan 8 20:45:49 UTC 2026
kubectl get pod 11-factor-app -o yaml > sidecar.yaml
# backup
cp sidecar.yaml sidecar.yaml.bak
vi sidecar.yaml
# sidecar.yaml
apiVersion: v1
kind: Pod
metadata:
  name: 11-factor-app
spec:
  volumes:
  - name: varlog
    emptyDir: {}
  containers:
  - name: count
    image: busybox
    volumeMounts:
    - name: varlog
      mountPath: /var/log
    command:
    - "/bin/sh"
    - "-c"
    args:
    - |
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/11-factor-app.log;
        i=$((i+1));
        sleep 1;
      done
  - name: sidecar
    image: busybox
    volumeMounts:
    - name: varlog
      mountPath: /var/log
    command:
    - "/bin/sh"
    - "-c"
    args:
    - |
      tail -n+1 -f /var/log/11-factor-app.log
kubectl delete pod 11-factor-app --grace-period=1
# pod "11-factor-app" deleted
kubectl apply -f sidecar.yaml
# pod/11-factor-app created
kubectl get pod 11-factor-app
# NAME READY STATUS RESTARTS AGE
# 11-factor-app 2/2 Running 0 17s
# confirm
kubectl logs 11-factor-app -c sidecar
# 0: Thu Jan 8 20:53:07 UTC 2026
# 1: Thu Jan 8 20:53:09 UTC 2026
# 2: Thu Jan 8 20:53:10 UTC 2026
# 3: Thu Jan 8 20:53:11 UTC 2026
# 4: Thu Jan 8 20:53:12 UTC 2026
# 5: Thu Jan 8 20:53:13 UTC 2026
# 6: Thu Jan 8 20:53:14 UTC 2026
# 7: Thu Jan 8 20:53:15 UTC 2026
# 8: Thu Jan 8 20:53:16 UTC 2026
A legacy app needs to be integrated into the Kubernetes built-in logging architecture (i.e. kubectl logs). Adding a streaming co-located container is a good and common way to accomplish this requirement.
Task
Update the existing Deployment synergy-deployment, adding a co-located container named sidecar using the busybox:stable image to the existing Pod.
The new co-located container has to run the following command: /bin/sh -c "tail -n+1 -f /var/log/synergy-deployment.log"
Use a Volume mounted at /var/log to make the log file synergy-deployment.log available to the co-located container.
Do not modify the specification of the existing container other than adding the required volume mount.
Hint: Use a shared volume to expose the log file between the main application container and the sidecar
tee env-deploy.yaml<<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: synergy-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: synergy
  template:
    metadata:
      labels:
        app: synergy
    spec:
      containers:
      - name: synergy
        image: busybox
        command: ["/bin/sh", "-c"]
        args:
        - |
          i=1;
          while true; do
            echo "$(date) synergy log line $i" >> /var/log/synergy-deployment.log;
            i=$((i+1));
            sleep 2;
          done
EOF
kubectl apply -f env-deploy.yaml
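A sketch of the required update, following the same pattern as the 11-factor-app task above (kubectl edit deploy synergy-deployment; the volume name varlog is an assumption): add an emptyDir volume, mount it in both containers, and add the sidecar:
      volumes:
      - name: varlog
        emptyDir: {}
      containers:
      - name: synergy
        # existing spec unchanged, plus:
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      - name: sidecar
        image: busybox:stable
        command: ["/bin/sh", "-c", "tail -n+1 -f /var/log/synergy-deployment.log"]
        volumeMounts:
        - name: varlog
          mountPath: /var/log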
CKA EXAM OBJECTIVE: Configure volume types [ … ] Task:
tee task-sidecar-env.yaml<<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: logger
spec:
  nodeName: node01
  containers:
  - name: writer
    image: busybox
    command:
    - "/bin/sh"
    - "-c"
    args:
    - |
      mkdir -pv /var/log
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/log01.log;
        i=$((i+1));
        sleep 1;
      done
EOF
kubectl apply -f task-sidecar-env.yaml
# task-sidecar.yaml
apiVersion: v1
kind: Pod
metadata:
  name: logger
spec:
  nodeName: node01
  volumes:
  - name: pod-vol
    emptyDir: {}
  containers:
  - name: writer
    image: busybox
    volumeMounts:
    - name: pod-vol
      mountPath: /var/log
    command:
    - "/bin/sh"
    - "-c"
    args:
    - |
      mkdir -pv /var/log
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/log01.log;
        i=$((i+1));
        sleep 1;
      done
  - name: reader
    image: busybox
    volumeMounts:
    - name: pod-vol
      mountPath: /var/log
    command:
    - "/bin/sh"
    - "-c"
    args:
    - |
      tail -f /var/log/log01.log
# the pod already exists from the env setup, so force-recreate it with the new spec
k replace --force -f task-sidecar.yaml
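# confirm the reader container streams the writer's log
kubectl logs logger -c reader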
Create a Pod mc-pod in the mc-namespace namespace with three containers. The first container should be named mc-pod-1, run the nginx:1-alpine image, and set an environment variable NODE_NAME to the node name. The second container should be named mc-pod-2, run the busybox:1 image, and continuously log the output of the date command to the file /var/log/shared/date.log every second. The third container should have the name mc-pod-3, run the image busybox:1, and print the contents of the date.log file generated by the second container to stdout. Use a shared, non-persistent volume.
kubectl create namespace mc-namespace
# mc-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mc-pod
  namespace: mc-namespace
spec:
  volumes:
  - name: shared-vol
    emptyDir: {}
  containers:
  - name: mc-pod-1
    image: nginx:1-alpine
    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    volumeMounts:
    - name: shared-vol
      mountPath: /var/log/shared
  - name: mc-pod-2
    image: busybox:1
    command: ["/bin/sh", "-c"]
    args:
    - while true; do date >> /var/log/shared/date.log; sleep 1; done
    volumeMounts:
    - name: shared-vol
      mountPath: /var/log/shared
  - name: mc-pod-3
    image: busybox:1
    command: ["/bin/sh", "-c"]
    args:
    - tail -f /var/log/shared/date.log
    volumeMounts:
    - name: shared-vol
      mountPath: /var/log/shared
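Apply and verify (assuming the manifest is saved as mc-pod.yaml; the third container should stream the dates written by the second):
kubectl apply -f mc-pod.yaml
kubectl logs mc-pod -n mc-namespace -c mc-pod-3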
Create a deployment named logging-deployment in the namespace logging-ns with 1 replica, with the following specifications:
The main container should be named app-container, use the image busybox, and should run the following command to simulate writing logs: sh -c "while true; do echo 'Log entry' >> /var/log/app/app.log; sleep 5; done"
Add a sidecar container named log-agent that also uses the busybox image and runs the command: tail -f /var/log/app/app.log
log-agent logs should display the entries logged by the main app-container
Setup env
k create ns logging-ns
# sidecar.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logging-deployment
  namespace: logging-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logging-deployment
  template:
    metadata:
      labels:
        app: logging-deployment
    spec:
      containers:
      - name: app-container
        image: busybox
        command:
          [
            "sh",
            "-c",
            'while true; do echo "Log entry" >> /var/log/app/app.log; sleep 5; done',
          ]
        volumeMounts:
        - name: data
          mountPath: /var/log/app/
      - name: log-agent
        image: busybox
        command: ["sh", "-c", "tail -f /var/log/app/app.log"]
        volumeMounts:
        - name: data
          mountPath: /var/log/app
      volumes:
      - name: data
        emptyDir: {}
kubectl apply -f sidecar.yaml
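# confirm the sidecar sees the entries (kubectl logs picks one pod of the Deployment)
kubectl logs deploy/logging-deployment -n logging-ns -c log-agent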
Set up the environment: [candidate@node-1] $ kubectl config use-context k8s
Task: Scale the deployment presentation to 4 pods
kubectl create deploy presentation --image=nginx
# deployment.apps/presentation created
# confirm
kubectl get deploy presentation
# NAME READY UP-TO-DATE AVAILABLE AGE
# presentation 1/1 1 1 10s
kubectl scale deploy presentation --replicas=4
# deployment.apps/presentation scaled
# confirm
kubectl rollout status deploy presentation
# deployment "presentation" successfully rolled out
kubectl get deploy presentation -o wide
# NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
# presentation 4/4 4 4 3m54s nginx nginx app=presentation
Scale the dragon Deployment to 8 pods
kubectl create deploy dragon --image=nginx
kubectl get deploy dragon
# NAME READY UP-TO-DATE AVAILABLE AGE
# dragon 1/1 1 1 6s
kubectl scale deploy dragon --replicas=8
# deployment.apps/dragon scaled
# confirm
kubectl get deploy dragon
# NAME READY UP-TO-DATE AVAILABLE AGE
# dragon 8/8 8 8 62s
CKA EXAM OBJECTIVE: Understand application deployments and how to perform rolling updates and rollbacks. Task:
k create ns king-of-lions
k create deploy mufasa --image=nginx -n king-of-lions
k get deploy -n king-of-lions
# check history
k rollout history deploy mufasa -n king-of-lions
# deployment.apps/mufasa
# REVISION CHANGE-CAUSE
# 1 <none>
kubectl rollout undo deploy mufasa --to-revision=1 -n king-of-lions
Create a new deployment called nginx-deploy, with image nginx:1.16 and 1 replica. Next, upgrade the deployment to version 1.17 using a rolling update. The rollout history should show the image change.
kubectl create deploy nginx-deploy --image=nginx:1.16
kubectl get deploy
# NAME READY UP-TO-DATE AVAILABLE AGE
# nginx-deploy 1/1 1 1 11s
kubectl rollout history deploy nginx-deploy --revision=1
# deployment.apps/nginx-deploy with revision #1
# Pod Template:
# Labels: app=nginx-deploy
# pod-template-hash=794f94f75f
# Containers:
# nginx:
# Image: nginx:1.16
# Port: <none>
# Host Port: <none>
# Environment: <none>
# Mounts: <none>
# Volumes: <none>
# Node-Selectors: <none>
# Tolerations: <none>
# update image
kubectl set image deploy nginx-deploy nginx=nginx:1.17
# deployment.apps/nginx-deploy image updated
kubectl rollout history deploy nginx-deploy --revision=2
# deployment.apps/nginx-deploy with revision #2
# Pod Template:
# Labels: app=nginx-deploy
# pod-template-hash=6c879966f8
# Containers:
# nginx:
# Image: nginx:1.17
# Port: <none>
# Host Port: <none>
# Environment: <none>
# Mounts: <none>
# Volumes: <none>
# Node-Selectors: <none>
# Tolerations: <none>
Deploy a StatefulSet named web with 2 replicas using the NGINX image. Each pod should have its own 1Gi persistent volume for /usr/share/nginx/html.
Ensure that the StatefulSet pods have stable network identities and persistent storage that remains associated with the ordinal index (even if pods are rescheduled).
Create a Headless Service named web to facilitate stable networking for the StatefulSet
# task-sts.yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  ports:
  - name: http
    port: 80
  clusterIP: None
  selector:
    app: web
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: pvc-sts
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: pvc-sts
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
kubectl apply -f task-sts.yaml
# service/web created
# statefulset.apps/web created
# confirm sts
kubectl get sts web -o wide
# NAME READY AGE CONTAINERS IMAGES
# web 2/2 44s nginx nginx
kubectl describe sts web
# Selector: app=web
# Volume Claims:
# Name: pvc-sts
# StorageClass:
# Labels: <none>
# Annotations: <none>
# Capacity: 1Gi
# Access Modes: [ReadWriteOnce]
# confirm pvc
kubectl get pvc -l app=web
# NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
# pvc-sts-web-0 Bound pvc-3232a60c-c69d-4d6a-a746-a62925e8fdbb 1Gi RWO local-path <unset> 14m
# pvc-sts-web-1 Bound pvc-a0cf66e7-0c90-491d-928b-1733d5db1abe 1Gi RWO local-path <unset> 14m
# confirm svc
kubectl get svc web
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# web ClusterIP None <none> 80/TCP 2m15s
kubectl describe svc web
# Endpoints: 10.244.2.134:80,10.244.2.136:80
kubectl run --rm -it sts-test --image=busybox --restart=Never -- nslookup web.default
# Server: 10.96.0.10
# Address: 10.96.0.10:53
# Name: web.default.svc.cluster.local
# Address: 10.244.2.136
# Name: web.default.svc.cluster.local
# Address: 10.244.2.134
Create a deployment named hr-web-app using the image kodekloud/webapp-color with 2 replicas.
k create deploy hr-web-app --image=kodekloud/webapp-color --replicas=2
# deployment.apps/hr-web-app created
k get deploy -n default
# NAME READY UP-TO-DATE AVAILABLE AGE
# hr-web-app 2/2 2 2 35s
Expose the hr-web-app created in the previous task as a service named hr-web-app-service, accessible on port 30082 on the nodes of the cluster.
The web application listens on port 8080.
k create deploy hr-web-app --image=nginx --replicas=2
k get deploy hr-web-app
# NAME READY UP-TO-DATE AVAILABLE AGE
# hr-web-app 2/2 2 2 12s
kubectl expose deployment hr-web-app --type=NodePort --port=8080 --name=hr-web-app-service --dry-run=client -o yaml > svc.yaml
vi svc.yaml
# apiVersion: v1
# kind: Service
# metadata:
#   name: hr-web-app-service
# spec:
#   ports:
#   - port: 8080
#     protocol: TCP
#     targetPort: 8080
#     nodePort: 30082
#   selector:
#     app: hr-web-app
#   type: NodePort
k apply -f svc.yaml
# service/hr-web-app-service created
# confirm
k describe svc hr-web-app-service
# Name: hr-web-app-service
# Namespace: default
# Labels: <none>
# Annotations: <none>
# Selector: app=hr-web-app
# Type: NodePort
# IP Family Policy: SingleStack
# IP Families: IPv4
# IP: 10.104.64.79
# IPs: 10.104.64.79
# Port: <unset> 8080/TCP
# TargetPort: 8080/TCP
# NodePort: <unset> 30082/TCP
# Endpoints: 10.244.196.149:8080,10.244.140.77:8080
# Session Affinity: None
# External Traffic Policy: Cluster
# Internal Traffic Policy: Cluster
# Events: <none>
Create a service named messaging-service to expose the messaging pod within the cluster on port 6379. The messaging pod is running in the default namespace.
k expose pod messaging --name=messaging-service --port=6379 --type=ClusterIP
# service/messaging-service exposed
# confirm
k describe svc messaging-service
# Name: messaging-service
# Namespace: default
# Labels: tier=msg
# Annotations: <none>
# Selector: tier=msg
# Type: ClusterIP
# IP Family Policy: SingleStack
# IP Families: IPv4
# IP: 172.20.133.163
# IPs: 172.20.133.163
# Port: <unset> 6379/TCP
# TargetPort: 6379/TCP
# Endpoints: 172.17.0.10:6379
# Session Affinity: None
# Internal Traffic Policy: Cluster
# Events: <none>
A new application orange is deployed. There is something wrong with it. Identify and fix the issue.
k get pod orange
# NAME READY STATUS RESTARTS AGE
# orange 0/1 Init:Error 2 (20s ago) 22s
k describe pod orange
# Init Containers:
# init-myservice:
# Container ID: containerd://f4d945a07e1e05db42758f503897e8e796c8eb1d40d1179c19ff302364cc1fe8
# Image: busybox
# Image ID: docker.io/library/busybox@sha256:2383baad1860bbe9d8a7a843775048fd07d8afe292b94bd876df64a69aae7cb1
# Port: <none>
# Host Port: <none>
# Command:
# sh
# -c
# sleeeep 2;
k edit pod orange
# fix the init container command: sleeeep 2; => sleep 2;
# the edit is rejected (pod spec is immutable); kubectl saves it to a temp file, so force-recreate:
k replace --force -f /tmp_file
# confirm
k get pod orange
# NAME READY STATUS RESTARTS AGE
# orange 1/1 Running 0 14s
Q12. Use Namespace project-1 for the following. Create a DaemonSet named daemon-imp with image httpd:2.4-alpine and labels id=daemon-imp and uuid=18426a0b-5f59-4e10-923f-c0e078e82233. The Pods of that DaemonSet should run on all nodes, including control plane nodes.
# ds.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-imp
  namespace: project-1
spec:
  selector:
    matchLabels:
      id: daemon-imp
      uuid: 18426a0b-5f59-4e10-923f-c0e078e82233
  template:
    metadata:
      labels:
        id: daemon-imp
        uuid: 18426a0b-5f59-4e10-923f-c0e078e82233
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - name: daemon-imp
        image: httpd:2.4-alpine
# get controlplane taint
kubectl describe node controlplane | grep -i taint
# Taints: node-role.kubernetes.io/control-plane:NoSchedule
kubectl apply -f ds.yaml
kubectl get po -l id=daemon-imp -o wide
# NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
# daemon-imp-55vgx 1/1 Running 0 6m19s 10.244.196.158 node01 <none> <none>
# daemon-imp-b7n9j 1/1 Running 0 6m19s 10.244.140.94 node02 <none> <none>
# daemon-imp-b9x9r 1/1 Running 0 6m19s 10.244.49.71 controlplane <none> <none>
Q11. Create a ReplicaSet with the below specifications: Name = web-app, Image = nginx, Replicas = 3.
Please note, there is already a pod running in our cluster named web-frontend; make sure the total number of pods running in the cluster is not more than 3 (see the sketch below).
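A sketch, assuming the existing web-frontend pod carries a label matching the selector (so the ReplicaSet adopts it and creates only two more pods, keeping the total at 3); the tier: frontend label is an assumption — match it to web-frontend's actual labels:
# rs.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx
kubectl apply -f rs.yaml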
CKA EXAM OBJECTIVE: Use ConfigMaps and Secrets to configure applications. Task:
tee ./index.html<<EOF
<html>
<title></title>
<body>
<h1>home</h1>
</body>
</html>
EOF
k create ns metallica
kubectl create configmap metal-cm -n metallica --from-file=index.html
# configmap/metal-cm created
k describe cm metal-cm -n metallica
# Name: metal-cm
# Namespace: metallica
# Labels: <none>
# Annotations: <none>
# Data
# ====
# index.html:
# ----
# <html>
# <title></title>
# <body>
# <h1>home</h1>
# </body>
# </html>
# deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: enter-sandman
  namespace: metallica
spec:
  replicas: 1
  selector:
    matchLabels:
      app: enter-sandman
  template:
    metadata:
      labels:
        app: enter-sandman
    spec:
      volumes:
      - name: pod-vol
        configMap:
          name: metal-cm
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: pod-vol
          mountPath: "/var/www/"
k apply -f deploy.yaml
# confirm
k get pod -n metallica
# NAME READY STATUS RESTARTS AGE
# enter-sandman-866c78fd8b-nrbsq 1/1 Running 0 4m38s
k describe pod enter-sandman-866c78fd8b-nrbsq -n metallica
# Containers:
# nginx:
# Mounts:
# /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bqg26 (ro)
# /var/www/ from pod-vol (rw)
# Volumes:
# pod-vol:
# Type: ConfigMap (a volume populated by a ConfigMap)
# Name: metal-cm
We have created a new deployment called nginx-deploy. Scale the deployment to 3 replicas. Has the number of replicas increased? Troubleshoot and fix the issue.
k create deploy nginx-deploy --image=nginx -n default
# env setup: break the controller manager on purpose
sudo vi /etc/kubernetes/manifests/kube-controller-manager.yaml
# find
# spec:
# containers:
# - command:
# - kube-controller-manager
# replace
# spec:
# containers:
# - command:
# - kube-contro1ler-manager
k get pod -n kube-system
# kube-controller-manager-controlplane 0/1 CrashLoopBackOff 1 (14s ago) 15s
# try scale out
k scale deploy nginx-deploy -n default --replicas=3
# deployment.apps/nginx-deploy scaled
k get deploy nginx-deploy
# NAME READY UP-TO-DATE AVAILABLE AGE
# nginx-deploy 1/3 1 1 3h2m
# check deploy event
k describe deploy nginx-deploy
# Events:
# Type Reason Age From Message
# ---- ------ ---- ---- -------
# Normal ScalingReplicaSet 2m48s deployment-controller Scaled up replica set nginx-deploy-c9d9f6c6c from 0 to 1
# check controlplane component: controller manager fails
k get pod -n kube-system
# kube-controller-manager-controlplane 0/1 CrashLoopBackOff 3 (79s ago) 2m29s
# try to check log for detail info
k logs kube-contro1ler-manager-controlplane -n kube-system
# error: error from server (NotFound): pods "kube-contro1ler-manager-controlplane" not found in namespace "kube-system"
# check the manifest and fix the typo in the command
sudo vi /etc/kubernetes/manifests/kube-controller-manager.yaml
# find
# spec:
# containers:
# - command:
# - kube-contro1ler-manager
# replace
# spec:
# containers:
# - command:
# - kube-controller-manager
# confirm: components
k get po -n kube-system
# kube-controller-manager-controlplane 1/1 Running 0 12s
# confirm: deploy scale out
k get deploy
# NAME READY UP-TO-DATE AVAILABLE AGE
# nginx-deploy 3/3 3 3 6m28s
An NGINX Deployment named nginx-static is running in the nginx-static namespace. It is configured using a ConfigMap named nginx-config. Update the nginx-config ConfigMap to allow only TLSv1.3 connections. Re-create, restart, or scale resources as necessary.
Add the IP address of the service in /etc/hosts and name it web.k8s.local.
Verify:
fail: curl -vk --tls-max 1.2 https://web.k8s.local
success: curl -vk --tlsv1.3 https://web.k8s.local
# Namespace
kubectl create ns nginx-static
# Create a self-signed cert for web.k8s.local (TLS secret)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout tls.key -out tls.crt \
-subj "/CN=web.k8s.local" \
-addext "subjectAltName=DNS:web.k8s.local"
kubectl create secret tls web-tls --cert=tls.crt --key=tls.key -n nginx-static
# ConfigMap with nginx.conf (initially allows TLSv1.2 + TLSv1.3)
tee cm.yaml<<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: nginx-static
data:
  nginx.conf: |
    events {}
    http {
      server {
        listen 443 ssl;
        server_name web.k8s.local;
        ssl_certificate /etc/nginx/tls/tls.crt;
        ssl_certificate_key /etc/nginx/tls/tls.key;
        ssl_protocols TLSv1.2 TLSv1.3;
        location / {
          default_type text/plain;
          return 200 "nginx-static OK\n";
        }
      }
    }
EOF
kubectl apply -f cm.yaml
# configmap/nginx-config created
tee deploy.yaml<<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-static
  namespace: nginx-static
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-static
  template:
    metadata:
      labels:
        app: nginx-static
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: nginx
        image: nginx:1.25-alpine
        ports:
        - containerPort: 443
          hostPort: 443
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
        - name: tls
          mountPath: /etc/nginx/tls
          readOnly: true
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-config
      - name: tls
        secret:
          secretName: web-tls
EOF
kubectl apply -f deploy.yaml
# deployment.apps/nginx-static created
# remove TLSv1.2
kubectl edit cm nginx-config -n nginx-static
# configmap/nginx-config edited
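In the editor, the only change needed is the ssl_protocols line:
# before: ssl_protocols TLSv1.2 TLSv1.3;
# after:  ssl_protocols TLSv1.3;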
# restart deployment to apply new cm
kubectl rollout restart deploy nginx-static -n nginx-static
# deployment.apps/nginx-static restarted
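Then map the hostname to the node running the pod (hostNetwork) and verify; the IP below is a placeholder for your node's address:
echo "172.30.1.2 web.k8s.local" | sudo tee -a /etc/hosts
curl -vk --tls-max 1.2 https://web.k8s.local
# fails the TLS handshake after the change
curl -vk --tlsv1.3 https://web.k8s.local
# succeeds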
CKA EXAM OBJECTIVE: Use ConfigMaps and Secrets to configure applications. Task:
# task-secret-env.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kiwi-secret-pod
  namespace: kiwi
spec:
  containers:
  - name: busybox
    image: busybox
    command:
    - "sh"
    - "-c"
    args:
    - |
      mkdir -pv /var/log
      touch /var/log/msg.txt
      while true; do
        echo "$(date) $USERKIWI:$PASSKIWI" >> /var/log/msg.txt;
        sleep 10;
      done
k create ns kiwi
k apply -f task-secret-env.yaml
k get pod -n kiwi
k create secret generic juicysecret --from-literal=username="kiwis" --from-literal=password="aredelicious" -n kiwi
k describe secret juicysecret -n kiwi
# task-secret-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kiwi-secret-pod
  namespace: kiwi
spec:
  containers:
  - name: busybox
    image: busybox
    env:
    - name: USERKIWI
      valueFrom:
        secretKeyRef:
          name: juicysecret
          key: username
    - name: PASSKIWI
      valueFrom:
        secretKeyRef:
          name: juicysecret
          key: password
    command:
    - "sh"
    - "-c"
    args:
    - |
      mkdir -pv /var/log
      touch /var/log/msg.txt
      while true; do
        echo "$(date) $USERKIWI:$PASSKIWI" >> /var/log/msg.txt;
        sleep 10;
      done
k replace --force -f task-secret-pod.yaml
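# confirm the secret values are rendered into the log (the pod was just recreated)
k exec -n kiwi kiwi-secret-pod -- tail -n 1 /var/log/msg.txt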
Create a ConfigMap named app-config in the namespace cm-namespace with the following key-value pairs:
ENV=production LOG_LEVEL=info
Then, modify the existing Deployment named cm-webapp in the same namespace to use the app-config ConfigMap by setting the environment variables ENV and LOG_LEVEL in the container from the ConfigMap.
kubectl create ns cm-namespace
k create deploy cm-webapp -n cm-namespace --image=nginx
kubectl create configmap app-config --from-literal=ENV=production --from-literal=LOG_LEVEL=info -n cm-namespace
# configmap/app-config created
k describe cm app-config -n cm-namespace
# Name: app-config
# Namespace: cm-namespace
# Labels: <none>
# Annotations: <none>
# Data
# ====
# ENV:
# ----
# production
# LOG_LEVEL:
# ----
# info
# BinaryData
# ====
# Events: <none>
# edit existing deploy
kubectl edit deployment cm-webapp -n cm-namespace
# deployment.apps/cm-webapp edited
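One way to wire both variables in is envFrom referencing the ConfigMap; a sketch of what to add under the container spec in the editor:
        envFrom:
        - configMapRef:
            name: app-config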
kubectl rollout restart deploy cm-webapp -n cm-namespace
# deployment.apps/cm-webapp restarted
k get po -n cm-namespace
# NAME READY STATUS RESTARTS AGE
# cm-webapp-646666688-94k9v 1/1 Running 0 22s
# confirm
k exec -it cm-webapp-646666688-94k9v -n cm-namespace -- sh
# echo $ENV
# production
# echo $LOG_LEVEL
# info
Deploy a Vertical Pod Autoscaler (VPA) with name analytics-vpa for the deployment named analytics-deployment in the default namespace. The VPA should automatically adjust the CPU and memory requests of the pods to optimize resource utilization. Ensure that the VPA operates in Recreate mode, allowing it to evict and recreate pods with updated resource requests as needed.
k create deploy analytics-deployment --image=nginx --replicas=2
# vpa.yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: analytics-vpa
  namespace: default
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: analytics-deployment
  updatePolicy:
    updateMode: "Recreate"
  resourcePolicy:
    containerPolicies:
    - containerName: "*"
      controlledResources: ["cpu", "memory"]
k apply -f vpa.yaml
# verticalpodautoscaler.autoscaling.k8s.io/analytics-vpa created
# confirm
k get vpa
# NAME MODE CPU MEM PROVIDED AGE
# analytics-vpa Recreate 69s
k describe vpa analytics-vpa
# Name: analytics-vpa
# Namespace: default
# Labels: <none>
# Annotations: <none>
# API Version: autoscaling.k8s.io/v1
# Kind: VerticalPodAutoscaler
# Metadata:
# Creation Timestamp: 2026-01-18T03:05:30Z
# Generation: 1
# Resource Version: 23967
# UID: bfb69d0a-48b5-4088-ab5b-57ff131414b5
# Spec:
# Resource Policy:
# Container Policies:
# Container Name: *
# Controlled Resources:
# cpu
# memory
# Target Ref:
# API Version: apps/v1
# Kind: Deployment
# Name: analytics-deployment
# Update Policy:
# Update Mode: Recreate
# Events: <none>
Deploy a sample workload and configure Horizontal Pod Autoscaling for it. Specifically: use the existing deployment cpu-demo, and configure an HPA to scale this deployment from 1 up to 5 replicas when the average CPU utilization exceeds 50%.
kubectl create deploy cpu-demo --image=busybox -- sleep infinity
kubectl autoscale deploy/cpu-demo --min=1 --max=5 --cpu-percent=50
# horizontalpodautoscaler.autoscaling/cpu-demo autoscaled
kubectl get hpa
# NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
# cpu-demo Deployment/cpu-demo cpu: <unknown>/50% 1 5 1 84s
kubectl describe hpa cpu-demo
# Metrics: ( current / target )
# resource cpu on pods (as a percentage of request): <unknown> / 50%
# Min replicas: 1
# Max replicas: 5
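The <unknown> target is expected here: the busybox container has no CPU request, and the HPA computes utilization as a percentage of requests. A sketch to give it one:
kubectl set resources deploy/cpu-demo --requests=cpu=100m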
Create a Horizontal Pod Autoscaler (HPA) with name webapp-hpa for the deployment named kkapp-deploy in the default namespace with the webapp-hpa.yaml file located under the root folder. Ensure that the HPA scales the deployment based on CPU utilization, maintaining an average CPU usage of 50% across all pods. Configure the HPA to cautiously scale down pods by setting a stabilization window of 300 seconds to prevent rapid fluctuations in pod count.
Note: The kkapp-deploy deployment is created for backend; you can check in the terminal.
k create deploy kkapp-deploy --image=nginx --replicas=2
# webapp-hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: kkapp-deploy
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
k apply -f /root/webapp-hpa.yaml
# horizontalpodautoscaler.autoscaling/webapp-hpa created
k describe hpa webapp-hpa
# Name: webapp-hpa
# Namespace: default
# Labels: <none>
# Annotations: <none>
# CreationTimestamp: Sat, 17 Jan 2026 21:41:01 -0500
# Reference: Deployment/kkapp-deploy
# Metrics: ( current / target )
# resource cpu on pods (as a percentage of request): <unknown> / 50%
# Min replicas: 2
# Max replicas: 10
# Behavior:
# Scale Up:
# Stabilization Window: 0 seconds
# Select Policy: Max
# Policies:
# - Type: Pods Value: 4 Period: 15 seconds
# - Type: Percent Value: 100 Period: 15 seconds
# Scale Down:
# Stabilization Window: 300 seconds
# Select Policy: Max
# Policies:
# - Type: Percent Value: 100 Period: 15 seconds
# Deployment pods: 0 current / 0 desired
# Events: <none>
Create a new HorizontalPodAutoscaler (HPA) named apache-server in the autoscale namespace. This HPA must target the existing Deployment called apache-server in the autoscale namespace. Set the HPA to target 50% CPU usage per Pod. Configure it to have a minimum of 1 Pod and a maximum of 4 Pods. Also set the downscale stabilization window to 30 seconds.
kubectl create ns autoscale
kubectl create deploy apache-server --image=nginx -n autoscale
kubectl -n autoscale set resources deploy/apache-server --requests=cpu=100m,memory=64Mi --limits=cpu=200m,memory=128Mi
kubectl autoscale deployment apache-server -n autoscale --cpu-percent=50 --min=1 --max=4 --dry-run=client -o yaml > hpa.yaml
vi hpa.yaml
# apiVersion: autoscaling/v2
# kind: HorizontalPodAutoscaler
# metadata:
#   name: apache-server
#   namespace: autoscale
# spec:
#   scaleTargetRef:
#     apiVersion: apps/v1
#     kind: Deployment
#     name: apache-server
#   minReplicas: 1
#   maxReplicas: 4
#   metrics:
#   - type: Resource
#     resource:
#       name: cpu
#       target:
#         type: Utilization
#         averageUtilization: 50
#   behavior:
#     scaleDown:
#       stabilizationWindowSeconds: 30
kubectl apply -f hpa.yaml
# horizontalpodautoscaler.autoscaling/apache-server created
# confirm
kubectl get hpa -n autoscale
# NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
# apache-server Deployment/apache-server cpu: <unknown>/50% 1 4 1 2m4s
kubectl describe hpa apache-server -n autoscale
# Name: apache-server
# Namespace: autoscale
# Labels: <none>
# Annotations: <none>
# CreationTimestamp: Fri, 16 Jan 2026 19:39:03 -0500
# Reference: Deployment/apache-server
# Metrics: ( current / target )
# resource cpu on pods (as a percentage of request): <unknown> / 50%
# Min replicas: 1
# Max replicas: 4
# Behavior:
# Scale Up:
# Stabilization Window: 0 seconds
# Select Policy: Max
# Policies:
# - Type: Pods Value: 4 Period: 15 seconds
# - Type: Percent Value: 100 Period: 15 seconds
# Scale Down:
# Stabilization Window: 30 seconds
# Select Policy: Max
# Policies:
# - Type: Percent Value: 100 Period: 15 seconds
# Deployment pods: 1 current / 0 desired
# Conditions:
# Type Status Reason Message
# ---- ------ ------ -------
# AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
# ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
# Events:
# Type Reason Age From Message
# ---- ------ ---- ---- -------
# Warning FailedGetResourceMetric 14s (x2 over 29s) horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
# Warning FailedComputeMetricsReplicas 14s (x2 over 29s) horizontal-pod-autoscaler invalid metrics (1 invalid out of 1), first error is: failed to get cpu resource metric value: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
Create a Horizontal Pod Autoscaler with name backend-hpa for the deployment named backend-deployment in the backend namespace with the ~/webapp-hpa.yaml file
Ensure that the HPA scales the deployment based on memory utilization, maintaining an average memory usage of 65% across all pods.
Configure the HPA with a minimum of 3 replicas and a maximum of 15.
k create ns backend
k create deploy backend-deployment -n backend --image=nginx
touch webapp-hpa.yaml
# webapp-hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-hpa
  namespace: backend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend-deployment
  minReplicas: 3
  maxReplicas: 15
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 65
k apply -f webapp-hpa.yaml
# horizontalpodautoscaler.autoscaling/backend-hpa created
k describe hpa backend-hpa -n backend
# Name: backend-hpa
# Namespace: backend
# Labels: <none>
# Annotations: <none>
# CreationTimestamp: Mon, 19 Jan 2026 00:03:59 -0500
# Reference: Deployment/backend-deployment
# Metrics: ( current / target )
# resource memory on pods (as a percentage of request): <unknown> / 65%
# Min replicas: 3
# Max replicas: 15
# Deployment pods: 1 current / 3 desired
# Conditions:
# Type Status Reason Message
# ---- ------ ------ -------
# AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 3
# Events:
# Type Reason Age From Message
# ---- ------ ---- ---- -------
# Normal SuccessfulRescale 11s horizontal-pod-autoscaler New size: 3; reason: Current number of replicas below Spec.MinReplicas
Create a Horizontal Pod Autoscaler (HPA) api-hpa for the deployment named api-deployment located in the api namespace. The HPA should scale the deployment based on a custom metric named requests_per_second, targeting an average value of 1000 requests per second across all pods. Set the minimum number of replicas to 1 and the maximum to 20.
Note: Deployment named api-deployment is available in api namespace. Ignore errors due to the metric requests_per_second not being tracked in metrics-server
k create ns api
k create deploy api-deployment --image=nginx -n api
tee hpa.yaml<<EOF
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa
  namespace: api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-deployment
  minReplicas: 1
  maxReplicas: 20
  metrics:
  - type: Pods
    pods:
      metric:
        name: requests_per_second
      target:
        type: AverageValue
        averageValue: 1k
EOF
k apply -f hpa.yaml
# horizontalpodautoscaler.autoscaling/api-hpa created
kubectl get hpa -n api
# NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
# api-hpa Deployment/api-deployment <unknown>/1k 1 20 0 11s
kubectl describe hpa api-hpa -n api
# Name: api-hpa
# Namespace: api
# Labels: <none>
# Annotations: <none>
# CreationTimestamp: Wed, 21 Jan 2026 19:35:15 -0500
# Reference: Deployment/api-deployment
# Metrics: ( current / target )
# "requests_per_second" on pods: <unknown> / 1k
# Min replicas: 1
# Max replicas: 20
Install Argo CD in the cluster: Add the official Argo CD Helm repository with the name argo (url: https://argoproj.github.io/argo-helm). The Argo CD CRDs have already been pre-installed in the cluster. Generate a helm template of the Argo CD Helm chart version 7.7.3 for the argocd namespace, configured to not install CRDs, and save it to /argo-helm.yaml. Then install Argo CD using Helm with release name argocd, the same version 7.7.3 and the same configuration as the template: into the argocd namespace, without installing CRDs.
You do not need to configure access to the Argo CD server UI.
Solution
https://argoproj.github.io/argo-helm/
# create ns
kubectl create ns argocd
# namespace/argocd created
# add repo (the task asks for the repo name argo)
helm repo add argo https://argoproj.github.io/argo-helm
# "argo" has been added to your repositories
helm repo list
# NAME URL
# argo https://argoproj.github.io/argo-helm
# update
helm repo update
# Hang tight while we grab the latest from your chart repositories...
# ...Successfully got an update from the "argocd" chart repository
# Update Complete. ⎈Happy Helming!⎈
# find chart within repo
helm search repo argo --version 7.7.3
# NAME CHART VERSION APP VERSION DESCRIPTION
# argo/argo-cd 7.7.3 v2.13.0 A Helm chart for Argo CD, a declarative, GitOps..
# get values for crd within the chart: crds.install
helm show values argo/argo-cd --version 7.7.3 | grep -i -A5 crd
# crds:
# # -- Install and upgrade CRDs
# install: true
# # -- Keep CRDs on chart uninstall
# keep: true
# # -- Annotations to be added to all CRDs
# annotations: {}
# # -- Addtional labels to be added to all CRDs
# additionalLabels: {}
# ## Globally shared configuration
# global:
# # -- Default domain used by all components
# render Helm chart templates locally, with values
helm template argocd argo/argo-cd -n argocd --version 7.7.3 --set crds.install=false > /argo-helm.yaml
# confirm
grep argocd /argo-helm.yaml
grep 7.7.3 /argo-helm.yaml
# install
helm install argocd argo/argo-cd --version 7.7.3 --namespace argocd --set crds.install=false
# NAME: argocd
# LAST DEPLOYED: Tue Jan 20 14:19:46 2026
# NAMESPACE: argocd
# STATUS: deployed
# REVISION: 1
# TEST SUITE: None
# NOTES:
# Verify
helm list -n argocd
# NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
# argocd argocd 1 2026-01-20 14:19:46.76485447 -0500 EST deployed argo-cd-7.7.3 v2.13.0
kubectl get pods -n argocd
Use Helm to deploy the Traefik Ingress Controller on the cluster. url: https://traefik.github.io/charts Install it in a dedicated namespace traefik with release name traefik. Ensure that Traefik’s support for the Kubernetes Gateway API is enabled via Helm values.
The chart could be installed from a repository URL or from a local directory.
kubectl create ns traefik
# add repo url
helm repo add traefik https://traefik.github.io/charts
# "traefik" has been added to your repositories
# update repo
helm repo update
# Hang tight while we grab the latest from your chart repositories...
# ...Successfully got an update from the "traefik" chart repository
# Update Complete. ⎈Happy Helming!⎈
# confirm
helm repo list
# NAME URL
# traefik https://traefik.github.io/charts
# search for chart within repo
helm search repo traefik
# NAME CHART VERSION APP VERSION DESCRIPTION
# traefik/traefik 38.0.2 v3.6.6 A Traefik based Kubernetes ingress controller
# ...
# search for value of kubernetesGateway: kubernetesGateway.enabled
helm show values traefik/traefik | grep -i -A5 kubernetesGateway:
# kubernetesGateway:
# # -- Enable traefik experimental GatewayClass CRD
# enabled: false
# # -- Enable experimental plugins
# plugins: {}
# # -- Enable experimental local plugins
# --
# kubernetesGateway:
# # -- Enable Traefik Gateway provider for Gateway API
# enabled: false
# # -- Toggles support for the Experimental Channel resources (Gateway API release channels documentation).
# # This option currently enables support for TCPRoute and TLSRoute.
# experimentalChannel: false
# install
helm install traefik traefik/traefik -n traefik --set providers.kubernetesGateway.enabled=true
# NAME: traefik
# LAST DEPLOYED: Tue Jan 20 14:53:30 2026
# NAMESPACE: traefik
# STATUS: deployed
# REVISION: 1
# TEST SUITE: None
# NOTES:
# traefik with docker.io/traefik:v3.6.6 has been deployed successfully on traefik namespace!
# confirm
helm list -n traefik
# NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
# traefik traefik 1 2026-01-20 14:53:30.825103532 -0500 EST deployed traefik-38.0.2 v3.6.6
# confirm values
helm get values traefik -n traefik
# USER-SUPPLIED VALUES:
# providers:
# kubernetesGateway:
# enabled: true
CKA EXAM OBJECTIVE: Use Helm to install cluster components. Task:
cd ~/cka
helm create demo
# explore local dir
cd ~/cka/demo
ls
# find the replica value consumed by templates/deployment.yaml
grep -i replica values.yaml
# replicaCount: 1
# minReplicas: 1
# maxReplicas: 100
# create ns
kubectl create ns battleofhelmsdeep
# namespace/battleofhelmsdeep created
# install with setting parameters
helm install demo ~/cka/demo --set replicaCount=3 -n battleofhelmsdeep
# NAME: demo
# LAST DEPLOYED: Tue Jan 20 15:08:41 2026
# NAMESPACE: battleofhelmsdeep
# STATUS: deployed
# REVISION: 1
helm list -n battleofhelmsdeep
# NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
# demo battleofhelmsdeep 1 2026-01-20 15:08:41.370316625 -0500 EST deployed demo-0.1.0 1.16.0
# confirm
k get deploy -n battleofhelmsdeep
# NAME READY UP-TO-DATE AVAILABLE AGE
# demo 3/3 3 3 31s
One co-worker deployed an nginx helm chart web in the kk-ns namespace on the cluster. A new update is pushed to the helm chart, and the team wants you to update the helm repository to fetch the new changes.
After updating the helm chart, upgrade the helm chart version to 38.0.2.
helm repo add traefik https://traefik.github.io/charts
helm repo update
# find the latest version
helm search repo traefik
# NAME CHART VERSION APP VERSION DESCRIPTION
# traefik/traefik 38.0.2 v3.6.6 A Traefik based Kubernetes ingress controller
# find the previous version
helm search repo traefik --version ^36.0.0
# NAME CHART VERSION APP VERSION DESCRIPTION
# traefik/traefik 36.3.0 v3.4.3 A Traefik based Kubernetes ingress controller
# install previous version
helm install web traefik/traefik --version 36.3.0
# NAME: web
# LAST DEPLOYED: Tue Jan 20 15:19:12 2026
# NAMESPACE: backend
# STATUS: deployed
# REVISION: 1
# TEST SUITE: None
# NOTES:
helm list
# NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
# web backend 1 2026-01-20 15:19:12.161116507 -0500 EST deployed traefik-36.3.0 v3.4.3
# get existing release
helm list
# NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
# web backend 1 2026-01-20 15:19:12.161116507 -0500 EST deployed traefik-36.3.0 v3.4.3
# update traefik repo
helm repo update traefik
# Hang tight while we grab the latest from your chart repositories...
# ...Successfully got an update from the "traefik" chart repository
# Update Complete. ⎈Happy Helming!⎈
# find the latest version
helm search repo traefik
# NAME CHART VERSION APP VERSION DESCRIPTION
# traefik/traefik 38.0.2 v3.6.6 A Traefik based Kubernetes ingress controller
# Upgrade the helm chart to latest
helm upgrade web traefik/traefik --version=38.0.2
# Release "web" has been upgraded. Happy Helming!
# NAME: web
# LAST DEPLOYED: Tue Jan 20 15:23:03 2026
# NAMESPACE: backend
# STATUS: deployed
# REVISION: 2
# TEST SUITE: None
# NOTES:
# confirm
helm ls
# NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
# web backend 2 2026-01-20 15:23:03.64113228 -0500 EST deployed traefik-38.0.2 v3.6.6
upgrade / rollback cheat sheet:
helm upgrade RELEASE CHART --version VERSION
helm history RELEASE
helm rollback RELEASE REVISION
On the cluster, the team has installed multiple helm charts on a different namespace. By mistake, those deployed resources include one of the vulnerable images called kodekloud/webapp-color:v1. Find out the release name and uninstall it.
# check one by one
helm get manifest RELEASE -n NAMESPACE | grep -i webapp-color:v1
# uninstall
helm uninstall RELEASE -n NAMESPACE
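To sweep every release across all namespaces instead of checking one by one, a sketch:
for ns in $(kubectl get ns -o name | cut -d/ -f2); do
  for rel in $(helm list -n "$ns" -q); do
    if helm get manifest "$rel" -n "$ns" | grep -q 'webapp-color:v1'; then
      echo "$rel in $ns uses the vulnerable image"
    fi
  done
done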
One application, webpage-server-01, is currently deployed on the Kubernetes cluster using Helm. A new version of the application is available in a Helm chart located at ~/webpage-server.
Validate this new Helm chart, then install it as a new release named webpage-server-02. After confirming the new release is installed, uninstall the old release webpage-server-01.
helm create ~/webpage-server
# Creating /home/ubuntuadmin/webpage-server
helm install webpage-server-01 ~/webpage-server -n default
# NAME: webpage-server-01
# LAST DEPLOYED: Wed Jan 21 20:19:36 2026
# NAMESPACE: default
# STATUS: deployed
# REVISION: 1
# update value
sed -i 's/repository: nginx/repository: redis/g' ~/webpage-server/values.yaml
sed -i 's/version: 0.1.0/version: 0.2.0/g' ~/webpage-server/Chart.yaml
# validate
helm lint ~/webpage-server
# ==> Linting /home/ubuntuadmin/webpage-server
# [INFO] Chart.yaml: icon is recommended
# 1 chart(s) linted, 0 chart(s) failed
helm list
# NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
# webpage-server-01 default 1 2026-01-21 20:19:36.623440716 -0500 EST deployed webpage-server-0.1.0 1.16.0
helm install webpage-server-02 ~/webpage-server -n default
# NAME: webpage-server-02
# LAST DEPLOYED: Wed Jan 21 20:26:17 2026
# NAMESPACE: default
# STATUS: deployed
# REVISION: 1
# confirm
helm list
# NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
# webpage-server-01 default 1 2026-01-21 20:19:36.623440716 -0500 EST deployed webpage-server-0.1.0 1.16.0
# webpage-server-02 default 1 2026-01-21 20:26:17.429373573 -0500 EST deployed webpage-server-0.2.0 1.16.0
helm uninstall webpage-server-01
# release "webpage-server-01" uninstalled
# confirm
helm list
# NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
# webpage-server-02 default 1 2026-01-21 20:26:17.429373573 -0500 EST deployed webpage-server-0.2.0 1.16.0
You have base manifests for an app in ~/kustomize/base. Use Kustomize to deploy a production variant of this app:
- The production variant should add the label environment: production to all resources.
- It should prefix resource names with prod-.
- It should use nginx image tag 1.21 instead of the base's 1.19.
# base manifest
mkdir -pv ~/kustomize/base
tee ~/kustomize/base/deployment.yaml<<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
  labels:
    app: hello
spec:
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.19
        ports:
        - containerPort: 80
EOF
# create kustomization
tee ~/kustomize/base/kustomization.yaml<<'EOF'
resources:
- deployment.yaml
EOF
Solution
ref: https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/
# create overlay dir
sudo mkdir -pv ~/kustomize/overlay-prod
# mkdir: created directory '~/kustomize/overlay-prod'
# edit yaml
sudo vi ~/kustomize/overlay-prod/kustomization.yaml
# resources:
# - ../base
# namePrefix: prod-
# labels:
# - pairs:
#     environment: production
# images:
# - name: nginx
#   newTag: "1.21"
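# preview the rendered manifests before applying
kubectl kustomize ~/kustomize/overlay-prod/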
# apply kustomize
kubectl apply -k ~/kustomize/overlay-prod/
# deployment.apps/prod-hello-app created
# confirm
kubectl get deploy
# NAME READY UP-TO-DATE AVAILABLE AGE
# prod-hello-app 1/1 1 1 25s
kubectl describe deploy prod-hello-app
# Name: prod-hello-app
# Labels: app=hello
# environment=production
# Pod Template:
# Labels: app=hello
# Containers:
# hello:
# Image: nginx:1.21
Task:
Verify the cert-manager application which has been deployed in the cluster.
Create a list of all tiger Custom Resource Definitions (CRDs) and save it to ~/tiger.yaml. Use kubectl to list the CRDs and keep kubectl's default output format; setting an output format will result in a reduced score.
Using kubectl, extract the documentation for the subject specification field of the Certificate Custom Resource and save it to ~/subject.yaml. You may use any output format that kubectl supports.
# list the tiger-related CRDs
k get crds | grep tiger
# apiservers.operator.tigera.io 2026-01-17T05:05:50Z
# gatewayapis.operator.tigera.io 2026-01-17T05:05:51Z
# goldmanes.operator.tigera.io 2026-01-17T05:05:51Z
# imagesets.operator.tigera.io 2026-01-17T05:05:51Z
# installations.operator.tigera.io 2026-01-17T05:05:51Z
# managementclusterconnections.operator.tigera.io 2026-01-17T05:05:51Z
# tigerastatuses.operator.tigera.io 2026-01-17T05:05:51Z
# whiskers.operator.tigera.io 2026-01-17T05:05:51Z
# output the list
k get crds | grep tiger > ~/tiger.yaml
# confirm
cat ~/tiger.yaml
# apiservers.operator.tigera.io 2026-01-17T05:05:50Z
# gatewayapis.operator.tigera.io 2026-01-17T05:05:51Z
# goldmanes.operator.tigera.io 2026-01-17T05:05:51Z
# imagesets.operator.tigera.io 2026-01-17T05:05:51Z
# installations.operator.tigera.io 2026-01-17T05:05:51Z
# managementclusterconnections.operator.tigera.io 2026-01-17T05:05:51Z
# extract the documentation for certificate.spec.subject (default plaintext output)
kubectl explain certificate.spec.subject > ~/subject.yaml
On controlplane node, identify all CRDs related to VerticalPodAutoscaler and save their names into the file ~/vpa-crds.txt.
k get crds -A | grep -i VerticalPodAutoscaler
# verticalpodautoscalercheckpoints.autoscaling.k8s.io 2026-01-17T22:26:07Z
# verticalpodautoscalers.autoscaling.k8s.io 2026-01-17T22:26:07Z
k get crds -A | grep -i VerticalPodAutoscaler > ~/vpa-crds.txt
# confirm
sudo cat /root/vpa-crds.txt
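Since the task asks only for the names, -o name yields a cleaner file (a sketch):
kubectl get crds -o name | grep -i verticalpodautoscaler > ~/vpa-crds.txt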
Create a new PriorityClass named high-priority for user-workloads with a value that is one less than the highest existing user-defined priority class value.
Patch the existing Deployment busybox-logger running in the priority1 namespace to use the high-priority priority class.
Ensure that the busybox-logger Deployment rolls out successfully with the new priority class set.
It is expected that Pods from other Deployments running in the priority1 namespace are evicted.
Do not modify other Deployments running in the priority1 namespace.
Failure to do so may result in a reduced score.
tee env-setup.yaml<<'EOF'
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: low-user
value: 1000
globalDefault: false
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: medium-user
value: 5000
globalDefault: false
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: top-user
value: 10000
globalDefault: false
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: existing-app
  namespace: priority1
spec:
  replicas: 6
  selector:
    matchLabels:
      app: existing-app
  template:
    metadata:
      labels:
        app: existing-app
    spec:
      priorityClassName: low-user
      containers:
      - name: stress
        image: busybox:1.36
        command: ["sh", "-c", "while true; do echo filler-low; sleep 5; done"]
        resources:
          requests:
            cpu: "250m"
            memory: "128Mi"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-logger
  namespace: priority1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox-logger
  template:
    metadata:
      labels:
        app: busybox-logger
    spec:
      containers:
      - name: busybox
        image: busybox:1.36
        command: ["sh", "-c", "while true; do echo $(date) busybox-logger; sleep 2; done"]
        resources:
          requests:
            cpu: "200m"
            memory: "128Mi"
EOF
kubectl create ns priority1
kubectl apply -f env-setup.yaml
# priorityclass.scheduling.k8s.io/low-user created
# priorityclass.scheduling.k8s.io/medium-user created
# priorityclass.scheduling.k8s.io/top-user created
# deployment.apps/busybox-logger created
# deployment.apps/existing-app created
# collect info
kubectl get priorityclass
# NAME VALUE GLOBAL-DEFAULT AGE PREEMPTIONPOLICY
# low-user 1000 false 16s PreemptLowerPriority
# medium-user 5000 false 16s PreemptLowerPriority
# system-cluster-critical 2000000000 false 3d19h PreemptLowerPriority
# system-node-critical 2000001000 false 3d19h PreemptLowerPriority
# top-user 10000 false 16s PreemptLowerPriority
kubectl get deploy -n priority1
# NAME READY UP-TO-DATE AVAILABLE AGE
# busybox-logger 0/1 1 0 21s
# existing-app 6/6 6 6 21s
# pc.yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 9999
globalDefault: false
kubectl apply -f pc.yaml
# priorityclass.scheduling.k8s.io/high-priority created
# confirm
k get pc --sort-by=.value
# NAME VALUE GLOBAL-DEFAULT AGE PREEMPTIONPOLICY
# low-user 1000 false 2m15s PreemptLowerPriority
# medium-user 5000 false 2m15s PreemptLowerPriority
# high-priority 9999 false 56s PreemptLowerPriority
# top-user 10000 false 2m15s PreemptLowerPriority
# system-cluster-critical 2000000000 false 3d20h PreemptLowerPriority
# system-node-critical 2000001000 false 3d20h PreemptLowerPriority
# confirm the Deployment has no priorityClassName yet
kubectl get deploy busybox-logger -n priority1 -o yaml
# scale down to 0
kubectl scale deploy busybox-logger -n priority1 --replicas=0
# deployment.apps/busybox-logger scaled
kubectl get deploy busybox-logger -n priority1
# NAME READY UP-TO-DATE AVAILABLE AGE
# busybox-logger 0/0 0 0 7m36s
kubectl patch deploy busybox-logger -n priority1 -p '{"spec":{"template":{"spec":{"priorityClassName":"high-priority"}}}}'
# deployment.apps/busybox-logger patched
# confirm
kubectl get deploy busybox-logger -n priority1 -o yaml
# spec:
# template:
# spec:
# priorityClassName: high-priority
# scale out
kubectl scale deploy busybox-logger -n priority1 --replicas=1
# confirm
k get deploy -n priority1
# NAME READY UP-TO-DATE AVAILABLE AGE
# busybox-logger 1/1 1 1 6m32s
# existing-app 5/6 6 5 6m32s
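# double-check that the new pod carries the priority class
k get pod -n priority1 -l app=busybox-logger -o jsonpath='{.items[0].spec.priorityClassName}'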
Create a PriorityClass named low-priority with a value of 50000. A pod named lp-pod exists in the namespace low-priority. Modify the pod to use the priority class you created. Recreate the pod if necessary.
k create ns low-priority
k run lp-pod -n low-priority --image=nginx
# create pc (PriorityClass is cluster-scoped, no namespace needed)
kubectl create priorityclass low-priority --value=50000
# priorityclass.scheduling.k8s.io/low-priority created
k describe priorityclass low-priority
# Name: low-priority
# Value: 50000
# GlobalDefault: false
# PreemptionPolicy: PreemptLowerPriority
# Description:
# Annotations: <none>
# Events: <none>
k edit pod lp-pod -n low-priority
# add: spec.priorityClassName: low-priority
# remove: spec.priority: 0
# a running pod's priority is immutable, so the edit is rejected and saved to a temp file;
# recreate from it (path is printed by kubectl):
k replace --force -f /tmp/kubectl-edit-XXXX.yaml
k get po -n low-priority
# NAME READY STATUS RESTARTS AGE
# lp-pod 1/1 Running 0 19s
# confirm
k describe pod lp-pod -n low-priority | grep -i priority
# Priority: 50000
# Priority Class Name: low-priority
In the namespace limit-test, enforce default resource limits and requests for containers:
- If a container has no CPU/memory requests/limits, assign a default request of 100m CPU and 50Mi memory, and a default limit of 200m CPU and 100Mi memory.
- Prevent any container in this namespace from requesting more than 500Mi memory.
kubectl create ns limit-test
Solution
ref: https://kubernetes.io/docs/concepts/policy/limit-range/
# task-limitrange.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-constraint
  namespace: limit-test
spec:
  limits:
  - type: Container
    default:
      cpu: 200m
      memory: 100Mi
    defaultRequest:
      cpu: 100m
      memory: 50Mi
    max:
      memory: 500Mi
kubectl apply -f task-limitrange.yaml
# limitrange/resource-constraint created
kubectl describe ns limit-test
# Name: limit-test
# Labels: kubernetes.io/metadata.name=limit-test
# Annotations: <none>
# Status: Active
# No resource quota.
# Resource Limits
# Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
# ---- -------- --- --- --------------- ------------- -----------------------
# Container cpu - - 100m 200m -
# Container memory - 500Mi 50Mi 100Mi -
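To verify the defaults get injected, launch a pod with no resources set (pod name is arbitrary):
kubectl run lr-test --image=nginx -n limit-test
kubectl get pod lr-test -n limit-test -o jsonpath='{.spec.containers[0].resources}'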
CKA EXAM OBJECTIVE: Monitor cluster and application resource usage. Task:
k create ns integration
k run intensive --image=busybox -l app=intensive -n integration -- sleep infinity
k top pod -n integration -l app=intensive --sort-by='cpu'
Set up the environment: [candidate@node-1] $ kubectl config use-context k8s
Task: Via the pod label name=cpu-loader, find the pod with high CPU consumption at runtime, and write the name of the pod with the highest CPU consumption to the file /tmp/cka/cpu-loader.txt (which already exists).
k create deploy cpu-loader --image=busybox --replicas=4 -- sleep infinity
k label pod -l app=cpu-loader name=cpu-loader
k get pod -l name=cpu-loader
# NAME READY STATUS RESTARTS AGE
# cpu-loader-546f8f548c-jc4wf 1/1 Running 0 110s
kubectl top pod -l name=cpu-loader --sort-by=cpu -A
# NAMESPACE NAME CPU(cores) MEMORY(bytes)
# default cpu-loader-546f8f548c-kmzzf 0m 0Mi
# default cpu-loader-546f8f548c-pmn87 0m 0Mi
# default cpu-loader-546f8f548c-wbfsx 0m 0Mi
# default cpu-loader-546f8f548c-xxb7x 0m 0Mi
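Then write the busiest pod's name into the pre-existing file (awk grabs the NAME column of the first data row):
kubectl top pod -l name=cpu-loader -A --sort-by=cpu | awk 'NR==2{print $2}' > /tmp/cka/cpu-loader.txt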
Task
A WordPress application with 3 replicas runs in the relative-fawn namespace on a node with these resources: cpu: 1, memory: 2015360Ki.
Adjust all Pod resource requests as follows: divide the node's resources evenly across all 3 pods; give each Pod a fair share of CPU and memory; leave enough overhead to keep the node stable. Use the exact same requests for both containers and init containers.
You are not required to change any resource limits. It may help to temporarily scale the WordPress Deployment to 0 replicas while updating the resource requests.
kubectl label node node02 wpnode=true
kubectl create ns relative-fawn
tee env-deploy.yaml<<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  namespace: relative-fawn
spec:
  replicas: 3
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      nodeSelector:
        wpnode: "true"
      initContainers:
      - name: init-container
        image: busybox
        command: ["sh", "-c", "sleep 30"]
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
kubectl apply -f env-deploy.yaml
# find the pods and the node they run on
kubectl -n relative-fawn get pod -o wide
# NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
# wordpress-6b77fdbd49-6gd4q 1/1 Running 0 5m19s 10.244.2.22 node02 <none> <none>
# wordpress-6b77fdbd49-8d2qh 1/1 Running 0 5m19s 10.244.2.23 node02 <none> <none>
# wordpress-6b77fdbd49-hkw7z 1/1 Running 0 5m19s 10.244.2.24 node02 <none> <none>
# get the resource
kubectl describe node node02
# Allocatable:
# cpu: 1
# memory: 1863088Ki
# Allocated resources:
# Resource Requests Limits
# -------- -------- ------
# cpu 400m (40%) 400m (40%)
# memory 1060Mi (58%) 1670Mi (91%)
# scale down
kubectl scale deploy wordpress -n relative-fawn --replicas=0
k get deploy -n relative-fawn
# NAME READY UP-TO-DATE AVAILABLE AGE
# wordpress 0/0 0 0 13m
# update resource
k edit deploy wordpress -n relative-fawn
# confirm
k get deploy wordpress -n relative-fawn -o yaml
#       containers:
#       - name: nginx
#         resources:
#           requests:
#             cpu: 250m
#             memory: 800Mi
#       initContainers:
#       - name: init-container
#         resources:
#           requests:
#             cpu: 250m
#             memory: 800Mi
# scale out
kubectl scale deploy wordpress -n relative-fawn --replicas=3
# confirm
k get deploy wordpress -n relative-fawn
# NAME READY UP-TO-DATE AVAILABLE AGE
# wordpress 3/3 3 3 9m35s
# tip: resources can also be set from the CLI, e.g.:
kubectl set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi