
Kubernetes

Awesome Ref

kubectl

Network

CPU limit

Volume, PV, PVC, StorageClass

  • Day 13 -【Storage】:Volume 的三種基本應用 --- emptyDir、hostPath、configMap & secret
    • emptyDir is simply an empty directory; when the Pod is deleted, the data in the emptyDir is deleted with it, so its purpose is not to persist data but to let multiple containers in a Pod share data (see the Pod sketch after this list)
      • "Using emptyDir to share data between a Pod's containers" is a common pattern
    • hostPath mounts a specified directory on the Node into the Pod for its containers to access
      • "host" here means the Node that runs the Pod
      • Note that the specified hostPath does not necessarily exist on every Node; if the scheduler places the Pod on another Node, the data becomes unreadable.
      • Therefore, hostPath is usually only used for testing in a single-node cluster
  • Day 14 -【Storage】: PV、PVC & StorageClass
    • When a developer's Pod needs storage, the developer only states the requirement (e.g. "how many GiB") and leaves providing the actual storage to the K8s administrator; this reduces the developer's burden and decouples the storage lifecycle from the Pod lifecycle (see the PV/PVC sketch after this list)
      • Persistent Volume (PV): storage with an independent lifecycle.
      • Persistent Volume Claim (PVC): a request for storage.
    • The K8s administrator's job is to provide a matching PV whenever a PVC is submitted. Providing PVs is called "provisioning", and it comes in two modes:
      • Static: the administrator defines and creates PVs manually, e.g. with YAML manifests
      • Dynamic: the administrator configures a "StorageClass", which then creates PVs automatically
    • After a PVC is created, K8s automatically looks for a PV that satisfies it; if no matching PV exists, the PVC stays in Pending until one is created
    • PV and PVC have a one-to-one relationship: once a PV is bound to a PVC, it cannot be used by another PVC while the binding lasts
    • hostPath - HostPath volume (for single node testing only; WILL NOT WORK in a multi-node cluster; consider using local volume instead)
    • The full effect of a StorageClass: the user only creates a PVC, and the StorageClass takes care of creating and deleting PVs
  • Day26 了解 K8S 的 Volumes & StorageClass
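
A minimal sketch of the emptyDir sharing pattern described above; the Pod name, images and mount path are illustrative assumptions:

# emptydir-demo.yaml (illustrative): two containers in one Pod share /shared through an emptyDir volume
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "while true; do date >> /shared/log.txt; sleep 5; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /shared
  - name: reader
    image: busybox
    command: ["sh", "-c", "tail -F /shared/log.txt"]
    volumeMounts:
    - name: shared-data
      mountPath: /shared
  volumes:
  - name: shared-data
    emptyDir: {}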
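
And a minimal static-provisioning sketch for the PV/PVC flow above; sizes, paths and names are assumptions, not taken from the referenced articles:

# pv-pvc-demo.yaml (illustrative): a hostPath PV, a PVC that binds to it, and a Pod that uses the claim
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/demo-pv   # single-node testing only, as noted above
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: demo-pvc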

NFS provisioner

  • Kubernetes-note/on-prem/nfs.md at master · michaelchen1225/Kubernetes-note
    helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
    helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=192.168.122.222 --set nfs.path=/srv/nfs-share --set storageClass.name=nfs-storage
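
Once the provisioner is installed, creating a PVC that references the nfs-storage class (the name set by --set storageClass.name above) is enough to get a PV created dynamically; a minimal sketch with an assumed PVC name:

# nfs-demo-pvc.yaml (illustrative): dynamic provisioning through the nfs-storage StorageClass
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-demo-pvc
spec:
  storageClassName: nfs-storage
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi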
    

cloud-native storage orchestrator

GPU

RBAC

A Role defines a role within a specific namespace, while a ClusterRole is a cluster-wide role. In addition to everything a Role can grant, a ClusterRole, being cluster-scoped, can also grant access to:

  • cluster-scoped resources, e.g. Nodes
  • non-resource endpoints, e.g. /healthz
  • resources across all namespaces, e.g. kubectl get pods --all-namespaces
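
A minimal ClusterRole/ClusterRoleBinding sketch covering the cases above; the role, binding and ServiceAccount names are illustrative assumptions:

# clusterrole-demo.yaml (illustrative): read access to Nodes, to Pods in all namespaces, and to /healthz
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-and-pod-reader
rules:
- apiGroups: [""]
  resources: ["nodes", "pods"]
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/healthz"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-and-pod-reader-binding
subjects:
- kind: ServiceAccount
  name: demo-sa
  namespace: default
roleRef:
  kind: ClusterRole
  name: node-and-pod-reader
  apiGroup: rbac.authorization.k8s.io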

Resource Quota

Deployment

Use KVM + Cockpit + kubespray or kubeadm

KVM

# KVM
sudo apt-get -y install bridge-utils cpu-checker libvirt-clients libvirt-daemon libvirt-daemon-system qemu qemu-kvm
sudo kvm-ok
lsmod | grep kvm
modinfo kvm
modinfo kvm_amd
systemctl status libvirtd

Cockpit

# Web Console for VM
sudo apt-get install cockpit cockpit-machines
sudo systemctl start cockpit
sudo systemctl status cockpit
sudo ufw allow 9090/tcp
sudo ufw reload

Launch the Ubuntu Guest OS

ssh-keygen -t ed25519 -C "KVM VM Instance" -f ~/.ssh/kvm-vm-instance

# Launch the Ubuntu Guest OS
wget -O /var/lib/libvirt/images/jammy-server-cloudimg-amd64.img https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img

cat <<EOF >> /root/meta-data
# meta-data
instance-id: k8s-ubuntu-vm
local-hostname: k8s-ubuntu-vm
EOF

# ubuntu/foo@123
# note: the shell would expand the $ characters in the hashed password as variables, so quote the heredoc delimiter: <<'EOF'

cat <<'EOF' >> /root/user-data
#cloud-config
ssh_pwauth: true
users:
  - name: root
    lock_passwd: false
    hashed_passwd: $6$P2LlhzIYWXC4XyCa$JzSWM6UBQ3BNLtQXO2jKTIhkBQyNl8DuhJ6tx8kovCtiick0mXMWE6z12HGBhTzAPW0EpDglWn7j.W9XZoaBl0
  - name: ubuntu
    lock_passwd: false
    hashed_passwd: $6$P2LlhzIYWXC4XyCa$JzSWM6UBQ3BNLtQXO2jKTIhkBQyNl8DuhJ6tx8kovCtiick0mXMWE6z12HGBhTzAPW0EpDglWn7j.W9XZoaBl0
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    #ssh_authorized_keys:
    #  - ssh-ed25519 AAAAC3NzaC1lZDI1NT... your-public-key-here
EOF

qemu-img create -F qcow2 -b /var/lib/libvirt/images/jammy-server-cloudimg-amd64.img -f qcow2 /var/lib/libvirt/images/k8s-ubuntu-vm-worker-2.img 30G

virt-install --name k8s-worker-2 --ram 4096 --vcpus 4 --import --disk path=/var/lib/libvirt/images/k8s-ubuntu-vm-worker-2.img,format=qcow2 --cloud-init root-password-generate=on,disable=on,meta-data=/root/meta-data,user-data=/root/user-data --os-variant ubuntu22.04 --network bridge=virbr0 --graphics vnc,listen=0.0.0.0 --console pty,target_type=serial --noautoconsole

(optional): if a bridge interface br0 is configured in this environment

# create a separate disk image for the control-plane VM first
qemu-img create -F qcow2 -b /var/lib/libvirt/images/jammy-server-cloudimg-amd64.img -f qcow2 /var/lib/libvirt/images/k8s-ubuntu-vm-control-1.img 30G

virt-install --name k8s-control-1 --ram 4096 --vcpus 4 --import --disk path=/var/lib/libvirt/images/k8s-ubuntu-vm-control-1.img,format=qcow2 --cloud-init root-password-generate=on,disable=on,meta-data=/root/meta-data,user-data=/root/user-data --os-variant ubuntu22.04 --network bridge=br0 --network bridge=virbr0 --graphics vnc,listen=0.0.0.0 --console pty,target_type=serial --noautoconsole

(optional): prepare key access

# prepare key access
ssh-keygen -t ed25519 -C "KVM VM Instance" -f ~/.ssh/kvm-vm-instance
ssh-copy-id -i ~/.ssh/kvm-vm-instance.pub ubuntu@192.168.122.131
ssh-copy-id -i ~/.ssh/kvm-vm-instance.pub ubuntu@192.168.122.200
ssh-copy-id -i ~/.ssh/kvm-vm-instance.pub ubuntu@192.168.122.188

# test
ssh -i ~/.ssh/kvm-vm-instance ubuntu@192.168.122.188

(scenario 1) kubeadm - Day 03 -【Basic Concept】:建立 Kubeadm Cluster + Bonus Tips

(optional) to use a legacy k8s version, e.g. 1.25.x

echo 'deb [trusted=yes] https://pkgs.k8s.io/core:/stable:/v1.25/deb/ /'  | sudo tee /etc/apt/sources.list.d/kubernetes.list

(scenario 2) kubespray - Day 06 使用 Kubespray 建立自己的 K8S(一) - iT 邦幫忙 - Day 07 使用 Kubespray 建立自己的 K8S(二) - iT 邦幫忙

apt install -y python3-venv
git clone --depth 1 --branch v2.28.0 https://github.com/kubernetes-sigs/kubespray.git
# (optional) for legacy kubernetes which kubespray supports
# git clone --depth 1 --branch v2.23.2 https://github.com/kubernetes-sigs/kubespray.git kubespray-2.23
python3 -m venv kubespray-venv
source kubespray-venv/bin/activate
cd kubespray
pip install -U -r requirements.txt
cp -rfp inventory/sample inventory/mycluster

kubespray/inventory/mycluster/inventory.ini

[kube_control_plane]
k8s-ubuntu-vm-control-1 ansible_host=192.168.122.188 ansible_user=ubuntu

[etcd:children]
kube_control_plane

[kube_node]
k8s-ubuntu-vm-worker-1 ansible_host=192.168.122.200 ansible_user=ubuntu
k8s-ubuntu-vm-worker-2 ansible_host=192.168.122.131 ansible_user=ubuntu

Use kubespray to manage the K8s cluster

# deploy
ansible-playbook -i inventory/mycluster/inventory.ini --private-key=~/.ssh/kvm-vm-instance --become --become-user=root cluster.yml

# (optional) deploy a specific version, e.g. 1.25.6
# kubespray 2.23.2 is required here because 2.24.0 dropped support for Kubernetes 1.25.x
ansible-playbook -i inventory/mycluster/inventory.ini --private-key=~/.ssh/kvm-vm-instance --become --become-user=root cluster.yml -e kube_version=v1.25.6

# add the nodes
ansible-playbook -i inventory/mycluster/inventory.ini --private-key=~/.ssh/kvm-vm-instance --become --become-user=root scale.yml

# remove the nodes
ansible-playbook -i inventory/mycluster/inventory.ini --private-key=~/.ssh/kvm-vm-instance --become --become-user=root remove-node.yml --extra-vars "node=k8s-ubuntu-vm-worker-3"

# install apps such as the dashboard and helm
ansible-playbook -i inventory/mycluster/inventory.ini --private-key=~/.ssh/kvm-vm-instance --become --become-user=root cluster.yml --tags=apps

Dashboard

Dashboard with a service account and NodePort. In newer Kubernetes versions (1.24+), secrets for service accounts aren't created automatically.

kubectl create serviceaccount dashboard-admin-sa
kubectl create clusterrolebinding dashboard-admin-sa --clusterrole=cluster-admin --serviceaccount=default:dashboard-admin-sa
kubectl create token dashboard-admin-sa
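
kubectl create token issues a short-lived token. If a long-lived one is preferred, a service-account token Secret can be created manually using the standard kubernetes.io/service-account-token mechanism (the Secret name below is an assumption):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: dashboard-admin-sa-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: dashboard-admin-sa
type: kubernetes.io/service-account-token
EOF

# read the token back
kubectl get secret dashboard-admin-sa-token -o jsonpath='{.data.token}' | base64 -d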

dashboard quick test and skip login

kubectl edit deployment kubernetes-dashboard -n kube-system

      containers:
      - args:
        - --namespace=kube-system
        - --auto-generate-certificates
        - --enable-skip-login
        image: docker.io/kubernetesui/dashboard:v2.7.0

kubectl proxy --address='0.0.0.0' --accept-hosts='.*'
# http://172.19.30.115:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#/login

management tools

kubectl

# version v1.32.5
curl -LO "https://dl.k8s.io/release/v1.32.5/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/v1.32.5/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

source <(kubectl completion bash) # set up autocomplete in bash into the current shell, bash-completion package should be installed first.
echo "source <(kubectl completion bash)" >> ~/.bashrc # add autocomplete permanently to your bash shell.

helm

# Method 1
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Method 2
curl -LO https://mirror.openshift.com/pub/openshift-v4/clients/helm/3.18.4/helm-linux-amd64
sudo install -o root -g root -m 0755 helm-linux-amd64 /usr/local/bin/helm

minikube

kind

Rancher

kubespray

Helm

helm version
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo list
helm install my-nginx bitnami/nginx
# helm upgrade -n poc-scope --install  my-nginx bitnami/nginx
helm list -A
helm list -n poc-scope
helm status my-nginx
helm history my-nginx
helm uninstall my-nginx

To override settings in the chart's default values.yaml, either pass your own file with -f or override individual values with --set.
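
For example, reusing the bitnami/nginx chart from above (the value keys shown are common chart values and may differ per chart):

# inspect the chart's default values first
helm show values bitnami/nginx > custom-values.yaml
# override with a values file
helm install my-nginx bitnami/nginx -f custom-values.yaml
# or override individual values on the command line
helm install my-nginx bitnami/nginx --set service.type=NodePort --set replicaCount=2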

Operator

KubeVirt

Kubeflow

KubeRay

kueue

Yunikorn

Volcano

hkube

MetalLB

install with helm

helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb -n metallb-system --create-namespace

configure the IP address pool: metallb-ipaddresspool.yaml

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool # name of the IP pool
  namespace: metallb-system # namespace
spec:
  addresses:
  - 192.168.122.201-192.168.122.209

configure L2 mode advertisement: l2-advertisement.yaml

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec: # leaving spec empty would apply the advertisement to every IPAddressPool
  ipAddressPools:
  - first-pool # advertise the addresses of first-pool

Done. Services of type: LoadBalancer can now get an external IP from the pool, as in the sketch below.
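
A quick check sketch (deployment, service and label names are illustrative): a LoadBalancer Service selecting an nginx Deployment should receive an EXTERNAL-IP from 192.168.122.201-209:

kubectl create deployment nginx --image=nginx

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer
  selector:
    app: nginx   # kubectl create deployment labels the pods app=nginx
  ports:
  - port: 80
    targetPort: 80
EOF

kubectl get svc nginx-lb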

Nginx Ingress

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace
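
A minimal Ingress sketch routing to a backend Service (the host and Service name are assumptions; the chart creates an IngressClass named nginx by default):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-nginx
            port:
              number: 80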

Cert Manager

kube-prometheus-stack

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm search repo prometheus-community

helm show values prometheus-community/kube-prometheus-stack --version 75.12.0 > values.yaml
helm install -f values.yaml prometheus-stack prometheus-community/kube-prometheus-stack -n prometheus-stack --version 75.12.0 --create-namespace
kubectl --namespace prometheus-stack get secrets prometheus-stack-grafana -o jsonpath="{.data.admin-password}" | base64 -d ; echo

# kubectl port-forward service/prometheus-stack-prometheus 9090:9090 -n prometheus-stack

# kubectl port-forward service/prometheus-stack-grafana 3000:80 -n prometheus-stack

loki

helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm search repo grafana

helm show values grafana/loki-stack --version 2.10.2 > loki-stack-values.yaml
helm install -f loki-stack-values.yaml loki grafana/loki-stack -n loki-stack --version 2.10.2 --create-namespace
helm uninstall loki -n loki-stack

harbor

helm repo add harbor https://helm.goharbor.io
helm repo update
helm search repo harbor --versions
helm show values harbor/harbor --version 1.11.4 > harbor-values.yaml
# type, harborAdminPassword
helm install harbor harbor/harbor --version 1.11.4 -f harbor-values.yaml -n harbor-registry  --create-namespace
kubectl get secrets harbor-core -n harbor-registry -o jsonpath='{.data.tls\.crt}' | base64 -d > harbor.ca

CKA

kubectl run multi-container-pod --image=nginx --dry-run=client -o yaml > sidecar.yaml
kubectl create deployment --image=nginx nginx --replicas=4 --dry-run=client -o yaml > nginx-deployment.yaml
# Create a Service named redis-service of type ClusterIP to expose pod redis on port 6379
# (This will automatically use the pod's labels as selectors)
kubectl expose pod redis --port=6379 --name redis-service --dry-run=client -o yaml
# Create a Service named nginx-service of type NodePort to expose pod nginx's port 80 on node port 30080
# (kubectl expose cannot set the nodePort; add nodePort: 30080 to the generated YAML before applying)
kubectl expose pod nginx --type=NodePort --port=80 --name=nginx-service --dry-run=client -o yaml

kubectl get pod <pod-name> -o yaml > pod-dump.yaml
kubectl create clusterrole --help
kubectl api-resources
kubectl run debug --image=curlimages/curl -it --rm -- sh
kubectl expose pod messaging --port=6379 --name messaging-service
helm list
helm list -A
helm get manifest <release_name> -n <namespace>
helm repo list
helm repo update
helm search repo <foo>
helm search repo <foo> --versions
helm upgrade <foo> <bar> --version <version>
helm uninstall <release_name> -n <namespace>

helm lint ./new-version
helm install webpage-server-02 ./new-version

keywords for searching the official documentation - CKA-note/附錄/CKA 常用的官方文件.md at main · michaelchen1225/CKA-note

sidecar container
downward API, fieldRef
kubectl expose

TBD

crd
hpa
vpa
k8s gateway
helm
kustomize
kubeadm
rolling update

Guacamole

Operations

CNI