Kubernetes
- Recommended read: How Kubernetes And Kafka Will Get You Fired - MyApollo
- CNCF Certified Kubernetes Administrator (CKA) 證照心得 - Jasper Sui | Home
- 從題目中學習k8s :: 第 12 屆 iThome 鐵人賽
- Head-first k8s
- Kubernetes 簡介 - Huan-Lin 學習筆記
- Ivan on Containers, Kubernetes, and Backend Development
- 改進容器化部署 - 成功導入 K8S 的經驗與挑戰.pdf - Google 雲端硬碟
- GitHub - Mozart4242/kubernetes-real-world: This is your kubernetes dream that WILL come true, a Real-World Kubernetes cluster with needed production enterprise services.
- 解決K8s網路定址缺陷 Antrea Egress從頭學(一) | 網管人
- Kubernetes 部署在虛機好還是裸機好? - 魂系架構 Phil's Workspace
- Kubernetes 概觀 - iT 邦幫忙
- Automate All the Boring Kubernetes Operations with Python | Martin Heinz | Personal Website & Blog
- 从 Helm 到 Operator:Kubernetes应用管理的进化 | crossoverJie's Blog
- 如何 Debug Kubernetes Pod - Yowko's Notes
- The Containerization Tech Stack | Medium
- Day 20- Kubernetes 中的命名空間與資源隔離 - iT 邦幫忙
- Test
- log
- Dashboard/UI
- VMWare Tanzu
- monitor
- HPC
- Orchestrating Kubernetes Clusters on HPC Infrastructure - Elia Oggian - YouTube
- The Pros and Cons of Kubernetes for HPC - HPCwire
- The Convergence of HPC, AI and Cloud
- Kubernetes and Batch
- Is there anything like SLURM for k8s/eks (machine learning GPU workloads) : r/kubernetes
- Introducing SUNK: A Slurm on Kubernetes Implementation for HPC and Large Scale AI — CoreWeave
Awesome Ref
- :star:從Software Developer的角度一起認識 Kubernetes :: 2023 iThome 鐵人賽
- :star:從Software Developer的角度一起認識 Kubernetes (二) :: 2024 iThome 鐵人賽
- Albert Weng for K8S
- :star:關於我怎麼把一年內學到的新手 IT/SRE 濃縮到 30 天筆記這檔事 :: 2022 iThome 鐵人賽
- Kubernetes 基礎教學(一)原理介紹 | Cheng-Wei Hu
- Kubernetes 基礎教學(二)實作範例:Pod、Service、Deployment、Ingress | Cheng-Wei Hu
- Kubernetes 基礎教學(三)Helm 介紹與建立 Chart | Cheng-Wei Hu
- cookbooks/kubernetes at master · thedatabaseme/cookbooks
- CKA
kubectl
Network
- ClusterIP vs NodePort vs LoadBalancer: Key Differences & When to Use
- In Kubernetes, the ClusterIP Service is used for Pod-to-Pod communication within the same cluster.
- This means that a client running outside of the cluster, such as a user accessing an application over the internet, cannot directly reach a ClusterIP Service.
- The NodePort Service provides a way to expose your application to external clients, i.e. anyone trying to access it from outside the Kubernetes cluster.
- It does this by opening the port you choose (in the range 30000-32767) on all worker nodes in the cluster.
- One disadvantage of the NodePort Service is that it doesn't do any load balancing across nodes (see the example Services after the links below).
- 理解 K8s 如何處理不同類型 service 的封包來源 IP - HackMD
- How does Ingress Nginx Controller work? - HackMD
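A minimal sketch contrasting the two Service types; the selector label (app: demo-app) and the port numbers are assumptions for illustration:
apiVersion: v1
kind: Service
metadata:
  name: demo-clusterip           # reachable only from inside the cluster
spec:
  type: ClusterIP
  selector:
    app: demo-app                # assumed Pod label
  ports:
  - port: 80                     # Service port
    targetPort: 8080             # container port
---
apiVersion: v1
kind: Service
metadata:
  name: demo-nodeport            # reachable at <any-node-ip>:30080
spec:
  type: NodePort
  selector:
    app: demo-app
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080              # must fall within 30000-32767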
CPU limit
Volume, PV, PVC, StorageClass
- Day 13 -【Storage】:Volume 的三種基本應用 --- emptyDir、hostPath、configMap & secret
- emptyDir is simply an empty directory; when the Pod is deleted, the data in the emptyDir is deleted with it. Its purpose is not to persist data but to let multiple containers in a Pod share data.
- "Using emptyDir to share data between containers in a Pod" is a common pattern (sketch below).
- hostPath mounts a directory on the Node into the Pod for its containers to access.
- The "host" here means the Node the Pod runs on.
- Note that the specified hostPath does not necessarily exist on every Node; if the scheduler places the Pod on another Node, the data becomes unreadable.
- For this reason, hostPath is usually only used for testing in single-node clusters.
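A minimal sketch of the emptyDir sharing pattern above; the image and commands are assumptions:
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  volumes:
  - name: shared
    emptyDir: {}                 # deleted together with the Pod
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "while true; do date >> /data/out.log; sleep 5; done"]
    volumeMounts:
    - name: shared
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "touch /data/out.log && tail -f /data/out.log"]
    volumeMounts:
    - name: shared
      mountPath: /data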
- Day 14 -【Storage】: PV、PVC & StorageClass
- When a developer's Pod needs storage, the developer only states the requirement (e.g. "how many GiB") and leaves providing the actual storage to the k8s administrator. This reduces the developer's burden and decouples the storage lifecycle from the Pod lifecycle.
- Persistent Volume (PV): storage with an independent lifecycle.
- Persistent Volume Claim (PVC): a request for storage.
- The K8s administrator's job is to provide a matching PV whenever a PVC is submitted. Providing a PV is called "provisioning", and comes in two modes:
- Static: the administrator defines and creates the PV by hand, e.g. with a yaml manifest.
- Dynamic: the administrator configures a "StorageClass", and the StorageClass then creates PVs automatically.
- After a PVC is created, K8s automatically looks for a PV that satisfies it; if no matching PV exists, the PVC stays Pending until one is created.
- PV and PVC are bound one-to-one; once a PV is bound to a PVC, it cannot be used by any other PVC for the duration of the binding.
- hostPath - HostPath volume (for single node testing only; WILL NOT WORK in a multi-node cluster; consider using local volume instead)
- The full effect of a StorageClass: the user only creates a PVC, and the StorageClass takes care of creating and deleting the PVs (see the PV/PVC sketch after the links below).
- Day26 了解 K8S 的 Volumes & StorageClass
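A minimal static-provisioning sketch (PV + PVC); the names, capacity and hostPath are assumptions:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:                      # single-node testing only (see note above)
    path: /mnt/pv-demo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi               # stays Pending until a matching PV exists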
NFS provisioner
- Kubernetes-note/on-prem/nfs.md at master · michaelchen1225/Kubernetes-note
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=192.168.122.222 \
  --set nfs.path=/srv/nfs-share \
  --set storageClass.name=nfs-storage
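With the provisioner in place, creating a PVC against the nfs-storage class is enough; the claim name and size are assumptions:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-demo
spec:
  storageClassName: nfs-storage  # class created by the chart above
  accessModes:
  - ReadWriteMany                # NFS allows shared read-write
  resources:
    requests:
      storage: 1Gi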
cloud-native storage orchestrator
GPU
RBAC
- Using RBAC Authorization | Kubernetes
- Kubernetes RBAC Overview:賦予安全與彈性的管理|方格子 vocus
- Day 19 - 老闆!我可以做什麼:RBAC - iT 邦幫忙
- 【從題目中學習k8s】-【Day20】第十二題 - RBAC - iT 邦幫忙
- How to bind a Kubernetes Role to a user or group | LabEx
- Limiting access to Kubernetes resources with RBAC
A Role defines a role within a specific namespace, while a ClusterRole is a cluster-wide role. Besides everything a Role can grant, a ClusterRole can additionally grant access to (see the Role/RoleBinding sketch below):
- cluster-scoped resources, e.g. Nodes
- non-resource endpoints, e.g. /healthz
- resources across all namespaces, e.g. kubectl get pods --all-namespaces
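A minimal namespaced sketch; the namespace (dev) and user (alice) are assumptions:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev                 # Roles are namespaced
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: alice                    # assumed user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io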
Resource Quota
Deployment
Use KVM + Cockpit + kubespray or kubeadm
KVM
# KVM
sudo apt-get -y install bridge-utils cpu-checker libvirt-clients libvirt-daemon libvirt-daemon-system qemu qemu-kvm
sudo kvm-ok
lsmod | grep kvm
modinfo kvm
modinfo kvm_amd
systemctl status libvirtd
Cockpit
# Web Console for VM
sudo apt-get install cockpit cockpit-machines
sudo systemctl start cockpit
sudo systemctl status cockpit
sudo ufw allow 9090/tcp
sudo ufw reload
Launch the Ubuntu Guest OS
# Launch the Ubuntu Guest OS
wget -O /var/lib/libvirt/images/jammy-server-cloudimg-amd64.img https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
cat <<EOF >> /root/meta-data
# meta-data
instance-id: k8s-ubuntu-vm
local-hostname: k8s-ubuntu-vm
EOF
# login: ubuntu / foo@123 (password for both accounts below)
# note: the shell would interpret the $ characters in the hashed password as variable expansions, so quote the heredoc delimiter: <<'EOF'
cat <<'EOF' >> /root/user-data
#cloud-config
ssh_pwauth: true
users:
  - name: root
    lock_passwd: false
    hashed_passwd: $6$P2LlhzIYWXC4XyCa$JzSWM6UBQ3BNLtQXO2jKTIhkBQyNl8DuhJ6tx8kovCtiick0mXMWE6z12HGBhTzAPW0EpDglWn7j.W9XZoaBl0
  - name: ubuntu
    lock_passwd: false
    hashed_passwd: $6$P2LlhzIYWXC4XyCa$JzSWM6UBQ3BNLtQXO2jKTIhkBQyNl8DuhJ6tx8kovCtiick0mXMWE6z12HGBhTzAPW0EpDglWn7j.W9XZoaBl0
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    #ssh_authorized_keys:
    #  - ssh-ed25519 AAAAC3NzaC1lZDI1NT... your-public-key-here
EOF
qemu-img create -F qcow2 -b /var/lib/libvirt/images/jammy-server-cloudimg-amd64.img -f qcow2 /var/lib/libvirt/images/k8s-ubuntu-vm-worker-2.img 30G
virt-install --name k8s-worker-2 --ram 4096 --vcpus 4 --import --disk path=/var/lib/libvirt/images/k8s-ubuntu-vm-worker-2.img,format=qcow2 --cloud-init root-password-generate=on,disable=on,meta-data=/root/meta-data,user-data=/root/user-data --os-variant ubuntu22.04 --network bridge=virbr0 --graphics vnc,listen=0.0.0.0 --console pty,target_type=serial --noautoconsole
(optional): if a bridge interface br0 is configured in this environment (create a separate overlay disk for the control node first, as above)
virt-install --name k8s-control-1 --ram 4096 --vcpus 4 --import --disk path=/var/lib/libvirt/images/k8s-ubuntu-vm-control-1.img,format=qcow2 --cloud-init root-password-generate=on,disable=on,meta-data=/root/meta-data,user-data=/root/user-data --os-variant ubuntu22.04 --network bridge=br0 --network bridge=virbr0 --graphics vnc,listen=0.0.0.0 --console pty,target_type=serial --noautoconsole
(optional): prepare key access
# prepare key access
ssh-keygen -t ed25519 -C "KVM VM Instance" -f ~/.ssh/kvm-vm-instance
ssh-copy-id -i ~/.ssh/kvm-vm-instance.pub ubuntu@192.168.122.131
ssh-copy-id -i ~/.ssh/kvm-vm-instance.pub ubuntu@192.168.122.200
ssh-copy-id -i ~/.ssh/kvm-vm-instance.pub ubuntu@192.168.122.188
# test
ssh -i ~/.ssh/kvm-vm-instance ubuntu@192.168.122.188
(scenario 1) kubeadm - Day 03 -【Basic Concept】:建立 Kubeadm Cluster + Bonus Tips
(optional) to use a legacy k8s version, e.g. 1.25.x
echo 'deb [trusted=yes] https://pkgs.k8s.io/core:/stable:/v1.25/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
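A rough kubeadm flow on the control-plane node (a sketch, assuming containerd and the kubeadm/kubelet/kubectl packages are already installed; the pod CIDR and the Flannel CNI are assumptions, any CNI works):
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
# then run the "kubeadm join ..." command printed by kubeadm init on each worker node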
(scenario 2) kubespray - Day 06 使用 Kubespray 建立自己的 K8S(一) - iT 邦幫忙 - Day 07 使用 Kubespray 建立自己的 K8S(二) - iT 邦幫忙
apt install -y python3-venv
git clone --depth 1 --branch v2.28.0 https://github.com/kubernetes-sigs/kubespray.git
# (optional) for a legacy kubernetes version that kubespray still supports
# git clone --depth 1 --branch v2.23.2 https://github.com/kubernetes-sigs/kubespray.git kubespray-2.23
python3 -m venv kubespray-venv
source kubespray-venv/bin/activate
cd kubespray
pip install -U -r requirements.txt
cp -rfp inventory/sample inventory/mycluster
edit kubespray/inventory/mycluster/inventory.ini:
[kube_control_plane]
k8s-ubuntu-vm-control-1 ansible_host=192.168.122.188 ansible_user=ubuntu
[etcd:children]
kube_control_plane
[kube_node]
k8s-ubuntu-vm-worker-1 ansible_host=192.168.122.200 ansible_user=ubuntu
k8s-ubuntu-vm-worker-2 ansible_host=192.168.122.131 ansible_user=ubuntu
Use kubespray to manage the K8S cluster
# deploy
ansible-playbook -i inventory/mycluster/inventory.ini --private-key=~/.ssh/kvm-vm-instance --become --become-user=root cluster.yml
# (optional) deploy a specific version, e.g. 1.25.6
# kubespray 2.23.2 is required here because 2.24.0 dropped support for Kubernetes 1.25.x
ansible-playbook -i inventory/mycluster/inventory.ini --private-key=~/.ssh/kvm-vm-instance --become --become-user=root cluster.yml -e kube_version=v1.25.6
# add the nodes
ansible-playbook -i inventory/mycluster/inventory.ini --private-key=~/.ssh/kvm-vm-instance --become --become-user=root scale.yml
# remove the nodes
ansible-playbook -i inventory/mycluster/inventory.ini --private-key=~/.ssh/kvm-vm-instance --become --become-user=root remove-node.yml --extra-vars "node=k8s-ubuntu-vm-worker-3"
# install apps such as the dashboard and helm
ansible-playbook -i inventory/mycluster/inventory.ini --private-key=~/.ssh/kvm-vm-instance --become --become-user=root cluster.yml --tags=apps
Dashboard
dashboard with service account and NodePort
In newer Kubernetes versions (1.24+), secrets for service accounts aren't created automatically, so request a token explicitly:
kubectl create serviceaccount dashboard-admin-sa
kubectl create clusterrolebinding dashboard-admin-sa --clusterrole=cluster-admin --serviceaccount=default:dashboard-admin-sa
kubectl create token dashboard-admin-sa
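To expose the dashboard via NodePort as the heading says, one option is patching its Service; the namespace/Service name below are the upstream manifest defaults and may differ in your install:
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard   # note the assigned 3xxxx port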
dashboard quick test with login skipped (add these args to the dashboard Deployment):
containers:
- args:
  - --namespace=kube-system
  - --auto-generate-certificates
  - --enable-skip-login
  image: docker.io/kubernetesui/dashboard:v2.7.0
kubectl proxy --address='0.0.0.0' --accept-hosts='.*'
# http://172.19.30.115:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#/login
management tools
kubectl
# version v1.32.5
curl -LO "https://dl.k8s.io/release/v1.32.5/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/v1.32.5/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256) kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
source <(kubectl completion bash) # set up autocomplete in bash into the current shell, bash-completion package should be installed first.
echo "source <(kubectl completion bash)" >> ~/.bashrc # add autocomplete permanently to your bash shell.
helm
# Method 1
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# Method 2
curl -L -o helm-linux-amd64 https://mirror.openshift.com/pub/openshift-v4/clients/helm/3.18.4/helm-linux-amd64
sudo install -o root -g root -m 0755 helm-linux-amd64 /usr/local/bin/helm
minikube
kind
Rancher
- Day 2 - 何謂 Rancher | hwchiu learning note
- How to use Rancher in Kubernetes
- (55) [Rancher] 用網頁建立並管理K8s Cluster - YouTube
kubespray
Helm
- Kubernetes 基礎教學(三)Helm 介紹與建立 Chart | Cheng-Wei Hu
- Day20 - 使用 Helm 管理 Kubernetes 的應用佈署 - iT 邦幫忙
- Kubernetes 遇見 Helm charts - iT 邦幫忙
- 可觀測性宇宙的第九天 - Helm 安裝包管理器介紹 - iT 邦幫忙::一起幫忙解決難題,拯救 IT 人的一天
- Day 21- Kubernetes 的套件管理工具 Helm - iT 邦幫忙
- example: Kubernetes Dashboard
- example: mysql
- Day 3 - Helm 介紹 - iT 邦幫忙
- kubernetes - HELM vs K8s Operators - Stack Overflow
- Operator vs. Helm: Finding the Best Fit for Your Kubernetes Applications | Datadog
- Helm | Cheat Sheet
- https://artifacthub.io/
- Bitnami
- repository
- https://charts.helm.sh/stable
- https://helm.ngc.nvidia.com/nvidia
- https://prometheus-community.github.io/helm-charts
helm version
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo list
helm install my-nginx bitnami/nginx
# helm upgrade -n poc-scope --install my-nginx bitnami/nginx
helm list -A
helm list -n poc-scope
helm status my-nginx
helm history my-nginx
helm uninstall my-nginx
To override the default values.yaml settings, specify a custom file with -f, or override individual values with --set (example below).
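Both override styles together (my-values.yaml is an assumed local file; service.type is a bitnami/nginx chart value):
helm upgrade --install my-nginx bitnami/nginx -f my-values.yaml --set service.type=NodePort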
Operator
KubeVirt
- 在 K8S 上也能跑 VM!KubeVirt 簡介與建立(介紹篇) | Gemini Open Cloud 雙子星雲端運算股份有限公司
- Re-Imagining Virtualization with Kubernetes and KubeVirt
Kubeflow
KubeRay
- High Performance Computing (HPC) on Kubernetes
- Day19 - Ray Cluster 安裝之一: 基礎環境準備 - iT 邦幫忙::一起幫忙解決難題,拯救 IT 人的一天
kueue
Yunikorn
Volcano
hkube
MetalLB
- :star:MetalLB and NGINX Ingress // Setup External Access for Kubernetes Applications - YouTube
- Day 16 MetalLB 簡介&安裝 - iT 邦幫忙
- Day 17 MetalLB 使用 - iT 邦幫忙
- 我的 K8S DevOps 實驗環境 - 服務負載均衡器
- MetalLB: K8S 叢集的網路負載平衡器. 今天跟大家分享在地端資料中心內建立Kubernetes叢集之後,如何針對網路進行… | by Albert Weng | Medium
- MetalLB 登場:無縫部署. 延續上篇的內容,在了解了MetalLB的基本概念之後,我們就進入實際上部署的動作… | by Albert Weng | Medium
install with helm
helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb -n metallb-system --create-namespace
configure the IP address pool, metallb-ipaddresspool.yaml:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool               # name of the IP pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.122.201-192.168.122.209
configure L2 mode advertisement, l2-advertisement.yaml:
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system
spec:                            # omitting the spec would apply to every IPAddressPool
  ipAddressPools:
  - first-pool                   # advertise the first-pool addresses via L2
Done. Services with type: LoadBalancer now get an external IP from the pool (example below).
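A quick check that MetalLB hands out an address; the selector label is an assumption:
apiVersion: v1
kind: Service
metadata:
  name: lb-demo
spec:
  type: LoadBalancer             # MetalLB assigns an IP from first-pool
  selector:
    app: demo-app                # assumed Pod label
  ports:
  - port: 80
    targetPort: 80
kubectl get svc lb-demo should then show an EXTERNAL-IP from 192.168.122.201-209.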
Nginx Ingress
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace
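A minimal Ingress routed through the controller installed above; the host and backend Service are assumptions:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: nginx        # class created by the chart above
  rules:
  - host: demo.example.com       # assumed hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-service   # assumed backend Service
            port:
              number: 80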
Cert Manager
kube-prometheus-stack
- Day 24 Prometheus + Grafana 監控整合工具 kube-prometheus-stack - iT 邦幫忙
- 可觀測性宇宙的第十三天 - Kube-Prometheus-Stack 實戰(一) - iT 邦幫忙
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm search repo prometheus-community
helm show values prometheus-community/kube-prometheus-stack --version 75.12.0 > values.yaml
helm install -f values.yaml prometheus-stack prometheus-community/kube-prometheus-stack -n prometheus-stack --version 75.12.0 --create-namespace
kubectl --namespace prometheus-stack get secrets prometheus-stack-grafana -o jsonpath="{.data.admin-password}" | base64 -d ; echo
# kubectl port-forward service/prometheus-stack-prometheus 9090:9090 -n prometheus-stack
# kubectl port-forward service/prometheus-stack-grafana 3000:80 -n prometheus-stack
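To have the stack scrape your own app, a ServiceMonitor sketch; the release label must match the Helm release name above, while the app label and port name are assumptions:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: demo-app-monitor
  namespace: prometheus-stack
  labels:
    release: prometheus-stack    # lets the operator discover this monitor
spec:
  selector:
    matchLabels:
      app: demo-app              # assumed Service label
  namespaceSelector:
    matchNames:
    - default
  endpoints:
  - port: metrics                # assumed named port on the Service
    interval: 30s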
loki
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm search repo grafana
helm show values grafana/loki-stack --version 2.10.2 > loki-stack-values.yaml
helm install -f loki-stack-values.yaml loki grafana/loki-stack -n loki-stack --version 2.10.2 --create-namespace
harbor
helm repo add harbor https://helm.goharbor.io
helm repo update
helm search repo harbor --versions
helm show values harbor/harbor --version 1.11.4 > harbor-values.yaml
# edit harbor-values.yaml first, e.g. expose.type and harborAdminPassword
helm install harbor harbor/harbor --version 1.11.4 -f harbor-values.yaml -n harbor-registry --create-namespace
kubectl get secrets harbor-core -n harbor-registry -o jsonpath='{.data.tls\.crt}' | base64 -d > harbor.ca
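To log in from a container host using the exported CA; the hostname core.harbor.domain is the chart default (expose.ingress.hosts.core) and may differ in your values:
sudo mkdir -p /etc/docker/certs.d/core.harbor.domain
sudo cp harbor.ca /etc/docker/certs.d/core.harbor.domain/ca.crt
docker login core.harbor.domain   # admin / the harborAdminPassword set above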
CKA
- Certification Expiration Policy Change 2024 - Linux Foundation - Education
- IMPORTANT: Policy Change: All certification products with a 36-month certification period will change to a 24-month certification period starting for exams taken April 1, 2024, 00:00 UTC.
- Pass the CKA Certification Exam: The Ultimate Study Guide
kubectl run multi-container-pod --image=nginx --dry-run=client -o yaml > sidecar.yaml
kubectl create deployment --image=nginx nginx --replicas=4 --dry-run=client -o yaml > nginx-deployment.yaml
# Create a Service named redis-service of type ClusterIP to expose pod redis on port 6379
# (This will automatically use the pod's labels as selectors)
kubectl expose pod redis --port=6379 --name redis-service --dry-run=client -o yaml
# Create a Service named nginx of type NodePort to expose pod nginx's port 80 on port 30080 on the nodes
kubectl expose pod nginx --type=NodePort --port=80 --name=nginx-service --dry-run=client -o yaml
kubectl get pod <pod-name> -o yaml > pod-dump.yaml
kubectl create clusterrole --help
kubectl api-resources
kubectl run debug --image=curlimages/curl -it --rm -- sh
kubectl expose pod messaging --port=6379 --name messaging-service
helm list
helm list -A
helm get manifest <release_name> -n <namespace>
helm repo list
helm repo update
helm search repo <foo>
helm search repo <foo> --versions
helm upgrade <foo> <bar> --version <version>
helm uninstall <release_name> -n <namespace>
helm lint ./new-version
helm install webpage-server-02 ./new-version
keyword for document - CKA-note/附錄/CKA 常用的官方文件.md at main · michaelchen1225/CKA-note
TBD