k8s Pod Scheduling Basics
I. Theory
Basic scheduling is carried out by the scheduler.
1. Pod controllers:
ReplicationController (RC; keeps replicas)
ReplicaSet (RS; keeps replicas)
Deployment (stateless, replicated) - rolling image updates, pod scale out/in
StatefulSet (stateful, replicated) - rolling image updates, pod scale out/in, cascading and non-cascading deletion
DaemonSet (daemon set; no replica count, one pod per node)
Cascading delete: deleting a StatefulSet deletes both the StatefulSet and its pods.
Non-cascading delete: deleting a StatefulSet leaves its pods in place.
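Both behaviors map to a kubectl flag; a minimal sketch (both are demonstrated in the practice section below):
kubectl delete statefulset redis-sts                    # cascading (default): StatefulSet and pods go away
kubectl delete statefulset redis-sts --cascade=orphan   # non-cascading: pods are orphaned and keep running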
2. Label selectors: equality-based and set-based.
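Both forms go under a controller's spec.selector; a minimal sketch (the label keys and values here are illustrative):
selector:
  matchLabels:                 # equality-based: the key must equal the value
    app: nginx
  matchExpressions:            # set-based: operators In, NotIn, Exists, DoesNotExist
  - {key: tier, operator: In, values: [frontend, backend]}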
3. Stateless services
Can serve clients on their own.
Request for data -> no dependency on others; the data is returned directly.
No dependencies.
Web servers are all stateless services, e.g. nginx, apache, tomcat.
Stateless services are deployed with a Deployment; once deployed, the Deployment manages its pods through a ReplicaSet.
4. Stateful services
Serve clients through a cluster.
Request for data -> depends on peers; the service answers from its own data if it has it, otherwise it fetches the data from a peer.
Has dependencies.
Examples: databases (master/slave), caches, middleware (kafka, etc.).
Stateful services are deployed with a StatefulSet.
5. DaemonSet
Runs one pod replica on every node: when a new node joins the cluster a pod is added on it, when a node is removed from the cluster its pod is reclaimed, and deleting the DaemonSet deletes all the pods it created.
6. CronJob
Scheduled tasks, executed according to the configured time policy.
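The schedule follows standard five-field cron syntax (minute, hour, day of month, month, day of week); for instance:
# "*/1 * * * *"  -> every minute (used in the practice example below)
# "30 2 * * 0"   -> 02:30 every Sunday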
II. Practice
Import the required images on 101, 102 and 103; since there is no telling which node a pod will land on, import them on all nodes. (`ku` in the transcripts below is presumably a shell alias for kubectl.)
[root@k8s-master ~]# cd images/
[root@k8s-master images]# bash imp_docker_img.sh
The various pod controllers
1. ReplicationController (replicates pods)
On 101:
[root@k8s-master ~]# vim replicationcontroller-nginx.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:             # pod template: sets the pods' attributes
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
[root@k8s-master ~]# ku apply -f replicationcontroller-nginx.yaml
replicationcontroller/nginx created
[root@k8s-master ~]# ku get pod
NAME READY STATUS RESTARTS AGE
nginx-b4gcc 1/1 Running 0 39s
nginx-ckzr5 1/1 Running 0 39s
nginx-q8b9l 1/1 Running 0 39s
[root@k8s-master ~]# ku get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-b4gcc 1/1 Running 0 44s 10.244.85.196 k8s-node01 <none> <none>
nginx-ckzr5 1/1 Running 0 44s 10.244.58.193 k8s-node02 <none> <none>
nginx-q8b9l 1/1 Running 0 44s 10.244.58.194 k8s-node02 <none> <none>
[root@k8s-master ~]# ku get rc
NAME DESIRED CURRENT READY AGE
nginx 3 3 3 2m14s
2. Now suspend 103 (node2); after a few dozen seconds the node status shows as unavailable.
[root@k8s-master ~]# ku get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 4d22h v1.23.0
k8s-node01 Ready <none> 4d21h v1.23.0
k8s-node02 NotReady <none> 4d21h v1.23.0
3. After a few minutes the pods have been moved - by default pods are evicted roughly five minutes after a node goes NotReady (the default toleration for node.kubernetes.io/not-ready is 300s).
[root@k8s-master ~]# ku get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-b4gcc 1/1 Running 0 14m 10.244.85.196 k8s-node01 <none> <none>
nginx-ckzr5 1/1 Terminating 0 14m 10.244.58.193 k8s-node02 <none> <none>
nginx-h282s 1/1 Running 0 2m54s 10.244.85.198 k8s-node01 <none> <none>
nginx-q8b9l 1/1 Terminating 0 14m 10.244.58.194 k8s-node02 <none> <none>
nginx-zwtjs 1/1 Running 0 2m54s 10.244.85.197 k8s-node01 <none> <none>
4. Now start 103 (node2) again: the pods do not move back to node2 and stay on node1 - Kubernetes does not rebalance running pods; placement is reconsidered only when a pod is recreated.
[root@k8s-master ~]# ku get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 4d22h v1.23.0
k8s-node01 Ready <none> 4d22h v1.23.0
k8s-node02 Ready <none> 4d22h v1.23.0
[root@k8s-master ~]# ku get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-b4gcc 1/1 Running 0 19m 10.244.85.196 k8s-node01 <none> <none>
nginx-h282s 1/1 Running 0 7m42s 10.244.85.198 k8s-node01 <none> <none>
nginx-zwtjs 1/1 Running 0 7m42s 10.244.85.197 k8s-node01 <none> <none>
1. ReplicaSet (replicates pods)
On 101:
[root@k8s-master ~]# vim replicaset-example.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:          # use this field to match one or more labels exactly
      tier: frontend      # key tier must equal frontend
    matchExpressions:
    - {key: tier, operator: In, values: [frontend]}   # tier's value must be in [frontend]
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: nginx:1.7.9
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80
2. Create it
[root@k8s-master ~]# ku apply -f replicaset-example.yaml
replicaset.apps/frontend created
3、查看pod信息
[root@k8s-master ~]# ku get pod
NAME READY STATUS RESTARTS AGE
frontend-9crsg 1/1 Running 0 5s
frontend-mlvcv 1/1 Running 0 5s
frontend-tqt29 1/1 Running 0 5s
[root@k8s-master ~]# ku get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
frontend-9crsg 1/1 Running 0 9s 10.244.58.195 k8s-node02 <none> <none>
frontend-mlvcv 1/1 Running 0 9s 10.244.85.199 k8s-node01 <none> <none>
frontend-tqt29 1/1 Running 0 9s 10.244.58.196 k8s-node02 <none> <none>
1. Deployment
[root@k8s-master ~]# vim nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - name: nginx
          containerPort: 80
2. Deploy it
[root@k8s-master ~]# ku apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
3. Check the result
[root@k8s-master ~]# ku get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-6cf9b75cdd-kz8hc 1/1 Running 0 27s
nginx-deployment-6cf9b75cdd-pndqb 1/1 Running 0 27s
[root@k8s-master ~]# ku get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-6cf9b75cdd-kz8hc 1/1 Running 0 32s 10.244.85.200 k8s-node01 <none> <none>
nginx-deployment-6cf9b75cdd-pndqb 1/1 Running 0 32s 10.244.58.197 k8s-node02 <none> <none>
4. Manual rolling update
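The pace of a rolling update is governed by maxUnavailable and maxSurge, which both default to 25% (the describe output further down confirms this). A minimal sketch of tuning them in the Deployment spec:
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # how many pods may be unavailable during the update
      maxSurge: 25%         # how many pods may be created above the desired replica count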
Update to version 1.9.1
[root@k8s-master ~]# ku set image deployment nginx-deployment nginx=nginx:1.9.1 --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginx-deployment image updated
Update to version 1.12.1
[root@k8s-master ~]# ku set image deployment nginx-deployment nginx=nginx:1.12.1 --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginx-deployment image updated
Watch the update progress
[root@k8s-master ~]# ku rollout status deployment.v1.apps/nginx-deployment
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
At this point there are both new and old ReplicaSets (rs); the old ones are kept, scaled to 0, so that rollbacks remain possible.
[root@k8s-master ~]# ku get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-55bbd8478b 2 2 1 6m
nginx-deployment-6cf9b75cdd 0 0 0 31m
nginx-deployment-7569c477b6 1 1 1 6m32s
5. View the deployment's details
[root@k8s-master ~]# ku describe deploy nginx-deployment
Name: nginx-deployment
Namespace: default
CreationTimestamp: Tue, 01 Jul 2025 10:17:35 +0800
Labels: name=nginx-deployment
Annotations: deployment.kubernetes.io/revision: 3
             kubernetes.io/change-cause: kubectl set image deployment nginx-deployment nginx=nginx:1.12.1 --record=true
Selector: app=nginx
Replicas: 2 desired | 2 updated | 3 total | 2 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
  Labels: app=nginx
  Containers:
   nginx:
    Image: nginx:1.12.1
    Port: 80/TCP
    Host Port: 0/TCP
    Environment: <none>
    Mounts: <none>
  Volumes: <none>
Conditions:
  Type         Status  Reason
  ----         ------  ------
  Available    True    MinimumReplicasAvailable
  Progressing  True    ReplicaSetUpdated
OldReplicaSets: nginx-deployment-7569c477b6 (1/1 replicas created)
NewReplicaSet: nginx-deployment-55bbd8478b (2/2 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  32m    deployment-controller  Scaled up replica set nginx-deployment-6cf9b75cdd to 2
  Normal  ScalingReplicaSet  7m23s  deployment-controller  Scaled up replica set nginx-deployment-7569c477b6 to 1
  Normal  ScalingReplicaSet  7m20s  deployment-controller  Scaled down replica set nginx-deployment-6cf9b75cdd to 1
  Normal  ScalingReplicaSet  7m19s  deployment-controller  Scaled up replica set nginx-deployment-7569c477b6 to 2
  Normal  ScalingReplicaSet  7m17s  deployment-controller  Scaled down replica set nginx-deployment-6cf9b75cdd to 0
  Normal  ScalingReplicaSet  6m51s  deployment-controller  Scaled up replica set nginx-deployment-55bbd8478b to 1
  Normal  ScalingReplicaSet  6m3s   deployment-controller  Scaled down replica set nginx-deployment-7569c477b6 to 1
  Normal  ScalingReplicaSet  6m3s   deployment-controller  Scaled up replica set nginx-deployment-55bbd8478b to 2
6. View the rollout history
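Because --record is deprecated (see the warnings above), the same CHANGE-CAUSE entry can be written explicitly with the kubernetes.io/change-cause annotation; a minimal sketch:
kubectl annotate deployment nginx-deployment kubernetes.io/change-cause="update nginx to 1.12.1"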
[root@k8s-master ~]# ku rollout history deployment/nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
1 <none>
2 kubectl set image deployment nginx-deployment nginx=nginx:1.9.1 --record=true
3 kubectl set image deployment nginx-deployment nginx=nginx:1.12.1 --record=true
Update the deployment a few more times to add history entries
[root@k8s-master ~]# ku set image deployment nginx-deployment nginx=dotbalo/canary:v1 --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginx-deployment image updated
[root@k8s-master ~]# ku set image deployment nginx-deployment nginx=dotbalo/canary:v2 --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginx-deployment image updated
View the history again
[root@k8s-master ~]# ku rollout history deployment/nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
1 <none>
2 kubectl set image deployment nginx-deployment nginx=nginx:1.9.1 --record=true
3 kubectl set image deployment nginx-deployment nginx=nginx:1.12.1 --record=true
4 kubectl set image deployment nginx-deployment nginx=dotbalo/canary:v1 --record=true
5 kubectl set image deployment nginx-deployment nginx=dotbalo/canary:v2 --record=true
7. View the details of revision 2
[root@k8s-master ~]# ku rollout history deployment/nginx-deployment --revision=2
deployment.apps/nginx-deployment with revision #2
Pod Template:
  Labels: app=nginx
          pod-template-hash=7569c477b6
  Annotations: kubernetes.io/change-cause: kubectl set image deployment nginx-deployment nginx=nginx:1.9.1 --record=true
  Containers:
   nginx:
    Image: nginx:1.9.1
    Port: 80/TCP
    Host Port: 0/TCP
    Environment: <none>
    Mounts: <none>
  Volumes: <none>
8. Roll back to a specific revision, here revision 2
[root@k8s-master ~]# ku rollout undo deployment/nginx-deployment --to-revision=2
deployment.apps/nginx-deployment rolled back
[root@k8s-master ~]# ku rollout history deployment/nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
1 <none>
3 kubectl set image deployment nginx-deployment nginx=nginx:1.12.1 --record=true
4 kubectl set image deployment nginx-deployment nginx=dotbalo/canary:v1 --record=true
5 kubectl set image deployment nginx-deployment nginx=dotbalo/canary:v2 --record=true
6 kubectl set image deployment nginx-deployment nginx=nginx:1.9.1 --record=true
Note: the bottom entry is the current revision. Roll back to the previous revision:
[root@k8s-master ~]# ku rollout undo deployment/nginx-deployment
deployment.apps/nginx-deployment rolled back
[root@k8s-master ~]# ku rollout history deployment/nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
1 <none>
3 kubectl set image deployment nginx-deployment nginx=nginx:1.12.1 --record=true
4 kubectl set image deployment nginx-deployment nginx=dotbalo/canary:v1 --record=true
6 kubectl set image deployment nginx-deployment nginx=nginx:1.9.1 --record=true
7 kubectl set image deployment nginx-deployment nginx=dotbalo/canary:v2 --record=true
9. Scale out: adjust the replica count
[root@k8s-master ~]# ku scale deployment.v1.apps/nginx-deployment --replicas=3
deployment.apps/nginx-deployment scaled
[root@k8s-master ~]# ku get pods
nginx-deployment-55bbd8478b-t4rk9 0/1 Terminating 0 14m
nginx-deployment-6d5f4c496c-58khj 0/1 ContainerCreating 0 2m46s
nginx-deployment-6d5f4c496c-ktqz9 0/1 Terminating 0 4m24s
nginx-deployment-6d5f4c496c-ngpjq 1/1 Running 0 2m49s
nginx-deployment-6d5f4c496c-zcpf9 1/1 Running 0 24s
nginx-deployment-7569c477b6-6mbnh 1/1 Running 0 4m22s
The number of running containers does not exceed the replica count.
[root@k8s-master ~]# ku get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-6d5f4c496c-4dc5h 1/1 Running 0 2m58s
nginx-deployment-6d5f4c496c-58khj 1/1 Running 0 7m28s
nginx-deployment-6d5f4c496c-cgs4z 1/1 Running 0 2m58s
nginx-deployment-6d5f4c496c-k8w96 1/1 Running 0 2m58s
nginx-deployment-6d5f4c496c-ngpjq 1/1 Running 0 7m31s
nginx-deployment-6d5f4c496c-zcpf9 1/1 Running 0 5m6s
10. Delete
[root@k8s-master ~]# ku delete -f nginx-deployment.yaml
deployment.apps "nginx-deployment" deleted
[root@k8s-master ~]# ku get pod
No resources found in default namespace.
1. StatefulSet
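A StatefulSet gives each pod a stable ordinal name (redis-sts-0, redis-sts-1, ...) and is normally paired with a headless service (clusterIP: None) so every pod also gets a stable DNS record; a minimal sketch of the headless variant (the manifest below uses a plain ClusterIP service):
apiVersion: v1
kind: Service
metadata:
  name: redis-svc
spec:
  clusterIP: None        # headless: DNS resolves to the individual pod IPs
  selector:
    app: redis-sts
  ports:
  - port: 6379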
[root@k8s-master ~]# vim redis-statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-svc
spec:
  selector:
    app: redis-sts
  ports:
  - port: 6379
    protocol: TCP
    targetPort: 6379
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-sts
spec:
  serviceName: redis-svc
  replicas: 2
  selector:
    matchLabels:
      app: redis-sts
  template:
    metadata:
      labels:
        app: redis-sts
    spec:
      containers:
      - image: redis:5-alpine
        name: redis
        ports:
        - containerPort: 6379
2. Create it
[root@k8s-master ~]# ku create -f redis-statefulset.yaml
statefulset.apps/redis-sts created
3. Check the StatefulSet status
[root@k8s-master ~]# ku get sts
NAME READY AGE
redis-sts 2/2 5s
Cluster status
[root@k8s-master ~]# ku get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d
Cluster pod status: a lower ordinal means the pod was created earlier; the earliest one acts as the master.
[root@k8s-master ~]# ku get po -l app=redis-sts
NAME READY STATUS RESTARTS AGE
redis-sts-0 1/1 Running 0 40s
redis-sts-1 1/1 Running 0 38s
4. Scale out
[root@k8s-master ~]# ku scale sts redis-sts --replicas=3
statefulset.apps/redis-sts scaled
[root@k8s-master ~]# ku get pod
NAME READY STATUS RESTARTS AGE
redis-sts-0 1/1 Running 0 3m
redis-sts-1 1/1 Running 0 2m58s
redis-sts-2 1/1 Running 0 6s
5. Scale in
Open a second terminal
[root@k8s-master ~]# ku get pods -w -l app=redis-sts
NAME READY STATUS RESTARTS AGE
redis-sts-0 1/1 Running 0 3m37s
redis-sts-1 1/1 Running 0 3m35s
redis-sts-2 1/1 Running 0 43s
Back in the first terminal
[root@k8s-master ~]# ku patch sts redis-sts -p '{"spec":{"replicas":2}}'
Watch the second terminal
[root@k8s-master ~]# ku get pods -w -l app=redis-sts
NAME READY STATUS RESTARTS AGE
redis-sts-0 1/1 Running 0 3m37s
redis-sts-1 1/1 Running 0 3m35s
redis-sts-2 1/1 Running 0 43s
redis-sts-2 1/1 Terminating 0 66s
redis-sts-2 1/1 Terminating 0 67s
redis-sts-2 0/1 Terminating 0 68s
redis-sts-2 0/1 Terminating 0 68s
redis-sts-2 0/1 Terminating 0 68s
6. Non-cascading delete
[root@k8s-master ~]# ku delete statefulset redis-sts --cascade=false
warning: --cascade=false is deprecated (boolean value) and can be replaced with --cascade=orphan.
statefulset.apps "redis-sts" deleted查看删除结果
[root@k8s-master ~]# ku get sts
No resources found in default namespace.
Check the pods it was managing
[root@k8s-master ~]# ku get po
NAME READY STATUS RESTARTS AGE
redis-sts-0 1/1 Running 0 7m55s
redis-sts-1 1/1 Running 0 7m53s
7. Cascading delete
First create the StatefulSet again
[root@k8s-master ~]# ku create -f redis-statefulset.yaml
statefulset.apps/redis-sts created
Cascading delete (the default behavior)
[root@k8s-master ~]# ku delete statefulset redis-sts
statefulset.apps "redis-sts" deleted
Check
[root@k8s-master ~]# ku get po
No resources found in default namespace.
1. DaemonSet
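Note (see the PS further down): on a kubeadm cluster the control-plane node carries a NoSchedule taint, so DaemonSet pods skip it unless the pod template tolerates the taint; a minimal sketch (taint key as used by kubeadm on v1.23):
spec:
  template:
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule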
[root@k8s-master ~]# vim daemonset-nginx.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: pod-controller
  namespace: dev
  labels:
    controller: daemonset
spec:
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - name: nginx-port
          containerPort: 80
          protocol: TCP
2. Create the namespace and the DaemonSet
[root@k8s-master ~]# ku create namespace dev
namespace/dev created
[root@k8s-master ~]# ku create -f daemonset-nginx.yaml
daemonset.apps/pod-controller created
3. Check it (ds is the abbreviation for daemonset; -n selects the namespace)
[root@k8s-master ~]# ku get ds -n dev -o wide
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
pod-controller 2 2 2 2 2 <none> 37s nginx nginx:1.7.9 app=nginx-pod
PS: I only get two because my k8s cluster was installed with kubeadm, and with that installation method the master node carries a taint, so no pod is deployed on the master. With a binary installation you can decide whether the master node is tainted.
4. Check which nodes the pods are on
[root@k8s-master ~]# ku get pod -n dev -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-controller-rb6ld 1/1 Running 0 2m7s 10.244.85.210 k8s-node01 <none> <none>
pod-controller-spw84 1/1 Running 0 2m7s 10.244.58.211 k8s-node02 <none> <none>
5. Delete
[root@k8s-master ~]# ku delete ds pod-controller -n dev
daemonset.apps "pod-controller" deleted定时任务
1、创建
[root@k8s-master ~]# vim cronjob-example.yaml
apiVersion: batch/v1 # use batch/v1 on Kubernetes 1.21 and later
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:v1
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
[root@k8s-master ~]# ku create -f cronjob-example.yaml
cronjob.batch/hello created
2. Check
[root@k8s-master ~]# ku get cj
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
hello */1 * * * * False 0 <none> 2s
Wait a minute and a job with its pod appears
[root@k8s-master ~]# ku get jobs
NAME COMPLETIONS DURATION AGE
hello-29189042 1/1 3s 14s
3. Check the pod logs. The status is normal: it shows Completed because the container only runs a date and an echo command and exits once they finish.
[root@k8s-master ~]# ku get pod
NAME READY STATUS RESTARTS AGE
hello-29189042-xcdsq 0/1 Completed 0 34s
[root@k8s-master ~]# ku logs -f hello-29189042-xcdsq
Tue Jul 1 04:00:30 UTC 2025
Hello from the Kubernetes cluster
4. Delete
[root@k8s-master ~]# ku delete cronjob hello
cronjob.batch "hello" deleted
[root@k8s-master ~]# ku get cj
No resources found in default namespace.