
Pod Scheduling in K8s: Affinity Scheduling

Affinity Scheduling

Affinity scheduling is a more flexible strategy than pinning a Pod to a node directly (with nodeName or nodeSelector). It lets you define a set of rules that the scheduler uses to place the Pod on the most suitable node; in the preferred (soft) mode, if no node fully matches, the Pod can still be scheduled elsewhere.

The three main types of affinity scheduling and their use cases:

Node Affinity: defines rules for which nodes a Pod can be scheduled to, based on node labels — for example, nodes with specific hardware or in a specific zone.
- requiredDuringSchedulingIgnoredDuringExecution: the rules must be satisfied for the Pod to be scheduled (hard limit).
- nodeSelectorTerms: list of node selector terms.
- matchFields: list of node selector requirements by node field.
- matchExpressions: list of node selector requirements by node label (recommended).
- preferredDuringSchedulingIgnoredDuringExecution: prefer nodes that satisfy the rules, but fall back to other nodes if none match (soft limit).
- preference: a node selector term, associated with a weight.
- weight: preference weight, in the range 1-100.

Pod Affinity: defines which existing Pods a new Pod should share a topology domain with. Suited to applications that interact frequently, reducing communication latency.
- requiredDuringSchedulingIgnoredDuringExecution: must be scheduled into the same topology domain as the specified Pods.
- preferredDuringSchedulingIgnoredDuringExecution: prefer the same topology domain as the specified Pods; if unavailable, other domains are allowed (soft limit).

Pod Anti-Affinity: defines which existing Pods a new Pod should not share a topology domain with. Ensures replicas of an application are spread across topology domains, improving availability and fault tolerance.
- requiredDuringSchedulingIgnoredDuringExecution: must not be scheduled into the same topology domain as the specified Pods.
- preferredDuringSchedulingIgnoredDuringExecution: prefer not to share a topology domain with the specified Pods; if unavoidable, the same domain is allowed (soft limit).

Each affinity type supports two modes:

  • RequiredDuringSchedulingIgnoredDuringExecution: a rule that must be satisfied at scheduling time; if no matching node is found, the Pod will not be scheduled. However, if the node's labels change after scheduling so that the rule no longer matches, the Pod remains on that node.

  • PreferredDuringSchedulingIgnoredDuringExecution: a rule that is preferred at scheduling time; if no matching node is found, the Pod can still be scheduled to another node.
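The "IgnoredDuringExecution" half of the name can be observed directly: removing the matching label after a Pod is running does not evict it. A hypothetical console session (it reuses the node and Pod names from the walkthrough below, and assumes the Pod is already Running):

```shell
# Remove the label that the required rule matched on; "-" suffix deletes a label.
kubectl label nodes k8s-node2 nodeenv-
# The Pod stays Running: the required rule is only checked at scheduling time.
kubectl get pod pod-nodeaffinity-required -n test
```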

NodeAffinity (node affinity)
  • NodeAffinity uses node labels to specify which nodes a Pod must, or should preferably, be scheduled to.

  • Available NodeAffinity configuration:

    • requiredDuringSchedulingIgnoredDuringExecution (hard limit)

      • nodeSelectorTerms: list of node selector terms; a node must satisfy at least one term for the Pod to be scheduled (terms are ORed; expressions within a term are ANDed).

      • matchFields: list of node selector requirements by node field.

      • matchExpressions: list of node selector requirements by node label, each consisting of:

        • key: the label key

        • values: the label values

        • operator: the relational operator; supports Exists, DoesNotExist, In, NotIn, Gt, Lt

    • preferredDuringSchedulingIgnoredDuringExecution (soft limit)

      • preference: a node selector term, associated with a weight.

        • matchFields: list of node selector requirements by node field.

        • matchExpressions: list of node selector requirements by node label, each consisting of:

          • key: the label key

          • values: the label values

          • operator: the relational operator; supports In, NotIn, Exists, DoesNotExist, Gt, Lt

      • weight: preference weight, in the range 1-100.
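The Gt and Lt operators do not appear in the examples below; they compare a label's value as an integer. A minimal sketch, assuming a hypothetical node label cpu-count:

```yaml
# Hypothetical: only schedule to nodes whose "cpu-count" label,
# parsed as an integer, is greater than 4.
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeaffinity-gt
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: cpu-count
            operator: Gt
            values: ["4"]   # Gt/Lt take a single integer-valued string
```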

Hard limit configuration

Scheduling fails because no node has yet been labeled with a matching nodeenv value (the manifest requires "test" or "xxx"):

# vim pod-nodeaffinity-required.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeaffinity-required
  namespace: test
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: nodeenv
            operator: In
            values: ["test","xxx"]
[root@k8s-master ~]# kubectl create ns test
namespace/test created
[root@k8s-master ~]# kubectl apply -f pod-nodeaffinity-required.yaml 
pod/pod-nodeaffinity-required created
[root@k8s-master ~]# kubectl get pods pod-nodeaffinity-required -n test
NAME                        READY   STATUS    RESTARTS   AGE
pod-nodeaffinity-required   0/1     Pending   0          22s
[root@k8s-master ~]# kubectl describe  pods pod-nodeaffinity-required -n test
Name:         pod-nodeaffinity-required
Namespace:    test
Priority:     0
Node:         <none>
Labels:       <none>
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  nginx:
    Image:        nginx:1.17.1
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5f6rd (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-5f6rd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  34s   default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity/selector.
  Warning  FailedScheduling  33s   default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity/selector.

Next, label the nodes and test again:

[root@k8s-master ~]# kubectl label nodes k8s-node1 nodeenv=dev
node/k8s-node1 labeled
[root@k8s-master ~]# kubectl label nodes k8s-node2 nodeenv=test
node/k8s-node2 labeled
[root@k8s-master ~]# kubectl delete -f pod-nodeaffinity-required.yaml 
pod "pod-nodeaffinity-required" deleted
[root@k8s-master ~]# kubectl apply -f pod-nodeaffinity-required.yaml 
pod/pod-nodeaffinity-required created
[root@k8s-master ~]# kubectl describe  pods pod-nodeaffinity-required -n test
Name:         pod-nodeaffinity-required
Namespace:    test
Priority:     0
Node:         k8s-node2/192.168.58.233
Start Time:   Thu, 16 Jan 2025 04:14:35 -0500
Labels:       <none>
Annotations:  cni.projectcalico.org/containerID: eb576e210ed0daf158fc97706a7858428fdcbce61d89936cd60323c184bf65d7
              cni.projectcalico.org/podIP: 10.244.169.130/32
              cni.projectcalico.org/podIPs: 10.244.169.130/32
Status:       Running
IP:           10.244.169.130
IPs:
  IP:  10.244.169.130
Containers:
  nginx:
    Container ID:   docker://b58aa001a6b25893a091a726ede2ea57d96e6209c11a8c17d269d78087db505e
    Image:          nginx:1.17.1
    Image ID:       docker-pullable://nginx@sha256:b4b9b3eee194703fc2fa8afa5b7510c77ae70cfba567af1376a573a967c03dbb
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 16 Jan 2025 04:14:38 -0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m5zx9 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-m5zx9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  11s        default-scheduler  Successfully assigned test/pod-nodeaffinity-required to k8s-node2
  Normal  Pulled     <invalid>  kubelet            Container image "nginx:1.17.1" already present on machine
  Normal  Created    <invalid>  kubelet            Created container nginx
  Normal  Started    <invalid>  kubelet            Started container nginx
[root@k8s-master ~]# kubectl get pods pod-nodeaffinity-required -n test
NAME                        READY   STATUS    RESTARTS   AGE
pod-nodeaffinity-required   1/1     Running   0          21s
  • requiredDuringSchedulingIgnoredDuringExecution: defines a mandatory scheduling rule; the Pod can only be scheduled to a node that satisfies the conditions below. The rule is enforced at scheduling time, but ignored if the node's labels change while the Pod is running.

  • nodeSelectorTerms: defines a list of node selector terms; a node must match one of these terms for the Pod to be scheduled to it.

  • matchExpressions: defines a list of match expressions, each containing a key, an operator, and one or more values. All expressions within a term must be satisfied for a node to match.

In this manifest, matchExpressions defines one condition:

  • key is nodeenv, meaning the node label key to match is nodeenv.

  • operator is In, meaning the label's value must be in the given list.

  • values is ["test","xxx"], meaning the node label's value must be test or xxx.

  • Therefore, this Pod's affinity rule requires it to be scheduled to a node whose nodeenv label has the value test or xxx.

Soft limit configuration

A soft limit only gives preference to nodes carrying the matching label; if none exist, the Pod is still scheduled to another node.

[root@k8s-master ~]# vim pod-nodeaffinity-preferred.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeaffinity-preferred
  namespace: test
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: nodeenv
            operator: In
            values: ["xxx","yyy"]
[root@k8s-master ~]# kubectl apply -f pod-nodeaffinity-preferred.yaml 
pod/pod-nodeaffinity-preferred created
[root@k8s-master ~]# kubectl get pods pod-nodeaffinity-preferred -n test
NAME                         READY   STATUS              RESTARTS   AGE
pod-nodeaffinity-preferred   0/1     ContainerCreating   0          32s
[root@k8s-master ~]# kubectl get pods pod-nodeaffinity-preferred -n test -w
NAME                         READY   STATUS              RESTARTS   AGE
pod-nodeaffinity-preferred   0/1     ContainerCreating   0          36s
pod-nodeaffinity-preferred   1/1     Running             0          37s
[root@k8s-master ~]# kubectl describe pods pod-nodeaffinity-preferred -n test 
Name:         pod-nodeaffinity-preferred
Namespace:    test
Priority:     0
Node:         k8s-node1/192.168.58.232
Start Time:   Thu, 16 Jan 2025 04:28:24 -0500
Labels:       <none>
Annotations:  cni.projectcalico.org/containerID: eab55d3f2b78987484123e4f4b21434f4f1323620026e3946e5fe77476e4a761
              cni.projectcalico.org/podIP: 10.244.36.71/32
              cni.projectcalico.org/podIPs: 10.244.36.71/32
Status:       Running
IP:           10.244.36.71
IPs:
  IP:  10.244.36.71
Containers:
  nginx:
    Container ID:   docker://56be94e1afb802e91e86faf21ccce1925fa7f4204b418e6c5b8ac11024f75fc2
    Image:          nginx:1.17.1
    Image ID:       docker-pullable://nginx@sha256:b4b9b3eee194703fc2fa8afa5b7510c77ae70cfba567af1376a573a967c03dbb
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 16 Jan 2025 04:29:00 -0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zv8s7 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-zv8s7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  49s        default-scheduler  Successfully assigned test/pod-nodeaffinity-preferred to k8s-node1
  Normal  Pulling    <invalid>  kubelet            Pulling image "nginx:1.17.1"
  Normal  Pulled     <invalid>  kubelet            Successfully pulled image "nginx:1.17.1" in 32.615153362s
  Normal  Created    <invalid>  kubelet            Created container nginx
  Normal  Started    <invalid>  kubelet            Started container nginx

 

PodAffinity (Pod affinity)
  • PodAffinity uses already-running Pods as the reference point: it makes a newly created Pod land in the same topology domain as the reference Pod.

  • Available PodAffinity configuration:

    • requiredDuringSchedulingIgnoredDuringExecution (hard limit)

      • namespaces: the namespaces of the reference Pods.

      • topologyKey: the scheduling scope, e.g. kubernetes.io/hostname (each node is its own domain) or beta.kubernetes.io/os (nodes grouped by operating system type).

      • labelSelector: label selector used to match the reference Pods' labels.

        • matchExpressions: list of selector requirements by Pod label, each consisting of:

          • key: the label key

          • values: the label values

          • operator: the relational operator; supports In, NotIn, Exists, DoesNotExist

        • matchLabels: a map of {key: value} pairs, equivalent to a set of matchExpressions using the In operator.

    • preferredDuringSchedulingIgnoredDuringExecution (soft limit)

      • weight: preference weight, in the range 1-100, specifying the priority of this preferred rule.

      • podAffinityTerm: contains the same fields as the hard limit:

        • namespaces

        • topologyKey

        • labelSelector

          • matchExpressions (key, values, operator)

          • matchLabels

  • topologyKey specifies the scope used at scheduling time, for example:

    • kubernetes.io/hostname: each node is its own topology domain

    • beta.kubernetes.io/os: nodes are grouped by operating system type
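Only the hard limit is demonstrated below; a minimal sketch of the soft-limit field layout described above (the weight and label values are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-preferred
  namespace: test
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50                 # priority of this preferred rule, 1-100
        podAffinityTerm:           # same fields as the hard limit
          labelSelector:
            matchExpressions:
            - key: podenv
              operator: In
              values: ["pro"]
          topologyKey: kubernetes.io/hostname
```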

  • Hard limit configuration

    • Create the reference Pod

# vim pod-podaffinity-target.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-target
  namespace: test
  labels:
    podenv: pro
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  nodeName: k8s-node1
[root@k8s-master ~]# kubectl apply -f pod-podaffinity-target.yaml 
pod/pod-podaffinity-target created
[root@k8s-master ~]# kubectl describe pods pod-podaffinity-target -n test 
Name:         pod-podaffinity-target
Namespace:    test
Priority:     0
Node:         k8s-node1/192.168.58.232
Start Time:   Thu, 16 Jan 2025 04:58:54 -0500
Labels:       podenv=pro
Annotations:  cni.projectcalico.org/containerID: 48a68cbe52064a7eb4c3be9db7e24dff3176382ed16d18e9ede5d30312e6425f
              cni.projectcalico.org/podIP: 10.244.36.72/32
              cni.projectcalico.org/podIPs: 10.244.36.72/32
Status:       Running
IP:           10.244.36.72
IPs:
  IP:  10.244.36.72
Containers:
  nginx:
    Container ID:   docker://681c85e860b8e04189abd25d42de0e377cc297d73ef7965871631622704ecd19
    Image:          nginx:1.17.1
    Image ID:       docker-pullable://nginx@sha256:b4b9b3eee194703fc2fa8afa5b7510c77ae70cfba567af1376a573a967c03dbb
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 16 Jan 2025 04:58:58 -0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8vrrt (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-8vrrt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason   Age        From     Message
  ----    ------   ----       ----     -------
  Normal  Pulled   <invalid>  kubelet  Container image "nginx:1.17.1" already present on machine
  Normal  Created  <invalid>  kubelet  Created container nginx
  Normal  Started  <invalid>  kubelet  Started container nginx
  • Create pod-podaffinity-required

[root@k8s-master ~]# vim pod-podaffinity-required.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-required
  namespace: test
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:            # affinity settings
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: podenv        # select Pods with the podenv label
            operator: In
            values: ["xxx","yyy"]   # match the values "xxx" or "yyy"
        topologyKey: kubernetes.io/hostname
[root@k8s-master ~]# kubectl apply -f pod-podaffinity-required.yaml 
pod/pod-podaffinity-required created
[root@k8s-master ~]# kubectl describe pod pod-podaffinity-required -n test
Name:         pod-podaffinity-required
Namespace:    test
Priority:     0
Node:         <none>
Labels:       <none>
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  nginx:
    Image:        nginx:1.17.1
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l7kjw (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-l7kjw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  39s   default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match pod affinity rules, 2 node(s) didn't match pod affinity/anti-affinity rules.
  Warning  FailedScheduling  38s   default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match pod affinity rules, 2 node(s) didn't match pod affinity/anti-affinity rules.
[root@k8s-master ~]# vim pod-podaffinity-required.yaml 
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-required
  namespace: test
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:            # affinity settings
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: podenv        # select Pods with the podenv label
            operator: In
            values: ["pro","yyy"]   # match the values "pro" or "yyy"
        topologyKey: kubernetes.io/hostname
[root@k8s-master ~]# kubectl delete -f pod-podaffinity-required.yaml 
pod "pod-podaffinity-required" deleted
[root@k8s-master ~]# kubectl apply -f pod-podaffinity-required.yaml 
pod/pod-podaffinity-required created
[root@k8s-master ~]# kubectl describe pod pod-podaffinity-required -n test
Name:         pod-podaffinity-required
Namespace:    test
Priority:     0
Node:         k8s-node1/192.168.58.232
Start Time:   Thu, 16 Jan 2025 05:09:42 -0500
Labels:       <none>
Annotations:  cni.projectcalico.org/containerID: c459af771605b41fd74ae294344118acbdc2cd8fed3ae242982506c8eda9ad31
              cni.projectcalico.org/podIP: 10.244.36.73/32
              cni.projectcalico.org/podIPs: 10.244.36.73/32
Status:       Running
IP:           10.244.36.73
IPs:
  IP:  10.244.36.73
Containers:
  nginx:
    Container ID:   docker://501cb02e356ddb23e7e11fd48ac0403f83221afbba9d18c608f3415533fe4290
    Image:          nginx:1.17.1
    Image ID:       docker-pullable://nginx@sha256:b4b9b3eee194703fc2fa8afa5b7510c77ae70cfba567af1376a573a967c03dbb
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 16 Jan 2025 05:09:45 -0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-24cmw (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-24cmw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  6s         default-scheduler  Successfully assigned test/pod-podaffinity-required to k8s-node1
  Normal  Pulled     <invalid>  kubelet            Container image "nginx:1.17.1" already present on machine
  Normal  Created    <invalid>  kubelet            Created container nginx
  Normal  Started    <invalid>  kubelet            Started container nginx
  • Taints on Nodes: one node in the cluster carries the taint node-role.kubernetes.io/master:, meaning it accepts no Pods unless they are marked as tolerating that taint. Since this Pod declares no matching toleration, it cannot be scheduled to that node.

  • Pod Affinity Rules: in the first attempt, the Pod could not be scheduled anywhere because no node satisfied its requiredDuringSchedulingIgnoredDuringExecution pod affinity rule. Per the events, 2 of the 3 nodes did not match the rule, i.e. no Pod matching the labelSelector was running on them.
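For reference, the taint-related part of the failure could be worked around, if scheduling to the master were actually desired, by adding a toleration for the taint shown in the events. This fragment is a sketch and not part of the original manifests:

```yaml
# Sketch: tolerate the master taint from the FailedScheduling event.
# Shown only to illustrate the relationship between the taint and the failure.
spec:
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
```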

PodAntiAffinity (Pod anti-affinity)
  • PodAntiAffinity is the opposite of PodAffinity: it ensures that Pods carrying specific labels are not scheduled onto the same node (or, more generally, the same topology domain). It is well suited to naturally mutually exclusive components, and to Pods that should be spread out for fault tolerance and performance.

  • PodAntiAffinity uses running Pods as the reference point and places a newly created Pod in a different topology domain from the reference Pod. Its configuration fields and options are the same as PodAffinity's.

[root@k8s-master ~]# vim pod-podantiaffinity-required.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-podantiaffinity-required
  namespace: test
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: podenv
            operator: In
            values: ["pro"]
        topologyKey: kubernetes.io/hostname
[root@k8s-master ~]# kubectl apply -f pod-podantiaffinity-required.yaml
pod/pod-podantiaffinity-required created
[root@k8s-master ~]# kubectl get pod pod-podantiaffinity-required -n test -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
pod-podantiaffinity-required   1/1     Running   0          9s    10.244.169.131   k8s-node2   <none>           <none>
[root@k8s-master ~]# kubectl get pod pod-podantiaffinity-required -n test -o wide --show-labels
NAME                           READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES   LABELS
pod-podantiaffinity-required   1/1     Running   0          19s   10.244.169.131   k8s-node2   <none>           <none>            <none>
[root@k8s-master ~]# kubectl get nodes --show-labels
NAME         STATUS   ROLES                  AGE   VERSION    LABELS
k8s-master   Ready    control-plane,master   21d   v1.21.10   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-node1    Ready    <none>                 21d   v1.21.10   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux,nodeenv=pro
k8s-node2    Ready    <none>                 21d   v1.21.10   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux,nodeenv=test

The new Pod must not share a node with any Pod carrying the label podenv=pro, so it is scheduled to k8s-node2.
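Only the hard limit is shown above; the soft-limit form spreads Pods when possible but still schedules them when spreading is impossible, which is the usual choice for replica spreading. A minimal sketch using the same labels as the example above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-podantiaffinity-preferred
  namespace: test
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100                  # spread as strongly as possible
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: podenv
              operator: In
              values: ["pro"]
          topologyKey: kubernetes.io/hostname
```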

