
k8s Learning Notes

Table of Contents

I. Pre-installation Preparation

II. Installation
  1. Install kubelet, kubeadm, kubectl
  2. Bootstrap the cluster with kubeadm
    1. Download the images each machine needs
    2. Initialize the master node
    3. Join the worker nodes
  3. Deploy the dashboard
    1. Install on the master node
    2. Set the access port
    3. Create an access account
    4. Get the access token

III. Hands-on
  1. Ways to create resources
  2. Namespace
  3. Pod
    Command-line approach
    Creating a Pod with YAML
    Running multiple containers in one Pod
    Test: starting two nginx in one Pod (port conflict)
  4. Deployment
    1. Self-healing
    2. Multiple replicas
    3. Scaling up and down
    4. Failover
    5. Rolling update
    6. Rollback
    7. More
  5. Service


I. Pre-installation Preparation

  • A compatible Linux host. The Kubernetes project provides generic instructions for Linux distributions based on Debian and Red Hat, as well as distributions without a package manager.
  • 2 GB or more of RAM per machine (less leaves little room for your applications).
  • 2 or more CPU cores.
  • Full network connectivity between all machines in the cluster (a public or private network is fine).
  • Unique hostname, MAC address and product_uuid for every node; see the official kubeadm documentation for details.
  • Certain ports must be open on your machines; see the official kubeadm documentation for details.
  • Swap disabled. You must disable swap for the kubelet to work properly.
# Give each machine in the cluster its own hostname; avoid duplicates
hostnamectl set-hostname xxxx

# Set SELinux to permissive mode (effectively disabling it; a Linux security setting)
# Temporarily
sudo setenforce 0
# Permanently
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Check current memory usage (-m shows values in MiB)
free -m

# Disable swap
# Temporarily
swapoff -a
# Permanently
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# Apply the sysctl settings
sudo sysctl --system

II. Installation

1. Install kubelet, kubeadm, kubectl

# Tell Linux where to download the Kubernetes packages from
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

# Install
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes

# Start kubelet and enable it on boot
# Afterwards, systemctl status kubelet shows it stopping and starting over and over: it loops waiting for instructions from kubeadm, which is expected at this stage
sudo systemctl enable --now kubelet
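
A quick sanity check (not part of the original notes) to confirm that all three tools were installed at the expected version:

# Optional: each of these should report v1.20.9
kubeadm version
kubelet --version
kubectl version --client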

2. Bootstrap the cluster with kubeadm

1. Download the images each machine needs

# Apart from the kubelet, every component runs as a container image; the kubelet pulls and runs the others
sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
kube-controller-manager:v1.20.9
kube-scheduler:v1.20.9
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF

chmod +x ./images.sh && ./images.sh

2. Initialize the master node

# Add the master hostname mapping on every machine
echo "192.168.31.27  cluster-endpoint" >> /etc/hosts

# Initialize the master node
# Run this on the master node only
# The IP in --apiserver-advertise-address must be the master's address, and --control-plane-endpoint must be the master's hostname
kubeadm init \
--apiserver-advertise-address=192.168.31.27 \
--control-plane-endpoint=cluster-endpoint \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.169.0.0/16

# None of the network ranges may overlap; Docker itself occupies 172.17.0.1/16 after installation

# On the master node, this command shows whether the setup succeeded

kubectl get nodes

Save the output of kubeadm init; it is needed later.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

# Step 1: just copy and run these
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

# Step 2: a pod network add-on still has to be installed
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

# Add another master node
kubeadm join cluster-endpoint:6443 --token 8yjd6q.r660tz3f0myr529a \
    --discovery-token-ca-cert-hash sha256:1546719bf3b2b6fa4afce5d4b8bf04602cd0287417d0d32bbe4ff63aec00afa6 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

# Add a worker node (this token is valid for 24 hours)
kubeadm join cluster-endpoint:6443 --token 8yjd6q.r660tz3f0myr529a \
    --discovery-token-ca-cert-hash sha256:1546719bf3b2b6fa4afce5d4b8bf04602cd0287417d0d32bbe4ff63aec00afa6

# Download the calico network add-on manifest
curl https://docs.projectcalico.org/manifests/calico.yaml -O
# kubectl version shows v1.20.9; the matching calico release is v3.20
curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O

# The master was initialized with --pod-network-cidr=192.169.0.0/16 instead of calico's default of 192.168.0.0/16,
# so the corresponding CIDR setting inside calico.yaml must be changed to match

# Install calico into the cluster
kubectl apply -f calico.yaml

# See what is deployed in the cluster; what Docker calls a container, k8s wraps in a Pod
# -A lists all namespaces; without it only the default namespace is shown
# -w watches the output, e.g. a pod being initialized
# watch -n 1 kubectl get pods -A   re-checks the status every second
# kubectl get pod -owide shows more pod details, including the IP
kubectl get pods -A

3. Join the worker nodes

# Run the join command saved earlier on each worker node machine
kubeadm join cluster-endpoint:6443 --token 8yjd6q.r660tz3f0myr529a \
    --discovery-token-ca-cert-hash sha256:1546719bf3b2b6fa4afce5d4b8bf04602cd0287417d0d32bbe4ff63aec00afa6

If the join hangs, disable the firewall on the master node.

[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
    [WARNING Hostname]: hostname "k8s-node1" could not be reached
    [WARNING Hostname]: hostname "k8s-node1": lookup k8s-node1 on 192.168.31.1:53: no such host

What if the token has expired? Generate a new one (run on the master node):

kubeadm token create --print-join-command
kubeadm join cluster-endpoint:6443 --token qwzp8v.qfwfeh7x3pdc3a1r     --discovery-token-ca-cert-hash sha256:1546719bf3b2b6fa4afce5d4b8bf04602cd0287417d0d32bbe4ff63aec00afa6 

3. Deploy the dashboard

1. Install on the master node

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.3.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

2. Set the access port

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Change type: ClusterIP to type: NodePort
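
If you prefer not to edit the Service interactively, the same change can be made with a one-line patch (a sketch, not from the original notes; it targets the same kubernetes-dashboard Service and namespace used above):

# Non-interactive alternative: patch the Service type directly
kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'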

# Find the assigned port and open it in the security group
kubectl get svc -A |grep kubernetes-dashboard

Access: https://<any cluster node IP>:<port> (mine is 30427)

https://139.198.165.238:30427  

3. Create an access account

# Create an access account; prepare a yaml file: vi dash.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

kubectl apply -f dash.yaml

4. Get the access token

# Get the access token
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

# Example output; paste this token into the dashboard login page
eyJhbGciOiJSUzI1NiIsImtpZCI6IklYTTRxZHNTb0lkclltRnN0aDY2OXJ3RzlhUkxucjNISG1tbW44X3VFdVEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWR6aHE0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJjYzY0ODdiYy1mMWFhLTQwN2ItOTFkZC0yN2I3ODdlZGU2MjQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.d9rUEo5u0-DRYnXfUn3nRhVTncCWDsijRQYQwTmeNdL0U8Dv8k_yUrJ4W1kV2AP9VArt-pv4U3eXM2ts875CT-3L6vpg6JE42WDtJy4ama92NLiX4n7HFdugThhoowAV53Ac_6O4YaTc7o-TROplowLkHZ4hDjo9OYo1u21QhhGfq9uGkBz6jsvUhCe5oTpxFmmjimUN3_yUsUFf6nwS0dWk_d986A-de0hLfj4-wC1_soWpFVIK7j0wjHk2brQbultH07YPsXb-c_brixl0QvsUqtCka9OUxSQ1nlgCqoVVWK30RwSw7GbDkzh798zfkONu_ofHejw_srxvmeqoPw

III. Hands-on

1. Ways to create resources

  • Command line
  • YAML

2. Namespace

Namespaces partition cluster resources into isolated groups. By default they isolate only resources, not the network; network isolation needs extra configuration (for example a NetworkPolicy, see the sketch at the end of this subsection).

# List the namespaces in the cluster (ns is short for namespace)
kubectl get ns

# List the pods in a given namespace; without -n it defaults to default, -A means all namespaces.
# Resources created without a namespace end up in the default namespace
kubectl get pods -n kubernetes-dashboard

# Create a custom namespace
kubectl create ns hello

# Delete a custom namespace. Do not delete the system namespaces; deleting default is refused.
# Deleting a namespace also deletes the resources inside it.
kubectl delete ns hello

Creating a namespace with YAML

apiVersion: v1
kind: Namespace
metadata:
  name: hello

A resource created from YAML is best deleted with the same YAML:

kubectl apply -f hello.yaml

kubectl delete -f hello.yaml
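
As noted above, namespaces do not isolate the network by default. Below is a minimal sketch of how that could be configured with a NetworkPolicy, assuming a CNI plugin that enforces policies (such as the Calico installed earlier); the policy name is hypothetical and the hello namespace is just the example from this section:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces   # hypothetical name
  namespace: hello
spec:
  podSelector: {}              # applies to every pod in the hello namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}          # only pods in the same namespace may connect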

3. Pod

A Pod is a group of running containers; it is the smallest deployable unit of an application in Kubernetes.

(k8s wraps Docker containers one more layer into a Pod; a Pod can hold one container or several, which together form one atomic unit.)

Command-line approach

# Create a pod
kubectl run mynginx --image=nginx

# Show the pod's description
kubectl describe pod mynginx

Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  13m   default-scheduler  Successfully assigned default/mynginx to k8s-node2
  Normal  Pulling    13m   kubelet            Pulling image "nginx"
  Normal  Pulled     12m   kubelet            Successfully pulled image "nginx" in 49.9333842s
  Normal  Created    12m   kubelet            Created container mynginx
  Normal  Started    12m   kubelet            Started container mynginx

# The pod was scheduled to the worker node k8s-node2; underneath it is still a Docker container, visible there with docker ps

# Delete a pod
kubectl delete pod mynginx
# -n selects the namespace
# kubectl delete pod mynginx -n default
# Delete several pods at once, separated by spaces
kubectl delete pod myapp mynginx -n default

Creating a Pod with YAML

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: mynginx
  name: mynginx
  namespace: default
spec:
  containers:
  - image: nginx
    name: mynginx

kubectl apply -f pod.yaml

kubectl delete -f pod.yaml

# View pod logs; only pods have logs, so "pod" can be omitted. -f follows the log stream
kubectl logs mynginx
kubectl logs -f mynginx

# k8s assigns every pod its own IP
# --pod-network-cidr=192.169.0.0/16 was set when initializing the master
# The pod can be reached via the pod IP plus the port of the container running inside it
# Any machine in the cluster, and any application in it, can reach the pod through that IP
# At this point it is still not reachable from outside the cluster
# curl 192.169.169.132
kubectl get pod -owide
NAME      READY   STATUS    RESTARTS   AGE     IP                NODE        NOMINATED NODE   READINESS GATES
mynginx   1/1     Running   0          4m17s   192.169.169.132   k8s-node2   <none>           <none>

# Enter the pod; this can also be done from the dashboard via its Exec button
kubectl exec -it mynginx -- /bin/bash

When creating a pod from the dashboard, pick the right namespace first; otherwise the namespace has to be specified in the YAML.

The dashboard pages offer logs, describe, delete, exec and so on, matching the commands above.

Running multiple containers in one Pod

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: myapp
  name: myapp
spec:
  containers:
  - image: nginx
    name: nginx
  - image: tomcat:8.5.68
    name: tomcat

# Check the IPs
kubectl get pod -owide
NAME      READY   STATUS    RESTARTS   AGE     IP                NODE        NOMINATED NODE   READINESS GATES
myapp     2/2     Running   0          3m53s   192.169.36.66     k8s-node1   <none>           <none>
mynginx   1/1     Running   0          36m     192.169.169.132   k8s-node2   <none>           <none>

# Access nginx
curl 192.169.36.66
# Access tomcat
curl 192.169.36.66:8080

# Containers inside the same pod reach each other via 127.0.0.1

 

Test: starting two nginx in one Pod (port conflict)

# myapp-2 fails
kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
myapp     2/2     Running   0          19m
myapp-2   1/2     Error     1          51s
mynginx   1/1     Running   0          51m

# Troubleshooting
kubectl describe pod myapp-2
# Check the logs, either on the command line or in the dashboard
# -c selects the container inside the pod; it is required when the pod has more than one container
# This one is fine
kubectl logs -c nginx01 myapp-2
# This one reports "Address already in use"; k8s keeps retrying it
kubectl logs -c nginx02 myapp-2

Food for thought: what if you really need two nginx containers in one pod? Give them different ports? How? (One possible approach is sketched below.)
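
A minimal sketch of one way to do it, assuming the second nginx gets its own server config from a ConfigMap so that it listens on 8080 instead of 80; the names nginx02-conf and myapp-3 are hypothetical, not from the original notes:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx02-conf            # hypothetical ConfigMap holding an alternative server config
data:
  default.conf: |
    server {
      listen 8080;              # second nginx listens on 8080 to avoid the port clash
      location / {
        root /usr/share/nginx/html;
      }
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp-3                 # hypothetical pod name
spec:
  containers:
  - image: nginx
    name: nginx01               # first nginx keeps the default port 80
  - image: nginx
    name: nginx02
    volumeMounts:
    - name: conf
      mountPath: /etc/nginx/conf.d   # overrides the default server block of the image
  volumes:
  - name: conf
    configMap:
      name: nginx02-conf

The same idea applies to any container whose listen port is driven by a config file: mount a different config per container instead of relying on the image default.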

4. Deployment

A Deployment controls Pods, giving them multiple replicas, self-healing, scaling and other capabilities.

1. Self-healing

# Create a pod the plain way
kubectl run mynginx --image=nginx

# Create a pod through a deployment (deployment can be abbreviated to deploy)
kubectl create deployment mytomcat --image=tomcat:8.5.68

# Compare the two approaches to see k8s self-healing:
# after kubectl delete pod mynginx, kubectl get pod shows mynginx is really gone
# a deployment-managed pod gets a random name, e.g. mytomcat-6f5f895f4f-668dp; delete it and a new one
# is started immediately, as if recovering from a crash (self-healing)

# List deployments (deploy for short); -n selects the namespace
kubectl get deployment

# Delete a deployment (deploy for short)
kubectl delete deployment -n default mytomcat

2. Multiple replicas

Deploy from the command line

# Create three replicas at once
kubectl create deploy my-dep --image=nginx --replicas=3

Deploy from YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-dep
  template:
    metadata:
      labels:
        app: my-dep
    spec:
      containers:
      - image: nginx
        name: nginx

Create from the dashboard form

3. Scaling up and down

# Scale up
kubectl scale deploy/my-dep --replicas=5

# Scale down; k8s picks pods at random and shuts down the required number
kubectl scale deploy/my-dep --replicas=2

# Scaling can also be done by editing the YAML: change replicas under spec
kubectl edit deploy my-dep

The corresponding scale action is also available in the dashboard.

4. Failover

NAME                      READY   STATUS    RESTARTS   AGE   IP                NODE        NOMINATED NODE   READINESS GATES
my-dep-5b7868d854-5wp9t   1/1     Running   0          28m   192.169.169.134   k8s-node2   <none>           <none>
my-dep-5b7868d854-cnlxs   1/1     Running   0          28m   192.169.36.70     k8s-node1   <none>           <none>
my-dep-5b7868d854-djbfq   1/1     Running   0          28m   192.169.169.135   k8s-node2   <none>           <none>

# Self-healing
# docker stop xxx  stop the container behind my-dep-5b7868d854-cnlxs on its node to simulate a crash
# k8s starts a new one; the old container is still shown by docker ps -a, in the Exited state
kubectl get pod -w
NAME                      READY   STATUS    RESTARTS   AGE
my-dep-5b7868d854-5wp9t   1/1     Running   0          29m
my-dep-5b7868d854-cnlxs   1/1     Running   0          29m
my-dep-5b7868d854-djbfq   1/1     Running   0          29m
my-dep-5b7868d854-cnlxs   0/1     Completed   0          29m
my-dep-5b7868d854-cnlxs   1/1     Running     1          29m

# Failover
# After manually shutting down node1 and waiting about 5 minutes, cnlxs is terminated and a new pod, k9977, is started on node2; that is failover
# Until node1 is started again, cnlxs stays in Terminating; only then does the termination complete
kubectl get pod -w
NAME                      READY   STATUS    RESTARTS   AGE
my-dep-5b7868d854-5wp9t   1/1     Running   0          29m
my-dep-5b7868d854-cnlxs   1/1     Running   0          29m
my-dep-5b7868d854-djbfq   1/1     Running   0          29m
my-dep-5b7868d854-cnlxs   0/1     Completed   0          29m
my-dep-5b7868d854-cnlxs   1/1     Running     1          29m
my-dep-5b7868d854-cnlxs   1/1     Running     1          37m
my-dep-5b7868d854-cnlxs   1/1     Terminating   1          42m
my-dep-5b7868d854-k9977   0/1     Pending       0          0s
my-dep-5b7868d854-k9977   0/1     Pending       0          0s
my-dep-5b7868d854-k9977   0/1     ContainerCreating   0          0s
my-dep-5b7868d854-k9977   0/1     ContainerCreating   0          9s
my-dep-5b7868d854-k9977   1/1     Running             0          11s

kubectl get pod -owide
NAME                      READY   STATUS        RESTARTS   AGE     IP                NODE        NOMINATED NODE   READINESS GATES
my-dep-5b7868d854-5wp9t   1/1     Running       0          46m     192.169.169.134   k8s-node2   <none>           <none>
my-dep-5b7868d854-cnlxs   0/1     Terminating   1          46m     <none>            k8s-node1   <none>           <none>
my-dep-5b7868d854-djbfq   1/1     Running       0          46m     192.169.169.135   k8s-node2   <none>           <none>
my-dep-5b7868d854-k9977   1/1     Running       0          3m54s   192.169.169.137   k8s-node2   <none>           <none>

5. Rolling update

This is essentially a zero-downtime upgrade: instead of stopping all the old pods at once, a new pod is started and an old one is stopped, one at a time.

# Get the deploy as YAML to find the current image (- image: nginx); the pod description also shows the version
kubectl get deploy my-dep -oyaml

# nginx=nginx:1.16.1 sets the container named nginx (whose old image was nginx) to the new image nginx:1.16.1
# In practice the update is usually done through YAML instead
kubectl set image deploy/my-dep nginx=nginx:1.16.1 --record

kubectl get pod -w
NAME                      READY   STATUS    RESTARTS   AGE
my-dep-5b7868d854-5wp9t   1/1     Running   0          18h
my-dep-5b7868d854-djbfq   1/1     Running   0          18h
my-dep-5b7868d854-k9977   1/1     Running   0          17h
my-dep-6b48cbf4f9-sgnfc   0/1     Pending   0          0s
my-dep-6b48cbf4f9-sgnfc   0/1     Pending   0          0s
my-dep-6b48cbf4f9-sgnfc   0/1     ContainerCreating   0          0s
my-dep-6b48cbf4f9-sgnfc   0/1     ContainerCreating   0          0s
my-dep-6b48cbf4f9-sgnfc   1/1     Running             0          40s
my-dep-5b7868d854-k9977   1/1     Terminating         0          17h
my-dep-6b48cbf4f9-tfpb8   0/1     Pending             0          0s
my-dep-6b48cbf4f9-tfpb8   0/1     Pending             0          0s
my-dep-6b48cbf4f9-tfpb8   0/1     ContainerCreating   0          0s
my-dep-5b7868d854-k9977   1/1     Terminating         0          17h
my-dep-6b48cbf4f9-tfpb8   0/1     ContainerCreating   0          2s
my-dep-6b48cbf4f9-tfpb8   1/1     Running             0          3s
my-dep-5b7868d854-djbfq   1/1     Terminating         0          18h
my-dep-6b48cbf4f9-kndkc   0/1     Pending             0          0s
my-dep-6b48cbf4f9-kndkc   0/1     Pending             0          0s
my-dep-6b48cbf4f9-kndkc   0/1     ContainerCreating   0          0s
my-dep-5b7868d854-k9977   0/1     Terminating         0          17h
my-dep-5b7868d854-djbfq   1/1     Terminating         0          18h
my-dep-6b48cbf4f9-kndkc   0/1     ContainerCreating   0          1s
my-dep-5b7868d854-djbfq   0/1     Terminating         0          18h
my-dep-5b7868d854-djbfq   0/1     Terminating         0          18h
my-dep-5b7868d854-djbfq   0/1     Terminating         0          18h
my-dep-5b7868d854-k9977   0/1     Terminating         0          17h
my-dep-5b7868d854-k9977   0/1     Terminating         0          17h
my-dep-6b48cbf4f9-kndkc   1/1     Running             0          17s
my-dep-5b7868d854-5wp9t   1/1     Terminating         0          18h
my-dep-5b7868d854-5wp9t   1/1     Terminating         0          18h
my-dep-5b7868d854-5wp9t   0/1     Terminating         0          18h
my-dep-5b7868d854-5wp9t   0/1     Terminating         0          18h
my-dep-5b7868d854-5wp9t   0/1     Terminating         0          18h

How is a rolling update actually written as YAML in practice, and what is the difference between the two versions? (A sketch follows below.)
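A minimal sketch, not from the original notes: the "two versions" are simply two applies of the same Deployment YAML in which only spec.template has changed (here the image tag); the optional strategy block is an assumption added to show how the swap between old and new ReplicaSets can be tuned.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-dep
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1             # at most one extra pod above the desired count during the update
      maxUnavailable: 0       # never drop below the desired count
  selector:
    matchLabels:
      app: my-dep
  template:
    metadata:
      labels:
        app: my-dep
    spec:
      containers:
      - image: nginx:1.16.1   # the only change from the previous version of this file
        name: nginx

Re-running kubectl apply -f my-dep.yaml with the changed image triggers the same rolling update that kubectl set image did.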

6. Rollback

# Show the revision history; revisions created with --record include the command that made them
kubectl rollout history deploy/my-dep

# Show the details of one revision
kubectl rollout history deploy/my-dep --revision=2

# Roll back to the previous revision; again one pod is started and one stopped at a time
kubectl rollout undo deploy/my-dep

# Roll back to a specific revision
kubectl rollout undo deploy/my-dep --to-revision=2

7. More

Besides Deployment, k8s has other resource types such as StatefulSet, DaemonSet and Job. Collectively they are called workloads.

Stateful applications are deployed with StatefulSet, stateless applications with Deployment. (A small StatefulSet sketch follows below.)

Workload Resources | Kubernetes
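
For comparison, a minimal StatefulSet sketch (not from the original notes; the names web and nginx are illustrative): unlike a Deployment, each replica gets a stable, ordered name (web-0, web-1, ...) and a stable DNS identity through a headless Service.

apiVersion: v1
kind: Service
metadata:
  name: nginx              # headless Service required by the StatefulSet
spec:
  clusterIP: None
  selector:
    app: web
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx       # gives pods stable DNS names such as web-0.nginx
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx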

5. Service

A Service provides service discovery and load balancing for Pods: an abstraction that exposes a group of Pods as a network service.

# Expose the deploy: the Service listens on port 8000 and forwards to port 80 inside the pods
kubectl expose deploy my-dep --port=8000 --target-port=80

# Check it
kubectl get service

# Note: a Service selects its group of pods by the deploy's labels (by default app: {name}); --show-labels shows the labels
# curl serviceIp:servicePort
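
The YAML equivalent of the kubectl expose command above, as a sketch; the selector assumes the app: my-dep label carried by the my-dep Deployment from the earlier section:

apiVersion: v1
kind: Service
metadata:
  name: my-dep
spec:
  selector:
    app: my-dep        # matches the pods created by the my-dep Deployment
  ports:
  - port: 8000         # port of the Service itself
    targetPort: 80     # port of the container inside the pods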

*The notes above were taken while following 雷丰阳's video course, together with some personal understanding; corrections are welcome.
