
Deploying a k8s cluster with an Ansible playbook

1. Deploy the Ansible cluster
 

See the earlier CSDN post "使用python脚本一个简单的搭建ansible集群" (setting up a simple Ansible cluster with a Python script).

2. Building the k8s cluster with Ansible:


1. Host planning:

Node      IP address         OS           Spec
server    192.168.174.150    CentOS 7.9   2 GB RAM, 2 cores
client1   192.168.174.151    CentOS 7.9   2 GB RAM, 2 cores
client2   192.168.174.152    CentOS 7.9   2 GB RAM, 2 cores


The Ansible inventory file contains the following (a quick connectivity check follows the listing):

[clients_all]
server
client1
client2
[clients_master]
server
[clients_client]
client1
client2
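The plays address the hosts by their inventory names, so the control node must be able to resolve server, client1 and client2. A minimal sketch of that setup plus a reachability check, assuming /etc/hosts is used for name resolution and the inventory above is the default /etc/ansible/hosts (both assumptions, adjust to your environment):

# Map the inventory host names to the planned IP addresses
cat >> /etc/hosts <<'EOF'
192.168.174.150 server
192.168.174.151 client1
192.168.174.152 client2
EOF

# Verify every host in the clients_all group answers over SSH
ansible clients_all -m ping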

2. Configure the yum repositories

---
- hosts: clients_all
  gather_facts: no
  tasks:
    - name: Configure the local yum repository
      block:
        - name: Mount /dev/cdrom
          mount:
            src: /dev/cdrom
            path: /mnt/cdrom
            fstype: iso9660
            opts: defaults
            state: mounted
        - name: Remove any existing /dev/cdrom line
          lineinfile:
            path: /etc/fstab
            regexp: '^\/dev\/cdrom'
            state: absent
        - name: Add the cdrom mount to /etc/fstab
          lineinfile:
            path: /etc/fstab
            line: '/dev/cdrom /mnt/cdrom iso9660 defaults  0  0'
            insertafter: EOF
        - name: Create the repo file and empty it
          shell: 'echo "" > /etc/yum.repos.d/centos-local.repo'
        - name: Write centos-local.repo
          blockinfile:
            path: /etc/yum.repos.d/centos-local.repo
            # block: | keeps the text block as plain text, with no extra formatting or processing
            block: |
              [centos7.9]
              name=centos7.9
              baseurl=file:///mnt/cdrom
              enabled=1
              gpgcheck=0
            create: yes
            marker: '#{mark} centos7.9'
        - name: Clean the yum cache and list the repositories
          shell: yum clean all && yum repolist
    - name: Configure the remote Aliyun repository
      block:
        - name: Install wget with yum
          yum:
            name: wget
        - name: Download the CentOS repo file
          get_url:
            dest: /etc/yum.repos.d/CentOS-Base.repo
            url: http://mirrors.aliyun.com/repo/Centos-7.repo
    - name: Configure the EPEL repository
      block:
        - name: Install epel-release with yum
          yum:
            name: epel-release
        - name: Clean the yum cache and list the repositories
          shell: yum clean all && yum repolist
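Each step below can be saved as its own playbook file and run separately against the inventory. A minimal invocation, assuming the play above is saved as repo.yml (the file name is only illustrative) and the inventory is the default /etc/ansible/hosts:

# Check the syntax first, then run the playbook
ansible-playbook repo.yml --syntax-check
ansible-playbook repo.yml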

3. Install required tools:

- hosts: clients_all
  gather_facts: no
  tasks:
    - name: Install required tools
      yum:
        name: bash-completion,vim,net-tools,tree,psmisc,lrzsz,dos2unix

4. Disable the firewall and SELinux:

- hosts: clients_all
  gather_facts: no
  tasks:
    - name: Disable SELinux
      selinux:
        state: disabled
    - name: Stop and disable the firewalld service
      service:
        name: firewalld
        state: stopped
        enabled: false
    - name: Stop and disable the iptables service
      service:
        name: iptables
        state: stopped
        enabled: false
      register: service_status
      ignore_errors: True
    - name: Handle the case where the service does not exist
      fail:
        msg: "Service does not exist"
      when: service_status is failed
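Note that the SELinux change only takes full effect after a reboot (the hosts are rebooted later in step 6). An optional ad-hoc spot-check from the control node, not part of the original playbook:

# Check the on-disk SELinux setting and confirm firewalld is no longer enabled
ansible clients_all -m shell -a "grep ^SELINUX= /etc/selinux/config; systemctl is-enabled firewalld || true"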

5. Chrony time synchronization

- hosts: clients_all
  gather_facts: no
  tasks:
    - name: Install chrony with yum on all nodes
      yum:
        name: chrony
    - name: Enable chronyd at boot and restart it
      service:
        name: chronyd
        state: restarted
        enabled: true
- hosts: clients_master
  gather_facts: no
  tasks:
    - name: Edit chrony.conf on the master node
      block:
        - name: Set the network allowed to synchronize
          lineinfile:
            path: /etc/chrony.conf
            regexp: '^#allow 192.168.0.0\/16'
            line: 'allow 192.168.174.0/24'
            backrefs: yes
        - name: Set the local stratum
          lineinfile:
            path: /etc/chrony.conf
            regexp: '^#local stratum 10'
            line: 'local stratum 10'
            backrefs: yes
    - name: Enable chronyd at boot and restart it
      service:
        name: chronyd
        state: restarted
        enabled: true
    - name: Turn on NTP synchronization on the master node
      shell: timedatectl set-ntp true
- hosts: clients_client
  gather_facts: no
  tasks:
    - name: Edit chrony.conf on the client nodes
      block:
        - name: Remove the old time server entries
          lineinfile:
            path: /etc/chrony.conf
            regexp: '^server'
            state: absent
        - name: Add the master node as the time server
          lineinfile:
            path: /etc/chrony.conf
            line: 'server 192.168.174.150 iburst'
            insertbefore: '^.*\bline2\b.*$'
    - name: Enable chronyd at boot and restart it
      service:
        name: chronyd
        state: restarted
        enabled: true
    - name: Turn on NTP synchronization
      shell: timedatectl set-ntp true
    - name: Wait 5 seconds
      shell: sleep 5
    - name: Check the synchronization status
      shell: chronyc sources -v | sed -n '/^\^\*/p'
      register: command_output
    # Capture the command output in command_output and print it to the console
    - name: Output of chronyc sources -v
      debug:
        var: command_output.stdout_lines

6. Disable the swap partition, tune the Linux kernel parameters, and enable IPVS

- hosts: clients_all
  gather_facts: no
  tasks:
    - name: Disable the swap partition
      lineinfile:
        path: /etc/fstab
        regexp: '^\/dev\/mapper\/centos-swap'
        line: '#/dev/mapper/centos-swap swap                    swap    defaults        0 0'
        backrefs: yes
    - name: Tune the Linux kernel parameters
      block:
        - name: Edit /etc/sysctl.d/kubernetes.conf to enable bridge filtering and IP forwarding
          blockinfile:
            path: /etc/sysctl.d/kubernetes.conf
            block: |
              net.bridge.bridge-nf-call-ip6tables = 1
              net.bridge.bridge-nf-call-iptables = 1
              net.ipv4.ip_forward = 1
            create: yes
            marker: '#{mark} kubernetes'
        - name: Reload the sysctl configuration
          shell: sysctl -p
        - name: Load the br_netfilter module
          shell: modprobe br_netfilter
        - name: Check that the br_netfilter module is loaded
          shell: lsmod | grep br_netfilter
          register: lsmod_output
        # Capture the command output in lsmod_output and print it to the console
        - name: Output of lsmod | grep br_netfilter
          debug:
            var: lsmod_output.stdout_lines
    - name: Configure IPVS support
      block:
        - name: Install ipset and ipvsadm
          yum:
            name: ipset,ipvsadm
        - name: Write the modules to load into a script file
          blockinfile:
            path: /etc/sysconfig/modules/ipvs.modules
            block: |
              #! /bin/bash
              modprobe -- ip_vs
              modprobe -- ip_vs_rr
              modprobe -- ip_vs_wrr
              modprobe -- ip_vs_sh
              modprobe -- nf_conntrack_ipv4
            create: yes
            marker: '#{mark} ipvs'
        - name: Make the script file executable
          file:
            path: /etc/sysconfig/modules/ipvs.modules
            mode: '0755'
        - name: Run the script file
          # the script already exists on the remote host, so run it with shell
          shell: /bin/bash /etc/sysconfig/modules/ipvs.modules
        - name: Check that the modules are loaded
          shell: lsmod | grep -e ip_vs -e nf_conntrack_ipv4
          register: lsmod_ipv4_output
        # Capture the command output in lsmod_ipv4_output and print it to the console
        - name: Output of lsmod | grep -e ip_vs -e nf_conntrack_ipv4
          debug:
            var: lsmod_ipv4_output.stdout_lines
    - name: Reboot the hosts
      reboot:
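Once the hosts are back up, it is worth confirming that the fstab change and the IPVS modules survived the reboot. An optional ad-hoc check run from the control node (not part of the original playbook):

# Swap should show 0B and the ip_vs / nf_conntrack_ipv4 modules should be listed
ansible clients_all -m shell -a "free -h; lsmod | grep -e ip_vs -e nf_conntrack_ipv4"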

7. Install Docker

- hosts: clients_all
  gather_facts: no
  tasks:
    - name: Install and configure Docker
      block:
        - name: Add the Docker repository locally
          get_url:
            dest: /etc/yum.repos.d/docker-ce.repo
            url: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
        - name: Install the pinned Docker version
          shell: yum install -y --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7
        - name: Create the /etc/docker directory
          file:
            path: /etc/docker
            state: directory
        - name: Write /etc/docker/daemon.json
          # daemon.json is strict JSON, so write the whole file; blockinfile markers
          # would insert comment lines that Docker cannot parse
          copy:
            dest: /etc/docker/daemon.json
            content: |
              {
                "storage-driver": "devicemapper",
                "exec-opts": ["native.cgroupdriver=systemd"],
                "registry-mirrors": ["https://ja9e22yz.mirror.aliyuncs.com"]
              }
        - name: Edit /etc/sysconfig/docker
          blockinfile:
            path: /etc/sysconfig/docker
            block: |
              OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
            create: yes
            marker: '#{mark} docker'
        - name: Restart Docker and enable it at boot
          service:
            name: docker
            state: restarted
            enabled: true
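A quick way to confirm Docker came back up with the systemd cgroup driver (an optional ad-hoc check, not part of the original playbook):

# "Cgroup Driver: systemd" should appear on every node
ansible clients_all -m shell -a "docker info 2>/dev/null | grep -i 'cgroup driver'"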

8. Install the k8s components

- hosts: clients_all
  gather_facts: no
  tasks:
    - name: Install the k8s components
      block:
        - name: Configure the Kubernetes yum repository
          blockinfile:
            path: /etc/yum.repos.d/kubernetes.repo
            block: |
              [kubernetes]
              name=Kubernetes
              baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
              enabled=1
              gpgcheck=0
              repo_gpgcheck=0
              gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
            create: yes
            marker: '#{mark} kubernetes'
        - name: Install kubeadm, kubelet and kubectl
          shell: yum install --setopt=obsoletes=0 kubeadm-1.17.4-0 kubelet-1.17.4-0 kubectl-1.17.4-0 -y
        - name: Edit /etc/sysconfig/kubelet to set the kubelet cgroup driver
          blockinfile:
            path: /etc/sysconfig/kubelet
            block: |
              KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
              KUBE_PROXY_MODE="ipvs"
            create: yes
            marker: '#{mark} kubelet'
        - name: Enable kubelet at boot
          # kubelet is only enabled here; kubeadm init will start it later
          service:
            name: kubelet
            enabled: true
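An optional version check (not part of the original playbook) to confirm the pinned 1.17.4 packages landed on every node:

ansible clients_all -m shell -a "kubeadm version -o short; kubelet --version"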

9. Prepare the cluster images

- hosts: clients_all
  gather_facts: no
  vars:
    images:                 # list of images to prepare
      - kube-apiserver:v1.17.4
      - kube-controller-manager:v1.17.4
      - kube-scheduler:v1.17.4
      - kube-proxy:v1.17.4
      - pause:3.1
      - etcd:3.4.3-0
      - coredns:1.6.5
  tasks:
    - name: Pull the images
      shell: docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }}
      with_items: "{{ images }}"          # loop over the list defined above
    - name: Re-tag the images
      shell: docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }} k8s.gcr.io/{{ item }}
      with_items: "{{ images }}"
    - name: Remove the original images
      shell: docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }}
      with_items: "{{ images }}"
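Before moving on to initialization it is worth confirming all seven images are present under their k8s.gcr.io names; an optional ad-hoc check from the control node:

ansible clients_all -m shell -a "docker images | grep k8s.gcr.io"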

10. Initialize the cluster

- hosts: clients_master
  gather_facts: no
  vars:
    apiserver_address: 192.168.174.150
  tasks:
    - name: Initialize the cluster on the master node
      block:
        - name: kubeadm init
          shell: |
            kubeadm init \
              --kubernetes-version=v1.17.4 \
              --pod-network-cidr=10.244.0.0/16 \
              --service-cidr=10.96.0.0/12 \
              --apiserver-advertise-address={{ apiserver_address }} | \
              grep -Eo '(--token.*|--discovery-token-ca-cert-hash.*)' | awk -F ' ' '{print $2}'
          # Capture the command output in kubeadm_init_output
          register: kubeadm_init_output
        - name: Extract the token and discovery-token-ca-cert-hash values
          set_fact:
            token: "{{ kubeadm_init_output.stdout_lines[0] }}"
            discovery_token_ca_cert_hash: "{{ kubeadm_init_output.stdout_lines[1] }}"
        - name: Write token and discovery_token_ca_cert_hash to a file
          blockinfile:
            path: $HOME/kubeadm_init.yaml
            block: |
              token: {{ token }}
              discovery_token_ca_cert_hash: {{ discovery_token_ca_cert_hash }}
            create: yes
            marker: '#{mark} file'
        - name: Fetch the token file back to the Ansible control host
          fetch:
            src: $HOME/kubeadm_init.yaml
            dest: $HOME
        - name: Print the kubeadm init values
          debug:
            var: token,discovery_token_ca_cert_hash
        - name: Create the $HOME/.kube directory
          file:
            path: $HOME/.kube
            state: directory
        - name: Copy /etc/kubernetes/admin.conf
          shell: cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        - name: Get the current user name
          shell: id -un
          register: current_user
        - name: Get the current group name
          shell: id -gn
          register: current_group
        - name: Set the owner and group of $HOME/.kube/config
          file:
            path: $HOME/.kube/config
            state: touch
            # use the .stdout attribute of the registered result, otherwise the value is wrapped in []
            owner: "{{ current_user.stdout }}"
            group: "{{ current_group.stdout }}"
# On the node hosts, join them to the cluster
- hosts: clients_client
  vars:
    apiserver_address: 192.168.174.150
  vars_files: $HOME/server/root/kubeadm_init.yaml
  tasks:
    - name: Join the node to the cluster
      # To reuse a fact variable defined in a previous play, fact gathering would have to be enabled;
      # here the token values are read from the fetched vars file instead
      shell: >
        kubeadm join {{ apiserver_address }}:6443
        --token {{ token }}
        --discovery-token-ca-cert-hash {{ discovery_token_ca_cert_hash }}
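One caveat: the bootstrap token produced by kubeadm init expires after 24 hours by default, so if the join play runs much later the saved values will no longer work. A fresh join command can be generated on the master node at any time (standard kubeadm behaviour, independent of these playbooks):

# Run on the master node to create a new token and print the matching kubeadm join command
kubeadm token create --print-join-command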
# On the master node, check the cluster status
- hosts: clients_master
  gather_facts: no
  tasks:
    - name: Check the cluster status on the master node
      shell: kubectl get nodes
      register: kubectl_nodes_output
    # Capture the command output in kubectl_nodes_output and print it to the console
    - name: Output of kubectl get nodes
      debug:
        var: kubectl_nodes_output.stdout_lines

11. Install the network plugin

- hosts: clients_master
  gather_facts: no
  tasks:
    - name: Install the network plugin
      block:
        - name: Download the flannel manifest
          get_url:
            dest: $HOME
            url: https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
        - name: Deploy the flannel network
          shell: kubectl apply -f $HOME/kube-flannel.yml && sleep 60
        - name: After about a minute, check the nodes; Ready means the network is up
          shell: kubectl get nodes
          register: kubectl_nodes_output
        # Capture the command output in kubectl_nodes_output and print it to the console
        - name: Output of kubectl get nodes
          debug:
            var: kubectl_nodes_output.stdout_lines
        - name: Check whether all pods are Running
          shell: kubectl get pod --all-namespaces
          register: kubectl_pod_output
        # Capture the command output in kubectl_pod_output and print it to the console
        - name: Output of kubectl get pod --all-namespaces
          debug:
            var: kubectl_pod_output.stdout_lines

12. Complete YAML file:
 

The complete file is simply the playbooks from steps 2 through 11 above concatenated, in the same order, into a single YAML document (one "---" at the top, each play introduced by a numbered comment). The only difference from the individual steps is that the reboot task at the end of the swap/IPVS step is commented out so that a single end-to-end run is not interrupted.
