k8s Setup
Requirement:
Set up k8s in preparation for the automated deployments that follow.
Procedure:
Install at least two Ubuntu 18.04 systems (one master, one or more nodes).
Docker and Kubernetes must be installed on every system.
Install Docker
sudo su
apt-get update
# Install prerequisite packages
apt-get install apt-transport-https ca-certificates curl gnupg lsb-release software-properties-common -y
# Fetch Docker's GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# Add the Docker apt source
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
# Install Docker and related components
apt-get install docker-ce docker-ce-cli containerd.io -y
# Check that Docker installed correctly
docker --version
# Start Docker and enable it at boot
sudo systemctl daemon-reload && sudo systemctl restart docker && sudo systemctl enable docker
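As an optional sanity check (a sketch, assuming the host can pull the standard hello-world test image from Docker Hub), confirm the daemon can actually run containers and is enabled at boot:
sudo docker run --rm hello-world # should print "Hello from Docker!"
sudo systemctl is-enabled docker # should print "enabled"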
Install Kubernetes
# With the packages installed above, the Kubernetes key can be fetched directly
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
# Check that the source file was written
cat /etc/apt/sources.list.d/kubernetes.list
apt-get update
# Install Kubernetes and related components
apt-get install -y kubelet=1.19.2-00 kubeadm=1.19.2-00 kubectl=1.19.2-00 kubernetes-cni
# Start kubelet and enable it at boot
sudo systemctl enable kubelet && sudo systemctl start kubelet
# Check that it succeeded
kubectl version
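Optionally, hold the three packages at this version so a later apt-get upgrade does not move them unexpectedly (apt-mark is part of standard apt):
sudo apt-mark hold kubelet kubeadm kubectl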
Starting Kubernetes may fail; if so, swap needs to be turned off.
sudo swapoff -a # turn off temporarily
nano /etc/fstab # turn off permanently by commenting out the swap line; the permanent option is recommended
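If you prefer not to edit /etc/fstab by hand, a one-liner along these lines comments out any swap entry (a sketch using GNU sed; double-check the file afterwards, since fstab layouts vary):
sudo sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
cat /etc/fstab # the swap line should now start with #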
Initialize the master
# --pod-network-cidr: the pod network CIDR
# --apiserver-advertise-address: the master's IP
# Remember to replace these with your own values
kubeadm init --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --pod-network-cidr=192.168.197.0/16 --apiserver-advertise-address=192.168.197.135
# When init succeeds, it ends with output like the following
Your Kubernetes control-plane has initialized successfully!
# Run the following commands as a regular (non-root) user
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
# You can attach nodes to the master with the command below
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.197.135:6443 --token hbbn5i.u6fqjr0phforyr2q \
    --discovery-token-ca-cert-hash sha256:e3f40cb90a3d791deaf6b6606ec500cffc8b48d0351b085cd0d4f74a6ce0e794
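After running the three kubeconfig commands above, a quick check with plain kubectl confirms the control plane is up; the master will stay NotReady until the pod network is installed in the next step:
kubectl get pods -n kube-system
kubectl get nodes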
Configure flannel networking
# Both files are included at the end of this document
kubectl create -f kube-flannel-rbac.yml
kubectl create -f kube-flannel.yml
# The two commands above are sometimes unnecessary; note that both master and node need flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
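Note that the bundled kube-flannel.yml expects the pod network 10.244.0.0/16 (the Network field in its net-conf.json), so either pass --pod-network-cidr=10.244.0.0/16 to kubeadm init or edit that field to match your CIDR. To check that flannel came up (searching all namespaces, since the manifest version determines whether it lands in kube-flannel or kube-system):
kubectl get pods -A | grep flannel
kubectl get nodes # nodes should switch to Ready once flannel is running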
Attach the nodes to the master
# If you lost the join command above, you can regenerate a token
kubeadm token create --print-join-command
# Then run the returned command on each node
Running it on a node may fail with the error: The connection to the server localhost:8080 was refused - did you specify the right host or port?
In that case, copy the file /etc/kubernetes/admin.conf from the master to the node (see the sketch after these steps), then:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
Re-run the join command
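One way to do the copy step above, assuming the master has SSH access to the node (user@node1 is a placeholder, replace with your own user and hostname):
# run on the master
scp /etc/kubernetes/admin.conf user@node1:/etc/kubernetes/admin.conf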
Check the node status
kubectl get nodes
When every node's status shows Ready, the cluster setup is complete.
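The output looks roughly like this (node names, ages, and versions will differ in your environment):
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   15m   v1.19.2
node1    Ready    <none>   5m    v1.19.2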
Appendix:
kube-flannel-rbac.yml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
kube-flannel.yml
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - "networking.k8s.io"
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.1.2
        #image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.20.2
        #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.20.2
        #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate