
Kubernetes 1.26.1 single-node deployment with kubeadm and containerd

containerd error during kubeadm init

The failure is `failed to pull image "k8s.gcr.io/pause:3.6"`: the error log shows that containerd cannot pull this pause image version (it is not requesting registry.k8s.io/pause:3.9).

[root@k8s-master containerd]# kubeadm init --kubernetes-version=1.26.1 --pod-network-cidr=10.244.0.0/16 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers  --ignore-preflight-errors=all
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.29.128.182]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [172.29.128.182 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [172.29.128.182 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

Checking the error log:

[root@k8s-master ~]# journalctl -xeu kubelet

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox image "k8s.gcr.io/pause:3.6": failed to pull image "k8s.gcr.io/pause:3.6": failed to pull and unpack image "k8s.gcr.io/pause:3.6": failed to resolve reference "k8s.gcr.io/pause:3.6": failed to do request: Head "https://k8s.gcr.io/v2/pause/manifests/3.6": dial tcp 172.29.128.182:443: connect: connection refused

The pods for the core Kubernetes services fail to be created because the pause image cannot be pulled: containerd keeps trying to download it from k8s.gcr.io.
Kubernetes 1.26 enabled configuration support for the CRI sandbox (pause) image.
The mirror set via kubeadm init --image-repository is no longer passed to the CRI runtime for pulling the pause image;
instead, the sandbox image must be set in the CRI runtime's own configuration file.
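Accordingly, the lasting fix is to change the `sandbox_image` key of containerd's CRI plugin and restart containerd. A sketch, assuming a default containerd install with its config at /etc/containerd/config.toml (the mirror image name is the same Aliyun one used below; adjust the tag to what your kubeadm version expects):

```shell
# Generate a default config first if the file does not exist yet
containerd config default > /etc/containerd/config.toml

# Point the CRI plugin's sandbox image at a reachable mirror.
# The key lives under [plugins."io.containerd.grpc.v1.cri"].
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml

# Restart containerd so the new sandbox image takes effect
systemctl restart containerd
```

With this in place containerd pulls the pause image from the mirror itself, so no manual tagging is needed on future nodes.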

Quick workaround: pull the pause image from a reachable mirror and re-tag it under the name containerd is asking for.

ctr -n k8s.io image pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6
ctr -n k8s.io image tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 k8s.gcr.io/pause:3.6
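After tagging, it is worth confirming that the image is visible in containerd's k8s.io namespace before retrying the init. A sketch; the reset step assumes the failed first attempt left state behind, and the init flags simply repeat the ones used above:

```shell
# The re-tagged pause image should now appear in the k8s.io namespace
ctr -n k8s.io images ls | grep pause

# Clean up the failed first attempt, then re-run init with the same flags
kubeadm reset -f
kubeadm init --kubernetes-version=1.26.1 \
  --pod-network-cidr=10.244.0.0/16 \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
```

Note that the re-tagged image only exists on this node; on a multi-node cluster the same pull/tag (or the containerd config change) has to be applied on every node.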

