
Installing a Kubernetes Cluster on Ubuntu 24.04 with the sealos Tool

1. System Preparation

(1) Install the OpenSSH server
sudo apt install openssh-server
sudo systemctl start ssh
sudo systemctl enable ssh

(2) Allow SSH through the firewall
sudo ufw allow ssh

(3) Allow direct root login over SSH
vim /etc/ssh/sshd_config
# change "#PermitRootLogin prohibit-password" to "PermitRootLogin yes", then restart sshd
systemctl daemon-reload
systemctl restart sshd
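
sealos drives every machine over SSH as root, so before continuing it is worth confirming that master01 can actually log in to each node. A quick check (a sketch only; the IPs are the ones used later in this article, adjust them to your environment):

# confirm root SSH login from master01 to every node; each command should print the remote hostname
for ip in 192.168.1.98 192.168.1.102 192.168.1.103; do
  ssh -o ConnectTimeout=5 root@"$ip" hostname || echo "SSH to $ip failed"
done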

2. Install the sealos Tool

(1) Install sealos on master01
echo "deb [trusted=yes] https://apt.fury.io/labring/ /" | sudo tee /etc/apt/sources.list.d/labring.list
sudo apt update
sudo apt install sealos

root@master01:~# echo "deb [trusted=yes] https://apt.fury.io/labring/ /" | sudo tee /etc/apt/sources.list.d/labring.list
sudo apt update
sudo apt install sealos
deb [trusted=yes] https://apt.fury.io/labring/ /
Hit:1 http://mirrors.tuna.tsinghua.edu.cn/ubuntu noble InRelease
Hit:2 http://mirrors.tuna.tsinghua.edu.cn/ubuntu noble-updates InRelease
Hit:4 http://security.ubuntu.com/ubuntu noble-security InRelease
Hit:3 http://mirrors.tuna.tsinghua.edu.cn/ubuntu noble-backports InRelease
Ign:5 https://apt.fury.io/labring  InRelease
Ign:6 https://apt.fury.io/labring  Release
Ign:7 https://apt.fury.io/labring  Packages
Ign:8 https://apt.fury.io/labring  Translation-en
Ign:9 https://apt.fury.io/labring  Translation-en_US
Get:7 https://apt.fury.io/labring  Packages
Ign:8 https://apt.fury.io/labring  Translation-en
Ign:9 https://apt.fury.io/labring  Translation-en_US
Ign:8 https://apt.fury.io/labring  Translation-en
Ign:9 https://apt.fury.io/labring  Translation-en_US
Ign:8 https://apt.fury.io/labring  Translation-en
Ign:9 https://apt.fury.io/labring  Translation-en_US
Ign:8 https://apt.fury.io/labring  Translation-en
Ign:9 https://apt.fury.io/labring  Translation-en_US
Ign:8 https://apt.fury.io/labring  Translation-en
Ign:9 https://apt.fury.io/labring  Translation-en_US
Ign:8 https://apt.fury.io/labring  Translation-en
Ign:9 https://apt.fury.io/labring  Translation-en_US
Fetched 7,953 B in 7s (1,202 B/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
294 packages can be upgraded. Run 'apt list --upgradable' to see them.
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
  sealos
0 upgraded, 1 newly installed, 0 to remove and 294 not upgraded.
Need to get 31.5 MB of archives.
After this operation, 94.2 MB of additional disk space will be used.
Get:1 https://apt.fury.io/labring  sealos 5.0.1 [31.5 MB]
Fetched 31.5 MB in 20s (1,546 kB/s)
Selecting previously unselected package sealos.
(Reading database ... 152993 files and directories currently installed.)
Preparing to unpack .../sealos_5.0.1_amd64.deb ...
Unpacking sealos (5.0.1) ...
Setting up sealos (5.0.1) ...
root@master01:~#
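
Once the package is installed, a quick sanity check confirms the binary is on PATH (the printed version should match the 5.0.1 package installed above):

# verify the sealos CLI is installed
sealos version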

3. Install Kubernetes with sealos

(1) Install Kubernetes v1.29.9 with sealos, using Cilium as the network plugin

sealos run registry.cn-shanghai.aliyuncs.com/labring/kubernetes:v1.29.9 \
registry.cn-shanghai.aliyuncs.com/labring/helm:v3.9.4 \
registry.cn-shanghai.aliyuncs.com/labring/cilium:v1.13.4 \
--masters 192.168.1.98 \
--nodes 192.168.1.102,192.168.1.103 -p 'As(2dc_2saccC82'

root@master01:~#
sealos run registry.cn-shanghai.aliyuncs.com/labring/kubernetes:v1.29.9 \
registry.cn-shanghai.aliyuncs.com/labring/helm:v3.9.4 \
registry.cn-shanghai.aliyuncs.com/labring/cilium:v1.13.4 \
--masters 192.168.1.98 \
--nodes 192.168.1.102,192.168.1.103 -p 'As(2dc_2saccC82'
2025-08-10T12:17:40 info Start to create a new cluster: master [192.168.1.98], worker [192.168.1.102 192.168.1.103], registry 192.168.1.98
2025-08-10T12:17:40 info Executing pipeline Check in CreateProcessor.
2025-08-10T12:17:40 info checker:hostname [192.168.1.98:22 192.168.1.102:22 192.168.1.103:22]
2025-08-10T12:17:40 info checker:timeSync [192.168.1.98:22 192.168.1.102:22 192.168.1.103:22]
2025-08-10T12:17:41 info checker:containerd [192.168.1.98:22 192.168.1.102:22 192.168.1.103:22]
2025-08-10T12:17:41 info Executing pipeline PreProcess in CreateProcessor.
Trying to pull registry.cn-shanghai.aliyuncs.com/labring/kubernetes:v1.29.9...
Getting image source signatures
Copying blob a90669518f1a done
Copying blob 45c9d75a9656 done
Copying blob 2fbba8062b0b done
Copying blob fdc3a198d6ba done
Copying config bca192f355 done
Writing manifest to image destination
Storing signatures
Trying to pull registry.cn-shanghai.aliyuncs.com/labring/helm:v3.9.4...
Getting image source signatures
Copying blob 7f5c52c74e5b done
Copying config 3376f68220 done
Writing manifest to image destination
Storing signatures
Trying to pull registry.cn-shanghai.aliyuncs.com/labring/cilium:v1.13.4...
Getting image source signatures
Copying blob 7ca2ee4eb38c done
Copying config 71aa52ad0a done
Writing manifest to image destination
Storing signatures
2025-08-10T12:19:52 info Executing pipeline RunConfig in CreateProcessor.
2025-08-10T12:19:52 info Executing pipeline MountRootfs in CreateProcessor.
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/config.toml from /var/lib/sealos/data/default/rootfs/etc/config.toml.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/containerd.service from /var/lib/sealos/data/default/rootfs/etc/containerd.service.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/hosts.toml from /var/lib/sealos/data/default/rootfs/etc/hosts.toml.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/kubelet.service from /var/lib/sealos/data/default/rootfs/etc/kubelet.service.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/registry.service from /var/lib/sealos/data/default/rootfs/etc/registry.service.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/registry.yml from /var/lib/sealos/data/default/rootfs/etc/registry.yml.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/registry_config.yml from /var/lib/sealos/data/default/rootfs/etc/registry_config.yml.tmpl completed
2025-08-10T12:19:53 info render /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf from /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/config.toml from /var/lib/sealos/data/default/rootfs/etc/config.toml.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/containerd.service from /var/lib/sealos/data/default/rootfs/etc/containerd.service.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/hosts.toml from /var/lib/sealos/data/default/rootfs/etc/hosts.toml.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/kubelet.service from /var/lib/sealos/data/default/rootfs/etc/kubelet.service.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/registry.service from /var/lib/sealos/data/default/rootfs/etc/registry.service.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/registry.yml from /var/lib/sealos/data/default/rootfs/etc/registry.yml.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/registry_config.yml from /var/lib/sealos/data/default/rootfs/etc/registry_config.yml.tmpl completed
192.168.1.103:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf from /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/config.toml from /var/lib/sealos/data/default/rootfs/etc/config.toml.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/containerd.service from /var/lib/sealos/data/default/rootfs/etc/containerd.service.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/hosts.toml from /var/lib/sealos/data/default/rootfs/etc/hosts.toml.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/kubelet.service from /var/lib/sealos/data/default/rootfs/etc/kubelet.service.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/registry.service from /var/lib/sealos/data/default/rootfs/etc/registry.service.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/registry.yml from /var/lib/sealos/data/default/rootfs/etc/registry.yml.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/registry_config.yml from /var/lib/sealos/data/default/rootfs/etc/registry_config.yml.tmpl completed
192.168.1.102:22        2025-08-10T12:20:15 info render /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf from /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf.tmpl completed
2025-08-10T12:20:15 info Executing pipeline MirrorRegistry in CreateProcessor.
2025-08-10T12:20:15 info trying default http mode to sync images to hosts [192.168.1.98:22]
2025-08-10T12:20:18 info Executing pipeline Bootstrap in CreateProcessor.
INFO [2025-08-10 12:20:18] >> Check port kubelet port 10249..10259, reserved port 5050..5054 inuse. Please wait...
192.168.1.103:22         INFO [2025-08-10 12:20:24] >> Check port kubelet port 10249..10259, reserved port 5050..5054 inuse. Please wait...
192.168.1.102:22         INFO [2025-08-10 12:20:18] >> Check port kubelet port 10249..10259, reserved port 5050..5054 inuse. Please wait...
INFO [2025-08-10 12:20:19] >> check root,port,cri success
192.168.1.103:22         INFO [2025-08-10 12:20:25] >> check root,port,cri success
192.168.1.102:22         INFO [2025-08-10 12:20:19] >> check root,port,cri success
2025-08-10T12:20:19 info domain sealos.hub:192.168.1.98 append success
192.168.1.103:22        2025-08-10T12:20:25 info domain sealos.hub:192.168.1.98 append success
192.168.1.102:22        2025-08-10T12:20:19 info domain sealos.hub:192.168.1.98 append success
Created symlink /etc/systemd/system/multi-user.target.wants/registry.service → /etc/systemd/system/registry.service.
INFO [2025-08-10 12:20:20] >> Health check registry!
INFO [2025-08-10 12:20:20] >> registry is running
INFO [2025-08-10 12:20:20] >> init registry success
2025-08-10T12:20:20 info domain apiserver.cluster.local:192.168.1.98 append success
192.168.1.102:22        2025-08-10T12:20:20 info domain apiserver.cluster.local:10.103.97.2 append success
192.168.1.103:22        2025-08-10T12:20:26 info domain apiserver.cluster.local:10.103.97.2 append success
192.168.1.102:22        2025-08-10T12:20:21 info domain lvscare.node.ip:192.168.1.102 append success
192.168.1.103:22        2025-08-10T12:20:27 info domain lvscare.node.ip:192.168.1.103 append success
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
192.168.1.102:22        Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
192.168.1.103:22        Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
INFO [2025-08-10 12:20:23] >> Health check containerd!
INFO [2025-08-10 12:20:23] >> containerd is running
INFO [2025-08-10 12:20:23] >> init containerd success
Created symlink /etc/systemd/system/multi-user.target.wants/image-cri-shim.service → /etc/systemd/system/image-cri-shim.service.
192.168.1.102:22         INFO [2025-08-10 12:20:23] >> Health check containerd!
192.168.1.102:22         INFO [2025-08-10 12:20:23] >> containerd is running
192.168.1.102:22         INFO [2025-08-10 12:20:23] >> init containerd success
192.168.1.102:22        Created symlink /etc/systemd/system/multi-user.target.wants/image-cri-shim.service → /etc/systemd/system/image-cri-shim.service.
INFO [2025-08-10 12:20:24] >> Health check image-cri-shim!
INFO [2025-08-10 12:20:24] >> image-cri-shim is running
INFO [2025-08-10 12:20:24] >> init shim success
127.0.0.1 localhost
::1     ip6-localhost ip6-loopback
192.168.1.103:22         INFO [2025-08-10 12:20:30] >> Health check containerd!
192.168.1.103:22         INFO [2025-08-10 12:20:30] >> containerd is running
192.168.1.103:22         INFO [2025-08-10 12:20:30] >> init containerd success
192.168.1.103:22        Created symlink /etc/systemd/system/multi-user.target.wants/image-cri-shim.service → /etc/systemd/system/image-cri-shim.service.
192.168.1.102:22         INFO [2025-08-10 12:20:24] >> Health check image-cri-shim!
192.168.1.102:22         INFO [2025-08-10 12:20:24] >> image-cri-shim is running
192.168.1.102:22         INFO [2025-08-10 12:20:24] >> init shim success
192.168.1.102:22        127.0.0.1 localhost
192.168.1.102:22        ::1     ip6-localhost ip6-loopback
192.168.1.103:22         INFO [2025-08-10 12:20:31] >> Health check image-cri-shim!
192.168.1.103:22         INFO [2025-08-10 12:20:31] >> image-cri-shim is running
192.168.1.103:22         INFO [2025-08-10 12:20:31] >> init shim success
192.168.1.103:22        127.0.0.1 localhost
192.168.1.103:22        ::1     ip6-localhost ip6-loopback
Firewall stopped and disabled on system startup
* Applying /usr/lib/sysctl.d/10-apparmor.conf ...
* Applying /etc/sysctl.d/10-bufferbloat.conf ...
* Applying /etc/sysctl.d/10-console-messages.conf ...
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
* Applying /etc/sysctl.d/10-magic-sysrq.conf ...
* Applying /etc/sysctl.d/10-map-count.conf ...
* Applying /etc/sysctl.d/10-network-security.conf ...
* Applying /etc/sysctl.d/10-ptrace.conf ...
* Applying /etc/sysctl.d/10-zeropage.conf ...
* Applying /usr/lib/sysctl.d/30-tracker.conf ...
* Applying /usr/lib/sysctl.d/50-bubblewrap.conf ...
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
* Applying /usr/lib/sysctl.d/99-protect-links.conf ...
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.conf ...
kernel.apparmor_restrict_unprivileged_userns = 1
net.core.default_qdisc = fq_codel
kernel.printk = 4 4 1 7
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
kernel.kptr_restrict = 1
kernel.sysrq = 176
vm.max_map_count = 1048576
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2
kernel.yama.ptrace_scope = 1
vm.mmap_min_addr = 65536
fs.inotify.max_user_watches = 65536
kernel.unprivileged_userns_clone = 1
kernel.pid_max = 4194304
fs.protected_fifos = 1
fs.protected_hardlinks = 1
fs.protected_regular = 2
fs.protected_symlinks = 1
fs.file-max = 1048576 # sealos
net.bridge.bridge-nf-call-ip6tables = 1 # sealos
net.bridge.bridge-nf-call-iptables = 1 # sealos
net.core.somaxconn = 65535 # sealos
net.ipv4.conf.all.rp_filter = 0 # sealos
net.ipv4.ip_forward = 1 # sealos
net.ipv4.ip_local_port_range = 1024 65535 # sealos
net.ipv4.tcp_keepalive_intvl = 30 # sealos
net.ipv4.tcp_keepalive_time = 600 # sealos
net.ipv4.vs.conn_reuse_mode = 0 # sealos
net.ipv4.vs.conntrack = 1 # sealos
net.ipv6.conf.all.forwarding = 1 # sealos
vm.max_map_count = 2147483642 # sealos
fs.file-max = 1048576 # sealos
net.bridge.bridge-nf-call-ip6tables = 1 # sealos
net.bridge.bridge-nf-call-iptables = 1 # sealos
net.core.somaxconn = 65535 # sealos
net.ipv4.conf.all.rp_filter = 0 # sealos
net.ipv4.ip_forward = 1 # sealos
net.ipv4.ip_local_port_range = 1024 65535 # sealos
net.ipv4.tcp_keepalive_intvl = 30 # sealos
net.ipv4.tcp_keepalive_time = 600 # sealos
net.ipv4.vs.conn_reuse_mode = 0 # sealos
net.ipv4.vs.conntrack = 1 # sealos
net.ipv6.conf.all.forwarding = 1 # sealos
vm.max_map_count = 2147483642 # sealos
INFO [2025-08-10 12:20:25] >> pull pause image sealos.hub:5000/pause:3.9
192.168.1.102:22        Firewall stopped and disabled on system startup
Image is up to date for sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
192.168.1.102:22        * Applying /usr/lib/sysctl.d/10-apparmor.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-bufferbloat.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-console-messages.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-kernel-hardening.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-magic-sysrq.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-map-count.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-network-security.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-ptrace.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/10-zeropage.conf ...
192.168.1.102:22        * Applying /usr/lib/sysctl.d/30-tracker.conf ...
192.168.1.102:22        * Applying /usr/lib/sysctl.d/50-bubblewrap.conf ...
192.168.1.102:22        * Applying /usr/lib/sysctl.d/50-pid-max.conf ...
192.168.1.102:22        * Applying /usr/lib/sysctl.d/99-protect-links.conf ...
192.168.1.102:22        * Applying /etc/sysctl.d/99-sysctl.conf ...
192.168.1.102:22        * Applying /etc/sysctl.conf ...
192.168.1.102:22        kernel.apparmor_restrict_unprivileged_userns = 1
192.168.1.102:22        net.core.default_qdisc = fq_codel
192.168.1.102:22        kernel.printk = 4 4 1 7
192.168.1.102:22        net.ipv6.conf.all.use_tempaddr = 2
192.168.1.102:22        net.ipv6.conf.default.use_tempaddr = 2
192.168.1.102:22        kernel.kptr_restrict = 1
192.168.1.102:22        kernel.sysrq = 176
192.168.1.102:22        vm.max_map_count = 1048576
192.168.1.102:22        net.ipv4.conf.default.rp_filter = 2
192.168.1.102:22        net.ipv4.conf.all.rp_filter = 2
192.168.1.102:22        kernel.yama.ptrace_scope = 1
192.168.1.102:22        vm.mmap_min_addr = 65536
192.168.1.102:22        fs.inotify.max_user_watches = 65536
192.168.1.102:22        kernel.unprivileged_userns_clone = 1
192.168.1.102:22        kernel.pid_max = 4194304
192.168.1.102:22        fs.protected_fifos = 1
192.168.1.102:22        fs.protected_hardlinks = 1
192.168.1.102:22        fs.protected_regular = 2
192.168.1.102:22        fs.protected_symlinks = 1
192.168.1.102:22        fs.file-max = 1048576 # sealos
192.168.1.102:22        net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.1.102:22        net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.1.102:22        net.core.somaxconn = 65535 # sealos
192.168.1.102:22        net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.1.102:22        net.ipv4.ip_forward = 1 # sealos
192.168.1.102:22        net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.1.102:22        net.ipv4.tcp_keepalive_intvl = 30 # sealos
192.168.1.102:22        net.ipv4.tcp_keepalive_time = 600 # sealos
192.168.1.102:22        net.ipv4.vs.conn_reuse_mode = 0 # sealos
192.168.1.102:22        net.ipv4.vs.conntrack = 1 # sealos
192.168.1.102:22        net.ipv6.conf.all.forwarding = 1 # sealos
192.168.1.102:22        vm.max_map_count = 2147483642 # sealos
192.168.1.102:22        fs.file-max = 1048576 # sealos
192.168.1.102:22        net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.1.102:22        net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.1.102:22        net.core.somaxconn = 65535 # sealos
192.168.1.102:22        net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.1.102:22        net.ipv4.ip_forward = 1 # sealos
192.168.1.102:22        net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.1.102:22        net.ipv4.tcp_keepalive_intvl = 30 # sealos
192.168.1.102:22        net.ipv4.tcp_keepalive_time = 600 # sealos
192.168.1.102:22        net.ipv4.vs.conn_reuse_mode = 0 # sealos
192.168.1.102:22        net.ipv4.vs.conntrack = 1 # sealos
192.168.1.102:22        net.ipv6.conf.all.forwarding = 1 # sealos
192.168.1.102:22        vm.max_map_count = 2147483642 # sealos
192.168.1.103:22        Firewall stopped and disabled on system startup
192.168.1.102:22         INFO [2025-08-10 12:20:26] >> pull pause image sealos.hub:5000/pause:3.9
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
192.168.1.103:22        * Applying /usr/lib/sysctl.d/10-apparmor.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-bufferbloat.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-console-messages.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-kernel-hardening.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-magic-sysrq.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-map-count.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-network-security.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-ptrace.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/10-zeropage.conf ...
192.168.1.103:22        * Applying /usr/lib/sysctl.d/30-tracker.conf ...
192.168.1.103:22        * Applying /usr/lib/sysctl.d/50-bubblewrap.conf ...
192.168.1.103:22        * Applying /usr/lib/sysctl.d/50-pid-max.conf ...
192.168.1.103:22        * Applying /usr/lib/sysctl.d/99-protect-links.conf ...
192.168.1.103:22        * Applying /etc/sysctl.d/99-sysctl.conf ...
192.168.1.103:22        * Applying /etc/sysctl.conf ...
192.168.1.103:22        kernel.apparmor_restrict_unprivileged_userns = 1
192.168.1.103:22        net.core.default_qdisc = fq_codel
192.168.1.103:22        kernel.printk = 4 4 1 7
192.168.1.103:22        net.ipv6.conf.all.use_tempaddr = 2
192.168.1.103:22        net.ipv6.conf.default.use_tempaddr = 2
192.168.1.103:22        kernel.kptr_restrict = 1
192.168.1.103:22        kernel.sysrq = 176
192.168.1.103:22        vm.max_map_count = 1048576
192.168.1.103:22        net.ipv4.conf.default.rp_filter = 2
192.168.1.103:22        net.ipv4.conf.all.rp_filter = 2
192.168.1.103:22        kernel.yama.ptrace_scope = 1
192.168.1.103:22        vm.mmap_min_addr = 65536
192.168.1.103:22        fs.inotify.max_user_watches = 65536
192.168.1.103:22        kernel.unprivileged_userns_clone = 1
192.168.1.103:22        kernel.pid_max = 4194304
192.168.1.103:22        fs.protected_fifos = 1
192.168.1.103:22        fs.protected_hardlinks = 1
192.168.1.103:22        fs.protected_regular = 2
192.168.1.103:22        fs.protected_symlinks = 1
192.168.1.103:22        fs.file-max = 1048576 # sealos
192.168.1.103:22        net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.1.103:22        net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.1.103:22        net.core.somaxconn = 65535 # sealos
192.168.1.103:22        net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.1.103:22        net.ipv4.ip_forward = 1 # sealos
192.168.1.103:22        net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.1.103:22        net.ipv4.tcp_keepalive_intvl = 30 # sealos
192.168.1.103:22        net.ipv4.tcp_keepalive_time = 600 # sealos
192.168.1.103:22        net.ipv4.vs.conn_reuse_mode = 0 # sealos
192.168.1.103:22        net.ipv4.vs.conntrack = 1 # sealos
192.168.1.103:22        net.ipv6.conf.all.forwarding = 1 # sealos
192.168.1.103:22        vm.max_map_count = 2147483642 # sealos
192.168.1.103:22        fs.file-max = 1048576 # sealos
192.168.1.103:22        net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.1.103:22        net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.1.103:22        net.core.somaxconn = 65535 # sealos
192.168.1.103:22        net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.1.103:22        net.ipv4.ip_forward = 1 # sealos
192.168.1.103:22        net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.1.103:22        net.ipv4.tcp_keepalive_intvl = 30 # sealos
192.168.1.103:22        net.ipv4.tcp_keepalive_time = 600 # sealos
192.168.1.103:22        net.ipv4.vs.conn_reuse_mode = 0 # sealos
192.168.1.103:22        net.ipv4.vs.conntrack = 1 # sealos
192.168.1.103:22        net.ipv6.conf.all.forwarding = 1 # sealos
192.168.1.103:22        vm.max_map_count = 2147483642 # sealos
192.168.1.103:22         INFO [2025-08-10 12:20:32] >> pull pause image sealos.hub:5000/pause:3.9
INFO [2025-08-10 12:20:26] >> init kubelet success
INFO [2025-08-10 12:20:26] >> init rootfs success
192.168.1.102:22        Image is up to date for sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
192.168.1.102:22        Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
192.168.1.103:22        Image is up to date for sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
192.168.1.102:22         INFO [2025-08-10 12:20:27] >> init kubelet success
192.168.1.102:22         INFO [2025-08-10 12:20:27] >> init rootfs success
192.168.1.103:22        Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
192.168.1.103:22         INFO [2025-08-10 12:20:34] >> init kubelet success
192.168.1.103:22         INFO [2025-08-10 12:20:34] >> init rootfs success
2025-08-10T12:20:28 info Executing pipeline Init in CreateProcessor.
2025-08-10T12:20:28 info Copying kubeadm config to master0
2025-08-10T12:20:28 info start to generate cert and kubeConfig...
2025-08-10T12:20:28 info start to generate and copy certs to masters...
2025-08-10T12:20:28 info apiserver altNames : {map[apiserver.cluster.local:apiserver.cluster.local kubernetes:kubernetes kubernetes.default:kubernetes.default kubernetes.default.svc:kubernetes.default.svc kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local localhost:localhost master01:master01] map[10.103.97.2:10.103.97.2 10.96.0.1:10.96.0.1 127.0.0.1:127.0.0.1 192.168.1.98:192.168.1.98]}
2025-08-10T12:20:28 info Etcd altnames : {map[localhost:localhost master01:master01] map[127.0.0.1:127.0.0.1 192.168.1.98:192.168.1.98 ::1:::1]}, commonName : master01
2025-08-10T12:20:30 info start to copy etc pki files to masters
2025-08-10T12:20:30 info start to create kubeconfig...
2025-08-10T12:20:30 info start to copy kubeconfig files to masters
2025-08-10T12:20:30 info start to copy static files to masters
2025-08-10T12:20:30 info start to init master0...
[config/images] Pulled registry.k8s.io/kube-apiserver:v1.29.9
[config/images] Pulled registry.k8s.io/kube-controller-manager:v1.29.9
[config/images] Pulled registry.k8s.io/kube-scheduler:v1.29.9
[config/images] Pulled registry.k8s.io/kube-proxy:v1.29.9
[config/images] Pulled registry.k8s.io/coredns/coredns:v1.11.1
[config/images] Pulled registry.k8s.io/pause:3.9
[config/images] Pulled registry.k8s.io/etcd:3.5.15-0
W0810 12:20:39.357353    8594 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0
[init] Using Kubernetes version: v1.29.9
[preflight] Running pre-flight checks
        [WARNING FileExisting-socat]: socat not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0810 12:20:39.455337    8594 checks.go:835] detected that the sandbox image "sealos.hub:5000/pause:3.9" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
W0810 12:20:40.239475    8594 kubeconfig.go:273] a kubeconfig file "/etc/kubernetes/controller-manager.conf" exists already but has an unexpected API Server URL: expected: https://192.168.1.98:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
W0810 12:20:40.383648    8594 kubeconfig.go:273] a kubeconfig file "/etc/kubernetes/scheduler.conf" exists already but has an unexpected API Server URL: expected: https://192.168.1.98:6443, got: https://apiserver.cluster.local:6443
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 4.001602 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join apiserver.cluster.local:6443 --token <value withheld> \
        --discovery-token-ca-cert-hash sha256:957a2a9cbc2a717e819cd5108c7415c577baadccd95f01963d1be9a2357e1736 \
        --control-plane --certificate-key <value withheld>

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join apiserver.cluster.local:6443 --token <value withheld> \
        --discovery-token-ca-cert-hash sha256:957a2a9cbc2a717e819cd5108c7415c577baadccd95f01963d1be9a2357e1736
2025-08-10T12:20:46 info Executing pipeline Join in CreateProcessor.
2025-08-10T12:20:46 info [192.168.1.102:22 192.168.1.103:22] will be added as worker
2025-08-10T12:20:46 info start to get kubernetes token...
2025-08-10T12:20:46 info fetch certSANs from kubeadm configmap
2025-08-10T12:20:46 info start to join 192.168.1.103:22 as worker
2025-08-10T12:20:46 info start to copy kubeadm join config to node: 192.168.1.103:22
2025-08-10T12:20:46 info start to join 192.168.1.102:22 as worker
2025-08-10T12:20:47 info run ipvs once module: 192.168.1.103:22
2025-08-10T12:20:47 info start to copy kubeadm join config to node: 192.168.1.102:22
192.168.1.103:22        2025-08-10T12:20:53 info Trying to add route
192.168.1.103:22        2025-08-10T12:20:53 info success to set route.(host:10.103.97.2, gateway:192.168.1.103)
2025-08-10T12:20:47 info start join node: 192.168.1.103:22
192.168.1.103:22        [preflight] Running pre-flight checks
192.168.1.103:22                [WARNING FileExisting-socat]: socat not found in system path
2025-08-10T12:20:47 info run ipvs once module: 192.168.1.102:22
192.168.1.102:22        2025-08-10T12:20:48 info Trying to add route
192.168.1.102:22        2025-08-10T12:20:48 info success to set route.(host:10.103.97.2, gateway:192.168.1.102)
2025-08-10T12:20:48 info start join node: 192.168.1.102:22
192.168.1.102:22        [preflight] Running pre-flight checks
192.168.1.102:22                [WARNING FileExisting-socat]: socat not found in system path
192.168.1.102:22        [preflight] Reading configuration from the cluster...
192.168.1.102:22        [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
192.168.1.102:22        W0810 12:21:00.215357    9534 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0
192.168.1.102:22        [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
192.168.1.102:22        [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
192.168.1.102:22        [kubelet-start] Starting the kubelet
192.168.1.102:22        [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
192.168.1.102:22
192.168.1.102:22        This node has joined the cluster:
192.168.1.102:22        * Certificate signing request was sent to apiserver and a response was received.
192.168.1.102:22        * The Kubelet was informed of the new secure connection details.
192.168.1.102:22
192.168.1.102:22        Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
192.168.1.102:22
2025-08-10T12:21:02 info succeeded in joining 192.168.1.102:22 as worker
192.168.1.103:22        [preflight] Reading configuration from the cluster...
192.168.1.103:22        [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
192.168.1.103:22        W0810 12:21:11.756483    6695 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0
192.168.1.103:22        [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
192.168.1.103:22        [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
192.168.1.103:22        [kubelet-start] Starting the kubelet
192.168.1.103:22        [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
192.168.1.103:22
192.168.1.103:22        This node has joined the cluster:
192.168.1.103:22        * Certificate signing request was sent to apiserver and a response was received.
192.168.1.103:22        * The Kubelet was informed of the new secure connection details.
192.168.1.103:22
192.168.1.103:22        Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
192.168.1.103:22
2025-08-10T12:21:07 info succeeded in joining 192.168.1.103:22 as worker
2025-08-10T12:21:07 info start to sync lvscare static pod to node: 192.168.1.103:22 master: [192.168.1.98:6443]
2025-08-10T12:21:07 info start to sync lvscare static pod to node: 192.168.1.102:22 master: [192.168.1.98:6443]
192.168.1.103:22        2025-08-10T12:21:14 info generator lvscare static pod is success
192.168.1.102:22        2025-08-10T12:21:08 info generator lvscare static pod is success
2025-08-10T12:21:08 info Executing pipeline RunGuest in CreateProcessor.
ℹ️  Using Cilium version 1.13.4
🔮 Auto-detected cluster name: kubernetes
🔮 Auto-detected datapath mode: tunnel
🔮 Auto-detected kube-proxy has been installed
2025-08-10T12:21:09 info succeeded in creating a new cluster, enjoy it!
2025-08-10T12:21:09 info
[sealos ASCII-art banner]
Website: https://www.sealos.io/
Address: github.com/labring/sealos
Version: 5.0.1-2b74a1281
root@master01:~#
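
The kubeadm output above already shows how to point kubectl at the new cluster; on master01 the root user only needs the admin kubeconfig. It is also worth noting that --masters accepts a comma-separated list, so the same command can bootstrap an HA control plane. A sketch (the extra master IPs below are placeholders, not hosts from this setup):

# use the admin kubeconfig as root on master01, as printed by kubeadm above
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes

# for a highly available control plane, pass several masters instead of one
# (placeholder IPs; not part of this article's environment)
# sealos run registry.cn-shanghai.aliyuncs.com/labring/kubernetes:v1.29.9 \
#   registry.cn-shanghai.aliyuncs.com/labring/helm:v3.9.4 \
#   registry.cn-shanghai.aliyuncs.com/labring/cilium:v1.13.4 \
#   --masters 192.168.1.98,192.168.1.99,192.168.1.100 \
#   --nodes 192.168.1.102,192.168.1.103 -p '<root password>'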

4. Check the Kubernetes Cluster Status

(1) List the images in use
root@master01:~# sealos images
REPOSITORY                                             TAG       IMAGE ID       CREATED        SIZE
registry.cn-shanghai.aliyuncs.com/labring/kubernetes   v1.29.9   bca192f35556   3 months ago   669 MB
registry.cn-shanghai.aliyuncs.com/labring/cilium       v1.13.4   71aa52ad0a11   2 years ago    483 MB
registry.cn-shanghai.aliyuncs.com/labring/helm         v3.9.4    3376f6822067   2 years ago    46.4 MB
root@master01:~#

(2) Check node status
root@master01:~# kubectl get nodes
NAME       STATUS   ROLES           AGE     VERSION
master01   Ready    control-plane   3m42s   v1.29.9
node1      Ready    <none>          3m24s   v1.29.9
node2      Ready    <none>          3m19s   v1.29.9
root@master01:~#

(3) Check pod status
root@master01:~# kubectl get pod -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   cilium-2gdqf                       1/1     Running   0          3m29s
kube-system   cilium-operator-6946ccbcc5-9w275   1/1     Running   0          3m29s
kube-system   cilium-pmhgr                       1/1     Running   0          3m29s
kube-system   cilium-wnp9r                       1/1     Running   0          3m29s
kube-system   coredns-76f75df574-nf7bd           1/1     Running   0          3m39s
kube-system   coredns-76f75df574-s89vx           1/1     Running   0          3m39s
kube-system   etcd-master01                      1/1     Running   0          3m52s
kube-system   kube-apiserver-master01            1/1     Running   0          3m54s
kube-system   kube-controller-manager-master01   1/1     Running   0          3m53s
kube-system   kube-proxy-6mlkb                   1/1     Running   0          3m39s
kube-system   kube-proxy-7jx96                   1/1     Running   0          3m32s
kube-system   kube-proxy-9k92l                   1/1     Running   0          3m37s
kube-system   kube-scheduler-master01            1/1     Running   0          3m52s
kube-system   kube-sealos-lvscare-node1          1/1     Running   0          3m17s
kube-system   kube-sealos-lvscare-node2          1/1     Running   0          3m12s
root@master01:~#

(4) Check certificate expiration
root@master01:~# kubeadm certs check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0810 12:26:42.829869   12731 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jul 17, 2125 04:20 UTC   99y             ca                      no
apiserver                  Jul 17, 2125 04:20 UTC   99y             ca                      no
apiserver-etcd-client      Jul 17, 2125 04:20 UTC   99y             etcd-ca                 no
apiserver-kubelet-client   Jul 17, 2125 04:20 UTC   99y             ca                      no
controller-manager.conf    Jul 17, 2125 04:20 UTC   99y             ca                      no
etcd-healthcheck-client    Jul 17, 2125 04:20 UTC   99y             etcd-ca                 no
etcd-peer                  Jul 17, 2125 04:20 UTC   99y             etcd-ca                 no
etcd-server                Jul 17, 2125 04:20 UTC   99y             etcd-ca                 no
front-proxy-client         Jul 17, 2125 04:20 UTC   99y             front-proxy-ca          no
scheduler.conf             Jul 17, 2125 04:20 UTC   99y             ca                      no
super-admin.conf           Aug 10, 2026 04:20 UTC   364d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jul 17, 2125 04:20 UTC   99y             no
etcd-ca                 Jul 17, 2125 04:20 UTC   99y             no
front-proxy-ca          Jul 17, 2125 04:20 UTC   99y             no
root@master01:~#
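
In the kubectl get nodes output above, the worker nodes show ROLES as <none>. The role column is just a node label, so it can optionally be set for readability; a sketch:

# optional: label the workers so ROLES shows "worker" instead of <none>
kubectl label node node1 node-role.kubernetes.io/worker=
kubectl label node node2 node-role.kubernetes.io/worker=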

5. Add a Worker Node Online (IP 192.168.1.104)

root@master01:~# sealos add --nodes 192.168.1.104
2025-08-10T14:55:12 info start to scale this cluster
2025-08-10T14:55:12 info Executing pipeline JoinCheck in ScaleProcessor.
2025-08-10T14:55:12 info checker:hostname [192.168.1.98:22 192.168.1.104:22]
2025-08-10T14:55:12 info checker:timeSync [192.168.1.98:22 192.168.1.104:22]
2025-08-10T14:55:13 info checker:containerd [192.168.1.104:22]
2025-08-10T14:55:13 info Executing pipeline PreProcess in ScaleProcessor.
2025-08-10T14:55:13 info Executing pipeline PreProcessImage in ScaleProcessor.
2025-08-10T14:55:13 info Executing pipeline RunConfig in ScaleProcessor.
2025-08-10T14:55:13 info Executing pipeline MountRootfs in ScaleProcessor.
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/config.toml from /var/lib/sealos/data/default/rootfs/etc/config.toml.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/containerd.service from /var/lib/sealos/data/default/rootfs/etc/containerd.service.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/hosts.toml from /var/lib/sealos/data/default/rootfs/etc/hosts.toml.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.service.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml from /var/lib/sealos/data/default/rootfs/etc/image-cri-shim.yaml.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/kubelet.service from /var/lib/sealos/data/default/rootfs/etc/kubelet.service.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/registry.service from /var/lib/sealos/data/default/rootfs/etc/registry.service.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/registry.yml from /var/lib/sealos/data/default/rootfs/etc/registry.yml.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/registry_config.yml from /var/lib/sealos/data/default/rootfs/etc/registry_config.yml.tmpl completed
192.168.1.104:22        2025-08-10T14:55:51 info render /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf from /var/lib/sealos/data/default/rootfs/etc/systemd/system/kubelet.service.d/10-kubeadm.conf.tmpl completed
2025-08-10T14:55:46 info Executing pipeline Bootstrap in ScaleProcessor
192.168.1.104:22         INFO [2025-08-10 14:55:51] >> Check port kubelet port 10249..10259, reserved port 5050..5054 inuse. Please wait...
192.168.1.104:22         INFO [2025-08-10 14:55:53] >> check root,port,cri success
192.168.1.104:22        2025-08-10T14:55:53 info domain sealos.hub:192.168.1.98 append success
192.168.1.104:22        2025-08-10T14:55:53 info domain apiserver.cluster.local:10.103.97.2 append success
192.168.1.104:22        2025-08-10T14:55:54 info domain lvscare.node.ip:192.168.1.104 append success
192.168.1.104:22        Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
192.168.1.104:22         INFO [2025-08-10 14:55:59] >> Health check containerd!
192.168.1.104:22         INFO [2025-08-10 14:55:59] >> containerd is running
192.168.1.104:22         INFO [2025-08-10 14:55:59] >> init containerd success
192.168.1.104:22        Created symlink /etc/systemd/system/multi-user.target.wants/image-cri-shim.service → /etc/systemd/system/image-cri-shim.service.
192.168.1.104:22         INFO [2025-08-10 14:56:00] >> Health check image-cri-shim!
192.168.1.104:22         INFO [2025-08-10 14:56:00] >> image-cri-shim is running
192.168.1.104:22         INFO [2025-08-10 14:56:00] >> init shim success
192.168.1.104:22        127.0.0.1 localhost
192.168.1.104:22        ::1     ip6-localhost ip6-loopback
192.168.1.104:22        Firewall stopped and disabled on system startup
192.168.1.104:22        * Applying /usr/lib/sysctl.d/10-apparmor.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-bufferbloat.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-console-messages.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-kernel-hardening.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-magic-sysrq.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-map-count.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-network-security.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-ptrace.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/10-zeropage.conf ...
192.168.1.104:22        * Applying /usr/lib/sysctl.d/30-tracker.conf ...
192.168.1.104:22        * Applying /usr/lib/sysctl.d/50-bubblewrap.conf ...
192.168.1.104:22        * Applying /usr/lib/sysctl.d/50-pid-max.conf ...
192.168.1.104:22        * Applying /usr/lib/sysctl.d/99-protect-links.conf ...
192.168.1.104:22        * Applying /etc/sysctl.d/99-sysctl.conf ...
192.168.1.104:22        * Applying /etc/sysctl.conf ...
192.168.1.104:22        kernel.apparmor_restrict_unprivileged_userns = 1
192.168.1.104:22        net.core.default_qdisc = fq_codel
192.168.1.104:22        kernel.printk = 4 4 1 7
192.168.1.104:22        net.ipv6.conf.all.use_tempaddr = 2
192.168.1.104:22        net.ipv6.conf.default.use_tempaddr = 2
192.168.1.104:22        kernel.kptr_restrict = 1
192.168.1.104:22        kernel.sysrq = 176
192.168.1.104:22        vm.max_map_count = 1048576
192.168.1.104:22        net.ipv4.conf.default.rp_filter = 2
192.168.1.104:22        net.ipv4.conf.all.rp_filter = 2
192.168.1.104:22        kernel.yama.ptrace_scope = 1
192.168.1.104:22        vm.mmap_min_addr = 65536
192.168.1.104:22        fs.inotify.max_user_watches = 65536
192.168.1.104:22        kernel.unprivileged_userns_clone = 1
192.168.1.104:22        kernel.pid_max = 4194304
192.168.1.104:22        fs.protected_fifos = 1
192.168.1.104:22        fs.protected_hardlinks = 1
192.168.1.104:22        fs.protected_regular = 2
192.168.1.104:22        fs.protected_symlinks = 1
192.168.1.104:22        fs.file-max = 1048576 # sealos
192.168.1.104:22        net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.1.104:22        net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.1.104:22        net.core.somaxconn = 65535 # sealos
192.168.1.104:22        net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.1.104:22        net.ipv4.ip_forward = 1 # sealos
192.168.1.104:22        net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.1.104:22        net.ipv4.tcp_keepalive_intvl = 30 # sealos
192.168.1.104:22        net.ipv4.tcp_keepalive_time = 600 # sealos
192.168.1.104:22        net.ipv4.vs.conn_reuse_mode = 0 # sealos
192.168.1.104:22        net.ipv4.vs.conntrack = 1 # sealos
192.168.1.104:22        net.ipv6.conf.all.forwarding = 1 # sealos
192.168.1.104:22        vm.max_map_count = 2147483642 # sealos
192.168.1.104:22        fs.file-max = 1048576 # sealos
192.168.1.104:22        net.bridge.bridge-nf-call-ip6tables = 1 # sealos
192.168.1.104:22        net.bridge.bridge-nf-call-iptables = 1 # sealos
192.168.1.104:22        net.core.somaxconn = 65535 # sealos
192.168.1.104:22        net.ipv4.conf.all.rp_filter = 0 # sealos
192.168.1.104:22        net.ipv4.ip_forward = 1 # sealos
192.168.1.104:22        net.ipv4.ip_local_port_range = 1024 65535 # sealos
192.168.1.104:22        net.ipv4.tcp_keepalive_intvl = 30 # sealos
192.168.1.104:22        net.ipv4.tcp_keepalive_time = 600 # sealos
192.168.1.104:22        net.ipv4.vs.conn_reuse_mode = 0 # sealos
192.168.1.104:22        net.ipv4.vs.conntrack = 1 # sealos
192.168.1.104:22        net.ipv6.conf.all.forwarding = 1 # sealos
192.168.1.104:22        vm.max_map_count = 2147483642 # sealos
192.168.1.104:22         INFO [2025-08-10 14:56:03] >> pull pause image sealos.hub:5000/pause:3.9
192.168.1.104:22        Image is up to date for sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
192.168.1.104:22        Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
192.168.1.104:22         INFO [2025-08-10 14:56:05] >> init kubelet success
192.168.1.104:22         INFO [2025-08-10 14:56:05] >> init rootfs success
2025-08-10T14:56:00 info Executing pipeline Join in ScaleProcessor.
2025-08-10T14:56:00 info [192.168.1.104:22] will be added as worker
2025-08-10T14:56:00 info start to get kubernetes token...
2025-08-10T14:56:01 info fetch certSANs from kubeadm configmap
2025-08-10T14:56:01 info start to join 192.168.1.104:22 as worker
2025-08-10T14:56:01 info start to copy kubeadm join config to node: 192.168.1.104:22
2025-08-10T14:56:02 info run ipvs once module: 192.168.1.104:22
192.168.1.104:22        2025-08-10T14:56:07 info Trying to add route
192.168.1.104:22        2025-08-10T14:56:07 info success to set route.(host:10.103.97.2, gateway:192.168.1.104)
2025-08-10T14:56:02 info start join node: 192.168.1.104:22
192.168.1.104:22        [preflight] Running pre-flight checks
192.168.1.104:22                [WARNING FileExisting-socat]: socat not found in system path
192.168.1.104:22        [preflight] Reading configuration from the cluster...
192.168.1.104:22        [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
192.168.1.104:22        W0810 14:56:08.112112    6085 common.go:200] WARNING: could not obtain a bind address for the API Server: no default routes found in "/proc/net/route" or "/proc/net/ipv6_route"; using: 0.0.0.0
192.168.1.104:22        W0810 14:56:08.112331    6085 utils.go:69] The recommended value for "healthzBindAddress" in "KubeletConfiguration" is: 127.0.0.1; the provided value is: 0.0.0.0
192.168.1.104:22        [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
192.168.1.104:22        [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
192.168.1.104:22        [kubelet-start] Starting the kubelet
192.168.1.104:22        [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
192.168.1.104:22
192.168.1.104:22        This node has joined the cluster:
192.168.1.104:22        * Certificate signing request was sent to apiserver and a response was received.
192.168.1.104:22        * The Kubelet was informed of the new secure connection details.
192.168.1.104:22
192.168.1.104:22        Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
192.168.1.104:22
2025-08-10T14:56:05 info succeeded in joining 192.168.1.104:22 as worker
2025-08-10T14:56:05 info start to sync lvscare static pod to node: 192.168.1.104:22 master: [192.168.1.98:6443]
192.168.1.104:22        2025-08-10T14:56:11 info generator lvscare static pod is success
2025-08-10T14:56:06 info Executing pipeline RunGuest in ScaleProcessor.
2025-08-10T14:56:07 info succeeded in scaling this cluster
2025-08-10T14:56:07 info
[sealos ASCII-art banner]
Website: https://www.sealos.io/
Address: github.com/labring/sealos
Version: 5.0.1-2b74a1281
root@master01:~# kubectl get nodes
NAME       STATUS   ROLES           AGE    VERSION
master01   Ready    control-plane   156m   v1.29.9
node03     Ready    <none>          86s    v1.29.9
node1      Ready    <none>          156m   v1.29.9
node2      Ready    <none>          156m   v1.29.9
root@master01:~#
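
sealos can also scale the cluster back down: a node that was added with sealos add can be removed with the matching delete subcommand. A sketch that removes the node added above:

# remove the worker added above from the cluster
sealos delete --nodes 192.168.1.104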

6. Install Sealos Cloud for a Graphical Kubernetes PaaS

(1) Download the sealos-cloud image and upload it to master01
Note: pulling this image directly from the Aliyun registry fails with a permission error, so pull it from Docker Hub instead.
docker pull docker.io/labring/sealos-cloud:latest

(2) Save the image to a tarball
docker save -o sealos-cloud.tar docker.io/labring/sealos-cloud:latest

(3) Upload sealos-cloud.tar to the /home/test directory and import it with sealos load
root@master01:~# sealos load -i /home/test/sealos-cloud.tar
Getting image source signatures
Copying blob b63eb4a8e470 done
Copying config 8f15d6df44 done
Writing manifest to image destination
Storing signatures
Loaded image: docker.io/labring/sealos-cloud:latest
root@master01:~# sealos images
REPOSITORY                                             TAG       IMAGE ID       CREATED        SIZE
registry.cn-shanghai.aliyuncs.com/labring/kubernetes   v1.29.9   bca192f35556   3 months ago   669 MB
docker.io/labring/sealos-cloud                         latest    8f15d6df448e   7 months ago   1.46 GB
registry.cn-shanghai.aliyuncs.com/labring/cilium       v1.13.4   71aa52ad0a11   2 years ago    483 MB
registry.cn-shanghai.aliyuncs.com/labring/helm         v3.9.4    3376f6822067   2 years ago    46.4 MB
root@master01:~# sealos tag docker.io/labring/sealos-cloud:latest registry.cn-shanghai.aliyuncs.com/labring/sealos-cloud:latest
root@master01:~# sealos images
REPOSITORY                                               TAG       IMAGE ID       CREATED        SIZE
registry.cn-shanghai.aliyuncs.com/labring/kubernetes     v1.29.9   bca192f35556   3 months ago   669 MB
docker.io/labring/sealos-cloud                           latest    8f15d6df448e   7 months ago   1.46 GB
registry.cn-shanghai.aliyuncs.com/labring/sealos-cloud   latest    8f15d6df448e   7 months ago   1.46 GB
registry.cn-shanghai.aliyuncs.com/labring/cilium         v1.13.4   71aa52ad0a11   2 years ago    483 MB
registry.cn-shanghai.aliyuncs.com/labring/helm           v3.9.4    3376f6822067   2 years ago    46.4 MB
root@master01:~#
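With the image retagged to the same registry prefix as the other cluster images, sealos cloud can be started from the locally loaded image. A minimal sketch (the cloudDomain value is a placeholder that must resolve to the master node, e.g. a real domain or a nip.io address; the --env names follow the sealos cloud self-hosting docs and should be checked against the image version actually loaded):

sealos run registry.cn-shanghai.aliyuncs.com/labring/sealos-cloud:latest \
  --env cloudDomain="192.168.1.98.nip.io" \
  --env cloudPort="443"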

七、Install KubeBlocks

(1)Install the VolumeSnapshot CRDs
First check whether the CRDs already exist:
kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io
kubectl get crd volumesnapshots.snapshot.storage.k8s.io
kubectl get crd volumesnapshotcontents.snapshot.storage.k8s.io

If they are missing, create them:
kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v8.2.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v8.2.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v8.2.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml

root@master01:~# kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io
NAME                                            CREATED AT
volumesnapshotclasses.snapshot.storage.k8s.io   2025-08-10T07:31:06Z
root@master01:~# kubectl get crd volumesnapshots.snapshot.storage.k8s.io
NAME                                      CREATED AT
volumesnapshots.snapshot.storage.k8s.io   2025-08-10T07:31:07Z
root@master01:~# kubectl get crd volumesnapshotcontents.snapshot.storage.k8s.io
NAME                                             CREATED AT
volumesnapshotcontents.snapshot.storage.k8s.io   2025-08-10T07:31:08Z
root@master01:~#
(2)Deploy the snapshot controller
root@master01:~# helm repo add piraeus-charts https://piraeus.io/helm-charts/
"piraeus-charts" has been added to your repositories
root@master01:~# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "piraeus-charts" chart repository
Update Complete. ⎈Happy Helming!⎈
root@master01:~# helm install snapshot-controller piraeus-charts/snapshot-controller -n kb-system --create-namespace
NAME: snapshot-controller
LAST DEPLOYED: Sun Aug 10 15:35:25 2025
NAMESPACE: kb-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Volume Snapshot Controller installed.

If you already have volume snapshots deployed using a CRDs before v1, you should
verify that the existing snapshots are upgradable to v1 CRDs. The snapshot controller (>= v3.0.0)
will label any invalid snapshots it can find. Use the following commands to find any invalid snapshot

kubectl get volumesnapshots --selector=snapshot.storage.kubernetes.io/invalid-snapshot-resource="" --all-namespaces
kubectl get volumesnapshotcontents --selector=snapshot.storage.kubernetes.io/invalid-snapshot-resource="" --all-namespaces

If the above commands return any items, you need to remove them before upgrading to the newer v1 CRDs.
root@master01:~#
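Before moving on, it is worth verifying that the snapshot controller actually came up. A minimal check (assuming the deployment is named after the Helm release above; adjust the name if the chart renders it differently):

kubectl -n kb-system get pods                                     # the snapshot-controller pod should be Running
kubectl -n kb-system rollout status deploy/snapshot-controller   # waits until the deployment is available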


(3)Install the KubeBlocks CRDs

root@master01:~# kubectl create -f https://ghfast.top/https://github.com/apecloud/kubeblocks/releases/download/v1.0.0/kubeblocks_crds.yaml
customresourcedefinition.apiextensions.k8s.io/clusterdefinitions.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/clusters.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/componentdefinitions.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/components.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/componentversions.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/configconstraints.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/configurations.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/servicedescriptors.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/shardingdefinitions.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/sidecardefinitions.apps.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/actionsets.dataprotection.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/backuppolicies.dataprotection.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/backuppolicytemplates.dataprotection.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/backuprepos.dataprotection.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/backups.dataprotection.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/backupschedules.dataprotection.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/restores.dataprotection.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/storageproviders.dataprotection.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/nodecountscalers.experimental.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/addons.extensions.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/opsdefinitions.operations.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/opsrequests.operations.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/componentparameters.parameters.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/paramconfigrenderers.parameters.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/parameters.parameters.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/parametersdefinitions.parameters.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/reconciliationtraces.trace.kubeblocks.io created
customresourcedefinition.apiextensions.k8s.io/instancesets.workloads.kubeblocks.io created
root@master01:~#
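The CRDs only register the API types; the KubeBlocks operator itself still needs to be installed. A minimal sketch using the upstream Helm chart (repo URL and chart name as published in the KubeBlocks docs; the chart version is pinned here to match the v1.0.0 CRDs created above):

helm repo add kubeblocks https://apecloud.github.io/helm-charts
helm repo update
helm install kubeblocks kubeblocks/kubeblocks --namespace kb-system --create-namespace --version 1.0.0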
