
Docker Swarm Cluster Usage Notes

1 Initialize the cluster

Directory layout on the manager host:

data
├── base_data.yml
├── base_monitoring.yml
├── base_server_middleware.yml
└── docker
    ├── consul
    ├── elasticsearch
    ├── filebeat
    ├── grafana
    ├── kib
    ├── konga
    ├── mongodb
    ├── mysql
    ├── nacos
    ├── nginx
    ├── portainer
    ├── postgresql
    ├── prometheus
    ├── rabbitmq
    └── redis
1.1 Configure the host names of the servers

First set a hostname on each node so that the nodes are easy to tell apart later.

hostnamectl set-hostname manager    # manager node
hostnamectl set-hostname node1      # worker node 1
hostnamectl set-hostname node2      # worker node 2

After changing the hostnames, you also need to update the /etc/hosts records; otherwise DNS lookups may later fail with "unable to resolve host xxxx: Temporary failure in name resolution".

vi /etc/hosts
# point 127.0.1.1 at the hostname set above, e.g. 127.0.1.1 manager (or node1, node2, etc. on the workers)
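For example, on the manager node the relevant entries would look like this (node1 and node2 carry their own hostnames instead):

127.0.0.1   localhost
127.0.1.1   manager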
1.2 Create the cluster and join nodes
# on the manager node:
docker swarm init --advertise-addr 10.10.6.111 --data-path-addr 10.10.6.111

# on each worker node, using the token printed by the init command:
docker swarm join --token SWMTKN-1-51niu3a5jh0bgj738go49re9yoo1hpzidq6nxn5ho114yx43-ekeyf6rynb6xl9rykyrx8 \
10.10.6.111:2377 --advertise-addr 172.168.1.175:2377

Note: if the machines in the swarm sit on different networks — for example each node is a cloud server and the servers can only reach one another over public IPs — then each worker must pass its own public IP with --advertise-addr when joining. In the example above, 172.168.1.175 is a public IP that can reach 10.10.6.111. Without it, services in the cluster cannot communicate over the internal network that DNS-based service discovery relies on: when --advertise-addr is omitted, a node joins the swarm with the IP of its eth0 interface, which is normally a private address, so nodes on different networks end up unable to talk to each other. If the whole cluster sits on one internal network and the eth0 addresses of all servers can already reach each other, --advertise-addr is not needed.
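If the join command is lost, it can be printed again on the manager, and cluster membership can be verified afterwards with the standard Swarm commands:

docker swarm join-token worker   # prints the full 'docker swarm join ...' command for workers
docker node ls                   # run on the manager; every node should report STATUS Ready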

1.3 Open the required ports

1. In the cloud provider's management console, open TCP/UDP ports 7946, 4789 and 2377 to the other servers in the cluster.

2. Open the firewall ports on each server in the cluster:
ufw allow proto tcp from any to any port 7946,4789,2377
ufw allow proto udp from any to any port 7946,4789,2377
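To confirm the rules are in place and the swarm ports are reachable from another node, something like the following can be used (nc comes from the netcat package; 10.10.6.111 is the manager address used in this article):

ufw status
nc -zv 10.10.6.111 2377
nc -zv 10.10.6.111 7946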

2 Create the base database services

base_data.yml:

version: '3.8'

networks:
  base_service_database-net:
    external: true

services:
  # mysql
  mysql:
    # mysql:8.0.20, or change to another MySQL version
    image: mysql:8.0.20
    # container name
    container_name: mysql-8
    networks:
      - base_service_database-net
    environment:
      # root password
      - MYSQL_ROOT_PASSWORD=python
      - TZ=Asia/Shanghai
      - SET_CONTAINER_TIMEZONE=true
      - CONTAINER_TIMEZONE=Asia/Shanghai
    volumes:
      # host directory on the left, container directory on the right (missing host directories are created automatically)
      - /data/docker/mysql/mysql8:/etc/mysql
      - /data/docker/mysql/mysql8/logs:/logs
      - /data/docker/mysql/mysql8/data:/var/lib/mysql
      - /etc/localtime:/etc/localtime
      - /data/docker/mysql/mysql8/mysql-files:/var/lib/mysql-files
    deploy:
      placement:
        constraints:
          - node.hostname == manager
      # replicas: 1  # single replica, pinned to a fixed node
    ports:
      # host port on the left, container port on the right
      - 3613:3306
    restart: always
    privileged: true

  # mongo
  mongo:
    restart: always
    image: mongo:8.0.3
    container_name: mongodb
    networks:
      - base_service_database-net
    volumes:
      - /data/docker/mongodb/config/mongod.conf:/etc/mongod.conf
      - /data/docker/mongodb/data:/data/db
      - /data/docker/mongodb/logs:/var/log/mongodb
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_PASSWORD=python
      - MONGO_INITDB_ROOT_USERNAME=caipu_srv
    deploy:
      placement:
        constraints:
          - node.hostname == manager

  # redis
  redis:
    image: redis:7.0.12
    container_name: redis
    restart: always
    networks:
      - base_service_database-net
    command: redis-server /usr/local/etc/redis/redis.conf --appendonly no
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/docker/redis/config/redis.conf:/usr/local/etc/redis/redis.conf
      - /data/docker/redis/data:/data
      - /data/docker/redis/logs:/logs
    ports:
      - 6379:6379
    deploy:
      placement:
        constraints:
          - node.hostname == manager

  kong-database:
    image: postgres:16
    container_name: kong-database
    restart: always
    networks:
      - base_service_database-net
    environment:
      - POSTGRES_USER=kong
      - POSTGRES_DB=kong
      - POSTGRES_PASSWORD=kong
    volumes:
      - /data/docker/postgresql/data:/var/lib/postgis/data
      - /data/docker/postgresql/data:/var/lib/postgresql/data
    ports:
      - "5348:5432"
    deploy:
      placement:
        constraints:
          - node.hostname == manager

  # initializes the kong database
  kong-migration:
    container_name: kong-migration
    image: kong
    command: kong migrations bootstrap
    networks:
      - base_service_database-net
    restart: on-failure
    environment:
      - KONG_PG_HOST=kong-database
      - KONG_DATABASE=postgres
      - KONG_PG_USER=kong
      - KONG_PG_PASSWORD=kong
      - KONG_CASSANDRA_CONTACT_POINTS=kong-database
    links:
      - kong-database
    depends_on:
      - kong-database
    deploy:
      placement:
        constraints:
          - node.hostname == manager

  elasticsearch:
    image: elasticsearch:7.17.7
    restart: always
    container_name: elasticsearch
    networks:
      - base_service_database-net
    environment:
      - discovery.type=single-node
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - /data/docker/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /data/docker/elasticsearch/data:/usr/share/elasticsearch/data
      - /data/docker/elasticsearch/logs:/usr/share/elasticsearch/logs
    deploy:
      placement:
        constraints:
          - node.hostname == manager

First create the overlay network:

docker network create --driver overlay base_service_database-net --attachable

Then run the following command to deploy the services:

docker stack deploy -c  base_data.yml  base_service_database
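After the deploy, the placement and health of each service can be checked with the standard Swarm commands (service names are prefixed with the stack name):

docker stack services base_service_database
docker service ps base_service_database_mysql --no-trunc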
3 Create the monitoring services

Note: the cAdvisor Docker image may not be directly pullable. You can fetch the cAdvisor image tarball via the cAdvisor download link and then install it offline with docker load -i.
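One way to do the offline install, assuming some machine can reach gcr.io, is to export the image there, copy the tarball to every swarm node, and load it (the tar file name is arbitrary):

docker pull gcr.io/cadvisor/cadvisor:v0.52.1
docker save -o cadvisor-v0.52.1.tar gcr.io/cadvisor/cadvisor:v0.52.1
# copy cadvisor-v0.52.1.tar to each node, then on each node:
docker load -i cadvisor-v0.52.1.tar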

base_monitoring.yml

version: "3.8"

networks:
  monitoring:
    external: true
  base_service_database-net:
    external: true

services:
  # Prometheus
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - /data/docker/prometheus/data:/prometheus
      - /data/docker/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
    networks:
      - monitoring
      - base_service_database-net
    deploy:
      placement:
        constraints:
          - node.role == manager
    environment:
      - TZ=Asia/Shanghai

  # Node Exporter (deployed globally to all nodes)
  node-exporter:
    image: prom/node-exporter:latest
    command:
      - '--path.rootfs=/host'
    pid: host
    volumes:
      - '/:/host:ro,rslave'
    environment:
      - TZ=Asia/Shanghai
    networks:
      - monitoring
    deploy:
      mode: global

  # cAdvisor (deployed globally to all nodes)
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:v0.52.1
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /proc:/proc
      - /var/lib/docker/:/var/lib/docker:ro
    security_opt:
      - apparmor:unconfined
    devices:
      - /dev/kmsg:/dev/kmsg
    networks:
      - monitoring
    deploy:
      mode: global
    ports:
      - "8080:8080"

  # Grafana dashboard
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    volumes:
      - /data/docker/grafana:/var/lib/grafana
    environment:
      - TZ=Asia/Shanghai
    networks:
      - monitoring
    deploy:
      placement:
        constraints:
          - node.role == manager

First create the overlay network:

docker network create --driver overlay --attachable monitoring

Then run the following command to deploy the services:

docker stack deploy -c base_monitoring.yml  monitoring

If Prometheus or Grafana fails to start, change the permissions on the mounted directories:

chmod 777  /data/docker/grafana
chmod 777  /data/docker/prometheus/data
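The stack above mounts /data/docker/prometheus/prometheus.yml into the Prometheus container, but the file itself is not shown in these notes. A minimal sketch, assuming the stack is deployed under the name monitoring (so the services are named monitoring_node-exporter and monitoring_cadvisor) and using Swarm's tasks.<service> DNS entries to discover every task of the global services:

global:
  scrape_interval: 15s

scrape_configs:
  # every node-exporter task on the monitoring overlay network
  - job_name: 'node-exporter'
    dns_sd_configs:
      - names: ['tasks.monitoring_node-exporter']
        type: 'A'
        port: 9100
  # every cAdvisor task
  - job_name: 'cadvisor'
    dns_sd_configs:
      - names: ['tasks.monitoring_cadvisor']
        type: 'A'
        port: 8080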
4 Create the middleware services

base_server_middleware.yml

version: '3.8'

networks:
  base_service_database-net:
    external: true
  web_app:
    external: true
  monitoring:
    external: true

services:
  consul:
    image: consul:1.15.4
    restart: always
    container_name: consul
    networks:
      - web_app
    ports:
      - "8500:8500"
      - "8600:8600"
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/docker/consul/data:/consul/data
    deploy:
      placement:
        constraints:
          - node.hostname == manager

  nacos:
    image: qingpan/rnacos:stable
    container_name: nacos
    networks:
      - web_app
    ports:
      - "8848:8848"
      - "9848:9848"
      - "10848:10848"
    volumes:
      - /data/docker/nacos/logs:/home/nacos/logs
    restart: always
    environment:
      - RNACOS_HTTP_PORT=8848
      - RNACOS_ENABLE_NO_AUTH_CONSOLE=true
      - TZ=Asia/Shanghai
      - MODE=standalone
      - SPRING_DATASOURCE_PLATFORM=mysql
      - MYSQL_SERVICE_HOST=81.71.64.139
      - MYSQL_SERVICE_PORT=3306
      - MYSQL_SERVICE_USER=root
      - MYSQL_SERVICE_PASSWORD=python
      - MYSQL_SERVICE_DB_NAME=nacos_config
      - MYSQL_SERVICE_DB_PARAM=characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC
    deploy:
      placement:
        constraints:
          - node.hostname == manager

  portainer:
    image: 6053537/portainer-ce
    container_name: portainer
    networks:
      - monitoring
    ports:
      - "9000:9000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /data/docker/portainer:/data
    restart: always
    deploy:
      placement:
        constraints:
          - node.hostname == manager

  rabbitmq:
    image: rabbitmq:4.0.7-management
    container_name: rabbitmq
    networks:
      - monitoring
      - web_app
      - base_service_database-net
    environment:
      - RABBITMQ_DEFAULT_USER=root
      - RABBITMQ_DEFAULT_PASS=q123q123
    ports:
      - "5672:5672"
      - "15672:15672"
    volumes:
      - /data/docker/rabbitmq/data:/var/lib/rabbitmq
      - /data/docker/rabbitmq/logs:/var/log/rabbitmq
    restart: always
    deploy:
      placement:
        constraints:
          - node.hostname == manager

  konga:
    container_name: konga
    image: pantsel/konga:latest
    restart: always
    networks:
      - monitoring
      - base_service_database-net
    ports:
      - "1337:1337"
    deploy:
      placement:
        constraints:
          - node.hostname == manager

  kibana:
    container_name: kibana
    image: kibana:7.17.7
    restart: always
    volumes:
      - /data/docker/kib/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    networks:
      - base_service_database-net
      - monitoring
    ports:
      - "5601:5601"
    deploy:
      placement:
        constraints:
          - node.hostname == manager

  filebeat:
    container_name: filebeat
    image: elastic/filebeat:7.17.7
    restart: always
    networks:
      - base_service_database-net
    deploy:
      mode: global
    configs:
      - source: filebeat-config
        target: /usr/share/filebeat/filebeat.yml  # mount path of the config file inside the container
    volumes:
      - type: bind
        source: /data/logs/
        target: /data/logs/
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock
      - type: bind
        source: /var/lib/docker/containers
        target: /var/lib/docker/containers
        read_only: true

configs:
  filebeat-config:
    file: /data/docker/filebeat/config/filebeat.yml  # use the local filebeat.yml file

First create the overlay network:

docker network create --driver overlay --attachable web_app

Then create the Docker config for filebeat.yml.
The filebeat.yml configuration is as follows:

filebeat.inputs:
- type: filestream
  enabled: true
  paths:
    - /data/logs/*/*.log
  parsers:
    - multiline:
        type: pattern
        pattern: '^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}'
        negate: true
        match: after
        max_lines: 500
        timeout: 10s

processors:
  - dissect:
      tokenizer: "%{log_timestamp} | %{log_level} | %{namespace} | %{file_path} | %{method} | %{track_id} | %{message}"
      field: "message"
      max_lines: 500
      target_prefix: ""
      overwrite_keys: true
  - timestamp:
      field: log_timestamp
      layouts:
        - '2006-01-02 15:04:05.000'
      test:
        - '2025-04-14 09:16:52.758'
  - drop_fields:
      fields: ["log_timestamp"]

output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]
  index: "caipu_srv-logs-%{+yyyy.MM.dd}"
  indices:
    - index: "caipu_srv-logs-%{+yyyy.MM.dd}"
      when.contains:
        tags: "xixi"
      pipeline: "xixi_processor"

setup.template.enabled: false
setup.template.name: "caipu_srv"
setup.template.pattern: "caipu_srv-*"
docker config create filebeat-config  /data/docker/filebeat/config/filebeat.yml
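Docker configs are immutable, so if filebeat.yml changes later a new config has to be created under a different name and swapped into the running service (the name filebeat-config-v2 below is just an example):

docker config create filebeat-config-v2 /data/docker/filebeat/config/filebeat.yml
docker service update \
  --config-rm filebeat-config \
  --config-add source=filebeat-config-v2,target=/usr/share/filebeat/filebeat.yml \
  base_server_middleware_filebeat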

Finally, run the following command to deploy the services:

docker stack deploy -c base_server_middleware.yml  base_server_middleware
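Once everything is up, the three stacks and their tasks can be inspected from the manager:

docker stack ls
docker stack ps base_server_middleware                 # filebeat should have one task per node (mode: global)
docker service logs base_server_middleware_filebeat    # quick check that log shipping has started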
