Web Cluster
Project Name
Build a highly available, high-performance web cluster based on keepalived + nginx.
Project Architecture Diagram
Project Description
Build a layer-7 load-balanced web cluster based on nginx that simulates a real business environment, aiming for high concurrency and high availability. Stress tests are used to measure the performance of the whole cluster, locate bottlenecks, and keep optimizing.
Project Environment
9 Linux servers (CentOS 7.9), nginx-1.25.2, ab 2.3, nfs4, Prometheus 2.34.0, node_exporter-1.4.0, grafana 10.0.0, keepalived 2.1.5, ansible 2.9.27, bind.
Project Steps
Part 1. Preliminary preparation
1. Disable SELinux and firewalld
# Stop the firewall and keep it from starting at boot
service firewalld stop && systemctl disable firewalld
# Temporarily disable SELinux
setenforce 0
# Permanently disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
2. Configure static IP addresses
cd /etc/sysconfig/network-scripts/
vim ifcfg-ens33

TYPE="Ethernet"
BOOTPROTO="none"
DEVICE="ens33"
NAME="ens33"
ONBOOT="yes"
IPADDR="192.168.0.11"
PREFIX=24
GATEWAY="192.168.0.1"
DNS1=114.114.114.114
# Configure static IPs on the other servers according to the planned addressing
3. Set the hostnames
hostnamectl set-hostname web-1
hostnamectl set-hostname web-2
hostnamectl set-hostname web-3
hostnamectl set-hostname LB-1
hostnamectl set-hostname LB-2
hostnamectl set-hostname nfs
hostnamectl set-hostname ansible
hostnamectl set-hostname Prometheus
hostnamectl set-hostname dns
Part 2. Set up the ansible server, establish a passwordless SSH channel, and write a playbook that uses a one-key nginx installation script to deploy the nginx cluster quickly.
1. Write the one-key nginx installation script
[root@ansible ~]# cat onekey_install_nginx.sh
#!/bin/bash
# Create a directory to hold the downloaded nginx source package
mkdir -p /nginx
cd /nginx
# Create the service user
useradd hanwei -s /sbin/nologin
# Download the nginx source package
yum install wget -y
wget http://nginx.org/download/nginx-1.25.2.tar.gz
# Unpack the nginx source package
tar xf nginx-1.25.2.tar.gz
# Install build dependencies
yum -y install openssl openssl-devel pcre pcre-devel gcc autoconf automake make
# Configure before compiling (enter the unpacked source directory first)
cd nginx-1.25.2
./configure --prefix=/usr/local/scnginx99 --user=hanwei --with-threads --with-http_ssl_module --with-http_v2_module --with-http_stub_status_module --with-stream
# Compile with 2 parallel jobs to speed things up
make -j 2
# Install
make install
# Start nginx
/usr/local/scnginx99/sbin/nginx
# Update the PATH variable
PATH=$PATH:/usr/local/scnginx99/sbin/
echo "PATH=$PATH:/usr/local/scnginx99/sbin/" >>/root/.bashrc
# Start nginx at boot
echo "/usr/local/scnginx99/sbin/nginx" >>/etc/rc.local
chmod +x /etc/rc.d/rc.local
# Disable SELinux and firewalld
systemctl stop firewalld
# Keep firewalld from starting at boot
systemctl disable firewalld
# Temporarily disable SELinux
setenforce 0
# Permanently disable SELinux
sed -i '/^SELINUX=/ s/enforcing/disabled/' /etc/selinux/config
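A quick sanity check after the script finishes (a minimal sketch; the paths follow the --prefix used above):

/usr/local/scnginx99/sbin/nginx -v     # print the installed nginx version
curl -I http://127.0.0.1               # expect an HTTP 200 response from the default page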
2. Deploy the ansible server
(1) Install the ansible packages
[root@ansible ~]# yum install epel-release -y
[root@ansible ~]# yum install ansible -y
(2) Establish a passwordless SSH channel: generate a key pair on the ansible server, specifying RSA as the key type. RSA is an asymmetric encryption algorithm widely used for SSH authentication.
[root@ansible .ssh]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:RwPsvRYZ/cRdqUv2JVlFKQovTUsDb+3B+27izeinC8c root@ansible
The key's randomart image is:
+---[RSA 2048]----+
| ..... . oO|
| .oo++.ooo|
| . .O=+*oo |
| .o=*.+* .|
| S ooooo..|
| .o .... |
| . . E . |
| o.=o |
| o**+ |
+----[SHA256]-----+
[root@ansible .ssh]# ls
id_rsa id_rsa.pub
(3) Copy the public key into the root user's home directory on every server in the web cluster to enable passwordless SSH login
ssh-copy-id -i id_rsa.pub root@192.168.0.11
ssh-copy-id -i id_rsa.pub root@192.168.0.12
ssh-copy-id -i id_rsa.pub root@192.168.0.13
ssh-copy-id -i id_rsa.pub root@192.168.0.100
ssh-copy-id -i id_rsa.pub root@192.168.0.200
ssh-copy-id -i id_rsa.pub root@192.168.0.20
ssh-copy-id -i id_rsa.pub root@192.168.0.21
ssh-copy-id -i id_rsa.pub root@192.168.0.22
(4) Verify that passwordless key-based authentication works (remote login)
[root@ansible .ssh]# ssh root@192.168.0.11
[root@web-1 ~]# exit
logout
Connection to 192.168.0.11 closed.
[root@ansible .ssh]# ssh root@192.168.0.12
[root@web-2 ~]# exit
logout
Connection to 192.168.0.12 closed.
[root@ansible .ssh]# ssh root@192.168.0.13
[root@web-3 ~]# exit
logout
Connection to 192.168.0.13 closed.
(5) Write the host inventory
[root@ansible .ssh]# cd /etc/ansible
[root@ansible ansible]# ls
ansible.cfg hosts roles
[root@ansible ansible]# vim hosts
[web]
192.168.0.11
192.168.0.12
192.168.0.13
[LB]
192.168.0.100
192.168.0.200
[dns]
192.168.0.20
[nfs]
192.168.0.21
[Prometheus]
192.168.0.22
Test: this command runs ip add against the host group named web
[root@ansible ~]# ansible web -m shell -a "ip add"
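As an extra connectivity check before writing the playbook, the standard ping module can be run against all groups (a minimal sketch):

[root@ansible ~]# ansible all -m ping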
3. Write the playbook
# Write the playbook
[root@ansible ~]# cat nginx.yaml
- hosts: web
  remote_user: root
  tasks:
  - name: mkdir /web
    file: path=/web state=directory
  - name: cp onekey_install_nginx.sh to hosts
    copy: src=/root/onekey_install_nginx.sh dest=/web/onekey_install_nginx.sh
  - name: install nginx
    shell: bash /web/onekey_install_nginx.sh
- hosts: LB
  remote_user: root
  tasks:
  - name: mkdir /web
    file: path=/web state=directory
  - name: cp onekey_install_nginx.sh to hosts
    copy: src=/root/onekey_install_nginx.sh dest=/web/onekey_install_nginx.sh
  - name: install nginx
    shell: bash /web/onekey_install_nginx.sh

# Check the playbook syntax
[root@ansible ~]# ansible-playbook --syntax-check nginx.yaml
playbook: nginx.yaml
# Run the playbook
[root@ansible ~]# ansible-playbook nginx.yaml
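One way to confirm the rollout on every node (a minimal sketch; pgrep and netstat are assumed to be present on the targets):

[root@ansible ~]# ansible 'web:LB' -m shell -a "pgrep -c nginx"
[root@ansible ~]# ansible 'web:LB' -m shell -a "netstat -anplut | grep nginx"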
Part 3. Deploy two Linux servers as load balancers using nginx's layer-7/layer-4 load balancing, with weighted round-robin as the scheduling algorithm.
1. Configuration on the load balancers (layer-7 load balancing)
vim nginx.conf

http {
    upstream nginx_web {
        server 192.168.0.11 weight=1;
        server 192.168.0.12 weight=2;
        server 192.168.0.13 weight=5;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://nginx_web;
        }
    }
}

[root@LB-1 conf]# nginx -t
nginx: the configuration file /usr/local/scnginx99/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/scnginx99/conf/nginx.conf test is successful
[root@LB-1 conf]# nginx -s reload
[root@LB-2 conf]# nginx -t
nginx: the configuration file /usr/local/scnginx99/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/scnginx99/conf/nginx.conf test is successful
[root@LB-2 conf]# nginx -s reload
2. Configuration on the load balancers (layer-4 load balancing)
[root@LB-1 conf]# cat nginx.conf
worker_processes 2;
events {
    worker_connections 1024;
}
stream {
    upstream web_servers {
        server 192.168.0.11:80 weight=1;
        server 192.168.0.12:80 weight=2;
        server 192.168.0.13:80 weight=5;
    }
    server {
        listen 80;
        proxy_pass web_servers;
    }
}
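A quick way to observe the weighted round-robin distribution (a minimal sketch; it assumes each backend temporarily serves a distinguishable index.html, otherwise the shared NFS page looks identical everywhere):

# Send 16 requests to the load balancer and count how often each page comes back
for i in $(seq 16); do curl -s http://192.168.0.100/; done | sort | uniq -c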
Part 4. Set up the NFS server to keep the nginx cluster's content consistent, and configure the backend real servers to mount the share automatically at boot.
1. Set up the NFS server
(1) Install nfs-utils
[root@nfs ~]# yum install nfs-utils -y
(2) Create the shared directory and an index.html
[root@nfs ~]# mkdir /nginx
[root@nfs ~]# cd /nginx
[root@nfs nginx]# echo "hello world" >index.html
[root@nfs nginx]# ls
index.html
(3) Configure the shared directory
[root@nfs ~]# vim /etc/exports
[root@nfs ~]# cat /etc/exports
/nginx 192.168.0.0/24(ro,no_root_squash,sync)
(4) Refresh NFS / re-export the shared directories
[root@nfs ~]# exportfs -r    # re-export all shared directories
[root@nfs ~]# exportfs -v    # show the exported directories
/nginx 192.168.0.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,ro,secure,no_root_squash,no_all_squash)
(5) Restart the NFS service and enable it at boot
[root@nfs web]# systemctl restart nfs && systemctl enable nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
(6) Test that the NFS share can be mounted on the web servers
[root@web-1 ~]# mount 192.168.0.21:/nginx /usr/local/scnginx99/html
[root@web-1 ~]# df -Th|grep nfs
192.168.0.21:/nginx nfs4 17G 1.5G 16G 9% /usr/local/scnginx99/html
[root@web-1 ~]# yum install nfs-utils -y
[root@web-1 ~]# service nfs restart
Redirecting to /bin/systemctl restart nfs.service
[root@web-1 ~]# ps aux |grep nfs
root 87368 0.0 0.0 0 0 ? S< 16:49 0:00 [nfsd4_callbacks]
root 87374 0.0 0.0 0 0 ? S 16:49 0:00 [nfsd]
root 87375 0.0 0.0 0 0 ? S 16:49 0:00 [nfsd]
root 87376 0.0 0.0 0 0 ? S 16:49 0:00 [nfsd]
root 87377 0.0 0.0 0 0 ? S 16:49 0:00 [nfsd]
root 87378 0.0 0.0 0 0 ? S 16:49 0:00 [nfsd]
root 87379 0.0 0.0 0 0 ? S 16:49 0:00 [nfsd]
root 87380 0.0 0.0 0 0 ? S 16:49 0:00 [nfsd]
root 87381 0.0 0.0 0 0 ? S 16:49 0:00 [nfsd]
root 96648 0.0 0.0 112824 988 pts/0 S+ 17:02 0:00 grep --color=auto nfs
[root@web-2 ~]# mount 192.168.0.21:/nginx /usr/local/scnginx99/html
[root@web-3 ~]# mount 192.168.0.21:/nginx /usr/local/scnginx99/html
2. Mount the NFS file system automatically at boot
# On web-1, web-2 and web-3
vim /etc/fstab
192.168.0.21:/nginx /usr/local/scnginx99/html nfs defaults 0 0
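The fstab entry can be validated without rebooting (a minimal sketch):

[root@web-1 ~]# umount /usr/local/scnginx99/html
[root@web-1 ~]# mount -a                 # remount everything listed in /etc/fstab
[root@web-1 ~]# df -Th | grep nfs        # the share should be mounted again if the entry is correct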
Part 5. Set up the DNS server to provide name resolution for the whole cluster. Both load-balancer VIPs are bound to one domain name, so DNS resolution steers user traffic to different load balancers.
(1) Install bind
yum install bind* -y
(2) Enable named at boot and start the DNS service
systemctl enable named && systemctl start named
(3) Check the process and listening ports
ps aux|grep named
netstat -anplut|grep named
(4) Edit /etc/named.conf and restart the service so that other machines are allowed to query this DNS server
[root@dns ~]# vim /etc/named.conf
options {
        listen-on port 53 { any; };        # modified
        listen-on-v6 port 53 { any; };     # modified
        directory "/var/named";
        dump-file "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        recursing-file "/var/named/data/named.recursing";
        secroots-file "/var/named/data/named.secroots";
        allow-query { any; };              # modified

Restart the named service
[root@dns ~]# service named restart
Redirecting to /bin/systemctl restart named.service
(5) Edit the zone configuration file to tell named to serve the sc.com domain
[root@dns named]# vim /etc/named.rfc1912.zones

zone "sc.com" IN {
        type master;
        file "sc.com.zone";
        allow-update { none; };
};
(6) Create the sc.com.zone data file
[root@dns named]# pwd
/var/named
[root@dns named]# ls
chroot chroot_sdb data dynamic dyndb-ldap named.ca named.empty named.localhost named.loopback
[root@dns named]# cp -a named.localhost sc.com.zone
[root@dns named]# ls
chroot chroot_sdb data dynamic dyndb-ldap named.ca named.empty named.localhost named.loopback sc.com.zone
(7) Edit sc.com.zone
[root@dns named]# cat sc.com.zone
$TTL 1D
@       IN SOA  @ rname.invalid. (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
        NS      @
        A       127.0.0.1
        AAAA    ::1
www     A       192.168.0.188
www     A       192.168.0.189

[root@dns named]# named-checkzone sc.com /var/named/sc.com.zone
zone sc.com/IN: loaded serial 0
OK
[root@dns named]# service named restart
(8) Point every machine's resolver at the DNS server we just built
cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 192.168.0.20
(9) Use host to check that name resolution works
[root@dns named]# host www.sc.com
www.sc.com has address 192.168.0.188
www.sc.com has address 192.168.0.189
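To watch the answer order rotate between the two VIPs (a minimal sketch; the rotation behaviour depends on bind's default rrset-order):

[root@dns named]# for i in $(seq 4); do dig +short www.sc.com @192.168.0.20 | head -1; done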
Part 6. Deploy Prometheus + grafana to monitor the performance of the whole web cluster
1. Deploy the Prometheus server
(1) Download the release tarball
yum install wget -y
wget https://github.com/prometheus/prometheus/releases/download/v2.34.0/prometheus-2.34.0.linux-amd64.tar.gz
(2) Unpack it
[root@Prometheus prom]# tar xf prometheus-2.34.0.linux-amd64.tar.gz
[root@Prometheus prom]# ls
prometheus-2.34.0.linux-amd64 prometheus-2.34.0.linux-amd64.tar.gz
[root@Prometheus prom]# cd prometheus-2.34.0.linux-amd64
[root@Prometheus prometheus-2.34.0.linux-amd64]# ls
console_libraries consoles LICENSE NOTICE prometheus prometheus.yml promtool
(3) Update the PATH variable (in this setup the prometheus binary, promtool and prometheus.yml sit directly under /prom, as the output below shows)
[root@Prometheus prom]# PATH=/prom:$PATH
[root@Prometheus prom]# which prometheus
/prom/prometheus
(4) Start Prometheus
[root@Prometheus prom]# nohup prometheus --config.file=/prom/prometheus.yml &
[1] 54097
[root@Prometheus prom]# nohup: ignoring input and appending output to 'nohup.out'
[root@Prometheus prom]# netstat -anplut|grep prom
tcp6 0 0 :::9090 :::* LISTEN 54097/prometheus
tcp6 0 0 ::1:9090 ::1:40076 ESTABLISHED 54097/prometheus
tcp6 0 0 ::1:40076 ::1:9090 ESTABLISHED 54097/prometheus
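Running Prometheus under nohup does not survive a reboot. A minimal sketch of a systemd unit for it (the unit name and paths are assumptions that follow the layout above):

cat > /usr/lib/systemd/system/prometheus.service <<'EOF'
[Unit]
Description=Prometheus server
After=network.target

[Service]
ExecStart=/prom/prometheus --config.file=/prom/prometheus.yml
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable prometheus --now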
2. Install node_exporter on the monitored hosts
[root@web-1 ~]# mkdir -p /node_exporter
[root@web-1 ~]# cd /node_exporter/
[root@web-1 node_exporter]# ls
node_exporter-1.4.0-rc.0.linux-amd64.tar.gz
[root@web-1 node_exporter]# tar xf node_exporter-1.4.0-rc.0.linux-amd64.tar.gz
[root@web-1 node_exporter]# ls
node_exporter-1.4.0-rc.0.linux-amd64.tar.gz node_exporter-1.4.0-rc.0.linux-amd64
[root@web-1 node_exporter]# cd node_exporter-1.4.0-rc.0.linux-amd64
[root@web-1 node_exporter-1.4.0-rc.0.linux-amd64]# PATH=/node_exporter/node_exporter-1.4.0-rc.0.linux-amd64:$PATH
[root@web-1 node_exporter-1.4.0-rc.0.linux-amd64]# which node_exporter
/node_exporter/node_exporter-1.4.0-rc.0.linux-amd64/node_exporter
[root@web-1 node_exporter-1.4.0-rc.0.linux-amd64]# node_exporter --help    # show the usage help
[root@web-1 node_exporter-1.4.0-rc.0.linux-amd64]# nohup node_exporter --web.listen-address='0.0.0.0:9100' &
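A quick check that the exporter is answering (a minimal sketch):

[root@web-1 ~]# curl -s http://127.0.0.1:9100/metrics | head -5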
3. Add the monitored hosts on the Prometheus server
[root@Prometheus prom]# cat prometheus.yml    # add the servers that need to be monitored
  - job_name: web-1
    scrape_interval: 5s
    static_configs:
      - targets:
        - 192.168.0.11:9100
  - job_name: web-2
    scrape_interval: 5s
    static_configs:
      - targets:
        - 192.168.0.12:9100
  - job_name: web-3
    scrape_interval: 5s
    static_configs:
      - targets:
        - 192.168.0.13:9100
  - job_name: LB-1
    scrape_interval: 5s
    static_configs:
      - targets:
        - 192.168.0.100:9100
  - job_name: LB-2
    scrape_interval: 5s
    static_configs:
      - targets:
        - 192.168.0.200:9100
  - job_name: nfs
    scrape_interval: 5s
    static_configs:
      - targets:
        - 192.168.0.21:9100
  - job_name: dns
    scrape_interval: 5s
    static_configs:
      - targets:
        - 192.168.0.20:9100
  - job_name: ansible
    scrape_interval: 5s
    static_configs:
      - targets:
        - 192.168.0.23:9100
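After editing, the configuration can be checked and the running Prometheus told to reload it (a minimal sketch; promtool ships in the same tarball):

[root@Prometheus prom]# promtool check config /prom/prometheus.yml
[root@Prometheus prom]# kill -HUP $(pidof prometheus)    # SIGHUP makes Prometheus reload its configuration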
4. Install grafana
yum install -y https://dl.grafana.com/enterprise/release/grafana-enterprise-10.0.0-1.x86_64.rpm
# Make systemd pick up the grafana service unit
[root@Prometheus prom]# systemctl daemon-reload
# Start grafana and enable it at boot
[root@Prometheus prom]# systemctl start grafana-server && systemctl enable grafana-server
Part 7. Install keepalived on the load balancers to make them highly available and avoid a single point of failure.
1. Install keepalived
(1) Install keepalived on both load balancers
[root@LB-1 conf]# yum install keepalived -y
[root@LB-2 conf]# yum install keepalived -y
(2) Edit the configuration file
[root@LB-1 conf]# cd /etc/keepalived/
[root@LB-1 keepalived]# ls
keepalived.conf
[root@LB-1 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 58
    priority 120
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.188
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface ens33
    virtual_router_id 59
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.189
    }
}

[root@LB-2 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 58
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.188
    }
}

vrrp_instance VI_2 {
    state MASTER
    interface ens33
    virtual_router_id 59
    priority 120
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.189
    }
}

[root@LB-1 keepalived]# service keepalived restart
Redirecting to /bin/systemctl restart keepalived.service
[root@LB-2 keepalived]# service keepalived restart
Redirecting to /bin/systemctl restart keepalived.service
[root@LB-2 keepalived]# ps aux|grep keepa
root 1708 0.0 0.0 123020 2032 ? Ss 16:14 0:00 /usr/sbin/keepalived -D
root 1709 0.0 0.1 133992 7892 ? S 16:14 0:00 /usr/sbin/keepalived -D
root 1712 0.0 0.1 133860 6160 ? S 16:14 0:00 /usr/sbin/keepalived -D
root 1719 0.0 0.0 112832 2392 pts/0 S+ 16:14 0:00 grep --color=auto keepa
Check the VIPs
[root@LB-1 keepalived]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:37:fb:39 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.100/24 brd 192.168.0.255 scope global noprefixroute dynamic ens33
       valid_lft 1075sec preferred_lft 1075sec
    inet 192.168.0.188/32 scope global ens33
       valid_lft forever preferred_lft forever
[root@LB-2 keepalived]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:37:fb:39 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.200/24 brd 192.168.0.255 scope global noprefixroute dynamic ens33
       valid_lft 1075sec preferred_lft 1075sec
    inet 192.168.0.189/32 scope global ens33
       valid_lft forever preferred_lft forever
2. Monitor nginx
If the nginx process on a load balancer fails (for example, nginx is not running), access to the web cluster breaks even though the VIP is still held.
Approach: when the nginx process is detected as down, lower the priority by 30 so this node gives up its MASTER role to the other machine, or stop the keepalived service outright.
Step 1: write the scripts
[root@LB-1 web]# pwd
/web
[root@LB-1 web]# ls
check_nginx.sh halt_keepalived.sh
[root@LB-1 web]# cat check_nginx.sh
#!/bin/bash
# Check whether nginx is running
if /usr/sbin/pidof nginx ;then
    exit 0
else
    exit 1
fi
[root@LB-1 web]# chmod +x check_nginx.sh
[root@LB-1 web]# cat halt_keepalived.sh
#!/bin/bash
service keepalived stop
[root@LB-1 web]# chmod +x halt_keepalived.sh
Step 2: define the monitoring script in keepalived
# Define the monitoring script chk_nginx
vrrp_script chk_nginx {
    # If /web/check_nginx.sh exits with 0, nothing happens; only when the script
    # fails (non-zero exit code) is the priority reduced by 30
    script "/web/check_nginx.sh"
    interval 1
    weight -30
}
[root@LB-1 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
# Define the monitoring script chk_nginx
vrrp_script chk_nginx {
# If /web/check_nginx.sh exits with 0, nothing happens; only when the script fails (non-zero exit code) is the priority reduced by 30
script "/web/check_nginx.sh"
interval 1
weight -30
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 58
    priority 120
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.188
    }
    # Invoke the monitoring script
    track_script {
        chk_nginx
    }
    # When this node becomes BACKUP, run the script below immediately
    notify_backup "/web/halt_keepalived.sh"
}

vrrp_instance VI_2 {
    state BACKUP
    interface ens33
    virtual_router_id 59
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.189
    }
}
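A simple failover test (a minimal sketch): stop nginx on LB-1, confirm that the VIP 192.168.0.188 moves to LB-2, then recover.

[root@LB-1 ~]# /usr/local/scnginx99/sbin/nginx -s stop    # simulate an nginx failure
[root@LB-2 ~]# ip add | grep 192.168.0.188                # the VIP should now appear on LB-2
[root@LB-1 ~]# /usr/local/scnginx99/sbin/nginx            # bring nginx back
[root@LB-1 ~]# service keepalived start                   # needed because notify_backup stopped keepalived on LB-1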
Part 8. Stress-test the web cluster with ab.
(1) Install ab
[root@scmaster ~]# yum install httpd-tools -y    # ab is shipped in the httpd-tools package
(2) Run the stress test
[root@scmaster ~]# ab -c 1000 -n 20000 http://192.168.0.100/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.0.100 (be patient)
Completed 2000 requests
Completed 4000 requests
Completed 6000 requests
Completed 8000 requests
Completed 10000 requests
Completed 12000 requests
Completed 14000 requests
Completed 16000 requests
Completed 18000 requests
Completed 20000 requests
Finished 20000 requests

Server Software: nginx/1.25.2
Server Hostname: 192.168.0.100
Server Port: 80

Document Path: /
Document Length: 620 bytes

Concurrency Level: 1000
Time taken for tests: 8.049 seconds
Complete requests: 20000
Failed requests: 2535(Connect: 0, Receive: 0, Length: 2535, Exceptions: 0)
Write errors: 0
Non-2xx responses: 35
Total transferred: 17039510 bytes
HTML transferred: 12381995 bytes
Requests per second: 2484.72 [#/sec] (mean) # the throughput measured in this test
Time per request: 402.460 [ms] (mean)
Time per request: 0.402 [ms] (mean, across all concurrent requests)
Transfer rate: 2067.30 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect: 0 198 482.9 46 7035
Processing: 20 165 156.7 118 1822
Waiting: 3 151 152.9 104 1797
Total: 31 363 502.2 183 7194

Percentage of the requests served within a certain time (ms)
  50%    183
  66%    254
  75%    315
  80%    395
  90%   1124
  95%   1209
  98%   1498
  99%   3120
 100%   7194
Part 9. Try to optimize the whole web cluster for better performance (kernel and nginx parameter tuning)
1. Kernel parameter tuning
[root@web-1 ~]# ulimit -n 100001
[root@web-1 ~]# ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 14826
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 100001
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 14826
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
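ulimit only affects the current shell. A minimal sketch for making the open-file limit persistent (the value mirrors the command above):

cat >> /etc/security/limits.conf <<'EOF'
*    soft    nofile    100001
*    hard    nofile    100001
EOF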
2. nginx parameter tuning
[root@web-1 conf]# cat nginx.conf
# Match the number of CPU cores (this machine has 2 cores)
worker_processes 2;
# How many concurrent connections each worker may handle
events {
    worker_connections 2048;
}
http {
    # nginx closes idle keepalive connections after 65 seconds; adjust to your needs
    keepalive_timeout 65;
}
3. Kernel parameters from a cloud server, for reference
[root@aliyun ~]# sysctl -p
vm.swappiness = 0
kernel.sysrq = 1
net.ipv4.neigh.default.gc_stale_time = 120
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_slow_start_after_idle = 0
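To apply a similar profile on the cluster nodes (a minimal sketch; the values above come from the cloud host and should be reviewed before copying them):

# Append the chosen parameters to /etc/sysctl.conf, then load them
echo 'net.ipv4.tcp_syncookies = 1' >> /etc/sysctl.conf
echo 'net.ipv4.tcp_max_syn_backlog = 1024' >> /etc/sysctl.conf
sysctl -p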