The "network" we usually talk about runs on top of the TCP/IP protocol suite; HTTP is just one protocol within that suite.
A request travels: source address:source port --> NIC --> Internet (or LAN) --> target server's NIC --> inside the target server.
To establish a connection with the service being accessed, the client consumes a local port from the range 1024~65535, and every connection occupies one such port. In practice, though, the initiating side can only draw on roughly 16k ports by default.
netstat -ano|find /i /c "TCP"
/i: case-insensitive match
/c: count the matching lines
netstat -ano|grep "tcp"|wc -l
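On modern Linux hosts the same count can also be taken with ss from iproute2 (a rough sketch; note that wc -l includes the one-line header):
ss -tan | wc -l   # number of TCP sockets, plus the header line
ss -s             # summary of socket counts per protocol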
On Windows these limits can be raised under the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters:
MaxUserPort: decimal 65534
TcpTimedWaitDelay: decimal 30 (default is 240 s)
KeepAlive: enabled
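A sketch of setting the first two values from an elevated Windows command prompt (reg.exe ships with Windows; a reboot is typically required before TCP/IP parameter changes take effect):
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v MaxUserPort /t REG_DWORD /d 65534 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v TcpTimedWaitDelay /t REG_DWORD /d 30 /f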
The NIC's job is to convert the computer's data into data that can be carried over the network, and the NIC's speed determines how fast that conversion happens. Most NICs today are gigabit NICs (1 Gbps).
Use ping to check network latency:
(base) 192:~ zhongxin$ ping www.baidu.com
PING www.a.shifen.com (180.101.49.12): 56 data bytes
64 bytes from 180.101.49.12: icmp_seq=0 ttl=50 time=14.450 ms
64 bytes from 180.101.49.12: icmp_seq=1 ttl=50 time=19.119 ms
64 bytes from 180.101.49.12: icmp_seq=2 ttl=50 time=14.121 ms
64 bytes from 180.101.49.12: icmp_seq=3 ttl=50 time=14.018 ms
64 bytes from 180.101.49.12: icmp_seq=4 ttl=50 time=14.843 ms
^C
--- www.a.shifen.com ping statistics ---
5 packets transmitted, 5 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 14.018/15.310/19.119/1.926 ms
(base) 192:~ zhongxin$ ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.051 ms
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.416 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.080 ms
64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.063 ms
^C
--- 127.0.0.1 ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.051/0.152/0.416/0.152 ms
If the round-trip times are long, or packets are being lost, network latency is significant. During performance testing, the last two columns of the aggregate report (received and sent throughput in KB/sec) show whether the network is a bottleneck; together with ping, these two indicators tell you whether network latency is a problem.
Check the NIC's negotiated speed:
$ ethtool <your-nic> | grep "Speed"
sysctl configures kernel parameters at runtime; these parameters live under the /proc/sys directory.
When Linux boots, it reads the following configuration files in order:
/etc/sysctl.d/*.conf
/run/sysctl.d/*.conf
/usr/lib/sysctl.d/*.conf
Usage
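A minimal sketch of the common sysctl invocations (the parameter shown, net.core.somaxconn, is only an example):
sysctl -a                          # list every kernel parameter
sysctl net.core.somaxconn          # read one parameter
sysctl -w net.core.somaxconn=1024  # change it at runtime (lost on reboot)
sysctl -p                          # reload /etc/sysctl.conf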

Scenario parameters
ulimit controls the resources available to the shell and the processes it starts.
ulimit -a shows all the current limits:
root@zx:~# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 15522
max locked memory       (kbytes, -l) 65536
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 15522
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
root@zx:~# 
A limit can be changed temporarily with: ulimit <option> <value>
View the limits of a running process: cat /proc/PID/limits
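A small sketch of raising the open-files limit for the current shell only and confirming it (65535 is just an example value; raising it above the hard limit requires privileges):
ulimit -n              # current soft limit on open files
ulimit -n 65535        # raise it for this shell session only
cat /proc/$$/limits    # $$ is the current shell's PID; lists all of its limits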
Check the maximum number of open files the system allows: cat /proc/sys/fs/file-max
root@zx:~# cat /proc/sys/fs/file-max
9223372036854775807
lsof -p PID|wc -l   # number of files opened by the process with this PID
root@zx:~# lsof |wc -l # total number of files currently open on the system
7213
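To check a specific service, resolve its PID first (a sketch; nginx is only an example process name):
lsof -p "$(pgrep -o -x nginx)" | wc -l   # files opened by the oldest (master) nginx process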
If a service handles a high volume of concurrent requests and the total number of files it opens exceeds what the system allows, it will start reporting errors about being unable to open files.
How to handle it:
lsof |wc -l                  # count the files currently open across the whole system
cat /proc/sys/fs/file-max    # the maximum number of open files the system allows
lsof -p PID|wc -l            # the number of files a particular process currently has open
Example: toggling whether the host answers ping (ICMP echo) with sysctl -w; compare the two ping transcripts below:
/sbin/sysctl -w net.ipv4.icmp_echo_ignore_all=1
/sbin/sysctl -w net.ipv4.route.flush=1
/sbin/sysctl -p
(base) 192:~ zhongxin$ ping 123.56.13.233
PING 123.56.13.233 (123.56.13.233): 56 data bytes
64 bytes from 123.56.13.233: icmp_seq=0 ttl=50 time=35.825 ms
64 bytes from 123.56.13.233: icmp_seq=1 ttl=50 time=35.241 ms
64 bytes from 123.56.13.233: icmp_seq=2 ttl=50 time=41.471 ms
64 bytes from 123.56.13.233: icmp_seq=3 ttl=50 time=34.870 ms
64 bytes from 123.56.13.233: icmp_seq=4 ttl=50 time=64.342 ms
64 bytes from 123.56.13.233: icmp_seq=5 ttl=50 time=51.517 ms
64 bytes from 123.56.13.233: icmp_seq=6 ttl=50 time=35.040 ms
64 bytes from 123.56.13.233: icmp_seq=7 ttl=50 time=35.376 ms
^C
--- 123.56.13.233 ping statistics ---
8 packets transmitted, 8 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 34.870/41.710/64.342/10.101 ms
(base) 192:~ zhongxin$ ping 123.56.13.233
PING 123.56.13.233 (123.56.13.233): 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
Request timeout for icmp_seq 3
Request timeout for icmp_seq 4
Request timeout for icmp_seq 5
^C
--- 123.56.13.233 ping statistics ---
7 packets transmitted, 0 packets received, 100.0% packet loss
(base) 192:~ zhongxin$
Turn ICMP echo replies back on:
/sbin/sysctl -w net.ipv4.icmp_echo_ignore_all=0
/sbin/sysctl -w net.ipv4.route.flush=1
/sbin/sysctl -p
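To make a kernel parameter survive a reboot, write it into one of the sysctl configuration directories listed earlier and reload (a sketch; the file name local.conf is arbitrary):
echo "net.ipv4.icmp_echo_ignore_all = 0" >> /etc/sysctl.d/local.conf
sysctl --system   # reload all sysctl configuration files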
Per-user resource limits are made permanent in /etc/security/limits.conf:
root@zx:/# cat /etc/security/limits.conf 
# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#<domain>        <type>  <item>  <value>
#
#Where:
#<domain> can be:
#        - a user name
#        - a group name, with @group syntax
#        - the wildcard *, for default entry
#        - the wildcard %, can be also used with %group syntax,
#                 for maxlogin limit
#        - NOTE: group and wildcard limits are not applied to root.
#          To apply a limit to the root user, <domain> must be
#          the literal username root.
#
#<type> can have the two values:
#        - "soft" for enforcing the soft limits
#        - "hard" for enforcing hard limits
#
#<item> can be one of the following:
#        - core - limits the core file size (KB)
#        - data - max data size (KB)
#        - fsize - maximum filesize (KB)
#        - memlock - max locked-in-memory address space (KB)
#        - nofile - max number of open file descriptors
#        - rss - max resident set size (KB)
#        - stack - max stack size (KB)
#        - cpu - max CPU time (MIN)
#        - nproc - max number of processes
#        - as - address space limit (KB)
#        - maxlogins - max number of logins for this user
#        - maxsyslogins - max number of logins on the system
#        - priority - the priority to run user process with
#        - locks - max number of file locks the user can hold
#        - sigpending - max number of pending signals
#        - msgqueue - max memory used by POSIX message queues (bytes)
#        - nice - max nice priority allowed to raise to values: [-20, 19]
#        - rtprio - max realtime priority
#        - chroot - change root to directory (Debian-specific)
#
#<domain>      <type>  <item>         <value>
#
#*               soft    core            0
#root            hard    core            100000
#*               hard    rss             10000
#@student        hard    nproc           20
#@faculty        soft    nproc           20
#@faculty        hard    nproc           50
#ftp             hard    nproc           0
#ftp             -       chroot          /ftp
#@student        -       maxlogins       4
# End of file
root soft nofile 65535
root hard nofile 65535
* soft nofile 65535
* hard nofile 65535
root@zx:/#
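To verify that the nofile entries above are picked up, start a fresh login session for the account and check (a sketch; appuser is a placeholder account name, and whether su applies limits.conf depends on the PAM configuration, so a full re-login is the surest check):
su - appuser -c 'ulimit -n'   # expected to print 65535 after the change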

Parameters
A change made here is permanent, but it only takes effect for new login sessions (log out and back in, or reboot).
Sometimes the system-wide limit is very large and the total number of files open on the system is far below it, yet a performance test still fails with errors about not being able to open more files. In that case the cause is the per-user open-file limit of the account the service was deployed under. If the problem can be fixed by editing these system configuration files, it is a performance issue the tester can resolve through tuning; the same change then needs to be applied to the production servers as well.
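To confirm which limit is actually biting, check the limit the running service process was started with (PID is the service's process id):
grep -i "max open files" /proc/PID/limits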
The Content.xml file contains the connection-pool configuration; the default is 200.
MySQL's configuration file /etc/my.cnf: the default connection limit is 151, which is roughly the maximum number of open files divided by 5 (1024 / 5).
This is usually not a problem.
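A sketch of checking the effective values on a running MySQL server (assumes the mysql client and valid credentials):
mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connections';"
mysql -u root -p -e "SHOW VARIABLES LIKE 'open_files_limit';"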
USE method (Utilization, Saturation, Errors): for every resource, look at its utilization, its saturation, and its errors.
When errors appear during a performance test, first check whether the test script itself is at fault, then look for problems on the server side: rule out hardware first, then configuration (OS and services), and finally software/service performance issues.
Utilization: the percentage of time, within a given interval, that the resource spends servicing work.
Saturation: the degree to which the resource cannot take on additional work (work starts to queue).
Errors: the number of error events.
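As a rough illustration of applying USE to the CPU with standard Linux tools (mpstat requires the sysstat package; the intervals are just examples):
mpstat 1 5                       # utilization: per-CPU busy time over five 1-second samples
vmstat 1 5                       # saturation: the "r" column is the run-queue length
dmesg | grep -iE "error|fail"    # errors: kernel-reported error events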