I have a DB error log file that grows continuously. I want to set up error monitoring that checks it every 5 minutes. The problem is that I don't want to scan the whole file on every run (when the monitoring cron job fires), because it may become very large over time, and rescanning a large file every 5 minutes would waste resources. So I only want to scan the lines that were appended to the log during the last 5-minute interval. Every error recorded in the log is prefixed with a timestamp, like this:
180418 23:45:00 [ERROR] mysql got signal 11.
So I want to search for the ERROR pattern only in the lines added in the last 5 minutes (not the whole file), and write the output to another file.
Please help. Feel free to ask if my question needs any clarification.
I am on RHEL 7 and am trying to implement the monitoring above as a bash shell script.
Posted on 2018-04-20 00:30:58
Serializing a byte offset
This resumes where the previous run stopped. If you run it every 5 minutes, it scans roughly 5 minutes of data.
Note that this implementation can deliberately scan data added during an invocation twice. That is slightly sloppy, but scanning overlapping data twice is much safer than never reading it at all, which is a risk you run if you rely on cron
to start your program exactly on schedule (likewise, on a busy system, sleep
can overrun the requested time).
#!/usr/bin/env bash
file=$1; shift # first input: filename
grep_opts=( "$@" ) # remaining inputs: grep options
dir=$(dirname -- "$file") # extract directory name to use for offset storage
basename=${file##*/} # pick up file name w/o directory
size_file="$dir/.$basename.size" # generate filename to use to store offset
if [[ -s $size_file ]]; then # ...if we already have a file with an offset...
old_size=$(<"$size_file") # ...read it from that file
else
old_size=0 # ...otherwise start at the front.
fi
new_size=$(stat --format=%s -- "$file") || exit # Figure out current size
if (( new_size < old_size )); then
old_size=0 # file was truncated, so we can't trust old_size
elif (( new_size == old_size )); then
exit 0 # no new contents, so no point in trying to search
fi
# read starting at old_size and grep only that content
dd iflag=skip_bytes skip="$old_size" if="$file" | grep "${grep_opts[@]}"
pipe_status=( "${PIPESTATUS[@]}" ); grep_retval=${pipe_status[1]} # capture PIPESTATUS before the next command resets it
# if the read failed, don't store an updated offset
(( pipe_status[0] != 0 )) && exit 1
# create a new tempfile to store offset in
tempfile=$(mktemp -- "${size_file}.XXXXXX") || exit
# write to that temporary file...
printf '%s\n' "$new_size" > "$tempfile" || { rm -f "$tempfile"; exit 1; }
# ...and if that write succeeded, overwrite the last place where we serialized output.
mv -- "$tempfile" "$size_file" || exit
exit "$grep_retval"
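To make the mechanics concrete, here is a minimal self-contained sketch of the same byte-offset idea (the temp paths and the `scan_new` helper name are made up for this demo, and error handling is trimmed):

```shell
#!/usr/bin/env bash
# Demo of the byte-offset technique: each scan reads only the bytes
# appended since the previous scan, never the whole file.
workdir=$(mktemp -d)
log="$workdir/db.log"
offset_file="$workdir/.db.log.size"

scan_new() {   # grep ERROR, but only in bytes added since the last call
  local old=0 new
  [[ -s $offset_file ]] && old=$(<"$offset_file")
  new=$(stat --format=%s -- "$log")
  (( new < old )) && old=0            # file was truncated; rescan from 0
  dd iflag=skip_bytes skip="$old" if="$log" 2>/dev/null | grep ERROR
  printf '%s\n' "$new" > "$offset_file"
}

printf '%s\n' '180418 23:45:00 [ERROR] mysql got signal 11.' >> "$log"
first_run=$(scan_new)    # sees the first error line

printf '%s\n' '180418 23:50:00 [Note] all fine' \
              '180418 23:52:00 [ERROR] disk full' >> "$log"
second_run=$(scan_new)   # sees only lines appended after the first scan

echo "$first_run"
echo "$second_run"
rm -rf "$workdir"
```

The second call skips everything the first call already processed, which is exactly what makes the 5-minute cron run cheap on a large file.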
Alternate approach: bisecting for timestamps
Note that this can miss content if you rely on cron
invoking your code exactly every 5 minutes on the dot, so storing a byte offset may be more accurate.
#!/usr/bin/env bash
file=$1; shift
start_date=$(date -d 'now - 5 minutes' '+%y%m%d %H:%M:%S')
byte_offset=$(bsearch --byte-offset "$file" "$start_date") # bsearch is a third-party binary-search tool, not in base RHEL
dd iflag=skip_bytes skip="$byte_offset" if="$file" | grep "$@"
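If the third-party bsearch tool is not available, a dependency-free fallback works because the log's "yymmdd HH:MM:SS" prefix (15 characters) sorts lexically, so awk can keep lines at-or-after a cutoff with a plain string compare. This is a linear whole-file scan, not a binary search, so it trades efficiency for having no extra dependency; `since` is a name made up for this sketch:

```shell
#!/usr/bin/env bash
# Timestamp filter without bsearch: the leading "yymmdd HH:MM:SS"
# (15 chars) sorts lexically, so a string comparison in awk suffices.
since() {   # since CUTOFF FILE -- print lines timestamped >= CUTOFF
  awk -v start="$1" 'substr($0, 1, 15) >= start' "$2"
}

log=$(mktemp)
printf '%s\n' '180418 23:40:00 [Note] old entry' \
              '180418 23:45:00 [ERROR] mysql got signal 11.' > "$log"
recent=$(since '180418 23:42:00' "$log" | grep ERROR)
echo "$recent"
rm -f "$log"
```

In the monitor you would call it as `since "$(date -d 'now - 5 minutes' '+%y%m%d %H:%M:%S')" "$file" | grep ERROR`.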
Posted on 2018-04-20 03:06:36
Another approach could be something like this:
DB_FILE="FULL_PATH_TO_YOUR_DB_FILE"
current_db_size=$(du -b "$DB_FILE" | cut -f 1)
if [[ ! -a SOME_PATH_OF_YOUR_CHOICE/last_size_db_file ]] ; then
    tail --bytes "$current_db_size" "$DB_FILE" > SOME_PATH_OF_YOUR_CHOICE/log-file_$(date +%Y-%m-%d_%H-%M-%S)
else
    if [[ $(cat SOME_PATH_OF_YOUR_CHOICE/last_size_db_file) -gt $current_db_size ]] ; then
        previously_read_bytes=0    # the file shrank (rotated/truncated), so start over
    else
        previously_read_bytes=$(cat SOME_PATH_OF_YOUR_CHOICE/last_size_db_file)
    fi
    new_bytes=$((current_db_size - previously_read_bytes))
    tail --bytes "$new_bytes" "$DB_FILE" > SOME_PATH_OF_YOUR_CHOICE/log-file_$(date +%Y-%m-%d_%H-%M-%S)
fi
printf '%s' "$current_db_size" > SOME_PATH_OF_YOUR_CHOICE/last_size_db_file
This prints to SOME_PATH_OF_YOUR_CHOICE/log-file_$(date +%Y-%m-%d_%H-%M-%S)
all the bytes of DB_FILE
that had not been printed before.
Note that $(date +%Y-%m-%d_%H-%M-%S)
will be the full current date at the moment each log file is created.
You can save this as a script and use cron
to run it every five minutes, like so:
*/5 * * * * PATH_TO_YOUR_SCRIPT
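The key detail above is that `tail --bytes N` prints the *last* N bytes of the file, so subtracting the size recorded on the previous run yields exactly the freshly appended data. A tiny self-contained check (temp file names are arbitrary):

```shell
#!/usr/bin/env bash
# Verify that "current size - previous size" fed to tail --bytes
# recovers exactly the data appended between the two measurements.
f=$(mktemp)
printf 'old line\n' > "$f"
prev=$(du -b "$f" | cut -f 1)        # size recorded by the previous run
printf 'new line\n' >> "$f"
cur=$(du -b "$f" | cut -f 1)         # size at the current run
new_part=$(tail --bytes $((cur - prev)) "$f")  # last (cur - prev) bytes
echo "$new_part"
rm -f "$f"
```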
Posted on 2018-04-20 08:04:49
Here is my approach:
First, read the whole log as written so far, once. When the end is reached, collect new lines for a given time span (9 seconds in my example, for faster testing, while my dummy server appends to the log file every 3 seconds).
After each span, echo the cache, clear the cache (an array arr
), loop, and sleep for a while, so the process doesn't eat all the CPU time.
First, my dummy logfile writer:
#!/bin/bash
#
# dummy logfile writer
#
while true
do
    s=$(( $(date +%s) % 3600))
    echo $s server msg
    sleep 3
done >> seconds.log
Start it with ./seconds-out.sh &
.
Now the more complicated part:
#!/bin/bash
#
# consume a logfile as written so far. Then, collect every new line
# and show it in an interval of $interval
#
interval=9 # 9 seconds
#
printf -v secnow '%(%s)T' -1
start=$(( secnow % (3600*24*365) ))
declare -a arr
init=0
while true
do
    read -r line
    printf -v secnow '%(%s)T' -1
    now=$(( secnow % (3600*24*365) ))
    # consume every line created in the past
    if (( ! init ))
    then
        # assume reading a line might not take longer than a second (rounded to whole seconds)
        while (( ${#line} > 0 && (now - start) < 2 ))
        do
            read -r line
            start=$now
            echo -n "." # for debugging purpose, remove
            printf -v secnow '%(%s)T' -1
            now=$(( secnow % (3600*24*365) ))
        done
        init=1
        echo "init=$init" # for debugging purpose, remove
    # collect new lines, display them every $interval seconds
    else
        if (( ${#line} > 0 ))
        then
            echo -n "-" # for debugging purpose, remove
            arr+=("read: $line \n")
        fi
        if (( (now - start) > interval ))
        then
            echo -e "${arr[@]}"
            arr=()
            start=$now
        fi
    fi
    sleep .1
done < seconds.log
With the logfile generator appending every 3 seconds, let it run for a while, then start the read-seconds.sh script with the debug output active:
./read-seconds.sh
.......................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................init=1
---read: 1688 server msg
read: 1691 server msg
read: 1694 server msg
---read: 1697 server msg
read: 1700 server msg
read: 1703 server msg
----read: 1706 server msg
read: 1709 server msg
read: 1712 server msg
read: 1715 server msg
^C
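A side note on the script's clock reads: `printf '%(fmt)T'` is a bash (>= 4.2, so available on RHEL 7) builtin that formats the time via strftime without forking `date`; the `-1` argument means "now". A quick sanity check against `date +%s`:

```shell
#!/bin/bash
# printf %(fmt)T gets the epoch time from the shell itself,
# so the hot loop above avoids spawning a date process per iteration.
printf -v secnow '%(%s)T' -1   # -1 == current time
via_date=$(date +%s)
echo "$secnow"
(( via_date - secnow <= 1 )) && ok=yes || ok=no   # allow 1s of skew
echo "$ok"
```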
https://stackoverflow.com/questions/49925113