System info: Ubuntu 20.04, software RAID 5 (a third HDD was added and the array converted from RAID 1). The filesystem is Ext4 on LUKS.
After a reboot I noticed the system slowing down, so I checked the array status via /proc/mdstat, which showed the following:
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sdb[2] sdc[0] sdd[1]
7813772928 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
[==>..................] check = 14.3% (558996536/3906886464) finish=322.9min speed=172777K/sec
bitmap: 0/30 pages [0KB], 65536KB chunk
unused devices: <none>
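For reference, the running operation can also be inspected, and interrupted, through md's sysfs interface (a minimal sketch, assuming the array is md0):
# What the array is doing right now: check, resync, idle, ...
cat /sys/block/md0/md/sync_action
# Progress of the current operation, in sectors
cat /sys/block/md0/md/sync_completed
# Interrupt the check; this is harmless and the check can be restarted later
echo idle | sudo tee /sys/block/md0/md/sync_action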
So a check is running again, but I do not know why; no cron job is set up. Below are the log entries that have appeared after every reboot since the system was converted to RAID 5, though I am not sure whether it re-checks every time:
Jan 3 14:34:47 kernel: [ 3.473942] md/raid:md0: device sdb operational as raid disk 2
Jan 3 14:34:47 kernel: [ 3.475170] md/raid:md0: device sdc operational as raid disk 0
Jan 3 14:34:47 kernel: [ 3.476402] md/raid:md0: device sdd operational as raid disk 1
Jan 3 14:34:47 kernel: [ 3.478290] md/raid:md0: raid level 5 active with 3 out of 3 devices, algorithm 2
Jan 3 14:34:47 kernel: [ 3.520677] md0: detected capacity change from 0 to 8001303478272
mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Wed Nov 25 23:06:18 2020
Raid Level : raid5
Array Size : 7813772928 (7451.79 GiB 8001.30 GB)
Used Dev Size : 3906886464 (3725.90 GiB 4000.65 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sun Jan 3 16:17:28 2021
State : clean, checking
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
Consistency Policy : bitmap
Check Status : 16% complete
Name : ubuntu-server:0
UUID :
Events : 67928
Number Major Minor RaidDevice State
0 8 32 0 active sync /dev/sdc
1 8 48 1 active sync /dev/sdd
2 8 16 2 active sync /dev/sdb
Is this normal behavior?
Thanks for any input.
Posted on 2021-05-02 07:25:53
Update 18/06/2021:
for svc in mdcheck_start.timer mdcheck_continue.timer; do systemctl stop ${svc}; systemctl disable ${svc}; done
Taken from: https://a20.net/bert/2020/11/02/disable-periodic-raid-check-on-ubuntu-20-04-systemd/
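Whether the timers are really gone can be verified afterwards (assuming the stock unit names):
# Should show no pending mdcheck triggers once both timers are disabled
systemctl list-timers --all 'mdcheck_*'
systemctl is-enabled mdcheck_start.timer mdcheck_continue.timer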
Update 04/05/2021 start. My earlier attempt does not seem to have helped: although /etc/default/mdadm was changed, the check ran again anyway.
I found some further suspects to investigate (a way to locate these units is sketched after the list):
mdcheck_start.timer
mdcheck_continue.service
mdcheck_continue.timer
/etc/systemd/system/mdmonitor.service.wants/mdcheck_start.timer
/etc/systemd/system/mdmonitor.service.wants/mdcheck_continue.timer
/etc/systemd/system/mdmonitor.service.wants/mdmonitor-oneshot.timer
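One way to turn these up (a sketch; unit names as installed by the stock mdadm package on 20.04):
# List every mdadm-related unit file known to systemd
systemctl list-unit-files 'md*'
# Or look directly at what the monitor service pulls in
ls /etc/systemd/system/mdmonitor.service.wants/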
systemctl status mdcheck_start.service
● mdcheck_start.service - MD array scrubbing
Loaded: loaded (/lib/systemd/system/mdcheck_start.service; static; vendor preset: enabled)
Active: inactive (dead)
TriggeredBy: ● mdcheck_start.timer
systemctl status mdcheck_start.timer
● mdcheck_start.timer - MD array scrubbing
Loaded: loaded (/lib/systemd/system/mdcheck_start.timer; enabled; vendor preset: enabled)
Active: active (waiting) since Sun 2021-05-02 19:40:50 CEST; 1 day 14h ago
Trigger: Sun 2021-06-06 22:36:42 CEST; 1 months 3 days left
Triggers: ● mdcheck_start.service
May 02 19:40:50 xxx systemd[1]: Started MD array scrubbing.
systemctl status mdcheck_continue.service
● mdcheck_continue.service - MD array scrubbing - continuation
Loaded: loaded (/lib/systemd/system/mdcheck_continue.service; static; vendor preset: enabled)
Active: inactive (dead)
TriggeredBy: ● mdcheck_continue.timer
Condition: start condition failed at Tue 2021-05-04 06:38:39 CEST; 3h 26min ago
└─ ConditionPathExistsGlob=/var/lib/mdcheck/MD_UUID_* was not met
systemctl status mdcheck_continue.timer
● mdcheck_continue.timer - MD array scrubbing - continuation
Loaded: loaded (/lib/systemd/system/mdcheck_continue.timer; enabled; vendor preset: enabled)
Active: active (waiting) since Sun 2021-05-02 19:40:50 CEST; 1 day 14h ago
Trigger: Wed 2021-05-05 00:35:53 CEST; 14h left
Triggers: ● mdcheck_continue.service
May 02 19:40:50 xxx systemd[1]: Started MD array scrubbing - continuation.
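The failed condition above is the key to how the continuation works: mdcheck_start runs the check for a limited time and records per-array progress files under /var/lib/mdcheck, and mdcheck_continue only resumes a scrub while such a file exists. The state can be inspected by hand (a sketch, using the path from the unit's own condition):
# Progress files left behind by a paused check; while the glob matches,
# mdcheck_continue.service will resume scrubbing on its next trigger
ls -l /var/lib/mdcheck/
Removing the matching MD_UUID_* files stops a paused check from being resumed.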
sudo cat /etc/systemd/system/mdmonitor.service.wants/mdcheck_start.timer
# This file is part of mdadm.
#
# mdadm is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
[Unit]
Description=MD array scrubbing
[Timer]
OnCalendar=Sun *-*-1..7 1:00:00
RandomizedDelaySec=24h
Persistent=true
[Install]
WantedBy=mdmonitor.service
Also=mdcheck_continue.timer
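Two details in this unit matter here. OnCalendar=Sun *-*-1..7 1:00:00 matches the first Sunday of each month, RandomizedDelaySec=24h spreads the actual start across the following day, and Persistent=true makes a missed trigger fire shortly after the next boot, so a machine that was powered off on that Sunday will start its check right after rebooting, which fits the behavior described above. To reschedule the scrub instead of disabling it, a drop-in can override the calendar (a sketch; the 03:00 slot is an arbitrary example):
sudo systemctl edit mdcheck_start.timer
# ...then enter this in the drop-in editor that opens:
#   [Timer]
#   OnCalendar=
#   OnCalendar=Sun *-*-1..7 3:00:00
# The empty OnCalendar= clears the packaged schedule before setting the new one.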
sudo cat /etc/systemd/system/mdmonitor.service.wants/mdcheck_continue.timer
# This file is part of mdadm.
#
# mdadm is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
[Unit]
Description=MD array scrubbing - continuation
[Timer]
OnCalendar=daily
RandomizedDelaySec=12h
Persistent=true
[Install]
WantedBy=mdmonitor.service
sudo cat /etc/systemd/system/mdmonitor.service.wants/mdmonitor-oneshot.timer
# This file is part of mdadm.
#
# mdadm is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
[Unit]
Description=Reminder for degraded MD arrays
[Timer]
OnCalendar=daily
RandomizedDelaySec=24h
Persistent=true
[Install]
WantedBy=mdmonitor.service
Update 04/05/2021 end.
Try sudo dpkg-reconfigure mdadm.
Note that I am not sure whether the tip above will help; my RAID 5 on 20.04 had the same problem.
First I tried editing /etc/default/mdadm by hand, changing AUTOCHECK=true to AUTOCHECK=false. That did not help.
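For reference, that manual edit amounts to a one-line change (a sketch of the same edit):
# Turn off the periodic redundancy check in the Debian config file
sudo sed -i 's/^AUTOCHECK=true/AUTOCHECK=false/' /etc/default/mdadm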
Today I ran dpkg-reconfigure mdadm. The /etc/default/mdadm file now looks the same (AUTOCHECK=false), but dpkg-reconfigure mdadm also triggered an update-initramfs run. I hope this will help.
... update-initramfs: deferring update (trigger activated) ...
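The deferred update is flushed when the package triggers are processed (see the extended log below); it can also be forced by hand, assuming the running kernel is the one to update:
# Regenerate the initramfs for the current kernel so the new
# mdadm defaults are included at early boot
sudo update-initramfs -u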
Extended log:
sudo dpkg-reconfigure mdadm
update-initramfs: deferring update (trigger activated)
Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/50-curtin-settings.cfg'
Sourcing file `/etc/default/grub.d/init-select.cfg'
Generating grub configuration file ...
File descriptor 3 (pipe:[897059]) leaked on vgs invocation. Parent PID 655310: /usr/sbin/grub-probe
File descriptor 3 (pipe:[897059]) leaked on vgs invocation. Parent PID 655310: /usr/sbin/grub-probe
Found linux image: /boot/vmlinuz-5.4.0-72-generic
Found initrd image: /boot/initrd.img-5.4.0-72-generic
Found linux image: /boot/vmlinuz-5.4.0-71-generic
Found initrd image: /boot/initrd.img-5.4.0-71-generic
Found linux image: /boot/vmlinuz-5.4.0-70-generic
Found initrd image: /boot/initrd.img-5.4.0-70-generic
File descriptor 3 (pipe:[897059]) leaked on vgs invocation. Parent PID 655841: /usr/sbin/grub-probe
File descriptor 3 (pipe:[897059]) leaked on vgs invocation. Parent PID 655841: /usr/sbin/grub-probe
done
Processing triggers for initramfs-tools (0.136ubuntu6.4) ...
update-initramfs: Generating /boot/initrd.img-5.4.0-72-generic
The full /etc/default/mdadm file:
cat /etc/default/mdadm
# mdadm Debian configuration
#
# You can run 'dpkg-reconfigure mdadm' to modify the values in this file, if
# you want. You can also change the values here and changes will be preserved.
# Do note that only the values are preserved; the rest of the file is
# rewritten.
#
# AUTOCHECK:
# should mdadm run periodic redundancy checks over your arrays? See
# /etc/cron.d/mdadm.
AUTOCHECK=false
# AUTOSCAN:
# should mdadm check once a day for degraded arrays? See
# /etc/cron.daily/mdadm.
AUTOSCAN=true
# START_DAEMON:
# should mdadm start the MD monitoring daemon during boot?
START_DAEMON=true
# DAEMON_OPTIONS:
# additional options to pass to the daemon.
DAEMON_OPTIONS="--syslog"
# VERBOSE:
# if this variable is set to true, mdadm will be a little more verbose e.g.
# when creating the initramfs.
VERBOSE=false
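Note that AUTOCHECK only gates the Debian cron path (/etc/cron.d/mdadm, which calls the checkarray helper), while on 20.04 the systemd timers shown above fire independently of it; that would explain why changing this file alone did not stop the checks. A check that is already running can be cancelled with the same helper (a sketch):
# Cancel any redundancy check currently running on all md arrays
sudo /usr/share/mdadm/checkarray --cancel --all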
https://askubuntu.com/questions/1304738