I currently have a cluster of 10 worker nodes managed by Slurm, with one master node. I had previously set the cluster up successfully after some initial problems and had it working properly. All of my scripts and instructions are in my GitHub repo (https://brettchapman.github.io/Nimbus_Cluster/). I recently needed to start over to increase the hard drive space, and now I can't seem to get it installed and configured correctly no matter what I try.
slurmctld and slurmdbd install and are configured correctly (both active and running according to systemctl status), but slurmd remains in a failed/inactive state.
Here is my slurm.conf file:
# slurm.conf file generated by configurator.html.
# Put this file on all nodes of your cluster.
# See the slurm.conf man page for more information.
#
SlurmctldHost=node-0
#SlurmctldHost=
#
#DisableRootJobs=NO
#EnforcePartLimits=NO
#Epilog=
#EpilogSlurmctld=
#FirstJobId=1
#MaxJobId=999999
#GresTypes=
#GroupUpdateForce=0
#GroupUpdateTime=600
#JobFileAppend=0
#JobRequeue=1
#JobSubmitPlugins=1
#KillOnBadExit=0
#LaunchType=launch/slurm
#Licenses=foo*4,bar
#MailProg=/bin/mail
#MaxJobCount=5000
#MaxStepCount=40000
#MaxTasksPerNode=128
MpiDefault=none
#MpiParams=ports=#-#
#PluginDir=
#PlugStackConfig=
#PrivateData=jobs
ProctrackType=proctrack/cgroup
#Prolog=
#PrologFlags=
#PrologSlurmctld=
#PropagatePrioProcess=0
#PropagateResourceLimits=
#PropagateResourceLimitsExcept=
#RebootProgram=
ReturnToService=1
#SallocDefaultCommand=
SlurmctldPidFile=/var/run/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/spool/slurmd
SlurmUser=slurm
#SlurmdUser=root
#SrunEpilog=
#SrunProlog=
StateSaveLocation=/var/spool/slurm-llnl
SwitchType=switch/none
#TaskEpilog=
TaskPlugin=task/cgroup
#TaskPluginParam=
#TaskProlog=
#TopologyPlugin=topology/tree
#TmpFS=/tmp
#TrackWCKey=no
#TreeWidth=
#UnkillableStepProgram=
#UsePAM=0
#
#
# TIMERS
#BatchStartTimeout=10
#CompleteWait=0
#EpilogMsgTime=2000
#GetEnvTimeout=2
#HealthCheckInterval=0
#HealthCheckProgram=
InactiveLimit=0
KillWait=30
#MessageTimeout=10
#ResvOverRun=0
MinJobAge=300
#OverTimeLimit=0
SlurmctldTimeout=120
SlurmdTimeout=600
#UnkillableStepTimeout=60
#VSizeFactor=0
Waittime=0
#
#
# SCHEDULING
#DefMemPerCPU=0
#MaxMemPerCPU=0
#SchedulerTimeSlice=30
SchedulerType=sched/backfill
SelectType=select/cons_res
SelectTypeParameters=CR_Core
#
#
# JOB PRIORITY
#PriorityFlags=
#PriorityType=priority/basic
#PriorityDecayHalfLife=
#PriorityCalcPeriod=
#PriorityFavorSmall=
#PriorityMaxAge=
#PriorityUsageResetPeriod=
#PriorityWeightAge=
#PriorityWeightFairshare=
#PriorityWeightJobSize=
#PriorityWeightPartition=
#PriorityWeightQOS=
#
#
# LOGGING AND ACCOUNTING
#AccountingStorageEnforce=0
#AccountingStorageHost=
#AccountingStorageLoc=
#AccountingStoragePass=
#AccountingStoragePort=
AccountingStorageType=accounting_storage/filetxt
#AccountingStorageUser=
AccountingStoreJobComment=YES
ClusterName=cluster
#DebugFlags=
JobCompHost=localhost
JobCompLoc=slurm_acct_db
JobCompPass=password
#JobCompPort=
JobCompType=jobcomp/mysql
JobCompUser=slurm
#JobContainerType=job_container/none
JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/none
SlurmctldDebug=info
SlurmctldLogFile=/var/log/slurm-llnl/slurmctld.log
SlurmdDebug=info
SlurmdLogFile=/var/log/slurm-llnl/slurmd.log
#SlurmSchedLogFile=
#SlurmSchedLogLevel=
#
#
# POWER SAVE SUPPORT FOR IDLE NODES (optional)
#SuspendProgram=
#ResumeProgram=
#SuspendTimeout=
#ResumeTimeout=
#ResumeRate=
#SuspendExcNodes=
#SuspendExcParts=
#SuspendRate=
#SuspendTime=
#
#
# COMPUTE NODES
NodeName=node-[1-10] NodeAddr=node-[1-10] CPUs=16 RealMemory=64323 Sockets=1 CoresPerSocket=8 ThreadsPerCore=2 State=UNKNOWN
PartitionName=debug Nodes=node-[1-10] Default=YES MaxTime=INFINITE State=UP
Here is my slurmdbd.conf file:
AuthType=auth/munge
AuthInfo=/run/munge/munge.socket.2
DbdHost=localhost
DebugLevel=info
StorageHost=localhost
StorageLoc=slurm_acct_db
StoragePass=password
StorageType=accounting_storage/mysql
StorageUser=slurm
LogFile=/var/log/slurm-llnl/slurmdbd.log
PidFile=/var/run/slurmdbd.pid
SlurmUser=slurm
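One permission worth double-checking (my addition, not from the original post): slurmdbd.conf contains the database password, and slurmdbd expects it to be readable only by SlurmUser. Paths assume the Debian/Ubuntu slurm-llnl layout used throughout this question:

```shell
# Keep StoragePass private: owned by the slurm user, mode 600.
sudo chown slurm:slurm /etc/slurm-llnl/slurmdbd.conf
sudo chmod 600 /etc/slurm-llnl/slurmdbd.conf
ls -l /etc/slurm-llnl/slurmdbd.conf
```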
Running `pdsh -a sudo systemctl status slurmd` against my compute nodes gives the following error:
pdsh@node-0: node-5: ssh exited with exit code 3
node-6: ● slurmd.service - Slurm node daemon
node-6: Loaded: loaded (/lib/systemd/system/slurmd.service; enabled; vendor preset: enabled)
node-6: Active: inactive (dead) since Tue 2020-08-11 03:52:58 UTC; 2min 45s ago
node-6: Docs: man:slurmd(8)
node-6: Process: 9068 ExecStart=/usr/sbin/slurmd $SLURMD_OPTIONS (code=exited, status=0/SUCCESS)
node-6: Main PID: 8983
node-6:
node-6: Aug 11 03:34:09 node-6 systemd[1]: Starting Slurm node daemon...
node-6: Aug 11 03:34:09 node-6 systemd[1]: slurmd.service: Supervising process 8983 which is not our child. We'll most likely not notice when it exits.
node-6: Aug 11 03:34:09 node-6 systemd[1]: Started Slurm node daemon.
node-6: Aug 11 03:52:58 node-6 systemd[1]: slurmd.service: Killing process 8983 (n/a) with signal SIGKILL.
node-6: Aug 11 03:52:58 node-6 systemd[1]: slurmd.service: Killing process 8983 (n/a) with signal SIGKILL.
node-6: Aug 11 03:52:58 node-6 systemd[1]: slurmd.service: Succeeded.
pdsh@node-0: node-6: ssh exited with exit code 3
I didn't get this kind of error before, back when the cluster was up and running, so I'm not sure what I did or didn't do between now and the last time I had it working. My guess is that it has something to do with file/folder permissions, as I've found those to be critical when setting things up, and I may have missed documenting something I did previously. This is my second attempt at setting up a Slurm-managed cluster.
My entire workflow and scripts are available from my GitHub repo. Please ask if you need any other error output.
Thanks for any help you can provide.
Brett
Edit:
Logging into one of the nodes, node-1, and running slurmd manually, I get the following:
slurmd: debug: Log file re-opened
slurmd: debug2: hwloc_topology_init
slurmd: debug2: hwloc_topology_load
slurmd: debug2: hwloc_topology_export_xml
slurmd: debug: CPUs:16 Boards:1 Sockets:1 CoresPerSocket:8 ThreadsPerCore:2
slurmd: Message aggregation disabled
slurmd: debug: Reading cgroup.conf file /etc/slurm-llnl/cgroup.conf
slurmd: debug2: hwloc_topology_init
slurmd: debug2: xcpuinfo_hwloc_topo_load: xml file (/var/spool/slurmd/hwloc_topo_whole.xml) found
slurmd: debug: CPUs:16 Boards:1 Sockets:1 CoresPerSocket:8 ThreadsPerCore:2
slurmd: topology NONE plugin loaded
slurmd: route default plugin loaded
slurmd: CPU frequency setting not configured for this node
slurmd: debug: Resource spec: No specialized cores configured by default on this node
slurmd: debug: Resource spec: Reserved system memory limit not configured for this node
slurmd: debug: Reading cgroup.conf file /etc/slurm-llnl/cgroup.conf
slurmd: debug: task/cgroup: now constraining jobs allocated cores
slurmd: debug: task/cgroup/memory: total:64323M allowed:100%(enforced), swap:0%(permissive), max:100%(64323M) max+swap:100%(128646M) min:30M kmem:100%(64323M permissive) min:30M swappiness:0(unset)
slurmd: debug: task/cgroup: now constraining jobs allocated memory
slurmd: debug: task/cgroup: unable to open /etc/slurm-llnl/cgroup_allowed_devices_file.conf: No such file or directory
slurmd: debug: task/cgroup: now constraining jobs allocated devices
slurmd: debug: task/cgroup: loaded
slurmd: debug: Munge authentication plugin loaded
slurmd: debug: spank: opening plugin stack /etc/slurm-llnl/plugstack.conf
slurmd: debug: /etc/slurm-llnl/plugstack.conf: 1: include "/etc/slurm-llnl/plugstack.conf.d/*.conf"
slurmd: Munge credential signature plugin loaded
slurmd: slurmd version 19.05.5 started
slurmd: debug: Job accounting gather NOT_INVOKED plugin loaded
slurmd: debug: job_container none plugin loaded
slurmd: debug: switch NONE plugin loaded
slurmd: error: Error binding slurm stream socket: Address already in use
slurmd: error: Unable to bind listen port (*:6818): Address already in use
Logging into a different node, node-10, I get the following:
slurmd: debug: Log file re-opened
slurmd: debug2: hwloc_topology_init
slurmd: debug2: hwloc_topology_load
slurmd: debug2: hwloc_topology_export_xml
slurmd: debug: CPUs:16 Boards:1 Sockets:1 CoresPerSocket:8 ThreadsPerCore:2
slurmd: Message aggregation disabled
slurmd: debug: Reading cgroup.conf file /etc/slurm-llnl/cgroup.conf
slurmd: debug2: hwloc_topology_init
slurmd: debug2: xcpuinfo_hwloc_topo_load: xml file (/var/spool/slurmd/hwloc_topo_whole.xml) found
slurmd: debug: CPUs:16 Boards:1 Sockets:1 CoresPerSocket:8 ThreadsPerCore:2
slurmd: topology NONE plugin loaded
slurmd: route default plugin loaded
slurmd: CPU frequency setting not configured for this node
slurmd: debug: Resource spec: No specialized cores configured by default on this node
slurmd: debug: Resource spec: Reserved system memory limit not configured for this node
slurmd: debug: Reading cgroup.conf file /etc/slurm-llnl/cgroup.conf
slurmd: debug: task/cgroup: now constraining jobs allocated cores
slurmd: debug: task/cgroup/memory: total:64323M allowed:100%(enforced), swap:0%(permissive), max:100%(64323M) max+swap:100%(128646M) min:30M kmem:100%(64323M permissive) min:30M swappiness:0(unset)
slurmd: debug: task/cgroup: now constraining jobs allocated memory
slurmd: debug: task/cgroup: unable to open /etc/slurm-llnl/cgroup_allowed_devices_file.conf: No such file or directory
slurmd: debug: task/cgroup: now constraining jobs allocated devices
slurmd: debug: task/cgroup: loaded
slurmd: debug: Munge authentication plugin loaded
slurmd: debug: spank: opening plugin stack /etc/slurm-llnl/plugstack.conf
slurmd: debug: /etc/slurm-llnl/plugstack.conf: 1: include "/etc/slurm-llnl/plugstack.conf.d/*.conf"
slurmd: Munge credential signature plugin loaded
slurmd: slurmd version 19.05.5 started
slurmd: debug: Job accounting gather NOT_INVOKED plugin loaded
slurmd: debug: job_container none plugin loaded
slurmd: debug: switch NONE plugin loaded
slurmd: slurmd started on Tue, 11 Aug 2020 06:56:10 +0000
slurmd: CPUs=16 Boards=1 Sockets=1 Cores=8 Threads=2 Memory=64323 TmpDisk=297553 Uptime=756 CPUSpecList=(null) FeaturesAvail=(null) FeaturesActive=(null)
slurmd: debug: AcctGatherEnergy NONE plugin loaded
slurmd: debug: AcctGatherProfile NONE plugin loaded
slurmd: debug: AcctGatherInterconnect NONE plugin loaded
slurmd: debug: AcctGatherFilesystem NONE plugin loaded
slurmd: debug2: No acct_gather.conf file (/etc/slurm-llnl/acct_gather.conf)
slurmd: debug: _handle_node_reg_resp: slurmctld sent back 8 TRES.
On another node, node-5, I get this, the same as node-1:
slurmd: debug: Log file re-opened
slurmd: debug2: hwloc_topology_init
slurmd: debug2: hwloc_topology_load
slurmd: debug2: hwloc_topology_export_xml
slurmd: debug: CPUs:16 Boards:1 Sockets:1 CoresPerSocket:8 ThreadsPerCore:2
slurmd: Message aggregation disabled
slurmd: debug: Reading cgroup.conf file /etc/slurm-llnl/cgroup.conf
slurmd: debug2: hwloc_topology_init
slurmd: debug2: xcpuinfo_hwloc_topo_load: xml file (/var/spool/slurmd/hwloc_topo_whole.xml) found
slurmd: debug: CPUs:16 Boards:1 Sockets:1 CoresPerSocket:8 ThreadsPerCore:2
slurmd: topology NONE plugin loaded
slurmd: route default plugin loaded
slurmd: CPU frequency setting not configured for this node
slurmd: debug: Resource spec: No specialized cores configured by default on this node
slurmd: debug: Resource spec: Reserved system memory limit not configured for this node
slurmd: debug: Reading cgroup.conf file /etc/slurm-llnl/cgroup.conf
slurmd: debug: task/cgroup: now constraining jobs allocated cores
slurmd: debug: task/cgroup/memory: total:64323M allowed:100%(enforced), swap:0%(permissive), max:100%(64323M) max+swap:100%(128646M) min:30M kmem:100%(64323M permissive) min:30M swappiness:0(unset)
slurmd: debug: task/cgroup: now constraining jobs allocated memory
slurmd: debug: task/cgroup: unable to open /etc/slurm-llnl/cgroup_allowed_devices_file.conf: No such file or directory
slurmd: debug: task/cgroup: now constraining jobs allocated devices
slurmd: debug: task/cgroup: loaded
slurmd: debug: Munge authentication plugin loaded
slurmd: debug: spank: opening plugin stack /etc/slurm-llnl/plugstack.conf
slurmd: debug: /etc/slurm-llnl/plugstack.conf: 1: include "/etc/slurm-llnl/plugstack.conf.d/*.conf"
slurmd: Munge credential signature plugin loaded
slurmd: slurmd version 19.05.5 started
slurmd: debug: Job accounting gather NOT_INVOKED plugin loaded
slurmd: debug: job_container none plugin loaded
slurmd: debug: switch NONE plugin loaded
slurmd: error: Error binding slurm stream socket: Address already in use
slurmd: error: Unable to bind listen port (*:6818): Address already in use
Node-10 was down earlier and I struggled to bring it back up, so that error may be unrelated to the overall problem.
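The "Address already in use" errors above suggest an earlier slurmd instance is still bound to port 6818. A hedged sketch of how one might confirm and clear this on each affected node (the `ss` filter syntax assumes iproute2):

```shell
# Show which process is still listening on the slurmd port
# (6818, matching SlurmdPort in slurm.conf above).
sudo ss -tlnp 'sport = :6818'

# If a stray slurmd shows up, stop it, then restart the service
# so that systemd tracks the new instance.
sudo pkill -x slurmd
sudo systemctl restart slurmd
```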
Edit2: after killing the blocking slurmd processes across all the nodes, slurmd still fails on startup:
slurmd.service - Slurm node daemon
Loaded: loaded (/lib/systemd/system/slurmd.service; enabled; vendor preset: enabled)
Active: failed (Result: timeout) since Tue 2020-08-11 07:10:42 UTC; 3min 58s ago
Docs: man:slurmd(8)
Aug 11 07:09:11 node-1 systemd[1]: Starting Slurm node daemon...
Aug 11 07:09:11 node-1 systemd[1]: slurmd.service: Can't open PID file /run/slurmd.pid (yet?) after start: Operation not permitted
Aug 11 07:10:42 node-1 systemd[1]: slurmd.service: start operation timed out. Terminating.
Aug 11 07:10:42 node-1 systemd[1]: slurmd.service: Failed with result 'timeout'.
Aug 11 07:10:42 node-1 systemd[1]: Failed to start Slurm node daemon.
Output of `sudo slurmd -Dvvv` on node-1:
slurmd: debug: Log file re-opened
slurmd: debug2: hwloc_topology_init
slurmd: debug2: hwloc_topology_load
slurmd: debug2: hwloc_topology_export_xml
slurmd: debug: CPUs:16 Boards:1 Sockets:1 CoresPerSocket:8 ThreadsPerCore:2
slurmd: Message aggregation disabled
slurmd: debug: Reading cgroup.conf file /etc/slurm-llnl/cgroup.conf
slurmd: debug2: hwloc_topology_init
slurmd: debug2: xcpuinfo_hwloc_topo_load: xml file (/var/spool/slurmd/hwloc_topo_whole.xml) found
slurmd: debug: CPUs:16 Boards:1 Sockets:1 CoresPerSocket:8 ThreadsPerCore:2
slurmd: topology NONE plugin loaded
slurmd: route default plugin loaded
slurmd: CPU frequency setting not configured for this node
slurmd: debug: Resource spec: No specialized cores configured by default on this node
slurmd: debug: Resource spec: Reserved system memory limit not configured for this node
slurmd: debug: Reading cgroup.conf file /etc/slurm-llnl/cgroup.conf
slurmd: debug: task/cgroup: now constraining jobs allocated cores
slurmd: debug: task/cgroup/memory: total:64323M allowed:100%(enforced), swap:0%(permissive), max:100%(64323M) max+swap:100%(128646M) min:30M kmem:100%(64323M permissive) min:30M swappiness:0(unset)
slurmd: debug: task/cgroup: now constraining jobs allocated memory
slurmd: debug: task/cgroup: unable to open /etc/slurm-llnl/cgroup_allowed_devices_file.conf: No such file or directory
slurmd: debug: task/cgroup: now constraining jobs allocated devices
slurmd: debug: task/cgroup: loaded
slurmd: debug: Munge authentication plugin loaded
slurmd: debug: spank: opening plugin stack /etc/slurm-llnl/plugstack.conf
slurmd: debug: /etc/slurm-llnl/plugstack.conf: 1: include "/etc/slurm-llnl/plugstack.conf.d/*.conf"
slurmd: Munge credential signature plugin loaded
slurmd: slurmd version 19.05.5 started
slurmd: debug: Job accounting gather NOT_INVOKED plugin loaded
slurmd: debug: job_container none plugin loaded
slurmd: debug: switch NONE plugin loaded
slurmd: slurmd started on Tue, 11 Aug 2020 07:14:08 +0000
slurmd: CPUs=16 Boards=1 Sockets=1 Cores=8 Threads=2 Memory=64323 TmpDisk=297553 Uptime=15897 CPUSpecList=(null) FeaturesAvail=(null) FeaturesActive=(null)
slurmd: debug: AcctGatherEnergy NONE plugin loaded
slurmd: debug: AcctGatherProfile NONE plugin loaded
slurmd: debug: AcctGatherInterconnect NONE plugin loaded
slurmd: debug: AcctGatherFilesystem NONE plugin loaded
slurmd: debug2: No acct_gather.conf file (/etc/slurm-llnl/acct_gather.conf)
slurmd: debug: _handle_node_reg_resp: slurmctld sent back 8 TRES.
Edit3: I get these debug messages from the slurmd.log file, which seem to indicate that the PID couldn't be retrieved and that some files/folders are inaccessible:
[2020-08-11T07:38:27.973] slurmd version 19.05.5 started
[2020-08-11T07:38:27.973] debug: Job accounting gather NOT_INVOKED plugin loaded
[2020-08-11T07:38:27.973] debug: job_container none plugin loaded
[2020-08-11T07:38:27.973] debug: switch NONE plugin loaded
[2020-08-11T07:38:27.973] slurmd started on Tue, 11 Aug 2020 07:38:27 +0000
[2020-08-11T07:38:27.973] CPUs=16 Boards=1 Sockets=1 Cores=8 Threads=2 Memory=64323 TmpDisk=297553 Uptime=17357 CPUSpecList=(null) FeaturesAvail=(null) FeaturesActive=(null)
[2020-08-11T07:38:27.973] debug: AcctGatherEnergy NONE plugin loaded
[2020-08-11T07:38:27.973] debug: AcctGatherProfile NONE plugin loaded
[2020-08-11T07:38:27.974] debug: AcctGatherInterconnect NONE plugin loaded
[2020-08-11T07:38:27.974] debug: AcctGatherFilesystem NONE plugin loaded
[2020-08-11T07:38:27.974] debug2: No acct_gather.conf file (/etc/slurm-llnl/acct_gather.conf)
[2020-08-11T07:38:27.975] debug: _handle_node_reg_resp: slurmctld sent back 8 TRES.
[2020-08-11T07:38:33.496] got shutdown request
[2020-08-11T07:38:33.496] all threads complete
[2020-08-11T07:38:33.496] debug2: _file_read_uint32s: unable to open '(null)/tasks' for reading : No such file or directory
[2020-08-11T07:38:33.496] debug2: xcgroup_get_pids: unable to get pids of '(null)'
[2020-08-11T07:38:33.496] debug2: _file_read_uint32s: unable to open '(null)/tasks' for reading : No such file or directory
[2020-08-11T07:38:33.496] debug2: xcgroup_get_pids: unable to get pids of '(null)'
[2020-08-11T07:38:33.497] debug2: _file_read_uint32s: unable to open '(null)/tasks' for reading : No such file or directory
[2020-08-11T07:38:33.497] debug2: xcgroup_get_pids: unable to get pids of '(null)'
[2020-08-11T07:38:33.497] Consumable Resources (CR) Node Selection plugin shutting down ...
[2020-08-11T07:38:33.497] Munge credential signature plugin unloaded
[2020-08-11T07:38:33.497] Slurmd shutdown completing
Edit4: slurmd is now active, but only after running `sudo service slurmd restart`. Running stop and then start does not bring slurmd up.
● slurmd.service - Slurm node daemon
Loaded: loaded (/lib/systemd/system/slurmd.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2020-08-11 08:17:46 UTC; 1min 37s ago
Docs: man:slurmd(8)
Process: 28281 ExecStart=/usr/sbin/slurmd $SLURMD_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 28474
Tasks: 0
Memory: 1.1M
CGroup: /system.slice/slurmd.service
Aug 11 08:17:46 node-1 systemd[1]: Starting Slurm node daemon...
Aug 11 08:17:46 node-1 systemd[1]: slurmd.service: Can't open PID file /run/slurmd.pid (yet?) after start: Operation not permitted
Aug 11 08:17:46 node-1 systemd[1]: Started Slurm node daemon.
Aug 11 08:18:41 node-1 systemd[1]: slurmd.service: Supervising process 28474 which is not our child. We'll most likely not notice when it exits.
Edit5: another possibly related issue is that sacct can only be run with sudo, and it complains about permissions on the log file. I tried changing permissions under /var/log, but that caused problems because it's a system folder:
ubuntu@node-0:/data/pangenome_cactus$ sacct
JobID JobName Partition Account AllocCPUS State ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------
/var/log/slurm_jobacct.log: Permission denied
ubuntu@node-0:/data/pangenome_cactus$ sudo sacct
JobID JobName Partition Account AllocCPUS State ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------
2 cactus_pa+ debug (null) 0 FAILED 127:0
3 cactus_pa+ debug (null) 0 RUNNING 0:0
3.0 singulari+ (null) 0 RUNNING 0:0
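A sketch of one possible fix for that sacct permission error (my addition, not from the original post; the path comes from the "Permission denied" message and is the filetxt accounting default): the log is written by the slurm user but read directly by whoever runs sacct, so it needs to be readable by regular users:

```shell
# /var/log/slurm_jobacct.log is written by slurmctld (as SlurmUser)
# with accounting_storage/filetxt; sacct reads it as the calling user.
sudo chown slurm:slurm /var/log/slurm_jobacct.log
sudo chmod 644 /var/log/slurm_jobacct.log   # world-readable so sacct works without sudo
```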
Posted 2020-08-11 08:05:58
The slurmd daemon reports "got shutdown request", so it is being killed by systemd, most likely because of "Can't open PID file /run/slurmd.pid (yet?) after start". systemd is configured to consider slurmd successfully started only once the PID file /run/slurmd.pid exists, but the Slurm configuration says SlurmdPidFile=/var/run/slurmd.pid. Try changing it to SlurmdPidFile=/run/slurmd.pid.
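A quick way to check both sides of that PID-file handshake (a sketch; paths are the Debian/Ubuntu slurm-llnl defaults seen in the logs above):

```shell
# The path systemd polls after ExecStart returns:
grep PIDFile /lib/systemd/system/slurmd.service

# The path slurmd actually writes its PID to:
grep SlurmdPidFile /etc/slurm-llnl/slurm.conf

# After making the two match, reload units and restart:
sudo systemctl daemon-reload
sudo systemctl restart slurmd
```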
Posted 2020-11-19 19:53:08
Since I believe my solution works, I'll add my own take on this question. slurmd does not seem to care whether the PidFile path exists. However, if it cannot write to the given path, it returns an error code when run as a daemon. The Linux service machinery catches that error code and concludes the daemon failed to start, when in fact slurmd is already running. That is why you get the "Address already in use" error when you try to restart it. So the fix is to make sure the PidFile path actually exists when the machine boots.
Solution #1
Don't create the PID file under /var/run. Use some other directory that isn't owned by root. If you want to use /var/run, see Solution #2.
Solution #2
/var/run is a temporary directory created in memory; it does not persist across reboots. The other problem is that /var/run belongs to the root user, not slurm, which is why slurmd has no right to write into it. I therefore suggest creating /var/run/slurm and putting everything in there.
For how to do this, we can borrow from munge. If you run `ls -l /var/run/` you'll notice that /var/run/munge has user munge and group munge, and munge manages to create /var/run/munge at boot.
To create a directory under /var/run at boot, simply add a file at /usr/lib/tmpfiles.d/slurm.conf (again, this is what munge does; see /usr/lib/tmpfiles.d/munge.conf for reference):
d /var/run/slurm 0755 slurm slurm -
d /var/log/slurm 0755 slurm slurm -
d /var/spool/slurm 0755 slurm slurm -
Then make sure your slurm.conf, slurmd.service, and slurmctld.service all point their PidFile at that same location.
That's it; it should work. I also hit another odd problem where the services would fail at boot, and I had to add Restart=on-failure and RestartSec=5 to my unit files so that they eventually come up (after roughly 10~20+ tries). Not tidy, but it works.
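Assuming a tmpfiles.d fragment like the one above, the directories can also be created immediately, without waiting for a reboot, and then verified:

```shell
# Create the entries from the new tmpfiles.d fragment right away;
# at boot, systemd-tmpfiles-setup.service does this automatically.
sudo systemd-tmpfiles --create /usr/lib/tmpfiles.d/slurm.conf

# The runtime directory should now exist, owned by slurm:slurm.
ls -ld /var/run/slurm
```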
https://stackoverflow.com/questions/63351391