Is it possible to create a non-daemonic Python pool? I want a pool to be able to call a function that has another pool inside it.
I need this because daemon processes cannot create processes. Specifically, it causes the following error:
AssertionError: daemonic processes are not allowed to have children
For example, consider a scenario where function_a has a pool that runs function_b, which in turn has a pool that runs function_c. This chain of functions will fail, because function_b is running in a daemon process, and daemon processes are not allowed to create processes.
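A minimal script that reproduces the error (the helper names double, fan_out and reproduce are just for illustration, not from the question; the fork start method is used to keep the demo self-contained on POSIX):

```python
import multiprocessing

def double(x):
    return x * 2

def fan_out(n):
    # This runs inside a daemonic pool worker, so creating another Pool
    # here triggers Process.start()'s daemon check.
    with multiprocessing.Pool(2) as inner:
        return sum(inner.map(double, range(n)))

def reproduce():
    # fork start method: POSIX only, keeps __main__ functions picklable here
    ctx = multiprocessing.get_context("fork")
    try:
        with ctx.Pool(2) as outer:
            return outer.map(fan_out, [3, 4])
    except Exception as exc:
        # The nested Pool fails with
        # "daemonic processes are not allowed to have children"
        return str(exc)

if __name__ == "__main__":
    print(reproduce())
```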
Posted on 2012-01-23 02:46:24
The multiprocessing.pool.Pool class creates the worker processes in its __init__ method, makes them daemonic and starts them, and it is not possible to re-set their daemon attribute to False before they are started (and afterwards it is not allowed anymore). But you can create your own subclass of multiprocessing.pool.Pool (multiprocessing.Pool is just a wrapper function) and substitute your own multiprocessing.Process subclass, which is always non-daemonic, to be used for the worker processes.
Here is a full example of how to do this. The important parts are the two classes NoDaemonProcess and MyPool at the top, and the calls to pool.close() and pool.join() on the MyPool instance at the end.
#!/usr/bin/env python
# -*- coding: UTF-8 -*-

import multiprocessing
# We must import this explicitly, it is not imported by the top-level
# multiprocessing module.
import multiprocessing.pool
import time

from random import randint


class NoDaemonProcess(multiprocessing.Process):
    # make 'daemon' attribute always return False
    def _get_daemon(self):
        return False
    def _set_daemon(self, value):
        pass
    daemon = property(_get_daemon, _set_daemon)

# We sub-class multiprocessing.pool.Pool instead of multiprocessing.Pool
# because the latter is only a wrapper function, not a proper class.
class MyPool(multiprocessing.pool.Pool):
    Process = NoDaemonProcess

def sleepawhile(t):
    print("Sleeping %i seconds..." % t)
    time.sleep(t)
    return t

def work(num_procs):
    print("Creating %i (daemon) workers and jobs in child." % num_procs)
    pool = multiprocessing.Pool(num_procs)

    result = pool.map(sleepawhile,
                      [randint(1, 5) for x in range(num_procs)])

    # The following is not really needed, since the (daemon) workers of the
    # child's pool are killed when the child is terminated, but it's good
    # practice to cleanup after ourselves anyway.
    pool.close()
    pool.join()
    return result

def test():
    print("Creating 5 (non-daemon) workers and jobs in main process.")
    pool = MyPool(5)

    result = pool.map(work, [randint(1, 5) for x in range(5)])

    pool.close()
    pool.join()
    print(result)

if __name__ == '__main__':
    test()
Posted on 2019-01-22 16:34:05
On some Python versions, replacing the standard Pool with a custom one raises the error: AssertionError: group argument must be None for now.
Here I found a solution that can help you:
class NoDaemonProcess(multiprocessing.Process):
    # make 'daemon' attribute always return False
    @property
    def daemon(self):
        return False

    @daemon.setter
    def daemon(self, val):
        pass

class NoDaemonProcessPool(multiprocessing.pool.Pool):
    def Process(self, *args, **kwds):
        proc = super(NoDaemonProcessPool, self).Process(*args, **kwds)
        proc.__class__ = NoDaemonProcess
        return proc
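On Python 3.8+, Pool.Process became a static method that receives the pool's context as its first argument, so overriding it as an instance method breaks again. A sketch of a more version-independent variant (the class names NoDaemonContext and NestablePool are my own, not from the answer) swaps the Process class on the context instead:

```python
import multiprocessing
import multiprocessing.pool

class NoDaemonProcess(multiprocessing.Process):
    # make 'daemon' attribute always return False
    @property
    def daemon(self):
        return False

    @daemon.setter
    def daemon(self, value):
        pass

# Subclass whatever concrete context class the platform default is,
# and make it hand out non-daemonic worker processes.
class NoDaemonContext(type(multiprocessing.get_context())):
    Process = NoDaemonProcess

class NestablePool(multiprocessing.pool.Pool):
    # Pass a custom context instead of overriding Pool.Process, whose
    # signature has changed across Python versions.
    def __init__(self, *args, **kwargs):
        kwargs["context"] = NoDaemonContext()
        super().__init__(*args, **kwargs)
```

With this, NestablePool(5).map(...) can run functions that open their own multiprocessing.Pool inside the workers.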
Posted on 2020-05-08 17:52:17
I have seen people handle this problem with billiard, celery's fork of multiprocessing (multiprocessing pool extensions), which allows daemonic processes to spawn children.
import billiard as multiprocessing
https://stackoverflow.com/questions/6974695