Thread safety - Python: Deadlock of a single lock in multiprocessing
I'm using pyserial to acquire data with multiprocessing. The way I share the data is simple. So:
I have these member objects in my class:
    self.mpmanager = mp.Manager()
    self.shared_return_list = self.mpmanager.list()
    self.shared_result_lock = mp.Lock()
I call the multiprocessing process this way:
    process = mp.Process(target=do_my_stuff,
                         args=(self.shared_stopped,
                               self.shared_return_list,
                               self.shared_result_lock))
where do_my_stuff is a global function.
Now the part that fills the list inside the process function is:
    if len(acqbuffer) > acquisitionspecs["lengthtopass"]:
        shared_lock.acquire()
        shared_return_list.extend(acqbuffer)
        del acqbuffer[:]
        shared_lock.release()
And the part that takes the data into the local thread for use is:
    while len(self.acqbuffer) <= 0 and (not self.stopped):
        # copy the list from the shared buffer and empty it
        self.shared_result_lock.acquire()
        self.acqbuffer.extend(self.shared_return_list)
        del self.shared_return_list[:]
        self.shared_result_lock.release()
The problem:
Although there is only one lock, the program somehow ends up in a deadlock! After running for some time, the program freezes. After adding prints before and after the locks, I found that it freezes at the lock acquisition and somehow reaches a deadlock.
If I use a recursive lock, RLock(), it works with no problems, but I'm not sure whether I should do that.
How is this possible? What am I doing wrong? I would expect that if both processes try to acquire the lock, the later one should simply block until the other process releases it.
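For reference, here is a minimal self-contained sketch of the pattern described above (the function names, buffer sizes, and the fake data source are assumptions for illustration, not the actual acquisition code):

    import multiprocessing as mp
    import time

    def producer(stop_event, shared_return_list, shared_lock):
        # fill a local buffer, then periodically move it into the shared list
        acqbuffer = []
        while not stop_event.is_set():
            acqbuffer.append(42)  # stand-in for a pyserial read
            if len(acqbuffer) > 10:
                shared_lock.acquire()
                shared_return_list.extend(acqbuffer)
                del acqbuffer[:]
                shared_lock.release()
            time.sleep(0.01)

    if __name__ == "__main__":
        manager = mp.Manager()
        shared_return_list = manager.list()
        shared_lock = mp.Lock()
        stop_event = mp.Event()

        process = mp.Process(target=producer,
                             args=(stop_event, shared_return_list, shared_lock))
        process.start()

        local_buffer = []
        for _ in range(5):  # drain the shared list a few times, then stop
            shared_lock.acquire()
            local_buffer.extend(shared_return_list)
            del shared_return_list[:]
            shared_lock.release()
            time.sleep(0.1)

        stop_event.set()
        process.join()
        print(len(local_buffer), "samples received")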
Without having an SSCCE, it's difficult to know whether there's anything else going on in your code or not.
One possibility is that an exception is thrown after the lock is acquired. Try wrapping each of the locked sections in a try/finally clause. E.g.:
    try:
        shared_lock.acquire()
        shared_return_list.extend(acqbuffer)
        del acqbuffer[:]
    finally:
        shared_lock.release()
and:
    try:
        self.shared_result_lock.acquire()
        self.acqbuffer.extend(self.shared_return_list)
        del self.shared_return_list[:]
    finally:
        self.shared_result_lock.release()
You could also add except clauses and log any exceptions that are raised, if this turns out to be the issue.
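Since multiprocessing locks support the context-manager protocol, the same idea can also be written with a with block, which releases the lock even if an exception escapes. A sketch (the helper name and logger setup are assumptions):

    import logging

    logger = logging.getLogger(__name__)

    def drain_shared_list(shared_lock, shared_return_list, local_buffer):
        # move everything from the shared list into a local buffer
        try:
            with shared_lock:  # acquired here, released on block exit, even on error
                local_buffer.extend(shared_return_list)
                del shared_return_list[:]
        except Exception:
            logger.exception("failed to drain the shared list")
            raise

The same with form applies to the producer side of the snippet above.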