python - Avoid out of memory error for multiprocessing Pool


How can I avoid an "out of memory" exception when a lot of subprocesses are launched using multiprocessing.Pool?

First of all, the program loads a 5 GB object from a file. Next, parallel processing runs, and each process reads that 5 GB object.

Because the machine has more than 30 cores, I want to use all of them. However, when 30 subprocesses are launched, an out of memory exception occurs.

Probably, each process gets its own copy of the large instance (5 GB), so the total memory is 5 GB * 30 cores = 150 GB. That's why the out of memory error occurs.

I believe there is a workaround for this memory error, because each process only reads the object. If every process could share the memory of the huge object, 5 GB would be enough for the whole pool.

Please let me know a workaround for this memory error. Here is my current code:

    import cPickle
    from multiprocessing import Pool
    from multiprocessing import Process
    import multiprocessing
    from functools import partial

    with open("huge_data_5gb.pickle", "rb") as f:
        huge_instance = cPickle.load(f)

    def run_process(i, huge_instance):
        return huge_instance.get_element(i)

    partial_process = partial(run_process, huge_instance=huge_instance)

    p = Pool(30)  # the machine has more than 30 cores
    result = p.map(partial_process, range(10000))
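For what it's worth, here is a rough, untested sketch of the kind of sharing I have in mind (assuming a fork-based platform such as Linux): keep the object as a module-level global so the forked workers inherit it copy-on-write, instead of binding it into every task through partial, which pickles the 5 GB object. I'm not sure this is the right approach, so any better suggestion is welcome.

    import cPickle
    from multiprocessing import Pool

    # Loaded once in the parent process; forked children inherit this
    # global (copy-on-write on Linux) rather than receiving a pickled copy.
    with open("huge_data_5gb.pickle", "rb") as f:
        huge_instance = cPickle.load(f)

    def run_process(i):
        # Reads the inherited module-level global, so no 5 GB argument
        # needs to be pickled and sent for each task.
        return huge_instance.get_element(i)

    if __name__ == "__main__":
        p = Pool(30)
        result = p.map(run_process, range(10000))
        p.close()
        p.join()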

