tensorflow - Best way to split a computation too large to fit into memory?


I have an operation that runs out of memory when the batch size is greater than 4 (I normally run with 32). I thought I could cleverly split the operation along the batch dimension using tf.split, run it on each subset of the batch, and then recombine the results with tf.concat. For some reason this doesn't work and still results in an OOM error. To be clear: if I run on a batch size of 4, it works without splitting. If I instead run on a batch size of 32, even if I perform a 32-way split so that each individual element runs independently, I still run out of memory. Doesn't TF schedule the separate operations so that they don't overwhelm memory? If not, do I need to explicitly set some sort of conditional dependence between them?
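The attempted workaround can be sketched as follows. This is a minimal illustration, not the asker's actual code: `heavy_op` is a hypothetical stand-in for the memory-hungry operation, and the shapes are made up. Note that when such split/concat graphs are traced (e.g. inside a `tf.function`), TensorFlow is free to schedule the per-chunk ops concurrently, which is consistent with the OOM the question describes.

```python
import tensorflow as tf

# Hypothetical placeholder for the memory-hungry per-batch operation.
def heavy_op(x):
    return tf.reduce_sum(tf.square(x), axis=-1)

batch = tf.random.normal([32, 8])

# Split along the batch dimension into chunks of 4 (the size that fits),
# apply the op to each chunk, then recombine with tf.concat.
chunks = tf.split(batch, num_or_size_splits=8, axis=0)
outputs = [heavy_op(c) for c in chunks]
result = tf.concat(outputs, axis=0)
print(result.shape)
```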

I discovered that the functional ops, tf.map_fn in this case, address my needs. By setting the parallel_iterations option to 1 (or some small number that makes the computation fit in memory), I'm able to control the degree of parallelism and avoid running out of memory.
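A minimal sketch of that solution, again using a hypothetical `heavy_op` in place of the real computation: tf.map_fn applies the function to each element along the batch dimension, and parallel_iterations bounds how many elements may be processed at once, so only that many elements' intermediates need to be live in memory.

```python
import tensorflow as tf

# Hypothetical placeholder for the memory-hungry per-element operation.
def heavy_op(x):
    return tf.reduce_sum(tf.square(x))

batch = tf.random.normal([32, 8])

# parallel_iterations=1 processes the batch elements one at a time,
# trading parallelism for a much smaller peak memory footprint.
result = tf.map_fn(heavy_op, batch, parallel_iterations=1)
print(result.shape)
```

Raising parallel_iterations to, say, 4 would let up to four elements run concurrently, which is a reasonable knob to tune until the computation just fits in memory.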

