parpool memory allocation per worker
Hello, I'm learning to submit batch jobs on SLURM. Realistically, the most I can request is:
#SBATCH -n 32
#SBATCH --mem-per-cpu=4G
In other words, I can request 32 cores and 128G of memory.
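For reference, here is a minimal sketch of the kind of submission script I have in mind (the time limit, module name, and driver script name are placeholders, not my actual settings):

#!/bin/bash
#SBATCH -n 32
#SBATCH --mem-per-cpu=4G
#SBATCH -t 3-00:00:00          # placeholder time limit
module load matlab             # placeholder module name
matlab -batch "run_multistart" # run_multistart.m is a placeholder for my driver script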
Now, I want to run a global optimization function (MultiStart) in parallel. Currently, I set the number of workers in parpool to 32 (equal to the number of cores), but I constantly run into out-of-memory errors.
I'm curious whether setting the number of workers in parpool to, say, 16 would resolve this issue. If I'm not mistaken, with 32 workers each worker has at most 4G of memory to work with, whereas with 16 workers each worker has at most 8G.
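For context, this is roughly how I open the pool and launch MultiStart; the objective, bounds, and number of start points below are placeholders, not my actual problem:

% open a pool with fewer workers than allocated cores (16 instead of 32)
pool = parpool(16);

% placeholder problem: minimize a simple quadratic over a box
problem = createOptimProblem('fmincon', ...
    'objective', @(x) sum(x.^2), ...
    'x0', zeros(10,1), ...
    'lb', -5*ones(10,1), 'ub', 5*ones(10,1));

ms = MultiStart('UseParallel', true, 'Display', 'iter');
[xbest, fbest] = run(ms, problem, 200);  % 200 start points spread across the workers

delete(pool);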
I'd be grateful if you could correct or confirm what I wrote. Obviously, I could just try it, but the queue wait is long and the optimization itself takes days, so I want to make sure what I try makes sense before submitting.
Next, assuming what I wrote makes sense, what happens if I set the number of workers in parpool to, say, 20, so that 32/20 = 1.6 cores per worker is no longer an integer (and, by the same logic, 128G/20 = 6.4G of memory per worker)?
Thank you for your guidance.