l******9 (posts: 579) | 1 Hi,
I am trying to parallelize a compute-intensive problem.
I am working on a Linux cluster where each node is a multicore machine,
e.g., 2 or 4 quad-core processors per node.
I want to reduce latency and improve performance as much as possible.
I plan to use multiprocessing and multithreading at the same time:
each process runs on a distinct node, and each process spawns many threads
on its node. This is two-level parallelism.
For multiprocessing, I would like to choose MPI.
For multithreading, I have two choices: OpenMP or boost::thread (pthreads).
Which one has lower latency and higher performance?
It seems that OpenMP coding is easier (no manual low-level thread
management).
But it also seems that OpenMP has higher overhead than boost::thread (pthreads).
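A minimal sketch of the two-level plan described above, assuming an MPI library with MPI-3 thread support (MPI across nodes, OpenMP within each node); the loop and the sum are placeholder work, not part of any real application:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    /* Ask for thread support so OpenMP regions may run inside each rank;
       FUNNELED means only the master thread makes MPI calls. */
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Level 2: each rank computes a partial sum with OpenMP threads. */
    long local = 0;
    #pragma omp parallel for reduction(+:local)
    for (long i = rank; i < 1000000; i += size)
        local += i;

    /* Level 1: MPI combines the per-node partial results. */
    long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("total = %ld\n", total);

    MPI_Finalize();
    return 0;
}
```

Build with something like `mpicc -fopenmp`, then launch one rank per node and let OpenMP fill the cores.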
Any help is really appreciated.
Thanks. | x*z (posts: 1010) | 2 Well, you might actually get better performance using MPI throughout.
| l******9 (posts: 579) | 3 Why?
MPI has inter-process communication overhead, since processes
do not share the same memory space.
Multithreading does not have this kind of problem.
| x*z (posts: 1010) | 4 Most MPI libraries have a shared-memory transport implemented for ranks
on the same node, which actually has less overhead than OpenMP or threading.
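Beyond the transport, MPI-3 even exposes shared memory directly: ranks on the same node can allocate a window whose segments they read and write with plain loads and stores, no message copy at all. A sketch of that mechanism (requires an MPI-3 library; the double value is placeholder data):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* Split COMM_WORLD into one communicator per shared-memory node. */
    MPI_Comm node;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node);

    int nrank;
    MPI_Comm_rank(node, &nrank);

    /* Allocate a window backed by node-local shared memory. */
    double *base;
    MPI_Win win;
    MPI_Win_allocate_shared(sizeof(double), sizeof(double),
                            MPI_INFO_NULL, node, &base, &win);
    *base = (double)nrank;
    MPI_Win_fence(0, win);

    /* Find where rank 0's segment lives in this process's address
       space and read it directly -- a load, not a message. */
    MPI_Aint sz; int disp; double *peer;
    MPI_Win_shared_query(win, 0, &sz, &disp, &peer);
    printf("rank %d sees rank 0's value %.0f\n", nrank, *peer);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```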
| l******9 (posts: 579) | 5 In MPI libraries with shared memory implemented, do we have inter-process
communication or inter-thread communication?
If the former, why do processes have less overhead than threads?
If the latter, why does it have less overhead than OpenMP and threading?
Does MPI have some built-in advantage over them?
Any help is really appreciated.
Thanks