Last modified by valad on 2024/05/28 10:16

From version 9.1, edited by valad on 2024/05/28 10:16 (no change comment)
To version 7.1, edited by valad on 2024/05/28 10:13 (no change comment)

Details

Page properties
Content
... ... @@ -12,11 +12,11 @@
12 12  export OMPI_CC='icc'
13 13  {{/code}}
14 14  
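As a minimal sketch of how the wrapper-compiler selection is used in practice (assuming the Open MPI wrapper compilers are on your PATH; OMPI_CXX and OMPI_FC work the same way as OMPI_CC, and hello.c is only a placeholder):

{{code language="bash"}}
# Tell the Open MPI wrapper compilers to call the Intel compilers
export OMPI_CC=icc
export OMPI_CXX=icpc
export OMPI_FC=ifort

# mpicc now invokes icc under the hood
mpicc -O2 -o hello hello.c
mpirun -np 4 ./hello
{{/code}}
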
15 -Especially if you use icc, it is better to use a recent [[Intel compiler>>doc:Commercial Software.Intel Compiler.WebHome]].
15 +Especially if you use icc, it is better to use the recent ~[~[Intel Compiler 11.0>>Intel_Compiler_Temp]].
16 16  
17 17  == Infiniband ==
18 18  
19 -The '**itp**'-, '**itp-big**'-, '**dfg-xeon**'-, '**iboga**'-, '**dreama**'- and '**barcelona**'-nodes have an **Infiniband** network. It is used by default with our Open MPI installation. Infiniband provides high bandwidth with low latency: it can transport 20 Gbit/s with a latency of 4 µs. In comparison, normal Gigabit Ethernet provides 1 Gbit/s with at least 30 µs latency. Latency is the time a data packet needs to travel from its source to its target; this higher latency is the main drawback of using Gigabit Ethernet for MPI communication.
19 +The 'itp'-, 'itp-big'-, 'dfg-xeon'-, 'iboga'-, 'dreama'- and 'barcelona'-nodes have an Infiniband network. It is used by default with our Open MPI installation. Infiniband provides high bandwidth with low latency: it can transport 20 Gbit/s with a latency of 4 µs. In comparison, normal Gigabit Ethernet provides 1 Gbit/s with at least 30 µs latency. Latency is the time a data packet needs to travel from its source to its target; this higher latency is the main drawback of using Gigabit Ethernet for MPI communication.
20 20  
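If you want to see the effect of the interconnect yourself, Open MPI's MCA parameters can restrict which transports are used at launch time. This is only a sketch; the component names actually available (e.g. openib, ucx, tcp) depend on how the local Open MPI installation was built, and my_mpi_program is a placeholder:

{{code language="bash"}}
# Default run: Infiniband is selected automatically where available
mpirun -np 16 ./my_mpi_program

# Restrict Open MPI to TCP over Ethernet, e.g. to compare latencies
mpirun --mca pml ob1 --mca btl self,tcp -np 16 ./my_mpi_program
{{/code}}
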
21 21  = Mellanox HPCX =
22 22