From 0c2694bc6100f2667177347e376e2f863ec867c6 Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Tue, 13 Aug 2024 10:13:08 +0000 Subject: [PATCH] build based on 5722fc5 --- dev/.documenter-siteinfo.json | 2 +- dev/configuration/index.html | 2 +- dev/examples/01-hello/index.html | 2 +- dev/examples/02-broadcast/index.html | 8 ++-- dev/examples/03-reduce/index.html | 4 +- dev/examples/04-sendrecv/index.html | 2 +- dev/examples/05-job_schedule/index.html | 6 +-- dev/examples/06-scatterv/index.html | 2 +- dev/examples/07-rma_active/index.html | 2 +- dev/examples/08-rma_passive/index.html | 2 +- .../09-graph_communication/index.html | 8 ++-- dev/external/index.html | 2 +- dev/index.html | 2 +- dev/knownissues/index.html | 2 +- dev/reference/advanced/index.html | 14 +++---- dev/reference/api/index.html | 2 +- dev/reference/buffers/index.html | 2 +- dev/reference/collective/index.html | 26 ++++++------ dev/reference/comm/index.html | 2 +- dev/reference/environment/index.html | 4 +- dev/reference/group/index.html | 2 +- dev/reference/io/index.html | 2 +- dev/reference/library/index.html | 2 +- dev/reference/misc/index.html | 2 +- dev/reference/mpipreferences/index.html | 4 +- dev/reference/onesided/index.html | 2 +- dev/reference/pointtopoint/index.html | 42 +++++++++---------- dev/reference/topology/index.html | 10 ++--- dev/refindex/index.html | 2 +- dev/search_index.js | 2 +- dev/usage/index.html | 2 +- 31 files changed, 84 insertions(+), 84 deletions(-) diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index bd286396e..87c2e58cc 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-08-13T10:01:30","documenter_version":"1.5.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-08-13T10:13:04","documenter_version":"1.5.0"}} \ No newline at end of file diff --git a/dev/configuration/index.html b/dev/configuration/index.html index 220eb1313..633fca822 100644 --- a/dev/configuration/index.html +++ b/dev/configuration/index.html @@ -28,4 +28,4 @@ julia> MPIPreferences.use_system_binary() ~/MPI> rm test/Manifest.toml ~/MPI> julia --project -(MPI) pkg> test

Testing GPU-aware buffers

The test suite can target the CUDA-aware interface with CUDA.CuArray and the ROCm-aware interface with AMDGPU.ROCArray by passing the corresponding test_args keyword argument to Pkg.test.

Run Pkg.test with --backend=CUDA to test CUDA-aware MPI buffers

import Pkg; Pkg.test("MPI"; test_args=["--backend=CUDA"])

and with --backend=AMDGPU to test ROCm-aware MPI buffers

import Pkg; Pkg.test("MPI"; test_args=["--backend=AMDGPU"])
Note

The JULIA_MPI_TEST_ARRAYTYPE environment variable has no effect anymore.

Environment variables

The test suite can also be modified by the following variables:

Migration from MPI.jl v0.19 or earlier

In MPI.jl v0.19 and earlier, environment variables were used to configure which MPI library to use. These have been removed in v0.20 and no longer have any effect. The following subsections explain how the same effects can be achieved with v0.20 or later.

Note

Please refer to Notes to HPC cluster administrators if you want to migrate your MPI.jl preferences on a cluster with a centrally managed MPI.jl configuration.

JULIA_MPI_BINARY

Use MPIPreferences.use_system_binary to use a system-provided MPI binary as described here. To switch back, or to select a different JLL-provided MPI binary, use MPIPreferences.use_jll_binary as described here.
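
For example, a minimal sketch (run in the project whose preferences should be updated; the JLL name MPICH_jll is purely illustrative):

using MPIPreferences
MPIPreferences.use_system_binary()          # what JULIA_MPI_BINARY=system used to select
MPIPreferences.use_jll_binary("MPICH_jll")  # switch (back) to a JLL-provided binary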

JULIA_MPI_PATH

Removed without replacement.

JULIA_MPI_LIBRARY

Use MPIPreferences.use_system_binary with the keyword argument library_names to specify possible non-standard library names. Alternatively, you can specify the full path to the library.

JULIA_MPI_ABI

Use MPIPreferences.use_system_binary with keyword argument abi to specify which ABI to use. See MPIPreferences.abi for possible values.
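
A sketch combining the two keyword arguments (the library name, path and ABI value below are placeholders):

using MPIPreferences
MPIPreferences.use_system_binary(;
    library_names = ["libmpi_cray"],  # candidate names, or a full path such as "/opt/mpi/lib/libmpi.so"
    abi = "MPICH",                    # see MPIPreferences.abi for the accepted values
)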

JULIA_MPIEXEC

Use MPIPreferences.use_system_binary with keyword argument mpiexec to specify the MPI launcher executable.

JULIA_MPIEXEC_ARGS

Use MPIPreferences.use_system_binary with keyword argument mpiexec, and pass a Cmd object to set the MPI launcher executable and to include specific command line options.
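
For instance, a hedged sketch in which the launcher and its options are placeholders:

using MPIPreferences
MPIPreferences.use_system_binary(; mpiexec = `srun --mpi=pmi2`)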

JULIA_MPI_INCLUDE_PATH

Removed without replacement. Automatic generation of a constants file for unknown MPI ABIs is not supported anymore. See also #574.

JULIA_MPI_CFLAGS

Removed without replacement. Automatic generation of a constants file for unknown MPI ABIs is not supported anymore. See also #574.

JULIA_MPICC

Removed without replacement. Automatic generation of a constants file for unknown MPI ABIs is not supported anymore. See also #574.

+(MPI) pkg> test

Testing GPU-aware buffers

The test suite can target the CUDA-aware interface with CUDA.CuArray and the ROCm-aware interface with AMDGPU.ROCArray by passing the corresponding test_args keyword argument to Pkg.test.

Run Pkg.test with --backend=CUDA to test CUDA-aware MPI buffers

import Pkg; Pkg.test("MPI"; test_args=["--backend=CUDA"])

and with --backend=AMDGPU to test ROCm-aware MPI buffers

import Pkg; Pkg.test("MPI"; test_args=["--backend=AMDGPU"])
Note

The JULIA_MPI_TEST_ARRAYTYPE environment variable has no effect anymore.

Environment variables

The test suite can also be modified by the following variables:

Migration from MPI.jl v0.19 or earlier

In MPI.jl v0.19 and earlier, environment variables were used to configure which MPI library to use. These have been removed in v0.20 and no longer have any effect. The following subsections explain how the same effects can be achieved with v0.20 or later.

Note

Please refer to Notes to HPC cluster administrators if you want to migrate your MPI.jl preferences on a cluster with a centrally managed MPI.jl configuration.

JULIA_MPI_BINARY

Use MPIPreferences.use_system_binary to use a system-provided MPI binary as described here. To switch back or select a different JLL-provided MPI binary, use MPIPreferences.use_jll_binary as described here.

JULIA_MPI_PATH

Removed without replacement.

JULIA_MPI_LIBRARY

Use MPIPreferences.use_system_binary with the keyword argument library_names to specify possible non-standard library names. Alternatively, you can specify the full path to the library.

JULIA_MPI_ABI

Use MPIPreferences.use_system_binary with keyword argument abi to specify which ABI to use. See MPIPreferences.abi for possible values.

JULIA_MPIEXEC

Use MPIPreferences.use_system_binary with keyword argument mpiexec to specify the MPI launcher executable.

JULIA_MPIEXEC_ARGS

Use MPIPreferences.use_system_binary with keyword argument mpiexec, and pass a Cmd object to set the MPI launcher executable and to include specific command line options.

JULIA_MPI_INCLUDE_PATH

Removed without replacement. Automatic generation of a constants file for unknown MPI ABIs is not supported anymore. See also #574.

JULIA_MPI_CFLAGS

Removed without replacement. Automatic generation of a constants file for unknown MPI ABIs is not supported anymore. See also #574.

JULIA_MPICC

Removed without replacement. Automatic generation of a constants file for unknown MPI ABIs is not supported anymore. See also #574.

diff --git a/dev/examples/01-hello/index.html b/dev/examples/01-hello/index.html index 53bebc70c..6deeab339 100644 --- a/dev/examples/01-hello/index.html +++ b/dev/examples/01-hello/index.html @@ -9,4 +9,4 @@ Hello world, I am rank 0 of 4 Hello world, I am rank 1 of 4 Hello world, I am rank 2 of 4 -Hello world, I am rank 3 of 4 +Hello world, I am rank 3 of 4 diff --git a/dev/examples/02-broadcast/index.html b/dev/examples/02-broadcast/index.html index aab45ad1d..681ad3f2e 100644 --- a/dev/examples/02-broadcast/index.html +++ b/dev/examples/02-broadcast/index.html @@ -42,13 +42,13 @@ Running on 4 processes rank = 0, A = ComplexF64[1.0 + 2.0im, 2.0 + 4.0im, 3.0 + 6.0im, 4.0 + 8.0im, 5.0 + 10.0im] rank = 1, A = ComplexF64[1.0 + 2.0im, 2.0 + 4.0im, 3.0 + 6.0im, 4.0 + 8.0im, 5.0 + 10.0im] -rank = 2, A = ComplexF64[1.0 + 2.0im, 2.0 + 4.0im, 3.0 + 6.0im, 4.0 + 8.0im, 5.0 + 10.0im] rank = 3, A = ComplexF64[1.0 + 2.0im, 2.0 + 4.0im, 3.0 + 6.0im, 4.0 + 8.0im, 5.0 + 10.0im] +rank = 2, A = ComplexF64[1.0 + 2.0im, 2.0 + 4.0im, 3.0 + 6.0im, 4.0 + 8.0im, 5.0 + 10.0im] rank = 0, B = Dict("foo" => "bar") rank = 1, B = Dict("foo" => "bar") -rank = 2, B = Dict("foo" => "bar") rank = 3, B = Dict("foo" => "bar") +rank = 2, B = Dict("foo" => "bar") rank = 0, f(3) = 14 -rank = 3, f(3) = 14 +rank = 2, f(3) = 14 rank = 1, f(3) = 14 -rank = 2, f(3) = 14 +rank = 3, f(3) = 14 diff --git a/dev/examples/03-reduce/index.html b/dev/examples/03-reduce/index.html index 897c7a74e..61892041e 100644 --- a/dev/examples/03-reduce/index.html +++ b/dev/examples/03-reduce/index.html @@ -50,5 +50,5 @@ col_var = map(summ -> summ.var, col_summ) @show col_var end
> mpiexecjl -n 4 julia examples/03-reduce.jl
-summ.var = 22.73679457945
-col_var = [0.8792365504728255 12.210218818926581 54.41456682774361]
+summ.var = 18.551614170296823 +col_var = [1.0190455189783263 9.001082421094319 45.60033633641226] diff --git a/dev/examples/04-sendrecv/index.html b/dev/examples/04-sendrecv/index.html index bd4cbe6d6..3f619b1c5 100644 --- a/dev/examples/04-sendrecv/index.html +++ b/dev/examples/04-sendrecv/index.html @@ -34,4 +34,4 @@ 0: Received 3 -> 0 = [3.0, 3.0, 3.0, 3.0] 1: Received 0 -> 1 = [0.0, 0.0, 0.0, 0.0] 2: Received 1 -> 2 = [1.0, 1.0, 1.0, 1.0] -3: Received 2 -> 3 = [2.0, 2.0, 2.0, 2.0] +3: Received 2 -> 3 = [2.0, 2.0, 2.0, 2.0] diff --git a/dev/examples/05-job_schedule/index.html b/dev/examples/05-job_schedule/index.html index 8727a7c68..2e4f02f61 100644 --- a/dev/examples/05-job_schedule/index.html +++ b/dev/examples/05-job_schedule/index.html @@ -161,14 +161,14 @@ Root: Sent number 10 to Worker 1 Worker 1: Received number 10 from root Root: Received number 110 from Worker 1 -Worker 3: Received number 3 from root -Root: Received number 103 from Worker 3 Worker 2: Received number 2 from root Root: Received number 102 from Worker 2 +Worker 3: Received number 3 from root +Root: Received number 103 from Worker 3 Root: Finish Worker 1 Worker 1: Finish Root: Finish Worker 2 Worker 2: Finish Root: Finish Worker 3 Worker 3: Finish -Root: New data = [101, 104, 105, 106, 107, 108, 109, 110, 103, 102] +Root: New data = [101, 104, 105, 106, 107, 108, 109, 110, 102, 103] diff --git a/dev/examples/06-scatterv/index.html b/dev/examples/06-scatterv/index.html index c3055b7cd..76e9a0eed 100644 --- a/dev/examples/06-scatterv/index.html +++ b/dev/examples/06-scatterv/index.html @@ -98,4 +98,4 @@ Final matrix ================ -output = [1.0 1.0 1.0 1.0 1.0 1.0 1.0; 2.0 2.0 2.0 2.0 2.0 2.0 2.0; 3.0 3.0 3.0 3.0 3.0 3.0 3.0; 4.0 4.0 4.0 4.0 4.0 4.0 4.0] +output = [1.0 1.0 1.0 1.0 1.0 1.0 1.0; 2.0 2.0 2.0 2.0 2.0 2.0 2.0; 3.0 3.0 3.0 3.0 3.0 3.0 3.0; 4.0 4.0 4.0 4.0 4.0 4.0 4.0] diff --git a/dev/examples/07-rma_active/index.html b/dev/examples/07-rma_active/index.html index 54748e24d..194d5852c 100644 --- a/dev/examples/07-rma_active/index.html +++ b/dev/examples/07-rma_active/index.html @@ -68,4 +68,4 @@ After Get, Rank 2: all_ranks = [0, 1, 2, 3] After Get, Rank 3: -all_ranks = [0, 1, 2, 3] +all_ranks = [0, 1, 2, 3] diff --git a/dev/examples/08-rma_passive/index.html b/dev/examples/08-rma_passive/index.html index 0a423131a..f5498b2e5 100644 --- a/dev/examples/08-rma_passive/index.html +++ b/dev/examples/08-rma_passive/index.html @@ -39,4 +39,4 @@ # free window MPI.free(win)
> mpiexecjl -n 4 julia examples/08-rma_passive.jl
 After Put with lock / unlock, window content on rank 0:
-all_ranks = [0, 1, 2, 3]
+all_ranks = [0, 1, 2, 3] diff --git a/dev/examples/09-graph_communication/index.html b/dev/examples/09-graph_communication/index.html index f9e883606..65e68baf6 100644 --- a/dev/examples/09-graph_communication/index.html +++ b/dev/examples/09-graph_communication/index.html @@ -92,12 +92,12 @@ rank = 0: Int32[1, 2, 3] rank = 1: Int32[0, 2, 3] rank = 2: Int32[3] -rank = 3: Int32[2, 0] +rank = 3: Int32[0, 2] +rank = 0: Int32[1, 2, 3] rank = 1: Int32[0, 2, 3] rank = 2: Int32[3] -rank = 3: Int32[2, 0] -rank = 0: Int32[1, 2, 3] +rank = 3: Int32[0, 2] rank = 0: Int32[1, 2, 3] rank = 1: Int32[0, 0, 2, 2, 3, 3] rank = 2: Int32[3, 3, 3] -rank = 3: Int32[2, 2, 2, 2, 0, 0, 0, 0] +rank = 3: Int32[0, 0, 0, 0, 2, 2, 2, 2] diff --git a/dev/external/index.html b/dev/external/index.html index 19d51f4a1..34cb9aec8 100644 --- a/dev/external/index.html +++ b/dev/external/index.html @@ -19,4 +19,4 @@ finalizer(obj) do obj # call clean up function end -REFS[obj] = nothing

Externally initialized MPI

When working with non-Julia libraries or tools, MPI_Init may be invoked in another part of the execution flow and not via MPI.jl's MPI.Init function. This leaves some package-internal settings uninitialized. In this case, you need to call MPI.run_init_hooks() manually to fully initialize MPI.jl. You may also want to consider calling MPI.set_default_error_handler_return().
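
A minimal sketch of what this could look like, assuming MPI has already been initialized by the external code:

using MPI
if MPI.Initialized()
    MPI.run_init_hooks()                    # finish MPI.jl's internal initialization
    MPI.set_default_error_handler_return()  # optional: report MPI errors as Julia exceptions
end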

+REFS[obj] = nothing

Externally initialized MPI

When working with non-Julia libraries or tools, MPI_Init may be invoked in another part of the execution flow and not via MPI.jl's MPI.Init function. This leaves some package-internal settings uninitialized. In this case, you need to call MPI.run_init_hooks() manually to fully initialize MPI.jl. You may also want to consider calling MPI.set_default_error_handler_return().

diff --git a/dev/index.html b/dev/index.html index 2d64f9fdf..9a9902659 100644 --- a/dev/index.html +++ b/dev/index.html @@ -1,2 +1,2 @@ -MPI.jl · MPI.jl

MPI.jl

This is a basic Julia wrapper for the portable message passing system Message Passing Interface (MPI). Inspiration is taken from mpi4py, although we generally follow the C and not the C++ MPI API. (The C++ MPI API is deprecated.)

If you use MPI.jl in your work, please cite the following paper:

Simon Byrne, Lucas C. Wilcox, and Valentin Churavy (2021) "MPI.jl: Julia bindings for the Message Passing Interface". JuliaCon Proceedings, 1(1), 68, doi: 10.21105/jcon.00068

+MPI.jl · MPI.jl

MPI.jl

This is a basic Julia wrapper for the portable message passing system Message Passing Interface (MPI). Inspiration is taken from mpi4py, although we generally follow the C and not the C++ MPI API. (The C++ MPI API is deprecated.)

If you use MPI.jl in your work, please cite the following paper:

Simon Byrne, Lucas C. Wilcox, and Valentin Churavy (2021) "MPI.jl: Julia bindings for the Message Passing Interface". JuliaCon Proceedings, 1(1), 68, doi: 10.21105/jcon.00068

diff --git a/dev/knownissues/index.html b/dev/knownissues/index.html index e279cbad9..f5a72b4c0 100644 --- a/dev/knownissues/index.html +++ b/dev/knownissues/index.html @@ -29,4 +29,4 @@ GetSockInterfaceAddr(370)..........: gethostbyname failed, bogon (errno 0)

A workaround is provided in the documentation of the MOOSE framework and we report it here for reference:

For further information see

UCX

UCX is a communication framework used by several MPI implementations.

Memory cache

When used with CUDA, UCX intercepts cudaMalloc so it can determine whether the pointer passed to MPI is on the host (main memory) or the device (GPU). Unfortunately, there are several known issues with how this works with Julia:

By default, MPI.jl disables this by setting

ENV["UCX_MEMTYPE_CACHE"] = "no"

at __init__, which may result in reduced performance, especially for smaller messages.

Multi-threading and signal handling

When using Julia multi-threading, the Julia garbage collector internally uses SIGSEGV to synchronize threads.

By default, UCX will error if this signal is raised (#337), resulting in a message such as:

Caught signal 11 (Segmentation fault: invalid permissions for mapped object at address 0xXXXXXXXX)

This signal interception can be controlled by setting the environment variable UCX_ERROR_SIGNALS: if not already defined, MPI.jl will set it as:

ENV["UCX_ERROR_SIGNALS"] = "SIGILL,SIGBUS,SIGFPE"

at __init__. If set externally, it should be modified to exclude SIGSEGV from the list. Note that in some cases, even if UCX_ERROR_SIGNALS is not set explicitly, UCX might still treat SIGSEGV as an error signal. In this case, it might be necessary to set UCX_ERROR_SIGNALS explicitly with

export UCX_ERROR_SIGNALS="SIGILL,SIGBUS,SIGFPE"

before calling mpiexec.

CUDA-aware MPI

Memory pool

Using CUDA-aware MPI on multi-GPU nodes with recent CUDA.jl may trigger (see here)

The call to cuIpcGetMemHandle failed. This means the GPU RDMA protocol
 cannot be used.
-  cuIpcGetMemHandle return value:   1

in the MPI layer, or fail on a segmentation fault (see here) with

[1642930332.032032] [gcn19:4087661:0] gdr_copy_md.c:122 UCX ERROR gdr_pin_buffer failed. length :65536 ret:22

This is due to the MPI implementation using legacy cuIpc* APIs, which are incompatible with the stream-ordered allocator that is now the default in CUDA.jl; see UCX issue #7110.

To circumvent this, one has to ensure that the CUDA memory pool is set to none:

export JULIA_CUDA_MEMORY_POOL=none

More about CUDA.jl memory environment-variables.

Hints to ensure CUDA-aware MPI is functional

Make sure to:

After that, it may be preferable to run the Julia MPI script (as suggested here) by launching it from a shell script (as suggested here).

ROCm-aware MPI

Hints to ensure ROCm-aware MPI is functional

Make sure to:

After that, this script can be used to verify whether ROCm-aware MPI is functional (adapted from the CUDA-aware version here). It may be preferable to run the Julia ROCm-aware MPI script by launching it from a shell script (as suggested here).

Custom reduction operators

It is not possible to use custom reduction operators with 32-bit Microsoft MPI on Windows, or on ARM CPUs with any operating system. These issues are due to how custom operators are currently implemented in MPI.jl, that is, by using closure cfunctions. However, these have two limitations:

+ cuIpcGetMemHandle return value: 1

in the MPI layer, or fail on a segmentation fault (see here) with

[1642930332.032032] [gcn19:4087661:0] gdr_copy_md.c:122 UCX ERROR gdr_pin_buffer failed. length :65536 ret:22

This is due to the MPI implementation using legacy cuIpc* APIs, which are incompatible with the stream-ordered allocator that is now the default in CUDA.jl; see UCX issue #7110.

To circumvent this, one has to ensure that the CUDA memory pool is set to none:

export JULIA_CUDA_MEMORY_POOL=none

More about CUDA.jl memory environment-variables.

Hints to ensure CUDA-aware MPI is functional

Make sure to:

After that, it may be preferable to run the Julia MPI script (as suggested here) by launching it from a shell script (as suggested here).

ROCm-aware MPI

Hints to ensure ROCm-aware MPI is functional

Make sure to:

After that, this script can be used to verify whether ROCm-aware MPI is functional (adapted from the CUDA-aware version here). It may be preferable to run the Julia ROCm-aware MPI script by launching it from a shell script (as suggested here).

Custom reduction operators

It is not possible to use custom reduction operators with 32-bit Microsoft MPI on Windows, or on ARM CPUs with any operating system. These issues are due to how custom operators are currently implemented in MPI.jl, that is, by using closure cfunctions. However, these have two limitations:

diff --git a/dev/reference/advanced/index.html b/dev/reference/advanced/index.html index 389dd9980..dec95b9f9 100644 --- a/dev/reference/advanced/index.html +++ b/dev/reference/advanced/index.html @@ -1,21 +1,21 @@ -Advanced · MPI.jl

Advanced

Object handling

MPI.freeFunction
MPI.free(obj)

Free the MPI object handle obj. This is typically used as the finalizer, and so need not be called directly unless otherwise noted.

source

Datatype objects

MPI.DatatypeType
Datatype

A Datatype represents the layout of the data in memory.

Usage

Datatype(T)

Either return the predefined Datatype corresponding to T, or create a new Datatype for the Julia type T, calling Types.commit! so that it can be used for communication operations.

Note that this can only be called on types for which isbitstype(T) is true.

source
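
For example, a sketch for a user-defined isbits struct (the Particle type is illustrative only):

using MPI
struct Particle
    x::Float64
    y::Float64
    id::Int32
end
particle_t = MPI.Datatype(Particle)  # builds and commits a derived datatype for Particle
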
MPI.to_typeFunction
to_type(datatype::Datatype)

Return the Julia type corresponding to the MPI Datatype datatype, or nothing if it doesn't correspond directly.

source
MPI.Types.extentFunction
lb, extent = MPI.Types.extent(dt::MPI.Datatype)

Gets the lower bound lb and the extent extent in bytes.

External links

source
MPI.Types.create_vectorFunction
MPI.Types.create_vector(count::Integer, blocklength::Integer, stride::Integer, oldtype::MPI.Datatype)

Create a derived Datatype that replicates oldtype into locations that consist of equally spaced blocks.

Note that MPI.Types.commit! must be used before the datatype can be used for communication.

Example

datatype = MPI.Types.create_vector(3, 2, 5, MPI.Datatype(Int64))
+Advanced · MPI.jl

Advanced

Object handling

MPI.freeFunction
MPI.free(obj)

Free the MPI object handle obj. This is typically used as the finalizer, and so need not be called directly unless otherwise noted.

source

Datatype objects

MPI.DatatypeType
Datatype

A Datatype represents the layout of the data in memory.

Usage

Datatype(T)

Either return the predefined Datatype corresponding to T, or create a new Datatype for the Julia type T, calling Types.commit! so that it can be used for communication operations.

Note that this can only be called on types for which isbitstype(T) is true.

source
MPI.to_typeFunction
to_type(datatype::Datatype)

Return the Julia type corresponding to the MPI Datatype datatype, or nothing if it doesn't correspond directly.

source
MPI.Types.extentFunction
lb, extent = MPI.Types.extent(dt::MPI.Datatype)

Gets the lower bound lb and the extent extent in bytes.

External links

source
MPI.Types.create_vectorFunction
MPI.Types.create_vector(count::Integer, blocklength::Integer, stride::Integer, oldtype::MPI.Datatype)

Create a derived Datatype that replicates oldtype into locations that consist of equally spaced blocks.

Note that MPI.Types.commit! must be used before the datatype can be used for communication.

Example

datatype = MPI.Types.create_vector(3, 2, 5, MPI.Datatype(Int64))
 MPI.Types.commit!(datatype)

will create a datatype with the following layout

|<----->|  block length
 
 +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
 | X | X |   |   |   | X | X |   |   |   | X | X |   |   |   |
 +---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
 
-|<---- stride ----->|

where each segment represents an Int64.

(image by Jonathan Dursi, https://stackoverflow.com/a/10788351/392585)

External links

source
MPI.Types.create_hvectorFunction
MPI.Types.create_hvector(count::Integer, blocklength::Integer, stride::Integer, oldtype::MPI.Datatype)

Create a derived Datatype that replicates oldtype into locations that consist of equally spaced (bytes) blocks.

Note that MPI.Types.commit! must be used before the datatype can be used for communication.

Example

datatype = MPI.Types.create_hvector(3, 2, 5, MPI.Datatype(Int64))
-MPI.Types.commit!(datatype)

External links

source
MPI.Types.create_subarrayFunction
MPI.Types.create_subarray(sizes, subsizes, offset, oldtype::Datatype;
-                          rowmajor=false)

Creates a derived Datatype describing an N-dimensional subarray of size subsizes of an N-dimensional array of size sizes and element type oldtype, with the first element offset by offset (i.e. the 0-based index of the first element).

Column-major indexing (used by Julia and Fortran) is assumed; use the keyword rowmajor=true to specify row-major layout (used by C and numpy).

Note that MPI.Types.commit! must be used before the datatype can be used for communication.

External links

source
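
As a rough sketch (assuming the offset is given per dimension, 0-based), the interior 2×2 block of a 4×4 Float64 array, i.e. A[2:3, 2:3] in Julia indexing, could be described as:

subarray_t = MPI.Types.create_subarray((4, 4), (2, 2), (1, 1), MPI.Datatype(Float64))
MPI.Types.commit!(subarray_t)
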
MPI.Types.create_resizedFunction
MPI.Types.create_resized(oldtype::Datatype, lb::Integer, extent::Integer)

Creates a new Datatype that is identical to oldtype, except that the lower bound of this new datatype is set to be lb, and its upper bound is set to be lb + extent.

Note that MPI.Types.commit! must be used before the datatype can be used for communication.

See also

External links

source

Operator objects

Info objects

MPI.InfoType
Info <: AbstractDict{Symbol,String}

MPI.Info objects store key-value pairs, and are typically used for passing optional arguments to MPI functions.

Usage

These will typically be hidden from user-facing APIs by splatting keywords, e.g.

function f(args...; kwargs...)
+|<---- stride ----->|

where each segment represents an Int64.

(image by Jonathan Dursi, https://stackoverflow.com/a/10788351/392585)

External links

source
MPI.Types.create_hvectorFunction
MPI.Types.create_hvector(count::Integer, blocklength::Integer, stride::Integer, oldtype::MPI.Datatype)

Create a derived Datatype that replicates oldtype into locations that consist of equally spaced (bytes) blocks.

Note that MPI.Types.commit! must be used before the datatype can be used for communication.

Example

datatype = MPI.Types.create_hvector(3, 2, 5, MPI.Datatype(Int64))
+MPI.Types.commit!(datatype)

External links

source
MPI.Types.create_subarrayFunction
MPI.Types.create_subarray(sizes, subsizes, offset, oldtype::Datatype;
+                          rowmajor=false)

Creates a derived Datatype describing an N-dimensional subarray of size subsizes of an N-dimensional array of size sizes and element type oldtype, with the first element offset by offset (i.e. the 0-based index of the first element).

Column-major indexing (used by Julia and Fortran) is assumed; use the keyword rowmajor=true to specify row-major layout (used by C and numpy).

Note that MPI.Types.commit! must be used before the datatype can be used for communication.

External links

source
MPI.Types.create_resizedFunction
MPI.Types.create_resized(oldtype::Datatype, lb::Integer, extent::Integer)

Creates a new Datatype that is identical to oldtype, except that the lower bound of this new datatype is set to be lb, and its upper bound is set to be lb + extent.

Note that MPI.Types.commit! must be used before the datatype can be used for communication.

See also

External links

source

Operator objects

Info objects

MPI.InfoType
Info <: AbstractDict{Symbol,String}

MPI.Info objects store key-value pairs, and are typically used for passing optional arguments to MPI functions.

Usage

These will typically be hidden from user-facing APIs by splatting keywords, e.g.

function f(args...; kwargs...)
     info = Info(kwargs...)
     # pass `info` object to `ccall`
 end

For manual usage, Info objects act like Julia Dict objects:

info = Info(init=true) # keyword argument is required
 info[key] = value
 x = info[key]
-delete!(info, key)

If init=false is used in the constructor (the default), a "null" Info object will be returned: no keys can be added to such an object.

source
MPI.infovalFunction
infoval(x)

Convert Julia object x to a string representation for storing in an Info object.

The MPI specification allows passing strings, Boolean values, integers, and lists.

source

Error handler objects

MPI.ErrhandlerType
MPI.Errhandler

An MPI error handler object. Currently only two are supported:

  • ERRORS_ARE_FATAL (default): program will immediately abort
  • ERRORS_RETURN: program will throw an MPIError.
source
MPI.get_errorhandlerFunction
MPI.get_errorhandler(comm::MPI.Comm)
+delete!(info, key)

If init=false is used in the constructor (the default), a "null" Info object will be returned: no keys can be added to such an object.

source
MPI.infovalFunction
infoval(x)

Convert Julia object x to a string representation for storing in an Info object.

The MPI specification allows passing strings, Boolean values, integers, and lists.

source

Error handler objects

MPI.ErrhandlerType
MPI.Errhandler

An MPI error handler object. Currently only two are supported:

  • ERRORS_ARE_FATAL (default): program will immediately abort
  • ERRORS_RETURN: program will throw an MPIError.
source
MPI.set_errorhandler!Function
MPI.set_errorhandler!(comm::MPI.Comm, errh::Errhandler)
 MPI.set_errorhandler!(win::MPI.Win, errh::Errhandler)
-MPI.set_errorhandler!(file::MPI.File.FileHandle, errh::Errhandler)

Set the Errhandler for the relevant MPI object.

See also

source
MPI.set_default_error_handler_returnFunction
MPI.set_default_error_handler_return()

Set the error handler for MPI_COMM_SELF and MPI_COMM_WORLD to MPI_ERRORS_RETURN. This will cause certain MPI errors to appear as Julia exceptions.

This function is executed automatically by MPI.Init() but may be invoked manually if MPI has been initialized externally by a direct call to MPI_Init(). It is safe to call this function multiple times.

source

Miscellaneous

MPI.API.@const_refMacro
@const_ref name T expr

Defines a constant binding

const name = Ref{T}()

and adds a hook to execute

name[] = expr

at module initialization time.

source
+MPI.set_errorhandler!(file::MPI.File.FileHandle, errh::Errhandler)

Set the Errhandler for the relevant MPI object.

See also

source
MPI.set_default_error_handler_returnFunction
MPI.set_default_error_handler_return()

Set the error handler for MPI_COMM_SELF and MPI_COMM_WORLD to MPI_ERRORS_RETURN. This will cause certain MPI errors to appear as Julia exceptions.

This function is executed automatically by MPI.Init() but may be invoked manually if MPI has been initialized externally by a direct call to MPI_Init(). It is safe to call this function multiple times.

source

Miscellaneous

MPI.API.@const_refMacro
@const_ref name T expr

Defines a constant binding

const name = Ref{T}()

and adds a hook to execute

name[] = expr

at module initialization time.

source
diff --git a/dev/reference/api/index.html b/dev/reference/api/index.html index 32cb8d38a..bc9dff200 100644 --- a/dev/reference/api/index.html +++ b/dev/reference/api/index.html @@ -1,2 +1,2 @@ -Low-level API · MPI.jl

Low-level API

The MPI.API submodule provides a low-level interface which closely matches the MPI C API. While these functions are not intended for general usage, they are useful for calling MPI routines not yet available in MPI.jl's main interface, and they form the basis for the high-level wrappers. The methods suffixed with _c take MPI_Count typed arguments (vs. int for the standard ones). The size of MPI_Count depends on the implementation, but it usually allows 64-bit integer offsets.
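
As an illustrative sketch (assuming the corresponding symbol is wrapped in MPI.API), a low-level routine takes C-style output arguments and returns the C error code:

using MPI
major = Ref{Cint}(0); minor = Ref{Cint}(0)
MPI.API.MPI_Get_version(major, minor)
println("MPI version $(major[]).$(minor[])")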

MPI.API.MPI_AccumulateMethod
MPI_Accumulate(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, op, win)
source
MPI.API.MPI_Accumulate_cMethod
MPI_Accumulate_c(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, op, win)
  • MPI_Accumulate_c man page: MPICH
source
MPI.API.MPI_Alltoallv_cMethod
MPI_Alltoallv_c(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm)
  • MPI_Alltoallv_c man page: MPICH
source
MPI.API.MPI_Alltoallv_init_cMethod
MPI_Alltoallv_init_c(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, info, request)
  • MPI_Alltoallv_init_c man page: MPICH
source
MPI.API.MPI_Alltoallw_cMethod
MPI_Alltoallw_c(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm)
  • MPI_Alltoallw_c man page: MPICH
source
MPI.API.MPI_Alltoallw_init_cMethod
MPI_Alltoallw_init_c(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, info, request)
  • MPI_Alltoallw_init_c man page: MPICH
source
MPI.API.MPI_Gatherv_cMethod
MPI_Gatherv_c(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm)
  • MPI_Gatherv_c man page: MPICH
source
MPI.API.MPI_Gatherv_init_cMethod
MPI_Gatherv_init_c(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm, info, request)
  • MPI_Gatherv_init_c man page: MPICH
source
MPI.API.MPI_GetMethod
MPI_Get(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win)
source
MPI.API.MPI_Get_accumulateMethod
MPI_Get_accumulate(origin_addr, origin_count, origin_datatype, result_addr, result_count, result_datatype, target_rank, target_disp, target_count, target_datatype, op, win)
source
MPI.API.MPI_Get_accumulate_cMethod
MPI_Get_accumulate_c(origin_addr, origin_count, origin_datatype, result_addr, result_count, result_datatype, target_rank, target_disp, target_count, target_datatype, op, win)
  • MPI_Get_accumulate_c man page: MPICH
source
MPI.API.MPI_Get_cMethod
MPI_Get_c(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win)
  • MPI_Get_c man page: MPICH
source
MPI.API.MPI_Ialltoallv_cMethod
MPI_Ialltoallv_c(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, request)
  • MPI_Ialltoallv_c man page: MPICH
source
MPI.API.MPI_Ialltoallw_cMethod
MPI_Ialltoallw_c(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, request)
  • MPI_Ialltoallw_c man page: MPICH
source
MPI.API.MPI_Igather_cMethod
MPI_Igather_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, request)
  • MPI_Igather_c man page: MPICH
source
MPI.API.MPI_Igatherv_cMethod
MPI_Igatherv_c(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm, request)
  • MPI_Igatherv_c man page: MPICH
source
MPI.API.MPI_Iscatterv_cMethod
MPI_Iscatterv_c(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm, request)
  • MPI_Iscatterv_c man page: MPICH
source
MPI.API.MPI_IsendrecvMethod
MPI_Isendrecv(sendbuf, sendcount, sendtype, dest, sendtag, recvbuf, recvcount, recvtype, source, recvtag, comm, request)
source
MPI.API.MPI_Isendrecv_cMethod
MPI_Isendrecv_c(sendbuf, sendcount, sendtype, dest, sendtag, recvbuf, recvcount, recvtype, source, recvtag, comm, request)
  • MPI_Isendrecv_c man page: MPICH
source
MPI.API.MPI_PutMethod
MPI_Put(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win)
source
MPI.API.MPI_Put_cMethod
MPI_Put_c(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win)
  • MPI_Put_c man page: MPICH
source
MPI.API.MPI_RaccumulateMethod
MPI_Raccumulate(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, op, win, request)
source
MPI.API.MPI_Raccumulate_cMethod
MPI_Raccumulate_c(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, op, win, request)
  • MPI_Raccumulate_c man page: MPICH
source
MPI.API.MPI_RgetMethod
MPI_Rget(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win, request)
source
MPI.API.MPI_Rget_accumulateMethod
MPI_Rget_accumulate(origin_addr, origin_count, origin_datatype, result_addr, result_count, result_datatype, target_rank, target_disp, target_count, target_datatype, op, win, request)
source
MPI.API.MPI_Rget_accumulate_cMethod
MPI_Rget_accumulate_c(origin_addr, origin_count, origin_datatype, result_addr, result_count, result_datatype, target_rank, target_disp, target_count, target_datatype, op, win, request)
  • MPI_Rget_accumulate_c man page: MPICH
source
MPI.API.MPI_Rget_cMethod
MPI_Rget_c(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win, request)
  • MPI_Rget_c man page: MPICH
source
MPI.API.MPI_RputMethod
MPI_Rput(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win, request)
source
MPI.API.MPI_Rput_cMethod
MPI_Rput_c(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win, request)
  • MPI_Rput_c man page: MPICH
source
MPI.API.MPI_Scatterv_init_cMethod
MPI_Scatterv_init_c(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm, info, request)
  • MPI_Scatterv_init_c man page: MPICH
source
MPI.API.MPI_SendrecvMethod
MPI_Sendrecv(sendbuf, sendcount, sendtype, dest, sendtag, recvbuf, recvcount, recvtype, source, recvtag, comm, status)
source
MPI.API.MPI_Sendrecv_cMethod
MPI_Sendrecv_c(sendbuf, sendcount, sendtype, dest, sendtag, recvbuf, recvcount, recvtype, source, recvtag, comm, status)
  • MPI_Sendrecv_c man page: MPICH
source
MPI.API.MPI_Type_create_darray_cMethod
MPI_Type_create_darray_c(size, rank, ndims, array_of_gsizes, array_of_distribs, array_of_dargs, array_of_psizes, order, oldtype, newtype)
  • MPI_Type_create_darray_c man page: MPICH
source
MPI.API.MPI_Type_get_contents_cMethod
MPI_Type_get_contents_c(datatype, max_integers, max_addresses, max_large_counts, max_datatypes, array_of_integers, array_of_addresses, array_of_large_counts, array_of_datatypes)
  • MPI_Type_get_contents_c man page: MPICH
source
+Low-level API · MPI.jl

Low-level API

The MPI.API submodule provides a low-level interface which closely matches the MPI C API. While these functions are not intended for general usage, they are useful for calling MPI routines not yet available in MPI.jl's main interface, and they form the basis for the high-level wrappers. The methods suffixed with _c take MPI_Count typed arguments (vs. int for the standard ones). The size of MPI_Count depends on the implementation, but it usually allows 64-bit integer offsets.

MPI.API.MPI_AccumulateMethod
MPI_Accumulate(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, op, win)
source
MPI.API.MPI_Accumulate_cMethod
MPI_Accumulate_c(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, op, win)
  • MPI_Accumulate_c man page: MPICH
source
MPI.API.MPI_Alltoallv_cMethod
MPI_Alltoallv_c(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm)
  • MPI_Alltoallv_c man page: MPICH
source
MPI.API.MPI_Alltoallv_init_cMethod
MPI_Alltoallv_init_c(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, info, request)
  • MPI_Alltoallv_init_c man page: MPICH
source
MPI.API.MPI_Alltoallw_cMethod
MPI_Alltoallw_c(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm)
  • MPI_Alltoallw_c man page: MPICH
source
MPI.API.MPI_Alltoallw_init_cMethod
MPI_Alltoallw_init_c(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, info, request)
  • MPI_Alltoallw_init_c man page: MPICH
source
MPI.API.MPI_Gatherv_cMethod
MPI_Gatherv_c(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm)
  • MPI_Gatherv_c man page: MPICH
source
MPI.API.MPI_Gatherv_init_cMethod
MPI_Gatherv_init_c(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm, info, request)
  • MPI_Gatherv_init_c man page: MPICH
source
MPI.API.MPI_GetMethod
MPI_Get(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win)
source
MPI.API.MPI_Get_accumulateMethod
MPI_Get_accumulate(origin_addr, origin_count, origin_datatype, result_addr, result_count, result_datatype, target_rank, target_disp, target_count, target_datatype, op, win)
source
MPI.API.MPI_Get_accumulate_cMethod
MPI_Get_accumulate_c(origin_addr, origin_count, origin_datatype, result_addr, result_count, result_datatype, target_rank, target_disp, target_count, target_datatype, op, win)
  • MPI_Get_accumulate_c man page: MPICH
source
MPI.API.MPI_Get_cMethod
MPI_Get_c(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win)
  • MPI_Get_c man page: MPICH
source
MPI.API.MPI_Ialltoallv_cMethod
MPI_Ialltoallv_c(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, request)
  • MPI_Ialltoallv_c man page: MPICH
source
MPI.API.MPI_Ialltoallw_cMethod
MPI_Ialltoallw_c(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, request)
  • MPI_Ialltoallw_c man page: MPICH
source
MPI.API.MPI_Igather_cMethod
MPI_Igather_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, request)
  • MPI_Igather_c man page: MPICH
source
MPI.API.MPI_Igatherv_cMethod
MPI_Igatherv_c(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm, request)
  • MPI_Igatherv_c man page: MPICH
source
MPI.API.MPI_Iscatterv_cMethod
MPI_Iscatterv_c(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm, request)
  • MPI_Iscatterv_c man page: MPICH
source
MPI.API.MPI_IsendrecvMethod
MPI_Isendrecv(sendbuf, sendcount, sendtype, dest, sendtag, recvbuf, recvcount, recvtype, source, recvtag, comm, request)
source
MPI.API.MPI_Isendrecv_cMethod
MPI_Isendrecv_c(sendbuf, sendcount, sendtype, dest, sendtag, recvbuf, recvcount, recvtype, source, recvtag, comm, request)
  • MPI_Isendrecv_c man page: MPICH
source
MPI.API.MPI_PutMethod
MPI_Put(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win)
source
MPI.API.MPI_Put_cMethod
MPI_Put_c(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win)
  • MPI_Put_c man page: MPICH
source
MPI.API.MPI_RaccumulateMethod
MPI_Raccumulate(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, op, win, request)
source
MPI.API.MPI_Raccumulate_cMethod
MPI_Raccumulate_c(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, op, win, request)
  • MPI_Raccumulate_c man page: MPICH
source
MPI.API.MPI_RgetMethod
MPI_Rget(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win, request)
source
MPI.API.MPI_Rget_accumulateMethod
MPI_Rget_accumulate(origin_addr, origin_count, origin_datatype, result_addr, result_count, result_datatype, target_rank, target_disp, target_count, target_datatype, op, win, request)
source
MPI.API.MPI_Rget_accumulate_cMethod
MPI_Rget_accumulate_c(origin_addr, origin_count, origin_datatype, result_addr, result_count, result_datatype, target_rank, target_disp, target_count, target_datatype, op, win, request)
  • MPI_Rget_accumulate_c man page: MPICH
source
MPI.API.MPI_Rget_cMethod
MPI_Rget_c(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win, request)
  • MPI_Rget_c man page: MPICH
source
MPI.API.MPI_RputMethod
MPI_Rput(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win, request)
source
MPI.API.MPI_Rput_cMethod
MPI_Rput_c(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win, request)
  • MPI_Rput_c man page: MPICH
source
MPI.API.MPI_Scatterv_init_cMethod
MPI_Scatterv_init_c(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm, info, request)
  • MPI_Scatterv_init_c man page: MPICH
source
MPI.API.MPI_SendrecvMethod
MPI_Sendrecv(sendbuf, sendcount, sendtype, dest, sendtag, recvbuf, recvcount, recvtype, source, recvtag, comm, status)
source
MPI.API.MPI_Sendrecv_cMethod
MPI_Sendrecv_c(sendbuf, sendcount, sendtype, dest, sendtag, recvbuf, recvcount, recvtype, source, recvtag, comm, status)
  • MPI_Sendrecv_c man page: MPICH
source
MPI.API.MPI_Type_create_darray_cMethod
MPI_Type_create_darray_c(size, rank, ndims, array_of_gsizes, array_of_distribs, array_of_dargs, array_of_psizes, order, oldtype, newtype)
  • MPI_Type_create_darray_c man page: MPICH
source
MPI.API.MPI_Type_get_contents_cMethod
MPI_Type_get_contents_c(datatype, max_integers, max_addresses, max_large_counts, max_datatypes, array_of_integers, array_of_addresses, array_of_large_counts, array_of_datatypes)
  • MPI_Type_get_contents_c man page: MPICH
source
diff --git a/dev/reference/buffers/index.html b/dev/reference/buffers/index.html index 6fef6f6d6..d2042c57b 100644 --- a/dev/reference/buffers/index.html +++ b/dev/reference/buffers/index.html @@ -1,2 +1,2 @@ -Buffers · MPI.jl

Buffers

Buffers are used for sending and receiving data. MPI.jl provides the following buffer types:

MPI.IN_PLACEConstant
MPI.IN_PLACE

A sentinel value that can be passed as a buffer argument for certain collective operations to use the same buffer for send and receive operations.

source
MPI.BufferType
MPI.Buffer

An MPI buffer for communication with a single rank. It is used for point-to-point and one-sided operations, as well as some collective operations. Operations will implicitly construct a Buffer when required via the generic constructor, but it can be advantageous to construct Buffers manually when the implicit construction incurs additional overhead, for example when using a non-predefined MPI.Datatype.

Fields

  • data: a Julia object referencing a region of memory to be used for communication. It is required that the object can be cconverted to an MPIPtr.

  • count: the number of elements of datatype in the buffer. Note that this may not correspond to the number of elements in the array if derived types are used.

  • datatype: the MPI.Datatype stored in the buffer.

Usage

Buffer(data, count::Integer, datatype::Datatype)

Generic constructor.

Buffer(data)

Construct a Buffer backed by data, automatically determining the appropriate count and datatype. Methods are provided for

  • Ref
  • Array
  • CUDA.CuArray if CUDA.jl is loaded.
  • AMDGPU.ROCArray if AMDGPU.jl is loaded.
  • SubArrays of an Array, CUDA.CuArray or AMDGPU.ROCArray where the layout is contiguous, sequential or blocked.

See also

source
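
A small sketch of the generic constructor (a predefined datatype is used here purely for illustration):

using MPI
A = zeros(Float64, 10)
buf = MPI.Buffer(A, length(A), MPI.Datatype(Float64))
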
MPI.Buffer_sendFunction
Buffer_send(data)

Construct a Buffer object for a send operation from data, allowing cases where isbits(data).

source
MPI.UBufferType
MPI.UBuffer

An MPI buffer for chunked collective communication, where all chunks are of uniform size.

Fields

  • data: A Julia object referencing a region of memory to be used for communication. It is required that the object can be cconverted to an MPIPtr.

  • count: The number of elements of datatype in each chunk.

  • nchunks: The maximum number of chunks stored in the buffer. This is used only for validation, and can be set to nothing to disable checks.

  • datatype: The MPI.Datatype stored in the buffer.

Usage

UBuffer(data, count::Integer, nchunks::Union{Nothing, Integer}, datatype::Datatype)

Generic constructor.

UBuffer(data, count::Integer)

Construct a UBuffer backed by data, where count is the number of elements in each chunk.

See also

  • VBuffer: similar, but supports chunks of non-uniform sizes.
source
MPI.VBufferType
MPI.VBuffer

An MPI buffer for chunked collective communication, where chunks can be of different sizes and at different offsets.

Fields

  • data: A Julia object referencing a region of memory to be used for communication. It is required that the object can be cconverted to an MPIPtr.

  • counts: An array containing the length of each chunk.

  • displs: An array containing the (0-based) displacements of each chunk.

  • datatype: The MPI.Datatype stored in the buffer.

Usage

VBuffer(data, counts[, displs[, datatype]])

Construct a VBuffer backed by data, where counts[j] is the number of elements in the jth chunk, and displs[j] is the 0-based displacement. In other words, the jth chunk occurs in indices displs[j]+1:displs[j]+counts[j].

The default value for displs[j] = sum(counts[1:j-1]).

See also

  • UBuffer when chunks are all of the same size.
source
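
A short sketch of a receive buffer where rank j contributes counts[j+1] elements and the displacements take their default cumulative values:

using MPI
counts = [2, 3, 4]
recv = Vector{Float64}(undef, sum(counts))
vbuf = MPI.VBuffer(recv, counts)  # chunks at 0-based displacements [0, 2, 5]
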
MPI.RBufferType
MPI.RBuffer

An MPI buffer for reduction operations (MPI.Reduce!, MPI.Allreduce!, MPI.Scan!, MPI.Exscan!).

Fields

  • senddata: A Julia object referencing a region of memory to be used for the send buffer. It is required that the object can be cconverted to an MPIPtr.

  • recvdata: A Julia object referencing a region of memory to be used for the receive buffer. It is required that the object can be cconverted to an MPIPtr.

  • count: the number of elements of datatype in the buffer. Note that this may not correspond to the number of elements in the array if derived types are used.

  • datatype: the MPI.Datatype stored in the buffer.

Usage

RBuffer(senddata, recvdata[, count, datatype])

Generic constructor.

RBuffer(senddata, recvdata)

Construct a Buffer backed by senddata and recvdata, automatically determining the appropriate count and datatype.

source
MPI.API.MPIPtrType
MPI.MPIPtr

A pointer to an MPI buffer. This type is used only as part of the implicit conversion in ccall: a Julia object can be passed to MPI by defining methods for Base.cconvert(::Type{MPIPtr}, ...)/Base.unsafe_convert(::Type{MPIPtr}, ...).

Currently supported are:

  • Ptr
  • Ref
  • Array
  • SubArray
  • CUDA.CuArray if CUDA.jl is loaded.
  • AMDGPU.ROCArray if AMDGPU.jl is loaded.

Additionally, certain sentinel values can be used, e.g. MPI_IN_PLACE or MPI_BOTTOM.

source
+Buffers · MPI.jl

Buffers

Buffers are used for sending and receiving data. MPI.jl provides the following buffer types:

MPI.IN_PLACEConstant
MPI.IN_PLACE

A sentinel value that can be passed as a buffer argument for certain collective operations to use the same buffer for send and receive operations.

source
MPI.BufferType
MPI.Buffer

An MPI buffer for communication with a single rank. It is used for point-to-point and one-sided operations, as well as some collective operations. Operations will implicitly construct a Buffer when required via the generic constructor, but it can be advantageous to construct Buffers manually when the implicit construction incurs additional overhead, for example when using a non-predefined MPI.Datatype.

Fields

  • data: a Julia object referencing a region of memory to be used for communication. It is required that the object can be cconverted to an MPIPtr.

  • count: the number of elements of datatype in the buffer. Note that this may not correspond to the number of elements in the array if derived types are used.

  • datatype: the MPI.Datatype stored in the buffer.

Usage

Buffer(data, count::Integer, datatype::Datatype)

Generic constructor.

Buffer(data)

Construct a Buffer backed by data, automatically determining the appropriate count and datatype. Methods are provided for

  • Ref
  • Array
  • CUDA.CuArray if CUDA.jl is loaded.
  • AMDGPU.ROCArray if AMDGPU.jl is loaded.
  • SubArrays of an Array, CUDA.CuArray or AMDGPU.ROCArray where the layout is contiguous, sequential or blocked.

See also

source
MPI.Buffer_sendFunction
Buffer_send(data)

Construct a Buffer object for a send operation from data, allowing cases where isbits(data).

source
MPI.UBufferType
MPI.UBuffer

An MPI buffer for chunked collective communication, where all chunks are of uniform size.

Fields

  • data: A Julia object referencing a region of memory to be used for communication. It is required that the object can be cconverted to an MPIPtr.

  • count: The number of elements of datatype in each chunk.

  • nchunks: The maximum number of chunks stored in the buffer. This is used only for validation, and can be set to nothing to disable checks.

  • datatype: The MPI.Datatype stored in the buffer.

Usage

UBuffer(data, count::Integer, nchunks::Union{Nothing, Integer}, datatype::Datatype)

Generic constructor.

UBuffer(data, count::Integer)

Construct a UBuffer backed by data, where count is the number of elements in each chunk.

See also

  • VBuffer: similar, but supports chunks of non-uniform sizes.
source
MPI.VBufferType
MPI.VBuffer

An MPI buffer for chunked collective communication, where chunks can be of different sizes and at different offsets.

Fields

  • data: A Julia object referencing a region of memory to be used for communication. It is required that the object can be cconverted to an MPIPtr.

  • counts: An array containing the length of each chunk.

  • displs: An array containing the (0-based) displacements of each chunk.

  • datatype: The MPI.Datatype stored in the buffer.

Usage

VBuffer(data, counts[, displs[, datatype]])

Construct a VBuffer backed by data, where counts[j] is the number of elements in the jth chunk, and displs[j] is the 0-based displacement. In other words, the jth chunk occurs in indices displs[j]+1:displs[j]+counts[j].

The default value for displs[j] = sum(counts[1:j-1]).

See also

  • UBuffer when chunks are all of the same size.
source
MPI.RBufferType
MPI.RBuffer

An MPI buffer for reduction operations (MPI.Reduce!, MPI.Allreduce!, MPI.Scan!, MPI.Exscan!).

Fields

  • senddata: A Julia object referencing a region of memory to be used for the send buffer. It is required that the object can be cconverted to an MPIPtr.

  • recvdata: A Julia object referencing a region of memory to be used for the receive buffer. It is required that the object can be cconverted to an MPIPtr.

  • count: the number of elements of datatype in the buffer. Note that this may not correspond to the number of elements in the array if derived types are used.

  • datatype: the MPI.Datatype stored in the buffer.

Usage

RBuffer(senddata, recvdata[, count, datatype])

Generic constructor.

RBuffer(senddata, recvdata)

Construct a Buffer backed by senddata and recvdata, automatically determining the appropriate count and datatype.

source
MPI.API.MPIPtrType
MPI.MPIPtr

A pointer to an MPI buffer. This type is used only as part of the implicit conversion in ccall: a Julia object can be passed to MPI by defining methods for Base.cconvert(::Type{MPIPtr}, ...)/Base.unsafe_convert(::Type{MPIPtr}, ...).

Currently supported are:

  • Ptr
  • Ref
  • Array
  • SubArray
  • CUDA.CuArray if CUDA.jl is loaded.
  • AMDGPU.ROCArray if AMDGPU.jl is loaded.

Additionally, certain sentinel values can be used, e.g. MPI_IN_PLACE or MPI_BOTTOM.

source
diff --git a/dev/reference/collective/index.html b/dev/reference/collective/index.html index 486ebf3a3..b093a74ba 100644 --- a/dev/reference/collective/index.html +++ b/dev/reference/collective/index.html @@ -1,34 +1,34 @@ -Collective communication · MPI.jl

Collective communication

Synchronization

MPI.BarrierFunction
Barrier(comm::Comm)

Blocks until comm is synchronized.

If comm is an intracommunicator, then it blocks until all members of the group have called it.

If comm is an intercommunicator, then it blocks until all members of the other group have called it.

External links

source
MPI.IbarrierFunction
Ibarrier(comm::Comm[, req::AbstractRequest = Request()])

Blocks until comm is synchronized.

If comm is an intracommunicator, then it blocks until all members of the group have called it.

If comm is an intercommunicator, then it blocks until all members of the other group have called it.

External links

source

Broadcast

MPI.Bcast!Function
Bcast!(buf, comm::Comm; root::Integer=0)

Broadcast the buffer buf from root to all processes in comm.

See also

External links

source
MPI.BcastFunction
Bcast(obj, root::Integer, comm::Comm)

Broadcast the obj from root to all processes in comm. Returns the object. Currently obj must be isbits, i.e. isbitstype(typeof(obj)) == true.

source
MPI.bcastFunction
bcast(obj, comm::Comm; root::Integer=0)

Broadcast the object obj from rank root to all processes on comm. This is able to handle arbitrary data.

See also

source
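
For example, a sketch in which the dictionary contents are arbitrary:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
data = MPI.Comm_rank(comm) == 0 ? Dict("step" => 1, "dt" => 0.1) : nothing
data = MPI.bcast(data, comm; root=0)  # every rank now holds the dictionary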

Gather/Scatter

Gather

MPI.Gather!Function
Gather!(sendbuf, recvbuf, comm::Comm; root::Integer=0)

Each process sends the contents of the buffer sendbuf to the root process. The root process stores the elements in rank order in the buffer recvbuf.

sendbuf should be a Buffer object, or any object for which Buffer_send is defined, and should be the same length on all processes.

On the root process, sendbuf can be MPI.IN_PLACE, in which case the corresponding entries in recvbuf are assumed to be already in place (this corresponds to the behaviour of MPI_IN_PLACE in MPI_Gather). For example:

if root == MPI.Comm_rank(comm)
+Collective communication · MPI.jl

Collective communication

Synchronization

MPI.BarrierFunction
Barrier(comm::Comm)

Blocks until comm is synchronized.

If comm is an intracommunicator, then it blocks until all members of the group have called it.

If comm is an intercommunicator, then it blocks until all members of the other group have called it.

External links

source
MPI.IbarrierFunction
Ibarrier(comm::Comm[, req::AbstractRequest = Request()])

Blocks until comm is synchronized.

If comm is an intracommunicator, then it blocks until all members of the group have called it.

If comm is an intercommunicator, then it blocks until all members of the other group have called it.

External links

source

Broadcast

MPI.Bcast!Function
Bcast!(buf, comm::Comm; root::Integer=0)

Broadcast the buffer buf from root to all processes in comm.

See also

External links

source
MPI.BcastFunction
Bcast(obj, root::Integer, comm::Comm)

Broadcast the obj from root to all processes in comm. Returns the object. Currently obj must be isbits, i.e. isbitstype(typeof(obj)) == true.

source
MPI.bcastFunction
bcast(obj, comm::Comm; root::Integer=0)

Broadcast the object obj from rank root to all processes on comm. This is able to handle arbitrary data.

See also

source
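For example, a minimal sketch (assuming MPI has been initialised; the buffer contents here are arbitrary) of both the buffer-based and the object-based broadcast:

comm = MPI.COMM_WORLD
buf = MPI.Comm_rank(comm) == 0 ? collect(1.0:4.0) : zeros(4)
MPI.Bcast!(buf, comm; root=0)                       # every rank now holds [1.0, 2.0, 3.0, 4.0]
obj = MPI.bcast(Dict(:answer => 42), comm; root=0)  # handles arbitrary (serialisable) objects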

Gather/Scatter

Gather

MPI.Gather!Function
Gather!(sendbuf, recvbuf, comm::Comm; root::Integer=0)

Each process sends the contents of the buffer sendbuf to the root process. The root process stores elements in rank order in the buffer recvbuf.

sendbuf should be a Buffer object, or any object for which Buffer_send is defined, and should be the same length on all processes.

On the root process, sendbuf can be MPI.IN_PLACE, in which case the corresponding entries in recvbuf are assumed to be already in place (this corresponds to the behaviour of MPI_IN_PLACE in MPI_Gather). For example:

if root == MPI.Comm_rank(comm)
     MPI.Gather!(MPI.IN_PLACE, UBuffer(buf, count), comm; root=root)
 else
     MPI.Gather!(buf, nothing, comm; root=root)
-end

recvbuf on the root process should be a UBuffer, or can be an AbstractArray if the length can be determined from sendbuf. On non-root processes it is ignored and can be nothing.

See also

  • Gather for the allocating operation.
  • Gatherv! if the number of elements varies between processes.
  • Allgather! to send the result to all processes.

External links

source
MPI.GatherFunction
Gather(sendbuf, comm::Comm; root=0)

Each process sends the contents of the buffer sendbuf to the root process. The root allocates the output buffer and stores elements in rank order.

sendbuf can be an AbstractArray or a scalar, and should be the same length on all processes.

See also

External links

source
MPI.gatherFunction
gather(obj, comm::Comm; root::Integer=0)

Gather the objects obj from all ranks on comm to rank root. This is able to handle arbitrary data. On root, it returns a vector of the objects, and nothing otherwise.

See also

source
MPI.Gatherv!Function
Gatherv!(sendbuf, recvbuf, comm::Comm; root::Integer=0)

Each process sends the contents of the buffer sendbuf to the root process. The root stores elements in rank order in the buffer recvbuf.

sendbuf should be a Buffer object, or any object for which Buffer_send is defined, with the same length on all processes.

On the root process, sendbuf can be MPI.IN_PLACE, in which case the corresponding entries in recvbuf are assumed to be already in place. For example

if root == MPI.Comm_rank(comm)
+end

recvbuf on the root process should be a UBuffer, or can be an AbstractArray if the length can be determined from sendbuf. On non-root processes it is ignored and can be nothing.

See also

  • Gather for the allocating operation.
  • Gatherv! if the number of elements varies between processes.
  • Allgather! to send the result to all processes.

External links

source
MPI.GatherFunction
Gather(sendbuf, comm::Comm; root=0)

Each process sends the contents of the buffer sendbuf to the root process. The root allocates the output buffer and stores elements in rank order.

sendbuf can be an AbstractArray or a scalar, and should be the same length on all processes.

See also

External links

source
MPI.gatherFunction
gather(obj, comm::Comm; root::Integer=0)

Gather the objects obj from all ranks on comm to rank root. This is able to handle arbitrary data. On root, it returns a vector of the objects, and nothing otherwise.

See also

source
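For example, a minimal sketch (assuming MPI has been initialised) gathering a scalar and an arbitrary object to rank 0:

comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
vals = MPI.Gather(rank, comm; root=0)                        # Vector of all ranks on root, nothing elsewhere
objs = MPI.gather((rank=rank, square=rank^2), comm; root=0)  # works for arbitrary objects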
MPI.Gatherv!Function
Gatherv!(sendbuf, recvbuf, comm::Comm; root::Integer=0)

Each process sends the contents of the buffer sendbuf to the root process. The root stores elements in rank order in the buffer recvbuf.

sendbuf should be a Buffer object, or any object for which Buffer_send is defined, with the same length on all processes.

On the root process, sendbuf can be MPI.IN_PLACE, in which case the corresponding entries in recvbuf are assumed to be already in place. For example

if root == MPI.Comm_rank(comm)
     Gatherv!(MPI.IN_PLACE, VBuffer(buf, counts), comm; root=root)
 else
     Gatherv!(buf, nothing, comm; root=root)
-end

recvbuf on the root process should be a VBuffer, or can be an AbstractArray if the length can be determined from sendbuf. On non-root processes it is ignored and can be nothing.

See also

  • Gather! if the number of elements is the same between processes.
  • Allgatherv! to send the result to all processes.

External links

source
MPI.Allgather!Function
Allgather!(sendbuf, recvbuf::UBuffer, comm::Comm)
-Allgather!(sendrecvbuf::UBuffer, comm::Comm)

Each process sends the contents of sendbuf to the other processes; the result is stored in rank order in recvbuf.

sendbuf can be a Buffer object, or any object for which Buffer_send is defined, and should be the same length on all processes.

recvbuf can be a UBuffer, or can be an AbstractArray if the length can be determined from sendbuf.

If only one buffer sendrecvbuf is provided, then on each process the data to send is assumed to be in the area where it would receive its own contribution.

See also

  • Allgather for the allocating operation
  • Allgatherv! if the number of elements varies between processes.
  • Gather! to send only to a single root process

External links

source
MPI.AllgatherFunction
Allgather(sendbuf, comm)

Each process sends the contents of sendbuf to the other processes, which store the results in rank order, allocating the output buffer.

sendbuf can be an AbstractArray or a scalar, and should be the same size on all processes.

See also

  • Allgather! for the mutating operation
  • Allgatherv! if the number of elements varies between processes.
  • Gather! to send only to a single root process

External links

source
MPI.Allgatherv!Function
Allgatherv!(sendbuf, recvbuf::VBuffer, comm::Comm)
-Allgatherv!(sendrecvbuf::VBuffer, comm::Comm)

Each process sends the contents of sendbuf to all other processes. Each process stores the received data in the VBuffer recvbuf.

sendbuf can be a Buffer object, or any object for which Buffer_send is defined.

If only one buffer sendrecvbuf is provided, then for each process, the data to be sent is taken from the interval of recvbuf where it would store its own data.

See also

  • Gatherv! to send the result to a single process

External links

source
MPI.Neighbor_allgatherv!Function
Neighbor_allgatherv!(sendbuf::Buffer, recvbuf::VBuffer, comm::Comm)

Perform an all-gather communication along the directed edges of the graph with variable sized data.

See also MPI.Allgatherv!.

External links

source

Scatter

MPI.Scatter!Function
Scatter!(sendbuf::Union{UBuffer,Nothing}, recvbuf, comm::Comm;
+end

recvbuf on the root process should be a VBuffer, or can be an AbstractArray if the length can be determined from sendbuf. On non-root processes it is ignored and can be nothing.

See also

  • Gather! if the number of elements is the same between processes.
  • Allgatherv! to send the result to all processes.

External links

source
MPI.Allgather!Function
Allgather!(sendbuf, recvbuf::UBuffer, comm::Comm)
+Allgather!(sendrecvbuf::UBuffer, comm::Comm)

Each process sends the contents of sendbuf to the other processes; the result is stored in rank order in recvbuf.

sendbuf can be a Buffer object, or any object for which Buffer_send is defined, and should be the same length on all processes.

recvbuf can be a UBuffer, or can be an AbstractArray if the length can be determined from sendbuf.

If only one buffer sendrecvbuf is provided, then on each process the data to send is assumed to be in the area where it would receive its own contribution.

See also

  • Allgather for the allocating operation
  • Allgatherv! if the number of elements varies between processes.
  • Gather! to send only to a single root process

External links

source
MPI.AllgatherFunction
Allgather(sendbuf, comm)

Each process sends the contents of sendbuf to the other processes, which store the results in rank order, allocating the output buffer.

sendbuf can be an AbstractArray or a scalar, and should be the same size on all processes.

See also

  • Allgather! for the mutating operation
  • Allgatherv! if the number of elements varies between processes.
  • Gather! to send only to a single root process

External links

source
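For example, a minimal sketch (assuming MPI has been initialised) of the allocating and the mutating forms:

comm  = MPI.COMM_WORLD
rank  = MPI.Comm_rank(comm)
ranks = MPI.Allgather(rank, comm)                  # every rank receives [0, 1, ..., Comm_size(comm)-1]

recv = Vector{Int}(undef, MPI.Comm_size(comm))     # mutating form with an explicit UBuffer
MPI.Allgather!([rank], MPI.UBuffer(recv, 1), comm)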
MPI.Allgatherv!Function
Allgatherv!(sendbuf, recvbuf::VBuffer, comm::Comm)
+Allgatherv!(sendrecvbuf::VBuffer, comm::Comm)

Each process sends the contents of sendbuf to all other processes. Each process stores the received data in the VBuffer recvbuf.

sendbuf can be a Buffer object, or any object for which Buffer_send is defined.

If only one buffer sendrecvbuf is provided, then for each process, the data to be sent is taken from the interval of recvbuf where it would store its own data.

See also

  • Gatherv! to send the result to a single process

External links

source
MPI.Neighbor_allgatherv!Function
Neighbor_allgatherv!(sendbuf::Buffer, recvbuf::VBuffer, comm::Comm)

Perform an all-gather communication along the directed edges of the graph with variable sized data.

See also MPI.Allgatherv!.

External links

source

Scatter

MPI.Scatter!Function
Scatter!(sendbuf::Union{UBuffer,Nothing}, recvbuf, comm::Comm;
     root::Integer=0)

Splits the buffer sendbuf in the root process into Comm_size(comm) chunks, sending the j-th chunk to the process of rank j-1 into the recvbuf buffer.

sendbuf on the root process should be a UBuffer (an Array can also be passed directly if the sizes can be determined from recvbuf). On non-root processes it is ignored, and nothing can be passed instead.

recvbuf is a Buffer object, or any object for which Buffer(recvbuf) is defined. On the root process, it can also be MPI.IN_PLACE, in which case it is unmodified. For example:

if root == MPI.Comm_rank(comm)
     MPI.Scatter!(UBuffer(buf, count), MPI.IN_PLACE, comm; root=root)
 else
     MPI.Scatter!(nothing, buf, comm; root=root)
-end

See also

  • Scatterv! if the number of elements varies between processes.

External links

source
MPI.ScatterFunction
Scatter(sendbuf, T, comm::Comm; root::Integer=0)

Splits the buffer sendbuf in the root process into Comm_size(comm) chunks, sending the j-th chunk to the process of rank j-1 as an object of type T.

See also

source
MPI.scatterFunction
scatter(objs::Union{AbstractVector, Nothing}, comm::Comm; root::Integer=0)

Sends the j-th element of objs in the root process to rank j-1 and returns it. On root, objs is expected to be a Comm_size(comm)-element vector. On the other ranks, it is ignored and can be nothing.

This method can handle arbitrary data.

See also

source
MPI.Scatterv!Function
Scatterv!(sendbuf, recvbuf, comm::Comm; root::Integer=0)

Splits the buffer sendbuf in the root process into Comm_size(comm) chunks and sends the j-th chunk to the process of rank j-1 into the recvbuf buffer.

sendbuf on the root process should be a VBuffer. On non-root processes it is ignored, and nothing can be passed instead.

recvbuf is a Buffer object, or any object for which Buffer(recvbuf) is defined. On the root process, it can also be MPI.IN_PLACE, in which case it is unmodified. For example:

if root == MPI.Comm_rank(comm)
+end

See also

  • Scatterv! if the number of elements varies between processes.

External links

source
MPI.ScatterFunction
Scatter(sendbuf, T, comm::Comm; root::Integer=0)

Splits the buffer sendbuf in the root process into Comm_size(comm) chunks, sending the j-th chunk to the process of rank j-1 as an object of type T.

See also

source
MPI.scatterFunction
scatter(objs::Union{AbstractVector, Nothing}, comm::Comm; root::Integer=0)

Sends the j-th element of objs in the root process to rank j-1 and returns it. On root, objs is expected to be a Comm_size(comm)-element vector. On the other ranks, it is ignored and can be nothing.

This method can handle arbitrary data.

See also

source
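For example, a minimal sketch (assuming MPI has been initialised) in which rank 0 distributes one string per rank:

comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
root = 0
objs = rank == root ? ["message for rank $(i-1)" for i in 1:MPI.Comm_size(comm)] : nothing
mine = MPI.scatter(objs, comm; root=root)   # each rank receives its own element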
MPI.Scatterv!Function
Scatterv!(sendbuf, recvbuf, comm::Comm; root::Integer=0)

Splits the buffer sendbuf in the root process into Comm_size(comm) chunks and sends the j-th chunk to the process of rank j-1 into the recvbuf buffer.

sendbuf on the root process should be a VBuffer. On non-root processes it is ignored, and nothing can be passed instead.

recvbuf is a Buffer object, or any object for which Buffer(recvbuf) is defined. On the root process, it can also be MPI.IN_PLACE, in which case it is unmodified. For example:

if root == MPI.Comm_rank(comm)
     MPI.Scatterv!(VBuffer(buf, counts), MPI.IN_PLACE, comm; root=root)
 else
     MPI.Scatterv!(nothing, buf, comm; root=root)
-end

See also

  • Scatter! if the number of elements are the same for all processes

External links

source

All-to-all

MPI.Alltoall!Function
Alltoall!(sendbuf::UBuffer, recvbuf::UBuffer, comm::Comm)
+end

See also

  • Scatter! if the number of elements are the same for all processes

External links

source

All-to-all

MPI.Alltoall!Function
Alltoall!(sendbuf::UBuffer, recvbuf::UBuffer, comm::Comm)
 Alltoall!(sendrecvbuf::UBuffer, comm::Comm)

Every process divides the UBuffer sendbuf into Comm_size(comm) chunks of equal size, sending the j-th chunk to the process of rank j-1. Every process stores the data received from the process of rank j-1 in the j-th chunk of the buffer recvbuf.

rank    send buf                        recv buf
 ----    --------                        --------
  0      a,b,c,d,e,f       Alltoall      a,b,A,B,α,β
  1      A,B,C,D,E,F  ---------------->  c,d,C,D,γ,ψ
- 2      α,β,γ,ψ,η,ν                     e,f,E,F,η,ν

If only one buffer sendrecvbuf is used, then data is overwritten.

See also

External links

source
MPI.AlltoallFunction
Alltoall(sendbuf::UBuffer, comm::Comm)

Every process divides the UBuffer sendbuf into Comm_size(comm) chunks of equal size, sending the j-th chunk to the process of rank j-1. Every process allocates the output buffer and stores the data received from the process on rank j-1 in the j-th chunk.

rank    send buf                        recv buf
+ 2      α,β,γ,ψ,η,ν                     e,f,E,F,η,ν

If only one buffer sendrecvbuf is used, then data is overwritten.

See also

External links

source
MPI.AlltoallFunction
Alltoall(sendbuf::UBuffer, comm::Comm)

Every process divides the UBuffer sendbuf into Comm_size(comm) chunks of equal size, sending the j-th chunk to the process of rank j-1. Every process allocates the output buffer and stores the data received from the process on rank j-1 in the j-th chunk.

rank    send buf                        recv buf
 ----    --------                        --------
  0      a,b,c,d,e,f       Alltoall      a,b,A,B,α,β
  1      A,B,C,D,E,F  ---------------->  c,d,C,D,γ,ψ
- 2      α,β,γ,ψ,η,ν                     e,f,E,F,η,ν

See also

External links

source
MPI.Neighbor_alltoall!Function
Neighbor_alltoall!(sendbuf::UBuffer, recvbuf::UBuffer, comm::Comm)

Perform an all-to-all communication along the directed edges of the graph with fixed size messages.

See also MPI.Alltoall!.

External links

source
MPI.Neighbor_alltoallv!Function
Neighbor_alltoallv!(sendbuf::VBuffer, recvbuf::VBuffer, graph_comm::Comm)

Perform an all-to-all communication along the directed edges of the graph with variable size messages.

See also MPI.Alltoallv!.

External links

source

Reduce/Scan

MPI.Reduce!Function
Reduce!(sendbuf, recvbuf, op, comm::Comm; root::Integer=0)
-Reduce!(sendrecvbuf, op, comm::Comm; root::Integer=0)

Performs elementwise reduction using the operator op on the buffer sendbuf and stores the result in recvbuf on the process of rank root.

On non-root processes recvbuf is ignored, and can be nothing.

To perform the reduction in place, provide a single buffer sendrecvbuf.

See also

  • Reduce to handle allocation of the output buffer.
  • Allreduce!/Allreduce to send reduction to all ranks.
  • Op for details on reduction operators.

External links

source
MPI.ReduceFunction
recvbuf = Reduce(sendbuf, op, comm::Comm; root::Integer=0)

Performs elementwise reduction using the operator op on the buffer sendbuf, returning the result recvbuf on the process of rank root, and nothing on non-root processes.

sendbuf can also be a scalar, in which case recvbuf will be a value of the same type.

See also

External links

source
MPI.Allreduce!Function
Allreduce!(sendbuf, recvbuf, op, comm::Comm)
-Allreduce!(sendrecvbuf, op, comm::Comm)

Performs elementwise reduction using the operator op on the buffer sendbuf, storing the result in the recvbuf of all processes in the group.

Allreduce! is equivalent to a Reduce! operation followed by a Bcast!, but can lead to better performance.

If only one sendrecvbuf buffer is provided, then the operation is performed in-place.

See also

  • Allreduce, to handle allocation of the output buffer.
  • Reduce!/Reduce to send reduction to a single rank.
  • Op for details on reduction operators.

External links

source
MPI.AllreduceFunction
recvbuf = Allreduce(sendbuf, op, comm)

Performs elementwise reduction using the operator op on the buffer sendbuf, returning the result in the recvbuf of all processes in the group.

sendbuf can also be a scalar, in which case recvbuf will be a value of the same type.

See also

  • Allreduce! for mutating or in-place operations.
  • Reduce!/Reduce to send reduction to a single rank.
  • Op for details on reduction operators.

External links

source
MPI.Scan!Function
Scan!(sendbuf, recvbuf, op, comm::Comm)
-Scan!(sendrecvbuf, op, comm::Comm)

Inclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i.

If only a single buffer sendrecvbuf is provided, then operations will be performed in-place.

See also

  • Scan to handle allocation of the output buffer
  • Exscan!/Exscan for exclusive scan
  • Op for details on reduction operators.

External links

source
MPI.ScanFunction
recvbuf = Scan(sendbuf, op, comm::Comm)

Inclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i.

sendbuf can also be a scalar, in which case recvbuf will also be a scalar of the same type.

See also

  • Scan! for mutating or in-place operations
  • Exscan!/Exscan for exclusive scan
  • Op for details on reduction operators.

External links

source
MPI.Exscan!Function
Exscan!(sendbuf, recvbuf, op, comm::Comm)
-Exscan!(sendrecvbuf, op, comm::Comm)

Exclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i-1. The recvbuf on rank 0 is ignored, and the recvbuf on rank 1 will contain the contents of sendbuf on rank 0.

If only a single sendrecvbuf is provided, then operations are performed in-place, and buf on rank 0 will remain unchanged.

See also

  • Exscan to handle allocation of the output buffer
  • Scan!/Scan for inclusive scan
  • Op for details on reduction operators.

External links

source
MPI.ExscanFunction
recvbuf = Exscan(sendbuf, op, comm::Comm)

Exclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i-1. The recvbuf on rank 0 is undefined, and the recvbuf on rank 1 will contain the contents of sendbuf on rank 0.

See also

  • Exscan! for mutating and in-place operations
  • Scan!/Scan for inclusive scan
  • Op for details on reduction operators.

External links

source
+ 2      α,β,γ,ψ,η,ν                     e,f,E,F,η,ν

See also

External links

source
MPI.Neighbor_alltoall!Function
Neighbor_alltoall!(sendbuf::UBuffer, recvbuf::UBuffer, comm::Comm)

Perform an all-to-all communication along the directed edges of the graph with fixed size messages.

See also MPI.Alltoall!.

External links

source
MPI.Neighbor_alltoallv!Function
Neighbor_alltoallv!(sendbuf::VBuffer, recvbuf::VBuffer, graph_comm::Comm)

Perform an all-to-all communication along the directed edges of the graph with variable size messages.

See also MPI.Alltoallv!.

External links

source

Reduce/Scan

MPI.Reduce!Function
Reduce!(sendbuf, recvbuf, op, comm::Comm; root::Integer=0)
+Reduce!(sendrecvbuf, op, comm::Comm; root::Integer=0)

Performs elementwise reduction using the operator op on the buffer sendbuf and stores the result in recvbuf on the process of rank root.

On non-root processes recvbuf is ignored, and can be nothing.

To perform the reduction in place, provide a single buffer sendrecvbuf.

See also

  • Reduce to handle allocation of the output buffer.
  • Allreduce!/Allreduce to send reduction to all ranks.
  • Op for details on reduction operators.

External links

source
MPI.ReduceFunction
recvbuf = Reduce(sendbuf, op, comm::Comm; root::Integer=0)

Performs elementwise reduction using the operator op on the buffer sendbuf, returning the result recvbuf on the process of rank root, and nothing on non-root processes.

sendbuf can also be a scalar, in which case recvbuf will be a value of the same type.

See also

External links

source
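For example, a minimal sketch (assuming MPI has been initialised) reducing a scalar to rank 0:

comm  = MPI.COMM_WORLD
rank  = MPI.Comm_rank(comm)
total = MPI.Reduce(rank, +, comm; root=0)   # sum of all ranks on root, nothing on other ranks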
MPI.Allreduce!Function
Allreduce!(sendbuf, recvbuf, op, comm::Comm)
+Allreduce!(sendrecvbuf, op, comm::Comm)

Performs elementwise reduction using the operator op on the buffer sendbuf, storing the result in the recvbuf of all processes in the group.

Allreduce! is equivalent to a Reduce! operation followed by a Bcast!, but can lead to better performance.

If only one sendrecvbuf buffer is provided, then the operation is performed in-place.

See also

  • Allreduce, to handle allocation of the output buffer.
  • Reduce!/Reduce to send reduction to a single rank.
  • Op for details on reduction operators.

External links

source
MPI.AllreduceFunction
recvbuf = Allreduce(sendbuf, op, comm)

Performs elementwise reduction using the operator op on the buffer sendbuf, returning the result in the recvbuf of all processes in the group.

sendbuf can also be a scalar, in which case recvbuf will be a value of the same type.

See also

  • Allreduce! for mutating or in-place operations.
  • Reduce!/Reduce to send reduction to a single rank.
  • Op for details on reduction operators.

External links

source
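For example, a minimal sketch (assuming MPI has been initialised) of the allocating and in-place forms:

comm = MPI.COMM_WORLD
x = fill(Float64(MPI.Comm_rank(comm)), 4)
y = MPI.Allreduce(x, +, comm)   # every rank receives the elementwise sum
MPI.Allreduce!(x, +, comm)      # in-place: x is overwritten with the elementwise sum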
MPI.Scan!Function
Scan!(sendbuf, recvbuf, op, comm::Comm)
+Scan!(sendrecvbuf, op, comm::Comm)

Inclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i.

If only a single buffer sendrecvbuf is provided, then operations will be performed in-place.

See also

  • Scan to handle allocation of the output buffer
  • Exscan!/Exscan for exclusive scan
  • Op for details on reduction operators.

External links

source
MPI.ScanFunction
recvbuf = Scan(sendbuf, op, comm::Comm)

Inclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i.

sendbuf can also be a scalar, in which case recvbuf will also be a scalar of the same type.

See also

  • Scan! for mutating or in-place operations
  • Exscan!/Exscan for exclusive scan
  • Op for details on reduction operators.

External links

source
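For example, a minimal sketch (assuming MPI has been initialised) comparing the inclusive and exclusive scans:

comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
incl = MPI.Scan(rank, +, comm)     # rank i receives sum(0:i)
excl = MPI.Exscan(rank, +, comm)   # rank i receives sum(0:i-1); undefined on rank 0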
MPI.Exscan!Function
Exscan!(sendbuf, recvbuf, op, comm::Comm)
+Exscan!(sendrecvbuf, op, comm::Comm)

Exclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i-1. The recvbuf on rank 0 is ignored, and the recvbuf on rank 1 will contain the contents of sendbuf on rank 0.

If only a single sendrecvbuf is provided, then operations are performed in-place, and buf on rank 0 will remain unchanged.

See also

  • Exscan to handle allocation of the output buffer
  • Scan!/Scan for inclusive scan
  • Op for details on reduction operators.

External links

source
MPI.ExscanFunction
recvbuf = Exscan(sendbuf, op, comm::Comm)

Exclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i-1. The recvbuf on rank 0 is undefined, and the recvbuf on rank 1 will contain the contents of sendbuf on rank 0.

See also

  • Exscan! for mutating and in-place operations
  • Scan!/Scan for inclusive scan
  • Op for details on reduction operators.

External links

source
diff --git a/dev/reference/comm/index.html b/dev/reference/comm/index.html index eb7cd45dc..36a67a245 100644 --- a/dev/reference/comm/index.html +++ b/dev/reference/comm/index.html @@ -1,2 +1,2 @@ -Communicators · MPI.jl

Communicators

An MPI communicator specifies the communication context for a communication operation. In particular, it specifies the set of processes which share the context, and assigns each process a unique rank (see MPI.Comm_rank) taking an integer value in 0:n-1, where n is the number of processes in the communicator (see MPI.Comm_size).

Types and enums

Constants

MPI.COMM_WORLDConstant
MPI.COMM_WORLD

A communicator containing all processes with which the local rank can communicate at initialization. In a typical "static-process" model, this will be all processes.

source

Functions

Operations

MPI.Comm_rankFunction
Comm_rank(comm::Comm)

The rank of the process in the particular communicator's group.

Returns an integer in the range 0:MPI.Comm_size()-1.

See also

External links

source
MPI.Comm_compareFunction
Comm_compare(comm1::Comm, comm2::Comm)::MPI.Comparison

Compare two communicators and their underlying groups, returning an element of the Comparison enum.

External links

source
MPI.Comm_groupFunction
Comm_group(comm::Comm)

Accesses the group associated with given communicator.

External links

source
MPI.Comm_remote_groupFunction
Comm_remote_group(comm::Comm)

Accesses the remote group associated with the given inter-communicator.

External links

source

Constructors

MPI.Comm_spawnFunction
Comm_spawn(command, argv::Vector{String}, nprocs::Integer, comm::Comm[, errors::Vector{Cint}]; kwargs...)

External links

source
MPI.Comm_splitFunction
Comm_split(comm::Comm, color::Union{Integer,Nothing}, key::Integer)

Partition the communicator comm into subcommunicators, one for each value of color, returning a new communicator. Within each group, the processes are ranked in the order of key, with ties broken by the order of comm.

color should be a non-negative integer, or nothing, in which case a null communicator is returned for that rank.

External links

source
MPI.Comm_split_typeFunction
Comm_split_type(comm::Comm, split_type, key::Integer; kwargs...)

Partitions the communicator comm based on split_type, returning a new communicator. Within each group, the processes are ranked in the order of key, with ties broken by the order of comm.

Currently only one split_type is provided:

  • MPI.COMM_TYPE_SHARED: splits the communicator into subcommunicators, each of which can create a shared memory region.

External links

source

Miscellaneous

MPI.universe_sizeFunction
universe_size()

The total number of available slots, or nothing if it is not defined. This is determined by the MPI_UNIVERSE_SIZE attribute of COMM_WORLD.

This is typically dependent on the MPI implementation: for MPICH-based implementations, this is specified by the -usize argument. OpenMPI defines a default value based on the number of processes available.

source
MPI.tag_ubFunction
tag_ub()

The maximum tag value for point-to-point operations.

source
+Communicators · MPI.jl

Communicators

An MPI communicator specifies the communication context for a communication operation. In particular, it specifies the set of processes which share the context, and assigns each process a unique rank (see MPI.Comm_rank) taking an integer value in 0:n-1, where n is the number of processes in the communicator (see MPI.Comm_size).

Types and enums

Constants

MPI.COMM_WORLDConstant
MPI.COMM_WORLD

A communicator containing all processes with which the local rank can communicate at initialization. In a typical "static-process" model, this will be all processes.

source

Functions

Operations

MPI.Comm_rankFunction
Comm_rank(comm::Comm)

The rank of the process in the particular communicator's group.

Returns an integer in the range 0:MPI.Comm_size()-1.

See also

External links

source
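For example, a minimal sketch (assuming MPI has been initialised):

comm   = MPI.COMM_WORLD
rank   = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)
println("Hello from rank $rank of $nranks")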
MPI.Comm_compareFunction
Comm_compare(comm1::Comm, comm2::Comm)::MPI.Comparison

Compare two communicators and their underlying groups, returning an element of the Comparison enum.

External links

source
MPI.Comm_groupFunction
Comm_group(comm::Comm)

Accesses the group associated with given communicator.

External links

source
MPI.Comm_remote_groupFunction
Comm_remote_group(comm::Comm)

Accesses the remote group associated with the given inter-communicator.

External links

source

Constructors

MPI.Comm_spawnFunction
Comm_spawn(command, argv::Vector{String}, nprocs::Integer, comm::Comm[, errors::Vector{Cint}]; kwargs...)

External links

source
MPI.Comm_splitFunction
Comm_split(comm::Comm, color::Union{Integer,Nothing}, key::Integer)

Partition the communicator comm into subcommunicators, one for each value of color, returning a new communicator. Within each group, the processes are ranked in the order of key, with ties broken by the order of comm.

color should be a non-negative integer, or nothing, in which case a null communicator is returned for that rank.

External links

source
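For example, a minimal sketch (assuming MPI has been initialised) splitting even and odd ranks into separate communicators:

comm    = MPI.COMM_WORLD
rank    = MPI.Comm_rank(comm)
color   = rank % 2                            # even and odd ranks form separate groups
newcomm = MPI.Comm_split(comm, color, rank)   # key = rank keeps the original ordering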
MPI.Comm_split_typeFunction
Comm_split_type(comm::Comm, split_type, key::Integer; kwargs...)

Partitions the communicator comm based on split_type, returning a new communicator. Within each group, the processes are ranked in the order of key, with ties broken by the order of comm.

Currently only one split_type is provided:

  • MPI.COMM_TYPE_SHARED: splits the communicator into subcommunicators, each of which can create a shared memory region.

External links

source

Miscellaneous

MPI.universe_sizeFunction
universe_size()

The total number of available slots, or nothing if it is not defined. This is determined by the MPI_UNIVERSE_SIZE attribute of COMM_WORLD.

This is typically dependent on the MPI implementation: for MPICH-based implementations, this is specified by the -usize argument. OpenMPI defines a default value based on the number of processes available.

source
MPI.tag_ubFunction
tag_ub()

The maximum tag value for point-to-point operations.

source
diff --git a/dev/reference/environment/index.html b/dev/reference/environment/index.html index 7c3cb52f7..8425b9855 100644 --- a/dev/reference/environment/index.html +++ b/dev/reference/environment/index.html @@ -2,6 +2,6 @@ Environment · MPI.jl

Environment

Launching MPI programs

MPICH_jll.mpiexecFunction
mpiexec(fn)

A wrapper function for the MPI launcher executable. Calls fn(cmd), where cmd is a Cmd object of the MPI launcher.

Usage

julia> mpiexec(cmd -> run(`$cmd -n 3 echo hello world`));
 hello world
 hello world
-hello world
source
MPI.install_mpiexecjlFunction
MPI.install_mpiexecjl(; command::String = "mpiexecjl",
                       destdir::String = joinpath(DEPOT_PATH[1], "bin"),
-                      force::Bool = false, verbose::Bool = true)

Install the mpiexec wrapper to the directory destdir, with filename command. Set force to true to overwrite an existing destination file with the same path. If verbose is true, the installation prints information about the progress of the process.

source

Enums

MPI.ThreadLevelType
ThreadLevel

An Enum denoting the level of threading support in the current process:

  • MPI.THREAD_SINGLE: Only one thread will execute.

  • MPI.THREAD_FUNNELED: The process may be multi-threaded, but the application must ensure that only the main thread makes MPI calls. See Is_thread_main.

  • MPI.THREAD_SERIALIZED: The process may be multi-threaded, and multiple threads may make MPI calls, but only one at a time (i.e. all MPI calls are serialized).

  • MPI.THREAD_MULTIPLE: Multiple threads may call MPI, with no restrictions.

See also

source

Functions

MPI.AbortFunction
Abort(comm::Comm, errcode::Integer)

Make a “best attempt” to abort all tasks in the group of comm. This function does not require that the invoking environment take any action with the error code. However, a Unix or POSIX environment should handle this as a return errorcode from the main program.

External links

source
MPI.InitFunction
Init(;threadlevel=:serialized, finalize_atexit=true, errors_return=true)

Initialize MPI in the current process. The keyword options:

  • threadlevel: either :single, :funneled, :serialized (default), :multiple, or an instance of ThreadLevel.
  • finalize_atexit: if true (default), adds an atexit hook to call MPI.Finalize if it hasn't already been called.
  • errors_return: if true (default), will set the default error handlers for MPI.COMM_SELF and MPI.COMM_WORLD to be MPI.ERRORS_RETURN. MPI errors will then appear as Julia exceptions.

It will return the ThreadLevel value which MPI is initialized at.

All MPI programs must call this function at least once before calling any other MPI operations: the only MPI functions that may be called before MPI.Init are MPI.Initialized and MPI.Finalized.

It is safe to call MPI.Init multiple times, however it is not valid to call it after calling MPI.Finalize.

External links

source
MPI.Is_thread_mainFunction
Is_thread_main()

Queries whether the current thread is the main thread according to MPI. This can be called by any thread, and is useful for the THREAD_FUNNELED ThreadLevel.

External links

source
MPI.FinalizeFunction
Finalize()

Marks MPI state for cleanup. This should be called after MPI.Init, and can be called at most once. No further MPI calls (other than Initialized or Finalized) should be made after it is called.

MPI.Init will automatically insert a hook to call this function when Julia exits, if it hasn't already been called.

External links

source
MPI.add_init_hook!Function
MPI.add_init_hook!(f)

Register a function f that will be called as f() when MPI.Init is called. These are invoked in a first-in, first-out (FIFO) order.

source
MPI.run_init_hooksFunction
MPI.run_init_hooks()

Execute all functions that have been registered using MPI.add_init_hook!().

This function is executed automatically by MPI.Init() but must be invoked manually if MPI has been initialized externally by a direct call to MPI_Init(). It is safe to call this function multiple times (subsequent runs will be a no-op).

source
MPI.add_finalize_hook!Function
MPI.add_finalize_hook!(f)

Register a function f that will be called as f() when MPI.Finalize is called. These are invoked in a last-in, first-out (LIFO) order.

source

Errors

MPI.MPIErrorType
MPIError

Error thrown when an MPI function returns an error code. The code field contains the MPI error code.

source
+ force::Bool = false, verbose::Bool = true)

Install the mpiexec wrapper to the directory destdir, with filename command. Set force to true to overwrite an existing destination file with the same path. If verbose is true, the installation prints information about the progress of the process.

source

Enums

MPI.ThreadLevelType
ThreadLevel

An Enum denoting the level of threading support in the current process:

  • MPI.THREAD_SINGLE: Only one thread will execute.

  • MPI.THREAD_FUNNELED: The process may be multi-threaded, but the application must ensure that only the main thread makes MPI calls. See Is_thread_main.

  • MPI.THREAD_SERIALIZED: The process may be multi-threaded, and multiple threads may make MPI calls, but only one at a time (i.e. all MPI calls are serialized).

  • MPI.THREAD_MULTIPLE: Multiple threads may call MPI, with no restrictions.

See also

source

Functions

MPI.AbortFunction
Abort(comm::Comm, errcode::Integer)

Make a “best attempt” to abort all tasks in the group of comm. This function does not require that the invoking environment take any action with the error code. However, a Unix or POSIX environment should handle this as a return errorcode from the main program.

External links

source
MPI.InitFunction
Init(;threadlevel=:serialized, finalize_atexit=true, errors_return=true)

Initialize MPI in the current process. The keyword options:

  • threadlevel: either :single, :funneled, :serialized (default), :multiple, or an instance of ThreadLevel.
  • finalize_atexit: if true (default), adds an atexit hook to call MPI.Finalize if it hasn't already been called.
  • errors_return: if true (default), will set the default error handlers for MPI.COMM_SELF and MPI.COMM_WORLD to be MPI.ERRORS_RETURN. MPI errors will then appear as Julia exceptions.

It will return the ThreadLevel value which MPI is initialized at.

All MPI programs must call this function at least once before calling any other MPI operations: the only MPI functions that may be called before MPI.Init are MPI.Initialized and MPI.Finalized.

It is safe to call MPI.Init multiple times, however it is not valid to call it after calling MPI.Finalize.

External links

source
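For example, a minimal sketch of a typical program skeleton (the requested threadlevel here is arbitrary):

using MPI
provided = MPI.Init(; threadlevel=:funneled)   # returns the ThreadLevel actually provided
comm = MPI.COMM_WORLD
# ... MPI calls ...
# MPI.Finalize() is registered via an atexit hook by default (finalize_atexit=true)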
MPI.Query_threadFunction
Query_thread()

Query the level of threading support in the current process. Returns a ThreadLevel value denoting the level of threading support.

External links

source
MPI.Is_thread_mainFunction
Is_thread_main()

Queries whether the current thread is the main thread according to MPI. This can be called by any thread, and is useful for the THREAD_FUNNELED ThreadLevel.

External links

source
MPI.InitializedFunction
Initialized()

Returns true if MPI.Init has been called, false otherwise.

It is unaffected by MPI.Finalize, and is one of the few functions that may be called before MPI.Init.

External links

source
MPI.FinalizeFunction
Finalize()

Marks MPI state for cleanup. This should be called after MPI.Init, and can be called at most once. No further MPI calls (other than Initialized or Finalized) should be made after it is called.

MPI.Init will automatically insert a hook to call this function when Julia exits, if it hasn't already been called.

External links

source
MPI.FinalizedFunction
Finalized()

Returns true if MPI.Finalize has completed, false otherwise.

It is safe to call before MPI.Init and after MPI.Finalize.

External links

source
MPI.add_init_hook!Function
MPI.add_init_hook!(f)

Register a function f that will be called as f() when MPI.Init is called. These are invoked in a first-in, first-out (FIFO) order.

source
MPI.run_init_hooksFunction
MPI.run_init_hooks()

Execute all functions that have been registered using MPI.add_init_hook!().

This function is executed automatically by MPI.Init() but must be invoked manually if MPI has been initialized externally by a direct call to MPI_Init(). It is safe to call this function multiple times (subsequent runs will be a no-op).

source
MPI.add_finalize_hook!Function
MPI.add_finalize_hook!(f)

Register a function f that will be called as f() when MPI.Finalize is called. These are invoked in a last-in, first-out (LIFO) order.

source
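For example, a minimal sketch of registering hooks (the messages are arbitrary):

MPI.add_init_hook!(() -> @info "about to use MPI")
MPI.add_finalize_hook!(() -> @info "cleaning up before MPI is finalized")
MPI.Init()   # runs the registered init hooks (FIFO); finalize hooks run at finalization (LIFO)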

Errors

MPI.MPIErrorType
MPIError

Error thrown when an MPI function returns an error code. The code field contains the MPI error code.

source
MPI.API.FeatureLevelErrorType
FeatureLevelError

Error thrown if a feature is not implemented in the current MPI backend.

source
diff --git a/dev/reference/group/index.html b/dev/reference/group/index.html index 5ff966698..a64126922 100644 --- a/dev/reference/group/index.html +++ b/dev/reference/group/index.html @@ -1,2 +1,2 @@ -Groups · MPI.jl

Groups

An MPI group is a set of process identifiers identified by their rank (see MPI.Comm_rank and MPI.Group_rank). They are used within a communicator to describe the participants in a communication universe.

Types and enums

MPI.ComparisonType
Comparison

An enum denoting the result of Comm_compare:

  • MPI.IDENT: the objects are handles for the same object (identical groups and same contexts).

  • MPI.CONGRUENT: the underlying groups are identical in constituents and rank order; these communicators differ only by context.

  • MPI.SIMILAR: members of both objects are the same but the rank order differs.

  • MPI.UNEQUAL: otherwise

source

Functions

Operations

MPI.Group_rankFunction
Group_rank(group::Group)

The rank of the process in the particular group.

Returns an integer in the range 0:MPI.Group_size()-1.

External links

source
+Groups · MPI.jl

Groups

An MPI group is a set of process identifiers identified by their rank (see MPI.Comm_rank and MPI.Group_rank). They are used within a communicator to describe the participants in a communication universe.

Types and enums

MPI.ComparisonType
Comparison

An enum denoting the result of Comm_compare:

  • MPI.IDENT: the objects are handles for the same object (identical groups and same contexts).

  • MPI.CONGRUENT: the underlying groups are identical in constituents and rank order; these communicators differ only by context.

  • MPI.SIMILAR: members of both objects are the same but the rank order differs.

  • MPI.UNEQUAL: otherwise

source

Functions

Operations

MPI.Group_rankFunction
Group_rank(group::Group)

The rank of the process in the particular group.

Returns an integer in the range 0:MPI.Group_size()-1.

External links

source
diff --git a/dev/reference/io/index.html b/dev/reference/io/index.html index 3c07ef1ba..e6b34549c 100644 --- a/dev/reference/io/index.html +++ b/dev/reference/io/index.html @@ -1,2 +1,2 @@ -I/O · MPI.jl

I/O

File manipulation

MPI.File.openFunction
MPI.File.open(comm::Comm, filename::AbstractString; keywords...)

Open the file identified by filename. This is a collective operation on comm.

Supported keywords are as follows:

  • read, write, create, append have the same behaviour and defaults as Base.open.
  • sequential: file will only be accessed sequentially (default: false)
  • uniqueopen: file will not be concurrently opened elsewhere (default: false)
  • deleteonclose: delete file on close (default: false)

Any additional keywords are passed via an Info object, and are implementation dependent.

External links

source

Views

MPI.File.set_view!Function
MPI.File.set_view!(file::FileHandle, disp::Integer, etype::Datatype, filetype::Datatype, datarep::AbstractString; kwargs...)

Set the current process's view of file.

The start of the view is set to disp; the type of data is set to etype; the distribution of data to processes is set to filetype; and the representation of data in the file is set to datarep: one of "native" (default), "internal", or "external32".

External links

source
MPI.File.get_byte_offsetFunction
MPI.File.get_byte_offset(file::FileHandle, offset::Integer)

Converts a view-relative offset into an absolute byte position. Returns the absolute byte position (from the beginning of the file) of offset relative to the current view of file.

External links

source

Consistency

MPI.File.syncFunction
MPI.File.sync(fh::FileHandle)

A collective operation causing all previous writes to fh by the calling process to be transferred to the storage device. If other processes have made updates to the storage device, then all such updates become visible to subsequent reads of fh by the calling process.

External links

source
MPI.File.get_atomicityFunction
MPI.File.get_atomicity(file::FileHandle)

Get the consistency option for the file handle fh. If false, it is non-atomic.

External links

source

Data access

Individual pointer

MPI.File.read_all!Function
MPI.File.read_all!(file::FileHandle, data)

Reads current view of file into data. data can be a Buffer, or any object for which Buffer(data) is defined. This is a collective operation, so must be called on all ranks in the communicator on which file was opened.

See also

External links

source
MPI.File.write_allFunction
MPI.File.write_all(file::FileHandle, data)

Writes data to the current view of file. data can be a Buffer, or any object for which Buffer_send(data) is defined. This is a collective operation, so must be called on all ranks in the communicator on which file was opened.

See also

External links

source

Explicit offsets

MPI.File.read_at!Function
MPI.File.read_at!(file::FileHandle, offset::Integer, data)

Reads from file at position offset into data. data can be a Buffer, or any object for which Buffer(data) is defined.

See also

External links

source
MPI.File.read_at_all!Function
MPI.File.read_at_all!(file::FileHandle, offset::Integer, data)

Reads from file at position offset into data. data can be a Buffer, or any object for which Buffer(data) is defined. This is a collective operation, so must be called on all ranks in the communicator on which file was opened.

See also

External links

source
MPI.File.write_at_allFunction
MPI.File.write_at_all(file::FileHandle, offset::Integer, data)

Writes from data to file at position offset. data can be a Buffer, or any object for which Buffer_send(data) is defined. This is a collective operation, so must be called on all ranks in the communicator on which file was opened.

See also

External links

source

Shared pointer

MPI.File.read_ordered!Function
MPI.File.read_ordered!(file::FileHandle, data)

Collectively reads in rank order from file using the shared file pointer into data. data can be a Buffer, or any object for which Buffer(data) is defined. This is a collective operation, so must be called on all ranks in the communicator on which file was opened.

See also

External links

source
MPI.File.write_orderedFunction
MPI.File.write_ordered(file::FileHandle, data)

Collectively writes in rank order to file using the shared file pointer from data. data can be a Buffer, or any object for which Buffer(data) is defined. This is a collective operation, so must be called on all ranks in the communicator on which file was opened.

See also

External links

source
MPI.File.seek_sharedFunction
MPI.File.seek_shared(file::FileHandle, offset::Integer, whence::Seek=SEEK_SET)

Updates the shared file pointer according to whence, which has the following possible values:

  • MPI.File.SEEK_SET (default): the pointer is set to offset
  • MPI.File.SEEK_CUR: the pointer is set to the current pointer position plus offset
  • MPI.File.SEEK_END: the pointer is set to the end of file plus offset

This is a collective operation, and must be called with the same value on all processes in the communicator.

External links

source
MPI.File.get_position_sharedFunction
MPI.File.get_position_shared(file::FileHandle)

The current position of the shared file pointer (in etype units) relative to the current view.

External links

source
+I/O · MPI.jl

I/O

File manipulation

MPI.File.openFunction
MPI.File.open(comm::Comm, filename::AbstractString; keywords...)

Open the file identified by filename. This is a collective operation on comm.

Supported keywords are as follows:

  • read, write, create, append have the same behaviour and defaults as Base.open.
  • sequential: file will only be accessed sequentially (default: false)
  • uniqueopen: file will not be concurrently opened elsewhere (default: false)
  • deleteonclose: delete file on close (default: false)

Any additional keywords are passed via an Info object, and are implementation dependent.

External links

source

Views

MPI.File.set_view!Function
MPI.File.set_view!(file::FileHandle, disp::Integer, etype::Datatype, filetype::Datatype, datarep::AbstractString; kwargs...)

Set the current process's view of file.

The start of the view is set to disp; the type of data is set to etype; the distribution of data to processes is set to filetype; and the representation of data in the file is set to datarep: one of "native" (default), "internal", or "external32".

External links

source
MPI.File.get_byte_offsetFunction
MPI.File.get_byte_offset(file::FileHandle, offset::Integer)

Converts a view-relative offset into an absolute byte position. Returns the absolute byte position (from the beginning of the file) of offset relative to the current view of file.

External links

source

Consistency

MPI.File.syncFunction
MPI.File.sync(fh::FileHandle)

A collective operation causing all previous writes to fh by the calling process to be transferred to the storage device. If other processes have made updates to the storage device, then all such updates become visible to subsequent reads of fh by the calling process.

External links

source
MPI.File.get_atomicityFunction
MPI.File.get_atomicity(file::FileHandle)

Get the consistency option for the file handle fh. If false, it is non-atomic.

External links

source

Data access

Individual pointer

MPI.File.read_all!Function
MPI.File.read_all!(file::FileHandle, data)

Reads current view of file into data. data can be a Buffer, or any object for which Buffer(data) is defined. This is a collective operation, so must be called on all ranks in the communicator on which file was opened.

See also

External links

source
MPI.File.write_allFunction
MPI.File.write_all(file::FileHandle, data)

Writes data to the current view of file. data can be a Buffer, or any object for which Buffer_send(data) is defined. This is a collective operation, so must be called on all ranks in the communicator on which file was opened.

See also

External links

source

Explicit offsets

MPI.File.read_at!Function
MPI.File.read_at!(file::FileHandle, offset::Integer, data)

Reads from file at position offset into data. data can be a Buffer, or any object for which Buffer(data) is defined.

See also

External links

source
MPI.File.read_at_all!Function
MPI.File.read_at_all!(file::FileHandle, offset::Integer, data)

Reads from file at position offset into data. data can be a Buffer, or any object for which Buffer(data) is defined. This is a collective operation, so must be called on all ranks in the communicator on which file was opened.

See also

External links

source
MPI.File.write_at_allFunction
MPI.File.write_at_all(file::FileHandle, offset::Integer, data)

Writes from data to file at position offset. data can be a Buffer, or any object for which Buffer_send(data) is defined. This is a collective operation, so must be called on all ranks in the communicator on which file was opened.

See also

External links

source
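For example, a sketch of a collective explicit-offset write (assuming MPI has been initialised, the default file view so that offsets are in bytes, and that the handle is released with the usual close; the filename is arbitrary):

comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
data = fill(UInt8(rank), 4)                            # 4 bytes per rank
fh   = MPI.File.open(comm, "out.bin"; write=true, create=true)
MPI.File.write_at_all(fh, rank * length(data), data)   # each rank writes its own block
close(fh)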

Shared pointer

MPI.File.read_ordered!Function
MPI.File.read_ordered!(file::FileHandle, data)

Collectively reads in rank order from file using the shared file pointer into data. data can be a Buffer, or any object for which Buffer(data) is defined. This is a collective operation, so must be called on all ranks in the communicator on which file was opened.

See also

External links

source
MPI.File.write_orderedFunction
MPI.File.write_ordered(file::FileHandle, data)

Collectively writes in rank order to file using the shared file pointer from data. data can be a Buffer, or any object for which Buffer(data) is defined. This is a collective operation, so must be called on all ranks in the communicator on which file was opened.

See also

External links

source
MPI.File.seek_sharedFunction
MPI.File.seek_shared(file::FileHandle, offset::Integer, whence::Seek=SEEK_SET)

Updates the shared file pointer according to whence, which has the following possible values:

  • MPI.File.SEEK_SET (default): the pointer is set to offset
  • MPI.File.SEEK_CUR: the pointer is set to the current pointer position plus offset
  • MPI.File.SEEK_END: the pointer is set to the end of file plus offset

This is a collective operation, and must be called with the same value on all processes in the communicator.

External links

source
MPI.File.get_position_sharedFunction
MPI.File.get_position_shared(file::FileHandle)

The current position of the shared file pointer (in etype units) relative to the current view.

External links

source
diff --git a/dev/reference/library/index.html b/dev/reference/library/index.html index caf5721a4..2a81555c7 100644 --- a/dev/reference/library/index.html +++ b/dev/reference/library/index.html @@ -1,2 +1,2 @@ -Library information · MPI.jl

Library information

Constants

MPI.MPI_LIBRARYConstant
MPI_LIBRARY :: String

The current MPI implementation: this is determined by

source

Functions

MPI.versioninfoFunction
MPI.versioninfo(io::IO=stdout)

Print a summary of the current MPI configuration.

source
MPI.has_cudaFunction
MPI.has_cuda()

Check if the MPI implementation is known to have CUDA support. Currently only Open MPI provides a mechanism to check, so it will return false with other implementations (unless overridden). For "IBMSpectrumMPI" it will return true.

This can be overridden by setting the JULIA_MPI_HAS_CUDA environment variable to true or false.

Note

For OpenMPI or OpenMPI-based implementations you first need to call Init().

See also MPI.has_rocm for ROCm support.

source
MPI.has_rocmFunction
MPI.has_rocm()

Check if the MPI implementation is known to have ROCm support. Currently only Open MPI provides a mechanism to check, so it will return false with other implementations (unless overridden).

This can be overridden by setting the JULIA_MPI_HAS_ROCM environment variable to true or false.

See also MPI.has_cuda for CUDA support.

source
MPI.identify_implementationFunction
impl, version = identify_implementation()

Attempt to identify the MPI implementation based on MPI_LIBRARY_VERSION_STRING. Returns a pair of values:

  • impl: a String with the name of the MPI implementation, or "unknown" if it cannot be determined,
  • version: a VersionNumber of the library, or nothing if it cannot be determined.

This function is only intended for internal use. Users should use MPI_LIBRARY and MPI_LIBRARY_VERSION.

source
+Library information · MPI.jl

Library information

Constants

MPI.MPI_LIBRARYConstant
MPI_LIBRARY :: String

The current MPI implementation: this is determined by

source

Functions

MPI.versioninfoFunction
MPI.versioninfo(io::IO=stdout)

Print a summary of the current MPI configuration.

source
MPI.has_cudaFunction
MPI.has_cuda()

Check if the MPI implementation is known to have CUDA support. Currently only Open MPI provides a mechanism to check, so it will return false with other implementations (unless overridden). For "IBMSpectrumMPI" it will return true.

This can be overridden by setting the JULIA_MPI_HAS_CUDA environment variable to true or false.

Note

For OpenMPI or OpenMPI-based implementations you first need to call Init().

See also MPI.has_rocm for ROCm support.

source
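For example, a minimal sketch of querying GPU support (remembering that Open MPI-based implementations require Init() first):

using MPI
MPI.Init()
if MPI.has_cuda()
    @info "MPI library reports CUDA support"
end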
MPI.has_rocmFunction
MPI.has_rocm()

Check if the MPI implementation is known to have ROCm support. Currently only Open MPI provides a mechanism to check, so it will return false with other implementations (unless overridden).

This can be overridden by setting the JULIA_MPI_HAS_ROCM environment variable to true or false.

See also MPI.has_cuda for CUDA support.

source
MPI.identify_implementationFunction
impl, version = identify_implementation()

Attempt to identify the MPI implementation based on MPI_LIBRARY_VERSION_STRING. Returns a pair of values:

  • impl: a String with the name of the MPI implementation, or "unknown" if it cannot be determined,
  • version: a VersionNumber of the library, or nothing if it cannot be determined.

This function is only intended for internal use. Users should use MPI_LIBRARY and MPI_LIBRARY_VERSION.

source
diff --git a/dev/reference/misc/index.html b/dev/reference/misc/index.html index 8a493acfb..ee59567fe 100644 --- a/dev/reference/misc/index.html +++ b/dev/reference/misc/index.html @@ -1,2 +1,2 @@ -Miscellanea · MPI.jl
+Miscellanea · MPI.jl
diff --git a/dev/reference/mpipreferences/index.html b/dev/reference/mpipreferences/index.html index e5e7296f8..f296cd26f 100644 --- a/dev/reference/mpipreferences/index.html +++ b/dev/reference/mpipreferences/index.html @@ -1,9 +1,9 @@ -MPIPreferences.jl · MPI.jl

MPIPreferences.jl

MPIPreferences.jl is a small package based on Preferences.jl for selecting MPI implementations. These choices are compile-time constants, and so any changes will require a Julia restart.

Consts

MPIPreferences.abiConstant
MPIPreferences.abi :: String

The ABI (application binary interface) of the currently selected binary. Supported values are:

  • "MPICH": MPICH-compatible ABI (https://www.mpich.org/abi/)
  • "OpenMPI": Open MPI compatible ABI (Open MPI, IBM Spectrum MPI, Fujitsu MPI)
  • "MicrosoftMPI": Microsoft MPI
  • "MPItrampoline": MPItrampoline
  • "HPE MPT": HPE MPT
source

Changing implementations

MPIPreferences.use_system_binaryFunction
use_system_binary(;
+MPIPreferences.jl · MPI.jl

MPIPreferences.jl

MPIPreferences.jl is a small package based on Preferences.jl for selecting MPI implementations. These choices are compile-time constants, and so any changes will require a Julia restart.

Consts

MPIPreferences.abiConstant
MPIPreferences.abi :: String

The ABI (application binary interface) of the currently selected binary. Supported values are:

  • "MPICH": MPICH-compatible ABI (https://www.mpich.org/abi/)
  • "OpenMPI": Open MPI compatible ABI (Open MPI, IBM Spectrum MPI, Fujitsu MPI)
  • "MicrosoftMPI": Microsoft MPI
  • "MPItrampoline": MPItrampoline
  • "HPE MPT": HPE MPT
source

Changing implementations

MPIPreferences.use_system_binaryFunction
use_system_binary(;
     library_names = ["libmpi", "libmpi_ibm", "msmpi", "libmpich", "libmpi_cray", "libmpitrampoline"],
     extra_paths = String[],
     mpiexec = "mpiexec",
     abi = nothing,
     vendor = nothing,
     export_prefs = false,
    force = true)

Switches the underlying MPI implementation to a system provided one. A restart of Julia is required for the changes to take effect.

Options:

  • library_names: a name or collection of names of the MPI library, passed to Libdl.find_library. If the library isn't in the library search path, you can specify the full path to the library.

  • extra_paths: indicate extra directories where to search for the MPI library, besides the default ones of the dynamic linker.

  • mpiexec: the MPI launcher executable. The default is mpiexec, but some clusters require using the scheduler launcher interface (e.g. srun on Slurm, aprun on PBS). It is also possible to pass a Cmd object to include specific command line options.

  • abi: the ABI of the MPI library. By default this is determined automatically using identify_abi. See abi for currently supported values.

  • vendor: can be either nothing or a vendor name (such as "cray"). If vendor has the value "cray", then the output from cc --cray-print-opts=all is parsed for which libraries are linked by the Cray Compiler Wrappers. Note that if mpi_gtl_* is present, then this .so will be added to the preloads. Also note that the inputs to library_names will be overwritten by the library name used by the compiler wrapper.

  • export_prefs: if true, the preferences are written to Project.toml instead of LocalPreferences.toml.

  • force: if true, the preferences are set even if they are already set.

source
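For illustration, a typical call might look like the sketch below; the library name and the srun launcher are placeholders that depend on the actual cluster setup:

using MPIPreferences
MPIPreferences.use_system_binary(; library_names=["libmpi"], mpiexec="srun")
# restart Julia afterwards for the new preference to take effect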
MPIPreferences.use_jll_binaryFunction
use_jll_binary([binary]; export_prefs=false, force=true)

Switches the underlying MPI implementation to one provided by JLL packages. A restart of Julia is required for the changes to take effect.

Available options are:

  • "MicrosoftMPI_jll" (Only option and default on Windows)
  • "MPICH_jll" (Default on all other platform)
  • "OpenMPI_jll"
  • "MPItrampoline_jll"

The export_prefs option determines whether the preferences being set should be stored within LocalPreferences.toml or Project.toml.

source
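For example, switching to the Open MPI JLL build could be sketched as follows (any of the options listed above can be substituted):

using MPIPreferences
MPIPreferences.use_jll_binary("OpenMPI_jll")
# restart Julia afterwards for the new preference to take effect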

Utils

MPIPreferences.check_unchangedFunction
MPIPreferences.check_unchanged()

Throws an error if the preferences have been modified in the current Julia session, or if they are modified after this function is called.

This should be called from the __init__() function of any package which relies on the values of MPIPreferences.

source

Preferences schema

MPIPreferences utilizes the following keys to store information in the Preferences key-value store.

  • _format: the version number of the schema. Currently only "1.0" is supported.
  • binary: the choice of binary. This should be one of the strings listed in MPIPreferences.binary.

If binary == "system", then the following keys are also required (otherwise they have no effect):

  • libmpi: the filename or path of the MPI dynamic library.
  • abi: The ABI of the MPI implementation. This should be one of the strings listed in MPIPreferences.abi.
  • mpiexec: either
    • a string corresponding to the MPI launcher executable
    • an array of strings, with the first entry being the executable and remaining entries being additional flags that should be used with the executable.
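To inspect which preferences are currently active, the exported constants can be queried directly; this is a small sketch rather than part of the schema itself:

using MPIPreferences
@show MPIPreferences.binary   # e.g. "system" or "MPICH_jll"
@show MPIPreferences.abi      # e.g. "MPICH" or "OpenMPI"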
diff --git a/dev/reference/onesided/index.html b/dev/reference/onesided/index.html

One-sided communication

MPI.Win_createFunction
MPI.Win_create(base[, size::Integer, disp_unit::Integer], comm::Comm; infokws...)

Create a window over the array base, returning a Win object used by these processes to perform RMA operations. This is a collective call over comm.

  • size is the size of the window in bytes (default = sizeof(base))
  • disp_unit is the size of address scaling in bytes (default = sizeof(eltype(base)))
  • infokws are info keys providing optimization hints to the runtime.

MPI.free should be called on the Win object once operations have been completed.

source
MPI.Win_create_dynamicFunction
MPI.Win_create_dynamic(comm::Comm; infokws...)

Create a dynamic window, returning a Win object used by these processes to perform RMA operations.

This is a collective call over comm.

infokws are info keys providing optimization hints.

MPI.free should be called on the Win object once operations have been completed.

source
MPI.Win_allocate_sharedFunction
win, array = MPI.Win_allocate_shared(Array{T}, dims, comm::Comm; infokws...)

Create and allocate a shared memory window for objects of type T of dimension dims (either an integer or tuple of integers), returning a Win and the Array{T} attached to the local process.

This is a collective call over comm, but dims can differ for each call (and can be zero).

Use MPI.Win_shared_query to obtain the Array attached to a different process in the same shared memory space.

infokws are info keys providing optimization hints.

MPI.free should be called on the Win object once operations have been completed.

source
MPI.Win_shared_queryFunction
array = Win_shared_query(Array{T}, [dims,] win; rank)

Obtain the shared memory allocated by Win_allocate_shared of the process rank in win. Returns an Array{T} of size dims (being a Vector{T} if no dims argument is provided).

source
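A minimal shared-memory sketch, assuming all ranks of comm live on the same node (illustrative only, not part of the docstring):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
win, arr = MPI.Win_allocate_shared(Array{Float64}, 4, comm)   # 4 elements per rank
arr .= MPI.Comm_rank(comm)                                    # fill the local segment
MPI.Barrier(comm)
seg0 = MPI.Win_shared_query(Array{Float64}, win; rank=0)      # view rank 0's segment
MPI.Barrier(comm)
MPI.free(win)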
MPI.Win_flushFunction
Win_flush(win::Win; rank)

Completes all outstanding RMA operations initiated by the calling process to the target rank on the specified window.

External links

source
MPI.Win_lockFunction
Win_lock(win::Win; rank::Integer, type=:exclusive/:shared, nocheck=false)

Starts an RMA access epoch. The window at the process with rank rank can be accessed by RMA operations on win during that epoch.

Multiple RMA access epochs (with calls to MPI.Win_lock) can occur simultaneously; however, each access epoch must target a different process.

Accesses that are protected by an exclusive lock (type=:exclusive) will not be concurrent at the window site with other accesses to the same window that are lock protected. Accesses that are protected by a shared lock (type=:shared) will not be concurrent at the window site with accesses protected by an exclusive lock to the same window.

If nocheck=true, no other process holds, or will attempt to acquire, a conflicting lock, while the caller holds the window lock. This is useful when mutual exclusion is achieved by other means, but the coherence operations that may be attached to the lock and unlock calls are still required.

External links

source
MPI.Get!Function
Get!(origin, win::Win; rank::Integer, disp::Integer=0)

Copies data from the memory window win on the remote rank rank, with displacement disp, into origin using remote memory access. origin can be a Buffer, or any object for which Buffer(origin) is defined.

External links

source
MPI.Put!Function
Put!(origin, win::Win; rank::Integer, disp::Integer=0)

Copies data from origin into memory window win on remote rank rank at displacement disp using remote memory access. origin can be a Buffer, or any object for which Buffer_send(origin) is defined.

External links

source
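A sketch of a passive-target put combining MPI.Win_create, MPI.Win_lock and MPI.Put! from this page, assuming at least two ranks; the closing MPI.Win_unlock call is assumed to be the keyword-style counterpart of MPI.Win_lock (it is not documented in this section):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
A = zeros(Float64, 10)
win = MPI.Win_create(A, comm)
if MPI.Comm_rank(comm) == 0
    MPI.Win_lock(win; rank=1, type=:exclusive)
    MPI.Put!(ones(10), win; rank=1, disp=0)    # write into rank 1's window
    MPI.Win_unlock(win; rank=1)                # assumed signature: Win_unlock(win; rank)
end
MPI.Barrier(comm)
MPI.free(win)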
MPI.Accumulate!Function
Accumulate!(origin, op, win::Win; rank::Integer, disp::Integer=0)

Combine the content of the origin buffer into the target buffer (specified by win and displacement disp) with reduction operator op on the remote rank rank using remote memory access.

origin can be a Buffer, or any object for which Buffer_send(origin) is defined. op can be any predefined Op (custom operators are not supported).

External links

source
MPI.Get_accumulate!Function
Get_accumulate!(origin, result, target_rank::Integer, target_disp::Integer, op::Op, win::Win)

Combine the content of the origin buffer into the target buffer (specified by win and displacement target_disp) with reduction operator op on the remote rank target_rank using remote memory access. Get_accumulate also returns the content of the target buffer before accumulation into the result buffer.

origin can be a Buffer, or any object for which Buffer_send(origin) is defined, result can be a Buffer, or any object for which Buffer(result) is defined. op can be any predefined Op (custom operators are not supported).

External links

source
diff --git a/dev/reference/pointtopoint/index.html b/dev/reference/pointtopoint/index.html

Point-to-point communication

Types

MPI.AbstractRequestType
MPI.AbstractRequest

An abstract type for Julia objects wrapping MPI request objects, which represent non-blocking MPI communication operations. The following implementations are provided in MPI.jl:

  • Request: this is the default request type.
  • UnsafeRequest: similar to Request, but does not maintain a reference to the underlying communication buffer.
  • MultiRequestItem: created by calling getindex on a MultiRequest / UnsafeMultiRequest object, which efficiently stores a collection of requests.

How request objects are used

A request object can be passed to non-blocking communication operations, such as MPI.Isend and MPI.Irecv!. If no object is provided, then an MPI.Request is used.

The status of a Request can be checked by the Wait and Test functions or their multiple-request variants, which will deallocate the request once it is determined to be complete.

Alternatively, it will be deallocated by calling MPI.free or at finalization, meaning that it is safe to ignore the request objects if the status of the communication can be checked by other means.

In certain cases, the operation can also be cancelled by Cancel!.

Implementing new request types

Subtypes R <: AbstractRequest should define the methods for the following functions:

  • C conversion functions to MPI_Request and Ptr{MPI_Request}:
    • Base.cconvert(::Type{MPI_Request}, req::R) / Base.unsafe_convert(::Type{MPI_Request}, req::R)
    • Base.cconvert(::Type{Ptr{MPI_Request}}, req::R) / Base.unsafe_convert(::Type{Ptr{MPI_Request}}, req::R)
  • setbuffer!(req::R, val): keep a reference to the communication buffer val. If val == nothing, then clear the reference.
source
MPI.RequestType
MPI.Request()

The default MPI Request object, representing a non-blocking communication. This also contains a reference to the buffer used in the communication to ensure it isn't garbage-collected during communication.

See AbstractRequest for more information.

source
MPI.UnsafeRequestType
MPI.UnsafeRequest()

Similar to MPI.Request, but does not maintain a reference to the underlying communication buffer. This may improve performance by reducing memory allocations.

Warning

The user should ensure that another reference to the communication buffer is maintained so that it is not cleaned up by the garbage collector before the communication operation is complete.

For example:

buf = MPI.Buffer(zeros(10))
GC.@preserve buf begin
    req = MPI.Isend(buf, comm, MPI.UnsafeRequest(); dest=1)
    # ...
    MPI.Wait(req)
end

source
MPI.MultiRequestType
MPI.MultiRequest(n::Integer=0)

A collection of MPI Requests. This is useful when operating on multiple MPI requests at the same time. MultiRequest objects can be passed directly to MPI.Waitall, MPI.Testall, etc.

req[i] will return a MultiRequestItem which adheres to the AbstractRequest interface.

Usage

reqs = MPI.MultiRequest(n)
 for i = 1:n
     MPI.Isend(buf, comm, reqs[i]; dest=dest[i])
 end
MPI.Waitall(reqs)
source
MPI.StatusType
MPI.Status

The status of an MPI receive communication. It has 3 accessible fields

  • source: source of the received message
  • tag: tag of the received message
  • error: error code. This is only set if a function returns multiple statuses.

Additionally, the accessor function MPI.Get_count can be used to determine the number of entries received.

source

Accessors

MPI.Get_countFunction
MPI.Get_count(status::Status, T)

The number of entries received. T should match the argument provided by the receive call that set the status variable.

If the number of entries received exceeds the limits of the count parameter, then it returns MPI_UNDEFINED.

External links

source
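For instance, a sketch combining MPI.Recv! (documented below) with a returned Status, assuming rank 0 sends at most 100 Float64 values:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
buf = Vector{Float64}(undef, 100)
_, status = MPI.Recv!(buf, comm, MPI.Status; source=0, tag=0)
n = MPI.Get_count(status, Float64)   # number of entries actually received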

Constants

MPI.PROC_NULLConstant
MPI.PROC_NULL

A dummy value that can be used instead of a rank wherever a source or a destination argument is required in a call. A send to MPI.PROC_NULL succeeds and returns as soon as possible; a receive from MPI.PROC_NULL succeeds and returns as soon as possible with no change to the receive buffer.

source
MPI.ANY_SOURCEConstant
MPI.ANY_SOURCE

A wild card value for receive or probe operations that matches any source rank.

source
MPI.ANY_TAGConstant
MPI.ANY_TAG

A wild card value for receive or probe operations that matches any tag.

source

Blocking communication

MPI.SendFunction
Send(buf, comm::Comm; dest::Integer, tag::Integer=0)

Perform a blocking send from the buffer buf to MPI rank dest of communicator comm using the message tag tag.

Send(obj, comm::Comm; dest::Integer, tag::Integer=0)

Complete a blocking send of an isbits object obj to MPI rank dest of communicator comm with the message tag tag.

External links

source
MPI.sendFunction
send(obj, comm::Comm; dest::Integer, tag::Integer=0)

Complete a blocking send using a serialized version of obj to MPI rank dest of communicator comm with the message tag tag.

source
MPI.Recv!Function
data = Recv!(recvbuf, comm::Comm;
         source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)
 data, status = Recv!(recvbuf, comm::Comm, MPI.Status;
        source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)

Completes a blocking receive into the buffer recvbuf from MPI rank source of communicator comm with the message tag tag.

recvbuf can be a Buffer, or any object for which Buffer(recvbuf) is defined.

Optionally returns the Status object of the receive.

See also

External links

source
MPI.RecvFunction
data = Recv(::Type{T}, comm::Comm;
         source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)
 data, status = Recv(::Type{T}, comm::Comm, MPI.Status;
        source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)

Completes a blocking receive of a single isbits object of type T from MPI rank source of communicator comm with the message tag tag.

Returns a tuple of the object of type T and optionally the Status of the receive.

See also

External links

source
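A minimal blocking exchange sketch using Send and Recv, assuming MPI.Init() has been called and the communicator has at least two ranks:

comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
if rank == 0
    MPI.Send(3.14, comm; dest=1, tag=0)
elseif rank == 1
    x = MPI.Recv(Float64, comm; source=0, tag=0)
end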
MPI.recvFunction
obj = recv(comm::Comm;
         source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)
 obj, status = recv(comm::Comm, MPI.Status;
        source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)

Completes a blocking receive of a serialized object from MPI rank source of communicator comm with the message tag tag.

Returns the deserialized object and optionally the Status of the receive.

source
MPI.Sendrecv!Function
data = Sendrecv!(sendbuf, recvbuf, comm;
         dest::Integer, sendtag::Integer=0, source::Integer=MPI.ANY_SOURCE, recvtag::Integer=MPI.ANY_TAG)
 data, status = Sendrecv!(sendbuf, recvbuf, comm, MPI.Status;
        dest::Integer, sendtag::Integer=0, source::Integer=MPI.ANY_SOURCE, recvtag::Integer=MPI.ANY_TAG)

Complete a blocking send-receive operation over the MPI communicator comm. Send sendbuf to the MPI rank dest using message tag sendtag, and receive from MPI rank source into the buffer recvbuf using message tag recvtag. Return a Status object.

External links

source
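A sketch of a ring shift with Sendrecv!, where each rank sends to its right neighbor and receives from its left neighbor (MPI.Init() assumed):

comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nproc = MPI.Comm_size(comm)
sendbuf = fill(Float64(rank), 4)
recvbuf = similar(sendbuf)
MPI.Sendrecv!(sendbuf, recvbuf, comm;
    dest=mod(rank+1, nproc), source=mod(rank-1, nproc))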

Non-blocking communication

Initiation

MPI.IsendFunction
Isend(data, comm::Comm[, req::AbstractRequest = Request()]; dest::Integer, tag::Integer=0)

Starts a nonblocking send of data to MPI rank dest of communicator comm with the message tag tag.

data can be a Buffer, or any object for which Buffer_send is defined.

Returns the AbstractRequest object for the nonblocking send.

External links

source
MPI.isendFunction
isend(obj, comm::Comm[, req::AbstractRequest = Request()]; dest::Integer, tag::Integer=0)

Starts a nonblocking send of a serialized version of obj to MPI rank dest of communicator comm with the message tag tag.

Returns the communication Request for the nonblocking send.

source
MPI.Irecv!Function
req = Irecv!(recvbuf, comm::Comm[, req::AbstractRequest = Request()];
        source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)

Starts a nonblocking receive into the buffer data from MPI rank source of communicator comm with the message tag tag.

data can be a Buffer, or any object for which Buffer(data) is defined.

Returns the AbstractRequest object for the nonblocking receive.

External links

source
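A non-blocking ring exchange sketch using Isend, Irecv! and Waitall (documented below), again assuming MPI.Init() has been called:

comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nproc = MPI.Comm_size(comm)
sendbuf = fill(Float64(rank), 8)
recvbuf = similar(sendbuf)
rreq = MPI.Irecv!(recvbuf, comm; source=mod(rank-1, nproc), tag=0)
sreq = MPI.Isend(sendbuf, comm; dest=mod(rank+1, nproc), tag=0)
MPI.Waitall([rreq, sreq])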

Completion

MPI.TestFunction
flag = Test(req::AbstractRequest)
flag, status = Test(req::AbstractRequest, Status)

Check if the request req is complete. If so, the request is deallocated and flag = true is returned. Otherwise flag = false.

The Status argument additionally returns the Status of the completed request.

External links

source
MPI.TestallFunction
flag = Testall(reqs::AbstractVector{Request}[, statuses::Vector{Status}])
flag, statuses = Testall(reqs::AbstractVector{Request}, Status)

Check if all active requests in the array reqs are complete. If so, the requests are deallocated and true is returned. Otherwise no requests are modified, and false is returned.

The optional statuses or Status argument can be used to obtain the return Status of each request.

See also

External links

source
MPI.TestanyFunction
flag, idx = Testany(reqs::AbstractVector{Request}[, status::Ref{Status}])
flag, idx, status = Testany(reqs::AbstractVector{Request}, Status)

Checks if any one of the requests in the array reqs is complete.

If one or more requests are complete, then one is chosen arbitrarily and deallocated; flag = true is returned along with its (1-based) index idx.

If there are no completed requests, then flag = false and idx = nothing is returned.

If there are no active requests, flag = true and idx = nothing.

The optional status argument can be used to obtain the return Status of the request.

See also

External links

source
MPI.TestsomeFunction
inds = Testsome(reqs::AbstractVector{Request}[, statuses::Vector{Status}])

Similar to Waitsome except that if no operations have completed it will return an empty array.

If there are no active requests, then the function returns nothing.

The optional statuses argument can be used to obtain the return Status of each completed request.

See also

External links

source
MPI.WaitFunction
Wait(req::AbstractRequest)
status = Wait(req::AbstractRequest, Status)

Block until the request req is complete and deallocated.

The Status argument returns the Status of the completed request.

External links

source
Base.waitMethod
Base.wait(req::MPI.Request)

Wait for an MPI request to complete. Unlike MPI.Wait, it will yield to other Julia tasks resulting in a cooperative wait.

source
MPI.WaitallFunction
Waitall(reqs::AbstractVector{Request}[, statuses::Vector{Status}])
statuses = Waitall(reqs::AbstractVector{Request}, Status)

Block until all active requests in the array reqs are complete.

The optional statuses or Status argument can be used to obtain the return Status of each request.

See also

External links

source
MPI.WaitanyFunction
i = Waitany(reqs::AbstractVector{Request}[, status::Ref{Status}])
i, status = Waitany(reqs::AbstractVector{Request}, Status)

Blocks until one of the requests in the array reqs is complete: if more than one is complete, one is chosen arbitrarily. The request is deallocated and the (1-based) index i of the completed request is returned.

If there are no active requests, then i = nothing.

The optional status argument can be used to obtain the return Status of the request.

See also

External links

source
MPI.WaitsomeFunction
inds = Waitsome(reqs::AbstractVector{Request}[, statuses::Vector{Status}])

Block until at least one of the active requests in the array reqs is complete. The completed requests are deallocated, and an array inds of their indices in reqs is returned.

If there are no active requests, then inds = nothing.

The optional statuses argument can be used to obtain the return Status of each completed request.

See also

External links

source

Probe/Cancel

MPI.isnullFunction
isnull(req::AbstractRequest)

Test whether req is a null request.

source
MPI.Cancel!Function
Cancel!(req::Request)

Marks a pending Irecv! operation for cancellation (cancelling an Isend, while supported in some implementations, is deprecated as of MPI 3.1). Note that the request is not deallocated, and can still be queried using the test or wait functions.

External links

source
MPI.IprobeFunction
ismsg = Iprobe(comm::Comm;
         source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)
 ismsg, status = Iprobe(comm::Comm, MPI.Status;
        source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)

Checks if there is a message that can be received matching source, tag and comm. If so, returns ismsg = true. The Status argument additionally returns the Status of the completed request.

External links

source
MPI.ProbeFunction
Probe(comm::Comm;
         source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)
 status = Probe(comm::Comm, MPI.Status;
    source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)

Blocks until there is a message that can be received matching source, tag and comm. Optionally returns the corresponding Status object.

External links

source
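A sketch combining Probe, Get_count and Recv! to receive a message whose length is not known in advance (MPI.Init() and a matching sender assumed):

comm = MPI.COMM_WORLD
status = MPI.Probe(comm, MPI.Status; source=MPI.ANY_SOURCE, tag=0)
count = MPI.Get_count(status, Int64)
buf = Vector{Int64}(undef, count)
MPI.Recv!(buf, comm; source=status.source, tag=status.tag)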

Persistent requests

MPI.Send_initFunction
Send_init(buf, comm::MPI.Comm[, req::AbstractRequest = Request()];
    dest, tag=0)

Allocate a persistent send request, returning an AbstractRequest object. Use Start or Startall to start the communication operation, and free to deallocate the request.

External links

source
MPI.Recv_initFunction
Recv_init(buf, comm::MPI.Comm[, req::AbstractRequest = Request()];
    source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG)

Allocate a persistent receive request, returning an AbstractRequest object. Use Start or Startall to start the communication operation, and free to deallocate the request.

External links

source
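A persistent-receive sketch; MPI.Start is the starting call referred to above, MPI.Send_init would be used analogously on the sending side, and a matching sender on rank 0 is assumed:

comm = MPI.COMM_WORLD
buf = zeros(Float64, 16)
req = MPI.Recv_init(buf, comm; source=0, tag=7)
for iter in 1:10
    MPI.Start(req)   # start the persistent receive
    MPI.Wait(req)    # complete it; req stays allocated for reuse
end
MPI.free(req)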

Matching probes and receives

MPI.MprobeFunction
msg = MPI.Mprobe(comm::MPI.Comm;
     source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)
 msg, status = MPI.Mprobe(comm::MPI.Comm, MPI.Status;
    source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)

Matching blocking probe. Similar to MPI.Probe, except that it also returns msg, an MPI.Message object.

Blocks until there is a message that can be received matching source, tag and comm, returning a Message object msg, which must be received by either MPI.Mrecv! or MPI.Imrecv!.

The Status argument additionally returns the Status of the completed request.

External links

source
MPI.ImprobeFunction
ismsg, msg = MPI.Improbe(comm::MPI.Comm;
     source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)
 ismsg, msg, status = MPI.Improbe(comm::MPI.Comm, MPI.Status;
    source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)

Matching non-blocking probe. Similar to MPI.Iprobe, except that it also returns msg, an MPI.Message object.

Checks if there is a message that can be received matching source, tag and comm. If so, returns ismsg = true, and a Message object msg, which must be received by either MPI.Mrecv! or MPI.Imrecv!. Otherwise msg is set to be a null Message.

The Status argument additionally returns the Status of the completed request.

External links

source
MPI.Mrecv!Function
data = MPI.Mrecv!(recvbuf, msg::MPI.Message)
data, status = MPI.Mrecv!(recvbuf, msg::MPI.Message, MPI.Status)

Completes a blocking receive matched by a matching probe operation into the buffer recvbuf, and the Message msg.

recvbuf can be a Buffer, or any object for which Buffer(recvbuf) is defined.

Optionally returns the Status object of the receive.

External links

source
MPI.Imrecv!Function
req = MPI.Imrecv!(recvbuf, msg::MPI.Message[, req::AbstractRequest=Request()])

Starts a nonblocking receive matched by a matching probe operation into the buffer recvbuf, and the Message msg.

recvbuf can be a Buffer, or any object for which Buffer(recvbuf) is defined.

Returns req, an AbstractRequest object for the nonblocking receive.

External links

source
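A matched-probe sketch: probe for a message, size the buffer from the returned Status, then receive it via the Message handle (MPI.Init() and a matching sender assumed):

comm = MPI.COMM_WORLD
msg, status = MPI.Mprobe(comm, MPI.Status; source=MPI.ANY_SOURCE, tag=0)
count = MPI.Get_count(status, Float64)
buf = Vector{Float64}(undef, count)
MPI.Mrecv!(buf, msg)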
diff --git a/dev/reference/topology/index.html b/dev/reference/topology/index.html

Topology

Cartesian

MPI.Dims_createFunction
newdims = Dims_create(nnodes::Integer, dims)

A convenience function for selecting a balanced Cartesian grid of a total of nnodes nodes, for example to use with MPI.Cart_create.

dims is an array or tuple of integers specifying the number of nodes in each dimension. The function returns an array newdims of the same length, such that newdims[i] = dims[i] if dims[i] is non-zero, prod(newdims) == nnodes, and the values of newdims are as close to each other as possible.

nnodes should be divisible by the product of the non-zero entries of dims.

External links

source
MPI.Cart_createFunction
comm_cart = Cart_create(comm::Comm, dims; periodic=map(_->false, dims), reorder=false)

Create new MPI communicator with Cartesian topology information attached.

dims is an array or tuple of integers specifying the number of MPI processes in each coordinate direction, and periodic is an array or tuple of Bools indicating the periodicity of each coordinate. prod(dims) must be less than or equal to the size of comm; if it is smaller, then some processes are returned a null communicator.

If reorder == false then the rank of each process in the new group is identical to its rank in the old group, otherwise the function may reorder the processes.

See also MPI.Dims_create.

External links

source
MPI.Cart_getFunction
dims, periods, coords = Cart_get(comm::Comm)

Obtain information on the Cartesian topology of dimension N underlying the communicator comm. This is specified by two Cint arrays of N elements for the number of processes and periodicity properties along each Cartesian dimension. A third Cint array is returned, containing the Cartesian coordinates of the calling process.

External links

source
MPI.Cart_coordsFunction
coords = Cart_coords(comm::Comm, rank::Integer=Comm_rank(comm))

Determine coordinates of a process with rank rank in the Cartesian communicator comm. If no rank is provided, it returns the coordinates of the current process.

Returns an integer array of the 0-based coordinates. The inverse of Cart_rank.

External links

source
MPI.Cart_rankFunction
rank = Cart_rank(comm::Comm, coords)

Determine process rank in communicator comm with Cartesian structure. The coords array specifies the 0-based Cartesian coordinates of the process. This is the inverse of MPI.Cart_coords

External links

source
MPI.Cart_shiftFunction
rank_source, rank_dest = Cart_shift(comm::Comm, direction::Integer, disp::Integer)

Return the source and destination ranks associated to a shift along a given direction.

External links

source
MPI.Cart_subFunction
comm_sub = Cart_sub(comm::Comm, remain_dims)

Create a lower-dimensional Cartesian communicator from an existing Cartesian topology.

remain_dims should be a boolean vector specifying the dimensions that should be kept in the generated subgrid.

External links

source
MPI.Cartdim_getFunction
ndims = Cartdim_get(comm::Comm)

Return number of dimensions of the Cartesian topology associated with the communicator comm.

External links

source
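A sketch tying the Cartesian calls together: build a balanced 2D periodic grid, then query the local coordinates and neighbors (the 0-based direction argument follows the MPI convention; MPI.Init() assumed):

comm = MPI.COMM_WORLD
nprocs = MPI.Comm_size(comm)
dims = MPI.Dims_create(nprocs, (0, 0))                 # let MPI choose a balanced 2D grid
comm_cart = MPI.Cart_create(comm, dims; periodic=(true, true))
coords = MPI.Cart_coords(comm_cart)
left, right = MPI.Cart_shift(comm_cart, 0, 1)          # neighbors along the first dimension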

Graph topology

MPI.Dist_graph_createFunction
graph_comm = Dist_graph_create(comm::Comm, sources::Vector{Cint}, degrees::Vector{Cint}, destinations::Vector{Cint}; weights::Union{Vector{Cint}, Unweighted, WeightsEmpty}=UNWEIGHTED, reorder=false, infokws...)

Create a new communicator from a given directed graph topology, described by incoming and outgoing edges on an existing communicator.

Arguments

  • comm::Comm: The communicator on which the distributed graph topology should be induced.
  • sources::Vector{Cint}: An array with the ranks for which this call will specify outgoing edges.
  • degrees::Vector{Cint}: An array with the number of outgoing edges for each entry in the sources array.
  • destinations::Vector{Cint}: An array containing the destination nodes for the source nodes in the sources array, of length sum(degrees).
  • weights::Union{Vector{Cint}, Unweighted, WeightsEmpty}: The edge weights of the specified edges. The default is MPI.UNWEIGHTED.
  • reorder::Bool=false: If set true, then the MPI implementation can reorder the source and destination indices.

Example

We can generate a ring graph 1 --> 2 --> ... --> N --> 1, where N is the number of ranks in the communicator, as follows

julia> rank = MPI.Comm_rank(comm);
 julia> N = MPI.Comm_size(comm);
 julia> sources = Cint[rank];
 julia> degrees = Cint[1];
 julia> destinations = Cint[mod(rank-1, N)];
julia> graph_comm = Dist_graph_create(comm, sources, degrees, destinations)

External links

source
MPI.Dist_graph_create_adjacentFunction
graph_comm = Dist_graph_create_adjacent(comm::Comm,
     sources::Vector{Cint}, destinations::Vector{Cint};
     source_weights::Union{Vector{Cint}, Unweighted, WeightsEmpty}=UNWEIGHTED, destination_weights::Union{Vector{Cint}, Unweighted, WeightsEmpty}=UNWEIGHTED,
     reorder=false, infokws...)

Create a new communicator from a given directed graph topology, described by local incoming and outgoing edges on an existing communicator.

Arguments

  • comm::Comm: The communicator on which the distributed graph topology should be induced.
  • sources::Vector{Cint}: The local, incoming edges on the rank of the calling process.
  • destinations::Vector{Cint}: The local, outgoing edges on the rank of the calling process.
  • source_weights::Union{Vector{Cint}, Unweighted, WeightsEmpty}: The edge weights of the local, incoming edges. The default is MPI.UNWEIGHTED.
  • destination_weights::Union{Vector{Cint}, Unweighted, WeightsEmpty}: The edge weights of the local, outgoing edges. The default is MPI.UNWEIGHTED.
  • reorder::Bool=false: If set to true, then the MPI implementation can reorder the source and destination indices.

Example

We can generate a ring graph 1 --> 2 --> ... --> N --> 1, where N is the number of ranks in the communicator, as follows

julia> rank = MPI.Comm_rank(comm);
julia> N = MPI.Comm_size(comm);
julia> sources = Cint[mod(rank-1, N)];
julia> destinations = Cint[mod(rank+1, N)];
julia> graph_comm = Dist_graph_create_adjacent(comm, sources, destinations);

External links

source
MPI.Dist_graph_neighbors_countFunction
indegree, outdegree, weighted = Dist_graph_neighbors_count(graph_comm::Comm)

Return the number of in and out edges for the calling process in a distributed graph topology, and a flag indicating whether the distributed graph is weighted.

Arguments

  • graph_comm::Comm: The communicator of the distributed graph topology.

Example

Assume the following graph, 0 <--> 1 --> 2, which has no weights on its edges. The process with rank 1 will then obtain the following result from calling the function:

julia> Dist_graph_neighbors_count(graph_comm)
(1, 2, false)

External links

source
MPI.Dist_graph_neighbors!Function
Dist_graph_neighbors!(graph_comm::MPI.Comm,
    sources::Vector{Cint}, source_weights::Union{Vector{Cint}, Unweighted},
    destinations::Vector{Cint}, destination_weights::Union{Vector{Cint}, Unweighted},
 )
@@ -25,4 +25,4 @@
 julia> destinations
 [0,2]
 julia> destination_weights
[3,4]

Note that the edge between ranks 0 and 1 can have a different weight depending on whether it is the incoming edge 0 --> 1 or the outgoing one 0 <-- 1.

See also

External links

source
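A minimal sketch of querying the neighbours into preallocated buffers, assuming the unweighted ring graph from the Dist_graph_create_adjacent example above:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm); N = MPI.Comm_size(comm)
# unweighted ring graph, as in the Dist_graph_create_adjacent example
graph_comm = MPI.Dist_graph_create_adjacent(comm, Cint[mod(rank-1, N)], Cint[mod(rank+1, N)])
indegree, outdegree, weighted = MPI.Dist_graph_neighbors_count(graph_comm)
sources      = Vector{Cint}(undef, indegree)
destinations = Vector{Cint}(undef, outdegree)
# the ring graph carries no weights, so MPI.UNWEIGHTED is passed for both weight arguments
MPI.Dist_graph_neighbors!(graph_comm, sources, MPI.UNWEIGHTED, destinations, MPI.UNWEIGHTED)
# each rank now has sources == [mod(rank-1, N)] and destinations == [mod(rank+1, N)]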
MPI.Dist_graph_neighborsFunction
sources, source_weights, destinations, destination_weights = Dist_graph_neighbors(graph_comm::MPI.Comm)

Return (sources, source_weights, destinations, destination_weights) of the graph communicator graph_comm. For unweighted graphs source_weights and destination_weights are returned as MPI.UNWEIGHTED.

This function is a wrapper around MPI.Dist_graph_neighbors_count and MPI.Dist_graph_neighbors! that automatically handles the allocation of the result vectors.

source
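A minimal sketch of the allocating variant, again assuming the unweighted ring graph from the examples above:

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm); N = MPI.Comm_size(comm)
graph_comm = MPI.Dist_graph_create_adjacent(comm, Cint[mod(rank-1, N)], Cint[mod(rank+1, N)])
# no counting or preallocation is needed; the result vectors are allocated internally
sources, source_weights, destinations, destination_weights = MPI.Dist_graph_neighbors(graph_comm)
# the graph is unweighted, so source_weights and destination_weights are MPI.UNWEIGHTED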
diff --git a/dev/refindex/index.html b/dev/refindex/index.html index 0c28f9632..ee349cfae 100644 --- a/dev/refindex/index.html +++ b/dev/refindex/index.html @@ -1,2 +1,2 @@ -Index · MPI.jl

Index


diff --git a/dev/search_index.js b/dev/search_index.js index e536f2069..40624c88b 100644 --- a/dev/search_index.js +++ b/dev/search_index.js @@ -1,3 +1,3 @@ var documenterSearchIndex = {"docs": -[{"location":"examples/07-rma_active/","page":"Active RMA","title":"Active RMA","text":"EditURL = \"https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/07-rma_active.jl\"","category":"page"},{"location":"examples/07-rma_active/#Active-RMA","page":"Active RMA","title":"Active RMA","text":"","category":"section"},{"location":"examples/07-rma_active/","page":"Active RMA","title":"Active RMA","text":"# examples/07-rma_active.jl\n# This example demonstrates one-sided communication,\n# specifically activate Remote Memory Access (RMA)\n\nusing MPI\n\nMPI.Init()\nconst world_sz = MPI.Comm_size(MPI.COMM_WORLD)\nconst rank = MPI.Comm_rank(MPI.COMM_WORLD)\n\n# allocate memory\nall_ranks = fill(-1, world_sz)\n# create RMA window on all ranks\nwin = MPI.Win_create(all_ranks, MPI.COMM_WORLD)\n\n#### first, let's MPI.Put on all ranks\n\n# start the communication epoch\nMPI.Win_fence(0, win)\n# each rank writes to exposed windows of rank 0\n# Signature: obj, target_rank, target_displacement, window\nMPI.Put(rank, 0, rank, win)\n# finish the communication epoch\nMPI.Win_fence(0, win)\n# print window content on all ranks\nfor j in 0:world_sz-1\n if rank == j\n println(\"After Put, Rank $rank:\")\n @show all_ranks\n end\n MPI.Barrier(MPI.COMM_WORLD)\nend\nrank == 0 && println()\n\n#### now, let's MPI.Get on all ranks\n\n# start the communication epoch\nMPI.Win_fence(0, win)\n# each rank reads from exposed windows of rank 0\nMPI.Get(all_ranks, 0, win)\n# finish the communication epoch\nMPI.Win_fence(0, win)\n# print window content on all ranks\nfor j in 0:world_sz-1\n if rank == j\n println(\"After Get, Rank $rank:\")\n @show all_ranks\n end\n MPI.Barrier(MPI.COMM_WORLD)\nend\n\n# free window\nMPI.free(win)","category":"page"},{"location":"examples/07-rma_active/","page":"Active RMA","title":"Active RMA","text":"> mpiexecjl -n 4 julia examples/07-rma_active.jl\nAfter Put, Rank 0:\nall_ranks = [0, 1, 2, 3]\nAfter Put, Rank 1:\nall_ranks = [-1, -1, -1, -1]\nAfter Put, Rank 2:\nall_ranks = [-1, -1, -1, -1]\nAfter Put, Rank 3:\nall_ranks = [-1, -1, -1, -1]\n\nAfter Get, Rank 0:\nall_ranks = [0, 1, 2, 3]\nAfter Get, Rank 1:\nall_ranks = [0, 1, 2, 3]\nAfter Get, Rank 2:\nall_ranks = [0, 1, 2, 3]\nAfter Get, Rank 3:\nall_ranks = [0, 1, 2, 3]","category":"page"},{"location":"examples/08-rma_passive/","page":"Passive RMA","title":"Passive RMA","text":"EditURL = \"https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/08-rma_passive.jl\"","category":"page"},{"location":"examples/08-rma_passive/#Passive-RMA","page":"Passive RMA","title":"Passive RMA","text":"","category":"section"},{"location":"examples/08-rma_passive/","page":"Passive RMA","title":"Passive RMA","text":"# examples/08-rma_passive.jl\n# This example demonstrates one-sided communication,\n# specifically passive Remote Memory Access (RMA)\n\nusing MPI\n\nMPI.Init()\nconst world_sz = MPI.Comm_size(MPI.COMM_WORLD)\nconst rank = MPI.Comm_rank(MPI.COMM_WORLD)\n\n# allocate memory\nall_ranks = fill(-1, world_sz)\n# create RMA window on all ranks\nwin = MPI.Win_create(all_ranks, MPI.COMM_WORLD)\n\n# let each rank write its rank number into window\nif rank != 0\n # lock window (MPI.LOCK_SHARED works as well)\n MPI.Win_lock(MPI.LOCK_EXCLUSIVE, 0, 0, win)\n # each rank writes to exposed windows of rank 0\n # Signature: obj, target_rank, 
target_displacement, window\n MPI.Put(rank, 0, rank, win)\n # finish the communication epoch\n MPI.Win_unlock(0, win)\nelse\n all_ranks[1] = 0\nend\n\n# wait with printing\nMPI.Win_fence(0, win)\n\n# print window content on all ranks\nif rank == 0\n println(\"After Put with lock / unlock, window content on rank 0:\")\n @show all_ranks\nend\n\n# free window\nMPI.free(win)","category":"page"},{"location":"examples/08-rma_passive/","page":"Passive RMA","title":"Passive RMA","text":"> mpiexecjl -n 4 julia examples/08-rma_passive.jl\nAfter Put with lock / unlock, window content on rank 0:\nall_ranks = [0, 1, 2, 3]","category":"page"},{"location":"examples/04-sendrecv/","page":"Send/receive","title":"Send/receive","text":"EditURL = \"https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/04-sendrecv.jl\"","category":"page"},{"location":"examples/04-sendrecv/#Send/receive","page":"Send/receive","title":"Send/receive","text":"","category":"section"},{"location":"examples/04-sendrecv/","page":"Send/receive","title":"Send/receive","text":"# examples/04-sendrecv.jl\nusing MPI\nMPI.Init()\n\ncomm = MPI.COMM_WORLD\nrank = MPI.Comm_rank(comm)\nsize = MPI.Comm_size(comm)\n\ndst = mod(rank+1, size)\nsrc = mod(rank-1, size)\n\nN = 4\n\nsend_mesg = Array{Float64}(undef, N)\nrecv_mesg = Array{Float64}(undef, N)\n\nfill!(send_mesg, Float64(rank))\n\nrreq = MPI.Irecv!(recv_mesg, comm; source=src, tag=src+32)\n\nprint(\"$rank: Sending $rank -> $dst = $send_mesg\\n\")\nsreq = MPI.Isend(send_mesg, comm; dest=dst, tag=rank+32)\n\nstats = MPI.Waitall([rreq, sreq])\n\nprint(\"$rank: Received $src -> $rank = $recv_mesg\\n\")\n\nMPI.Barrier(comm)","category":"page"},{"location":"examples/04-sendrecv/","page":"Send/receive","title":"Send/receive","text":"> mpiexecjl -n 4 julia examples/04-sendrecv.jl\n0: Sending 0 -> 1 = [0.0, 0.0, 0.0, 0.0]\n1: Sending 1 -> 2 = [1.0, 1.0, 1.0, 1.0]\n2: Sending 2 -> 3 = [2.0, 2.0, 2.0, 2.0]\n3: Sending 3 -> 0 = [3.0, 3.0, 3.0, 3.0]\n0: Received 3 -> 0 = [3.0, 3.0, 3.0, 3.0]\n1: Received 0 -> 1 = [0.0, 0.0, 0.0, 0.0]\n2: Received 1 -> 2 = [1.0, 1.0, 1.0, 1.0]\n3: Received 2 -> 3 = [2.0, 2.0, 2.0, 2.0]","category":"page"},{"location":"examples/09-graph_communication/","page":"Graph Communication","title":"Graph Communication","text":"EditURL = \"https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/09-graph_communication.jl\"","category":"page"},{"location":"examples/09-graph_communication/#Graph-Communication","page":"Graph Communication","title":"Graph Communication","text":"","category":"section"},{"location":"examples/09-graph_communication/","page":"Graph Communication","title":"Graph Communication","text":"# examples/09-graph_communication.jl\nusing Test\nusing MPI\n\nMPI.Init()\n\ncomm = MPI.COMM_WORLD\nsize = MPI.Comm_size(comm)\nrank = MPI.Comm_rank(comm)\n\n#\n# Setup the following communication graph\n#\n# +-----+\n# | |\n# v v\n# 0<-+ 3\n# ^ | ^\n# | | |\n# v | v\n# 1 +--2\n# ^ |\n# | |\n# +-----+\n#\n#\n\nif rank == 0\n dest = Cint[1,3]\n degree = Cint[length(dest)]\nelseif rank == 1\n dest = Cint[0]\n degree = Cint[length(dest)]\nelseif rank == 2\n dest = Cint[3,0,1]\n degree = Cint[length(dest)]\nelseif rank == 3\n dest = Cint[0,2,1]\n degree = Cint[length(dest)]\nend\n\nsource = Cint[rank]\ngraph_comm = MPI.Dist_graph_create(comm, source, degree, dest)\n\n# Query number of ranks that point to this rank, and number of ranks this rank point to\nindegree, outdegree, _ = MPI.Dist_graph_neighbors_count(graph_comm)\n\n# Query which ranks that point to 
this rank, and which ranks this rank point to\ninranks = Vector{Cint}(undef, indegree)\noutranks = Vector{Cint}(undef, outdegree)\nMPI.Dist_graph_neighbors!(graph_comm, inranks, outranks)\n\n#\n# Now send the rank across the edges.\n#\n# Version 1: use allgather primitive\n#\n\nsend = Cint[rank]\nrecv = Vector{Cint}(undef, indegree)\n\nMPI.Neighbor_allgather!(send, recv, graph_comm);\n\nprint(\"rank = $(rank): $(recv)\\n\")\n\n#\n# Version 2: use alltoall primitive\n#\n\nsend = fill(Cint(rank), outdegree)\nrecv = Vector{Cint}(undef, indegree)\n\nMPI.Neighbor_alltoall!(UBuffer(send,1), UBuffer(recv,1), graph_comm);\n\nprint(\"rank = $(rank): $(recv)\\n\")\n\n#\n# Now send the this rank \"destination rank\"+1 times across the edges.\n# Rank i receives i+1 values from each adjacent process\n#\n\nsend_count = outranks .+ Cint(1)\nsend = fill(Cint(rank), sum(send_count))\nrecv_count = fill(Cint(rank + 1), length(inranks))\nrecv = Vector{Cint}(undef, sum(recv_count))\n\nMPI.Neighbor_alltoallv!(VBuffer(send,send_count), VBuffer(recv,recv_count), graph_comm);\nprint(\"rank = $(rank): $(recv)\\n\")\n\nMPI.Finalize()","category":"page"},{"location":"examples/09-graph_communication/","page":"Graph Communication","title":"Graph Communication","text":"> mpiexecjl -n 4 julia examples/09-graph_communication.jl\nrank = 0: Int32[1, 2, 3]\nrank = 1: Int32[0, 2, 3]\nrank = 2: Int32[3]\nrank = 3: Int32[2, 0]\nrank = 1: Int32[0, 2, 3]\nrank = 2: Int32[3]\nrank = 3: Int32[2, 0]\nrank = 0: Int32[1, 2, 3]\nrank = 0: Int32[1, 2, 3]\nrank = 1: Int32[0, 0, 2, 2, 3, 3]\nrank = 2: Int32[3, 3, 3]\nrank = 3: Int32[2, 2, 2, 2, 0, 0, 0, 0]","category":"page"},{"location":"knownissues/#Known-issues","page":"Known issues","title":"Known issues","text":"","category":"section"},{"location":"knownissues/#Julia-module-precompilation","page":"Known issues","title":"Julia module precompilation","text":"","category":"section"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"If multiple MPI ranks trigger Julia's module precompilation, then a race condition can result in an error such as:","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"ERROR: LoadError: IOError: mkdir: file already exists (EEXIST)\nStacktrace:\n [1] uv_error at ./libuv.jl:97 [inlined]\n [2] mkdir(::String; mode::UInt16) at ./file.jl:177\n [3] mkpath(::String; mode::UInt16) at ./file.jl:227\n [4] mkpath at ./file.jl:222 [inlined]\n [5] compilecache_path(::Base.PkgId) at ./loading.jl:1210\n [6] compilecache(::Base.PkgId, ::String) at ./loading.jl:1240\n [7] _require(::Base.PkgId) at ./loading.jl:1029\n [8] require(::Base.PkgId) at ./loading.jl:927\n [9] require(::Module, ::Symbol) at ./loading.jl:922\n [10] include(::Module, ::String) at ./Base.jl:377\n [11] exec_options(::Base.JLOptions) at ./client.jl:288\n [12] _start() at ./client.jl:484","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"See julia issue #30174 for more discussion of this problem. 
There are similar issues with Pkg operations, see Pkg issue #1219.","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"This can be worked around be either:","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"Triggering precompilation before launching MPI processes, for example:\njulia --project -e 'using Pkg; pkg\"instantiate\"'\njulia --project -e 'using Pkg; pkg\"precompile\"'\nmpiexec julia --project script.jl\nLaunching julia with the --compiled-modules=no option. This can result in much longer package load times.","category":"page"},{"location":"knownissues/#Open-MPI","page":"Known issues","title":"Open MPI","text":"","category":"section"},{"location":"knownissues/#Segmentation-fault-when-loading-the-library","page":"Known issues","title":"Segmentation fault when loading the library","text":"","category":"section"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"When attempting to use a system-provided Open MPI implementation, you may encounter a segmentation fault upon loading the library, or whenever the value of an environment variable is requested. This can be fixed by setting the environment variable ZES_ENABLE_SYSMAN=1. See Open MPI issue #10142 for more details.","category":"page"},{"location":"knownissues/#Segmentation-fault-in-HCOLL","page":"Known issues","title":"Segmentation fault in HCOLL","text":"","category":"section"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"If Open MPI was built with support for HCOLL, you may encounter a segmentation fault in certain operations involving custom datatypes. The stacktrace may look something like","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"hcoll_create_mpi_type at /opt/mellanox/hcoll/lib/libhcoll.so.1 (unknown line)\nompi_dtype_2_hcoll_dtype at /lustre/software/openmpi/llvm14/4.1.4/lib/openmpi/mca_coll_hcoll.so (unknown line)\nmca_coll_hcoll_allgather at /lustre/software/openmpi/llvm14/4.1.4/lib/openmpi/mca_coll_hcoll.so (unknown line)\nMPI_Allgather at /lustre/software/openmpi/llvm14/4.1.4/lib/libmpi.so (unknown line)","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"This is due to a bug in HCOLL, see Open MPI issue #11201 for more details. 
You can disable HCOLL by exporting the environment variable","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"export OMPI_MCA_coll_hcoll_enable=\"0\"","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"before starting the MPI process.","category":"page"},{"location":"knownissues/#MPICH","page":"Known issues","title":"MPICH","text":"","category":"section"},{"location":"knownissues/#gethostbyname-failure-in-internal_Init_thread","page":"Known issues","title":"gethostbyname failure in internal_Init_thread","text":"","category":"section"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"When your internal network stack/route is not correctly configured for the local loopback device, MPICH may fail to initialize with an error message which looks like the following:","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"Fatal error in internal_Init_thread: Other MPI error, error stack:\ninternal_Init_thread(67)...........: MPI_Init_thread(argc=0x0, argv=0x0, required=2, provided=0x16db94160) failed\nMPII_Init_thread(234)..............:\nMPID_Init(67)......................:\ninit_world(171)....................: channel initialization failed\nMPIDI_CH3_Init(84).................:\nMPID_nem_init(314).................:\nMPID_nem_tcp_init(175).............:\nMPID_nem_tcp_get_business_card(397):\nGetSockInterfaceAddr(370)..........: gethostbyname failed, bogon (errno 0)","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"A workaround is provided in the documentation of the MOOSE framework and we report it here for reference:","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"obtain your hostname\n$ hostname\nmycoolname\nfor both Linux and macOS systems, in your /etc/hosts file map the hostname you obtained at the previous step to the localhost address 127.0.0.1, if not already present. Note: this step requires root access, to modify the system configuration file /etc/hosts, if you don't have it talk to your system administrator. For example, open the file /etc/hosts with sudo access with your favorite text editor (e.g. sudo vi /etc/hosts, or sudo emacs /etc/hosts) and add the line\n127.0.0.1 mycoolname\nto the end of the file\nas an alternative to the previous step, only for macOS systems, run the command\nsudo scutil --set HostName mycoolname\nHowever it has been reported that this method may not always be effective.","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"For further information see","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"MPI.jl issue #824\nMOOSE discussion #23610","category":"page"},{"location":"knownissues/#UCX","page":"Known issues","title":"UCX","text":"","category":"section"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"UCX is a communication framework used by several MPI implementations.","category":"page"},{"location":"knownissues/#Memory-cache","page":"Known issues","title":"Memory cache","text":"","category":"section"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"When used with CUDA, UCX intercepts cudaMalloc so it can determine whether the pointer passed to MPI is on the host (main memory) or the device (GPU). 
Unfortunately, there are several known issues with how this works with Julia:","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"UCX issue #5061\nUCX issue #4001 (fixed in UCX v1.7.0)","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"By default, MPI.jl disables this by setting","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"ENV[\"UCX_MEMTYPE_CACHE\"] = \"no\"","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"at __init__ which may result in reduced performance, especially for smaller messages.","category":"page"},{"location":"knownissues/#Multi-threading-and-signal-handling","page":"Known issues","title":"Multi-threading and signal handling","text":"","category":"section"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"When using Julia multi-threading, the Julia garbage collector internally uses SIGSEGV to synchronize threads.","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"By default, UCX will error if this signal is raised (#337), resulting in a message such as:","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"Caught signal 11 (Segmentation fault: invalid permissions for mapped object at address 0xXXXXXXXX)","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"This signal interception can be controlled by setting the environment variable UCX_ERROR_SIGNALS: if not already defined, MPI.jl will set it as:","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"ENV[\"UCX_ERROR_SIGNALS\"] = \"SIGILL,SIGBUS,SIGFPE\"","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"at __init__. If set externally, it should be modified to exclude SIGSEGV from the list. Note that in some cases even if UCX_ERROR_SIGNALS is not set explicitly, UCX might still take SIGSEGV as an error signal. In this case, it might be needed to explicitly set UCX_ERROR_SIGNALS with","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"export UCX_ERROR_SIGNALS=\"SIGILL,SIGBUS,SIGFPE\"","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"before calling mpiexec.","category":"page"},{"location":"knownissues/#CUDA-aware-MPI","page":"Known issues","title":"CUDA-aware MPI","text":"","category":"section"},{"location":"knownissues/#Memory-pool","page":"Known issues","title":"Memory pool","text":"","category":"section"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"Using CUDA-aware MPI on multi-GPU nodes with recent CUDA.jl may trigger (see here)","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"The call to cuIpcGetMemHandle failed. This means the GPU RDMA protocol\ncannot be used.\n cuIpcGetMemHandle return value: 1","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"in the MPI layer, or fail on a segmentation fault (see here) with","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"[1642930332.032032] [gcn19:4087661:0] gdr_copy_md.c:122 UCX ERROR gdr_pin_buffer failed. 
length :65536 ret:22","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"This is due to the MPI implementation using legacy cuIpc* APIs, which are incompatible with stream-ordered allocator, now default in CUDA.jl, see UCX issue #7110.","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"To circumvent this, one has to ensure the CUDA memory pool to be set to none:","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"export JULIA_CUDA_MEMORY_POOL=none","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"More about CUDA.jl memory environment-variables.","category":"page"},{"location":"knownissues/#Hints-to-ensure-CUDA-aware-MPI-to-be-functional","page":"Known issues","title":"Hints to ensure CUDA-aware MPI to be functional","text":"","category":"section"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"Make sure to:","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"Have MPI and CUDA on path (or module loaded) that were used to build the CUDA-aware MPI\nSet the following environment variables: export JULIA_CUDA_MEMORY_POOL=none export JULIA_CUDA_USE_BINARYBUILDER=false\nAdd CUDA, MPIPreferences, and MPI packages in Julia. Switch to using the system binary\njulia --project -e 'using Pkg; Pkg.add([\"CUDA\", \"MPIPreferences\", \"MPI\"]); using MPIPreferences; MPIPreferences.use_system_binary()'\nThen in Julia, upon loading MPI and CUDA modules, you can check\nCUDA version: CUDA.versioninfo()\nIf MPI has CUDA: MPI.has_cuda()\nIf you are using correct MPI library: MPI.libmpi","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"After that, it may be preferred to run the Julia MPI script (as suggested here) launching it from a shell script (as suggested here).","category":"page"},{"location":"knownissues/#ROCm-aware-MPI","page":"Known issues","title":"ROCm-aware MPI","text":"","category":"section"},{"location":"knownissues/#Hints-to-ensure-ROCm-aware-MPI-to-be-functional","page":"Known issues","title":"Hints to ensure ROCm-aware MPI to be functional","text":"","category":"section"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"Make sure to:","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"Have MPI and ROCm on path (or module loaded) that were used to build the ROCm-aware MPI\nAdd AMDGPU, MPIPreferences, and MPI packages in Julia:\njulia --project -e 'using Pkg; Pkg.add([\"AMDGPU\", \"MPIPreferences\", \"MPI\"]); using MPIPreferences; MPIPreferences.use_system_binary()'\nThen in Julia, upon loading MPI and CUDA modules, you can check\nAMDGPU version: AMDGPU.versioninfo()\nIf MPI has ROCm: MPI.has_rocm()\nIf you are using correct MPI implementation: MPI.identify_implementation()","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"After that, this script can be used to verify if ROCm-aware MPI is functional (modified after the CUDA-aware version from here). 
It may be preferred to run the Julia ROCm-aware MPI script launching it from a shell script (as suggested here).","category":"page"},{"location":"knownissues/#Custom-reduction-operators","page":"Known issues","title":"Custom reduction operators","text":"","category":"section"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"It is not possible to use custom reduction operators with 32-bit Microsoft MPI on Windows and on ARM CPUs with any operating system. These issues are due to due how custom operators are currently implemented in MPI.jl, that is by using closure cfunctions. However they have two limitations:","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"Julia's C-compatible function pointers cannot be used where the stdcall calling convention is expected, which is the case for 32-bit Microsoft MPI,\nclosure cfunctions in Julia are based on LLVM trampolines, which are not supported on ARM architecture.","category":"page"},{"location":"reference/onesided/#One-sided-communication","page":"One-sided communication","title":"One-sided communication","text":"","category":"section"},{"location":"reference/onesided/","page":"One-sided communication","title":"One-sided communication","text":"MPI.Win_create\nMPI.Win_create_dynamic\nMPI.Win_allocate_shared\nMPI.Win_shared_query\nMPI.Win_flush\nMPI.Win_lock\nMPI.Win_unlock\nMPI.Get!\nMPI.Put!\nMPI.Accumulate!\nMPI.Get_accumulate!","category":"page"},{"location":"reference/onesided/#MPI.Win_create","page":"One-sided communication","title":"MPI.Win_create","text":"MPI.Win_create(base[, size::Integer, disp_unit::Integer], comm::Comm; infokws...)\n\nCreate a window over the array base, returning a Win object used by these processes to perform RMA operations. 
This is a collective call over comm.\n\nsize is the size of the window in bytes (default = sizeof(base))\ndisp_unit is the size of address scaling in bytes (default = sizeof(eltype(base)))\ninfokws are info keys providing optimization hints to the runtime.\n\nMPI.free should be called on the Win object once operations have been completed.\n\n\n\n\n\n","category":"function"},{"location":"reference/onesided/#MPI.Win_create_dynamic","page":"One-sided communication","title":"MPI.Win_create_dynamic","text":"MPI.Win_create_dynamic(comm::Comm; infokws...)\n\nCreate a dynamic window returning a Win object used by these processes to perform RMA operations\n\nThis is a collective call over comm.\n\ninfokws are info keys providing optimization hints.\n\nMPI.free should be called on the Win object once operations have been completed.\n\n\n\n\n\n","category":"function"},{"location":"reference/onesided/#MPI.Win_allocate_shared","page":"One-sided communication","title":"MPI.Win_allocate_shared","text":"win, array = MPI.Win_allocate_shared(Array{T}, dims, comm::Comm; infokws...)\n\nCreate and allocate a shared memory window for objects of type T of dimension dims (either an integer or tuple of integers), returning a Win and the Array{T} attached to the local process.\n\nThis is a collective call over comm, but dims can differ for each call (and can be zero).\n\nUse MPI.Win_shared_query to obtain the Array attached to a different process in the same shared memory space.\n\ninfokws are info keys providing optimization hints.\n\nMPI.free should be called on the Win object once operations have been completed.\n\n\n\n\n\n","category":"function"},{"location":"reference/onesided/#MPI.Win_shared_query","page":"One-sided communication","title":"MPI.Win_shared_query","text":"array = Win_shared_query(Array{T}, [dims,] win; rank)\n\nObtain the shared memory allocated by Win_allocate_shared of the process rank in win. Returns an Array{T} of size dims (being a Vector{T} if no dims argument is provided).\n\n\n\n\n\n","category":"function"},{"location":"reference/onesided/#MPI.Win_flush","page":"One-sided communication","title":"MPI.Win_flush","text":"Win_flush(win::Win; rank)\n\nCompletes all outstanding RMA operations initiated by the calling process to the target rank on the specified window.\n\nExternal links\n\nMPI_Win_flush man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/onesided/#MPI.Win_lock","page":"One-sided communication","title":"MPI.Win_lock","text":"Win_lock(win::Win; rank::Integer, type=:exclusive/:shared, nocheck=false)\n\nStarts an RMA access epoch. The window at the process with rank rank can be accessed by RMA operations on win during that epoch.\n\nMultiple RMA access epochs (with calls to MPI.Win_lock) can occur simultaneously; however, each access epoch must target a different process.\n\nAccesses that are protected by an exclusive lock (type=:exclusive) will not be concurrent at the window site with other accesses to the same window that are lock protected. Accesses that are protected by a shared lock (type=:shared) will not be concurrent at the window site with accesses protected by an exclusive lock to the same window.\n\nIf nocheck=true, no other process holds, or will attempt to acquire, a conflicting lock, while the caller holds the window lock. 
This is useful when mutual exclusion is achieved by other means, but the coherence operations that may be attached to the lock and unlock calls are still required.\n\nExternal links\n\nMPI_Win_lock man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/onesided/#MPI.Win_unlock","page":"One-sided communication","title":"MPI.Win_unlock","text":"Win_unlock(win::Win; rank::Integer)\n\nCompletes an RMA access epoch started by a call to Win_lock.\n\nExternal links\n\nMPI_Win_unlock man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/onesided/#MPI.Get!","page":"One-sided communication","title":"MPI.Get!","text":"Get!(origin, win::Win; rank::Integer, disp::Integer=0)\n\nCopies data from the memory window win on the remote rank rank, with displacement disp, into origin using remote memory access. origin can be a Buffer, or any object for which Buffer(origin) is defined.\n\nExternal links\n\nMPI_Get man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/onesided/#MPI.Put!","page":"One-sided communication","title":"MPI.Put!","text":"Put!(origin, win::Win; rank::Integer, disp::Integer=0)\n\nCopies data from origin into memory window win on remote rank rank at displacement disp using remote memory access. origin can be a Buffer, or any object for which Buffer_send(origin) is defined.\n\nExternal links\n\nMPI_Put man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/onesided/#MPI.Accumulate!","page":"One-sided communication","title":"MPI.Accumulate!","text":"Accumulate!(origin, op, win::Win; rank::Integer, disp::Integer=0)\n\nCombine the content of the origin buffer into the target buffer (specified by win and displacement target_disp) with reduction operator op on the remote rank target_rank using remote memory access.\n\norigin can be a Buffer, or any object for which Buffer_send(origin) is defined. op can be any predefined Op (custom operators are not supported).\n\nExternal links\n\nMPI_Accumulate man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/onesided/#MPI.Get_accumulate!","page":"One-sided communication","title":"MPI.Get_accumulate!","text":"Get_accumulate!(origin, result, target_rank::Integer, target_disp::Integer, op::Op, win::Win)\n\nCombine the content of the origin buffer into the target buffer (specified by win and displacement target_disp) with reduction operator op on the remote rank target_rank using remote memory access. Get_accumulate also returns the content of the target buffer before accumulation into the result buffer.\n\norigin can be a Buffer, or any object for which Buffer_send(origin) is defined, result can be a Buffer, or any object for which Buffer(result) is defined. op can be any predefined Op (custom operators are not supported).\n\nExternal links\n\nMPI_Get_accumulate man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/buffers/#Buffers","page":"Buffers","title":"Buffers","text":"","category":"section"},{"location":"reference/buffers/","page":"Buffers","title":"Buffers","text":"Buffers are used for sending and receiving data. 
MPI.jl provides the following buffer types:","category":"page"},{"location":"reference/buffers/","page":"Buffers","title":"Buffers","text":"MPI.IN_PLACE\nMPI.Buffer\nMPI.Buffer_send\nMPI.UBuffer\nMPI.VBuffer\nMPI.RBuffer\nMPI.MPIPtr","category":"page"},{"location":"reference/buffers/#MPI.IN_PLACE","page":"Buffers","title":"MPI.IN_PLACE","text":"MPI.IN_PLACE\n\nA sentinel value that can be passed as a buffer argument for certain collective operations to use the same buffer for send and receive operations.\n\nScatter! and Scatterv!: can be used as the recvbuf argument on the root process.\nGather! and Gatherv!: can be used as the sendbuf argument on the root process.\nAllgather!, Allgatherv!, Alltoall! and Alltoallv!: can be used as the sendbuf argument on all processes.\nReduce! (root only), Allreduce!, Scan! and Exscan!: can be used as sendbuf argument.\n\n\n\n\n\n","category":"constant"},{"location":"reference/buffers/#MPI.Buffer","page":"Buffers","title":"MPI.Buffer","text":"MPI.Buffer\n\nAn MPI buffer for communication with a single rank. It is used for point-to-point and one-sided operations, as well as some collective operations. Operations will implicitly construct a Buffer when required via the generic constructor, but it can be advantageous to manually construct Buffers when doing so incurs additional overhead, for example when using a non-predefined MPI.Datatype.\n\nFields\n\ndata: a Julia object referencing a region of memory to be used for communication. It is required that the object can be cconverted to an MPIPtr.\ncount: the number of elements of datatype in the buffer. Note that this may not correspond to the number of elements in the array if derived types are used.\ndatatype: the MPI.Datatype stored in the buffer.\n\nUsage\n\nBuffer(data, count::Integer, datatype::Datatype)\n\nGeneric constructor.\n\nBuffer(data)\n\nConstruct a Buffer backed by data, automatically determining the appropriate count and datatype. Methods are provided for\n\nRef\nArray\nCUDA.CuArray if CUDA.jl is loaded.\nAMDGPU.ROCArray if AMDGPU.jl is loaded.\nSubArrays of an Array, CUDA.CuArray or AMDGPU.ROCArray where the layout is contiguous, sequential or blocked.\n\nSee also\n\nBuffer_send\n\n\n\n\n\n","category":"type"},{"location":"reference/buffers/#MPI.Buffer_send","page":"Buffers","title":"MPI.Buffer_send","text":"Buffer_send(data)\n\nConstruct a Buffer object for a send operation from data, allowing cases where isbits(data).\n\n\n\n\n\n","category":"function"},{"location":"reference/buffers/#MPI.UBuffer","page":"Buffers","title":"MPI.UBuffer","text":"MPI.UBuffer\n\nAn MPI buffer for chunked collective communication, where all chunks are of uniform size.\n\nFields\n\ndata: A Julia object referencing a region of memory to be used for communication. It is required that the object can be cconverted to an MPIPtr.\ncount: The number of elements of datatype in each chunk.\nnchunks: The maximum number of chunks stored in the buffer. 
This is used only for validation, and can be set to nothing to disable checks.\ndatatype: The MPI.Datatype stored in the buffer.\n\nUsage\n\nUBuffer(data, count::Integer, nchunks::Union{Nothing, Integer}, datatype::Datatype)\n\nGeneric constructor.\n\nUBuffer(data, count::Integer)\n\nConstruct a UBuffer backed by data, where count is the number of elements in each chunk.\n\nSee also\n\nVBuffer: similar, but supports chunks of non-uniform sizes.\n\n\n\n\n\n","category":"type"},{"location":"reference/buffers/#MPI.VBuffer","page":"Buffers","title":"MPI.VBuffer","text":"MPI.VBuffer\n\nAn MPI buffer for chunked collective communication, where chunks can be of different sizes and at different offsets.\n\nFields\n\ndata: A Julia object referencing a region of memory to be used for communication. It is required that the object can be cconverted to an MPIPtr.\ncounts: An array containing the length of each chunk.\ndispls: An array containing the (0-based) displacements of each chunk.\ndatatype: The MPI.Datatype stored in the buffer.\n\nUsage\n\nVBuffer(data, counts[, displs[, datatype]])\n\nConstruct a VBuffer backed by data, where counts[j] is the number of elements in the jth chunk, and displs[j] is the 0-based displacement. In other words, the jth chunk occurs in indices displs[j]+1:displs[j]+counts[j].\n\nThe default value for displs[j] = sum(counts[1:j-1]).\n\nSee also\n\nUBuffer when chunks are all of the same size.\n\n\n\n\n\n","category":"type"},{"location":"reference/buffers/#MPI.RBuffer","page":"Buffers","title":"MPI.RBuffer","text":"MPI.RBuffer\n\nAn MPI buffer for reduction operations (MPI.Reduce!, MPI.Allreduce!, MPI.Scan!, MPI.Exscan!).\n\nFields\n\nsenddata: A Julia object referencing a region of memory to be used for the send buffer. It is required that the object can be cconverted to an MPIPtr.\nrecvdata: A Julia object referencing a region of memory to be used for the receive buffer. It is required that the object can be cconverted to an MPIPtr.\ncount: the number of elements of datatype in the buffer. Note that this may not correspond to the number of elements in the array if derived types are used.\ndatatype: the MPI.Datatype stored in the buffer.\n\nUsage\n\nRBuffer(senddata, recvdata[, count, datatype])\n\nGeneric constructor.\n\nRBuffer(senddata, recvdata)\n\nConstruct a Buffer backed by senddata and recvdata, automatically determining the appropriate count and datatype.\n\nsenddata can be MPI.IN_PLACE\nrecvdata can be nothing on a non-root node with MPI.Reduce!\n\n\n\n\n\n","category":"type"},{"location":"reference/buffers/#MPI.API.MPIPtr","page":"Buffers","title":"MPI.API.MPIPtr","text":"MPI.MPIPtr\n\nA pointer to an MPI buffer. This type is used only as part of the implicit conversion in ccall: a Julia object can be passed to MPI by defining methods for Base.cconvert(::Type{MPIPtr}, ...)/Base.unsafe_convert(::Type{MPIPtr}, ...).\n\nCurrently supported are:\n\nPtr\nRef\nArray\nSubArray\nCUDA.CuArray if CUDA.jl is loaded.\nAMDGPU.ROCArray if AMDGPU.jl is loaded.\n\nAdditionally, certain sentinel values can be used, e.g. MPI_IN_PLACE or MPI_BOTTOM.\n\n\n\n\n\n","category":"type"},{"location":"reference/comm/#Communicators","page":"Communicators","title":"Communicators","text":"","category":"section"},{"location":"reference/comm/","page":"Communicators","title":"Communicators","text":"An MPI communicator specifies the communication context for a communication operation. 
In particular, it specifies the set of processes which share the context, and assigns each each process a unique rank (see MPI.Comm_rank) taking an integer value in 0:n-1, where n is the number of processes in the communicator (see MPI.Comm_size.","category":"page"},{"location":"reference/comm/#Types-and-enums","page":"Communicators","title":"Types and enums","text":"","category":"section"},{"location":"reference/comm/","page":"Communicators","title":"Communicators","text":"MPI.Comm","category":"page"},{"location":"reference/comm/#MPI.Comm","page":"Communicators","title":"MPI.Comm","text":"MPI.Comm\n\nAn MPI Communicator object.\n\n\n\n\n\n","category":"type"},{"location":"reference/comm/#Constants","page":"Communicators","title":"Constants","text":"","category":"section"},{"location":"reference/comm/","page":"Communicators","title":"Communicators","text":"MPI.COMM_WORLD\nMPI.COMM_SELF","category":"page"},{"location":"reference/comm/#MPI.COMM_WORLD","page":"Communicators","title":"MPI.COMM_WORLD","text":"MPI.COMM_WORLD\n\nA communicator containing all processes with which the local rank can communicate at initialization. In a typical \"static-process\" model, this will be all processes.\n\n\n\n\n\n","category":"constant"},{"location":"reference/comm/#MPI.COMM_SELF","page":"Communicators","title":"MPI.COMM_SELF","text":"MPI.COMM_SELF\n\nA communicator containing only the local process.\n\n\n\n\n\n","category":"constant"},{"location":"reference/comm/#Functions","page":"Communicators","title":"Functions","text":"","category":"section"},{"location":"reference/comm/#Operations","page":"Communicators","title":"Operations","text":"","category":"section"},{"location":"reference/comm/","page":"Communicators","title":"Communicators","text":"MPI.Comm_size\nMPI.Comm_rank\nMPI.Comm_compare\nMPI.Comm_group\nMPI.Comm_remote_group","category":"page"},{"location":"reference/comm/#MPI.Comm_size","page":"Communicators","title":"MPI.Comm_size","text":"Comm_size(comm::Comm)\n\nThe number of processes involved in communicator.\n\nSee also\n\nMPI.Comm_rank.\n\nExternal links\n\nMPI_Comm_size man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#MPI.Comm_rank","page":"Communicators","title":"MPI.Comm_rank","text":"Comm_rank(comm::Comm)\n\nThe rank of the process in the particular communicator's group.\n\nReturns an integer in the range 0:MPI.Comm_size()-1.\n\nSee also\n\nMPI.Comm_size.\n\nExternal links\n\nMPI_Comm_rank man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#MPI.Comm_compare","page":"Communicators","title":"MPI.Comm_compare","text":"Comm_compare(comm1::Comm, comm2::Comm)::MPI.Comparison\n\nCompare two communicators and their underlying groups, returning an element of the Comparison enum.\n\nExternal links\n\nMPI_Comm_compare man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#MPI.Comm_group","page":"Communicators","title":"MPI.Comm_group","text":"Comm_group(comm::Comm)\n\nAccesses the group associated with given communicator.\n\nExternal links\n\nMPI_Comm_group man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#MPI.Comm_remote_group","page":"Communicators","title":"MPI.Comm_remote_group","text":"Comm_remote_group(comm::Comm)\n\nAccesses the remote group associated with the given inter-communicator.\n\nExternal links\n\nMPI_Comm_remote_group man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#Constructors","page":"Communicators","title":"Constructors","text":"","category":"section"},{"location":"reference/comm/","page":"Communicators","title":"Communicators","text":"MPI.Comm_create\nMPI.Comm_create_group\nMPI.Comm_dup\nMPI.Comm_get_parent\nMPI.Comm_spawn\nMPI.Comm_split\nMPI.Comm_split_type\nMPI.Intercomm_merge","category":"page"},{"location":"reference/comm/#MPI.Comm_create","page":"Communicators","title":"MPI.Comm_create","text":"Comm_create(comm::Comm, group::Group)\n\nCollectively creates a new communicator.\n\nSee also\n\nMPI.Comm_create_group for the noncollective operation\n\nExternal links\n\nMPI_Comm_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#MPI.Comm_create_group","page":"Communicators","title":"MPI.Comm_create_group","text":"Comm_create_group(comm::Comm, group::Group, tag::Integer)\n\nNoncollectively creates a new communicator.\n\nSee also\n\nMPI.Comm_create for the noncollective operation\n\nExternal links\n\nMPI_Comm_create_group man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#MPI.Comm_dup","page":"Communicators","title":"MPI.Comm_dup","text":"Comm_dup(comm::Comm)\n\nExternal links\n\nMPI_Comm_dup man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#MPI.Comm_get_parent","page":"Communicators","title":"MPI.Comm_get_parent","text":"Comm_get_parent()\n\nExternal links\n\nMPI_Comm_get_parent man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#MPI.Comm_spawn","page":"Communicators","title":"MPI.Comm_spawn","text":"Comm_spawn(command, argv::Vector{String}, nprocs::Integer, comm::Comm[, errors::Vector{Cint}]; kwargs...)\n\nExternal links\n\nMPI_Comm_spawn man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#MPI.Comm_split","page":"Communicators","title":"MPI.Comm_split","text":"Comm_split(comm::Comm, color::Union{Integer,Nothing}, key::Integer)\n\nPartition the communicator comm, one for each value of color, returning a new communicator. Within each group, the processes are ranked in the order of key, with ties broken by the order of comm.\n\ncolor should be a non-negative integer, or nothing, in which case a null communicator is returned for that rank.\n\nExternal links\n\nMPI_Comm_split man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#MPI.Comm_split_type","page":"Communicators","title":"MPI.Comm_split_type","text":"Comm_split_type(comm::Comm, split_type, key::Integer; kwargs...)\n\nPartitions the communicator comm based on split_type, returning a new communicator. 
Within each group, the processes are ranked in the order of key, with ties broken by the order of comm.\n\nCurrently only one split_type is provided:\n\nMPI.COMM_TYPE_SHARED: splits the communicator into subcommunicators, each of which can create a shared memory region.\n\nExternal links\n\nMPI_Comm_split_type man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#MPI.Intercomm_merge","page":"Communicators","title":"MPI.Intercomm_merge","text":"Intercomm_merge(intercomm::Comm, flag::Bool)\n\nExternal links\n\nMPI_Intercomm_merge man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#Miscellaneous","page":"Communicators","title":"Miscellaneous","text":"","category":"section"},{"location":"reference/comm/","page":"Communicators","title":"Communicators","text":"MPI.universe_size\nMPI.tag_ub","category":"page"},{"location":"reference/comm/#MPI.universe_size","page":"Communicators","title":"MPI.universe_size","text":"universe_size()\n\nThe total number of available slots, or nothing if it is not defined. This is determined by the MPI_UNIVERSE_SIZE attribute of COMM_WORLD.\n\nThis is typically dependent on the MPI implementation: for MPICH-based implementations, this is specified by the -usize argument. OpenMPI defines a default value based on the number of processes available.\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#MPI.tag_ub","page":"Communicators","title":"MPI.tag_ub","text":"tag_ub()\n\nThe maximum value tag value for point-to-point operations.\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#Point-to-point-communication","page":"Point-to-point communication","title":"Point-to-point communication","text":"","category":"section"},{"location":"reference/pointtopoint/#Types","page":"Point-to-point communication","title":"Types","text":"","category":"section"},{"location":"reference/pointtopoint/","page":"Point-to-point communication","title":"Point-to-point communication","text":"MPI.AbstractRequest\nMPI.Request\nMPI.UnsafeRequest\nMPI.MultiRequest\nMPI.UnsafeMultiRequest\nMPI.RequestSet\nMPI.Status","category":"page"},{"location":"reference/pointtopoint/#MPI.AbstractRequest","page":"Point-to-point communication","title":"MPI.AbstractRequest","text":"MPI.AbstractRequest\n\nAn abstract type for Julia objects wrapping MPI Requests objects, which represent non-blocking MPI communication operations. The following implementations provided in MPI.jl\n\nRequest: this is the default request type.\nUnsafeRequest: similar to Request, but does not maintain a reference to the underlying communication buffer.\nMultiRequestItem: created by calling getindex on a MultiRequest / UnsafeMultiRequest object, which efficiently stores a collection of requests.\n\nHow request objects are used\n\nA request object can be passed to non-blocking communication operations, such as MPI.Isend and MPI.Irecv!. 
If no object is provided, then an MPI.Request is used.\n\nThe status of a Request can be checked by the Wait and Test functions or their mœultiple-request variants, which will deallocate the request once it is determined to be complete.\n\nAlternatively, it will be deallocated by calling MPI.free or at finalization, meaning that it is safe to ignore the request objects if the status of the communication can be checked by other means.\n\nIn certain cases, the operation can also be cancelled by Cancel!.\n\nImplementing new request types\n\nSubtypes R <: AbstractRequest should define the methods for the following functions:\n\nC conversion functions to MPI_Request and Ptr{MPI_Request}:\nBase.cconvert(::Type{MPI_Request}, req::R) / Base.unsafe_convert(::Type{MPI_Request}, req::R)\nBase.cconvert(::Type{Ptr{MPI_Request}}, req::R) / Base.unsafe_convert(::Type{Ptr{MPI_Request}}, req::R)`\nsetbuffer!(req::R, val): keep a reference to the communication bufferval. Ifval == nothing`, then clear the reference.\n\n\n\n\n\n","category":"type"},{"location":"reference/pointtopoint/#MPI.Request","page":"Point-to-point communication","title":"MPI.Request","text":"MPI.Request()\n\nThe default MPI Request object, representing a non-blocking communication. This also contains a reference to the buffer used in the communication to ensure it isn't garbage-collected during communication.\n\nSee AbstractRequest for more information.\n\n\n\n\n\n","category":"type"},{"location":"reference/pointtopoint/#MPI.UnsafeRequest","page":"Point-to-point communication","title":"MPI.UnsafeRequest","text":"MPI.UnsafeRequest()\n\nSimilar to MPI.Request, but does not maintain a reference to the underlying communication buffer. This may have improve performance by reducing memory allocations.\n\nwarning: Warning\nThe user should ensure that another reference to the communication buffer is maintained so that it is not cleaned up by the garbage collector before the communication operation is complete.For example ```julia buf = MPI.Buffer(zeros(10)) GC.@preserve buf begin req = MPI.Isend(buf, comm, UnsafeRequest(); rank=1) # ... MPI.Wait(req) end\n\n\n\n\n\n","category":"type"},{"location":"reference/pointtopoint/#MPI.MultiRequest","page":"Point-to-point communication","title":"MPI.MultiRequest","text":"MPI.MultiRequest(n::Integer=0)\n\nA collection of MPI Requests. This is useful when operating on multiple MPI requests at the same time. MultiRequest objects can be passed directly to MPI.Waitall, MPI.Testall, etc.\n\nreq[i] will return a MultiRequestItem which adheres to the [AbstractRequest] interface.\n\nUsage\n\nreqs = MPI.MultiRequest(n)\nfor i = 1:n\n MPI.Isend(buf, comm, reqs[i]; rank=dest[i])\nend\nMPI.Waitall(reqs)\n\n\n\n\n\n","category":"type"},{"location":"reference/pointtopoint/#MPI.UnsafeMultiRequest","page":"Point-to-point communication","title":"MPI.UnsafeMultiRequest","text":"MPI.UnsafeMultiRequest(n::Integer=0)\n\nSimilar to MPI.MultiRequest, except that it does not maintain references to the underlying communication buffers. 
The same caveats apply as MPI.UnsafeRequest.\n\n\n\n\n\n","category":"type"},{"location":"reference/pointtopoint/#MPI.RequestSet","page":"Point-to-point communication","title":"MPI.RequestSet","text":"RequestSet(requests::Vector{Request})\nRequestSet() # create an empty RequestSet\n\nA wrapper for an array of Requests that can be used to reduce intermediate memory allocations in Waitall, Testall, Waitany, Testany, Waitsome or Testsome.\n\nConsider using a MultiRequest or UnsafeMultiRequest instead.\n\n\n\n\n\n","category":"type"},{"location":"reference/pointtopoint/#MPI.Status","page":"Point-to-point communication","title":"MPI.Status","text":"MPI.Status\n\nThe status of an MPI receive communication. It has 3 accessible fields\n\nsource: source of the received message\ntag: tag of the received message\nerror: error code. This is only set if a function returns multiple statuses.\n\nAdditionally, the accessor function MPI.Get_count can be used to determine the number of entries received.\n\n\n\n\n\n","category":"type"},{"location":"reference/pointtopoint/#Accessors","page":"Point-to-point communication","title":"Accessors","text":"","category":"section"},{"location":"reference/pointtopoint/","page":"Point-to-point communication","title":"Point-to-point communication","text":"MPI.Get_count","category":"page"},{"location":"reference/pointtopoint/#MPI.Get_count","page":"Point-to-point communication","title":"MPI.Get_count","text":"MPI.Get_count(status::Status, T)\n\nThe number of entries received. T should match the argument provided by the receive call that set the status variable.\n\nIf the number of entries received exceeds the limits of the count parameter, then it returns MPI_UNDEFINED.\n\nExternal links\n\nMPI_Get_count man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#Constants","page":"Point-to-point communication","title":"Constants","text":"","category":"section"},{"location":"reference/pointtopoint/","page":"Point-to-point communication","title":"Point-to-point communication","text":"MPI.PROC_NULL\nMPI.ANY_SOURCE\nMPI.ANY_TAG","category":"page"},{"location":"reference/pointtopoint/#MPI.PROC_NULL","page":"Point-to-point communication","title":"MPI.PROC_NULL","text":"MPI.PROC_NULL\n\nA dummy value that can be used instead of a rank wherever a source or a destination argument is required in a call. 
A send\n\n\n\n\n\n","category":"constant"},{"location":"reference/pointtopoint/#MPI.ANY_SOURCE","page":"Point-to-point communication","title":"MPI.ANY_SOURCE","text":"MPI.ANY_SOURCE\n\nA wild card value for receive or probe operations that matches any source rank.\n\n\n\n\n\n","category":"constant"},{"location":"reference/pointtopoint/#MPI.ANY_TAG","page":"Point-to-point communication","title":"MPI.ANY_TAG","text":"MPI.ANY_TAG\n\nA wild card value for receive or probe operations that matches any tag.\n\n\n\n\n\n","category":"constant"},{"location":"reference/pointtopoint/#Blocking-communication","page":"Point-to-point communication","title":"Blocking communication","text":"","category":"section"},{"location":"reference/pointtopoint/","page":"Point-to-point communication","title":"Point-to-point communication","text":"MPI.Send\nMPI.send\nMPI.Recv!\nMPI.Recv\nMPI.recv\nMPI.Sendrecv!","category":"page"},{"location":"reference/pointtopoint/#MPI.Send","page":"Point-to-point communication","title":"MPI.Send","text":"Send(buf, comm::Comm; dest::Integer, tag::Integer=0)\n\nPerform a blocking send from the buffer buf to MPI rank dest of communicator comm using the message tag tag.\n\nSend(obj, comm::Comm; dest::Integer, tag::Integer=0)\n\nComplete a blocking send of an isbits object obj to MPI rank dest of communicator comm using with the message tag tag.\n\nExternal links\n\nMPI_Send man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.send","page":"Point-to-point communication","title":"MPI.send","text":"send(obj, comm::Comm; dest::Integer, tag::Integer=0)\n\nComplete a blocking send using a serialized version of obj to MPI rank dest of communicator comm using with the message tag tag.\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Recv!","page":"Point-to-point communication","title":"MPI.Recv!","text":"data = Recv!(recvbuf, comm::Comm;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\ndata, status = Recv!(recvbuf, comm::Comm, MPI.Status;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\n\nCompletes a blocking receive into the buffer recvbuf from MPI rank source of communicator comm using with the message tag tag.\n\nrecvbuf can be a Buffer, or any object for which Buffer(recvbuf) is defined.\n\nOptionally returns the Status object of the receive.\n\nSee also\n\nRecv\nrecv\n\nExternal links\n\nMPI_Recv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Recv","page":"Point-to-point communication","title":"MPI.Recv","text":"data = Recv(::Type{T}, comm::Comm;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\ndata, status = Recv(::Type{T}, comm::Comm, MPI.Status;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\n\nCompletes a blocking receive of a single isbits object of type T from MPI rank source of communicator comm using with the message tag tag.\n\nReturns a tuple of the object of type T and optionally the Status of the receive.\n\nSee also\n\nRecv!\nrecv\n\nExternal links\n\nMPI_Recv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.recv","page":"Point-to-point communication","title":"MPI.recv","text":"obj = recv(comm::Comm;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\nobj, status = recv(comm::Comm, MPI.Status;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\n\nCompletes a blocking receive of a serialized object from MPI rank source of 
communicator comm using with the message tag tag.\n\nReturns the deserialized object and optionally the Status of the receive.\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Sendrecv!","page":"Point-to-point communication","title":"MPI.Sendrecv!","text":"data = Sendrecv!(sendbuf, recvbuf, comm;\n dest::Integer, sendtag::Integer=0, source::Integer=MPI.ANY_SOURCE, recvtag::Integer=MPI.ANY_TAG)\ndata, status = Sendrecv!(sendbuf, recvbuf, comm, MPI.Status;\n dest::Integer, sendtag::Integer=0, source::Integer=MPI.ANY_SOURCE, recvtag::Integer=MPI.ANY_TAG)\n\nComplete a blocking send-receive operation over the MPI communicator comm. Send sendbuf to the MPI rank dest using message tag sendtag, and receive from MPI rank source into the buffer recvbuf using message tag recvtag. Return a Status object.\n\nExternal links\n\nMPI_Sendrecv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#Non-blocking-communication","page":"Point-to-point communication","title":"Non-blocking communication","text":"","category":"section"},{"location":"reference/pointtopoint/#Initiation","page":"Point-to-point communication","title":"Initiation","text":"","category":"section"},{"location":"reference/pointtopoint/","page":"Point-to-point communication","title":"Point-to-point communication","text":"MPI.Isend\nMPI.isend\nMPI.Irecv!","category":"page"},{"location":"reference/pointtopoint/#MPI.Isend","page":"Point-to-point communication","title":"MPI.Isend","text":"Isend(data, comm::Comm[, req::AbstractRequest = Request()]; dest::Integer, tag::Integer=0)\n\nStarts a nonblocking send of data to MPI rank dest of communicator comm using with the message tag tag.\n\ndata can be a Buffer, or any object for which Buffer_send is defined.\n\nReturns the AbstractRequest object for the nonblocking send.\n\nExternal links\n\nMPI_Isend man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.isend","page":"Point-to-point communication","title":"MPI.isend","text":"isend(obj, comm::Comm[, req::AbstractRequest = Request()]; dest::Integer, tag::Integer=0)\n\nStarts a nonblocking send of using a serialized version of obj to MPI rank dest of communicator comm using with the message tag tag.\n\nReturns the communication Request for the nonblocking send.\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Irecv!","page":"Point-to-point communication","title":"MPI.Irecv!","text":"req = Irecv!(recvbuf, comm::Comm[, req::AbstractRequest = Request()];\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\n\nStarts a nonblocking receive into the buffer data from MPI rank source of communicator comm using with the message tag tag.\n\ndata can be a Buffer, or any object for which Buffer(data) is defined.\n\nReturns the AbstractRequest object for the nonblocking receive.\n\nExternal links\n\nMPI_Irecv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#Completion","page":"Point-to-point communication","title":"Completion","text":"","category":"section"},{"location":"reference/pointtopoint/","page":"Point-to-point communication","title":"Point-to-point communication","text":"MPI.Test\nMPI.Testall\nMPI.Testany\nMPI.Testsome\nMPI.Wait\nBase.wait(req::MPI.Request)\nMPI.Waitall\nMPI.Waitany\nMPI.Waitsome","category":"page"},{"location":"reference/pointtopoint/#MPI.Test","page":"Point-to-point communication","title":"MPI.Test","text":"flag = 
Test(req::AbstractRequest)\nflag, status = Test(req::AbstractRequest, Status)\n\nCheck if the request req is complete. If so, the request is deallocated and flag = true is returned. Otherwise flag = false.\n\nThe Status argument additionally returns the Status of the completed request.\n\nExternal links\n\nMPI_Test man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Testall","page":"Point-to-point communication","title":"MPI.Testall","text":"flag = Testall(reqs::AbstractVector{Request}[, statuses::Vector{Status}])\nflag, statuses = Testall(reqs::AbstractVector{Request}, Status)\n\nCheck if all active requests in the array reqs are complete. If so, the requests are deallocated and true is returned. Otherwise no requests are modified, and false is returned.\n\nThe optional statuses or Status argument can be used to obtain the return Status of each request.\n\nSee also\n\nRequestSet can be used to minimize allocations\n\nExternal links\n\nMPI_Testall man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Testany","page":"Point-to-point communication","title":"MPI.Testany","text":"flag, idx = Testany(reqs::AbstractVector{Request}[, status::Ref{Status}])\nflag, idx, status = Testany(reqs::AbstractVector{Request}, Status)\n\nChecks if any one of the requests in the array reqs is complete.\n\nIf one or more requests are complete, then one is chosen arbitrarily, deallocated. flag = true and its (1-based) index idx is returned.\n\nIf there are no completed requests, then flag = false and idx = nothing is returned.\n\nIf there are no active requests, flag = true and idx = nothing.\n\nThe optional status argument can be used to obtain the return Status of the request.\n\nSee also\n\nRequestSet can be used to minimize allocations\n\nExternal links\n\nMPI_Testany man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Testsome","page":"Point-to-point communication","title":"MPI.Testsome","text":"inds = Testsome(reqs::AbstractVector{Request}[, statuses::Vector{Status}])\n\nSimilar to Waitsome except that if no operations have completed it will return an empty array.\n\nIf there are no active requests, then the function returns nothing.\n\nThe optional statuses argument can be used to obtain the return Status of each completed request.\n\nSee also\n\nRequestSet can be used to minimize allocations\n\nExternal links\n\nMPI_Testsome man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Wait","page":"Point-to-point communication","title":"MPI.Wait","text":"Wait(req::AbstractRequest)\nstatus = Wait(req::AbstractRequest, Status)\n\nBlock until the request req is complete and deallocated.\n\nThe Status argument returns the Status of the completed request.\n\nExternal links\n\nMPI_Wait man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#Base.wait-Tuple{MPI.Request}","page":"Point-to-point communication","title":"Base.wait","text":"Base.wait(req::MPI.Request)\n\nWait for an MPI request to complete. 
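As a worked illustration of the completion functions above (a minimal sketch): start a nonblocking exchange, overlap it with local work while polling MPI.Test, and finish with MPI.Wait.

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
next = mod(rank + 1, MPI.Comm_size(comm))
prev = mod(rank - 1, MPI.Comm_size(comm))
sendbuf = fill(rank, 8)
recvbuf = similar(sendbuf)
rreq = MPI.Irecv!(recvbuf, comm; source=prev, tag=0)
sreq = MPI.Isend(sendbuf, comm; dest=next, tag=0)
while !MPI.Test(rreq)    # returns true (and frees the request) once the receive has landed
    # ... advance work that does not depend on recvbuf ...
end
MPI.Wait(sreq)           # after this, sendbuf may safely be reused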
Unlike MPI.Wait, it will yield to other Julia tasks resulting in a cooperative wait.\n\n\n\n\n\n","category":"method"},{"location":"reference/pointtopoint/#MPI.Waitall","page":"Point-to-point communication","title":"MPI.Waitall","text":"Waitall(reqs::AbstractVector{Request}[, statuses::Vector{Status}])\nstatuses = Waitall(reqs::AbstractVector{Request}, Status)\n\nBlock until all active requests in the array reqs are complete.\n\nThe optional statuses or Status argument can be used to obtain the return Status of each request.\n\nSee also\n\nRequestSet can be used to minimize allocations\n\nExternal links\n\nMPI_Waitall man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Waitany","page":"Point-to-point communication","title":"MPI.Waitany","text":"i = Waitany(reqs::AbstractVector{Request}[, status::Ref{Status}])\ni, status = Waitany(reqs::AbstractVector{Request}, Status)\n\nBlocks until one of the requests in the array reqs is complete: if more than one is complete, one is chosen arbitrarily. The request is deallocated and the (1-based) index i of the completed request is returned.\n\nIf there are no active requests, then i = nothing.\n\nThe optional status argument can be used to obtain the return Status of the request.\n\nSee also\n\nRequestSet can be used to minimize allocations\n\nExternal links\n\nMPI_Waitany man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Waitsome","page":"Point-to-point communication","title":"MPI.Waitsome","text":"inds = Waitsome(reqs::AbstractVector{Request}[, statuses::Vector{Status}])\n\nBlock until at least one of the active requests in the array reqs is complete. The completed requests are deallocated, and an array inds of their indices in reqs is returned.\n\nIf there are no active requests, then inds = nothing.\n\nThe optional statuses argument can be used to obtain the return Status of each completed request.\n\nSee also\n\nRequestSet can be used to minimize allocations\n\nExternal links\n\nMPI_Waitsome man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#Probe/Cancel","page":"Point-to-point communication","title":"Probe/Cancel","text":"","category":"section"},{"location":"reference/pointtopoint/","page":"Point-to-point communication","title":"Point-to-point communication","text":"MPI.isnull\nMPI.Cancel!\nMPI.Iprobe\nMPI.Probe","category":"page"},{"location":"reference/pointtopoint/#MPI.isnull","page":"Point-to-point communication","title":"MPI.isnull","text":"isnull(req::AbstractRequest)\n\nIs req is a null request.\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Cancel!","page":"Point-to-point communication","title":"MPI.Cancel!","text":"Cancel!(req::Request)\n\nMarks a pending Irecv! operation for cancellation (cancelling a Isend, while supported in some implementations, is deprecated as of MPI 3.1). Note that the request is not deallocated, and can still be queried using the test or wait functions.\n\nExternal links\n\nMPI_Cancel man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Iprobe","page":"Point-to-point communication","title":"MPI.Iprobe","text":"ismsg = Iprobe(comm::Comm;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\nismsg, status = Iprobe(comm::Comm, MPI.Status;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\n\nChecks if there is a message that can be received matching source, tag and comm. 
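For illustration, a minimal sketch of the polling pattern that MPI.Iprobe enables: check for a message, size the receive buffer from the returned Status, then receive it (the tag value 7 is arbitrary).

using MPI
MPI.Init()
comm = MPI.COMM_WORLD

# Poll until a matching message is available, then allocate a buffer and receive it.
function poll_recv(comm; source, tag)
    while true
        ismsg, status = MPI.Iprobe(comm, MPI.Status; source=source, tag=tag)
        if ismsg
            buf = Vector{Float64}(undef, MPI.Get_count(status, Float64))
            MPI.Recv!(buf, comm; source=source, tag=tag)
            return buf
        end
        yield()   # give other Julia tasks a chance to run while waiting
    end
end

if MPI.Comm_rank(comm) == 0
    MPI.Send(rand(3), comm; dest=1, tag=7)
elseif MPI.Comm_rank(comm) == 1
    buf = poll_recv(comm; source=0, tag=7)
end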
If so, returns ismsg = true. The Status argument additionally returns the Status of the completed request.\n\nExternal links\n\nMPI_Iprobe man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Probe","page":"Point-to-point communication","title":"MPI.Probe","text":"Probe(comm::Comm;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\nstatus = Probe(comm::Comm, MPI.Status;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\n\nBlocks until there is a message that can be received matching source, tag and comm. Optionally returns the corresponding Status object.\n\nExternal links\n\nMPI_Probe man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#Persistent-requests","page":"Point-to-point communication","title":"Persistent requests","text":"","category":"section"},{"location":"reference/pointtopoint/","page":"Point-to-point communication","title":"Point-to-point communication","text":"MPI.Send_init\nMPI.Recv_init\nMPI.Start\nMPI.Startall","category":"page"},{"location":"reference/pointtopoint/#MPI.Send_init","page":"Point-to-point communication","title":"MPI.Send_init","text":"Send_init(buf, comm::MPI.Comm[, req::AbstractRequest = Request()];\n dest, tag=0)\n\nAllocate a persistent send request, returning a AbstractRequest object. Use Start or Startall to start the communication operation, and free to deallocate the request.\n\nExternal links\n\nMPI_Send_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Recv_init","page":"Point-to-point communication","title":"MPI.Recv_init","text":"Recv_init(buf, comm::MPI.Comm[, req::AbstractRequest = Request()];\n source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG)\n\nAllocate a persistent receive request, returning a AbstractRequest object. Use Start or Startall to start the communication operation, and free to deallocate the request.\n\nExternal links\n\nMPI_Recv_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Start","page":"Point-to-point communication","title":"MPI.Start","text":"Start(request::AbstractRequest)\n\nStart a persistent communication request created by Send_init or Recv_init. Call Wait to complete the request.\n\nExternal links\n\nMPI_Start man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Startall","page":"Point-to-point communication","title":"MPI.Startall","text":"Startall(reqs::AbstractVector{Request})\n\nStart a set of persistent communication requests created by Send_init or Recv_init. Call Waitall to complete the requests.\n\nExternal links\n\nMPI_Startall man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#Matching-probes-and-receives","page":"Point-to-point communication","title":"Matching probes and receives","text":"","category":"section"},{"location":"reference/pointtopoint/","page":"Point-to-point communication","title":"Point-to-point communication","text":"MPI.Message\nMPI.Mprobe\nMPI.Improbe\nMPI.Mrecv!\nMPI.Imrecv!","category":"page"},{"location":"reference/pointtopoint/#MPI.Message","page":"Point-to-point communication","title":"MPI.Message","text":"MPI.Message\n\nAn MPI message handle object, used by matched receive operations. These are returned by MPI.Mprobe and MPI.Improbe operations, and must be received by either MPI.Mrecv! 
or MPI.Imrecv!.\n\n\n\n\n\n","category":"type"},{"location":"reference/pointtopoint/#MPI.Mprobe","page":"Point-to-point communication","title":"MPI.Mprobe","text":"msg = MPI.Mprobe(comm::MPI.Comm;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\nmsg, status = MPI.Mprobe(comm::MPI.Comm, MPI.Status;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\n\nMatching blocking probe. Similar to MPI.Probe, except that it also returns msg, an MPI.Message object. \n\nBlocks until a message that can be received matching source, tag and comm, returning a Message object msg, which must be received by either MPI.Mrecv! or MPI.Imrecv!.\n\nThe Status argument additionally returns the Status of the completed request.\n\nExternal links\n\nMPI_Mprobe man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Improbe","page":"Point-to-point communication","title":"MPI.Improbe","text":"ismsg, msg = MPI.Improbe(comm::MPI.Comm;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\nismsg, msg, status = MPI.Improbe(comm::MPI.Comm, MPI.Status;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\n\nMatching non-blocking probe. Similar to MPI.Iprobe, except that it also returns msg, an MPI.Message object. \n\nChecks if there is a message that can be received matching source, tag and comm. If so, returns ismsg = true, and a Message object msg, which must be received by either MPI.Mrecv! or MPI.Imrecv!. Otherwise msg is set to be a null Message.\n\nThe Status argument additionally returns the Status of the completed request.\n\nExternal links\n\nMPI_Improbe man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Mrecv!","page":"Point-to-point communication","title":"MPI.Mrecv!","text":"data = MPI.Mrecv!(recvbuf, msg::MPI.Message)\ndata, status = MPI.Mrecv!(recvbuf, msg::MPI.Message, MPI.Status)\n\nCompletes a blocking receive matched by a matching probe operation into the buffer recvbuf, and the Message msg.\n\nrecvbuf can be a Buffer, or any object for which Buffer(recvbuf) is defined.\n\nOptionally returns the Status object of the receive.\n\nExternal links\n\nMPI_Mrecv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Imrecv!","page":"Point-to-point communication","title":"MPI.Imrecv!","text":"req = MPI.Imrecv!(recvbuf, msg::MPI.Message[, req::AbstractRequest=Request()])\n\nStarts a nonblocking receive matched by a matching probe operation into the buffer recvbuf, and the Message msg.\n\nrecvbuf can be a Buffer, or any object for which Buffer(recvbuf) is defined.\n\nReturns req, an AbstractRequest object for the nonblocking receive.\n\nExternal links\n\nMPI_Imrecv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"external/#External-libraries-and-packages","page":"External libraries and packages","title":"External libraries and packages","text":"","category":"section"},{"location":"external/","page":"External libraries and packages","title":"External libraries and packages","text":"Other libraries and packages may also make use of MPI. 
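Before moving on, a minimal sketch of the matched-probe workflow documented just above: MPI.Mprobe pairs a Message handle with a Status, so the probed message can be sized and then consumed with MPI.Mrecv! (the tag and payload here are arbitrary).

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
if MPI.Comm_rank(comm) == 0
    MPI.Send(collect(1:4), comm; dest=1, tag=3)
elseif MPI.Comm_rank(comm) == 1
    msg, status = MPI.Mprobe(comm, MPI.Status; source=0, tag=3)
    buf = Vector{Int}(undef, MPI.Get_count(status, Int))
    MPI.Mrecv!(buf, msg)   # receives exactly the message matched by Mprobe
end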
There are several concerns to ensure things are set up correctly.","category":"page"},{"location":"external/#Binary-requirements","page":"External libraries and packages","title":"Binary requirements","text":"","category":"section"},{"location":"external/","page":"External libraries and packages","title":"External libraries and packages","text":"You need to ensure that external libraries are built correctly. In particular, if you are using a system-provided MPI backend in Julia, you also need to use the same system-provided binary for all packages and external libraries you use.","category":"page"},{"location":"external/#Passing-MPI-handles-via-ccall","page":"External libraries and packages","title":"Passing MPI handles via ccall","text":"","category":"section"},{"location":"external/","page":"External libraries and packages","title":"External libraries and packages","text":"When passing MPI.jl handle objects (MPI.Comm, MPI.Info, etc) to C/C++ functions via ccall, you should pass the object directly as an argument, and specify the argument type as either the underlying handle type (MPI.MPI_Comm, MPI.MPI_Info, etc.), or a pointer (Ptr{MPI.MPI_Comm}, Ptr{MPI.MPI_Info}, etc.). This will internally handle the unwrapping, but ensure that a reference is kept to avoid premature garbage collection.","category":"page"},{"location":"external/","page":"External libraries and packages","title":"External libraries and packages","text":"For example the C function signatures","category":"page"},{"location":"external/","page":"External libraries and packages","title":"External libraries and packages","text":"int cfunc1(MPI_Comm comm);\nint cfunc2(MPI_Comm * comm);","category":"page"},{"location":"external/","page":"External libraries and packages","title":"External libraries and packages","text":"would be called as","category":"page"},{"location":"external/","page":"External libraries and packages","title":"External libraries and packages","text":"ccall((:cfunc1, lib), Cint, (MPI.MPI_Comm,), comm)\nccall((:cfunc2, lib), Cint, (Ptr{MPI.MPI_Comm},), comm)","category":"page"},{"location":"external/#Object-finalizers-and-MPI.Finalize","page":"External libraries and packages","title":"Object finalizers and MPI.Finalize","text":"","category":"section"},{"location":"external/","page":"External libraries and packages","title":"External libraries and packages","text":"External libraries may allocate their own MPI handles (e.g., create or duplicate MPI communicators), which need to be cleaned up before MPI is finalized. If these are attached to object finalizers, they may not be guaranteed to be called before MPI.Finalize, which can result in an error upon program exit. 
(By default, MPI.jl will install an atexit hook that calls MPI.Finalize if it hasn't already been invoked.)","category":"page"},{"location":"external/","page":"External libraries and packages","title":"External libraries and packages","text":"There are two typical solutions to this problem:","category":"page"},{"location":"external/","page":"External libraries and packages","title":"External libraries and packages","text":"Gate the clean up functions behind an MPI.Finalized call, e.g.\nfinalizer(obj) do obj\n if !MPI.Finalized\n # call clean up function\n end\nend\nKeep track of all such objects, clean them up via MPI.add_finalize_hook!, e.g.\nfinalizer(obj) do obj\n # call clean up function\nend\nMPI.add_finalize_hook!(() -> finalize(obj))\nA variant of this is to keep track of all such objects, for example, using a WeakKeyDict, and use a hook to clean them all:\nconst REFS = WeakKeyDict{ObjType, Nothing}()\nMPI.add_finalize_hook!() do\n for obj in keys(REFS)\n finalize(obj)\n end\nend\n\n# for each object `obj`\nfinalizer(obj) do obj\n # call clean up function\nend\nREFS[obj] = nothing","category":"page"},{"location":"external/#Externally-initialized-MPI","page":"External libraries and packages","title":"Externally initialized MPI","text":"","category":"section"},{"location":"external/","page":"External libraries and packages","title":"External libraries and packages","text":"When working with non-Julia libraries or tools, MPI_Init may be invoked in another part of the execution flow and not via MPI.jl's MPI.Init function. This leaves some package-internal settings uninitialized. In this case, you need to call [MPI.run_init_hooks())(@ref) manually to fully initialize MPI.jl. You may also want to consider calling MPI.set_default_error_handler_return().","category":"page"},{"location":"reference/io/#I/O","page":"I/O","title":"I/O","text":"","category":"section"},{"location":"reference/io/#File-manipulation","page":"I/O","title":"File manipulation","text":"","category":"section"},{"location":"reference/io/","page":"I/O","title":"I/O","text":"MPI.File.open","category":"page"},{"location":"reference/io/#MPI.File.open","page":"I/O","title":"MPI.File.open","text":"MPI.File.open(comm::Comm, filename::AbstractString; keywords...)\n\nOpen the file identified by filename. 
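For illustration, a minimal sketch (the file name is a placeholder; it uses MPI.File.write_at from the "Explicit offsets" routines below, and assumes MPI.jl's Base.close method for file handles): every rank writes its own fixed-size block into a shared file.

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
data = fill(Float64(rank), 16)
fh = MPI.File.open(comm, "blocks.bin"; write=true, create=true)
MPI.File.write_at(fh, rank * sizeof(data), data)   # byte offset under the default file view
close(fh)   # assumed: MPI.jl extends Base.close for FileHandle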
This is a collective operation on comm.\n\nSupported keywords are as follows:\n\nread, write, create, append have the same behaviour and defaults as Base.open.\nsequential: file will only be accessed sequentially (default: false)\nuniqueopen: file will not be concurrently opened elsewhere (default: false)\ndeleteonclose: delete file on close (default: false)\n\nAny additional keywords are passed via an Info object, and are implementation dependent.\n\nExternal links\n\nMPI_File_open man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#Views","page":"I/O","title":"Views","text":"","category":"section"},{"location":"reference/io/","page":"I/O","title":"I/O","text":"MPI.File.set_view!\nMPI.File.get_byte_offset","category":"page"},{"location":"reference/io/#MPI.File.set_view!","page":"I/O","title":"MPI.File.set_view!","text":"MPI.File.set_view!(file::FileHandle, disp::Integer, etype::Datatype, filetype::Datatype, datarep::AbstractString; kwargs...)\n\nSet the current process's view of file.\n\nThe start of the view is set to disp; the type of data is set to etype; the distribution of data to processes is set to filetype; and the representation of data in the file is set to datarep: one of \"native\" (default), \"internal\", or \"external32\".\n\nExternal links\n\nMPI_File_set_view man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.get_byte_offset","page":"I/O","title":"MPI.File.get_byte_offset","text":"MPI.File.get_byte_offset(file::FileHandle, offset::Integer)\n\nConverts a view-relative offset into an absolute byte position. Returns the absolute byte position (from the beginning of the file) of offset relative to the current view of file.\n\nExternal links\n\nMPI_File_get_byte_offset man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#Consistency","page":"I/O","title":"Consistency","text":"","category":"section"},{"location":"reference/io/","page":"I/O","title":"I/O","text":"MPI.File.sync\nMPI.File.get_atomicity\nMPI.File.set_atomicity","category":"page"},{"location":"reference/io/#MPI.File.sync","page":"I/O","title":"MPI.File.sync","text":"MPI.File.sync(fh::FileHandle)\n\nA collective operation causing all previous writes to fh by the calling process to be transferred to the storage device. If other processes have made updates to the storage device, then all such updates become visible to subsequent reads of fh by the calling process.\n\nExternal links\n\nMPI_File_sync man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.get_atomicity","page":"I/O","title":"MPI.File.get_atomicity","text":"MPI.File.get_atomicity(file::FileHandle)\n\nGet the consistency option for the fh. 
If false it is non-atomic.\n\nExternal links\n\nMPI_File_get_atomicity man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.set_atomicity","page":"I/O","title":"MPI.File.set_atomicity","text":"MPI.File.set_atomicity(file::FileHandle, flag::Bool)\n\nSet the consistency option for the fh.\n\nExternal links\n\nMPI_File_get_atomicity man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#Data-access","page":"I/O","title":"Data access","text":"","category":"section"},{"location":"reference/io/#Individual-pointer","page":"I/O","title":"Individual pointer","text":"","category":"section"},{"location":"reference/io/","page":"I/O","title":"I/O","text":"MPI.File.read!\nMPI.File.read_all!\nMPI.File.write\nMPI.File.write_all","category":"page"},{"location":"reference/io/#MPI.File.read!","page":"I/O","title":"MPI.File.read!","text":"MPI.File.read!(file::FileHandle, data)\n\nReads current view of file into data. data can be a Buffer, or any object for which Buffer(data) is defined.\n\nSee also\n\nMPI.File.set_view! to set the current view of the file\nMPI.File.read_all! for the collective operation\n\nExternal links\n\nMPI_File_read man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.read_all!","page":"I/O","title":"MPI.File.read_all!","text":"MPI.File.read_all!(file::FileHandle, data)\n\nReads current view of file into data. data can be a Buffer, or any object for which Buffer(data) is defined. This is a collective operation, so must be called on all ranks in the communicator on which file was opened.\n\nSee also\n\nMPI.File.set_view! to set the current view of the file\nMPI.File.read! for the noncollective operation\n\nExternal links\n\nMPI_File_read_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.write","page":"I/O","title":"MPI.File.write","text":"MPI.File.write(file::FileHandle, data)\n\nWrites data to the current view of file. data can be a Buffer, or any object for which Buffer_send(data) is defined.\n\nSee also\n\nMPI.File.set_view! to set the current view of the file\nMPI.File.write_all for the collective operation\n\nExternal links\n\nMPI_File_write man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.write_all","page":"I/O","title":"MPI.File.write_all","text":"MPI.File.write_all(file::FileHandle, data)\n\nWrites data to the current view of file. data can be a Buffer, or any object for which Buffer_send(data) is defined. This is a collective operation, so must be called on all ranks in the communicator on which file was opened.\n\nSee also\n\nMPI.File.set_view! to set the current view of the file\nMPI.File.write for the noncollective operation\n\nExternal links\n\nMPI_File_write_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#Explicit-offsets","page":"I/O","title":"Explicit offsets","text":"","category":"section"},{"location":"reference/io/","page":"I/O","title":"I/O","text":"MPI.File.read_at!\nMPI.File.read_at_all!\nMPI.File.write_at\nMPI.File.write_at_all","category":"page"},{"location":"reference/io/#MPI.File.read_at!","page":"I/O","title":"MPI.File.read_at!","text":"MPI.File.read_at!(file::FileHandle, offset::Integer, data)\n\nReads from file at position offset into data. data can be a Buffer, or any object for which Buffer(data) is defined.\n\nSee also\n\nMPI.File.read_at_all! 
for the collective operation\n\nExternal links\n\nMPI_File_read_at man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.read_at_all!","page":"I/O","title":"MPI.File.read_at_all!","text":"MPI.File.read_at_all!(file::FileHandle, offset::Integer, data)\n\nReads from file at position offset into data. data can be a Buffer, or any object for which Buffer(data) is defined. This is a collective operation, so must be called on all ranks in the communicator on which file was opened.\n\nSee also\n\nMPI.File.read_at! for the noncollective operation\n\nExternal links\n\nMPI_File_read_at_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.write_at","page":"I/O","title":"MPI.File.write_at","text":"MPI.File.write_at(file::FileHandle, offset::Integer, data)\n\nWrites data to file at position offset. data can be a Buffer, or any object for which Buffer_send(data) is defined.\n\nSee also\n\nMPI.File.write_at_all for the collective operation\n\nExternal links\n\nMPI_File_write_at man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.write_at_all","page":"I/O","title":"MPI.File.write_at_all","text":"MPI.File.write_at_all(file::FileHandle, offset::Integer, data)\n\nWrites from data to file at position offset. data can be a Buffer, or any object for which Buffer_send(data) is defined. This is a collective operation, so must be called on all ranks in the communicator on which file was opened.\n\nSee also\n\nMPI.File.write_at for the noncollective operation\n\nExternal links\n\nMPI_File_write_at_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#Shared-pointer","page":"I/O","title":"Shared pointer","text":"","category":"section"},{"location":"reference/io/","page":"I/O","title":"I/O","text":"MPI.File.read_shared!\nMPI.File.write_shared\nMPI.File.read_ordered!\nMPI.File.write_ordered\nMPI.File.seek_shared\nMPI.File.get_position_shared","category":"page"},{"location":"reference/io/#MPI.File.read_shared!","page":"I/O","title":"MPI.File.read_shared!","text":"MPI.File.read_shared!(file::FileHandle, data)\n\nReads from file using the shared file pointer into data. data can be a Buffer, or any object for which Buffer(data) is defined.\n\nSee also\n\nMPI.File.read_ordered! for the collective operation\n\nExternal links\n\nMPI_File_read_shared man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.write_shared","page":"I/O","title":"MPI.File.write_shared","text":"MPI.File.write_shared(file::FileHandle, data)\n\nWrites to file using the shared file pointer from data. data can be a Buffer, or any object for which Buffer(data) is defined.\n\nSee also\n\nMPI.File.write_ordered for the collective operation\n\nExternal links\n\nMPI_File_write_shared man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.read_ordered!","page":"I/O","title":"MPI.File.read_ordered!","text":"MPI.File.read_ordered!(file::FileHandle, data)\n\nCollectively reads in rank order from file using the shared file pointer into data. data can be a Buffer, or any object for which Buffer(data) is defined. This is a collective operation, so must be called on all ranks in the communicator on which file was opened.\n\nSee also\n\nMPI.File.read_shared! 
for the noncollective operation\n\nExternal links\n\nMPI_File_read_ordered man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.write_ordered","page":"I/O","title":"MPI.File.write_ordered","text":"MPI.File.write_ordered(file::FileHandle, data)\n\nCollectively writes in rank order to file using the shared file pointer from data. data can be a Buffer, or any object for which Buffer(data) is defined. This is a collective operation, so must be called on all ranks in the communicator on which file was opened.\n\nSee also\n\nMPI.File.write_shared for the noncollective operation\n\nExternal links\n\nMPI_File_write_ordered man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.seek_shared","page":"I/O","title":"MPI.File.seek_shared","text":"MPI.File.seek_shared(file::FileHandle, offset::Integer, whence::Seek=SEEK_SET)\n\nUpdates the shared file pointer according to whence, which has the following possible values:\n\nMPI.File.SEEK_SET (default): the pointer is set to offset\nMPI.File.SEEK_CUR: the pointer is set to the current pointer position plus offset\nMPI.File.SEEK_END: the pointer is set to the end of file plus offset\n\nThis is a collective operation, and must be called with the same value on all processes in the communicator.\n\nExternal links\n\nMPI_File_seek_shared man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.get_position_shared","page":"I/O","title":"MPI.File.get_position_shared","text":"MPI.File.get_position_shared(file::FileHandle)\n\nThe current position of the shared file pointer (in etype units) relative to the current view.\n\nExternal links\n\nMPI_File_get_position_shared man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#Environment","page":"Environment","title":"Environment","text":"","category":"section"},{"location":"reference/environment/#Launching-MPI-programs","page":"Environment","title":"Launching MPI programs","text":"","category":"section"},{"location":"reference/environment/","page":"Environment","title":"Environment","text":"mpiexec\nMPI.install_mpiexecjl","category":"page"},{"location":"reference/environment/#MPICH_jll.mpiexec","page":"Environment","title":"MPICH_jll.mpiexec","text":"mpiexec(fn)\n\nA wrapper function for the MPI launcher executable. Calls fn(cmd), where cmd is a Cmd object of the MPI launcher.\n\nUsage\n\njulia> mpiexec(cmd -> run(`$cmd -n 3 echo hello world`));\nhello world\nhello world\nhello world\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#MPI.install_mpiexecjl","page":"Environment","title":"MPI.install_mpiexecjl","text":"MPI.install_mpiexecjl(; command::String = \"mpiexecjl\",\n destdir::String = joinpath(DEPOT_PATH[1], \"bin\"),\n force::Bool = false, verbose::Bool = true)\n\nInstall the mpiexec wrapper to destdir directory, with filename command. Set force to true to overwrite an existing destination file with the same path. 
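For example (a sketch; the shell line assumes the default install location is on PATH):

using MPI
MPI.install_mpiexecjl()   # installs `mpiexecjl` into joinpath(DEPOT_PATH[1], "bin") by default
# then, from a shell:
#   mpiexecjl --project -n 4 julia script.jl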
If verbose is true, the installation prints information about the progress of the process.\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#Enums","page":"Environment","title":"Enums","text":"","category":"section"},{"location":"reference/environment/","page":"Environment","title":"Environment","text":"MPI.ThreadLevel","category":"page"},{"location":"reference/environment/#MPI.ThreadLevel","page":"Environment","title":"MPI.ThreadLevel","text":"ThreadLevel\n\nAn Enum denoting the level of threading support in the current process:\n\nMPI.THREAD_SINGLE: Only one thread will execute.\nMPI.THREAD_FUNNELED: The process may be multi-threaded, but the application must ensure that only the main thread makes MPI calls. See Is_thread_main.\nMPI.THREAD_SERIALIZED: The process may be multi-threaded, and multiple threads may make MPI calls, but only one at a time (i.e. all MPI calls are serialized).\nMPI.THREAD_MULTIPLE: Multiple threads may call MPI, with no restrictions.\n\nSee also\n\nInit\nQuery_thread\n\n\n\n\n\n","category":"type"},{"location":"reference/environment/#Functions","page":"Environment","title":"Functions","text":"","category":"section"},{"location":"reference/environment/","page":"Environment","title":"Environment","text":"MPI.Abort\nMPI.Init\nMPI.Query_thread\nMPI.Is_thread_main\nMPI.Initialized\nMPI.Finalize\nMPI.Finalized\nMPI.add_init_hook!\nMPI.run_init_hooks\nMPI.add_finalize_hook!","category":"page"},{"location":"reference/environment/#MPI.Abort","page":"Environment","title":"MPI.Abort","text":"Abort(comm::Comm, errcode::Integer)\n\nMake a “best attempt” to abort all tasks in the group of comm. This function does not require that the invoking environment take any action with the error code. However, a Unix or POSIX environment should handle this as a return errorcode from the main program.\n\nExternal links\n\nMPI_Abort man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#MPI.Init","page":"Environment","title":"MPI.Init","text":"Init(;threadlevel=:serialized, finalize_atexit=true, errors_return=true)\n\nInitialize MPI in the current process. The keyword options:\n\nthreadlevel: either :single, :funneled, :serialized (default), :multiple, or an instance of ThreadLevel.\nfinalize_atexit: if true (default), adds an atexit hook to call MPI.Finalize if it hasn't already been called.\nerrors_return: if true (default), will set the default error handlers for MPI.COMM_SELF and MPI.COMM_WORLD to be MPI.ERRORS_RETURN. MPI errors will then appear as Julia exceptions.\n\nIt will return the ThreadLevel value which MPI is initialized at.\n\nAll MPI programs must call this function at least once before calling any other MPI operations: the only MPI functions that may be called before MPI.Init are MPI.Initialized and MPI.Finalized.\n\nIt is safe to call MPI.Init multiple times, however it is not valid to call it after calling MPI.Finalize.\n\nExternal links\n\nMPI_Init man page: OpenMPI, MPICH\nMPI_Init_thread man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#MPI.Query_thread","page":"Environment","title":"MPI.Query_thread","text":"Query_thread()\n\nQuery the level of threading support in the current process. 
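For illustration, a minimal sketch of requesting a threading level at initialization and inspecting what was actually granted:

using MPI
provided = MPI.Init(threadlevel=:multiple)   # request MPI_THREAD_MULTIPLE
@show provided                 # the ThreadLevel actually granted by the library
@show MPI.Query_thread()       # the same information, queried after initialization
@show MPI.Is_thread_main()     # true on the thread MPI considers "main"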
Returns a ThreadLevel value denoting\n\nExternal links\n\nMPI_Query_thread man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#MPI.Is_thread_main","page":"Environment","title":"MPI.Is_thread_main","text":"Is_thread_main()\n\nQueries whether the current thread is the main thread according to MPI. This can be called by any thread, and is useful for the THREAD_FUNNELED ThreadLevel.\n\nExternal links\n\nMPI_Is_thread_main man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#MPI.Initialized","page":"Environment","title":"MPI.Initialized","text":"Initialized()\n\nReturns true if MPI.Init has been called, false otherwise.\n\nIt is unaffected by MPI.Finalize, and is one of the few functions that may be called before MPI.Init.\n\nExternal links\n\nMPI_Initialized man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#MPI.Finalize","page":"Environment","title":"MPI.Finalize","text":"Finalize()\n\nMarks MPI state for cleanup. This should be called after MPI.Init, and can be called at most once. No further MPI calls (other than Initialized or Finalized) should be made after it is called.\n\nMPI.Init will automatically insert a hook to call this function when Julia exits, if it hasn't already been called.\n\nExternal links\n\nMPI_Finalize man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#MPI.Finalized","page":"Environment","title":"MPI.Finalized","text":"Finalized()\n\nReturns true if MPI.Finalize has completed, false otherwise.\n\nIt is safe to call before MPI.Init and after MPI.Finalize.\n\nExternal links\n\nMPI_Finalized man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#MPI.add_init_hook!","page":"Environment","title":"MPI.add_init_hook!","text":"MPI.add_init_hook!(f)\n\nRegister a function f that will be called as f() when MPI.Init is called. These are invoked in a first-in, first-out (FIFO) order.\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#MPI.run_init_hooks","page":"Environment","title":"MPI.run_init_hooks","text":"MPI.run_init_hooks()\n\nExecute all functions that have been registered using MPI.add_init_hook!().\n\nThis function is executed automatically by MPI.Init() but must be invoked manually if MPI has been initialized externally by a direct call to MPI_Init(). It is safe to call this function multiple times (subsequent runs will be a no-op).\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#MPI.add_finalize_hook!","page":"Environment","title":"MPI.add_finalize_hook!","text":"MPI.add_finalize_hook!(f)\n\nRegister a function f that will be called as f() when MPI.Finalizer is called. These are invoked in a last-in, first-out (LIFO) order.\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#Errors","page":"Environment","title":"Errors","text":"","category":"section"},{"location":"reference/environment/","page":"Environment","title":"Environment","text":"MPI.MPIError\nMPI.FeatureLevelError","category":"page"},{"location":"reference/environment/#MPI.MPIError","page":"Environment","title":"MPI.MPIError","text":"MPIError\n\nError thrown when an MPI function returns an error code. 
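For illustration, a minimal sketch of the hook functions above: init hooks run first-in-first-out when MPI.Init is called, and finalize hooks run last-in-first-out before MPI is finalized.

using MPI
MPI.add_init_hook!(() -> @info "MPI initialized")
MPI.add_finalize_hook!(() -> @info "about to finalize MPI")
MPI.Init()       # runs the registered init hooks
# ... application code ...
MPI.Finalize()   # runs the registered finalize hooks first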
The code field contains the MPI error code.\n\n\n\n\n\n","category":"type"},{"location":"reference/environment/#MPI.API.FeatureLevelError","page":"Environment","title":"MPI.API.FeatureLevelError","text":"FeatureLevelError\n\nError thrown if a feature is not implemented in the current MPI backend.\n\n\n\n\n\n","category":"type"},{"location":"reference/group/#Groups","page":"Groups","title":"Groups","text":"","category":"section"},{"location":"reference/group/","page":"Groups","title":"Groups","text":"An MPI group is a set of process identifiers identified by their rank (see MPI.Comm_rank and MPI.Group_rank). They are used within a communicator to describe the participants in a communication universe.","category":"page"},{"location":"reference/group/#Types-and-enums","page":"Groups","title":"Types and enums","text":"","category":"section"},{"location":"reference/group/","page":"Groups","title":"Groups","text":"MPI.Group\nMPI.Comparison","category":"page"},{"location":"reference/group/#MPI.Group","page":"Groups","title":"MPI.Group","text":"MPI.Group\n\nAn MPI Group object.\n\n\n\n\n\n","category":"type"},{"location":"reference/group/#MPI.Comparison","page":"Groups","title":"MPI.Comparison","text":"Comparison\n\nAn enum denoting the result of Comm_compare:\n\nMPI.IDENT: the objects are handles for the same object (identical groups and same contexts).\nMPI.CONGRUENT: the underlying groups are identical in constituents and rank order; these communicators differ only by context.\nMPI.SIMILAR: members of both objects are the same but the rank order differs.\nMPI.UNEQUAL: otherwise\n\n\n\n\n\n","category":"type"},{"location":"reference/group/#Functions","page":"Groups","title":"Functions","text":"","category":"section"},{"location":"reference/group/#Operations","page":"Groups","title":"Operations","text":"","category":"section"},{"location":"reference/group/","page":"Groups","title":"Groups","text":"MPI.Group_size\nMPI.Group_rank","category":"page"},{"location":"reference/group/#MPI.Group_size","page":"Groups","title":"MPI.Group_size","text":"Group_size(group::Group)\n\nThe number of processes involved in group.\n\nExternal links\n\nMPI_Group_size man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/group/#MPI.Group_rank","page":"Groups","title":"MPI.Group_rank","text":"Group_rank(group::Group)\n\nThe rank of the process in the particular group.\n\nReturns an integer in the range 0:MPI.Group_size()-1.\n\nExternal links\n\nMPI_Group_rank man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#Topology","page":"Topology","title":"Topology","text":"","category":"section"},{"location":"reference/topology/#Cartesian","page":"Topology","title":"Cartesian","text":"","category":"section"},{"location":"reference/topology/","page":"Topology","title":"Topology","text":"MPI.Dims_create\nMPI.Cart_create\nMPI.Cart_get\nMPI.Cart_coords\nMPI.Cart_rank\nMPI.Cart_shift\nMPI.Cart_sub\nMPI.Cartdim_get","category":"page"},{"location":"reference/topology/#MPI.Dims_create","page":"Topology","title":"MPI.Dims_create","text":"newdims = Dims_create(nnodes::Integer, dims)\n\nA convenience function for selecting a balanced Cartesian grid of a total of nnodes nodes, for example to use with MPI.Cart_create.\n\ndims is an array or tuple of integers specifying the number of nodes in each dimension. 
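For illustration, a minimal sketch combining MPI.Dims_create with MPI.Cart_create (documented next) to build a balanced, periodic 2-D process grid:

using MPI
MPI.Init()
comm   = MPI.COMM_WORLD
nprocs = MPI.Comm_size(comm)
dims = MPI.Dims_create(nprocs, [0, 0])   # zero entries mean "choose this dimension for me"
comm_cart = MPI.Cart_create(comm, dims; periodic=(true, true))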
The function returns an array newdims of the same length, such that if newdims[i] = dims[i] if dims[i] is non-zero, and prod(newdims) == nnodes, and values newdims are as close to each other as possible.\n\nnnodes should be divisible by the product of the non-zero entries of dims.\n\nExternal links\n\nMPI_Dims_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#MPI.Cart_create","page":"Topology","title":"MPI.Cart_create","text":"comm_cart = Cart_create(comm::Comm, dims; periodic=map(_->false, dims), reorder=false)\n\nCreate new MPI communicator with Cartesian topology information attached.\n\ndims is an array or tuple of integers specifying the number of MPI processes in each coordinate direction, and periodic is an array or tuple of Bools indicating the periodicity of each coordinate. prod(dims) must be less than or equal to the size of comm; if it is smaller than some processes are returned a null communicator.\n\nIf reorder == false then the rank of each process in the new group is identical to its rank in the old group, otherwise the function may reorder the processes.\n\nSee also MPI.Dims_create.\n\nExternal links\n\nMPI_Cart_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#MPI.Cart_get","page":"Topology","title":"MPI.Cart_get","text":"dims, periods, coords = Cart_get(comm::Comm)\n\nObtain information on the Cartesian topology of dimension N underlying the communicator comm. This is specified by two Cint arrays of N elements for the number of processes and periodicity properties along each Cartesian dimension. A third Cint array is returned, containing the Cartesian coordinates of the calling process.\n\nExternal links\n\nMPI_Cart_get man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#MPI.Cart_coords","page":"Topology","title":"MPI.Cart_coords","text":"coords = Cart_coords(comm::Comm, rank::Integer=Comm_rank(comm))\n\nDetermine coordinates of a process with rank rank in the Cartesian communicator comm. If no rank is provided, it returns the coordinates of the current process.\n\nReturns an integer array of the 0-based coordinates. The inverse of Cart_rank.\n\nExternal links\n\nMPI_Cart_coords man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#MPI.Cart_rank","page":"Topology","title":"MPI.Cart_rank","text":"rank = Cart_rank(comm::Comm, coords)\n\nDetermine process rank in communicator comm with Cartesian structure. The coords array specifies the 0-based Cartesian coordinates of the process. 
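For illustration (a minimal, self-contained sketch): Cartesian coordinates and ranks are inverse descriptions of the same process.

using MPI
MPI.Init()
comm = MPI.COMM_WORLD
comm_cart = MPI.Cart_create(comm, MPI.Dims_create(MPI.Comm_size(comm), [0, 0]); periodic=(true, true))
coords = MPI.Cart_coords(comm_cart)      # 0-based coordinates of the calling process
@assert MPI.Cart_rank(comm_cart, coords) == MPI.Comm_rank(comm_cart)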
This is the inverse of MPI.Cart_coords\n\nExternal links\n\nMPI_Cart_rank man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#MPI.Cart_shift","page":"Topology","title":"MPI.Cart_shift","text":"rank_source, rank_dest = Cart_shift(comm::Comm, direction::Integer, disp::Integer)\n\nReturn the source and destination ranks associated to a shift along a given direction.\n\nExternal links\n\nMPI_Cart_shift man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#MPI.Cart_sub","page":"Topology","title":"MPI.Cart_sub","text":"comm_sub = Cart_sub(comm::Comm, remain_dims)\n\nCreate lower-dimensional Cartesian communicator from existent Cartesian topology.\n\nremain_dims should be a boolean vector specifying the dimensions that should be kept in the generated subgrid.\n\nExternal links\n\nMPI_Cart_sub man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#MPI.Cartdim_get","page":"Topology","title":"MPI.Cartdim_get","text":"ndims = Cartdim_get(comm::Comm)\n\nReturn number of dimensions of the Cartesian topology associated with the communicator comm.\n\nExternal links\n\nMPI_Cartdim_get man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#Graph-topology","page":"Topology","title":"Graph topology","text":"","category":"section"},{"location":"reference/topology/","page":"Topology","title":"Topology","text":"MPI.UNWEIGHTED\nMPI.Dist_graph_create\nMPI.Dist_graph_create_adjacent\nMPI.Dist_graph_neighbors_count\nMPI.Dist_graph_neighbors!\nMPI.Dist_graph_neighbors","category":"page"},{"location":"reference/topology/#MPI.UNWEIGHTED","page":"Topology","title":"MPI.UNWEIGHTED","text":"MPI.UNWEIGHTED :: MPI.Unweighted\n\nThis is used to indicate that a graph topology is unweighted. It can be supplied as an argument to Dist_graph_create_adjacent, Dist_graph_create, and Dist_graph_neighbors!; or obtained as the return value from Dist_graph_neighbors.\n\n\n\n\n\n","category":"constant"},{"location":"reference/topology/#MPI.Dist_graph_create","page":"Topology","title":"MPI.Dist_graph_create","text":"graph_comm = Dist_graph_create(comm::Comm, sources::Vector{Cint}, degrees::Vector{Cint}, destinations::Vector{Cint}; weights::Union{Vector{Cint}, Unweighted, WeightsEmpty}=UNWEIGHTED, reorder=false, infokws...)\n\nCreate a new communicator from a given directed graph topology, described by incoming and outgoing edges on an existing communicator.\n\nArguments\n\ncomm::Comm: The communicator on which the distributed graph topology should be induced.\nsources::Vector{Cint}: An array with the ranks for which this call will specify outgoing edges.\ndegrees::Vector{Cint}: An array with the number of outgoing edges for each entry in the sources array.\ndestinations::Vector{Cint}: An array containing destination nodes for the source nodes in the source node array, of lengthsum(sources).\nweights::Union{Vector{Cint}, Unweighted, WeightsEmpty}: The edge weights of the specified edges. The default is MPI.UNWEIGHTED.\nreorder::Bool=false: If set true, then the MPI implementation can reorder the source and destination indices.\n\nExample\n\nWe can generate a ring graph 1 --> 2 --> ... 
--> N --> 1, where N is the number of ranks in the communicator, as follows\n\njulia> rank = MPI.Comm_rank(comm);\njulia> N = MPI.Comm_size(comm);\njulia> sources = Cint[rank];\njulia> degrees = Cint[1];\njulia> destinations = Cint[mod(rank-1, N)];\njulia> graph_comm = Dist_graph_create(comm, sources, degrees, destinations)\n\nExternal links\n\nMPI_Dist_graph_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#MPI.Dist_graph_create_adjacent","page":"Topology","title":"MPI.Dist_graph_create_adjacent","text":"graph_comm = Dist_graph_create_adjacent(comm::Comm,\n sources::Vector{Cint}, destinations::Vector{Cint};\n source_weights::Union{Vector{Cint}, Unweighted, WeightsEmpty}=UNWEIGHTED, destination_weights::Union{Vector{Cint}, Unweighted, WeightsEmpty}=UNWEIGHTED,\n reorder=false, infokws...)\n\nCreate a new communicator from a given directed graph topology, described by local incoming and outgoing edges on an existing communicator.\n\nArguments\n\ncomm::Comm: The communicator on which the distributed graph topology should be induced.\nsources::Vector{Cint}: The local, incoming edges on the rank of the calling process.\ndestinations::Vector{Cint}: The local, outgoing edges on the rank of the calling process.\nsource_weights::Union{Vector{Cint}, Unweighted, WeightsEmpty}: The edge weights of the local, incoming edges. The default is MPI.UNWEIGHTED.\ndestinations_weights::Union{Vector{Cint}, Unweighted, WeightsEmpty}: The edge weights of the local, outgoing edges. The default is MPI.UNWEIGHTED.\nreorder::Bool=false: If set true, then the MPI implementation can reorder the source and destination indices.\n\nExample\n\nWe can generate a ring graph 1 --> 2 --> ... --> N --> 1, where N is the number of ranks in the communicator, as follows\n\njulia> rank = MPI.Comm_rank(comm);\njulia> N = MPI.Comm_size(comm);\njulia> sources = Cint[mod(rank-1, N)];\njulia> destinations = Cint[mod(rank+1, N)];\njulia> graph_comm = Dist_graph_create_adjacent(comm, sources, destinations);\n\nExternal links\n\nMPI_Dist_graph_create_adjacent man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#MPI.Dist_graph_neighbors_count","page":"Topology","title":"MPI.Dist_graph_neighbors_count","text":"indegree, outdegree, weighted = Dist_graph_neighbors_count(graph_comm::Comm)\n\nReturn the number of in and out edges for the calling processes in a distributed graph topology and a flag indicating whether the distributed graph is weighted.\n\nArguments\n\ngraph_comm::Comm: The communicator of the distributed graph topology.\n\nExample\n\nLet us assume the following graph 0 <--> 1 --> 2, which has no weights on its edges, then the process with rank 1 will obtain the following result from calling the function\n\njulia> Dist_graph_neighbors_count(graph_comm)\n(1,2,false)\n\nExternal links\n\nMPI_Dist_graph_neighbors_count man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#MPI.Dist_graph_neighbors!","page":"Topology","title":"MPI.Dist_graph_neighbors!","text":"Dist_graph_neighbors!(graph_comm::MPI.Comm,\n sources::Vector{Cint}, source_weights::Union{Vector{Cint}, Unweighted},\n destinations::Vector{Cint}, destination_weights::Union{Vector{Cint}, Unweighted},\n)\nDist_graph_neighbors!(graph_comm::Comm, sources::Vector{Cint}, destinations::Vector{Cint})\n\nQuery the neighbors and edge weights (optional) of the calling process in a distributed graph topology.\n\nArguments\n\ngraph_comm::Comm: The 
communicator of the distributed graph topology.\nsources: A preallocated Vector{Cint}, which will be filled with the ranks of the processes whose edges pointing towards the calling process. The length is exactly the indegree returned by MPI.Dist_graph_neighbors_count.\nsource_weights: A preallocated Vector{Cint}, which will be filled with the weights associated to the edges pointing towards the calling process. The length is exactly the indegree returned by MPI.Dist_graph_neighbors_count. Alternatively, MPI.UNWEIGHTED can be used if weight information is not required.\ndestinations: A preallocated Vector{Cint}, which will be filled with the ranks of the processes towards which the edges of the calling process point. The length is exactly the outdegree returned by [MPI.Distgraphneighbors_count`](@ref).\ndestination_weights: A preallocated Vector{Cint}, which will be filled with the weights associated to the edges of the outgoing edges of the calling process point. The length is exactly the outdegree returned by MPI.Dist_graph_neighbors_count. Alternatively, MPI.UNWEIGHTED can be used if weight information is not required.\n\nExample\n\nLet us assume the following graph:\n\n rank 0 <-----> rank 1 ------> rank 2\nweights: 3 4\n\nthen then the process with rank 1 will need to preallocate sources and source_weights as vectors of length 1, and a destinations and destination_weights as vectors of length 2.\n\nThe call will fill the vectors as follows:\n\njulia> MPI.Dist_graph_neighbors!(graph_comm, sources, source_weights, destinations, destination_weights);\njulia> sources\n[0]\njulia> source_weights\n[3]\njulia> destinations\n[0,2]\njulia> destination_weights\n[3,4]\n\nNote that the edge between ranks 0 and 1 can have a different weight depending on whether it is the incoming edge 0 --> 1 or the outgoing one 0 <-- 1.\n\nSee also\n\nDist_graph_neighbors\n\nExternal links\n\nMPI_Dist_graph_neighbors man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#MPI.Dist_graph_neighbors","page":"Topology","title":"MPI.Dist_graph_neighbors","text":"sources, source_weights, destinations, destination_weights = Dist_graph_neighbors(graph_comm::MPI.Comm)\n\nReturn (sources, source_weights, destinations, destination_weights) of the graph communicator graph_comm. For unweighted graphs source_weights and destination_weights are returned as MPI.UNWEIGHTED.\n\nThis function is a wrapper around MPI.Dist_graph_neighbors_count and MPI.Dist_graph_neighbors! that automatically handles the allocation of the result vectors.\n\n\n\n\n\n","category":"function"},{"location":"examples/05-job_schedule/","page":"Job Scheduling","title":"Job Scheduling","text":"EditURL = \"https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/05-job_schedule.jl\"","category":"page"},{"location":"examples/05-job_schedule/#Job-Scheduling","page":"Job Scheduling","title":"Job Scheduling","text":"","category":"section"},{"location":"examples/05-job_schedule/","page":"Job Scheduling","title":"Job Scheduling","text":"# examples/05-job_schedule.jl\n# This example demonstrates a job scheduling through adding the\n# number 100 to every component of the vector data. 
The root\n# assigns one element to each worker to compute the operation.\n# When the worker is finished, the root sends another element\n# until each element is added 100\n# Inspired by https://www.hpc.ntnu.no/vilje/software/mpi-and-mpi-io-training-tutorial/\n# https://www.hpc.ntnu.no/vilje/software/mpi-and-mpi-io-training-tutorial/basic-mpi/job-queue/\n# an updated job_queue.c is available in the basic_mpi/04_job_queue/src subdirectory of\n# the extracted https://www.hpc.ntnu.no/wp-content/uploads/2019/09/mpiexamples.tar.gz\n\nusing MPI\n\nfunction job_queue(data,f)\n MPI.Init()\n\n comm = MPI.COMM_WORLD\n rank = MPI.Comm_rank(comm)\n world_size = MPI.Comm_size(comm)\n nworkers = world_size - 1\n\n root = 0\n\n MPI.Barrier(comm)\n T = eltype(data)\n N = size(data)[1]\n send_mesg = Array{T}(undef, 1)\n recv_mesg = Array{T}(undef, 1)\n\n if rank == root # I am root\n\n idx_recv = 0\n idx_sent = 1\n\n new_data = Array{T}(undef, N)\n # Array of workers requests\n sreqs_workers = Array{MPI.Request}(undef,nworkers)\n # -1 = start, 0 = channel not available, 1 = channel available\n status_workers = ones(nworkers).*-1\n\n # Send message to workers\n for dst in 1:nworkers\n if idx_sent > N\n break\n end\n send_mesg[1] = data[idx_sent]\n sreq = MPI.Isend(send_mesg, comm; dest=dst, tag=dst+32)\n idx_sent += 1\n sreqs_workers[dst] = sreq\n status_workers[dst] = 0\n print(\"Root: Sent number $(send_mesg[1]) to Worker $dst\\n\")\n end\n\n # Send and receive messages until all elements are added\n while idx_recv != N\n # Check to see if there is an available message to receive\n for dst in 1:nworkers\n if status_workers[dst] == 0\n flag = MPI.Test(sreqs_workers[dst])\n if flag\n status_workers[dst] = 1\n end\n end\n end\n for dst in 1:nworkers\n if status_workers[dst] == 1\n ismessage = MPI.Iprobe(comm; source=dst, tag=dst+32)\n if ismessage\n # Receives message\n MPI.Recv!(recv_mesg, comm; source=dst, tag=dst+32)\n idx_recv += 1\n new_data[idx_recv] = recv_mesg[1]\n print(\"Root: Received number $(recv_mesg[1]) from Worker $dst\\n\")\n if idx_sent <= N\n send_mesg[1] = data[idx_sent]\n # Sends new message\n sreq = MPI.Isend(send_mesg, comm; dest=dst, tag=dst+32)\n idx_sent += 1\n sreqs_workers[dst] = sreq\n status_workers[dst] = 1\n print(\"Root: Sent number $(send_mesg[1]) to Worker $dst\\n\")\n end\n end\n end\n end\n end\n\n for dst in 1:nworkers\n # Termination message to worker\n send_mesg[1] = -1\n sreq = MPI.Isend(send_mesg, comm; dest=dst, tag=dst+32)\n sreqs_workers[dst] = sreq\n status_workers[dst] = 0\n print(\"Root: Finish Worker $dst\\n\")\n end\n\n MPI.Waitall(sreqs_workers)\n print(\"Root: New data = $new_data\\n\")\n else # If rank == worker\n # -1 = start, 0 = channel not available, 1 = channel available\n status_worker = -1\n while true\n sreqs_workers = Array{MPI.Request}(undef,1)\n ismessage = MPI.Iprobe(comm; source=root, tag=rank+32)\n\n if ismessage\n # Receives message\n MPI.Recv!(recv_mesg, comm; source=root, tag=rank+32)\n # Termination message from root\n if recv_mesg[1] == -1\n print(\"Worker $rank: Finish\\n\")\n break\n end\n print(\"Worker $rank: Received number $(recv_mesg[1]) from root\\n\")\n # Apply function (add number 100) to array\n send_mesg = f(recv_mesg)\n sreq = MPI.Isend(send_mesg, comm; dest=root, tag=rank+32)\n sreqs_workers[1] = sreq\n status_worker = 0\n end\n # Check to see if there is an available message to receive\n if status_worker == 0\n flag = MPI.Test(sreqs_workers[1])\n if flag\n status_worker = 1\n end\n end\n end\n end\n MPI.Barrier(comm)\n 
MPI.Finalize()\nend\n\nf = x -> x.+100\ndata = collect(1:10)\njob_queue(data,f)","category":"page"},{"location":"examples/05-job_schedule/","page":"Job Scheduling","title":"Job Scheduling","text":"> mpiexecjl -n 4 julia examples/05-job_schedule.jl\nRoot: Sent number 1 to Worker 1\nWorker 1: Received number 1 from root\nRoot: Sent number 2 to Worker 2\nRoot: Sent number 3 to Worker 3\nRoot: Received number 101 from Worker 1\nRoot: Sent number 4 to Worker 1\nWorker 1: Received number 4 from root\nRoot: Received number 104 from Worker 1\nRoot: Sent number 5 to Worker 1\nWorker 1: Received number 5 from root\nRoot: Received number 105 from Worker 1\nRoot: Sent number 6 to Worker 1\nWorker 1: Received number 6 from root\nRoot: Received number 106 from Worker 1\nRoot: Sent number 7 to Worker 1\nWorker 1: Received number 7 from root\nRoot: Received number 107 from Worker 1\nRoot: Sent number 8 to Worker 1\nWorker 1: Received number 8 from root\nRoot: Received number 108 from Worker 1\nRoot: Sent number 9 to Worker 1\nWorker 1: Received number 9 from root\nRoot: Received number 109 from Worker 1\nRoot: Sent number 10 to Worker 1\nWorker 1: Received number 10 from root\nRoot: Received number 110 from Worker 1\nWorker 3: Received number 3 from root\nRoot: Received number 103 from Worker 3\nWorker 2: Received number 2 from root\nRoot: Received number 102 from Worker 2\nRoot: Finish Worker 1\nWorker 1: Finish\nRoot: Finish Worker 2\nWorker 2: Finish\nRoot: Finish Worker 3\nWorker 3: Finish\nRoot: New data = [101, 104, 105, 106, 107, 108, 109, 110, 103, 102]","category":"page"},{"location":"reference/advanced/#Advanced","page":"Advanced","title":"Advanced","text":"","category":"section"},{"location":"reference/advanced/#Object-handling","page":"Advanced","title":"Object handling","text":"","category":"section"},{"location":"reference/advanced/","page":"Advanced","title":"Advanced","text":"MPI.free","category":"page"},{"location":"reference/advanced/#MPI.free","page":"Advanced","title":"MPI.free","text":"MPI.free(obj)\n\nFree the MPI object handle obj. This is typically used as the finalizer, and so need not be called directly unless otherwise noted.\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#Datatype-objects","page":"Advanced","title":"Datatype objects","text":"","category":"section"},{"location":"reference/advanced/","page":"Advanced","title":"Advanced","text":"MPI.Datatype\nMPI.to_type\nMPI.Types.extent\nMPI.Types.create_contiguous\nMPI.Types.create_vector\nMPI.Types.create_hvector\nMPI.Types.create_subarray\nMPI.Types.create_struct\nMPI.Types.create_resized\nMPI.Types.commit!\nMPI.Types.duplicate","category":"page"},{"location":"reference/advanced/#MPI.Datatype","page":"Advanced","title":"MPI.Datatype","text":"Datatype\n\nA Datatype represents the layout of the data in memory.\n\nUsage\n\nDatatype(T)\n\nEither return the predefined Datatype corresponding to T, or create a new Datatype for the Julia type T, calling Types.commit! 
so that it can be used for communication operations.\n\nNote that this can only be called on types for which isbitstype(T) is true.\n\n\n\n\n\n","category":"type"},{"location":"reference/advanced/#MPI.to_type","page":"Advanced","title":"MPI.to_type","text":"to_type(datatype::Datatype)\n\nReturn the Julia type corresponding to the MPI Datatype datatype, or nothing if it doesn't correspond directly.\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#MPI.Types.extent","page":"Advanced","title":"MPI.Types.extent","text":"lb, extent = MPI.Types.extent(dt::MPI.Datatype)\n\nGets the lowerbound lb and the extent extent in bytes.\n\nExternal links\n\nMPI_Type_get_extent man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#MPI.Types.create_contiguous","page":"Advanced","title":"MPI.Types.create_contiguous","text":"MPI.Types.create_contiguous(count::Integer, oldtype::MPI.Datatype)\n\nCreate a derived Datatype that replicates oldtype into count contiguous locations.\n\nNote that MPI.Types.commit! must be used before the datatype can be used for communication.\n\nExternal links\n\nMPI_Type_contiguous man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#MPI.Types.create_vector","page":"Advanced","title":"MPI.Types.create_vector","text":"MPI.Types.create_vector(count::Integer, blocklength::Integer, stride::Integer, oldtype::MPI.Datatype)\n\nCreate a derived Datatype that replicates oldtype into locations that consist of equally spaced blocks.\n\nNote that MPI.Types.commit! must be used before the datatype can be used for communication.\n\nExample\n\ndatatype = MPI.Types.create_vector(3, 2, 5, MPI.Datatype(Int64))\nMPI.Types.commit!(datatype)\n\nwill create a datatype with the following layout\n\n|<----->| block length\n\n+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+\n| X | X | | | | X | X | | | | X | X | | | |\n+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+\n\n|<---- stride ----->|\n\nwhere each segment represents an Int64.\n\n(image by Jonathan Dursi, https://stackoverflow.com/a/10788351/392585)\n\nExternal links\n\nMPI_Type_vector man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#MPI.Types.create_hvector","page":"Advanced","title":"MPI.Types.create_hvector","text":"MPI.Types.create_hvector(count::Integer, blocklength::Integer, stride::Integer, oldtype::MPI.Datatype)\n\nCreate a derived Datatype that replicates oldtype into locations that consist of equally spaced (bytes) blocks.\n\nNote that MPI.Types.commit! must be used before the datatype can be used for communication.\n\nExample\n\ndatatype = MPI.Types.create_hvector(3, 2, 5, MPI.Datatype(Int64))\nMPI.Types.commit!(datatype)\n\nExternal links\n\nMPI_Type_create_hvector man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#MPI.Types.create_subarray","page":"Advanced","title":"MPI.Types.create_subarray","text":"MPI.Types.create_subarray(sizes, subsizes, offset, oldtype::Datatype;\n rowmajor=false)\n\nCreates a derived Datatype describing an N-dimensional subarray of size subsizes of an N-dimensional array of size sizes and element type oldtype, with the first element offset by offset (i.e. the 0-based index of the first element).\n\nColumn-major indexing (used by Julia and Fortran) is assumed; use the keyword rowmajor=true to specify row-major layout (used by C and numpy).\n\nNote that MPI.Types.commit! 
must be used before the datatype can be used for communication.\n\nExternal links\n\nMPI_Type_create_subarray man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#MPI.Types.create_struct","page":"Advanced","title":"MPI.Types.create_struct","text":"MPI.Types.create_struct(blocklengths, displacements, types)\n\nCreates a derived Datatype describing a struct layout.\n\nNote that MPI.Types.commit! must be used before the datatype can be used for communication.\n\nExternal links\n\nMPI_Type_create_struct man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#MPI.Types.create_resized","page":"Advanced","title":"MPI.Types.create_resized","text":"MPI.Types.create_resized(oldtype::Datatype, lb::Integer, extent::Integer)\n\nCreates a new Datatype that is identical to oldtype, except that the lower bound of this new datatype is set to be lb, and its upper bound is set to be lb + extent.\n\nNote that MPI.Types.commit! must be used before the datatype can be used for communication.\n\nSee also\n\nMPI.Types.extent\n\nExternal links\n\nMPI_Type_create_resized man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#MPI.Types.commit!","page":"Advanced","title":"MPI.Types.commit!","text":"MPI.Types.commit!(newtype::Datatype)\n\nCommits the Datatype newtype so that it can be used for communication. Returns newtype.\n\nExternal links\n\nMPI_Type_commit man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#MPI.Types.duplicate","page":"Advanced","title":"MPI.Types.duplicate","text":"MPI.Types.duplicate(oldtype::Datatype)\n\nDuplicates the datatype oldtype.\n\nExternal links\n\nMPI_Type_dup man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#Operator-objects","page":"Advanced","title":"Operator objects","text":"","category":"section"},{"location":"reference/advanced/","page":"Advanced","title":"Advanced","text":"MPI.Op","category":"page"},{"location":"reference/advanced/#MPI.Op","page":"Advanced","title":"MPI.Op","text":"Op\n\nAn MPI reduction operator, for use with Reduce/Scan collective operations to wrap binary operators. MPI.jl will perform this conversion automatically.\n\nUsage\n\nOp(op, T=Any; iscommutative=false)\n\nWrap the Julia reduction function op for arguments of type T. 
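As a complement to the MPI.Types.create_struct and MPI.Types.create_resized docstrings above, here is a minimal sketch of describing a struct layout by hand; the Particle type is hypothetical, and for isbitstype structs MPI.Datatype(Particle) performs the equivalent steps automatically.

using MPI
MPI.Init()

struct Particle
    x::Float64
    id::Int32
end

blocklengths  = [1, 1]
displacements = [fieldoffset(Particle, 1), fieldoffset(Particle, 2)]
types         = [MPI.Datatype(Float64), MPI.Datatype(Int32)]

dt = MPI.Types.create_struct(blocklengths, displacements, types)
# resize the extent to sizeof(Particle) so that arrays of Particle are traversed correctly
dt = MPI.Types.create_resized(dt, 0, sizeof(Particle))
MPI.Types.commit!(dt)   # required before dt can be used for communication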
op is assumed to be associative, and if iscommutative is true, assumed to be commutative as well.\n\nSee also\n\nReduce!/Reduce\nAllreduce!/Allreduce\nScan!/Scan\nExscan!/Exscan\n\n\n\n\n\n","category":"type"},{"location":"reference/advanced/#Info-objects","page":"Advanced","title":"Info objects","text":"","category":"section"},{"location":"reference/advanced/","page":"Advanced","title":"Advanced","text":"MPI.Info\nMPI.infoval","category":"page"},{"location":"reference/advanced/#MPI.Info","page":"Advanced","title":"MPI.Info","text":"Info <: AbstractDict{Symbol,String}\n\nMPI.Info objects store key-value pairs, and are typically used for passing optional arguments to MPI functions.\n\nUsage\n\nThese will typically be hidden from user-facing APIs by splatting keywords, e.g.\n\nfunction f(args...; kwargs...)\n info = Info(kwargs...)\n # pass `info` object to `ccall`\nend\n\nFor manual usage, Info objects act like Julia Dict objects:\n\ninfo = Info(init=true) # keyword argument is required\ninfo[key] = value\nx = info[key]\ndelete!(info, key)\n\nIf init=false is used in the constructor (the default), a \"null\" Info object will be returned: no keys can be added to such an object.\n\n\n\n\n\n","category":"type"},{"location":"reference/advanced/#MPI.infoval","page":"Advanced","title":"MPI.infoval","text":"infoval(x)\n\nConvert Julia object x to a string representation for storing in an Info object.\n\nThe MPI specification allows passing strings, Boolean values, integers, and lists.\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#Error-handler-objects","page":"Advanced","title":"Error handler objects","text":"","category":"section"},{"location":"reference/advanced/","page":"Advanced","title":"Advanced","text":"MPI.Errhandler\nMPI.get_errorhandler\nMPI.set_errorhandler!\nMPI.set_default_error_handler_return","category":"page"},{"location":"reference/advanced/#MPI.Errhandler","page":"Advanced","title":"MPI.Errhandler","text":"MPI.Errhandler\n\nAn MPI error handler object. Currently only two are supported:\n\nERRORS_ARE_FATAL (default): program will immediately abort\nERRORS_RETURN: program will throw an MPIError.\n\n\n\n\n\n","category":"type"},{"location":"reference/advanced/#MPI.get_errorhandler","page":"Advanced","title":"MPI.get_errorhandler","text":"MPI.get_errorhandler(comm::MPI.Comm)\nMPI.get_errorhandler(win::MPI.Win)\nMPI.get_errorhandler(file::MPI.File.FileHandle)\n\nGet the current Errhandler for the relevant MPI object.\n\nSee also\n\nset_errorhandler!\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#MPI.set_errorhandler!","page":"Advanced","title":"MPI.set_errorhandler!","text":"MPI.set_errorhandler!(comm::MPI.Comm, errh::Errhandler)\nMPI.set_errorhandler!(win::MPI.Win, errh::Errhandler)\nMPI.set_errorhandler!(file::MPI.File.FileHandle, errh::Errhandler)\n\nSet the Errhandler for the relevant MPI object.\n\nSee also\n\nget_errorhandler\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#MPI.set_default_error_handler_return","page":"Advanced","title":"MPI.set_default_error_handler_return","text":"MPI.set_default_error_handler_return()\n\nSet the error handler for MPI_COMM_SELF and MPI_COMM_WORLD to MPI_ERRORS_RETURN. This will cause certain MPI errors to appear as Julia exceptions.\n\nThis function is executed automatically by MPI.Init() but may be invoked manually if MPI has been initialized externally by a direct call to MPI_Init(). 
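To illustrate the MPI.Op docstring in the Operator objects section above, a minimal sketch of a custom reduction used with Allreduce; the maxabs function is illustrative, and passing the bare Julia function instead of an Op also works, since MPI.jl performs the wrapping automatically.

using MPI
MPI.Init()
comm = MPI.COMM_WORLD

# an associative and commutative reduction: elementwise maximum of absolute values
maxabs(a, b) = max(abs(a), abs(b))
op = MPI.Op(maxabs, Float64; iscommutative=true)

x = fill(Float64(MPI.Comm_rank(comm)), 4)
y = MPI.Allreduce(x, op, comm)   # every rank receives the elementwise reduction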
It is safe to call this function multiple times.\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#Miscellaneous","page":"Advanced","title":"Miscellaneous","text":"","category":"section"},{"location":"reference/advanced/","page":"Advanced","title":"Advanced","text":"MPI.API.@const_ref","category":"page"},{"location":"reference/advanced/#MPI.API.@const_ref","page":"Advanced","title":"MPI.API.@const_ref","text":"@const_ref name T expr\n\nDefines an constant binding\n\nconst name = Ref{T}()\n\nand adds a hook to execute\n\nname[] = expr\n\nat module initialization time.\n\n\n\n\n\n","category":"macro"},{"location":"examples/06-scatterv/","page":"Scatterv and Gatherv","title":"Scatterv and Gatherv","text":"EditURL = \"https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/06-scatterv.jl\"","category":"page"},{"location":"examples/06-scatterv/#Scatterv-and-Gatherv","page":"Scatterv and Gatherv","title":"Scatterv and Gatherv","text":"","category":"section"},{"location":"examples/06-scatterv/","page":"Scatterv and Gatherv","title":"Scatterv and Gatherv","text":"# examples/06-scatterv.jl\n# This example shows how to use MPI.Scatterv! and MPI.Gatherv!\n# roughly based on the example from\n# https://stackoverflow.com/a/36082684/392585\n\nusing MPI\n\n\"\"\"\n split_count(N::Integer, n::Integer)\n\nReturn a vector of `n` integers which are approximately equally sized and sum to `N`.\n\"\"\"\nfunction split_count(N::Integer, n::Integer)\n q,r = divrem(N, n)\n return [i <= r ? q+1 : q for i = 1:n]\nend\n\n\nMPI.Init()\n\ncomm = MPI.COMM_WORLD\nrank = MPI.Comm_rank(comm)\ncomm_size = MPI.Comm_size(comm)\n\nroot = 0\n\nif rank == root\n M, N = 4, 7\n\n test = Float64[i for i = 1:M, j = 1:N]\n output = similar(test)\n \n # Julia arrays are stored in column-major order, so we need to split along the last dimension\n # dimension\n M_counts = [M for i = 1:comm_size]\n N_counts = split_count(N, comm_size)\n\n # store sizes in 2 * comm_size Array\n sizes = vcat(M_counts', N_counts')\n size_ubuf = UBuffer(sizes, 2)\n\n # store number of values to send to each rank in comm_size length Vector\n counts = vec(prod(sizes, dims=1))\n\n test_vbuf = VBuffer(test, counts) # VBuffer for scatter\n output_vbuf = VBuffer(output, counts) # VBuffer for gather\nelse\n # these variables can be set to `nothing` on non-root processes\n size_ubuf = UBuffer(nothing)\n output_vbuf = test_vbuf = VBuffer(nothing)\nend\n\nif rank == root\n println(\"Original matrix\")\n println(\"================\")\n @show test sizes counts\n println()\n println(\"Each rank\")\n println(\"================\")\nend \nMPI.Barrier(comm)\n\nlocal_size = MPI.Scatter(size_ubuf, NTuple{2,Int}, root, comm)\nlocal_test = MPI.Scatterv!(test_vbuf, zeros(Float64, local_size), root, comm)\n\nfor i = 0:comm_size-1\n if rank == i\n @show rank local_test\n end\n MPI.Barrier(comm)\nend\n\nMPI.Gatherv!(local_test, output_vbuf, root, comm)\n\nif rank == root\n println()\n println(\"Final matrix\")\n println(\"================\")\n @show output\nend ","category":"page"},{"location":"examples/06-scatterv/","page":"Scatterv and Gatherv","title":"Scatterv and Gatherv","text":"> mpiexecjl -n 4 julia examples/06-scatterv.jl\nOriginal matrix\n================\ntest = [1.0 1.0 1.0 1.0 1.0 1.0 1.0; 2.0 2.0 2.0 2.0 2.0 2.0 2.0; 3.0 3.0 3.0 3.0 3.0 3.0 3.0; 4.0 4.0 4.0 4.0 4.0 4.0 4.0]\nsizes = [4 4 4 4; 2 2 2 1]\ncounts = [8, 8, 8, 4]\n\nEach rank\n================\nrank = 0\nlocal_test = [1.0 1.0; 2.0 2.0; 3.0 3.0; 4.0 4.0]\nrank = 1\nlocal_test = 
[1.0 1.0; 2.0 2.0; 3.0 3.0; 4.0 4.0]\nrank = 2\nlocal_test = [1.0 1.0; 2.0 2.0; 3.0 3.0; 4.0 4.0]\nrank = 3\nlocal_test = [1.0; 2.0; 3.0; 4.0;;]\n\nFinal matrix\n================\noutput = [1.0 1.0 1.0 1.0 1.0 1.0 1.0; 2.0 2.0 2.0 2.0 2.0 2.0 2.0; 3.0 3.0 3.0 3.0 3.0 3.0 3.0; 4.0 4.0 4.0 4.0 4.0 4.0 4.0]","category":"page"},{"location":"reference/mpipreferences/#MPIPreferences.jl","page":"MPIPreferences.jl","title":"MPIPreferences.jl","text":"","category":"section"},{"location":"reference/mpipreferences/","page":"MPIPreferences.jl","title":"MPIPreferences.jl","text":"MPIPreferences.jl is a small package based on Preferences.jl for selecting MPI implementations. These choices are compile-time constants, and so any changes will require a Julia restart.","category":"page"},{"location":"reference/mpipreferences/#Consts","page":"MPIPreferences.jl","title":"Consts","text":"","category":"section"},{"location":"reference/mpipreferences/","page":"MPIPreferences.jl","title":"MPIPreferences.jl","text":"MPIPreferences.binary\nMPIPreferences.abi","category":"page"},{"location":"reference/mpipreferences/#MPIPreferences.binary","page":"MPIPreferences.jl","title":"MPIPreferences.binary","text":"MPIPreferences.binary :: String\n\nThe currently selected binary. The possible values are\n\n\"MPICH_jll\": use the binary provided by MPICH_jll\n\"OpenMPI_jll\": use the binary provided by OpenMPI_jll\n\"MicrosoftMPI_jll\": use binary provided by MicrosoftMPI_jll\n\"MPItrampoline_jll\": use the binary provided by MPItrampoline_jll\n\"system\": use a system-provided binary.\n\n\n\n\n\n","category":"constant"},{"location":"reference/mpipreferences/#MPIPreferences.abi","page":"MPIPreferences.jl","title":"MPIPreferences.abi","text":"MPIPreferences.abi :: String\n\nThe ABI (application binary interface) of the currently selected binary. Supported values are:\n\n\"MPICH\": MPICH-compatible ABI (https://www.mpich.org/abi/)\n\"OpenMPI\": Open MPI compatible ABI (Open MPI, IBM Spectrum MPI, Fujitsu MPI)\n\"MicrosoftMPI\": Microsoft MPI\n\"MPItrampoline\": MPItrampoline\n\"HPE MPT\": HPE MPT\n\n\n\n\n\n","category":"constant"},{"location":"reference/mpipreferences/#Changing-implementations","page":"MPIPreferences.jl","title":"Changing implementations","text":"","category":"section"},{"location":"reference/mpipreferences/","page":"MPIPreferences.jl","title":"MPIPreferences.jl","text":"MPIPreferences.use_system_binary\nMPIPreferences.use_jll_binary","category":"page"},{"location":"reference/mpipreferences/#MPIPreferences.use_system_binary","page":"MPIPreferences.jl","title":"MPIPreferences.use_system_binary","text":"use_system_binary(;\n library_names = [\"libmpi\", \"libmpi_ibm\", \"msmpi\", \"libmpich\", \"libmpi_cray\", \"libmpitrampoline\"],\n extra_paths = String[],\n mpiexec = \"mpiexec\",\n abi = nothing,\n vendor = nothing,\n export_prefs = false,\n force = true)\n\nSwitches the underlying MPI implementation to a system provided one. A restart of Julia is required for the changes to take effect.\n\nOptions:\n\nlibrary_names: a name or collection of names of the MPI library, passed to Libdl.find_library. If the library isn't in the library search path, you can specify the full path to the library.\nextra_paths: indicate extra directories where to search for the MPI library, besides the default ones of the dynamic linker.\nmpiexec: the MPI launcher executable. The default is mpiexec, but some clusters require using the scheduler launcher interface (e.g. srun on Slurm, aprun on PBS). 
It is also possible to pass a Cmd object to include specific command line options.\nabi: the ABI of the MPI library. By default this is determined automatically using identify_abi. See abi for currently supported values.\nvendor: can be either nothing or a vendor name (such as \"cray\"). If vendor has the value \"cray\", then the output from cc --cray-print-opts=all is parsed for which libraries are linked by the Cray Compiler Wrappers. Note that if mpi_gtl_* is present, then this .so will be added to the preloads. Also note that the inputs to library_names will be overwritten by the library name used by the compiler wrapper.\nexport_prefs: if true, the preferences are written to Project.toml instead of LocalPreferences.toml.\nforce: if true, the preferences are set even if they are already set.\n\n\n\n\n\n","category":"function"},{"location":"reference/mpipreferences/#MPIPreferences.use_jll_binary","page":"MPIPreferences.jl","title":"MPIPreferences.use_jll_binary","text":"use_jll_binary([binary]; export_prefs=false, force=true)\n\nSwitches the underlying MPI implementation to one provided by JLL packages. A restart of Julia is required for the changes to take effect.\n\nAvailable options are:\n\n\"MicrosoftMPI_jll\" (Only option and default on Windows)\n\"MPICH_jll\" (Default on all other platforms)\n\"OpenMPI_jll\"\n\"MPItrampoline_jll\"\n\nThe export_prefs option determines whether the preferences being set should be stored within LocalPreferences.toml or Project.toml.\n\n\n\n\n\n","category":"function"},{"location":"reference/mpipreferences/#Utils","page":"MPIPreferences.jl","title":"Utils","text":"","category":"section"},{"location":"reference/mpipreferences/","page":"MPIPreferences.jl","title":"MPIPreferences.jl","text":"MPIPreferences.check_unchanged\nMPIPreferences.identify_abi\nMPIPreferences.dlopen_preloads","category":"page"},{"location":"reference/mpipreferences/#MPIPreferences.check_unchanged","page":"MPIPreferences.jl","title":"MPIPreferences.check_unchanged","text":"MPIPreferences.check_unchanged()\n\nThrows an error if the preferences have been modified in the current Julia session, or if they are modified after this function is called.\n\nThis should be called from the __init__() function of any package which relies on the values of MPIPreferences.\n\n\n\n\n\n","category":"function"},{"location":"reference/mpipreferences/#MPIPreferences.identify_abi","page":"MPIPreferences.jl","title":"MPIPreferences.identify_abi","text":"identify_abi(libmpi)\n\nIdentify the MPI implementation from the library version string.\n\n\n\n\n\n","category":"function"},{"location":"reference/mpipreferences/#MPIPreferences.Preloads.dlopen_preloads","page":"MPIPreferences.jl","title":"MPIPreferences.Preloads.dlopen_preloads","text":"dlopen_preloads()\n\ndlopen's all preloads specified in the preloads section of MPIPreferences.\n\n\n\n\n\n","category":"function"},{"location":"reference/mpipreferences/#Preferences-schema","page":"MPIPreferences.jl","title":"Preferences schema","text":"","category":"section"},{"location":"reference/mpipreferences/","page":"MPIPreferences.jl","title":"MPIPreferences.jl","text":"MPIPreferences utilizes the following keys to store information in the Preferences key-value store.","category":"page"},{"location":"reference/mpipreferences/","page":"MPIPreferences.jl","title":"MPIPreferences.jl","text":"_format: the version number of the schema. Currently only \"1.0\" is supported.\nbinary: the choice of binary. 
This should be one of the strings listed in MPIPreferences.binary.","category":"page"},{"location":"reference/mpipreferences/","page":"MPIPreferences.jl","title":"MPIPreferences.jl","text":"If binary == \"system\", then the following keys are also required (otherwise they have no effect):","category":"page"},{"location":"reference/mpipreferences/","page":"MPIPreferences.jl","title":"MPIPreferences.jl","text":"libmpi: the filename or path of the MPI dynamic library.\nabi: The ABI of the MPI implementation. This should be one of the strings listed in MPIPreferences.abi.\nmpiexec: either\na string corresponding to the MPI launcher executable\nan array of strings, with the first entry being the executable and remaining entries being additional flags that should be used with the executable.","category":"page"},{"location":"configuration/#Configuration","page":"Configuration","title":"Configuration","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"By default, MPI.jl will download and link against the following MPI implementations:","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Microsoft MPI on Windows\nMPICH on all other platforms","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"This is suitable for most single-node use cases, but for larger systems, such as HPC clusters or multi-GPU machines, you will probably want to configure against a system-provided MPI implementation in order to exploit features such as fast network interfaces and CUDA-aware or ROCm-aware MPI interfaces.","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"The MPIPreferences.jl package allows the user to choose which MPI implementation to use in MPI.jl. It uses Preferences.jl to configure the MPI backend for each project separately. This provides a single source of truth that can be used for JLL packages (Julia packages providing C libraries) that link against MPI. It can be installed by","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"julia --project -e 'using Pkg; Pkg.add(\"MPIPreferences\")'","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"note: Note\nThe way MPI.jl is configured has changed with MPI.jl v0.20. See Migration from MPI.jl v0.19 or earlier for more information on how to migrate your configuration from earlier MPI.jl versions.","category":"page"},{"location":"configuration/#using_system_mpi","page":"Configuration","title":"Using a system-provided MPI backend","text":"","category":"section"},{"location":"configuration/#Requirements","page":"Configuration","title":"Requirements","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"MPI.jl requires a shared library installation of a C MPI library, supporting the MPI 3.0 standard or later. 
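As a usage sketch for MPIPreferences.use_system_binary described above (all values are assumptions for a hypothetical Slurm cluster, not defaults or recommendations):

using MPIPreferences

MPIPreferences.use_system_binary(
    library_names = ["libmpi"],   # or the full path to the MPI shared library
    mpiexec       = `srun`,       # passing a Cmd allows launcher-specific flags, e.g. `srun --mpi=pmix`
)
# restart Julia afterwards for the new preferences to take effect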
The following MPI implementations should work out-of-the-box with MPI.jl:","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Open MPI\nMPICH (v3.1 or later)\nIntel MPI\nMicrosoft MPI\nIBM Spectrum MPI\nMVAPICH\nCray MPICH\nFujitsu MPI\nHPE MPT/HMPT","category":"page"},{"location":"configuration/#configure_system_binary","page":"Configuration","title":"Configuration","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Run MPIPreferences.use_system_binary(). This will attempt to locate and to identify any available MPI implementation, and create a file called LocalPreferences.toml adjacent to the current Project.toml.","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"julia --project -e 'using MPIPreferences; MPIPreferences.use_system_binary()'","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"If the implementation is changed, you will need to call this function again. See the MPIPreferences.use_system_binary documentation for specific options.","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"note: Note\nYou can copy LocalPreferences.toml to a different project folder, but you must list MPIPreferences in the [extras] or [deps] section of the Project.toml for the settings to take effect.","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"note: Note\nDue to a bug in Julia (until v1.6.5 and v1.7.1), getting preferences from transitive dependencies is broken (Preferences.jl#24). To fix this update your version of Julia, or add MPIPreferences as a direct dependency to your project.","category":"page"},{"location":"configuration/#Notes-to-HPC-cluster-administrators","page":"Configuration","title":"Notes to HPC cluster administrators","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Preferences are merged across the Julia load path, such that it is feasible to provide a module file that appends a path to JULIA_LOAD_PATH variable that contains system-wide preferences. The steps are as follows:","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Run MPIPreferences.use_system_binary(), which will generate a file LocalPreferences.toml containing something like the following:\n[MPIPreferences]\n_format = \"1.0\"\nabi = \"OpenMPI\"\nbinary = \"system\"\nlibmpi = \"/software/mpi/lib/libmpi.so\"\nmpiexec = \"/software/mpi/bin/mpiexec\"\nCreate a file called Project.toml or JuliaProject.toml in a central location (for example /software/mpi/julia, or in the same directory as the MPI module file), and add the following contents:\n[extras]\nMPIPreferences = \"3da0fdf6-3ccc-4f1b-acd9-58baa6c99267\"\n\n[preferences.MPIPreferences]\n_format = \"1.0\"\nabi = \"OpenMPI\"\nbinary = \"system\"\nlibmpi = \"/software/mpi/lib/libmpi.so\"\nmpiexec = \"/software/mpi/bin/mpiexec\"\nupdating the contents of the [preferences.MPIPreferences] section match those of the [MPIPreferences] in LocalPreferences.toml.\nAppend the directory containing the file to the JULIA_LOAD_PATH environment variable, with a colon (:) separator.\nnote: Note\nIf this variable is not already set, it should be prefixed with a colon to ensure correct behavior of the Julia load path (e.g. 
JULIA_LOAD_PATH=\":/software/mpi/julia\")\nIf using environment modules, this can be achieved with\nappend-path -d {} JULIA_LOAD_PATH :/software/mpi/julia\nor if using an older version of environment modules\nif { ![info exists ::env(JULIA_LOAD_PATH)] } {\n append-path JULIA_LOAD_PATH \"\"\n}\nappend-path JULIA_LOAD_PATH /software/mpi/julia\nin the corresponding module file (preferably the module file for the MPI installation or for Julia).\nThe user can still provide differing MPI configurations for each Julia project that will take precedent by modifying the local Project.toml or by providing a LocalPreferences.toml file.","category":"page"},{"location":"configuration/#Notes-about-vendor-provided-MPI-backends","page":"Configuration","title":"Notes about vendor-provided MPI backends","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"MPIPreferences can load vendor-specific libraries and settings using the vendor parameter, eg MPIPreferences.use_system_binary(mpiexec=\"srun\", vendor=\"cray\") configures MPIPreferences for use on Cray systems with srun.","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"note: Note\nCurrently vendor only supports Cray systems.","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"This populates the library_names, preloads, preloads_env_switch and cclibs preferences. These are determined by parsing cc --cray-print-opts=all emitted from the Cray Compiler Wrappers. Therefore use_system_binary needs to be run on the target system, with the corresponding PrgEnv loaded.","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"The function of these settings are as follows:","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"preloads specifies a list of libraries that are to be loaded (in order) before libmpi.\npreloads_env_switch specifies the name of an environment variable that, if set to 0, can disable the preloads\ncclibs is a list of libraries also linked by the compiler wrappers. 
This is recorded mainly for debugging purposes, and the libraries listed here are not explicitly loaded by MPI.jl.","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"If these are set, the _format key will be set to \"1.1\".","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"An example of running MPIPreferences.use_system_library(vendor=\"cray\") in PrgEnv-gnu is:","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"[MPIPreferences]\n_format = \"1.1\"\nabi = \"MPICH\"\nbinary = \"system\"\ncclibs = [\"cupti\", \"cudart\", \"cuda\", \"sci_gnu_82_mpi\", \"sci_gnu_82\", \"dl\", \"dsmml\", \"xpmem\"]\nlibmpi = \"libmpi_gnu_91.so\"\nmpiexec = \"mpiexec\"\npreloads = [\"libmpi_gtl_cuda.so\"]\npreloads_env_switch = \"MPICH_GPU_SUPPORT_ENABLED\"","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"This is an example of CrayMPICH requiring libmpi_gtl_cuda.so to be preloaded, unless MPICH_GPU_SUPPORT_ENABLED=0 (the latter allowing MPI-enabled code to run on a non-GPU enabled node without needing a separate LocalPreferences.toml).","category":"page"},{"location":"configuration/#configure_jll_binary","page":"Configuration","title":"Using an alternative JLL-provided MPI library","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"The following MPI implementations are provided as JLL packages and automatically obtained when installing MPI.jl:","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"MicrosoftMPI_jll: Microsoft MPI Default for Windows\nMPICH_jll: MPICH. 
Default for all other systems\nOpenMPI_jll: Open MPI\nMPItrampoline_jll: MPItrampoline: an MPI forwarding layer.","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Call MPIPreferences.use_jll_binary, for example","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"julia --project -e 'using MPIPreferences; MPIPreferences.use_jll_binary(\"MPItrampoline_jll\")'","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"If you omit the JLL binary name, the default is selected for the respective operating system.","category":"page"},{"location":"configuration/#Configuration-of-the-MPI.jl-testsuite","page":"Configuration","title":"Configuration of the MPI.jl testsuite","text":"","category":"section"},{"location":"configuration/#Testing-against-a-different-MPI-implementation","page":"Configuration","title":"Testing against a different MPI implementation","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"The LocalPreferences.toml must be located within the test folder, you can either create it in place or copy it into place.","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"~/MPI> julia --project=test\njulia> using MPIPreferences\njulia> MPIPreferences.use_system_binary()\n~/MPI> rm test/Manifest.toml\n~/MPI> julia --project\n(MPI) pkg> test","category":"page"},{"location":"configuration/#Testing-GPU-aware-buffers","page":"Configuration","title":"Testing GPU-aware buffers","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"The test suite can target CUDA-aware interface with CUDA.CuArray and the ROCm-aware interface with AMDGPU.ROCArray upon selecting the corresponding test_args kwarg when calling Pkg.test.","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Run Pkg.test with --backend=CUDA to test CUDA-aware MPI buffers","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"import Pkg; Pkg.test(\"MPI\"; test_args=[\"--backend=CUDA\"])","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"and with --backend=AMDGPU to test ROCm-aware MPI buffers","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"import Pkg; Pkg.test(\"MPI\"; test_args=[\"--backend=AMDGPU\"])","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"note: Note\nThe JULIA_MPI_TEST_ARRAYTYPE environment variable has no effect anymore.","category":"page"},{"location":"configuration/#Environment-variables","page":"Configuration","title":"Environment variables","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"The test suite can also be modified by the following variables:","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"JULIA_MPI_TEST_NPROCS: How many ranks to use within the tests\nJULIA_MPI_TEST_BINARY: Check that the specified MPI binary is used for the tests\nJULIA_MPI_TEST_ABI: Check that the specified MPI ABI is used for the 
tests","category":"page"},{"location":"configuration/#Migration-from-MPI.jl-v0.19-or-earlier","page":"Configuration","title":"Migration from MPI.jl v0.19 or earlier","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"For MPI.jl v0.20, environment variables were used to configure which MPI library to use. These have been removed and no longer have any effect. The following subsections explain how to the same effects can be achieved with v0.20 or later.","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"note: Note\nPlease refer to Notes to HPC cluster administrators if you want to migrate your MPI.jl preferences on a cluster with a centrally managed MPI.jl configuration.","category":"page"},{"location":"configuration/#JULIA_MPI_BINARY","page":"Configuration","title":"JULIA_MPI_BINARY","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Use MPIPreferences.use_system_binary to use a system-provided MPI binary as described here. To switch back or select a different JLL-provided MPI binary, use MPIPreferences.use_jll_binary as described here.","category":"page"},{"location":"configuration/#JULIA_MPI_PATH","page":"Configuration","title":"JULIA_MPI_PATH","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Removed without replacement.","category":"page"},{"location":"configuration/#JULIA_MPI_LIBRARY","page":"Configuration","title":"JULIA_MPI_LIBRARY","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Use MPIPreferences.use_system_binary with keyword argument library_names to specify possible, non-standard library names. Alternatively, you can also specify the full path to the library.","category":"page"},{"location":"configuration/#JULIA_MPI_ABI","page":"Configuration","title":"JULIA_MPI_ABI","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Use MPIPreferences.use_system_binary with keyword argument abi to specify which ABI to use. See MPIPreferences.abi for possible values.","category":"page"},{"location":"configuration/#JULIA_MPIEXEC","page":"Configuration","title":"JULIA_MPIEXEC","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Use MPIPreferences.use_system_binary with keyword argument mpiexec to specify the MPI launcher executable.","category":"page"},{"location":"configuration/#JULIA_MPIEXEC_ARGS","page":"Configuration","title":"JULIA_MPIEXEC_ARGS","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Use MPIPreferences.use_system_binary with keyword argument mpiexec, and pass a Cmd object to set the MPI launcher executable and to include specific command line options.","category":"page"},{"location":"configuration/#JULIA_MPI_INCLUDE_PATH","page":"Configuration","title":"JULIA_MPI_INCLUDE_PATH","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Removed without replacement. Automatic generation of a constants file for unknown MPI ABIs is not supported anymore. 
See also #574.","category":"page"},{"location":"configuration/#JULIA_MPI_CFLAGS","page":"Configuration","title":"JULIA_MPI_CFLAGS","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Removed without replacement. Automatic generation of a constants file for unknown MPI ABIs is not supported anymore. See also #574.","category":"page"},{"location":"configuration/#JULIA_MPICC","page":"Configuration","title":"JULIA_MPICC","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Removed without replacement. Automatic generation of a constants file for unknown MPI ABIs is not supported anymore. See also #574.","category":"page"},{"location":"refindex/#Index","page":"Index","title":"Index","text":"","category":"section"},{"location":"refindex/","page":"Index","title":"Index","text":"","category":"page"},{"location":"examples/02-broadcast/","page":"Broadcast","title":"Broadcast","text":"EditURL = \"https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/02-broadcast.jl\"","category":"page"},{"location":"examples/02-broadcast/#Broadcast","page":"Broadcast","title":"Broadcast","text":"","category":"section"},{"location":"examples/02-broadcast/","page":"Broadcast","title":"Broadcast","text":"# examples/02-broadcast.jl\nimport MPI\nMPI.Init()\n\ncomm = MPI.COMM_WORLD\nN = 5\nroot = 0\n\nif MPI.Comm_rank(comm) == root\n print(\" Running on $(MPI.Comm_size(comm)) processes\\n\")\nend\nMPI.Barrier(comm)\n\nif MPI.Comm_rank(comm) == root\n A = [i*(1.0 + im*2.0) for i = 1:N]\nelse\n A = Array{ComplexF64}(undef, N)\nend\n\nMPI.Bcast!(A, root, comm)\n\nprint(\"rank = $(MPI.Comm_rank(comm)), A = $A\\n\")\n\nif MPI.Comm_rank(comm) == root\n B = Dict(\"foo\" => \"bar\")\nelse\n B = nothing\nend\n\nB = MPI.bcast(B, root, comm)\nprint(\"rank = $(MPI.Comm_rank(comm)), B = $B\\n\")\n\nif MPI.Comm_rank(comm) == root\n f = x -> x^2 + 2x - 1\nelse\n f = nothing\nend\n\nf = MPI.bcast(f, root, comm)\nprint(\"rank = $(MPI.Comm_rank(comm)), f(3) = $(f(3))\\n\")","category":"page"},{"location":"examples/02-broadcast/","page":"Broadcast","title":"Broadcast","text":"> mpiexecjl -n 4 julia examples/02-broadcast.jl\n Running on 4 processes\nrank = 0, A = ComplexF64[1.0 + 2.0im, 2.0 + 4.0im, 3.0 + 6.0im, 4.0 + 8.0im, 5.0 + 10.0im]\nrank = 1, A = ComplexF64[1.0 + 2.0im, 2.0 + 4.0im, 3.0 + 6.0im, 4.0 + 8.0im, 5.0 + 10.0im]\nrank = 2, A = ComplexF64[1.0 + 2.0im, 2.0 + 4.0im, 3.0 + 6.0im, 4.0 + 8.0im, 5.0 + 10.0im]\nrank = 3, A = ComplexF64[1.0 + 2.0im, 2.0 + 4.0im, 3.0 + 6.0im, 4.0 + 8.0im, 5.0 + 10.0im]\nrank = 0, B = Dict(\"foo\" => \"bar\")\nrank = 1, B = Dict(\"foo\" => \"bar\")\nrank = 2, B = Dict(\"foo\" => \"bar\")\nrank = 3, B = Dict(\"foo\" => \"bar\")\nrank = 0, f(3) = 14\nrank = 3, f(3) = 14\nrank = 1, f(3) = 14\nrank = 2, f(3) = 14","category":"page"},{"location":"examples/03-reduce/","page":"Reduce","title":"Reduce","text":"EditURL = \"https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/03-reduce.jl\"","category":"page"},{"location":"examples/03-reduce/#Reduce","page":"Reduce","title":"Reduce","text":"","category":"section"},{"location":"examples/03-reduce/","page":"Reduce","title":"Reduce","text":"# examples/03-reduce.jl\n# This example shows how to use custom datatypes and reduction operators\n# It computes the variance in parallel in a numerically stable way\n\nusing MPI, Statistics\n\nMPI.Init()\nconst comm = MPI.COMM_WORLD\nconst root = 0\n\n# Define a custom struct\n# 
This contains the summary statistics (mean, variance, length) of a vector\nstruct SummaryStat\n mean::Float64\n var::Float64\n n::Float64\nend\nfunction SummaryStat(X::AbstractArray)\n m = mean(X)\n v = varm(X,m, corrected=false)\n n = length(X)\n SummaryStat(m,v,n)\nend\n\n# Define a custom reduction operator\n# this computes the pooled mean, pooled variance and total length\nfunction pool(S1::SummaryStat, S2::SummaryStat)\n n = S1.n + S2.n\n m = (S1.mean*S1.n + S2.mean*S2.n) / n\n v = (S1.n * (S1.var + S1.mean * (S1.mean-m)) +\n S2.n * (S2.var + S2.mean * (S2.mean-m)))/n\n SummaryStat(m,v,n)\nend\n\nX = randn(10,3) .* [1,3,7]'\n\n# Perform a scalar reduction\nsumm = MPI.Reduce(SummaryStat(X), pool, root, comm)\n\nif MPI.Comm_rank(comm) == root\n @show summ.var\nend\n\n# Perform a vector reduction:\n# the reduction operator is applied elementwise\ncol_summ = MPI.Reduce(mapslices(SummaryStat,X,dims=1), pool, root, comm)\n\nif MPI.Comm_rank(comm) == root\n col_var = map(summ -> summ.var, col_summ)\n @show col_var\nend","category":"page"},{"location":"examples/03-reduce/","page":"Reduce","title":"Reduce","text":"> mpiexecjl -n 4 julia examples/03-reduce.jl\nsumm.var = 22.73679457945\ncol_var = [0.8792365504728255 12.210218818926581 54.41456682774361]","category":"page"},{"location":"usage/#Usage","page":"Usage","title":"Usage","text":"","category":"section"},{"location":"usage/","page":"Usage","title":"Usage","text":"MPI is based on a single program, multiple data (SPMD) model, where multiple processes are launched running independent programs, which then communicate as necessary via messages.","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"As the main entry point for users, MPI.jl provides a high-level interface which loosely follows the MPI C API and is described in details in the following sections. The syntax should look familiar if you know MPI already, but some arguments may not be needed (e.g. the type or the number of elements of arrays, which are inferred automatically), others may be placed slightly differently, and others may be optional keyword arguments (e.g. for the index of the root process, or the source and destination of point-to-point communication functions).","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"In addition to the high-level interface, MPI.jl provides a low-level API which closely matches the MPI C API and from which it has been automatically generated. 
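To make the description of the high-level interface concrete, here is a minimal sketch of keyword-style point-to-point calls (a two-rank exchange; the buffer size and tag are arbitrary choices for illustration):

using MPI
MPI.Init()
comm = MPI.COMM_WORLD

if MPI.Comm_rank(comm) == 0
    MPI.Send(rand(4), comm; dest=1, tag=0)   # element type and count are inferred from the array
elseif MPI.Comm_rank(comm) == 1
    buf = Vector{Float64}(undef, 4)
    MPI.Recv!(buf, comm; source=0, tag=0)    # source and tag are optional keyword arguments
end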
This is not intended for general usage, but it can be employed if a high-level wrapper is not yet available.","category":"page"},{"location":"usage/#Basic-example","page":"Usage","title":"Basic example","text":"","category":"section"},{"location":"usage/","page":"Usage","title":"Usage","text":"A script should include using MPI and MPI.Init() statements before calling any MPI operations, for example","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"# examples/01-hello.jl\nusing MPI\nMPI.Init()\n\ncomm = MPI.COMM_WORLD\nprintln(\"Hello world, I am $(MPI.Comm_rank(comm)) of $(MPI.Comm_size(comm))\")\nMPI.Barrier(comm)","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"Calling MPI.Finalize() at the end of the program is optional, as it will be called automatically when Julia exits.","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"The program can then be launched via an MPI launch command (typically mpiexec, mpirun or srun), e.g.","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"$ mpiexec -n 3 julia --project examples/01-hello.jl\nHello world, I am rank 0 of 3\nHello world, I am rank 2 of 3\nHello world, I am rank 1 of 3","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"The mpiexec function is provided for launching MPI programs from Julia itself.","category":"page"},{"location":"usage/#Julia-wrapper-for-mpiexec","page":"Usage","title":"Julia wrapper for mpiexec","text":"","category":"section"},{"location":"usage/","page":"Usage","title":"Usage","text":"Since you can configure MPI.jl to use one of several MPI implementations, you may have different Julia projects using different implementation. Thus, it may be cumbersome to find out which mpiexec executable is associated to a specific project. To make this easy, on Unix-based systems MPI.jl comes with a thin project-aware wrapper around mpiexec, called mpiexecjl.","category":"page"},{"location":"usage/#Installation","page":"Usage","title":"Installation","text":"","category":"section"},{"location":"usage/","page":"Usage","title":"Usage","text":"You can install mpiexecjl with MPI.install_mpiexecjl(). The default destination directory is joinpath(DEPOT_PATH[1], \"bin\"), which usually translates to ~/.julia/bin, but check the value on your system. You can also tell MPI.install_mpiexecjl to install to a different directory.","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"$ julia\njulia> using MPI\njulia> MPI.install_mpiexecjl()","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"To quickly call this wrapper we recommend you to add the destination directory to your PATH environment variable.","category":"page"},{"location":"usage/#Usage-2","page":"Usage","title":"Usage","text":"","category":"section"},{"location":"usage/","page":"Usage","title":"Usage","text":"mpiexecjl has the same syntax as the mpiexec binary that will be called, but it takes in addition a --project option to call the specific binary associated to the MPI.jl version in the given project. 
If no --project flag is used, the MPI.jl in the global Julia environment will be used instead.","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"After installing mpiexecjl and adding its directory to PATH, you can run it with:","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"$ mpiexecjl --project=/path/to/project -n 20 julia script.jl","category":"page"},{"location":"usage/#CUDA-aware-MPI-support","page":"Usage","title":"CUDA-aware MPI support","text":"","category":"section"},{"location":"usage/","page":"Usage","title":"Usage","text":"If your MPI implementation has been compiled with CUDA support, then CUDA.CuArrays (from the CUDA.jl package) can be passed directly as send and receive buffers for point-to-point and collective operations (they may also work with one-sided operations, but these are not often supported).","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"Successfully running the alltoall_test_cuda.jl should confirm your MPI implementation to have the CUDA support enabled. Moreover, successfully running the alltoall_test_cuda_multigpu.jl should confirm your CUDA-aware MPI implementation to use multiple Nvidia GPUs (one GPU per rank).","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"If using OpenMPI, the status of CUDA support can be checked via the MPI.has_cuda() function.","category":"page"},{"location":"usage/#ROCm-aware-MPI-support","page":"Usage","title":"ROCm-aware MPI support","text":"","category":"section"},{"location":"usage/","page":"Usage","title":"Usage","text":"If your MPI implementation has been compiled with ROCm support (AMDGPU), then AMDGPU.ROCArrays (from the AMDGPU.jl package) can be passed directly as send and receive buffers for point-to-point and collective operations (they may also work with one-sided operations, but these are not often supported).","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"Successfully running the alltoall_test_rocm.jl should confirm your MPI implementation to have the ROCm support (AMDGPU) enabled. 
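As a minimal sketch of the GPU-aware usage described above, assuming a CUDA-aware MPI build and the CUDA.jl package (AMDGPU.ROCArray buffers are used analogously with a ROCm-aware build):

using MPI, CUDA
MPI.Init()
comm = MPI.COMM_WORLD

# device arrays can be passed directly as communication buffers
buf = CUDA.ones(Float64, 4)
MPI.Allreduce!(buf, +, comm)   # in-place elementwise sum across ranks; data stays on the GPU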
Moreover, successfully running the alltoall_test_rocm_multigpu.jl should confirm your ROCm-aware MPI implementation to use multiple AMD GPUs (one GPU per rank).","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"If using OpenMPI, the status of ROCm support can be checked via the MPI.has_rocm() function.","category":"page"},{"location":"usage/#Writing-MPI-tests","page":"Usage","title":"Writing MPI tests","text":"","category":"section"},{"location":"usage/","page":"Usage","title":"Usage","text":"It is recommended to use the mpiexec() wrapper when writing your package tests in runtests.jl:","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"# test/runtests.jl\nusing MPI\nusing Test\n\n@testset \"hello\" begin\n n = 2 # number of processes\n run(`$(mpiexec()) -n $n $(Base.julia_cmd()) [...]/01-hello.jl`)\n # alternatively:\n # p = run(ignorestatus(`$(mpiexec()) ...`))\n # @test success(p)\nend","category":"page"},{"location":"reference/library/#Library-information","page":"Library information","title":"Library information","text":"","category":"section"},{"location":"reference/library/#Constants","page":"Library information","title":"Constants","text":"","category":"section"},{"location":"reference/library/","page":"Library information","title":"Library information","text":"MPI.MPI_VERSION\nMPI.MPI_LIBRARY\nMPI.MPI_LIBRARY_VERSION\nMPI.MPI_LIBRARY_VERSION_STRING","category":"page"},{"location":"reference/library/#MPI.MPI_VERSION","page":"Library information","title":"MPI.MPI_VERSION","text":"MPI_VERSION :: VersionNumber\n\nThe supported version of the MPI standard.\n\nExternal links\n\nMPI_Get_version man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"constant"},{"location":"reference/library/#MPI.MPI_LIBRARY","page":"Library information","title":"MPI.MPI_LIBRARY","text":"MPI_LIBRARY :: String\n\nThe current MPI implementation: this is determined by\n\n\n\n\n\n","category":"constant"},{"location":"reference/library/#MPI.MPI_LIBRARY_VERSION","page":"Library information","title":"MPI.MPI_LIBRARY_VERSION","text":"MPI_LIBRARY_VERSION :: VersionNumber\n\nThe version of the MPI library\n\n\n\n\n\n","category":"constant"},{"location":"reference/library/#MPI.MPI_LIBRARY_VERSION_STRING","page":"Library information","title":"MPI.MPI_LIBRARY_VERSION_STRING","text":"MPI_LIBRARY_VERSION_STRING :: String\n\nThe full version string provided by the library\n\nExternal links\n\nMPI_Get_library_version man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"constant"},{"location":"reference/library/#Functions","page":"Library information","title":"Functions","text":"","category":"section"},{"location":"reference/library/","page":"Library information","title":"Library information","text":"MPI.versioninfo\nMPI.has_cuda\nMPI.has_rocm\nMPI.has_gpu\nMPI.identify_implementation","category":"page"},{"location":"reference/library/#MPI.versioninfo","page":"Library information","title":"MPI.versioninfo","text":"MPI.versioninfo(io::IO=stdout)\n\nPrint a summary of the current MPI configuration.\n\n\n\n\n\n","category":"function"},{"location":"reference/library/#MPI.has_cuda","page":"Library information","title":"MPI.has_cuda","text":"MPI.has_cuda()\n\nCheck if the MPI implementation is known to have CUDA support. Currently only Open MPI provides a mechanism to check, so it will return false with other implementations (unless overridden). 
For \"IBMSpectrumMPI\" it will return true.\n\nThis can be overridden by setting the JULIA_MPI_HAS_CUDA environment variable to true or false.\n\nnote: Note\nFor OpenMPI or OpenMPI-based implementations you first need to call Init().\n\nSee also MPI.has_rocm for ROCm support.\n\n\n\n\n\n","category":"function"},{"location":"reference/library/#MPI.has_rocm","page":"Library information","title":"MPI.has_rocm","text":"MPI.has_rocm()\n\nCheck if the MPI implementation is known to have ROCm support. Currently only Open MPI provides a mechanism to check, so it will return false with other implementations (unless overridden).\n\nThis can be overridden by setting the JULIA_MPI_HAS_ROCM environment variable to true or false.\n\nSee also MPI.has_cuda for CUDA support.\n\n\n\n\n\n","category":"function"},{"location":"reference/library/#MPI.has_gpu","page":"Library information","title":"MPI.has_gpu","text":"MPI.has_gpu()\n\nChecks if the MPI implementation is known to have GPU support. Currently this checks for the following GPUs:\n\nCUDA: via MPI.has_cuda\nROCm: via MPI.has_rocm\n\nSee also MPI.has_cuda and MPI.has_rocm for more fine-grained checks.\n\n\n\n\n\n","category":"function"},{"location":"reference/library/#MPI.identify_implementation","page":"Library information","title":"MPI.identify_implementation","text":"impl, version = identify_implementation()\n\nAttempt to identify the MPI implementation based on MPI_LIBRARY_VERSION_STRING. Returns a triple of values:\n\nimpl: a String with the name of the MPI implementation, or \"unknown\" if it cannot be determined,\nversion: a VersionNumber of the library, or nothing if it cannot be determined.\n\nThis function is only intended for internal use. Users should use MPI_LIBRARY, MPI_LIBRARY_VERSION.\n\n\n\n\n\n","category":"function"},{"location":"examples/01-hello/","page":"Hello world","title":"Hello world","text":"EditURL = \"https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/01-hello.jl\"","category":"page"},{"location":"examples/01-hello/#Hello-world","page":"Hello world","title":"Hello world","text":"","category":"section"},{"location":"examples/01-hello/","page":"Hello world","title":"Hello world","text":"# examples/01-hello.jl\nusing MPI\nMPI.Init()\n\ncomm = MPI.COMM_WORLD\nprint(\"Hello world, I am rank $(MPI.Comm_rank(comm)) of $(MPI.Comm_size(comm))\\n\")\nMPI.Barrier(comm)","category":"page"},{"location":"examples/01-hello/","page":"Hello world","title":"Hello world","text":"> mpiexecjl -n 4 julia examples/01-hello.jl\nHello world, I am rank 0 of 4\nHello world, I am rank 1 of 4\nHello world, I am rank 2 of 4\nHello world, I am rank 3 of 4","category":"page"},{"location":"reference/misc/#Miscellanea","page":"Miscellanea","title":"Miscellanea","text":"","category":"section"},{"location":"reference/misc/#Functions","page":"Miscellanea","title":"Functions","text":"","category":"section"},{"location":"reference/misc/","page":"Miscellanea","title":"Miscellanea","text":"MPI.Get_processor_name","category":"page"},{"location":"reference/misc/#MPI.Get_processor_name","page":"Miscellanea","title":"MPI.Get_processor_name","text":"Get_processor_name()\n\nReturn the name of the processor, as a String.\n\nExternal links\n\nMPI_Get_processor_name man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/api/#Low-level-API","page":"Low-level API","title":"Low-level API","text":"","category":"section"},{"location":"reference/api/","page":"Low-level API","title":"Low-level API","text":"The MPI.API submodule provides 
a low-level interface which closely matches the MPI C API. While these functions are not intended for general usage, they are useful for calling MPI routines not yet available in the MPI.jl main interface, and they form the basis for the high-level wrappers. The methods suffixed with _c take MPI_Count-typed count arguments (instead of int for the standard ones). The size of MPI_Count depends on the implementation, but it usually allows 64-bit integer offsets.","category":"page"},{"location":"reference/api/","page":"Low-level API","title":"Low-level API","text":"Modules = [MPI.API]\nOrder = [:function]","category":"page"},{"location":"reference/api/#MPI.API.MPI_Abort-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Abort","text":"MPI_Abort(comm, errorcode)\n\nMPI_Abort man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Accumulate-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Accumulate","text":"MPI_Accumulate(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, op, win)\n\nMPI_Accumulate man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Accumulate_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Accumulate_c","text":"MPI_Accumulate_c(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, op, win)\n\nMPI_Accumulate_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Add_error_class-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Add_error_class","text":"MPI_Add_error_class(errorclass)\n\nMPI_Add_error_class man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Add_error_code-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Add_error_code","text":"MPI_Add_error_code(errorclass, errorcode)\n\nMPI_Add_error_code man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Add_error_string-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Add_error_string","text":"MPI_Add_error_string(errorcode, string)\n\nMPI_Add_error_string man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Address-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Address","text":"MPI_Address(location, address)\n\nMPI_Address man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Aint_add-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Aint_add","text":"MPI_Aint_add(base, disp)\n\nMPI_Aint_add man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Aint_diff-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Aint_diff","text":"MPI_Aint_diff(addr1, addr2)\n\nMPI_Aint_diff man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Allgather-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Allgather","text":"MPI_Allgather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)\n\nMPI_Allgather man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Allgather_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Allgather_c","text":"MPI_Allgather_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)\n\nMPI_Allgather_c man page: 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Allgather_init-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Allgather_init","text":"MPI_Allgather_init(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, info, request)\n\nMPI_Allgather_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Allgather_init_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Allgather_init_c","text":"MPI_Allgather_init_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, info, request)\n\nMPI_Allgather_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Allgatherv-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Allgatherv","text":"MPI_Allgatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm)\n\nMPI_Allgatherv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Allgatherv_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Allgatherv_c","text":"MPI_Allgatherv_c(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm)\n\nMPI_Allgatherv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Allgatherv_init-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Allgatherv_init","text":"MPI_Allgatherv_init(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm, info, request)\n\nMPI_Allgatherv_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Allgatherv_init_c-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Allgatherv_init_c","text":"MPI_Allgatherv_init_c(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm, info, request)\n\nMPI_Allgatherv_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alloc_mem-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Alloc_mem","text":"MPI_Alloc_mem(size, info, baseptr)\n\nMPI_Alloc_mem man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Allreduce-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Allreduce","text":"MPI_Allreduce(sendbuf, recvbuf, count, datatype, op, comm)\n\nMPI_Allreduce man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Allreduce_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Allreduce_c","text":"MPI_Allreduce_c(sendbuf, recvbuf, count, datatype, op, comm)\n\nMPI_Allreduce_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Allreduce_init-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Allreduce_init","text":"MPI_Allreduce_init(sendbuf, recvbuf, count, datatype, op, comm, info, request)\n\nMPI_Allreduce_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Allreduce_init_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Allreduce_init_c","text":"MPI_Allreduce_init_c(sendbuf, recvbuf, count, datatype, op, comm, info, request)\n\nMPI_Allreduce_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alltoall-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Alltoall","text":"MPI_Alltoall(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)\n\nMPI_Alltoall man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alltoall_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Alltoall_c","text":"MPI_Alltoall_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)\n\nMPI_Alltoall_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alltoall_init-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Alltoall_init","text":"MPI_Alltoall_init(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, info, request)\n\nMPI_Alltoall_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alltoall_init_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Alltoall_init_c","text":"MPI_Alltoall_init_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, info, request)\n\nMPI_Alltoall_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alltoallv-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Alltoallv","text":"MPI_Alltoallv(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm)\n\nMPI_Alltoallv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alltoallv_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Alltoallv_c","text":"MPI_Alltoallv_c(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm)\n\nMPI_Alltoallv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alltoallv_init-NTuple{11, Any}","page":"Low-level API","title":"MPI.API.MPI_Alltoallv_init","text":"MPI_Alltoallv_init(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, info, request)\n\nMPI_Alltoallv_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alltoallv_init_c-NTuple{11, Any}","page":"Low-level API","title":"MPI.API.MPI_Alltoallv_init_c","text":"MPI_Alltoallv_init_c(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, info, request)\n\nMPI_Alltoallv_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alltoallw-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Alltoallw","text":"MPI_Alltoallw(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm)\n\nMPI_Alltoallw man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alltoallw_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Alltoallw_c","text":"MPI_Alltoallw_c(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm)\n\nMPI_Alltoallw_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alltoallw_init-NTuple{11, Any}","page":"Low-level API","title":"MPI.API.MPI_Alltoallw_init","text":"MPI_Alltoallw_init(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, info, request)\n\nMPI_Alltoallw_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alltoallw_init_c-NTuple{11, Any}","page":"Low-level API","title":"MPI.API.MPI_Alltoallw_init_c","text":"MPI_Alltoallw_init_c(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, info, request)\n\nMPI_Alltoallw_init_c man page: 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Attr_delete-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Attr_delete","text":"MPI_Attr_delete(comm, keyval)\n\nMPI_Attr_delete man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Attr_get-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Attr_get","text":"MPI_Attr_get(comm, keyval, attribute_val, flag)\n\nMPI_Attr_get man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Attr_put-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Attr_put","text":"MPI_Attr_put(comm, keyval, attribute_val)\n\nMPI_Attr_put man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Barrier-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Barrier","text":"MPI_Barrier(comm)\n\nMPI_Barrier man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Barrier_init-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Barrier_init","text":"MPI_Barrier_init(comm, info, request)\n\nMPI_Barrier_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Bcast-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Bcast","text":"MPI_Bcast(buffer, count, datatype, root, comm)\n\nMPI_Bcast man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Bcast_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Bcast_c","text":"MPI_Bcast_c(buffer, count, datatype, root, comm)\n\nMPI_Bcast_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Bcast_init-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Bcast_init","text":"MPI_Bcast_init(buffer, count, datatype, root, comm, info, request)\n\nMPI_Bcast_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Bcast_init_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Bcast_init_c","text":"MPI_Bcast_init_c(buffer, count, datatype, root, comm, info, request)\n\nMPI_Bcast_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Bsend-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Bsend","text":"MPI_Bsend(buf, count, datatype, dest, tag, comm)\n\nMPI_Bsend man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Bsend_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Bsend_c","text":"MPI_Bsend_c(buf, count, datatype, dest, tag, comm)\n\nMPI_Bsend_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Bsend_init-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Bsend_init","text":"MPI_Bsend_init(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Bsend_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Bsend_init_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Bsend_init_c","text":"MPI_Bsend_init_c(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Bsend_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Buffer_attach-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Buffer_attach","text":"MPI_Buffer_attach(buffer, size)\n\nMPI_Buffer_attach man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Buffer_attach_c-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Buffer_attach_c","text":"MPI_Buffer_attach_c(buffer, size)\n\nMPI_Buffer_attach_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Buffer_detach-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Buffer_detach","text":"MPI_Buffer_detach(buffer_addr, size)\n\nMPI_Buffer_detach man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Buffer_detach_c-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Buffer_detach_c","text":"MPI_Buffer_detach_c(buffer_addr, size)\n\nMPI_Buffer_detach_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Cancel-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Cancel","text":"MPI_Cancel(request)\n\nMPI_Cancel man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Cart_coords-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Cart_coords","text":"MPI_Cart_coords(comm, rank, maxdims, coords)\n\nMPI_Cart_coords man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Cart_create-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Cart_create","text":"MPI_Cart_create(comm_old, ndims, dims, periods, reorder, comm_cart)\n\nMPI_Cart_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Cart_get-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Cart_get","text":"MPI_Cart_get(comm, maxdims, dims, periods, coords)\n\nMPI_Cart_get man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Cart_map-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Cart_map","text":"MPI_Cart_map(comm, ndims, dims, periods, newrank)\n\nMPI_Cart_map man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Cart_rank-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Cart_rank","text":"MPI_Cart_rank(comm, coords, rank)\n\nMPI_Cart_rank man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Cart_shift-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Cart_shift","text":"MPI_Cart_shift(comm, direction, disp, rank_source, rank_dest)\n\nMPI_Cart_shift man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Cart_sub-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Cart_sub","text":"MPI_Cart_sub(comm, remain_dims, newcomm)\n\nMPI_Cart_sub man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Cartdim_get-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Cartdim_get","text":"MPI_Cartdim_get(comm, ndims)\n\nMPI_Cartdim_get man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Close_port-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Close_port","text":"MPI_Close_port(port_name)\n\nMPI_Close_port man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_accept-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_accept","text":"MPI_Comm_accept(port_name, info, root, comm, newcomm)\n\nMPI_Comm_accept man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_call_errhandler-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_call_errhandler","text":"MPI_Comm_call_errhandler(comm, errorcode)\n\nMPI_Comm_call_errhandler man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_compare-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_compare","text":"MPI_Comm_compare(comm1, comm2, result)\n\nMPI_Comm_compare man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_connect-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_connect","text":"MPI_Comm_connect(port_name, info, root, comm, newcomm)\n\nMPI_Comm_connect man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_create-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_create","text":"MPI_Comm_create(comm, group, newcomm)\n\nMPI_Comm_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_create_errhandler-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_create_errhandler","text":"MPI_Comm_create_errhandler(comm_errhandler_fn, errhandler)\n\nMPI_Comm_create_errhandler man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_create_from_group-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_create_from_group","text":"MPI_Comm_create_from_group(group, stringtag, info, errhandler, newcomm)\n\nMPI_Comm_create_from_group man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_create_group-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_create_group","text":"MPI_Comm_create_group(comm, group, tag, newcomm)\n\nMPI_Comm_create_group man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_create_keyval-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_create_keyval","text":"MPI_Comm_create_keyval(comm_copy_attr_fn, comm_delete_attr_fn, comm_keyval, extra_state)\n\nMPI_Comm_create_keyval man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_delete_attr-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_delete_attr","text":"MPI_Comm_delete_attr(comm, comm_keyval)\n\nMPI_Comm_delete_attr man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_disconnect-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_disconnect","text":"MPI_Comm_disconnect(comm)\n\nMPI_Comm_disconnect man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_dup-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_dup","text":"MPI_Comm_dup(comm, newcomm)\n\nMPI_Comm_dup man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_dup_with_info-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_dup_with_info","text":"MPI_Comm_dup_with_info(comm, info, newcomm)\n\nMPI_Comm_dup_with_info man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_free-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_free","text":"MPI_Comm_free(comm)\n\nMPI_Comm_free man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_free_keyval-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_free_keyval","text":"MPI_Comm_free_keyval(comm_keyval)\n\nMPI_Comm_free_keyval man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_get_attr-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_get_attr","text":"MPI_Comm_get_attr(comm, comm_keyval, attribute_val, flag)\n\nMPI_Comm_get_attr man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_get_errhandler-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_get_errhandler","text":"MPI_Comm_get_errhandler(comm, errhandler)\n\nMPI_Comm_get_errhandler man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_get_info-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_get_info","text":"MPI_Comm_get_info(comm, info_used)\n\nMPI_Comm_get_info man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_get_name-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_get_name","text":"MPI_Comm_get_name(comm, comm_name, resultlen)\n\nMPI_Comm_get_name man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_get_parent-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_get_parent","text":"MPI_Comm_get_parent(parent)\n\nMPI_Comm_get_parent man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_group-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_group","text":"MPI_Comm_group(comm, group)\n\nMPI_Comm_group man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_idup-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_idup","text":"MPI_Comm_idup(comm, newcomm, request)\n\nMPI_Comm_idup man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_idup_with_info-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_idup_with_info","text":"MPI_Comm_idup_with_info(comm, info, newcomm, request)\n\nMPI_Comm_idup_with_info man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_join-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_join","text":"MPI_Comm_join(fd, intercomm)\n\nMPI_Comm_join man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_rank-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_rank","text":"MPI_Comm_rank(comm, rank)\n\nMPI_Comm_rank man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_remote_group-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_remote_group","text":"MPI_Comm_remote_group(comm, group)\n\nMPI_Comm_remote_group man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_remote_size-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_remote_size","text":"MPI_Comm_remote_size(comm, size)\n\nMPI_Comm_remote_size man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_set_attr-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_set_attr","text":"MPI_Comm_set_attr(comm, 
comm_keyval, attribute_val)\n\nMPI_Comm_set_attr man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_set_errhandler-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_set_errhandler","text":"MPI_Comm_set_errhandler(comm, errhandler)\n\nMPI_Comm_set_errhandler man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_set_info-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_set_info","text":"MPI_Comm_set_info(comm, info)\n\nMPI_Comm_set_info man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_set_name-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_set_name","text":"MPI_Comm_set_name(comm, comm_name)\n\nMPI_Comm_set_name man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_size-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_size","text":"MPI_Comm_size(comm, size)\n\nMPI_Comm_size man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_spawn-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_spawn","text":"MPI_Comm_spawn(command, argv, maxprocs, info, root, comm, intercomm, array_of_errcodes)\n\nMPI_Comm_spawn man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_spawn_multiple-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_spawn_multiple","text":"MPI_Comm_spawn_multiple(count, array_of_commands, array_of_argv, array_of_maxprocs, array_of_info, root, comm, intercomm, array_of_errcodes)\n\nMPI_Comm_spawn_multiple man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_split-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_split","text":"MPI_Comm_split(comm, color, key, newcomm)\n\nMPI_Comm_split man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_split_type-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_split_type","text":"MPI_Comm_split_type(comm, split_type, key, info, newcomm)\n\nMPI_Comm_split_type man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_test_inter-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_test_inter","text":"MPI_Comm_test_inter(comm, flag)\n\nMPI_Comm_test_inter man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Compare_and_swap-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Compare_and_swap","text":"MPI_Compare_and_swap(origin_addr, compare_addr, result_addr, datatype, target_rank, target_disp, win)\n\nMPI_Compare_and_swap man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Dims_create-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Dims_create","text":"MPI_Dims_create(nnodes, ndims, dims)\n\nMPI_Dims_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Dist_graph_create-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Dist_graph_create","text":"MPI_Dist_graph_create(comm_old, n, sources, degrees, destinations, weights, info, reorder, comm_dist_graph)\n\nMPI_Dist_graph_create man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Dist_graph_create_adjacent-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Dist_graph_create_adjacent","text":"MPI_Dist_graph_create_adjacent(comm_old, indegree, sources, sourceweights, outdegree, destinations, destweights, info, reorder, comm_dist_graph)\n\nMPI_Dist_graph_create_adjacent man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Dist_graph_neighbors-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Dist_graph_neighbors","text":"MPI_Dist_graph_neighbors(comm, maxindegree, sources, sourceweights, maxoutdegree, destinations, destweights)\n\nMPI_Dist_graph_neighbors man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Dist_graph_neighbors_count-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Dist_graph_neighbors_count","text":"MPI_Dist_graph_neighbors_count(comm, indegree, outdegree, weighted)\n\nMPI_Dist_graph_neighbors_count man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Errhandler_create-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Errhandler_create","text":"MPI_Errhandler_create(comm_errhandler_fn, errhandler)\n\nMPI_Errhandler_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Errhandler_free-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Errhandler_free","text":"MPI_Errhandler_free(errhandler)\n\nMPI_Errhandler_free man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Errhandler_get-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Errhandler_get","text":"MPI_Errhandler_get(comm, errhandler)\n\nMPI_Errhandler_get man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Errhandler_set-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Errhandler_set","text":"MPI_Errhandler_set(comm, errhandler)\n\nMPI_Errhandler_set man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Error_class-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Error_class","text":"MPI_Error_class(errorcode, errorclass)\n\nMPI_Error_class man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Error_string-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Error_string","text":"MPI_Error_string(errorcode, string, resultlen)\n\nMPI_Error_string man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Exscan-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Exscan","text":"MPI_Exscan(sendbuf, recvbuf, count, datatype, op, comm)\n\nMPI_Exscan man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Exscan_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Exscan_c","text":"MPI_Exscan_c(sendbuf, recvbuf, count, datatype, op, comm)\n\nMPI_Exscan_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Exscan_init-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Exscan_init","text":"MPI_Exscan_init(sendbuf, recvbuf, count, datatype, op, comm, info, request)\n\nMPI_Exscan_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Exscan_init_c-NTuple{8, 
Any}","page":"Low-level API","title":"MPI.API.MPI_Exscan_init_c","text":"MPI_Exscan_init_c(sendbuf, recvbuf, count, datatype, op, comm, info, request)\n\nMPI_Exscan_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Fetch_and_op-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Fetch_and_op","text":"MPI_Fetch_and_op(origin_addr, result_addr, datatype, target_rank, target_disp, op, win)\n\nMPI_Fetch_and_op man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_c2f-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_File_c2f","text":"MPI_File_c2f(file)\n\nMPI_File_c2f man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_call_errhandler-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_call_errhandler","text":"MPI_File_call_errhandler(fh, errorcode)\n\nMPI_File_call_errhandler man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_close-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_File_close","text":"MPI_File_close(fh)\n\nMPI_File_close man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_create_errhandler-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_create_errhandler","text":"MPI_File_create_errhandler(file_errhandler_fn, errhandler)\n\nMPI_File_create_errhandler man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_delete-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_delete","text":"MPI_File_delete(filename, info)\n\nMPI_File_delete man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_f2c-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_File_f2c","text":"MPI_File_f2c(file)\n\nMPI_File_f2c man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_get_amode-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_get_amode","text":"MPI_File_get_amode(fh, amode)\n\nMPI_File_get_amode man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_get_atomicity-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_get_atomicity","text":"MPI_File_get_atomicity(fh, flag)\n\nMPI_File_get_atomicity man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_get_byte_offset-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_get_byte_offset","text":"MPI_File_get_byte_offset(fh, offset, disp)\n\nMPI_File_get_byte_offset man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_get_errhandler-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_get_errhandler","text":"MPI_File_get_errhandler(file, errhandler)\n\nMPI_File_get_errhandler man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_get_group-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_get_group","text":"MPI_File_get_group(fh, group)\n\nMPI_File_get_group man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_get_info-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_get_info","text":"MPI_File_get_info(fh, info_used)\n\nMPI_File_get_info man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_get_position-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_get_position","text":"MPI_File_get_position(fh, offset)\n\nMPI_File_get_position man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_get_position_shared-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_get_position_shared","text":"MPI_File_get_position_shared(fh, offset)\n\nMPI_File_get_position_shared man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_get_size-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_get_size","text":"MPI_File_get_size(fh, size)\n\nMPI_File_get_size man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_get_type_extent-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_get_type_extent","text":"MPI_File_get_type_extent(fh, datatype, extent)\n\nMPI_File_get_type_extent man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_get_type_extent_c-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_get_type_extent_c","text":"MPI_File_get_type_extent_c(fh, datatype, extent)\n\nMPI_File_get_type_extent_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_get_view-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_get_view","text":"MPI_File_get_view(fh, disp, etype, filetype, datarep)\n\nMPI_File_get_view man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iread-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iread","text":"MPI_File_iread(fh, buf, count, datatype, request)\n\nMPI_File_iread man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iread_all-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iread_all","text":"MPI_File_iread_all(fh, buf, count, datatype, request)\n\nMPI_File_iread_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iread_all_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iread_all_c","text":"MPI_File_iread_all_c(fh, buf, count, datatype, request)\n\nMPI_File_iread_all_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iread_at-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iread_at","text":"MPI_File_iread_at(fh, offset, buf, count, datatype, request)\n\nMPI_File_iread_at man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iread_at_all-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iread_at_all","text":"MPI_File_iread_at_all(fh, offset, buf, count, datatype, request)\n\nMPI_File_iread_at_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iread_at_all_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iread_at_all_c","text":"MPI_File_iread_at_all_c(fh, offset, buf, count, datatype, request)\n\nMPI_File_iread_at_all_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iread_at_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iread_at_c","text":"MPI_File_iread_at_c(fh, offset, buf, count, 
datatype, request)\n\nMPI_File_iread_at_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iread_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iread_c","text":"MPI_File_iread_c(fh, buf, count, datatype, request)\n\nMPI_File_iread_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iread_shared-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iread_shared","text":"MPI_File_iread_shared(fh, buf, count, datatype, request)\n\nMPI_File_iread_shared man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iread_shared_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iread_shared_c","text":"MPI_File_iread_shared_c(fh, buf, count, datatype, request)\n\nMPI_File_iread_shared_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iwrite-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iwrite","text":"MPI_File_iwrite(fh, buf, count, datatype, request)\n\nMPI_File_iwrite man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iwrite_all-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iwrite_all","text":"MPI_File_iwrite_all(fh, buf, count, datatype, request)\n\nMPI_File_iwrite_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iwrite_all_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iwrite_all_c","text":"MPI_File_iwrite_all_c(fh, buf, count, datatype, request)\n\nMPI_File_iwrite_all_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iwrite_at-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iwrite_at","text":"MPI_File_iwrite_at(fh, offset, buf, count, datatype, request)\n\nMPI_File_iwrite_at man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iwrite_at_all-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iwrite_at_all","text":"MPI_File_iwrite_at_all(fh, offset, buf, count, datatype, request)\n\nMPI_File_iwrite_at_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iwrite_at_all_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iwrite_at_all_c","text":"MPI_File_iwrite_at_all_c(fh, offset, buf, count, datatype, request)\n\nMPI_File_iwrite_at_all_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iwrite_at_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iwrite_at_c","text":"MPI_File_iwrite_at_c(fh, offset, buf, count, datatype, request)\n\nMPI_File_iwrite_at_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iwrite_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iwrite_c","text":"MPI_File_iwrite_c(fh, buf, count, datatype, request)\n\nMPI_File_iwrite_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iwrite_shared-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iwrite_shared","text":"MPI_File_iwrite_shared(fh, buf, count, datatype, request)\n\nMPI_File_iwrite_shared man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iwrite_shared_c-NTuple{5, Any}","page":"Low-level 
API","title":"MPI.API.MPI_File_iwrite_shared_c","text":"MPI_File_iwrite_shared_c(fh, buf, count, datatype, request)\n\nMPI_File_iwrite_shared_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_open-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_open","text":"MPI_File_open(comm, filename, amode, info, fh)\n\nMPI_File_open man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_preallocate-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_preallocate","text":"MPI_File_preallocate(fh, size)\n\nMPI_File_preallocate man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read","text":"MPI_File_read(fh, buf, count, datatype, status)\n\nMPI_File_read man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_all-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_all","text":"MPI_File_read_all(fh, buf, count, datatype, status)\n\nMPI_File_read_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_all_begin-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_all_begin","text":"MPI_File_read_all_begin(fh, buf, count, datatype)\n\nMPI_File_read_all_begin man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_all_begin_c-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_all_begin_c","text":"MPI_File_read_all_begin_c(fh, buf, count, datatype)\n\nMPI_File_read_all_begin_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_all_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_all_c","text":"MPI_File_read_all_c(fh, buf, count, datatype, status)\n\nMPI_File_read_all_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_all_end-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_all_end","text":"MPI_File_read_all_end(fh, buf, status)\n\nMPI_File_read_all_end man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_at-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_at","text":"MPI_File_read_at(fh, offset, buf, count, datatype, status)\n\nMPI_File_read_at man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_at_all-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_at_all","text":"MPI_File_read_at_all(fh, offset, buf, count, datatype, status)\n\nMPI_File_read_at_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_at_all_begin-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_at_all_begin","text":"MPI_File_read_at_all_begin(fh, offset, buf, count, datatype)\n\nMPI_File_read_at_all_begin man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_at_all_begin_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_at_all_begin_c","text":"MPI_File_read_at_all_begin_c(fh, offset, buf, count, datatype)\n\nMPI_File_read_at_all_begin_c man page: 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_at_all_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_at_all_c","text":"MPI_File_read_at_all_c(fh, offset, buf, count, datatype, status)\n\nMPI_File_read_at_all_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_at_all_end-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_at_all_end","text":"MPI_File_read_at_all_end(fh, buf, status)\n\nMPI_File_read_at_all_end man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_at_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_at_c","text":"MPI_File_read_at_c(fh, offset, buf, count, datatype, status)\n\nMPI_File_read_at_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_c","text":"MPI_File_read_c(fh, buf, count, datatype, status)\n\nMPI_File_read_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_ordered-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_ordered","text":"MPI_File_read_ordered(fh, buf, count, datatype, status)\n\nMPI_File_read_ordered man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_ordered_begin-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_ordered_begin","text":"MPI_File_read_ordered_begin(fh, buf, count, datatype)\n\nMPI_File_read_ordered_begin man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_ordered_begin_c-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_ordered_begin_c","text":"MPI_File_read_ordered_begin_c(fh, buf, count, datatype)\n\nMPI_File_read_ordered_begin_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_ordered_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_ordered_c","text":"MPI_File_read_ordered_c(fh, buf, count, datatype, status)\n\nMPI_File_read_ordered_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_ordered_end-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_ordered_end","text":"MPI_File_read_ordered_end(fh, buf, status)\n\nMPI_File_read_ordered_end man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_shared-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_shared","text":"MPI_File_read_shared(fh, buf, count, datatype, status)\n\nMPI_File_read_shared man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_shared_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_shared_c","text":"MPI_File_read_shared_c(fh, buf, count, datatype, status)\n\nMPI_File_read_shared_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_seek-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_seek","text":"MPI_File_seek(fh, offset, whence)\n\nMPI_File_seek man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_seek_shared-Tuple{Any, Any, Any}","page":"Low-level 
API","title":"MPI.API.MPI_File_seek_shared","text":"MPI_File_seek_shared(fh, offset, whence)\n\nMPI_File_seek_shared man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_set_atomicity-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_set_atomicity","text":"MPI_File_set_atomicity(fh, flag)\n\nMPI_File_set_atomicity man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_set_errhandler-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_set_errhandler","text":"MPI_File_set_errhandler(file, errhandler)\n\nMPI_File_set_errhandler man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_set_info-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_set_info","text":"MPI_File_set_info(fh, info)\n\nMPI_File_set_info man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_set_size-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_set_size","text":"MPI_File_set_size(fh, size)\n\nMPI_File_set_size man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_set_view-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_set_view","text":"MPI_File_set_view(fh, disp, etype, filetype, datarep, info)\n\nMPI_File_set_view man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_sync-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_File_sync","text":"MPI_File_sync(fh)\n\nMPI_File_sync man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write","text":"MPI_File_write(fh, buf, count, datatype, status)\n\nMPI_File_write man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_all-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_all","text":"MPI_File_write_all(fh, buf, count, datatype, status)\n\nMPI_File_write_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_all_begin-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_all_begin","text":"MPI_File_write_all_begin(fh, buf, count, datatype)\n\nMPI_File_write_all_begin man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_all_begin_c-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_all_begin_c","text":"MPI_File_write_all_begin_c(fh, buf, count, datatype)\n\nMPI_File_write_all_begin_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_all_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_all_c","text":"MPI_File_write_all_c(fh, buf, count, datatype, status)\n\nMPI_File_write_all_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_all_end-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_all_end","text":"MPI_File_write_all_end(fh, buf, status)\n\nMPI_File_write_all_end man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_at-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_at","text":"MPI_File_write_at(fh, offset, buf, count, datatype, 
status)\n\nMPI_File_write_at man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_at_all-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_at_all","text":"MPI_File_write_at_all(fh, offset, buf, count, datatype, status)\n\nMPI_File_write_at_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_at_all_begin-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_at_all_begin","text":"MPI_File_write_at_all_begin(fh, offset, buf, count, datatype)\n\nMPI_File_write_at_all_begin man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_at_all_begin_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_at_all_begin_c","text":"MPI_File_write_at_all_begin_c(fh, offset, buf, count, datatype)\n\nMPI_File_write_at_all_begin_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_at_all_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_at_all_c","text":"MPI_File_write_at_all_c(fh, offset, buf, count, datatype, status)\n\nMPI_File_write_at_all_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_at_all_end-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_at_all_end","text":"MPI_File_write_at_all_end(fh, buf, status)\n\nMPI_File_write_at_all_end man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_at_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_at_c","text":"MPI_File_write_at_c(fh, offset, buf, count, datatype, status)\n\nMPI_File_write_at_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_c","text":"MPI_File_write_c(fh, buf, count, datatype, status)\n\nMPI_File_write_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_ordered-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_ordered","text":"MPI_File_write_ordered(fh, buf, count, datatype, status)\n\nMPI_File_write_ordered man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_ordered_begin-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_ordered_begin","text":"MPI_File_write_ordered_begin(fh, buf, count, datatype)\n\nMPI_File_write_ordered_begin man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_ordered_begin_c-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_ordered_begin_c","text":"MPI_File_write_ordered_begin_c(fh, buf, count, datatype)\n\nMPI_File_write_ordered_begin_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_ordered_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_ordered_c","text":"MPI_File_write_ordered_c(fh, buf, count, datatype, status)\n\nMPI_File_write_ordered_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_ordered_end-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_ordered_end","text":"MPI_File_write_ordered_end(fh, buf, status)\n\nMPI_File_write_ordered_end man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_shared-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_shared","text":"MPI_File_write_shared(fh, buf, count, datatype, status)\n\nMPI_File_write_shared man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_shared_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_shared_c","text":"MPI_File_write_shared_c(fh, buf, count, datatype, status)\n\nMPI_File_write_shared_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Finalize-Tuple{}","page":"Low-level API","title":"MPI.API.MPI_Finalize","text":"MPI_Finalize()\n\nMPI_Finalize man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Finalized-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Finalized","text":"MPI_Finalized(flag)\n\nMPI_Finalized man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Free_mem-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Free_mem","text":"MPI_Free_mem(base)\n\nMPI_Free_mem man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Gather-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Gather","text":"MPI_Gather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm)\n\nMPI_Gather man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Gather_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Gather_c","text":"MPI_Gather_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm)\n\nMPI_Gather_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Gather_init-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Gather_init","text":"MPI_Gather_init(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, info, request)\n\nMPI_Gather_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Gather_init_c-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Gather_init_c","text":"MPI_Gather_init_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, info, request)\n\nMPI_Gather_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Gatherv-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Gatherv","text":"MPI_Gatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm)\n\nMPI_Gatherv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Gatherv_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Gatherv_c","text":"MPI_Gatherv_c(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm)\n\nMPI_Gatherv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Gatherv_init-NTuple{11, Any}","page":"Low-level API","title":"MPI.API.MPI_Gatherv_init","text":"MPI_Gatherv_init(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm, info, request)\n\nMPI_Gatherv_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Gatherv_init_c-NTuple{11, Any}","page":"Low-level API","title":"MPI.API.MPI_Gatherv_init_c","text":"MPI_Gatherv_init_c(sendbuf, sendcount, sendtype, recvbuf, 
recvcounts, displs, recvtype, root, comm, info, request)\n\nMPI_Gatherv_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Get","text":"MPI_Get(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win)\n\nMPI_Get man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get_accumulate-NTuple{12, Any}","page":"Low-level API","title":"MPI.API.MPI_Get_accumulate","text":"MPI_Get_accumulate(origin_addr, origin_count, origin_datatype, result_addr, result_count, result_datatype, target_rank, target_disp, target_count, target_datatype, op, win)\n\nMPI_Get_accumulate man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get_accumulate_c-NTuple{12, Any}","page":"Low-level API","title":"MPI.API.MPI_Get_accumulate_c","text":"MPI_Get_accumulate_c(origin_addr, origin_count, origin_datatype, result_addr, result_count, result_datatype, target_rank, target_disp, target_count, target_datatype, op, win)\n\nMPI_Get_accumulate_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get_address-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Get_address","text":"MPI_Get_address(location, address)\n\nMPI_Get_address man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Get_c","text":"MPI_Get_c(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win)\n\nMPI_Get_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get_count-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Get_count","text":"MPI_Get_count(status, datatype, count)\n\nMPI_Get_count man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get_count_c-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Get_count_c","text":"MPI_Get_count_c(status, datatype, count)\n\nMPI_Get_count_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get_elements-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Get_elements","text":"MPI_Get_elements(status, datatype, count)\n\nMPI_Get_elements man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get_elements_c-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Get_elements_c","text":"MPI_Get_elements_c(status, datatype, count)\n\nMPI_Get_elements_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get_elements_x-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Get_elements_x","text":"MPI_Get_elements_x(status, datatype, count)\n\nMPI_Get_elements_x man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get_library_version-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Get_library_version","text":"MPI_Get_library_version(version, resultlen)\n\nMPI_Get_library_version man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get_processor_name-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Get_processor_name","text":"MPI_Get_processor_name(name, resultlen)\n\nMPI_Get_processor_name man 
page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get_version-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Get_version","text":"MPI_Get_version(version, subversion)\n\nMPI_Get_version man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Graph_create-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Graph_create","text":"MPI_Graph_create(comm_old, nnodes, indx, edges, reorder, comm_graph)\n\nMPI_Graph_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Graph_get-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Graph_get","text":"MPI_Graph_get(comm, maxindex, maxedges, indx, edges)\n\nMPI_Graph_get man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Graph_map-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Graph_map","text":"MPI_Graph_map(comm, nnodes, indx, edges, newrank)\n\nMPI_Graph_map man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Graph_neighbors-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Graph_neighbors","text":"MPI_Graph_neighbors(comm, rank, maxneighbors, neighbors)\n\nMPI_Graph_neighbors man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Graph_neighbors_count-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Graph_neighbors_count","text":"MPI_Graph_neighbors_count(comm, rank, nneighbors)\n\nMPI_Graph_neighbors_count man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Graphdims_get-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Graphdims_get","text":"MPI_Graphdims_get(comm, nnodes, nedges)\n\nMPI_Graphdims_get man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Grequest_complete-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Grequest_complete","text":"MPI_Grequest_complete(request)\n\nMPI_Grequest_complete man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Grequest_start-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Grequest_start","text":"MPI_Grequest_start(query_fn, free_fn, cancel_fn, extra_state, request)\n\nMPI_Grequest_start man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Group_compare-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Group_compare","text":"MPI_Group_compare(group1, group2, result)\n\nMPI_Group_compare man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Group_difference-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Group_difference","text":"MPI_Group_difference(group1, group2, newgroup)\n\nMPI_Group_difference man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Group_excl-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Group_excl","text":"MPI_Group_excl(group, n, ranks, newgroup)\n\nMPI_Group_excl man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Group_free-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Group_free","text":"MPI_Group_free(group)\n\nMPI_Group_free man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Group_incl-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Group_incl","text":"MPI_Group_incl(group, n, ranks, newgroup)\n\nMPI_Group_incl man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Group_intersection-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Group_intersection","text":"MPI_Group_intersection(group1, group2, newgroup)\n\nMPI_Group_intersection man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Group_range_excl-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Group_range_excl","text":"MPI_Group_range_excl(group, n, ranges, newgroup)\n\nMPI_Group_range_excl man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Group_range_incl-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Group_range_incl","text":"MPI_Group_range_incl(group, n, ranges, newgroup)\n\nMPI_Group_range_incl man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Group_rank-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Group_rank","text":"MPI_Group_rank(group, rank)\n\nMPI_Group_rank man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Group_size-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Group_size","text":"MPI_Group_size(group, size)\n\nMPI_Group_size man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Group_translate_ranks-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Group_translate_ranks","text":"MPI_Group_translate_ranks(group1, n, ranks1, group2, ranks2)\n\nMPI_Group_translate_ranks man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Group_union-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Group_union","text":"MPI_Group_union(group1, group2, newgroup)\n\nMPI_Group_union man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iallgather-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Iallgather","text":"MPI_Iallgather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, request)\n\nMPI_Iallgather man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iallgather_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Iallgather_c","text":"MPI_Iallgather_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, request)\n\nMPI_Iallgather_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iallgatherv-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Iallgatherv","text":"MPI_Iallgatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm, request)\n\nMPI_Iallgatherv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iallgatherv_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Iallgatherv_c","text":"MPI_Iallgatherv_c(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm, request)\n\nMPI_Iallgatherv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iallreduce-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Iallreduce","text":"MPI_Iallreduce(sendbuf, recvbuf, count, 
datatype, op, comm, request)\n\nMPI_Iallreduce man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iallreduce_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Iallreduce_c","text":"MPI_Iallreduce_c(sendbuf, recvbuf, count, datatype, op, comm, request)\n\nMPI_Iallreduce_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ialltoall-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Ialltoall","text":"MPI_Ialltoall(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, request)\n\nMPI_Ialltoall man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ialltoall_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Ialltoall_c","text":"MPI_Ialltoall_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, request)\n\nMPI_Ialltoall_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ialltoallv-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Ialltoallv","text":"MPI_Ialltoallv(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, request)\n\nMPI_Ialltoallv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ialltoallv_c-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Ialltoallv_c","text":"MPI_Ialltoallv_c(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, request)\n\nMPI_Ialltoallv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ialltoallw-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Ialltoallw","text":"MPI_Ialltoallw(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, request)\n\nMPI_Ialltoallw man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ialltoallw_c-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Ialltoallw_c","text":"MPI_Ialltoallw_c(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, request)\n\nMPI_Ialltoallw_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ibarrier-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Ibarrier","text":"MPI_Ibarrier(comm, request)\n\nMPI_Ibarrier man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ibcast-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Ibcast","text":"MPI_Ibcast(buffer, count, datatype, root, comm, request)\n\nMPI_Ibcast man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ibcast_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Ibcast_c","text":"MPI_Ibcast_c(buffer, count, datatype, root, comm, request)\n\nMPI_Ibcast_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ibsend-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Ibsend","text":"MPI_Ibsend(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Ibsend man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ibsend_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Ibsend_c","text":"MPI_Ibsend_c(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Ibsend_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iexscan-NTuple{7, 
Any}","page":"Low-level API","title":"MPI.API.MPI_Iexscan","text":"MPI_Iexscan(sendbuf, recvbuf, count, datatype, op, comm, request)\n\nMPI_Iexscan man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iexscan_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Iexscan_c","text":"MPI_Iexscan_c(sendbuf, recvbuf, count, datatype, op, comm, request)\n\nMPI_Iexscan_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Igather-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Igather","text":"MPI_Igather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, request)\n\nMPI_Igather man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Igather_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Igather_c","text":"MPI_Igather_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, request)\n\nMPI_Igather_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Igatherv-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Igatherv","text":"MPI_Igatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm, request)\n\nMPI_Igatherv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Igatherv_c-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Igatherv_c","text":"MPI_Igatherv_c(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm, request)\n\nMPI_Igatherv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Improbe-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Improbe","text":"MPI_Improbe(source, tag, comm, flag, message, status)\n\nMPI_Improbe man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Imrecv-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Imrecv","text":"MPI_Imrecv(buf, count, datatype, message, request)\n\nMPI_Imrecv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Imrecv_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Imrecv_c","text":"MPI_Imrecv_c(buf, count, datatype, message, request)\n\nMPI_Imrecv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ineighbor_allgather-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Ineighbor_allgather","text":"MPI_Ineighbor_allgather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, request)\n\nMPI_Ineighbor_allgather man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ineighbor_allgather_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Ineighbor_allgather_c","text":"MPI_Ineighbor_allgather_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, request)\n\nMPI_Ineighbor_allgather_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ineighbor_allgatherv-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Ineighbor_allgatherv","text":"MPI_Ineighbor_allgatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm, request)\n\nMPI_Ineighbor_allgatherv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ineighbor_allgatherv_c-NTuple{9, Any}","page":"Low-level 
API","title":"MPI.API.MPI_Ineighbor_allgatherv_c","text":"MPI_Ineighbor_allgatherv_c(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm, request)\n\nMPI_Ineighbor_allgatherv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ineighbor_alltoall-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Ineighbor_alltoall","text":"MPI_Ineighbor_alltoall(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, request)\n\nMPI_Ineighbor_alltoall man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ineighbor_alltoall_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Ineighbor_alltoall_c","text":"MPI_Ineighbor_alltoall_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, request)\n\nMPI_Ineighbor_alltoall_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ineighbor_alltoallv-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Ineighbor_alltoallv","text":"MPI_Ineighbor_alltoallv(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, request)\n\nMPI_Ineighbor_alltoallv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ineighbor_alltoallv_c-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Ineighbor_alltoallv_c","text":"MPI_Ineighbor_alltoallv_c(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, request)\n\nMPI_Ineighbor_alltoallv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ineighbor_alltoallw-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Ineighbor_alltoallw","text":"MPI_Ineighbor_alltoallw(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, request)\n\nMPI_Ineighbor_alltoallw man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ineighbor_alltoallw_c-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Ineighbor_alltoallw_c","text":"MPI_Ineighbor_alltoallw_c(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, request)\n\nMPI_Ineighbor_alltoallw_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Info_create-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Info_create","text":"MPI_Info_create(info)\n\nMPI_Info_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Info_create_env-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Info_create_env","text":"MPI_Info_create_env(argc, argv, info)\n\nMPI_Info_create_env man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Info_delete-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Info_delete","text":"MPI_Info_delete(info, key)\n\nMPI_Info_delete man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Info_dup-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Info_dup","text":"MPI_Info_dup(info, newinfo)\n\nMPI_Info_dup man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Info_free-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Info_free","text":"MPI_Info_free(info)\n\nMPI_Info_free man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Info_get-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Info_get","text":"MPI_Info_get(info, key, valuelen, value, flag)\n\nMPI_Info_get man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Info_get_nkeys-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Info_get_nkeys","text":"MPI_Info_get_nkeys(info, nkeys)\n\nMPI_Info_get_nkeys man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Info_get_nthkey-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Info_get_nthkey","text":"MPI_Info_get_nthkey(info, n, key)\n\nMPI_Info_get_nthkey man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Info_get_string-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Info_get_string","text":"MPI_Info_get_string(info, key, buflen, value, flag)\n\nMPI_Info_get_string man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Info_get_valuelen-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Info_get_valuelen","text":"MPI_Info_get_valuelen(info, key, valuelen, flag)\n\nMPI_Info_get_valuelen man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Info_set-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Info_set","text":"MPI_Info_set(info, key, value)\n\nMPI_Info_set man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Init-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Init","text":"MPI_Init(argc, argv)\n\nMPI_Init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Init_thread-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Init_thread","text":"MPI_Init_thread(argc, argv, required, provided)\n\nMPI_Init_thread man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Initialized-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Initialized","text":"MPI_Initialized(flag)\n\nMPI_Initialized man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Intercomm_create-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Intercomm_create","text":"MPI_Intercomm_create(local_comm, local_leader, peer_comm, remote_leader, tag, newintercomm)\n\nMPI_Intercomm_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Intercomm_create_from_groups-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Intercomm_create_from_groups","text":"MPI_Intercomm_create_from_groups(local_group, local_leader, remote_group, remote_leader, stringtag, info, errhandler, newintercomm)\n\nMPI_Intercomm_create_from_groups man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Intercomm_merge-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Intercomm_merge","text":"MPI_Intercomm_merge(intercomm, high, newintracomm)\n\nMPI_Intercomm_merge man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iprobe-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Iprobe","text":"MPI_Iprobe(source, tag, comm, flag, status)\n\nMPI_Iprobe man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Irecv-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Irecv","text":"MPI_Irecv(buf, count, datatype, source, tag, comm, request)\n\nMPI_Irecv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Irecv_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Irecv_c","text":"MPI_Irecv_c(buf, count, datatype, source, tag, comm, request)\n\nMPI_Irecv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ireduce-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Ireduce","text":"MPI_Ireduce(sendbuf, recvbuf, count, datatype, op, root, comm, request)\n\nMPI_Ireduce man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ireduce_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Ireduce_c","text":"MPI_Ireduce_c(sendbuf, recvbuf, count, datatype, op, root, comm, request)\n\nMPI_Ireduce_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ireduce_scatter-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Ireduce_scatter","text":"MPI_Ireduce_scatter(sendbuf, recvbuf, recvcounts, datatype, op, comm, request)\n\nMPI_Ireduce_scatter man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ireduce_scatter_block-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Ireduce_scatter_block","text":"MPI_Ireduce_scatter_block(sendbuf, recvbuf, recvcount, datatype, op, comm, request)\n\nMPI_Ireduce_scatter_block man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ireduce_scatter_block_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Ireduce_scatter_block_c","text":"MPI_Ireduce_scatter_block_c(sendbuf, recvbuf, recvcount, datatype, op, comm, request)\n\nMPI_Ireduce_scatter_block_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ireduce_scatter_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Ireduce_scatter_c","text":"MPI_Ireduce_scatter_c(sendbuf, recvbuf, recvcounts, datatype, op, comm, request)\n\nMPI_Ireduce_scatter_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Irsend-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Irsend","text":"MPI_Irsend(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Irsend man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Irsend_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Irsend_c","text":"MPI_Irsend_c(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Irsend_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Is_thread_main-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Is_thread_main","text":"MPI_Is_thread_main(flag)\n\nMPI_Is_thread_main man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iscan-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Iscan","text":"MPI_Iscan(sendbuf, recvbuf, count, datatype, op, comm, request)\n\nMPI_Iscan man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iscan_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Iscan_c","text":"MPI_Iscan_c(sendbuf, recvbuf, count, datatype, op, comm, request)\n\nMPI_Iscan_c man 
page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iscatter-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Iscatter","text":"MPI_Iscatter(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, request)\n\nMPI_Iscatter man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iscatter_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Iscatter_c","text":"MPI_Iscatter_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, request)\n\nMPI_Iscatter_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iscatterv-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Iscatterv","text":"MPI_Iscatterv(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm, request)\n\nMPI_Iscatterv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iscatterv_c-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Iscatterv_c","text":"MPI_Iscatterv_c(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm, request)\n\nMPI_Iscatterv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Isend-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Isend","text":"MPI_Isend(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Isend man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Isend_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Isend_c","text":"MPI_Isend_c(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Isend_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Isendrecv-NTuple{12, Any}","page":"Low-level API","title":"MPI.API.MPI_Isendrecv","text":"MPI_Isendrecv(sendbuf, sendcount, sendtype, dest, sendtag, recvbuf, recvcount, recvtype, source, recvtag, comm, request)\n\nMPI_Isendrecv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Isendrecv_c-NTuple{12, Any}","page":"Low-level API","title":"MPI.API.MPI_Isendrecv_c","text":"MPI_Isendrecv_c(sendbuf, sendcount, sendtype, dest, sendtag, recvbuf, recvcount, recvtype, source, recvtag, comm, request)\n\nMPI_Isendrecv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Isendrecv_replace-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Isendrecv_replace","text":"MPI_Isendrecv_replace(buf, count, datatype, dest, sendtag, source, recvtag, comm, request)\n\nMPI_Isendrecv_replace man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Isendrecv_replace_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Isendrecv_replace_c","text":"MPI_Isendrecv_replace_c(buf, count, datatype, dest, sendtag, source, recvtag, comm, request)\n\nMPI_Isendrecv_replace_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Issend-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Issend","text":"MPI_Issend(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Issend man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Issend_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Issend_c","text":"MPI_Issend_c(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Issend_c man page: 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Keyval_create-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Keyval_create","text":"MPI_Keyval_create(copy_fn, delete_fn, keyval, extra_state)\n\nMPI_Keyval_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Keyval_free-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Keyval_free","text":"MPI_Keyval_free(keyval)\n\nMPI_Keyval_free man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Lookup_name-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Lookup_name","text":"MPI_Lookup_name(service_name, info, port_name)\n\nMPI_Lookup_name man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Mprobe-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Mprobe","text":"MPI_Mprobe(source, tag, comm, message, status)\n\nMPI_Mprobe man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Mrecv-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Mrecv","text":"MPI_Mrecv(buf, count, datatype, message, status)\n\nMPI_Mrecv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Mrecv_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Mrecv_c","text":"MPI_Mrecv_c(buf, count, datatype, message, status)\n\nMPI_Mrecv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_allgather-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_allgather","text":"MPI_Neighbor_allgather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)\n\nMPI_Neighbor_allgather man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_allgather_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_allgather_c","text":"MPI_Neighbor_allgather_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)\n\nMPI_Neighbor_allgather_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_allgather_init-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_allgather_init","text":"MPI_Neighbor_allgather_init(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, info, request)\n\nMPI_Neighbor_allgather_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_allgather_init_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_allgather_init_c","text":"MPI_Neighbor_allgather_init_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, info, request)\n\nMPI_Neighbor_allgather_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_allgatherv-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_allgatherv","text":"MPI_Neighbor_allgatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm)\n\nMPI_Neighbor_allgatherv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_allgatherv_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_allgatherv_c","text":"MPI_Neighbor_allgatherv_c(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm)\n\nMPI_Neighbor_allgatherv_c man page: 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_allgatherv_init-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_allgatherv_init","text":"MPI_Neighbor_allgatherv_init(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm, info, request)\n\nMPI_Neighbor_allgatherv_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_allgatherv_init_c-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_allgatherv_init_c","text":"MPI_Neighbor_allgatherv_init_c(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm, info, request)\n\nMPI_Neighbor_allgatherv_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_alltoall-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_alltoall","text":"MPI_Neighbor_alltoall(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)\n\nMPI_Neighbor_alltoall man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_alltoall_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_alltoall_c","text":"MPI_Neighbor_alltoall_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)\n\nMPI_Neighbor_alltoall_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_alltoall_init-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_alltoall_init","text":"MPI_Neighbor_alltoall_init(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, info, request)\n\nMPI_Neighbor_alltoall_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_alltoall_init_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_alltoall_init_c","text":"MPI_Neighbor_alltoall_init_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, info, request)\n\nMPI_Neighbor_alltoall_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_alltoallv-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_alltoallv","text":"MPI_Neighbor_alltoallv(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm)\n\nMPI_Neighbor_alltoallv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_alltoallv_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_alltoallv_c","text":"MPI_Neighbor_alltoallv_c(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm)\n\nMPI_Neighbor_alltoallv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_alltoallv_init-NTuple{11, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_alltoallv_init","text":"MPI_Neighbor_alltoallv_init(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, info, request)\n\nMPI_Neighbor_alltoallv_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_alltoallv_init_c-NTuple{11, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_alltoallv_init_c","text":"MPI_Neighbor_alltoallv_init_c(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, info, request)\n\nMPI_Neighbor_alltoallv_init_c man page: 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_alltoallw-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_alltoallw","text":"MPI_Neighbor_alltoallw(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm)\n\nMPI_Neighbor_alltoallw man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_alltoallw_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_alltoallw_c","text":"MPI_Neighbor_alltoallw_c(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm)\n\nMPI_Neighbor_alltoallw_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_alltoallw_init-NTuple{11, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_alltoallw_init","text":"MPI_Neighbor_alltoallw_init(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, info, request)\n\nMPI_Neighbor_alltoallw_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_alltoallw_init_c-NTuple{11, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_alltoallw_init_c","text":"MPI_Neighbor_alltoallw_init_c(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, info, request)\n\nMPI_Neighbor_alltoallw_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Op_commutative-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Op_commutative","text":"MPI_Op_commutative(op, commute)\n\nMPI_Op_commutative man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Op_create-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Op_create","text":"MPI_Op_create(user_fn, commute, op)\n\nMPI_Op_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Op_create_c-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Op_create_c","text":"MPI_Op_create_c(user_fn, commute, op)\n\nMPI_Op_create_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Op_free-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Op_free","text":"MPI_Op_free(op)\n\nMPI_Op_free man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Open_port-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Open_port","text":"MPI_Open_port(info, port_name)\n\nMPI_Open_port man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Pack-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Pack","text":"MPI_Pack(inbuf, incount, datatype, outbuf, outsize, position, comm)\n\nMPI_Pack man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Pack_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Pack_c","text":"MPI_Pack_c(inbuf, incount, datatype, outbuf, outsize, position, comm)\n\nMPI_Pack_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Pack_external-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Pack_external","text":"MPI_Pack_external(datarep, inbuf, incount, datatype, outbuf, outsize, position)\n\nMPI_Pack_external man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Pack_external_c-NTuple{7, 
Any}","page":"Low-level API","title":"MPI.API.MPI_Pack_external_c","text":"MPI_Pack_external_c(datarep, inbuf, incount, datatype, outbuf, outsize, position)\n\nMPI_Pack_external_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Pack_external_size-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Pack_external_size","text":"MPI_Pack_external_size(datarep, incount, datatype, size)\n\nMPI_Pack_external_size man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Pack_external_size_c-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Pack_external_size_c","text":"MPI_Pack_external_size_c(datarep, incount, datatype, size)\n\nMPI_Pack_external_size_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Pack_size-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Pack_size","text":"MPI_Pack_size(incount, datatype, comm, size)\n\nMPI_Pack_size man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Pack_size_c-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Pack_size_c","text":"MPI_Pack_size_c(incount, datatype, comm, size)\n\nMPI_Pack_size_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Parrived-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Parrived","text":"MPI_Parrived(request, partition, flag)\n\nMPI_Parrived man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Pready-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Pready","text":"MPI_Pready(partition, request)\n\nMPI_Pready man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Pready_list-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Pready_list","text":"MPI_Pready_list(length, array_of_partitions, request)\n\nMPI_Pready_list man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Pready_range-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Pready_range","text":"MPI_Pready_range(partition_low, partition_high, request)\n\nMPI_Pready_range man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Precv_init-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Precv_init","text":"MPI_Precv_init(buf, partitions, count, datatype, dest, tag, comm, info, request)\n\nMPI_Precv_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Probe-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Probe","text":"MPI_Probe(source, tag, comm, status)\n\nMPI_Probe man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Psend_init-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Psend_init","text":"MPI_Psend_init(buf, partitions, count, datatype, dest, tag, comm, info, request)\n\nMPI_Psend_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Publish_name-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Publish_name","text":"MPI_Publish_name(service_name, info, port_name)\n\nMPI_Publish_name man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Put-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Put","text":"MPI_Put(origin_addr, 
origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win)\n\nMPI_Put man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Put_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Put_c","text":"MPI_Put_c(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win)\n\nMPI_Put_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Query_thread-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Query_thread","text":"MPI_Query_thread(provided)\n\nMPI_Query_thread man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Raccumulate-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Raccumulate","text":"MPI_Raccumulate(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, op, win, request)\n\nMPI_Raccumulate man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Raccumulate_c-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Raccumulate_c","text":"MPI_Raccumulate_c(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, op, win, request)\n\nMPI_Raccumulate_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Recv-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Recv","text":"MPI_Recv(buf, count, datatype, source, tag, comm, status)\n\nMPI_Recv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Recv_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Recv_c","text":"MPI_Recv_c(buf, count, datatype, source, tag, comm, status)\n\nMPI_Recv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Recv_init-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Recv_init","text":"MPI_Recv_init(buf, count, datatype, source, tag, comm, request)\n\nMPI_Recv_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Recv_init_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Recv_init_c","text":"MPI_Recv_init_c(buf, count, datatype, source, tag, comm, request)\n\nMPI_Recv_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce","text":"MPI_Reduce(sendbuf, recvbuf, count, datatype, op, root, comm)\n\nMPI_Reduce man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_c","text":"MPI_Reduce_c(sendbuf, recvbuf, count, datatype, op, root, comm)\n\nMPI_Reduce_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_init-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_init","text":"MPI_Reduce_init(sendbuf, recvbuf, count, datatype, op, root, comm, info, request)\n\nMPI_Reduce_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_init_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_init_c","text":"MPI_Reduce_init_c(sendbuf, recvbuf, count, datatype, op, root, comm, info, request)\n\nMPI_Reduce_init_c man page: 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_local-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_local","text":"MPI_Reduce_local(inbuf, inoutbuf, count, datatype, op)\n\nMPI_Reduce_local man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_local_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_local_c","text":"MPI_Reduce_local_c(inbuf, inoutbuf, count, datatype, op)\n\nMPI_Reduce_local_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_scatter-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_scatter","text":"MPI_Reduce_scatter(sendbuf, recvbuf, recvcounts, datatype, op, comm)\n\nMPI_Reduce_scatter man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_scatter_block-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_scatter_block","text":"MPI_Reduce_scatter_block(sendbuf, recvbuf, recvcount, datatype, op, comm)\n\nMPI_Reduce_scatter_block man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_scatter_block_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_scatter_block_c","text":"MPI_Reduce_scatter_block_c(sendbuf, recvbuf, recvcount, datatype, op, comm)\n\nMPI_Reduce_scatter_block_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_scatter_block_init-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_scatter_block_init","text":"MPI_Reduce_scatter_block_init(sendbuf, recvbuf, recvcount, datatype, op, comm, info, request)\n\nMPI_Reduce_scatter_block_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_scatter_block_init_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_scatter_block_init_c","text":"MPI_Reduce_scatter_block_init_c(sendbuf, recvbuf, recvcount, datatype, op, comm, info, request)\n\nMPI_Reduce_scatter_block_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_scatter_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_scatter_c","text":"MPI_Reduce_scatter_c(sendbuf, recvbuf, recvcounts, datatype, op, comm)\n\nMPI_Reduce_scatter_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_scatter_init-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_scatter_init","text":"MPI_Reduce_scatter_init(sendbuf, recvbuf, recvcounts, datatype, op, comm, info, request)\n\nMPI_Reduce_scatter_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_scatter_init_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_scatter_init_c","text":"MPI_Reduce_scatter_init_c(sendbuf, recvbuf, recvcounts, datatype, op, comm, info, request)\n\nMPI_Reduce_scatter_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Register_datarep-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Register_datarep","text":"MPI_Register_datarep(datarep, read_conversion_fn, write_conversion_fn, dtype_file_extent_fn, extra_state)\n\nMPI_Register_datarep man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Register_datarep_c-NTuple{5, Any}","page":"Low-level 
API","title":"MPI.API.MPI_Register_datarep_c","text":"MPI_Register_datarep_c(datarep, read_conversion_fn, write_conversion_fn, dtype_file_extent_fn, extra_state)\n\nMPI_Register_datarep_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Request_free-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Request_free","text":"MPI_Request_free(request)\n\nMPI_Request_free man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Request_get_status-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Request_get_status","text":"MPI_Request_get_status(request, flag, status)\n\nMPI_Request_get_status man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Rget-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Rget","text":"MPI_Rget(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win, request)\n\nMPI_Rget man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Rget_accumulate-NTuple{13, Any}","page":"Low-level API","title":"MPI.API.MPI_Rget_accumulate","text":"MPI_Rget_accumulate(origin_addr, origin_count, origin_datatype, result_addr, result_count, result_datatype, target_rank, target_disp, target_count, target_datatype, op, win, request)\n\nMPI_Rget_accumulate man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Rget_accumulate_c-NTuple{13, Any}","page":"Low-level API","title":"MPI.API.MPI_Rget_accumulate_c","text":"MPI_Rget_accumulate_c(origin_addr, origin_count, origin_datatype, result_addr, result_count, result_datatype, target_rank, target_disp, target_count, target_datatype, op, win, request)\n\nMPI_Rget_accumulate_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Rget_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Rget_c","text":"MPI_Rget_c(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win, request)\n\nMPI_Rget_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Rput-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Rput","text":"MPI_Rput(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win, request)\n\nMPI_Rput man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Rput_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Rput_c","text":"MPI_Rput_c(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win, request)\n\nMPI_Rput_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Rsend-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Rsend","text":"MPI_Rsend(buf, count, datatype, dest, tag, comm)\n\nMPI_Rsend man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Rsend_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Rsend_c","text":"MPI_Rsend_c(buf, count, datatype, dest, tag, comm)\n\nMPI_Rsend_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Rsend_init-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Rsend_init","text":"MPI_Rsend_init(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Rsend_init man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Rsend_init_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Rsend_init_c","text":"MPI_Rsend_init_c(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Rsend_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Scan-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Scan","text":"MPI_Scan(sendbuf, recvbuf, count, datatype, op, comm)\n\nMPI_Scan man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Scan_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Scan_c","text":"MPI_Scan_c(sendbuf, recvbuf, count, datatype, op, comm)\n\nMPI_Scan_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Scan_init-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Scan_init","text":"MPI_Scan_init(sendbuf, recvbuf, count, datatype, op, comm, info, request)\n\nMPI_Scan_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Scan_init_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Scan_init_c","text":"MPI_Scan_init_c(sendbuf, recvbuf, count, datatype, op, comm, info, request)\n\nMPI_Scan_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Scatter-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Scatter","text":"MPI_Scatter(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm)\n\nMPI_Scatter man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Scatter_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Scatter_c","text":"MPI_Scatter_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm)\n\nMPI_Scatter_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Scatter_init-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Scatter_init","text":"MPI_Scatter_init(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, info, request)\n\nMPI_Scatter_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Scatter_init_c-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Scatter_init_c","text":"MPI_Scatter_init_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, info, request)\n\nMPI_Scatter_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Scatterv-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Scatterv","text":"MPI_Scatterv(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm)\n\nMPI_Scatterv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Scatterv_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Scatterv_c","text":"MPI_Scatterv_c(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm)\n\nMPI_Scatterv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Scatterv_init-NTuple{11, Any}","page":"Low-level API","title":"MPI.API.MPI_Scatterv_init","text":"MPI_Scatterv_init(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm, info, request)\n\nMPI_Scatterv_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Scatterv_init_c-NTuple{11, 
Any}","page":"Low-level API","title":"MPI.API.MPI_Scatterv_init_c","text":"MPI_Scatterv_init_c(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm, info, request)\n\nMPI_Scatterv_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Send-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Send","text":"MPI_Send(buf, count, datatype, dest, tag, comm)\n\nMPI_Send man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Send_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Send_c","text":"MPI_Send_c(buf, count, datatype, dest, tag, comm)\n\nMPI_Send_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Send_init-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Send_init","text":"MPI_Send_init(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Send_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Send_init_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Send_init_c","text":"MPI_Send_init_c(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Send_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Sendrecv-NTuple{12, Any}","page":"Low-level API","title":"MPI.API.MPI_Sendrecv","text":"MPI_Sendrecv(sendbuf, sendcount, sendtype, dest, sendtag, recvbuf, recvcount, recvtype, source, recvtag, comm, status)\n\nMPI_Sendrecv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Sendrecv_c-NTuple{12, Any}","page":"Low-level API","title":"MPI.API.MPI_Sendrecv_c","text":"MPI_Sendrecv_c(sendbuf, sendcount, sendtype, dest, sendtag, recvbuf, recvcount, recvtype, source, recvtag, comm, status)\n\nMPI_Sendrecv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Sendrecv_replace-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Sendrecv_replace","text":"MPI_Sendrecv_replace(buf, count, datatype, dest, sendtag, source, recvtag, comm, status)\n\nMPI_Sendrecv_replace man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Sendrecv_replace_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Sendrecv_replace_c","text":"MPI_Sendrecv_replace_c(buf, count, datatype, dest, sendtag, source, recvtag, comm, status)\n\nMPI_Sendrecv_replace_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ssend-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Ssend","text":"MPI_Ssend(buf, count, datatype, dest, tag, comm)\n\nMPI_Ssend man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ssend_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Ssend_c","text":"MPI_Ssend_c(buf, count, datatype, dest, tag, comm)\n\nMPI_Ssend_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ssend_init-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Ssend_init","text":"MPI_Ssend_init(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Ssend_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ssend_init_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Ssend_init_c","text":"MPI_Ssend_init_c(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Ssend_init_c man page: 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Start-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Start","text":"MPI_Start(request)\n\nMPI_Start man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Startall-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Startall","text":"MPI_Startall(count, array_of_requests)\n\nMPI_Startall man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Status_c2f-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Status_c2f","text":"MPI_Status_c2f(c_status, f_status)\n\nMPI_Status_c2f man page: OpenMPI\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Status_f2c-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Status_f2c","text":"MPI_Status_f2c(f_status, c_status)\n\nMPI_Status_f2c man page: OpenMPI\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Status_set_cancelled-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Status_set_cancelled","text":"MPI_Status_set_cancelled(status, flag)\n\nMPI_Status_set_cancelled man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Status_set_elements-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Status_set_elements","text":"MPI_Status_set_elements(status, datatype, count)\n\nMPI_Status_set_elements man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Status_set_elements_x-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Status_set_elements_x","text":"MPI_Status_set_elements_x(status, datatype, count)\n\nMPI_Status_set_elements_x man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Test-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Test","text":"MPI_Test(request, flag, status)\n\nMPI_Test man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Test_cancelled-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Test_cancelled","text":"MPI_Test_cancelled(status, flag)\n\nMPI_Test_cancelled man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Testall-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Testall","text":"MPI_Testall(count, array_of_requests, flag, array_of_statuses)\n\nMPI_Testall man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Testany-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Testany","text":"MPI_Testany(count, array_of_requests, indx, flag, status)\n\nMPI_Testany man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Testsome-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Testsome","text":"MPI_Testsome(incount, array_of_requests, outcount, array_of_indices, array_of_statuses)\n\nMPI_Testsome man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Topo_test-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Topo_test","text":"MPI_Topo_test(comm, status)\n\nMPI_Topo_test man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_commit-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Type_commit","text":"MPI_Type_commit(datatype)\n\nMPI_Type_commit man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_contiguous-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_contiguous","text":"MPI_Type_contiguous(count, oldtype, newtype)\n\nMPI_Type_contiguous man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_contiguous_c-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_contiguous_c","text":"MPI_Type_contiguous_c(count, oldtype, newtype)\n\nMPI_Type_contiguous_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_darray-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_darray","text":"MPI_Type_create_darray(size, rank, ndims, array_of_gsizes, array_of_distribs, array_of_dargs, array_of_psizes, order, oldtype, newtype)\n\nMPI_Type_create_darray man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_darray_c-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_darray_c","text":"MPI_Type_create_darray_c(size, rank, ndims, array_of_gsizes, array_of_distribs, array_of_dargs, array_of_psizes, order, oldtype, newtype)\n\nMPI_Type_create_darray_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_f90_complex-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_f90_complex","text":"MPI_Type_create_f90_complex(p, r, newtype)\n\nMPI_Type_create_f90_complex man page: OpenMPI\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_f90_integer-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_f90_integer","text":"MPI_Type_create_f90_integer(r, newtype)\n\nMPI_Type_create_f90_integer man page: OpenMPI\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_f90_real-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_f90_real","text":"MPI_Type_create_f90_real(p, r, newtype)\n\nMPI_Type_create_f90_real man page: OpenMPI\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_hindexed-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_hindexed","text":"MPI_Type_create_hindexed(count, array_of_blocklengths, array_of_displacements, oldtype, newtype)\n\nMPI_Type_create_hindexed man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_hindexed_block-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_hindexed_block","text":"MPI_Type_create_hindexed_block(count, blocklength, array_of_displacements, oldtype, newtype)\n\nMPI_Type_create_hindexed_block man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_hindexed_block_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_hindexed_block_c","text":"MPI_Type_create_hindexed_block_c(count, blocklength, array_of_displacements, oldtype, newtype)\n\nMPI_Type_create_hindexed_block_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_hindexed_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_hindexed_c","text":"MPI_Type_create_hindexed_c(count, array_of_blocklengths, array_of_displacements, oldtype, newtype)\n\nMPI_Type_create_hindexed_c man page: 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_hvector-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_hvector","text":"MPI_Type_create_hvector(count, blocklength, stride, oldtype, newtype)\n\nMPI_Type_create_hvector man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_hvector_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_hvector_c","text":"MPI_Type_create_hvector_c(count, blocklength, stride, oldtype, newtype)\n\nMPI_Type_create_hvector_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_indexed_block-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_indexed_block","text":"MPI_Type_create_indexed_block(count, blocklength, array_of_displacements, oldtype, newtype)\n\nMPI_Type_create_indexed_block man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_indexed_block_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_indexed_block_c","text":"MPI_Type_create_indexed_block_c(count, blocklength, array_of_displacements, oldtype, newtype)\n\nMPI_Type_create_indexed_block_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_keyval-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_keyval","text":"MPI_Type_create_keyval(type_copy_attr_fn, type_delete_attr_fn, type_keyval, extra_state)\n\nMPI_Type_create_keyval man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_resized-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_resized","text":"MPI_Type_create_resized(oldtype, lb, extent, newtype)\n\nMPI_Type_create_resized man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_resized_c-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_resized_c","text":"MPI_Type_create_resized_c(oldtype, lb, extent, newtype)\n\nMPI_Type_create_resized_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_struct-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_struct","text":"MPI_Type_create_struct(count, array_of_blocklengths, array_of_displacements, array_of_types, newtype)\n\nMPI_Type_create_struct man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_struct_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_struct_c","text":"MPI_Type_create_struct_c(count, array_of_blocklengths, array_of_displacements, array_of_types, newtype)\n\nMPI_Type_create_struct_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_subarray-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_subarray","text":"MPI_Type_create_subarray(ndims, array_of_sizes, array_of_subsizes, array_of_starts, order, oldtype, newtype)\n\nMPI_Type_create_subarray man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_subarray_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_subarray_c","text":"MPI_Type_create_subarray_c(ndims, array_of_sizes, array_of_subsizes, array_of_starts, order, oldtype, newtype)\n\nMPI_Type_create_subarray_c man 
page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_delete_attr-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_delete_attr","text":"MPI_Type_delete_attr(datatype, type_keyval)\n\nMPI_Type_delete_attr man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_dup-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_dup","text":"MPI_Type_dup(oldtype, newtype)\n\nMPI_Type_dup man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_extent-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_extent","text":"MPI_Type_extent(datatype, extent)\n\nMPI_Type_extent man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_free-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Type_free","text":"MPI_Type_free(datatype)\n\nMPI_Type_free man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_free_keyval-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Type_free_keyval","text":"MPI_Type_free_keyval(type_keyval)\n\nMPI_Type_free_keyval man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_get_attr-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_get_attr","text":"MPI_Type_get_attr(datatype, type_keyval, attribute_val, flag)\n\nMPI_Type_get_attr man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_get_contents-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_get_contents","text":"MPI_Type_get_contents(datatype, max_integers, max_addresses, max_datatypes, array_of_integers, array_of_addresses, array_of_datatypes)\n\nMPI_Type_get_contents man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_get_contents_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_get_contents_c","text":"MPI_Type_get_contents_c(datatype, max_integers, max_addresses, max_large_counts, max_datatypes, array_of_integers, array_of_addresses, array_of_large_counts, array_of_datatypes)\n\nMPI_Type_get_contents_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_get_envelope-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_get_envelope","text":"MPI_Type_get_envelope(datatype, num_integers, num_addresses, num_datatypes, combiner)\n\nMPI_Type_get_envelope man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_get_envelope_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_get_envelope_c","text":"MPI_Type_get_envelope_c(datatype, num_integers, num_addresses, num_large_counts, num_datatypes, combiner)\n\nMPI_Type_get_envelope_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_get_extent-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_get_extent","text":"MPI_Type_get_extent(datatype, lb, extent)\n\nMPI_Type_get_extent man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_get_extent_c-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_get_extent_c","text":"MPI_Type_get_extent_c(datatype, lb, extent)\n\nMPI_Type_get_extent_c man page: 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_get_extent_x-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_get_extent_x","text":"MPI_Type_get_extent_x(datatype, lb, extent)\n\nMPI_Type_get_extent_x man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_get_name-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_get_name","text":"MPI_Type_get_name(datatype, type_name, resultlen)\n\nMPI_Type_get_name man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_get_true_extent-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_get_true_extent","text":"MPI_Type_get_true_extent(datatype, true_lb, true_extent)\n\nMPI_Type_get_true_extent man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_get_true_extent_c-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_get_true_extent_c","text":"MPI_Type_get_true_extent_c(datatype, true_lb, true_extent)\n\nMPI_Type_get_true_extent_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_get_true_extent_x-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_get_true_extent_x","text":"MPI_Type_get_true_extent_x(datatype, true_lb, true_extent)\n\nMPI_Type_get_true_extent_x man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_hindexed-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_hindexed","text":"MPI_Type_hindexed(count, array_of_blocklengths, array_of_displacements, oldtype, newtype)\n\nMPI_Type_hindexed man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_hvector-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_hvector","text":"MPI_Type_hvector(count, blocklength, stride, oldtype, newtype)\n\nMPI_Type_hvector man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_indexed-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_indexed","text":"MPI_Type_indexed(count, array_of_blocklengths, array_of_displacements, oldtype, newtype)\n\nMPI_Type_indexed man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_indexed_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_indexed_c","text":"MPI_Type_indexed_c(count, array_of_blocklengths, array_of_displacements, oldtype, newtype)\n\nMPI_Type_indexed_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_lb-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_lb","text":"MPI_Type_lb(datatype, displacement)\n\nMPI_Type_lb man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_match_size-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_match_size","text":"MPI_Type_match_size(typeclass, size, datatype)\n\nMPI_Type_match_size man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_set_attr-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_set_attr","text":"MPI_Type_set_attr(datatype, type_keyval, attribute_val)\n\nMPI_Type_set_attr man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_set_name-Tuple{Any, 
Any}","page":"Low-level API","title":"MPI.API.MPI_Type_set_name","text":"MPI_Type_set_name(datatype, type_name)\n\nMPI_Type_set_name man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_size-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_size","text":"MPI_Type_size(datatype, size)\n\nMPI_Type_size man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_size_c-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_size_c","text":"MPI_Type_size_c(datatype, size)\n\nMPI_Type_size_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_size_x-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_size_x","text":"MPI_Type_size_x(datatype, size)\n\nMPI_Type_size_x man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_struct-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_struct","text":"MPI_Type_struct(count, array_of_blocklengths, array_of_displacements, array_of_types, newtype)\n\nMPI_Type_struct man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_ub-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_ub","text":"MPI_Type_ub(datatype, displacement)\n\nMPI_Type_ub man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_vector-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_vector","text":"MPI_Type_vector(count, blocklength, stride, oldtype, newtype)\n\nMPI_Type_vector man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_vector_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_vector_c","text":"MPI_Type_vector_c(count, blocklength, stride, oldtype, newtype)\n\nMPI_Type_vector_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Unpack-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Unpack","text":"MPI_Unpack(inbuf, insize, position, outbuf, outcount, datatype, comm)\n\nMPI_Unpack man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Unpack_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Unpack_c","text":"MPI_Unpack_c(inbuf, insize, position, outbuf, outcount, datatype, comm)\n\nMPI_Unpack_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Unpack_external-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Unpack_external","text":"MPI_Unpack_external(datarep, inbuf, insize, position, outbuf, outcount, datatype)\n\nMPI_Unpack_external man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Unpack_external_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Unpack_external_c","text":"MPI_Unpack_external_c(datarep, inbuf, insize, position, outbuf, outcount, datatype)\n\nMPI_Unpack_external_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Unpublish_name-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Unpublish_name","text":"MPI_Unpublish_name(service_name, info, port_name)\n\nMPI_Unpublish_name man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Wait-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Wait","text":"MPI_Wait(request, 
status)\n\nMPI_Wait man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Waitall-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Waitall","text":"MPI_Waitall(count, array_of_requests, array_of_statuses)\n\nMPI_Waitall man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Waitany-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Waitany","text":"MPI_Waitany(count, array_of_requests, indx, status)\n\nMPI_Waitany man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Waitsome-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Waitsome","text":"MPI_Waitsome(incount, array_of_requests, outcount, array_of_indices, array_of_statuses)\n\nMPI_Waitsome man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_allocate-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_allocate","text":"MPI_Win_allocate(size, disp_unit, info, comm, baseptr, win)\n\nMPI_Win_allocate man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_allocate_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_allocate_c","text":"MPI_Win_allocate_c(size, disp_unit, info, comm, baseptr, win)\n\nMPI_Win_allocate_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_allocate_shared-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_allocate_shared","text":"MPI_Win_allocate_shared(size, disp_unit, info, comm, baseptr, win)\n\nMPI_Win_allocate_shared man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_allocate_shared_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_allocate_shared_c","text":"MPI_Win_allocate_shared_c(size, disp_unit, info, comm, baseptr, win)\n\nMPI_Win_allocate_shared_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_attach-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_attach","text":"MPI_Win_attach(win, base, size)\n\nMPI_Win_attach man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_call_errhandler-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_call_errhandler","text":"MPI_Win_call_errhandler(win, errorcode)\n\nMPI_Win_call_errhandler man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_complete-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Win_complete","text":"MPI_Win_complete(win)\n\nMPI_Win_complete man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_create-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_create","text":"MPI_Win_create(base, size, disp_unit, info, comm, win)\n\nMPI_Win_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_create_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_create_c","text":"MPI_Win_create_c(base, size, disp_unit, info, comm, win)\n\nMPI_Win_create_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_create_dynamic-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_create_dynamic","text":"MPI_Win_create_dynamic(info, comm, win)\n\nMPI_Win_create_dynamic man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_create_errhandler-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_create_errhandler","text":"MPI_Win_create_errhandler(win_errhandler_fn, errhandler)\n\nMPI_Win_create_errhandler man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_create_keyval-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_create_keyval","text":"MPI_Win_create_keyval(win_copy_attr_fn, win_delete_attr_fn, win_keyval, extra_state)\n\nMPI_Win_create_keyval man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_delete_attr-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_delete_attr","text":"MPI_Win_delete_attr(win, win_keyval)\n\nMPI_Win_delete_attr man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_detach-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_detach","text":"MPI_Win_detach(win, base)\n\nMPI_Win_detach man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_fence-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_fence","text":"MPI_Win_fence(assert, win)\n\nMPI_Win_fence man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_flush-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_flush","text":"MPI_Win_flush(rank, win)\n\nMPI_Win_flush man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_flush_all-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Win_flush_all","text":"MPI_Win_flush_all(win)\n\nMPI_Win_flush_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_flush_local-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_flush_local","text":"MPI_Win_flush_local(rank, win)\n\nMPI_Win_flush_local man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_flush_local_all-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Win_flush_local_all","text":"MPI_Win_flush_local_all(win)\n\nMPI_Win_flush_local_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_free-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Win_free","text":"MPI_Win_free(win)\n\nMPI_Win_free man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_free_keyval-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Win_free_keyval","text":"MPI_Win_free_keyval(win_keyval)\n\nMPI_Win_free_keyval man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_get_attr-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_get_attr","text":"MPI_Win_get_attr(win, win_keyval, attribute_val, flag)\n\nMPI_Win_get_attr man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_get_errhandler-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_get_errhandler","text":"MPI_Win_get_errhandler(win, errhandler)\n\nMPI_Win_get_errhandler man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_get_group-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_get_group","text":"MPI_Win_get_group(win, 
group)\n\nMPI_Win_get_group man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_get_info-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_get_info","text":"MPI_Win_get_info(win, info_used)\n\nMPI_Win_get_info man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_get_name-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_get_name","text":"MPI_Win_get_name(win, win_name, resultlen)\n\nMPI_Win_get_name man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_lock-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_lock","text":"MPI_Win_lock(lock_type, rank, assert, win)\n\nMPI_Win_lock man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_lock_all-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_lock_all","text":"MPI_Win_lock_all(assert, win)\n\nMPI_Win_lock_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_post-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_post","text":"MPI_Win_post(group, assert, win)\n\nMPI_Win_post man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_set_attr-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_set_attr","text":"MPI_Win_set_attr(win, win_keyval, attribute_val)\n\nMPI_Win_set_attr man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_set_errhandler-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_set_errhandler","text":"MPI_Win_set_errhandler(win, errhandler)\n\nMPI_Win_set_errhandler man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_set_info-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_set_info","text":"MPI_Win_set_info(win, info)\n\nMPI_Win_set_info man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_set_name-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_set_name","text":"MPI_Win_set_name(win, win_name)\n\nMPI_Win_set_name man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_shared_query-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_shared_query","text":"MPI_Win_shared_query(win, rank, size, disp_unit, baseptr)\n\nMPI_Win_shared_query man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_shared_query_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_shared_query_c","text":"MPI_Win_shared_query_c(win, rank, size, disp_unit, baseptr)\n\nMPI_Win_shared_query_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_start-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_start","text":"MPI_Win_start(group, assert, win)\n\nMPI_Win_start man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_sync-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Win_sync","text":"MPI_Win_sync(win)\n\nMPI_Win_sync man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_test-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_test","text":"MPI_Win_test(win, 
flag)\n\nMPI_Win_test man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_unlock-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_unlock","text":"MPI_Win_unlock(rank, win)\n\nMPI_Win_unlock man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_unlock_all-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Win_unlock_all","text":"MPI_Win_unlock_all(win)\n\nMPI_Win_unlock_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_wait-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Win_wait","text":"MPI_Win_wait(win)\n\nMPI_Win_wait man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Wtick-Tuple{}","page":"Low-level API","title":"MPI.API.MPI_Wtick","text":"MPI_Wtick()\n\nMPI_Wtick man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Wtime-Tuple{}","page":"Low-level API","title":"MPI.API.MPI_Wtime","text":"MPI_Wtime()\n\nMPI_Wtime man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/collective/#Collective-communication","page":"Collective communication","title":"Collective communication","text":"","category":"section"},{"location":"reference/collective/#Synchronization","page":"Collective communication","title":"Synchronization","text":"","category":"section"},{"location":"reference/collective/","page":"Collective communication","title":"Collective communication","text":"MPI.Barrier\nMPI.Ibarrier","category":"page"},{"location":"reference/collective/#MPI.Barrier","page":"Collective communication","title":"MPI.Barrier","text":"Barrier(comm::Comm)\n\nBlocks until comm is synchronized.\n\nIf comm is an intracommunicator, then it blocks until all members of the group have called it.\n\nIf comm is an intercommunicator, then it blocks until all members of the other group have called it.\n\nExternal links\n\nMPI_Barrier man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Ibarrier","page":"Collective communication","title":"MPI.Ibarrier","text":"Ibarrier(comm::Comm[, req::AbstractRequest = Request()])\n\nStarts a nonblocking barrier synchronization on comm, the nonblocking counterpart of Barrier, returning a request that can be waited on or tested for completion.\n\nIf comm is an intracommunicator, the request completes once all members of the group have called it.\n\nIf comm is an intercommunicator, it completes once all members of the other group have called it.\n\nExternal links\n\nMPI_Ibarrier man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#Broadcast","page":"Collective communication","title":"Broadcast","text":"","category":"section"},{"location":"reference/collective/","page":"Collective communication","title":"Collective communication","text":"MPI.Bcast!\nMPI.Bcast\nMPI.bcast","category":"page"},{"location":"reference/collective/#MPI.Bcast!","page":"Collective communication","title":"MPI.Bcast!","text":"Bcast!(buf, comm::Comm; root::Integer=0)\n\nBroadcast the buffer buf from root to all processes in comm.\n\nSee also\n\nbcast\n\nExternal links\n\nMPI_Bcast man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Bcast","page":"Collective communication","title":"MPI.Bcast","text":"Bcast(obj, root::Integer, comm::Comm)\n\nBroadcast the object obj from root to all processes in comm. Returns the object. Currently obj must be isbits, i.e. 
isbitstype(typeof(obj)) == true.\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.bcast","page":"Collective communication","title":"MPI.bcast","text":"bcast(obj, comm::Comm; root::Integer=0)\n\nBroadcast the object obj from rank root to all processes on comm. This is able to handle arbitrary data.\n\nSee also\n\nBcast!\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#Gather/Scatter","page":"Collective communication","title":"Gather/Scatter","text":"","category":"section"},{"location":"reference/collective/#Gather","page":"Collective communication","title":"Gather","text":"","category":"section"},{"location":"reference/collective/","page":"Collective communication","title":"Collective communication","text":"MPI.Gather!\nMPI.Gather\nMPI.gather\nMPI.Gatherv!\nMPI.Allgather!\nMPI.Allgather\nMPI.Allgatherv!\nMPI.Neighbor_allgather!\nMPI.Neighbor_allgatherv!","category":"page"},{"location":"reference/collective/#MPI.Gather!","page":"Collective communication","title":"MPI.Gather!","text":"Gather!(sendbuf, recvbuf, comm::Comm; root::Integer=0)\n\nEach process sends the contents of the buffer sendbuf to the root process. The root process stores elements in rank order in the buffer recvbuf.\n\nsendbuf should be a Buffer object, or any object for which Buffer_send is defined, and should be the same length on all processes.\n\nOn the root process, sendbuf can be MPI.IN_PLACE, in which case the corresponding entries in recvbuf are assumed to be already in place (this corresponds to the behaviour of MPI_IN_PLACE in MPI_Gather). For example:\n\nif root == MPI.Comm_rank(comm)\n MPI.Gather!(MPI.IN_PLACE, UBuffer(buf, count), comm; root=root)\nelse\n MPI.Gather!(buf, nothing, comm; root=root)\nend\n\nrecvbuf on the root process should be a UBuffer, or can be an AbstractArray if the length can be determined from sendbuf. On non-root processes it is ignored and can be nothing.\n\nSee also\n\nGather for the allocating operation.\nGatherv! if the number of elements varies between processes.\nAllgather! to send the result to all processes.\n\nExternal links\n\nMPI_Gather man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Gather","page":"Collective communication","title":"MPI.Gather","text":"Gather(sendbuf, comm::Comm; root=0)\n\nEach process sends the contents of the buffer sendbuf to the root process. The root allocates the output buffer and stores elements in rank order.\n\nsendbuf can be an AbstractArray or a scalar, and should be the same length on all processes.\n\nSee also\n\nGather! for the mutating operation.\nGatherv! if the number of elements varies between processes.\nAllgather!/Allgather to send the result to all processes.\n\nExternal links\n\nMPI_Gather man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.gather","page":"Collective communication","title":"MPI.gather","text":"gather(obj, comm::Comm; root::Integer=0)\n\nGather the objects obj from all ranks on comm to rank root. This is able to handle arbitrary data. On root, it returns a vector of the objects, and nothing otherwise.\n\nSee also\n\nGather!\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Gatherv!","page":"Collective communication","title":"MPI.Gatherv!","text":"Gatherv!(sendbuf, recvbuf, comm::Comm; root::Integer=0)\n\nEach process sends the contents of the buffer sendbuf to the root process. 
The root stores elements in rank order in the buffer recvbuf.\n\nsendbuf should be a Buffer object, or any object for which Buffer_send is defined, with the same length on all processes.\n\nOn the root process, sendbuf can be MPI.IN_PLACE, in which case the corresponding entries in recvbuf are assumed to be already in place. For example\n\nif root == MPI.Comm_rank(comm)\n Gatherv!(MPI.IN_PLACE, VBuffer(buf, counts), comm; root=root)\nelse\n Gatherv!(buf, nothing, comm; root=root)\nend\n\nrecvbuf on the root process should be a VBuffer, or can be an AbstractArray if the length can be determined from sendbuf. On non-root processes it is ignored and can be nothing.\n\nSee also\n\nGather! if the number of elements is the same between processes.\nAllgatherv! to send the result to all processes.\n\nExternal links\n\nMPI_Gatherv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Allgather!","page":"Collective communication","title":"MPI.Allgather!","text":"Allgather!(sendbuf, recvbuf::UBuffer, comm::Comm)\nAllgather!(sendrecvbuf::UBuffer, comm::Comm)\n\nEach process sends the contents of sendbuf to the other processes, the result of which is stored in rank order into recvbuf.\n\nsendbuf can be a Buffer object, or any object for which Buffer_send is defined, and should be the same length on all processes.\n\nrecvbuf can be a UBuffer, or can be an AbstractArray if the length can be determined from sendbuf.\n\nIf only one buffer sendrecvbuf is provided, then on each process the data to send is assumed to be in the area where it would receive its own contribution.\n\nSee also\n\nAllgather for the allocating operation\nAllgatherv! if the number of elements varies between processes.\nGather! to send only to a single root process\n\nExternal links\n\nMPI_Allgather man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Allgather","page":"Collective communication","title":"MPI.Allgather","text":"Allgather(sendbuf, comm)\n\nEach process sends the contents of sendbuf to the other processes, who allocate the output buffer and store the results in rank order.\n\nsendbuf can be an AbstractArray or a scalar, and should be the same size on all processes.\n\nSee also\n\nAllgather! for the mutating operation\nAllgatherv! if the number of elements varies between processes.\nGather! to send only to a single root process\n\nExternal links\n\nMPI_Allgather man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Allgatherv!","page":"Collective communication","title":"MPI.Allgatherv!","text":"Allgatherv!(sendbuf, recvbuf::VBuffer, comm::Comm)\nAllgatherv!(sendrecvbuf::VBuffer, comm::Comm)\n\nEach process sends the contents of sendbuf to all other processes. Each process stores the received data in the VBuffer recvbuf.\n\nsendbuf can be a Buffer object, or any object for which Buffer_send is defined.\n\nIf only one buffer sendrecvbuf is provided, then for each process, the data to be sent is taken from the interval of recvbuf where it would store its own data.\n\nSee also\n\nGatherv! 
to send the result to a single process\n\nExternal links\n\nMPI_Allgatherv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Neighbor_allgather!","page":"Collective communication","title":"MPI.Neighbor_allgather!","text":"Neighbor_allgather!(sendbuf::Buffer, recvbuf::UBuffer, comm::Comm)\n\nPerform an all-gather communication along the directed edges of the graph.\n\nSee also MPI.Allgather!.\n\nExternal links\n\nMPI_Neighbor_allgather man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Neighbor_allgatherv!","page":"Collective communication","title":"MPI.Neighbor_allgatherv!","text":"Neighbor_allgatherv!(sendbuf::Buffer, recvbuf::VBuffer, comm::Comm)\n\nPerform an all-gather communication along the directed edges of the graph with variable sized data.\n\nSee also MPI.Allgatherv!.\n\nExternal links\n\nMPI_Neighbor_allgatherv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#Scatter","page":"Collective communication","title":"Scatter","text":"","category":"section"},{"location":"reference/collective/","page":"Collective communication","title":"Collective communication","text":"MPI.Scatter!\nMPI.Scatter\nMPI.scatter\nMPI.Scatterv!","category":"page"},{"location":"reference/collective/#MPI.Scatter!","page":"Collective communication","title":"MPI.Scatter!","text":"Scatter!(sendbuf::Union{UBuffer,Nothing}, recvbuf, comm::Comm;\n root::Integer=0)\n\nSplits the buffer sendbuf in the root process into Comm_size(comm) chunks, sending the j-th chunk to the process of rank j-1 into the recvbuf buffer.\n\nsendbuf on the root process should be a UBuffer (an Array can also be passed directly if the sizes can be determined from recvbuf). On non-root processes it is ignored, and nothing can be passed instead.\n\nrecvbuf is a Buffer object, or any object for which Buffer(recvbuf) is defined. On the root process, it can also be MPI.IN_PLACE, in which case it is unmodified. For example:\n\nif root == MPI.Comm_rank(comm)\n MPI.Scatter!(UBuffer(buf, count), MPI.IN_PLACE, comm; root=root)\nelse\n MPI.Scatter!(nothing, buf, comm; root=root)\nend\n\nSee also\n\nScatterv! if the number of elements varies between processes.\n\nExternal links\n\nMPI_Scatter man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Scatter","page":"Collective communication","title":"MPI.Scatter","text":"Scatter(sendbuf, T, comm::Comm; root::Integer=0)\n\nSplits the buffer sendbuf in the root process into Comm_size(comm) chunks, sending the j-th chunk to the process of rank j-1 as an object of type T.\n\nSee also\n\nScatter!\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.scatter","page":"Collective communication","title":"MPI.scatter","text":"scatter(objs::Union{AbstractVector, Nothing}, comm::Comm; root::Integer=0)\n\nSends the j-th element of objs in the root process to rank j-1 and returns it. On root, objs is expected to be a Comm_size(comm)-element vector. 
On the other ranks, it is ignored and can be nothing.\n\nThis method can handle arbitrary data.\n\nSee also\n\nScatter!\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Scatterv!","page":"Collective communication","title":"MPI.Scatterv!","text":"Scatterv!(sendbuf, recvbuf, comm::Comm; root::Integer=0)\n\nSplits the buffer sendbuf in the root process into Comm_size(comm) chunks and sends the jth chunk to the process of rank j-1 into the recvbuf buffer.\n\nsendbuf on the root process should be a VBuffer. On non-root processes it is ignored, and nothing can be passed instead.\n\nrecvbuf is a Buffer object, or any object for which Buffer(recvbuf) is defined. On the root process, it can also be MPI.IN_PLACE, in which case it is unmodified. For example:\n\nif root == MPI.Comm_rank(comm)\n MPI.Scatterv!(VBuffer(buf, counts), MPI.IN_PLACE, comm; root=root)\nelse\n MPI.Scatterv!(nothing, buf, comm; root=root)\nend\n\nSee also\n\nScatter! if the number of elements are the same for all processes\n\nExternal links\n\nMPI_Scatterv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#All-to-all","page":"Collective communication","title":"All-to-all","text":"","category":"section"},{"location":"reference/collective/","page":"Collective communication","title":"Collective communication","text":"MPI.Alltoall!\nMPI.Alltoall\nMPI.Alltoallv!\nMPI.Neighbor_alltoall!\nMPI.Neighbor_alltoallv!","category":"page"},{"location":"reference/collective/#MPI.Alltoall!","page":"Collective communication","title":"MPI.Alltoall!","text":"Alltoall!(sendbuf::UBuffer, recvbuf::UBuffer, comm::Comm)\nAlltoall!(sendrecvbuf::UBuffer, comm::Comm)\n\nEvery process divides the UBuffer sendbuf into Comm_size(comm) chunks of equal size, sending the j-th chunk to the process of rank j-1. Every process stores the data received from rank j-1 process in the j-th chunk of the buffer recvbuf.\n\nrank send buf recv buf\n---- -------- --------\n 0 a,b,c,d,e,f Alltoall a,b,A,B,α,β\n 1 A,B,C,D,E,F ----------------> c,d,C,D,γ,ψ\n 2 α,β,γ,ψ,η,ν e,f,E,F,η,ν\n\nIf only one buffer sendrecvbuf is used, then data is overwritten.\n\nSee also\n\nAlltoall for the allocating operation\n\nExternal links\n\nMPI_Alltoall man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Alltoall","page":"Collective communication","title":"MPI.Alltoall","text":"Alltoall(sendbuf::UBuffer, comm::Comm)\n\nEvery process divides the UBuffer sendbuf into Comm_size(comm) chunks of equal size, sending the j-th chunk to the process of rank j-1. Every process allocates the output buffer and stores the data received from the process on rank j-1 in the j-th chunk.\n\nrank send buf recv buf\n---- -------- --------\n 0 a,b,c,d,e,f Alltoall a,b,A,B,α,β\n 1 A,B,C,D,E,F ----------------> c,d,C,D,γ,ψ\n 2 α,β,γ,ψ,η,ν e,f,E,F,η,ν\n\nSee also\n\nAlltoall! 
for the mutating operation\n\nExternal links\n\nMPI_Alltoall man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Alltoallv!","page":"Collective communication","title":"MPI.Alltoallv!","text":"Alltoallv!(sendbuf::VBuffer, recvbuf::VBuffer, comm::Comm)\n\nSimilar to Alltoall!, except with different size chunks per process.\n\nSee also\n\nVBuffer\n\nExternal links\n\nMPI_Alltoallv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Neighbor_alltoall!","page":"Collective communication","title":"MPI.Neighbor_alltoall!","text":"Neighbor_alltoall!(sendbuf::UBuffer, recvbuf::UBuffer, comm::Comm)\n\nPerform an all-to-all communication along the directed edges of the graph with fixed size messages.\n\nSee also MPI.Alltoall!.\n\nExternal links\n\nMPI_Neighbor_alltoall man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Neighbor_alltoallv!","page":"Collective communication","title":"MPI.Neighbor_alltoallv!","text":"Neighbor_alltoallv!(sendbuf::VBuffer, recvbuf::VBuffer, graph_comm::Comm)\n\nPerform an all-to-all communication along the directed edges of the graph with variable size messages.\n\nSee also MPI.Alltoallv!.\n\nExternal links\n\nMPI_Neighbor_alltoallv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#Reduce/Scan","page":"Collective communication","title":"Reduce/Scan","text":"","category":"section"},{"location":"reference/collective/","page":"Collective communication","title":"Collective communication","text":"MPI.Reduce!\nMPI.Reduce\nMPI.Allreduce!\nMPI.Allreduce\nMPI.Scan!\nMPI.Scan\nMPI.Exscan!\nMPI.Exscan","category":"page"},{"location":"reference/collective/#MPI.Reduce!","page":"Collective communication","title":"MPI.Reduce!","text":"Reduce!(sendbuf, recvbuf, op, comm::Comm; root::Integer=0)\nReduce!(sendrecvbuf, op, comm::Comm; root::Integer=0)\n\nPerforms elementwise reduction using the operator op on the buffer sendbuf and stores the result in recvbuf on the process of rank root.\n\nOn non-root processes recvbuf is ignored, and can be nothing.\n\nTo perform the reduction in place, provide a single buffer sendrecvbuf.\n\nSee also\n\nReduce to handle allocation of the output buffer.\nAllreduce!/Allreduce to send reduction to all ranks.\nOp for details on reduction operators.\n\nExternal links\n\nMPI_Reduce man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Reduce","page":"Collective communication","title":"MPI.Reduce","text":"recvbuf = Reduce(sendbuf, op, comm::Comm; root::Integer=0)\n\nPerforms elementwise reduction using the operator op on the buffer sendbuf, returning the result recvbuf on the process of rank root, and nothing on non-root processes.\n\nsendbuf can also be a scalar, in which case recvbuf will be a value of the same type.\n\nSee also\n\nReduce! for mutating and in-place operations\nAllreduce!/Allreduce to send reduction to all ranks.\nOp for details on reduction operators.\n\nExternal links\n\nMPI_Reduce man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Allreduce!","page":"Collective communication","title":"MPI.Allreduce!","text":"Allreduce!(sendbuf, recvbuf, op, comm::Comm)\nAllreduce!(sendrecvbuf, op, comm::Comm)\n\nPerforms elementwise reduction using the operator op on the buffer sendbuf, storing the result in the recvbuf of all processes in the group.\n\nAllreduce! 
is equivalent to a Reduce! operation followed by a Bcast!, but can lead to better performance.\n\nIf only one sendrecvbuf buffer is provided, then the operation is performed in-place.\n\nSee also\n\nAllreduce, to handle allocation of the output buffer.\nReduce!/Reduce to send reduction to a single rank.\nOp for details on reduction operators.\n\nExternal links\n\nMPI_Allreduce man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Allreduce","page":"Collective communication","title":"MPI.Allreduce","text":"recvbuf = Allreduce(sendbuf, op, comm)\n\nPerforms elementwise reduction using the operator op on the buffer sendbuf, returning the result in the recvbuf of all processes in the group.\n\nsendbuf can also be a scalar, in which case recvbuf will be a value of the same type.\n\nSee also\n\nAllreduce! for mutating or in-place operations.\nReduce!/Reduce to send reduction to a single rank.\nOp for details on reduction operators.\n\nExternal links\n\nMPI_Allreduce man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Scan!","page":"Collective communication","title":"MPI.Scan!","text":"Scan!(sendbuf, recvbuf, op, comm::Comm)\nScan!(sendrecvbuf, op, comm::Comm)\n\nInclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i.\n\nIf only a single buffer sendrecvbuf is provided, then operations will be performed in-place.\n\nSee also\n\nScan to handle allocation of the output buffer\nExscan!/Exscan for exclusive scan\nOp for details on reduction operators.\n\nExternal links\n\nMPI_Scan man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Scan","page":"Collective communication","title":"MPI.Scan","text":"recvbuf = Scan(sendbuf, op, comm::Comm)\n\nInclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i.\n\nsendbuf can also be a scalar, in which case recvbuf will also be a scalar of the same type.\n\nSee also\n\nScan! for mutating or in-place operations\nExscan!/Exscan for exclusive scan\nOp for details on reduction operators.\n\nExternal links\n\nMPI_Scan man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Exscan!","page":"Collective communication","title":"MPI.Exscan!","text":"Exscan!(sendbuf, recvbuf, op, comm::Comm)\nExscan!(sendrecvbuf, op, comm::Comm)\n\nExclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i-1. The recvbuf on rank 0 is ignored, and the recvbuf on rank 1 will contain the contents of sendbuf on rank 0.\n\nIf only a single sendrecvbuf is provided, then operations are performed in-place, and buf on rank 0 will remain unchanged.\n\nSee also\n\nExscan to handle allocation of the output buffer\nScan!/Scan for inclusive scan\nOp for details on reduction operators.\n\nExternal links\n\nMPI_Exscan man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Exscan","page":"Collective communication","title":"MPI.Exscan","text":"recvbuf = Exscan(sendbuf, op, comm::Comm)\n\nExclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i-1. 
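The reduction and scan variants documented above differ mainly in where the result ends up: on the root only, on every rank, or as a per-rank prefix. A minimal sketch contrasting them, assuming the script is launched with several ranks via mpiexecjl; the file name is only illustrative and not part of MPI.jl:

```julia
# reduce_scan_sketch.jl -- hypothetical illustration, not part of MPI.jl
using MPI
MPI.Init()

comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

total      = MPI.Reduce(rank, +, comm; root=0)  # sum of all ranks; `nothing` on non-root ranks
everywhere = MPI.Allreduce(rank, +, comm)       # same sum, returned on every rank
inclusive  = MPI.Scan(rank, +, comm)            # sum over ranks 0:rank
exclusive  = MPI.Exscan(rank, +, comm)          # sum over ranks 0:rank-1 (undefined on rank 0)

println("rank $rank: total=$total everywhere=$everywhere inclusive=$inclusive exclusive=$exclusive")
MPI.Finalize()
```

Run with, for example, `mpiexecjl -n 4 julia reduce_scan_sketch.jl`.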
The recvbuf on rank 0 is undefined, and the recvbuf on rank 1 will contain the contents of sendbuf on rank 0.\n\nSee also\n\nExscan! for mutating and in-place operations\nScan!/Scan for inclusive scan\nOp for details on reduction operators.\n\nExternal links\n\nMPI_Scan man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"#MPI.jl","page":"MPI.jl","title":"MPI.jl","text":"","category":"section"},{"location":"","page":"MPI.jl","title":"MPI.jl","text":"This is a basic Julia wrapper for the portable message passing system Message Passing Interface (MPI). Inspiration is taken from mpi4py, although we generally follow the C and not the C++ MPI API. (The C++ MPI API is deprecated.)","category":"page"},{"location":"","page":"MPI.jl","title":"MPI.jl","text":"If you use MPI.jl in your work, please cite the following paper:","category":"page"},{"location":"","page":"MPI.jl","title":"MPI.jl","text":"Simon Byrne, Lucas C. Wilcox, and Valentin Churavy (2021) \"MPI.jl: Julia bindings for the Message Passing Interface\". JuliaCon Proceedings, 1(1), 68, doi: 10.21105/jcon.00068","category":"page"}] +[{"location":"examples/07-rma_active/","page":"Active RMA","title":"Active RMA","text":"EditURL = \"https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/07-rma_active.jl\"","category":"page"},{"location":"examples/07-rma_active/#Active-RMA","page":"Active RMA","title":"Active RMA","text":"","category":"section"},{"location":"examples/07-rma_active/","page":"Active RMA","title":"Active RMA","text":"# examples/07-rma_active.jl\n# This example demonstrates one-sided communication,\n# specifically activate Remote Memory Access (RMA)\n\nusing MPI\n\nMPI.Init()\nconst world_sz = MPI.Comm_size(MPI.COMM_WORLD)\nconst rank = MPI.Comm_rank(MPI.COMM_WORLD)\n\n# allocate memory\nall_ranks = fill(-1, world_sz)\n# create RMA window on all ranks\nwin = MPI.Win_create(all_ranks, MPI.COMM_WORLD)\n\n#### first, let's MPI.Put on all ranks\n\n# start the communication epoch\nMPI.Win_fence(0, win)\n# each rank writes to exposed windows of rank 0\n# Signature: obj, target_rank, target_displacement, window\nMPI.Put(rank, 0, rank, win)\n# finish the communication epoch\nMPI.Win_fence(0, win)\n# print window content on all ranks\nfor j in 0:world_sz-1\n if rank == j\n println(\"After Put, Rank $rank:\")\n @show all_ranks\n end\n MPI.Barrier(MPI.COMM_WORLD)\nend\nrank == 0 && println()\n\n#### now, let's MPI.Get on all ranks\n\n# start the communication epoch\nMPI.Win_fence(0, win)\n# each rank reads from exposed windows of rank 0\nMPI.Get(all_ranks, 0, win)\n# finish the communication epoch\nMPI.Win_fence(0, win)\n# print window content on all ranks\nfor j in 0:world_sz-1\n if rank == j\n println(\"After Get, Rank $rank:\")\n @show all_ranks\n end\n MPI.Barrier(MPI.COMM_WORLD)\nend\n\n# free window\nMPI.free(win)","category":"page"},{"location":"examples/07-rma_active/","page":"Active RMA","title":"Active RMA","text":"> mpiexecjl -n 4 julia examples/07-rma_active.jl\nAfter Put, Rank 0:\nall_ranks = [0, 1, 2, 3]\nAfter Put, Rank 1:\nall_ranks = [-1, -1, -1, -1]\nAfter Put, Rank 2:\nall_ranks = [-1, -1, -1, -1]\nAfter Put, Rank 3:\nall_ranks = [-1, -1, -1, -1]\n\nAfter Get, Rank 0:\nall_ranks = [0, 1, 2, 3]\nAfter Get, Rank 1:\nall_ranks = [0, 1, 2, 3]\nAfter Get, Rank 2:\nall_ranks = [0, 1, 2, 3]\nAfter Get, Rank 3:\nall_ranks = [0, 1, 2, 3]","category":"page"},{"location":"examples/08-rma_passive/","page":"Passive RMA","title":"Passive RMA","text":"EditURL = 
\"https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/08-rma_passive.jl\"","category":"page"},{"location":"examples/08-rma_passive/#Passive-RMA","page":"Passive RMA","title":"Passive RMA","text":"","category":"section"},{"location":"examples/08-rma_passive/","page":"Passive RMA","title":"Passive RMA","text":"# examples/08-rma_passive.jl\n# This example demonstrates one-sided communication,\n# specifically passive Remote Memory Access (RMA)\n\nusing MPI\n\nMPI.Init()\nconst world_sz = MPI.Comm_size(MPI.COMM_WORLD)\nconst rank = MPI.Comm_rank(MPI.COMM_WORLD)\n\n# allocate memory\nall_ranks = fill(-1, world_sz)\n# create RMA window on all ranks\nwin = MPI.Win_create(all_ranks, MPI.COMM_WORLD)\n\n# let each rank write its rank number into window\nif rank != 0\n # lock window (MPI.LOCK_SHARED works as well)\n MPI.Win_lock(MPI.LOCK_EXCLUSIVE, 0, 0, win)\n # each rank writes to exposed windows of rank 0\n # Signature: obj, target_rank, target_displacement, window\n MPI.Put(rank, 0, rank, win)\n # finish the communication epoch\n MPI.Win_unlock(0, win)\nelse\n all_ranks[1] = 0\nend\n\n# wait with printing\nMPI.Win_fence(0, win)\n\n# print window content on all ranks\nif rank == 0\n println(\"After Put with lock / unlock, window content on rank 0:\")\n @show all_ranks\nend\n\n# free window\nMPI.free(win)","category":"page"},{"location":"examples/08-rma_passive/","page":"Passive RMA","title":"Passive RMA","text":"> mpiexecjl -n 4 julia examples/08-rma_passive.jl\nAfter Put with lock / unlock, window content on rank 0:\nall_ranks = [0, 1, 2, 3]","category":"page"},{"location":"examples/04-sendrecv/","page":"Send/receive","title":"Send/receive","text":"EditURL = \"https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/04-sendrecv.jl\"","category":"page"},{"location":"examples/04-sendrecv/#Send/receive","page":"Send/receive","title":"Send/receive","text":"","category":"section"},{"location":"examples/04-sendrecv/","page":"Send/receive","title":"Send/receive","text":"# examples/04-sendrecv.jl\nusing MPI\nMPI.Init()\n\ncomm = MPI.COMM_WORLD\nrank = MPI.Comm_rank(comm)\nsize = MPI.Comm_size(comm)\n\ndst = mod(rank+1, size)\nsrc = mod(rank-1, size)\n\nN = 4\n\nsend_mesg = Array{Float64}(undef, N)\nrecv_mesg = Array{Float64}(undef, N)\n\nfill!(send_mesg, Float64(rank))\n\nrreq = MPI.Irecv!(recv_mesg, comm; source=src, tag=src+32)\n\nprint(\"$rank: Sending $rank -> $dst = $send_mesg\\n\")\nsreq = MPI.Isend(send_mesg, comm; dest=dst, tag=rank+32)\n\nstats = MPI.Waitall([rreq, sreq])\n\nprint(\"$rank: Received $src -> $rank = $recv_mesg\\n\")\n\nMPI.Barrier(comm)","category":"page"},{"location":"examples/04-sendrecv/","page":"Send/receive","title":"Send/receive","text":"> mpiexecjl -n 4 julia examples/04-sendrecv.jl\n0: Sending 0 -> 1 = [0.0, 0.0, 0.0, 0.0]\n1: Sending 1 -> 2 = [1.0, 1.0, 1.0, 1.0]\n2: Sending 2 -> 3 = [2.0, 2.0, 2.0, 2.0]\n3: Sending 3 -> 0 = [3.0, 3.0, 3.0, 3.0]\n0: Received 3 -> 0 = [3.0, 3.0, 3.0, 3.0]\n1: Received 0 -> 1 = [0.0, 0.0, 0.0, 0.0]\n2: Received 1 -> 2 = [1.0, 1.0, 1.0, 1.0]\n3: Received 2 -> 3 = [2.0, 2.0, 2.0, 2.0]","category":"page"},{"location":"examples/09-graph_communication/","page":"Graph Communication","title":"Graph Communication","text":"EditURL = \"https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/09-graph_communication.jl\"","category":"page"},{"location":"examples/09-graph_communication/#Graph-Communication","page":"Graph Communication","title":"Graph 
Communication","text":"","category":"section"},{"location":"examples/09-graph_communication/","page":"Graph Communication","title":"Graph Communication","text":"# examples/09-graph_communication.jl\nusing Test\nusing MPI\n\nMPI.Init()\n\ncomm = MPI.COMM_WORLD\nsize = MPI.Comm_size(comm)\nrank = MPI.Comm_rank(comm)\n\n#\n# Setup the following communication graph\n#\n# +-----+\n# | |\n# v v\n# 0<-+ 3\n# ^ | ^\n# | | |\n# v | v\n# 1 +--2\n# ^ |\n# | |\n# +-----+\n#\n#\n\nif rank == 0\n dest = Cint[1,3]\n degree = Cint[length(dest)]\nelseif rank == 1\n dest = Cint[0]\n degree = Cint[length(dest)]\nelseif rank == 2\n dest = Cint[3,0,1]\n degree = Cint[length(dest)]\nelseif rank == 3\n dest = Cint[0,2,1]\n degree = Cint[length(dest)]\nend\n\nsource = Cint[rank]\ngraph_comm = MPI.Dist_graph_create(comm, source, degree, dest)\n\n# Query number of ranks that point to this rank, and number of ranks this rank point to\nindegree, outdegree, _ = MPI.Dist_graph_neighbors_count(graph_comm)\n\n# Query which ranks that point to this rank, and which ranks this rank point to\ninranks = Vector{Cint}(undef, indegree)\noutranks = Vector{Cint}(undef, outdegree)\nMPI.Dist_graph_neighbors!(graph_comm, inranks, outranks)\n\n#\n# Now send the rank across the edges.\n#\n# Version 1: use allgather primitive\n#\n\nsend = Cint[rank]\nrecv = Vector{Cint}(undef, indegree)\n\nMPI.Neighbor_allgather!(send, recv, graph_comm);\n\nprint(\"rank = $(rank): $(recv)\\n\")\n\n#\n# Version 2: use alltoall primitive\n#\n\nsend = fill(Cint(rank), outdegree)\nrecv = Vector{Cint}(undef, indegree)\n\nMPI.Neighbor_alltoall!(UBuffer(send,1), UBuffer(recv,1), graph_comm);\n\nprint(\"rank = $(rank): $(recv)\\n\")\n\n#\n# Now send the this rank \"destination rank\"+1 times across the edges.\n# Rank i receives i+1 values from each adjacent process\n#\n\nsend_count = outranks .+ Cint(1)\nsend = fill(Cint(rank), sum(send_count))\nrecv_count = fill(Cint(rank + 1), length(inranks))\nrecv = Vector{Cint}(undef, sum(recv_count))\n\nMPI.Neighbor_alltoallv!(VBuffer(send,send_count), VBuffer(recv,recv_count), graph_comm);\nprint(\"rank = $(rank): $(recv)\\n\")\n\nMPI.Finalize()","category":"page"},{"location":"examples/09-graph_communication/","page":"Graph Communication","title":"Graph Communication","text":"> mpiexecjl -n 4 julia examples/09-graph_communication.jl\nrank = 0: Int32[1, 2, 3]\nrank = 1: Int32[0, 2, 3]\nrank = 2: Int32[3]\nrank = 3: Int32[0, 2]\nrank = 0: Int32[1, 2, 3]\nrank = 1: Int32[0, 2, 3]\nrank = 2: Int32[3]\nrank = 3: Int32[0, 2]\nrank = 0: Int32[1, 2, 3]\nrank = 1: Int32[0, 0, 2, 2, 3, 3]\nrank = 2: Int32[3, 3, 3]\nrank = 3: Int32[0, 0, 0, 0, 2, 2, 2, 2]","category":"page"},{"location":"knownissues/#Known-issues","page":"Known issues","title":"Known issues","text":"","category":"section"},{"location":"knownissues/#Julia-module-precompilation","page":"Known issues","title":"Julia module precompilation","text":"","category":"section"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"If multiple MPI ranks trigger Julia's module precompilation, then a race condition can result in an error such as:","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"ERROR: LoadError: IOError: mkdir: file already exists (EEXIST)\nStacktrace:\n [1] uv_error at ./libuv.jl:97 [inlined]\n [2] mkdir(::String; mode::UInt16) at ./file.jl:177\n [3] mkpath(::String; mode::UInt16) at ./file.jl:227\n [4] mkpath at ./file.jl:222 [inlined]\n [5] compilecache_path(::Base.PkgId) at 
./loading.jl:1210\n [6] compilecache(::Base.PkgId, ::String) at ./loading.jl:1240\n [7] _require(::Base.PkgId) at ./loading.jl:1029\n [8] require(::Base.PkgId) at ./loading.jl:927\n [9] require(::Module, ::Symbol) at ./loading.jl:922\n [10] include(::Module, ::String) at ./Base.jl:377\n [11] exec_options(::Base.JLOptions) at ./client.jl:288\n [12] _start() at ./client.jl:484","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"See julia issue #30174 for more discussion of this problem. There are similar issues with Pkg operations, see Pkg issue #1219.","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"This can be worked around be either:","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"Triggering precompilation before launching MPI processes, for example:\njulia --project -e 'using Pkg; pkg\"instantiate\"'\njulia --project -e 'using Pkg; pkg\"precompile\"'\nmpiexec julia --project script.jl\nLaunching julia with the --compiled-modules=no option. This can result in much longer package load times.","category":"page"},{"location":"knownissues/#Open-MPI","page":"Known issues","title":"Open MPI","text":"","category":"section"},{"location":"knownissues/#Segmentation-fault-when-loading-the-library","page":"Known issues","title":"Segmentation fault when loading the library","text":"","category":"section"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"When attempting to use a system-provided Open MPI implementation, you may encounter a segmentation fault upon loading the library, or whenever the value of an environment variable is requested. This can be fixed by setting the environment variable ZES_ENABLE_SYSMAN=1. See Open MPI issue #10142 for more details.","category":"page"},{"location":"knownissues/#Segmentation-fault-in-HCOLL","page":"Known issues","title":"Segmentation fault in HCOLL","text":"","category":"section"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"If Open MPI was built with support for HCOLL, you may encounter a segmentation fault in certain operations involving custom datatypes. The stacktrace may look something like","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"hcoll_create_mpi_type at /opt/mellanox/hcoll/lib/libhcoll.so.1 (unknown line)\nompi_dtype_2_hcoll_dtype at /lustre/software/openmpi/llvm14/4.1.4/lib/openmpi/mca_coll_hcoll.so (unknown line)\nmca_coll_hcoll_allgather at /lustre/software/openmpi/llvm14/4.1.4/lib/openmpi/mca_coll_hcoll.so (unknown line)\nMPI_Allgather at /lustre/software/openmpi/llvm14/4.1.4/lib/libmpi.so (unknown line)","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"This is due to a bug in HCOLL, see Open MPI issue #11201 for more details. 
You can disable HCOLL by exporting the environment variable","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"export OMPI_MCA_coll_hcoll_enable=\"0\"","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"before starting the MPI process.","category":"page"},{"location":"knownissues/#MPICH","page":"Known issues","title":"MPICH","text":"","category":"section"},{"location":"knownissues/#gethostbyname-failure-in-internal_Init_thread","page":"Known issues","title":"gethostbyname failure in internal_Init_thread","text":"","category":"section"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"When your internal network stack/route is not correctly configured for the local loopback device, MPICH may fail to initialize with an error message which looks like the following:","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"Fatal error in internal_Init_thread: Other MPI error, error stack:\ninternal_Init_thread(67)...........: MPI_Init_thread(argc=0x0, argv=0x0, required=2, provided=0x16db94160) failed\nMPII_Init_thread(234)..............:\nMPID_Init(67)......................:\ninit_world(171)....................: channel initialization failed\nMPIDI_CH3_Init(84).................:\nMPID_nem_init(314).................:\nMPID_nem_tcp_init(175).............:\nMPID_nem_tcp_get_business_card(397):\nGetSockInterfaceAddr(370)..........: gethostbyname failed, bogon (errno 0)","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"A workaround is provided in the documentation of the MOOSE framework and we report it here for reference:","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"obtain your hostname\n$ hostname\nmycoolname\nfor both Linux and macOS systems, in your /etc/hosts file map the hostname you obtained at the previous step to the localhost address 127.0.0.1, if not already present. Note: this step requires root access, to modify the system configuration file /etc/hosts, if you don't have it talk to your system administrator. For example, open the file /etc/hosts with sudo access with your favorite text editor (e.g. sudo vi /etc/hosts, or sudo emacs /etc/hosts) and add the line\n127.0.0.1 mycoolname\nto the end of the file\nas an alternative to the previous step, only for macOS systems, run the command\nsudo scutil --set HostName mycoolname\nHowever it has been reported that this method may not always be effective.","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"For further information see","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"MPI.jl issue #824\nMOOSE discussion #23610","category":"page"},{"location":"knownissues/#UCX","page":"Known issues","title":"UCX","text":"","category":"section"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"UCX is a communication framework used by several MPI implementations.","category":"page"},{"location":"knownissues/#Memory-cache","page":"Known issues","title":"Memory cache","text":"","category":"section"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"When used with CUDA, UCX intercepts cudaMalloc so it can determine whether the pointer passed to MPI is on the host (main memory) or the device (GPU). 
Unfortunately, there are several known issues with how this works with Julia:","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"UCX issue #5061\nUCX issue #4001 (fixed in UCX v1.7.0)","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"By default, MPI.jl disables this by setting","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"ENV[\"UCX_MEMTYPE_CACHE\"] = \"no\"","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"at __init__ which may result in reduced performance, especially for smaller messages.","category":"page"},{"location":"knownissues/#Multi-threading-and-signal-handling","page":"Known issues","title":"Multi-threading and signal handling","text":"","category":"section"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"When using Julia multi-threading, the Julia garbage collector internally uses SIGSEGV to synchronize threads.","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"By default, UCX will error if this signal is raised (#337), resulting in a message such as:","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"Caught signal 11 (Segmentation fault: invalid permissions for mapped object at address 0xXXXXXXXX)","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"This signal interception can be controlled by setting the environment variable UCX_ERROR_SIGNALS: if not already defined, MPI.jl will set it as:","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"ENV[\"UCX_ERROR_SIGNALS\"] = \"SIGILL,SIGBUS,SIGFPE\"","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"at __init__. If set externally, it should be modified to exclude SIGSEGV from the list. Note that in some cases even if UCX_ERROR_SIGNALS is not set explicitly, UCX might still take SIGSEGV as an error signal. In this case, it might be needed to explicitly set UCX_ERROR_SIGNALS with","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"export UCX_ERROR_SIGNALS=\"SIGILL,SIGBUS,SIGFPE\"","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"before calling mpiexec.","category":"page"},{"location":"knownissues/#CUDA-aware-MPI","page":"Known issues","title":"CUDA-aware MPI","text":"","category":"section"},{"location":"knownissues/#Memory-pool","page":"Known issues","title":"Memory pool","text":"","category":"section"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"Using CUDA-aware MPI on multi-GPU nodes with recent CUDA.jl may trigger (see here)","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"The call to cuIpcGetMemHandle failed. This means the GPU RDMA protocol\ncannot be used.\n cuIpcGetMemHandle return value: 1","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"in the MPI layer, or fail on a segmentation fault (see here) with","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"[1642930332.032032] [gcn19:4087661:0] gdr_copy_md.c:122 UCX ERROR gdr_pin_buffer failed. 
length :65536 ret:22","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"This is due to the MPI implementation using legacy cuIpc* APIs, which are incompatible with stream-ordered allocator, now default in CUDA.jl, see UCX issue #7110.","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"To circumvent this, one has to ensure the CUDA memory pool to be set to none:","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"export JULIA_CUDA_MEMORY_POOL=none","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"More about CUDA.jl memory environment-variables.","category":"page"},{"location":"knownissues/#Hints-to-ensure-CUDA-aware-MPI-to-be-functional","page":"Known issues","title":"Hints to ensure CUDA-aware MPI to be functional","text":"","category":"section"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"Make sure to:","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"Have MPI and CUDA on path (or module loaded) that were used to build the CUDA-aware MPI\nSet the following environment variables: export JULIA_CUDA_MEMORY_POOL=none export JULIA_CUDA_USE_BINARYBUILDER=false\nAdd CUDA, MPIPreferences, and MPI packages in Julia. Switch to using the system binary\njulia --project -e 'using Pkg; Pkg.add([\"CUDA\", \"MPIPreferences\", \"MPI\"]); using MPIPreferences; MPIPreferences.use_system_binary()'\nThen in Julia, upon loading MPI and CUDA modules, you can check\nCUDA version: CUDA.versioninfo()\nIf MPI has CUDA: MPI.has_cuda()\nIf you are using correct MPI library: MPI.libmpi","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"After that, it may be preferred to run the Julia MPI script (as suggested here) launching it from a shell script (as suggested here).","category":"page"},{"location":"knownissues/#ROCm-aware-MPI","page":"Known issues","title":"ROCm-aware MPI","text":"","category":"section"},{"location":"knownissues/#Hints-to-ensure-ROCm-aware-MPI-to-be-functional","page":"Known issues","title":"Hints to ensure ROCm-aware MPI to be functional","text":"","category":"section"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"Make sure to:","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"Have MPI and ROCm on path (or module loaded) that were used to build the ROCm-aware MPI\nAdd AMDGPU, MPIPreferences, and MPI packages in Julia:\njulia --project -e 'using Pkg; Pkg.add([\"AMDGPU\", \"MPIPreferences\", \"MPI\"]); using MPIPreferences; MPIPreferences.use_system_binary()'\nThen in Julia, upon loading MPI and CUDA modules, you can check\nAMDGPU version: AMDGPU.versioninfo()\nIf MPI has ROCm: MPI.has_rocm()\nIf you are using correct MPI implementation: MPI.identify_implementation()","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"After that, this script can be used to verify if ROCm-aware MPI is functional (modified after the CUDA-aware version from here). 
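As a rough sketch of what such a verification script might contain (assuming AMDGPU.jl and a system MPI library built with ROCm support are already configured; the file name and buffer size are arbitrary):

```julia
# rocm_check.jl -- hypothetical sketch of a ROCm-aware MPI smoke test
using MPI, AMDGPU
MPI.Init()

comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

rank == 0 && @show MPI.has_rocm()        # should be true for a ROCm-aware build

# All-reduce a small device buffer; this only succeeds if the MPI library
# can handle ROCm device pointers directly.
x = ROCArray(fill(Float64(rank), 4))
MPI.Allreduce!(x, +, comm)
rank == 0 && @show Array(x)

MPI.Finalize()
```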
It may be preferred to run the Julia ROCm-aware MPI script launching it from a shell script (as suggested here).","category":"page"},{"location":"knownissues/#Custom-reduction-operators","page":"Known issues","title":"Custom reduction operators","text":"","category":"section"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"It is not possible to use custom reduction operators with 32-bit Microsoft MPI on Windows and on ARM CPUs with any operating system. These issues are due to due how custom operators are currently implemented in MPI.jl, that is by using closure cfunctions. However they have two limitations:","category":"page"},{"location":"knownissues/","page":"Known issues","title":"Known issues","text":"Julia's C-compatible function pointers cannot be used where the stdcall calling convention is expected, which is the case for 32-bit Microsoft MPI,\nclosure cfunctions in Julia are based on LLVM trampolines, which are not supported on ARM architecture.","category":"page"},{"location":"reference/onesided/#One-sided-communication","page":"One-sided communication","title":"One-sided communication","text":"","category":"section"},{"location":"reference/onesided/","page":"One-sided communication","title":"One-sided communication","text":"MPI.Win_create\nMPI.Win_create_dynamic\nMPI.Win_allocate_shared\nMPI.Win_shared_query\nMPI.Win_flush\nMPI.Win_lock\nMPI.Win_unlock\nMPI.Get!\nMPI.Put!\nMPI.Accumulate!\nMPI.Get_accumulate!","category":"page"},{"location":"reference/onesided/#MPI.Win_create","page":"One-sided communication","title":"MPI.Win_create","text":"MPI.Win_create(base[, size::Integer, disp_unit::Integer], comm::Comm; infokws...)\n\nCreate a window over the array base, returning a Win object used by these processes to perform RMA operations. 
This is a collective call over comm.\n\nsize is the size of the window in bytes (default = sizeof(base))\ndisp_unit is the size of address scaling in bytes (default = sizeof(eltype(base)))\ninfokws are info keys providing optimization hints to the runtime.\n\nMPI.free should be called on the Win object once operations have been completed.\n\n\n\n\n\n","category":"function"},{"location":"reference/onesided/#MPI.Win_create_dynamic","page":"One-sided communication","title":"MPI.Win_create_dynamic","text":"MPI.Win_create_dynamic(comm::Comm; infokws...)\n\nCreate a dynamic window returning a Win object used by these processes to perform RMA operations\n\nThis is a collective call over comm.\n\ninfokws are info keys providing optimization hints.\n\nMPI.free should be called on the Win object once operations have been completed.\n\n\n\n\n\n","category":"function"},{"location":"reference/onesided/#MPI.Win_allocate_shared","page":"One-sided communication","title":"MPI.Win_allocate_shared","text":"win, array = MPI.Win_allocate_shared(Array{T}, dims, comm::Comm; infokws...)\n\nCreate and allocate a shared memory window for objects of type T of dimension dims (either an integer or tuple of integers), returning a Win and the Array{T} attached to the local process.\n\nThis is a collective call over comm, but dims can differ for each call (and can be zero).\n\nUse MPI.Win_shared_query to obtain the Array attached to a different process in the same shared memory space.\n\ninfokws are info keys providing optimization hints.\n\nMPI.free should be called on the Win object once operations have been completed.\n\n\n\n\n\n","category":"function"},{"location":"reference/onesided/#MPI.Win_shared_query","page":"One-sided communication","title":"MPI.Win_shared_query","text":"array = Win_shared_query(Array{T}, [dims,] win; rank)\n\nObtain the shared memory allocated by Win_allocate_shared of the process rank in win. Returns an Array{T} of size dims (being a Vector{T} if no dims argument is provided).\n\n\n\n\n\n","category":"function"},{"location":"reference/onesided/#MPI.Win_flush","page":"One-sided communication","title":"MPI.Win_flush","text":"Win_flush(win::Win; rank)\n\nCompletes all outstanding RMA operations initiated by the calling process to the target rank on the specified window.\n\nExternal links\n\nMPI_Win_flush man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/onesided/#MPI.Win_lock","page":"One-sided communication","title":"MPI.Win_lock","text":"Win_lock(win::Win; rank::Integer, type=:exclusive/:shared, nocheck=false)\n\nStarts an RMA access epoch. The window at the process with rank rank can be accessed by RMA operations on win during that epoch.\n\nMultiple RMA access epochs (with calls to MPI.Win_lock) can occur simultaneously; however, each access epoch must target a different process.\n\nAccesses that are protected by an exclusive lock (type=:exclusive) will not be concurrent at the window site with other accesses to the same window that are lock protected. Accesses that are protected by a shared lock (type=:shared) will not be concurrent at the window site with accesses protected by an exclusive lock to the same window.\n\nIf nocheck=true, no other process holds, or will attempt to acquire, a conflicting lock, while the caller holds the window lock. 
This is useful when mutual exclusion is achieved by other means, but the coherence operations that may be attached to the lock and unlock calls are still required.\n\nExternal links\n\nMPI_Win_lock man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/onesided/#MPI.Win_unlock","page":"One-sided communication","title":"MPI.Win_unlock","text":"Win_unlock(win::Win; rank::Integer)\n\nCompletes an RMA access epoch started by a call to Win_lock.\n\nExternal links\n\nMPI_Win_unlock man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/onesided/#MPI.Get!","page":"One-sided communication","title":"MPI.Get!","text":"Get!(origin, win::Win; rank::Integer, disp::Integer=0)\n\nCopies data from the memory window win on the remote rank rank, with displacement disp, into origin using remote memory access. origin can be a Buffer, or any object for which Buffer(origin) is defined.\n\nExternal links\n\nMPI_Get man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/onesided/#MPI.Put!","page":"One-sided communication","title":"MPI.Put!","text":"Put!(origin, win::Win; rank::Integer, disp::Integer=0)\n\nCopies data from origin into memory window win on remote rank rank at displacement disp using remote memory access. origin can be a Buffer, or any object for which Buffer_send(origin) is defined.\n\nExternal links\n\nMPI_Put man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/onesided/#MPI.Accumulate!","page":"One-sided communication","title":"MPI.Accumulate!","text":"Accumulate!(origin, op, win::Win; rank::Integer, disp::Integer=0)\n\nCombine the content of the origin buffer into the target buffer (specified by win and displacement target_disp) with reduction operator op on the remote rank target_rank using remote memory access.\n\norigin can be a Buffer, or any object for which Buffer_send(origin) is defined. op can be any predefined Op (custom operators are not supported).\n\nExternal links\n\nMPI_Accumulate man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/onesided/#MPI.Get_accumulate!","page":"One-sided communication","title":"MPI.Get_accumulate!","text":"Get_accumulate!(origin, result, target_rank::Integer, target_disp::Integer, op::Op, win::Win)\n\nCombine the content of the origin buffer into the target buffer (specified by win and displacement target_disp) with reduction operator op on the remote rank target_rank using remote memory access. Get_accumulate also returns the content of the target buffer before accumulation into the result buffer.\n\norigin can be a Buffer, or any object for which Buffer_send(origin) is defined, result can be a Buffer, or any object for which Buffer(result) is defined. op can be any predefined Op (custom operators are not supported).\n\nExternal links\n\nMPI_Get_accumulate man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/buffers/#Buffers","page":"Buffers","title":"Buffers","text":"","category":"section"},{"location":"reference/buffers/","page":"Buffers","title":"Buffers","text":"Buffers are used for sending and receiving data. 
MPI.jl provides the following buffer types:","category":"page"},{"location":"reference/buffers/","page":"Buffers","title":"Buffers","text":"MPI.IN_PLACE\nMPI.Buffer\nMPI.Buffer_send\nMPI.UBuffer\nMPI.VBuffer\nMPI.RBuffer\nMPI.MPIPtr","category":"page"},{"location":"reference/buffers/#MPI.IN_PLACE","page":"Buffers","title":"MPI.IN_PLACE","text":"MPI.IN_PLACE\n\nA sentinel value that can be passed as a buffer argument for certain collective operations to use the same buffer for send and receive operations.\n\nScatter! and Scatterv!: can be used as the recvbuf argument on the root process.\nGather! and Gatherv!: can be used as the sendbuf argument on the root process.\nAllgather!, Allgatherv!, Alltoall! and Alltoallv!: can be used as the sendbuf argument on all processes.\nReduce! (root only), Allreduce!, Scan! and Exscan!: can be used as sendbuf argument.\n\n\n\n\n\n","category":"constant"},{"location":"reference/buffers/#MPI.Buffer","page":"Buffers","title":"MPI.Buffer","text":"MPI.Buffer\n\nAn MPI buffer for communication with a single rank. It is used for point-to-point and one-sided operations, as well as some collective operations. Operations will implicitly construct a Buffer when required via the generic constructor, but it can be advantageous to manually construct Buffers when doing so incurs additional overhead, for example when using a non-predefined MPI.Datatype.\n\nFields\n\ndata: a Julia object referencing a region of memory to be used for communication. It is required that the object can be cconverted to an MPIPtr.\ncount: the number of elements of datatype in the buffer. Note that this may not correspond to the number of elements in the array if derived types are used.\ndatatype: the MPI.Datatype stored in the buffer.\n\nUsage\n\nBuffer(data, count::Integer, datatype::Datatype)\n\nGeneric constructor.\n\nBuffer(data)\n\nConstruct a Buffer backed by data, automatically determining the appropriate count and datatype. Methods are provided for\n\nRef\nArray\nCUDA.CuArray if CUDA.jl is loaded.\nAMDGPU.ROCArray if AMDGPU.jl is loaded.\nSubArrays of an Array, CUDA.CuArray or AMDGPU.ROCArray where the layout is contiguous, sequential or blocked.\n\nSee also\n\nBuffer_send\n\n\n\n\n\n","category":"type"},{"location":"reference/buffers/#MPI.Buffer_send","page":"Buffers","title":"MPI.Buffer_send","text":"Buffer_send(data)\n\nConstruct a Buffer object for a send operation from data, allowing cases where isbits(data).\n\n\n\n\n\n","category":"function"},{"location":"reference/buffers/#MPI.UBuffer","page":"Buffers","title":"MPI.UBuffer","text":"MPI.UBuffer\n\nAn MPI buffer for chunked collective communication, where all chunks are of uniform size.\n\nFields\n\ndata: A Julia object referencing a region of memory to be used for communication. It is required that the object can be cconverted to an MPIPtr.\ncount: The number of elements of datatype in each chunk.\nnchunks: The maximum number of chunks stored in the buffer. 
This is used only for validation, and can be set to nothing to disable checks.\ndatatype: The MPI.Datatype stored in the buffer.\n\nUsage\n\nUBuffer(data, count::Integer, nchunks::Union{Nothing, Integer}, datatype::Datatype)\n\nGeneric constructor.\n\nUBuffer(data, count::Integer)\n\nConstruct a UBuffer backed by data, where count is the number of elements in each chunk.\n\nSee also\n\nVBuffer: similar, but supports chunks of non-uniform sizes.\n\n\n\n\n\n","category":"type"},{"location":"reference/buffers/#MPI.VBuffer","page":"Buffers","title":"MPI.VBuffer","text":"MPI.VBuffer\n\nAn MPI buffer for chunked collective communication, where chunks can be of different sizes and at different offsets.\n\nFields\n\ndata: A Julia object referencing a region of memory to be used for communication. It is required that the object can be cconverted to an MPIPtr.\ncounts: An array containing the length of each chunk.\ndispls: An array containing the (0-based) displacements of each chunk.\ndatatype: The MPI.Datatype stored in the buffer.\n\nUsage\n\nVBuffer(data, counts[, displs[, datatype]])\n\nConstruct a VBuffer backed by data, where counts[j] is the number of elements in the jth chunk, and displs[j] is the 0-based displacement. In other words, the jth chunk occurs in indices displs[j]+1:displs[j]+counts[j].\n\nThe default value for displs[j] = sum(counts[1:j-1]).\n\nSee also\n\nUBuffer when chunks are all of the same size.\n\n\n\n\n\n","category":"type"},{"location":"reference/buffers/#MPI.RBuffer","page":"Buffers","title":"MPI.RBuffer","text":"MPI.RBuffer\n\nAn MPI buffer for reduction operations (MPI.Reduce!, MPI.Allreduce!, MPI.Scan!, MPI.Exscan!).\n\nFields\n\nsenddata: A Julia object referencing a region of memory to be used for the send buffer. It is required that the object can be cconverted to an MPIPtr.\nrecvdata: A Julia object referencing a region of memory to be used for the receive buffer. It is required that the object can be cconverted to an MPIPtr.\ncount: the number of elements of datatype in the buffer. Note that this may not correspond to the number of elements in the array if derived types are used.\ndatatype: the MPI.Datatype stored in the buffer.\n\nUsage\n\nRBuffer(senddata, recvdata[, count, datatype])\n\nGeneric constructor.\n\nRBuffer(senddata, recvdata)\n\nConstruct a Buffer backed by senddata and recvdata, automatically determining the appropriate count and datatype.\n\nsenddata can be MPI.IN_PLACE\nrecvdata can be nothing on a non-root node with MPI.Reduce!\n\n\n\n\n\n","category":"type"},{"location":"reference/buffers/#MPI.API.MPIPtr","page":"Buffers","title":"MPI.API.MPIPtr","text":"MPI.MPIPtr\n\nA pointer to an MPI buffer. This type is used only as part of the implicit conversion in ccall: a Julia object can be passed to MPI by defining methods for Base.cconvert(::Type{MPIPtr}, ...)/Base.unsafe_convert(::Type{MPIPtr}, ...).\n\nCurrently supported are:\n\nPtr\nRef\nArray\nSubArray\nCUDA.CuArray if CUDA.jl is loaded.\nAMDGPU.ROCArray if AMDGPU.jl is loaded.\n\nAdditionally, certain sentinel values can be used, e.g. MPI_IN_PLACE or MPI_BOTTOM.\n\n\n\n\n\n","category":"type"},{"location":"reference/comm/#Communicators","page":"Communicators","title":"Communicators","text":"","category":"section"},{"location":"reference/comm/","page":"Communicators","title":"Communicators","text":"An MPI communicator specifies the communication context for a communication operation. 
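To make the distinction between the buffer wrappers documented above concrete, a small sketch of how they are typically constructed (the array contents and chunk sizes are arbitrary):

```julia
# buffer_sketch.jl -- hypothetical illustration of MPI.jl buffer wrappers
using MPI
MPI.Init()

data = collect(1.0:6.0)

buf  = MPI.Buffer(data)              # single-rank buffer: count and datatype inferred from data
ubuf = MPI.UBuffer(data, 2)          # uniform chunks of 2 elements each
vbuf = MPI.VBuffer(data, [1, 2, 3])  # variable chunks of 1, 2 and 3 elements;
                                     # displacements default to cumulative sums [0, 1, 3]
```

A UBuffer would then be passed to e.g. Scatter! or Alltoall!, and a VBuffer to their variable-size counterparts Scatterv! and Alltoallv!.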
In particular, it specifies the set of processes which share the context, and assigns each each process a unique rank (see MPI.Comm_rank) taking an integer value in 0:n-1, where n is the number of processes in the communicator (see MPI.Comm_size.","category":"page"},{"location":"reference/comm/#Types-and-enums","page":"Communicators","title":"Types and enums","text":"","category":"section"},{"location":"reference/comm/","page":"Communicators","title":"Communicators","text":"MPI.Comm","category":"page"},{"location":"reference/comm/#MPI.Comm","page":"Communicators","title":"MPI.Comm","text":"MPI.Comm\n\nAn MPI Communicator object.\n\n\n\n\n\n","category":"type"},{"location":"reference/comm/#Constants","page":"Communicators","title":"Constants","text":"","category":"section"},{"location":"reference/comm/","page":"Communicators","title":"Communicators","text":"MPI.COMM_WORLD\nMPI.COMM_SELF","category":"page"},{"location":"reference/comm/#MPI.COMM_WORLD","page":"Communicators","title":"MPI.COMM_WORLD","text":"MPI.COMM_WORLD\n\nA communicator containing all processes with which the local rank can communicate at initialization. In a typical \"static-process\" model, this will be all processes.\n\n\n\n\n\n","category":"constant"},{"location":"reference/comm/#MPI.COMM_SELF","page":"Communicators","title":"MPI.COMM_SELF","text":"MPI.COMM_SELF\n\nA communicator containing only the local process.\n\n\n\n\n\n","category":"constant"},{"location":"reference/comm/#Functions","page":"Communicators","title":"Functions","text":"","category":"section"},{"location":"reference/comm/#Operations","page":"Communicators","title":"Operations","text":"","category":"section"},{"location":"reference/comm/","page":"Communicators","title":"Communicators","text":"MPI.Comm_size\nMPI.Comm_rank\nMPI.Comm_compare\nMPI.Comm_group\nMPI.Comm_remote_group","category":"page"},{"location":"reference/comm/#MPI.Comm_size","page":"Communicators","title":"MPI.Comm_size","text":"Comm_size(comm::Comm)\n\nThe number of processes involved in communicator.\n\nSee also\n\nMPI.Comm_rank.\n\nExternal links\n\nMPI_Comm_size man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#MPI.Comm_rank","page":"Communicators","title":"MPI.Comm_rank","text":"Comm_rank(comm::Comm)\n\nThe rank of the process in the particular communicator's group.\n\nReturns an integer in the range 0:MPI.Comm_size()-1.\n\nSee also\n\nMPI.Comm_size.\n\nExternal links\n\nMPI_Comm_rank man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#MPI.Comm_compare","page":"Communicators","title":"MPI.Comm_compare","text":"Comm_compare(comm1::Comm, comm2::Comm)::MPI.Comparison\n\nCompare two communicators and their underlying groups, returning an element of the Comparison enum.\n\nExternal links\n\nMPI_Comm_compare man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#MPI.Comm_group","page":"Communicators","title":"MPI.Comm_group","text":"Comm_group(comm::Comm)\n\nAccesses the group associated with given communicator.\n\nExternal links\n\nMPI_Comm_group man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#MPI.Comm_remote_group","page":"Communicators","title":"MPI.Comm_remote_group","text":"Comm_remote_group(comm::Comm)\n\nAccesses the remote group associated with the given inter-communicator.\n\nExternal links\n\nMPI_Comm_remote_group man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#Constructors","page":"Communicators","title":"Constructors","text":"","category":"section"},{"location":"reference/comm/","page":"Communicators","title":"Communicators","text":"MPI.Comm_create\nMPI.Comm_create_group\nMPI.Comm_dup\nMPI.Comm_get_parent\nMPI.Comm_spawn\nMPI.Comm_split\nMPI.Comm_split_type\nMPI.Intercomm_merge","category":"page"},{"location":"reference/comm/#MPI.Comm_create","page":"Communicators","title":"MPI.Comm_create","text":"Comm_create(comm::Comm, group::Group)\n\nCollectively creates a new communicator.\n\nSee also\n\nMPI.Comm_create_group for the noncollective operation\n\nExternal links\n\nMPI_Comm_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#MPI.Comm_create_group","page":"Communicators","title":"MPI.Comm_create_group","text":"Comm_create_group(comm::Comm, group::Group, tag::Integer)\n\nNoncollectively creates a new communicator.\n\nSee also\n\nMPI.Comm_create for the noncollective operation\n\nExternal links\n\nMPI_Comm_create_group man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#MPI.Comm_dup","page":"Communicators","title":"MPI.Comm_dup","text":"Comm_dup(comm::Comm)\n\nExternal links\n\nMPI_Comm_dup man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#MPI.Comm_get_parent","page":"Communicators","title":"MPI.Comm_get_parent","text":"Comm_get_parent()\n\nExternal links\n\nMPI_Comm_get_parent man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#MPI.Comm_spawn","page":"Communicators","title":"MPI.Comm_spawn","text":"Comm_spawn(command, argv::Vector{String}, nprocs::Integer, comm::Comm[, errors::Vector{Cint}]; kwargs...)\n\nExternal links\n\nMPI_Comm_spawn man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#MPI.Comm_split","page":"Communicators","title":"MPI.Comm_split","text":"Comm_split(comm::Comm, color::Union{Integer,Nothing}, key::Integer)\n\nPartition the communicator comm, one for each value of color, returning a new communicator. Within each group, the processes are ranked in the order of key, with ties broken by the order of comm.\n\ncolor should be a non-negative integer, or nothing, in which case a null communicator is returned for that rank.\n\nExternal links\n\nMPI_Comm_split man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#MPI.Comm_split_type","page":"Communicators","title":"MPI.Comm_split_type","text":"Comm_split_type(comm::Comm, split_type, key::Integer; kwargs...)\n\nPartitions the communicator comm based on split_type, returning a new communicator. 
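As an illustration of the communicator constructors above, a minimal sketch splitting COMM_WORLD into even- and odd-rank subcommunicators (the color choice is arbitrary; assumes several ranks under mpiexecjl):

```julia
# comm_split_sketch.jl -- hypothetical illustration
using MPI
MPI.Init()

comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

color   = mod(rank, 2)                       # even ranks -> 0, odd ranks -> 1
newcomm = MPI.Comm_split(comm, color, rank)  # key = rank preserves the original ordering

println("world rank $rank -> color $color, rank $(MPI.Comm_rank(newcomm)) of $(MPI.Comm_size(newcomm))")
MPI.Finalize()
```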
Within each group, the processes are ranked in the order of key, with ties broken by the order of comm.\n\nCurrently only one split_type is provided:\n\nMPI.COMM_TYPE_SHARED: splits the communicator into subcommunicators, each of which can create a shared memory region.\n\nExternal links\n\nMPI_Comm_split_type man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#MPI.Intercomm_merge","page":"Communicators","title":"MPI.Intercomm_merge","text":"Intercomm_merge(intercomm::Comm, flag::Bool)\n\nExternal links\n\nMPI_Intercomm_merge man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#Miscellaneous","page":"Communicators","title":"Miscellaneous","text":"","category":"section"},{"location":"reference/comm/","page":"Communicators","title":"Communicators","text":"MPI.universe_size\nMPI.tag_ub","category":"page"},{"location":"reference/comm/#MPI.universe_size","page":"Communicators","title":"MPI.universe_size","text":"universe_size()\n\nThe total number of available slots, or nothing if it is not defined. This is determined by the MPI_UNIVERSE_SIZE attribute of COMM_WORLD.\n\nThis is typically dependent on the MPI implementation: for MPICH-based implementations, this is specified by the -usize argument. OpenMPI defines a default value based on the number of processes available.\n\n\n\n\n\n","category":"function"},{"location":"reference/comm/#MPI.tag_ub","page":"Communicators","title":"MPI.tag_ub","text":"tag_ub()\n\nThe maximum value tag value for point-to-point operations.\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#Point-to-point-communication","page":"Point-to-point communication","title":"Point-to-point communication","text":"","category":"section"},{"location":"reference/pointtopoint/#Types","page":"Point-to-point communication","title":"Types","text":"","category":"section"},{"location":"reference/pointtopoint/","page":"Point-to-point communication","title":"Point-to-point communication","text":"MPI.AbstractRequest\nMPI.Request\nMPI.UnsafeRequest\nMPI.MultiRequest\nMPI.UnsafeMultiRequest\nMPI.RequestSet\nMPI.Status","category":"page"},{"location":"reference/pointtopoint/#MPI.AbstractRequest","page":"Point-to-point communication","title":"MPI.AbstractRequest","text":"MPI.AbstractRequest\n\nAn abstract type for Julia objects wrapping MPI Requests objects, which represent non-blocking MPI communication operations. The following implementations provided in MPI.jl\n\nRequest: this is the default request type.\nUnsafeRequest: similar to Request, but does not maintain a reference to the underlying communication buffer.\nMultiRequestItem: created by calling getindex on a MultiRequest / UnsafeMultiRequest object, which efficiently stores a collection of requests.\n\nHow request objects are used\n\nA request object can be passed to non-blocking communication operations, such as MPI.Isend and MPI.Irecv!. 
If no object is provided, then an MPI.Request is used.\n\nThe status of a Request can be checked by the Wait and Test functions or their mœultiple-request variants, which will deallocate the request once it is determined to be complete.\n\nAlternatively, it will be deallocated by calling MPI.free or at finalization, meaning that it is safe to ignore the request objects if the status of the communication can be checked by other means.\n\nIn certain cases, the operation can also be cancelled by Cancel!.\n\nImplementing new request types\n\nSubtypes R <: AbstractRequest should define the methods for the following functions:\n\nC conversion functions to MPI_Request and Ptr{MPI_Request}:\nBase.cconvert(::Type{MPI_Request}, req::R) / Base.unsafe_convert(::Type{MPI_Request}, req::R)\nBase.cconvert(::Type{Ptr{MPI_Request}}, req::R) / Base.unsafe_convert(::Type{Ptr{MPI_Request}}, req::R)`\nsetbuffer!(req::R, val): keep a reference to the communication bufferval. Ifval == nothing`, then clear the reference.\n\n\n\n\n\n","category":"type"},{"location":"reference/pointtopoint/#MPI.Request","page":"Point-to-point communication","title":"MPI.Request","text":"MPI.Request()\n\nThe default MPI Request object, representing a non-blocking communication. This also contains a reference to the buffer used in the communication to ensure it isn't garbage-collected during communication.\n\nSee AbstractRequest for more information.\n\n\n\n\n\n","category":"type"},{"location":"reference/pointtopoint/#MPI.UnsafeRequest","page":"Point-to-point communication","title":"MPI.UnsafeRequest","text":"MPI.UnsafeRequest()\n\nSimilar to MPI.Request, but does not maintain a reference to the underlying communication buffer. This may have improve performance by reducing memory allocations.\n\nwarning: Warning\nThe user should ensure that another reference to the communication buffer is maintained so that it is not cleaned up by the garbage collector before the communication operation is complete.For example ```julia buf = MPI.Buffer(zeros(10)) GC.@preserve buf begin req = MPI.Isend(buf, comm, UnsafeRequest(); rank=1) # ... MPI.Wait(req) end\n\n\n\n\n\n","category":"type"},{"location":"reference/pointtopoint/#MPI.MultiRequest","page":"Point-to-point communication","title":"MPI.MultiRequest","text":"MPI.MultiRequest(n::Integer=0)\n\nA collection of MPI Requests. This is useful when operating on multiple MPI requests at the same time. MultiRequest objects can be passed directly to MPI.Waitall, MPI.Testall, etc.\n\nreq[i] will return a MultiRequestItem which adheres to the [AbstractRequest] interface.\n\nUsage\n\nreqs = MPI.MultiRequest(n)\nfor i = 1:n\n MPI.Isend(buf, comm, reqs[i]; rank=dest[i])\nend\nMPI.Waitall(reqs)\n\n\n\n\n\n","category":"type"},{"location":"reference/pointtopoint/#MPI.UnsafeMultiRequest","page":"Point-to-point communication","title":"MPI.UnsafeMultiRequest","text":"MPI.UnsafeMultiRequest(n::Integer=0)\n\nSimilar to MPI.MultiRequest, except that it does not maintain references to the underlying communication buffers. 
The same caveats apply as MPI.UnsafeRequest.\n\n\n\n\n\n","category":"type"},{"location":"reference/pointtopoint/#MPI.RequestSet","page":"Point-to-point communication","title":"MPI.RequestSet","text":"RequestSet(requests::Vector{Request})\nRequestSet() # create an empty RequestSet\n\nA wrapper for an array of Requests that can be used to reduce intermediate memory allocations in Waitall, Testall, Waitany, Testany, Waitsome or Testsome.\n\nConsider using a MultiRequest or UnsafeMultiRequest instead.\n\n\n\n\n\n","category":"type"},{"location":"reference/pointtopoint/#MPI.Status","page":"Point-to-point communication","title":"MPI.Status","text":"MPI.Status\n\nThe status of an MPI receive communication. It has 3 accessible fields\n\nsource: source of the received message\ntag: tag of the received message\nerror: error code. This is only set if a function returns multiple statuses.\n\nAdditionally, the accessor function MPI.Get_count can be used to determine the number of entries received.\n\n\n\n\n\n","category":"type"},{"location":"reference/pointtopoint/#Accessors","page":"Point-to-point communication","title":"Accessors","text":"","category":"section"},{"location":"reference/pointtopoint/","page":"Point-to-point communication","title":"Point-to-point communication","text":"MPI.Get_count","category":"page"},{"location":"reference/pointtopoint/#MPI.Get_count","page":"Point-to-point communication","title":"MPI.Get_count","text":"MPI.Get_count(status::Status, T)\n\nThe number of entries received. T should match the argument provided by the receive call that set the status variable.\n\nIf the number of entries received exceeds the limits of the count parameter, then it returns MPI_UNDEFINED.\n\nExternal links\n\nMPI_Get_count man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#Constants","page":"Point-to-point communication","title":"Constants","text":"","category":"section"},{"location":"reference/pointtopoint/","page":"Point-to-point communication","title":"Point-to-point communication","text":"MPI.PROC_NULL\nMPI.ANY_SOURCE\nMPI.ANY_TAG","category":"page"},{"location":"reference/pointtopoint/#MPI.PROC_NULL","page":"Point-to-point communication","title":"MPI.PROC_NULL","text":"MPI.PROC_NULL\n\nA dummy value that can be used instead of a rank wherever a source or a destination argument is required in a call. 
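A brief sketch of how Status and Get_count fit together on the receiving side, using the wildcard constants below (assumes at least two ranks; the buffer size and tag are arbitrary):

```julia
# status_sketch.jl -- hypothetical illustration
using MPI
MPI.Init()

comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

if rank == 0
    MPI.Send(collect(1.0:3.0), comm; dest=1, tag=7)
elseif rank == 1
    buf = Vector{Float64}(undef, 8)               # oversized receive buffer
    _, status = MPI.Recv!(buf, comm, MPI.Status;
                          source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG)
    n = MPI.Get_count(status, Float64)            # actual number of elements received
    println("got $n element(s) from rank $(status.source) with tag $(status.tag)")
end
MPI.Finalize()
```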
A send\n\n\n\n\n\n","category":"constant"},{"location":"reference/pointtopoint/#MPI.ANY_SOURCE","page":"Point-to-point communication","title":"MPI.ANY_SOURCE","text":"MPI.ANY_SOURCE\n\nA wild card value for receive or probe operations that matches any source rank.\n\n\n\n\n\n","category":"constant"},{"location":"reference/pointtopoint/#MPI.ANY_TAG","page":"Point-to-point communication","title":"MPI.ANY_TAG","text":"MPI.ANY_TAG\n\nA wild card value for receive or probe operations that matches any tag.\n\n\n\n\n\n","category":"constant"},{"location":"reference/pointtopoint/#Blocking-communication","page":"Point-to-point communication","title":"Blocking communication","text":"","category":"section"},{"location":"reference/pointtopoint/","page":"Point-to-point communication","title":"Point-to-point communication","text":"MPI.Send\nMPI.send\nMPI.Recv!\nMPI.Recv\nMPI.recv\nMPI.Sendrecv!","category":"page"},{"location":"reference/pointtopoint/#MPI.Send","page":"Point-to-point communication","title":"MPI.Send","text":"Send(buf, comm::Comm; dest::Integer, tag::Integer=0)\n\nPerform a blocking send from the buffer buf to MPI rank dest of communicator comm using the message tag tag.\n\nSend(obj, comm::Comm; dest::Integer, tag::Integer=0)\n\nComplete a blocking send of an isbits object obj to MPI rank dest of communicator comm using with the message tag tag.\n\nExternal links\n\nMPI_Send man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.send","page":"Point-to-point communication","title":"MPI.send","text":"send(obj, comm::Comm; dest::Integer, tag::Integer=0)\n\nComplete a blocking send using a serialized version of obj to MPI rank dest of communicator comm using with the message tag tag.\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Recv!","page":"Point-to-point communication","title":"MPI.Recv!","text":"data = Recv!(recvbuf, comm::Comm;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\ndata, status = Recv!(recvbuf, comm::Comm, MPI.Status;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\n\nCompletes a blocking receive into the buffer recvbuf from MPI rank source of communicator comm using with the message tag tag.\n\nrecvbuf can be a Buffer, or any object for which Buffer(recvbuf) is defined.\n\nOptionally returns the Status object of the receive.\n\nSee also\n\nRecv\nrecv\n\nExternal links\n\nMPI_Recv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Recv","page":"Point-to-point communication","title":"MPI.Recv","text":"data = Recv(::Type{T}, comm::Comm;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\ndata, status = Recv(::Type{T}, comm::Comm, MPI.Status;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\n\nCompletes a blocking receive of a single isbits object of type T from MPI rank source of communicator comm using with the message tag tag.\n\nReturns a tuple of the object of type T and optionally the Status of the receive.\n\nSee also\n\nRecv!\nrecv\n\nExternal links\n\nMPI_Recv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.recv","page":"Point-to-point communication","title":"MPI.recv","text":"obj = recv(comm::Comm;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\nobj, status = recv(comm::Comm, MPI.Status;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\n\nCompletes a blocking receive of a serialized object from MPI rank source of 
communicator comm using with the message tag tag.\n\nReturns the deserialized object and optionally the Status of the receive.\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Sendrecv!","page":"Point-to-point communication","title":"MPI.Sendrecv!","text":"data = Sendrecv!(sendbuf, recvbuf, comm;\n dest::Integer, sendtag::Integer=0, source::Integer=MPI.ANY_SOURCE, recvtag::Integer=MPI.ANY_TAG)\ndata, status = Sendrecv!(sendbuf, recvbuf, comm, MPI.Status;\n dest::Integer, sendtag::Integer=0, source::Integer=MPI.ANY_SOURCE, recvtag::Integer=MPI.ANY_TAG)\n\nComplete a blocking send-receive operation over the MPI communicator comm. Send sendbuf to the MPI rank dest using message tag sendtag, and receive from MPI rank source into the buffer recvbuf using message tag recvtag. Return a Status object.\n\nExternal links\n\nMPI_Sendrecv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#Non-blocking-communication","page":"Point-to-point communication","title":"Non-blocking communication","text":"","category":"section"},{"location":"reference/pointtopoint/#Initiation","page":"Point-to-point communication","title":"Initiation","text":"","category":"section"},{"location":"reference/pointtopoint/","page":"Point-to-point communication","title":"Point-to-point communication","text":"MPI.Isend\nMPI.isend\nMPI.Irecv!","category":"page"},{"location":"reference/pointtopoint/#MPI.Isend","page":"Point-to-point communication","title":"MPI.Isend","text":"Isend(data, comm::Comm[, req::AbstractRequest = Request()]; dest::Integer, tag::Integer=0)\n\nStarts a nonblocking send of data to MPI rank dest of communicator comm using with the message tag tag.\n\ndata can be a Buffer, or any object for which Buffer_send is defined.\n\nReturns the AbstractRequest object for the nonblocking send.\n\nExternal links\n\nMPI_Isend man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.isend","page":"Point-to-point communication","title":"MPI.isend","text":"isend(obj, comm::Comm[, req::AbstractRequest = Request()]; dest::Integer, tag::Integer=0)\n\nStarts a nonblocking send of using a serialized version of obj to MPI rank dest of communicator comm using with the message tag tag.\n\nReturns the communication Request for the nonblocking send.\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Irecv!","page":"Point-to-point communication","title":"MPI.Irecv!","text":"req = Irecv!(recvbuf, comm::Comm[, req::AbstractRequest = Request()];\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\n\nStarts a nonblocking receive into the buffer data from MPI rank source of communicator comm using with the message tag tag.\n\ndata can be a Buffer, or any object for which Buffer(data) is defined.\n\nReturns the AbstractRequest object for the nonblocking receive.\n\nExternal links\n\nMPI_Irecv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#Completion","page":"Point-to-point communication","title":"Completion","text":"","category":"section"},{"location":"reference/pointtopoint/","page":"Point-to-point communication","title":"Point-to-point communication","text":"MPI.Test\nMPI.Testall\nMPI.Testany\nMPI.Testsome\nMPI.Wait\nBase.wait(req::MPI.Request)\nMPI.Waitall\nMPI.Waitany\nMPI.Waitsome","category":"page"},{"location":"reference/pointtopoint/#MPI.Test","page":"Point-to-point communication","title":"MPI.Test","text":"flag = 
Test(req::AbstractRequest)\nflag, status = Test(req::AbstractRequest, Status)\n\nCheck if the request req is complete. If so, the request is deallocated and flag = true is returned. Otherwise flag = false.\n\nThe Status argument additionally returns the Status of the completed request.\n\nExternal links\n\nMPI_Test man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Testall","page":"Point-to-point communication","title":"MPI.Testall","text":"flag = Testall(reqs::AbstractVector{Request}[, statuses::Vector{Status}])\nflag, statuses = Testall(reqs::AbstractVector{Request}, Status)\n\nCheck if all active requests in the array reqs are complete. If so, the requests are deallocated and true is returned. Otherwise no requests are modified, and false is returned.\n\nThe optional statuses or Status argument can be used to obtain the return Status of each request.\n\nSee also\n\nRequestSet can be used to minimize allocations\n\nExternal links\n\nMPI_Testall man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Testany","page":"Point-to-point communication","title":"MPI.Testany","text":"flag, idx = Testany(reqs::AbstractVector{Request}[, status::Ref{Status}])\nflag, idx, status = Testany(reqs::AbstractVector{Request}, Status)\n\nChecks if any one of the requests in the array reqs is complete.\n\nIf one or more requests are complete, then one is chosen arbitrarily, deallocated. flag = true and its (1-based) index idx is returned.\n\nIf there are no completed requests, then flag = false and idx = nothing is returned.\n\nIf there are no active requests, flag = true and idx = nothing.\n\nThe optional status argument can be used to obtain the return Status of the request.\n\nSee also\n\nRequestSet can be used to minimize allocations\n\nExternal links\n\nMPI_Testany man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Testsome","page":"Point-to-point communication","title":"MPI.Testsome","text":"inds = Testsome(reqs::AbstractVector{Request}[, statuses::Vector{Status}])\n\nSimilar to Waitsome except that if no operations have completed it will return an empty array.\n\nIf there are no active requests, then the function returns nothing.\n\nThe optional statuses argument can be used to obtain the return Status of each completed request.\n\nSee also\n\nRequestSet can be used to minimize allocations\n\nExternal links\n\nMPI_Testsome man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Wait","page":"Point-to-point communication","title":"MPI.Wait","text":"Wait(req::AbstractRequest)\nstatus = Wait(req::AbstractRequest, Status)\n\nBlock until the request req is complete and deallocated.\n\nThe Status argument returns the Status of the completed request.\n\nExternal links\n\nMPI_Wait man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#Base.wait-Tuple{MPI.Request}","page":"Point-to-point communication","title":"Base.wait","text":"Base.wait(req::MPI.Request)\n\nWait for an MPI request to complete. 
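As a rough illustration of how initiation and completion fit together, the following sketch overlaps a nonblocking exchange with local work; the two-rank assumption, buffer size, and tag are illustrative only.

```julia
# Sketch (assumes exactly 2 ranks): start a nonblocking exchange, overlap it
# with local work, then complete both requests at once.
using MPI
MPI.Init()
comm  = MPI.COMM_WORLD
other = MPI.Comm_rank(comm) == 0 ? 1 : 0
sendbuf = fill(Float64(MPI.Comm_rank(comm)), 10)
recvbuf = similar(sendbuf)
rreq = MPI.Irecv!(recvbuf, comm; source=other, tag=0)
sreq = MPI.Isend(sendbuf, comm; dest=other, tag=0)
# ... unrelated local computation could run here ...
MPI.Waitall([rreq, sreq])   # or poll with MPI.Test(rreq) / MPI.Test(sreq)
```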
Unlike MPI.Wait, it will yield to other Julia tasks resulting in a cooperative wait.\n\n\n\n\n\n","category":"method"},{"location":"reference/pointtopoint/#MPI.Waitall","page":"Point-to-point communication","title":"MPI.Waitall","text":"Waitall(reqs::AbstractVector{Request}[, statuses::Vector{Status}])\nstatuses = Waitall(reqs::AbstractVector{Request}, Status)\n\nBlock until all active requests in the array reqs are complete.\n\nThe optional statuses or Status argument can be used to obtain the return Status of each request.\n\nSee also\n\nRequestSet can be used to minimize allocations\n\nExternal links\n\nMPI_Waitall man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Waitany","page":"Point-to-point communication","title":"MPI.Waitany","text":"i = Waitany(reqs::AbstractVector{Request}[, status::Ref{Status}])\ni, status = Waitany(reqs::AbstractVector{Request}, Status)\n\nBlocks until one of the requests in the array reqs is complete: if more than one is complete, one is chosen arbitrarily. The request is deallocated and the (1-based) index i of the completed request is returned.\n\nIf there are no active requests, then i = nothing.\n\nThe optional status argument can be used to obtain the return Status of the request.\n\nSee also\n\nRequestSet can be used to minimize allocations\n\nExternal links\n\nMPI_Waitany man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Waitsome","page":"Point-to-point communication","title":"MPI.Waitsome","text":"inds = Waitsome(reqs::AbstractVector{Request}[, statuses::Vector{Status}])\n\nBlock until at least one of the active requests in the array reqs is complete. The completed requests are deallocated, and an array inds of their indices in reqs is returned.\n\nIf there are no active requests, then inds = nothing.\n\nThe optional statuses argument can be used to obtain the return Status of each completed request.\n\nSee also\n\nRequestSet can be used to minimize allocations\n\nExternal links\n\nMPI_Waitsome man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#Probe/Cancel","page":"Point-to-point communication","title":"Probe/Cancel","text":"","category":"section"},{"location":"reference/pointtopoint/","page":"Point-to-point communication","title":"Point-to-point communication","text":"MPI.isnull\nMPI.Cancel!\nMPI.Iprobe\nMPI.Probe","category":"page"},{"location":"reference/pointtopoint/#MPI.isnull","page":"Point-to-point communication","title":"MPI.isnull","text":"isnull(req::AbstractRequest)\n\nReturn whether req is a null request.\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Cancel!","page":"Point-to-point communication","title":"MPI.Cancel!","text":"Cancel!(req::Request)\n\nMarks a pending Irecv! operation for cancellation (cancelling an Isend, while supported in some implementations, is deprecated as of MPI 3.1). Note that the request is not deallocated, and can still be queried using the test or wait functions.\n\nExternal links\n\nMPI_Cancel man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Iprobe","page":"Point-to-point communication","title":"MPI.Iprobe","text":"ismsg = Iprobe(comm::Comm;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\nismsg, status = Iprobe(comm::Comm, MPI.Status;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\n\nChecks if there is a message that can be received matching source, tag and comm. 
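A common pattern with probing is to size the receive buffer from the probed Status before receiving. A minimal sketch follows; the element type, tag, and two-rank assumption are illustrative, not part of the documented API.

```julia
# Sketch (assumes >= 2 ranks): rank 0 sends a variable-length message; rank 1
# probes it first so it can allocate a buffer of exactly the right size.
using MPI
MPI.Init()
comm = MPI.COMM_WORLD
if MPI.Comm_rank(comm) == 0
    MPI.Send(collect(1:5), comm; dest=1, tag=3)
elseif MPI.Comm_rank(comm) == 1
    status = MPI.Probe(comm, MPI.Status; source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG)
    buf = Vector{Int}(undef, MPI.Get_count(status, Int))
    MPI.Recv!(buf, comm; source=status.source, tag=status.tag)
    @show buf
end
```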
If so, returns ismsg = true. The Status argument additionally returns the Status of the completed request.\n\nExternal links\n\nMPI_Iprobe man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Probe","page":"Point-to-point communication","title":"MPI.Probe","text":"Probe(comm::Comm;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\nstatus = Probe(comm::Comm, MPI.Status;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\n\nBlocks until there is a message that can be received matching source, tag and comm. Optionally returns the corresponding Status object.\n\nExternal links\n\nMPI_Probe man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#Persistent-requests","page":"Point-to-point communication","title":"Persistent requests","text":"","category":"section"},{"location":"reference/pointtopoint/","page":"Point-to-point communication","title":"Point-to-point communication","text":"MPI.Send_init\nMPI.Recv_init\nMPI.Start\nMPI.Startall","category":"page"},{"location":"reference/pointtopoint/#MPI.Send_init","page":"Point-to-point communication","title":"MPI.Send_init","text":"Send_init(buf, comm::MPI.Comm[, req::AbstractRequest = Request()];\n dest, tag=0)\n\nAllocate a persistent send request, returning a AbstractRequest object. Use Start or Startall to start the communication operation, and free to deallocate the request.\n\nExternal links\n\nMPI_Send_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Recv_init","page":"Point-to-point communication","title":"MPI.Recv_init","text":"Recv_init(buf, comm::MPI.Comm[, req::AbstractRequest = Request()];\n source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG)\n\nAllocate a persistent receive request, returning a AbstractRequest object. Use Start or Startall to start the communication operation, and free to deallocate the request.\n\nExternal links\n\nMPI_Recv_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Start","page":"Point-to-point communication","title":"MPI.Start","text":"Start(request::AbstractRequest)\n\nStart a persistent communication request created by Send_init or Recv_init. Call Wait to complete the request.\n\nExternal links\n\nMPI_Start man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Startall","page":"Point-to-point communication","title":"MPI.Startall","text":"Startall(reqs::AbstractVector{Request})\n\nStart a set of persistent communication requests created by Send_init or Recv_init. Call Waitall to complete the requests.\n\nExternal links\n\nMPI_Startall man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#Matching-probes-and-receives","page":"Point-to-point communication","title":"Matching probes and receives","text":"","category":"section"},{"location":"reference/pointtopoint/","page":"Point-to-point communication","title":"Point-to-point communication","text":"MPI.Message\nMPI.Mprobe\nMPI.Improbe\nMPI.Mrecv!\nMPI.Imrecv!","category":"page"},{"location":"reference/pointtopoint/#MPI.Message","page":"Point-to-point communication","title":"MPI.Message","text":"MPI.Message\n\nAn MPI message handle object, used by matched receive operations. These are returned by MPI.Mprobe and MPI.Improbe operations, and must be received by either MPI.Mrecv! 
or MPI.Imrecv!.\n\n\n\n\n\n","category":"type"},{"location":"reference/pointtopoint/#MPI.Mprobe","page":"Point-to-point communication","title":"MPI.Mprobe","text":"msg = MPI.Mprobe(comm::MPI.Comm;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\nmsg, status = MPI.Mprobe(comm::MPI.Comm, MPI.Status;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\n\nMatching blocking probe. Similar to MPI.Probe, except that it also returns msg, an MPI.Message object. \n\nBlocks until a message that can be received matching source, tag and comm, returning a Message object msg, which must be received by either MPI.Mrecv! or MPI.Imrecv!.\n\nThe Status argument additionally returns the Status of the completed request.\n\nExternal links\n\nMPI_Mprobe man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Improbe","page":"Point-to-point communication","title":"MPI.Improbe","text":"ismsg, msg = MPI.Improbe(comm::MPI.Comm;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\nismsg, msg, status = MPI.Improbe(comm::MPI.Comm, MPI.Status;\n source::Integer=MPI.ANY_SOURCE, tag::Integer=MPI.ANY_TAG)\n\nMatching non-blocking probe. Similar to MPI.Iprobe, except that it also returns msg, an MPI.Message object. \n\nChecks if there is a message that can be received matching source, tag and comm. If so, returns ismsg = true, and a Message object msg, which must be received by either MPI.Mrecv! or MPI.Imrecv!. Otherwise msg is set to be a null Message.\n\nThe Status argument additionally returns the Status of the completed request.\n\nExternal links\n\nMPI_Improbe man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Mrecv!","page":"Point-to-point communication","title":"MPI.Mrecv!","text":"data = MPI.Mrecv!(recvbuf, msg::MPI.Message)\ndata, status = MPI.Mrecv!(recvbuf, msg::MPI.Message, MPI.Status)\n\nCompletes a blocking receive matched by a matching probe operation into the buffer recvbuf, and the Message msg.\n\nrecvbuf can be a Buffer, or any object for which Buffer(recvbuf) is defined.\n\nOptionally returns the Status object of the receive.\n\nExternal links\n\nMPI_Mrecv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/pointtopoint/#MPI.Imrecv!","page":"Point-to-point communication","title":"MPI.Imrecv!","text":"req = MPI.Imrecv!(recvbuf, msg::MPI.Message[, req::AbstractRequest=Request()])\n\nStarts a nonblocking receive matched by a matching probe operation into the buffer recvbuf, and the Message msg.\n\nrecvbuf can be a Buffer, or any object for which Buffer(recvbuf) is defined.\n\nReturns req, an AbstractRequest object for the nonblocking receive.\n\nExternal links\n\nMPI_Imrecv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"external/#External-libraries-and-packages","page":"External libraries and packages","title":"External libraries and packages","text":"","category":"section"},{"location":"external/","page":"External libraries and packages","title":"External libraries and packages","text":"Other libraries and packages may also make use of MPI. 
There are several concerns to ensure things are set up correctly.","category":"page"},{"location":"external/#Binary-requirements","page":"External libraries and packages","title":"Binary requirements","text":"","category":"section"},{"location":"external/","page":"External libraries and packages","title":"External libraries and packages","text":"You need to ensure that external libraries are built correctly. In particular, if you are using a system-provided MPI backend in Julia, you also need to use the same system-provided binary for all packages and external libraries you use.","category":"page"},{"location":"external/#Passing-MPI-handles-via-ccall","page":"External libraries and packages","title":"Passing MPI handles via ccall","text":"","category":"section"},{"location":"external/","page":"External libraries and packages","title":"External libraries and packages","text":"When passing MPI.jl handle objects (MPI.Comm, MPI.Info, etc) to C/C++ functions via ccall, you should pass the object directly as an argument, and specify the argument type as either the underlying handle type (MPI.MPI_Comm, MPI.MPI_Info, etc.), or a pointer (Ptr{MPI.MPI_Comm}, Ptr{MPI.MPI_Info}, etc.). This will internally handle the unwrapping, but ensure that a reference is kept to avoid premature garbage collection.","category":"page"},{"location":"external/","page":"External libraries and packages","title":"External libraries and packages","text":"For example the C function signatures","category":"page"},{"location":"external/","page":"External libraries and packages","title":"External libraries and packages","text":"int cfunc1(MPI_Comm comm);\nint cfunc2(MPI_Comm * comm);","category":"page"},{"location":"external/","page":"External libraries and packages","title":"External libraries and packages","text":"would be called as","category":"page"},{"location":"external/","page":"External libraries and packages","title":"External libraries and packages","text":"ccall((:cfunc1, lib), Cint, (MPI.MPI_Comm,), comm)\nccall((:cfunc2, lib), Cint, (Ptr{MPI.MPI_Comm},), comm)","category":"page"},{"location":"external/#Object-finalizers-and-MPI.Finalize","page":"External libraries and packages","title":"Object finalizers and MPI.Finalize","text":"","category":"section"},{"location":"external/","page":"External libraries and packages","title":"External libraries and packages","text":"External libraries may allocate their own MPI handles (e.g., create or duplicate MPI communicators), which need to be cleaned up before MPI is finalized. If these are attached to object finalizers, they may not be guaranteed to be called before MPI.Finalize, which can result in an error upon program exit. 
(By default, MPI.jl will install an atexit hook that calls MPI.Finalize if it hasn't already been invoked.)","category":"page"},{"location":"external/","page":"External libraries and packages","title":"External libraries and packages","text":"There are two typical solutions to this problem:","category":"page"},{"location":"external/","page":"External libraries and packages","title":"External libraries and packages","text":"Gate the cleanup functions behind an MPI.Finalized check, e.g.\nfinalizer(obj) do obj\n if !MPI.Finalized()\n # call clean up function\n end\nend\nKeep track of all such objects, clean them up via MPI.add_finalize_hook!, e.g.\nfinalizer(obj) do obj\n # call clean up function\nend\nMPI.add_finalize_hook!(() -> finalize(obj))\nA variant of this is to keep track of all such objects, for example, using a WeakKeyDict, and use a hook to clean them all:\nconst REFS = WeakKeyDict{ObjType, Nothing}()\nMPI.add_finalize_hook!() do\n for obj in keys(REFS)\n finalize(obj)\n end\nend\n\n# for each object `obj`\nfinalizer(obj) do obj\n # call clean up function\nend\nREFS[obj] = nothing","category":"page"},{"location":"external/#Externally-initialized-MPI","page":"External libraries and packages","title":"Externally initialized MPI","text":"","category":"section"},{"location":"external/","page":"External libraries and packages","title":"External libraries and packages","text":"When working with non-Julia libraries or tools, MPI_Init may be invoked in another part of the execution flow and not via MPI.jl's MPI.Init function. This leaves some package-internal settings uninitialized. In this case, you need to call MPI.run_init_hooks() manually to fully initialize MPI.jl. You may also want to consider calling MPI.set_default_error_handler_return().","category":"page"},{"location":"reference/io/#I/O","page":"I/O","title":"I/O","text":"","category":"section"},{"location":"reference/io/#File-manipulation","page":"I/O","title":"File manipulation","text":"","category":"section"},{"location":"reference/io/","page":"I/O","title":"I/O","text":"MPI.File.open","category":"page"},{"location":"reference/io/#MPI.File.open","page":"I/O","title":"MPI.File.open","text":"MPI.File.open(comm::Comm, filename::AbstractString; keywords...)\n\nOpen the file identified by filename. 
This is a collective operation on comm.\n\nSupported keywords are as follows:\n\nread, write, create, append have the same behaviour and defaults as Base.open.\nsequential: file will only be accessed sequentially (default: false)\nuniqueopen: file will not be concurrently opened elsewhere (default: false)\ndeleteonclose: delete file on close (default: false)\n\nAny additional keywords are passed via an Info object, and are implementation dependent.\n\nExternal links\n\nMPI_File_open man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#Views","page":"I/O","title":"Views","text":"","category":"section"},{"location":"reference/io/","page":"I/O","title":"I/O","text":"MPI.File.set_view!\nMPI.File.get_byte_offset","category":"page"},{"location":"reference/io/#MPI.File.set_view!","page":"I/O","title":"MPI.File.set_view!","text":"MPI.File.set_view!(file::FileHandle, disp::Integer, etype::Datatype, filetype::Datatype, datarep::AbstractString; kwargs...)\n\nSet the current process's view of file.\n\nThe start of the view is set to disp; the type of data is set to etype; the distribution of data to processes is set to filetype; and the representation of data in the file is set to datarep: one of \"native\" (default), \"internal\", or \"external32\".\n\nExternal links\n\nMPI_File_set_view man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.get_byte_offset","page":"I/O","title":"MPI.File.get_byte_offset","text":"MPI.File.get_byte_offset(file::FileHandle, offset::Integer)\n\nConverts a view-relative offset into an absolute byte position. Returns the absolute byte position (from the beginning of the file) of offset relative to the current view of file.\n\nExternal links\n\nMPI_File_get_byte_offset man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#Consistency","page":"I/O","title":"Consistency","text":"","category":"section"},{"location":"reference/io/","page":"I/O","title":"I/O","text":"MPI.File.sync\nMPI.File.get_atomicity\nMPI.File.set_atomicity","category":"page"},{"location":"reference/io/#MPI.File.sync","page":"I/O","title":"MPI.File.sync","text":"MPI.File.sync(fh::FileHandle)\n\nA collective operation causing all previous writes to fh by the calling process to be transferred to the storage device. If other processes have made updates to the storage device, then all such updates become visible to subsequent reads of fh by the calling process.\n\nExternal links\n\nMPI_File_sync man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.get_atomicity","page":"I/O","title":"MPI.File.get_atomicity","text":"MPI.File.get_atomicity(file::FileHandle)\n\nGet the consistency option for the fh. 
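To make the file-view machinery concrete, here is a minimal sketch of a contiguous per-rank write; the filename, element count, and datarep are arbitrary, and cleanup of the file handle is omitted.

```julia
# Sketch: each rank writes its own contiguous block of Float64s to one shared file.
using MPI
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
data = fill(Float64(rank), 100)
fh = MPI.File.open(comm, "output.bin"; write=true, create=true)
# Displace each rank's view by the bytes written by lower ranks.
MPI.File.set_view!(fh, rank * sizeof(data), MPI.Datatype(Float64), MPI.Datatype(Float64), "native")
MPI.File.write_all(fh, data)   # collective write into this rank's view
MPI.File.sync(fh)              # flush to the storage device
```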
If false it is non-atomic.\n\nExternal links\n\nMPI_File_get_atomicity man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.set_atomicity","page":"I/O","title":"MPI.File.set_atomicity","text":"MPI.File.set_atomicity(file::FileHandle, flag::Bool)\n\nSet the consistency option for the fh.\n\nExternal links\n\nMPI_File_get_atomicity man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#Data-access","page":"I/O","title":"Data access","text":"","category":"section"},{"location":"reference/io/#Individual-pointer","page":"I/O","title":"Individual pointer","text":"","category":"section"},{"location":"reference/io/","page":"I/O","title":"I/O","text":"MPI.File.read!\nMPI.File.read_all!\nMPI.File.write\nMPI.File.write_all","category":"page"},{"location":"reference/io/#MPI.File.read!","page":"I/O","title":"MPI.File.read!","text":"MPI.File.read!(file::FileHandle, data)\n\nReads current view of file into data. data can be a Buffer, or any object for which Buffer(data) is defined.\n\nSee also\n\nMPI.File.set_view! to set the current view of the file\nMPI.File.read_all! for the collective operation\n\nExternal links\n\nMPI_File_read man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.read_all!","page":"I/O","title":"MPI.File.read_all!","text":"MPI.File.read_all!(file::FileHandle, data)\n\nReads current view of file into data. data can be a Buffer, or any object for which Buffer(data) is defined. This is a collective operation, so must be called on all ranks in the communicator on which file was opened.\n\nSee also\n\nMPI.File.set_view! to set the current view of the file\nMPI.File.read! for the noncollective operation\n\nExternal links\n\nMPI_File_read_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.write","page":"I/O","title":"MPI.File.write","text":"MPI.File.write(file::FileHandle, data)\n\nWrites data to the current view of file. data can be a Buffer, or any object for which Buffer_send(data) is defined.\n\nSee also\n\nMPI.File.set_view! to set the current view of the file\nMPI.File.write_all for the collective operation\n\nExternal links\n\nMPI_File_write man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.write_all","page":"I/O","title":"MPI.File.write_all","text":"MPI.File.write_all(file::FileHandle, data)\n\nWrites data to the current view of file. data can be a Buffer, or any object for which Buffer_send(data) is defined. This is a collective operation, so must be called on all ranks in the communicator on which file was opened.\n\nSee also\n\nMPI.File.set_view! to set the current view of the file\nMPI.File.write for the noncollective operation\n\nExternal links\n\nMPI_File_write_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#Explicit-offsets","page":"I/O","title":"Explicit offsets","text":"","category":"section"},{"location":"reference/io/","page":"I/O","title":"I/O","text":"MPI.File.read_at!\nMPI.File.read_at_all!\nMPI.File.write_at\nMPI.File.write_at_all","category":"page"},{"location":"reference/io/#MPI.File.read_at!","page":"I/O","title":"MPI.File.read_at!","text":"MPI.File.read_at!(file::FileHandle, offset::Integer, data)\n\nReads from file at position offset into data. data can be a Buffer, or any object for which Buffer(data) is defined.\n\nSee also\n\nMPI.File.read_at_all! 
for the collective operation\n\nExternal links\n\nMPI_File_read_at man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.read_at_all!","page":"I/O","title":"MPI.File.read_at_all!","text":"MPI.File.read_at_all!(file::FileHandle, offset::Integer, data)\n\nReads from file at position offset into data. data can be a Buffer, or any object for which Buffer(data) is defined. This is a collective operation, so must be called on all ranks in the communicator on which file was opened.\n\nSee also\n\nMPI.File.read_at! for the noncollective operation\n\nExternal links\n\nMPI_File_read_at_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.write_at","page":"I/O","title":"MPI.File.write_at","text":"MPI.File.write_at(file::FileHandle, offset::Integer, data)\n\nWrites data to file at position offset. data can be a Buffer, or any object for which Buffer_send(data) is defined.\n\nSee also\n\nMPI.File.write_at_all for the collective operation\n\nExternal links\n\nMPI_File_write_at man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.write_at_all","page":"I/O","title":"MPI.File.write_at_all","text":"MPI.File.write_at_all(file::FileHandle, offset::Integer, data)\n\nWrites from data to file at position offset. data can be a Buffer, or any object for which Buffer_send(data) is defined. This is a collective operation, so must be called on all ranks in the communicator on which file was opened.\n\nSee also\n\nMPI.File.write_at for the noncollective operation\n\nExternal links\n\nMPI_File_write_at_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#Shared-pointer","page":"I/O","title":"Shared pointer","text":"","category":"section"},{"location":"reference/io/","page":"I/O","title":"I/O","text":"MPI.File.read_shared!\nMPI.File.write_shared\nMPI.File.read_ordered!\nMPI.File.write_ordered\nMPI.File.seek_shared\nMPI.File.get_position_shared","category":"page"},{"location":"reference/io/#MPI.File.read_shared!","page":"I/O","title":"MPI.File.read_shared!","text":"MPI.File.read_shared!(file::FileHandle, data)\n\nReads from file using the shared file pointer into data. data can be a Buffer, or any object for which Buffer(data) is defined.\n\nSee also\n\nMPI.File.read_ordered! for the collective operation\n\nExternal links\n\nMPI_File_read_shared man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.write_shared","page":"I/O","title":"MPI.File.write_shared","text":"MPI.File.write_shared(file::FileHandle, data)\n\nWrites to file using the shared file pointer from data. data can be a Buffer, or any object for which Buffer(data) is defined.\n\nSee also\n\nMPI.File.write_ordered for the collective operation\n\nExternal links\n\nMPI_File_write_shared man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.read_ordered!","page":"I/O","title":"MPI.File.read_ordered!","text":"MPI.File.read_ordered!(file::FileHandle, data)\n\nCollectively reads in rank order from file using the shared file pointer into data. data can be a Buffer, or any object for which Buffer(data) is defined. This is a collective operation, so must be called on all ranks in the communicator on which file was opened.\n\nSee also\n\nMPI.File.read_shared! 
for the noncollective operation\n\nExternal links\n\nMPI_File_read_ordered man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.write_ordered","page":"I/O","title":"MPI.File.write_ordered","text":"MPI.File.write_ordered(file::FileHandle, data)\n\nCollectively writes in rank order to file using the shared file pointer from data. data can be a Buffer, or any object for which Buffer(data) is defined. This is a collective operation, so must be called on all ranks in the communicator on which file was opened.\n\nSee also\n\nMPI.File.write_shared for the noncollective operation\n\nExternal links\n\nMPI_File_write_ordered man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.seek_shared","page":"I/O","title":"MPI.File.seek_shared","text":"MPI.File.seek_shared(file::FileHandle, offset::Integer, whence::Seek=SEEK_SET)\n\nUpdates the shared file pointer according to whence, which has the following possible values:\n\nMPI.File.SEEK_SET (default): the pointer is set to offset\nMPI.File.SEEK_CUR: the pointer is set to the current pointer position plus offset\nMPI.File.SEEK_END: the pointer is set to the end of file plus offset\n\nThis is a collective operation, and must be called with the same value on all processes in the communicator.\n\nExternal links\n\nMPI_File_seek_shared man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/io/#MPI.File.get_position_shared","page":"I/O","title":"MPI.File.get_position_shared","text":"MPI.File.get_position_shared(file::FileHandle)\n\nThe current position of the shared file pointer (in etype units) relative to the current view.\n\nExternal links\n\nMPI_File_get_position_shared man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#Environment","page":"Environment","title":"Environment","text":"","category":"section"},{"location":"reference/environment/#Launching-MPI-programs","page":"Environment","title":"Launching MPI programs","text":"","category":"section"},{"location":"reference/environment/","page":"Environment","title":"Environment","text":"mpiexec\nMPI.install_mpiexecjl","category":"page"},{"location":"reference/environment/#MPICH_jll.mpiexec","page":"Environment","title":"MPICH_jll.mpiexec","text":"mpiexec(fn)\n\nA wrapper function for the MPI launcher executable. Calls fn(cmd), where cmd is a Cmd object of the MPI launcher.\n\nUsage\n\njulia> mpiexec(cmd -> run(`$cmd -n 3 echo hello world`));\nhello world\nhello world\nhello world\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#MPI.install_mpiexecjl","page":"Environment","title":"MPI.install_mpiexecjl","text":"MPI.install_mpiexecjl(; command::String = \"mpiexecjl\",\n destdir::String = joinpath(DEPOT_PATH[1], \"bin\"),\n force::Bool = false, verbose::Bool = true)\n\nInstall the mpiexec wrapper to destdir directory, with filename command. Set force to true to overwrite an existing destination file with the same path. 
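For instance, a sketch of installing and then using the wrapper; the depot path shown is only the usual default and may differ on your system.

```julia
# Sketch: install the mpiexecjl wrapper once from Julia.
using MPI
MPI.install_mpiexecjl()
# afterwards, from a shell (default destdir is typically ~/.julia/bin):
#   ~/.julia/bin/mpiexecjl -n 4 julia --project script.jl
```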
If verbose is true, the installation prints information about the progress of the process.\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#Enums","page":"Environment","title":"Enums","text":"","category":"section"},{"location":"reference/environment/","page":"Environment","title":"Environment","text":"MPI.ThreadLevel","category":"page"},{"location":"reference/environment/#MPI.ThreadLevel","page":"Environment","title":"MPI.ThreadLevel","text":"ThreadLevel\n\nAn Enum denoting the level of threading support in the current process:\n\nMPI.THREAD_SINGLE: Only one thread will execute.\nMPI.THREAD_FUNNELED: The process may be multi-threaded, but the application must ensure that only the main thread makes MPI calls. See Is_thread_main.\nMPI.THREAD_SERIALIZED: The process may be multi-threaded, and multiple threads may make MPI calls, but only one at a time (i.e. all MPI calls are serialized).\nMPI.THREAD_MULTIPLE: Multiple threads may call MPI, with no restrictions.\n\nSee also\n\nInit\nQuery_thread\n\n\n\n\n\n","category":"type"},{"location":"reference/environment/#Functions","page":"Environment","title":"Functions","text":"","category":"section"},{"location":"reference/environment/","page":"Environment","title":"Environment","text":"MPI.Abort\nMPI.Init\nMPI.Query_thread\nMPI.Is_thread_main\nMPI.Initialized\nMPI.Finalize\nMPI.Finalized\nMPI.add_init_hook!\nMPI.run_init_hooks\nMPI.add_finalize_hook!","category":"page"},{"location":"reference/environment/#MPI.Abort","page":"Environment","title":"MPI.Abort","text":"Abort(comm::Comm, errcode::Integer)\n\nMake a “best attempt” to abort all tasks in the group of comm. This function does not require that the invoking environment take any action with the error code. However, a Unix or POSIX environment should handle this as a return errorcode from the main program.\n\nExternal links\n\nMPI_Abort man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#MPI.Init","page":"Environment","title":"MPI.Init","text":"Init(;threadlevel=:serialized, finalize_atexit=true, errors_return=true)\n\nInitialize MPI in the current process. The keyword options:\n\nthreadlevel: either :single, :funneled, :serialized (default), :multiple, or an instance of ThreadLevel.\nfinalize_atexit: if true (default), adds an atexit hook to call MPI.Finalize if it hasn't already been called.\nerrors_return: if true (default), will set the default error handlers for MPI.COMM_SELF and MPI.COMM_WORLD to be MPI.ERRORS_RETURN. MPI errors will then appear as Julia exceptions.\n\nIt will return the ThreadLevel value which MPI is initialized at.\n\nAll MPI programs must call this function at least once before calling any other MPI operations: the only MPI functions that may be called before MPI.Init are MPI.Initialized and MPI.Finalized.\n\nIt is safe to call MPI.Init multiple times, however it is not valid to call it after calling MPI.Finalize.\n\nExternal links\n\nMPI_Init man page: OpenMPI, MPICH\nMPI_Init_thread man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#MPI.Query_thread","page":"Environment","title":"MPI.Query_thread","text":"Query_thread()\n\nQuery the level of threading support in the current process. 
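A small sketch tying the initialization and thread-support queries together; the choice of threadlevel below is purely illustrative.

```julia
# Sketch: request a threading level at startup and inspect what was provided.
using MPI
provided = MPI.Init(; threadlevel=:funneled)   # returns a ThreadLevel
@show provided MPI.Query_thread() MPI.Is_thread_main()
```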
Returns a ThreadLevel value denoting the level of threading support provided.\n\nExternal links\n\nMPI_Query_thread man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#MPI.Is_thread_main","page":"Environment","title":"MPI.Is_thread_main","text":"Is_thread_main()\n\nQueries whether the current thread is the main thread according to MPI. This can be called by any thread, and is useful for the THREAD_FUNNELED ThreadLevel.\n\nExternal links\n\nMPI_Is_thread_main man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#MPI.Initialized","page":"Environment","title":"MPI.Initialized","text":"Initialized()\n\nReturns true if MPI.Init has been called, false otherwise.\n\nIt is unaffected by MPI.Finalize, and is one of the few functions that may be called before MPI.Init.\n\nExternal links\n\nMPI_Initialized man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#MPI.Finalize","page":"Environment","title":"MPI.Finalize","text":"Finalize()\n\nMarks MPI state for cleanup. This should be called after MPI.Init, and can be called at most once. No further MPI calls (other than Initialized or Finalized) should be made after it is called.\n\nMPI.Init will automatically insert a hook to call this function when Julia exits, if it hasn't already been called.\n\nExternal links\n\nMPI_Finalize man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#MPI.Finalized","page":"Environment","title":"MPI.Finalized","text":"Finalized()\n\nReturns true if MPI.Finalize has completed, false otherwise.\n\nIt is safe to call before MPI.Init and after MPI.Finalize.\n\nExternal links\n\nMPI_Finalized man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#MPI.add_init_hook!","page":"Environment","title":"MPI.add_init_hook!","text":"MPI.add_init_hook!(f)\n\nRegister a function f that will be called as f() when MPI.Init is called. These are invoked in a first-in, first-out (FIFO) order.\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#MPI.run_init_hooks","page":"Environment","title":"MPI.run_init_hooks","text":"MPI.run_init_hooks()\n\nExecute all functions that have been registered using MPI.add_init_hook!().\n\nThis function is executed automatically by MPI.Init() but must be invoked manually if MPI has been initialized externally by a direct call to MPI_Init(). It is safe to call this function multiple times (subsequent runs will be a no-op).\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#MPI.add_finalize_hook!","page":"Environment","title":"MPI.add_finalize_hook!","text":"MPI.add_finalize_hook!(f)\n\nRegister a function f that will be called as f() when MPI.Finalize is called. These are invoked in a last-in, first-out (LIFO) order.\n\n\n\n\n\n","category":"function"},{"location":"reference/environment/#Errors","page":"Environment","title":"Errors","text":"","category":"section"},{"location":"reference/environment/","page":"Environment","title":"Environment","text":"MPI.MPIError\nMPI.FeatureLevelError","category":"page"},{"location":"reference/environment/#MPI.MPIError","page":"Environment","title":"MPI.MPIError","text":"MPIError\n\nError thrown when an MPI function returns an error code. 
The code field contains the MPI error code.\n\n\n\n\n\n","category":"type"},{"location":"reference/environment/#MPI.API.FeatureLevelError","page":"Environment","title":"MPI.API.FeatureLevelError","text":"FeatureLevelError\n\nError thrown if a feature is not implemented in the current MPI backend.\n\n\n\n\n\n","category":"type"},{"location":"reference/group/#Groups","page":"Groups","title":"Groups","text":"","category":"section"},{"location":"reference/group/","page":"Groups","title":"Groups","text":"An MPI group is a set of process identifiers identified by their rank (see MPI.Comm_rank and MPI.Group_rank). They are used within a communicator to describe the participants in a communication universe.","category":"page"},{"location":"reference/group/#Types-and-enums","page":"Groups","title":"Types and enums","text":"","category":"section"},{"location":"reference/group/","page":"Groups","title":"Groups","text":"MPI.Group\nMPI.Comparison","category":"page"},{"location":"reference/group/#MPI.Group","page":"Groups","title":"MPI.Group","text":"MPI.Group\n\nAn MPI Group object.\n\n\n\n\n\n","category":"type"},{"location":"reference/group/#MPI.Comparison","page":"Groups","title":"MPI.Comparison","text":"Comparison\n\nAn enum denoting the result of Comm_compare:\n\nMPI.IDENT: the objects are handles for the same object (identical groups and same contexts).\nMPI.CONGRUENT: the underlying groups are identical in constituents and rank order; these communicators differ only by context.\nMPI.SIMILAR: members of both objects are the same but the rank order differs.\nMPI.UNEQUAL: otherwise\n\n\n\n\n\n","category":"type"},{"location":"reference/group/#Functions","page":"Groups","title":"Functions","text":"","category":"section"},{"location":"reference/group/#Operations","page":"Groups","title":"Operations","text":"","category":"section"},{"location":"reference/group/","page":"Groups","title":"Groups","text":"MPI.Group_size\nMPI.Group_rank","category":"page"},{"location":"reference/group/#MPI.Group_size","page":"Groups","title":"MPI.Group_size","text":"Group_size(group::Group)\n\nThe number of processes involved in group.\n\nExternal links\n\nMPI_Group_size man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/group/#MPI.Group_rank","page":"Groups","title":"MPI.Group_rank","text":"Group_rank(group::Group)\n\nThe rank of the process in the particular group.\n\nReturns an integer in the range 0:MPI.Group_size()-1.\n\nExternal links\n\nMPI_Group_rank man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#Topology","page":"Topology","title":"Topology","text":"","category":"section"},{"location":"reference/topology/#Cartesian","page":"Topology","title":"Cartesian","text":"","category":"section"},{"location":"reference/topology/","page":"Topology","title":"Topology","text":"MPI.Dims_create\nMPI.Cart_create\nMPI.Cart_get\nMPI.Cart_coords\nMPI.Cart_rank\nMPI.Cart_shift\nMPI.Cart_sub\nMPI.Cartdim_get","category":"page"},{"location":"reference/topology/#MPI.Dims_create","page":"Topology","title":"MPI.Dims_create","text":"newdims = Dims_create(nnodes::Integer, dims)\n\nA convenience function for selecting a balanced Cartesian grid of a total of nnodes nodes, for example to use with MPI.Cart_create.\n\ndims is an array or tuple of integers specifying the number of nodes in each dimension. 
The function returns an array newdims of the same length, such that newdims[i] = dims[i] if dims[i] is non-zero, prod(newdims) == nnodes, and the values of newdims are as close to each other as possible.\n\nnnodes should be divisible by the product of the non-zero entries of dims.\n\nExternal links\n\nMPI_Dims_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#MPI.Cart_create","page":"Topology","title":"MPI.Cart_create","text":"comm_cart = Cart_create(comm::Comm, dims; periodic=map(_->false, dims), reorder=false)\n\nCreate new MPI communicator with Cartesian topology information attached.\n\ndims is an array or tuple of integers specifying the number of MPI processes in each coordinate direction, and periodic is an array or tuple of Bools indicating the periodicity of each coordinate. prod(dims) must be less than or equal to the size of comm; if it is smaller, then some processes are returned a null communicator.\n\nIf reorder == false then the rank of each process in the new group is identical to its rank in the old group, otherwise the function may reorder the processes.\n\nSee also MPI.Dims_create.\n\nExternal links\n\nMPI_Cart_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#MPI.Cart_get","page":"Topology","title":"MPI.Cart_get","text":"dims, periods, coords = Cart_get(comm::Comm)\n\nObtain information on the Cartesian topology of dimension N underlying the communicator comm. This is specified by two Cint arrays of N elements for the number of processes and periodicity properties along each Cartesian dimension. A third Cint array is returned, containing the Cartesian coordinates of the calling process.\n\nExternal links\n\nMPI_Cart_get man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#MPI.Cart_coords","page":"Topology","title":"MPI.Cart_coords","text":"coords = Cart_coords(comm::Comm, rank::Integer=Comm_rank(comm))\n\nDetermine coordinates of a process with rank rank in the Cartesian communicator comm. If no rank is provided, it returns the coordinates of the current process.\n\nReturns an integer array of the 0-based coordinates. The inverse of Cart_rank.\n\nExternal links\n\nMPI_Cart_coords man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#MPI.Cart_rank","page":"Topology","title":"MPI.Cart_rank","text":"rank = Cart_rank(comm::Comm, coords)\n\nDetermine process rank in communicator comm with Cartesian structure. The coords array specifies the 0-based Cartesian coordinates of the process. 
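Pulling these functions together, here is a sketch of a 2-D periodic process grid with a shift-style exchange along one dimension; the buffer size and the use of the default tags are arbitrary illustrations.

```julia
# Sketch: balanced 2-D process grid, neighbour lookup, and a shift exchange.
using MPI
MPI.Init()
comm   = MPI.COMM_WORLD
dims   = MPI.Dims_create(MPI.Comm_size(comm), (0, 0))   # let MPI pick both extents
cart   = MPI.Cart_create(comm, dims; periodic=(true, true))
coords = MPI.Cart_coords(cart)                           # 0-based coordinates of this rank
src, dst = MPI.Cart_shift(cart, 0, 1)                    # neighbours along dimension 0
sendbuf = fill(Float64(MPI.Comm_rank(cart)), 4)
recvbuf = similar(sendbuf)
MPI.Sendrecv!(sendbuf, recvbuf, cart; dest=dst, source=src)
```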
This is the inverse of MPI.Cart_coords\n\nExternal links\n\nMPI_Cart_rank man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#MPI.Cart_shift","page":"Topology","title":"MPI.Cart_shift","text":"rank_source, rank_dest = Cart_shift(comm::Comm, direction::Integer, disp::Integer)\n\nReturn the source and destination ranks associated to a shift along a given direction.\n\nExternal links\n\nMPI_Cart_shift man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#MPI.Cart_sub","page":"Topology","title":"MPI.Cart_sub","text":"comm_sub = Cart_sub(comm::Comm, remain_dims)\n\nCreate lower-dimensional Cartesian communicator from existent Cartesian topology.\n\nremain_dims should be a boolean vector specifying the dimensions that should be kept in the generated subgrid.\n\nExternal links\n\nMPI_Cart_sub man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#MPI.Cartdim_get","page":"Topology","title":"MPI.Cartdim_get","text":"ndims = Cartdim_get(comm::Comm)\n\nReturn number of dimensions of the Cartesian topology associated with the communicator comm.\n\nExternal links\n\nMPI_Cartdim_get man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#Graph-topology","page":"Topology","title":"Graph topology","text":"","category":"section"},{"location":"reference/topology/","page":"Topology","title":"Topology","text":"MPI.UNWEIGHTED\nMPI.Dist_graph_create\nMPI.Dist_graph_create_adjacent\nMPI.Dist_graph_neighbors_count\nMPI.Dist_graph_neighbors!\nMPI.Dist_graph_neighbors","category":"page"},{"location":"reference/topology/#MPI.UNWEIGHTED","page":"Topology","title":"MPI.UNWEIGHTED","text":"MPI.UNWEIGHTED :: MPI.Unweighted\n\nThis is used to indicate that a graph topology is unweighted. It can be supplied as an argument to Dist_graph_create_adjacent, Dist_graph_create, and Dist_graph_neighbors!; or obtained as the return value from Dist_graph_neighbors.\n\n\n\n\n\n","category":"constant"},{"location":"reference/topology/#MPI.Dist_graph_create","page":"Topology","title":"MPI.Dist_graph_create","text":"graph_comm = Dist_graph_create(comm::Comm, sources::Vector{Cint}, degrees::Vector{Cint}, destinations::Vector{Cint}; weights::Union{Vector{Cint}, Unweighted, WeightsEmpty}=UNWEIGHTED, reorder=false, infokws...)\n\nCreate a new communicator from a given directed graph topology, described by incoming and outgoing edges on an existing communicator.\n\nArguments\n\ncomm::Comm: The communicator on which the distributed graph topology should be induced.\nsources::Vector{Cint}: An array with the ranks for which this call will specify outgoing edges.\ndegrees::Vector{Cint}: An array with the number of outgoing edges for each entry in the sources array.\ndestinations::Vector{Cint}: An array containing destination nodes for the source nodes in the source node array, of lengthsum(sources).\nweights::Union{Vector{Cint}, Unweighted, WeightsEmpty}: The edge weights of the specified edges. The default is MPI.UNWEIGHTED.\nreorder::Bool=false: If set true, then the MPI implementation can reorder the source and destination indices.\n\nExample\n\nWe can generate a ring graph 1 --> 2 --> ... 
--> N --> 1, where N is the number of ranks in the communicator, as follows\n\njulia> rank = MPI.Comm_rank(comm);\njulia> N = MPI.Comm_size(comm);\njulia> sources = Cint[rank];\njulia> degrees = Cint[1];\njulia> destinations = Cint[mod(rank-1, N)];\njulia> graph_comm = Dist_graph_create(comm, sources, degrees, destinations)\n\nExternal links\n\nMPI_Dist_graph_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#MPI.Dist_graph_create_adjacent","page":"Topology","title":"MPI.Dist_graph_create_adjacent","text":"graph_comm = Dist_graph_create_adjacent(comm::Comm,\n sources::Vector{Cint}, destinations::Vector{Cint};\n source_weights::Union{Vector{Cint}, Unweighted, WeightsEmpty}=UNWEIGHTED, destination_weights::Union{Vector{Cint}, Unweighted, WeightsEmpty}=UNWEIGHTED,\n reorder=false, infokws...)\n\nCreate a new communicator from a given directed graph topology, described by local incoming and outgoing edges on an existing communicator.\n\nArguments\n\ncomm::Comm: The communicator on which the distributed graph topology should be induced.\nsources::Vector{Cint}: The local, incoming edges on the rank of the calling process.\ndestinations::Vector{Cint}: The local, outgoing edges on the rank of the calling process.\nsource_weights::Union{Vector{Cint}, Unweighted, WeightsEmpty}: The edge weights of the local, incoming edges. The default is MPI.UNWEIGHTED.\ndestinations_weights::Union{Vector{Cint}, Unweighted, WeightsEmpty}: The edge weights of the local, outgoing edges. The default is MPI.UNWEIGHTED.\nreorder::Bool=false: If set true, then the MPI implementation can reorder the source and destination indices.\n\nExample\n\nWe can generate a ring graph 1 --> 2 --> ... --> N --> 1, where N is the number of ranks in the communicator, as follows\n\njulia> rank = MPI.Comm_rank(comm);\njulia> N = MPI.Comm_size(comm);\njulia> sources = Cint[mod(rank-1, N)];\njulia> destinations = Cint[mod(rank+1, N)];\njulia> graph_comm = Dist_graph_create_adjacent(comm, sources, destinations);\n\nExternal links\n\nMPI_Dist_graph_create_adjacent man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#MPI.Dist_graph_neighbors_count","page":"Topology","title":"MPI.Dist_graph_neighbors_count","text":"indegree, outdegree, weighted = Dist_graph_neighbors_count(graph_comm::Comm)\n\nReturn the number of in and out edges for the calling processes in a distributed graph topology and a flag indicating whether the distributed graph is weighted.\n\nArguments\n\ngraph_comm::Comm: The communicator of the distributed graph topology.\n\nExample\n\nLet us assume the following graph 0 <--> 1 --> 2, which has no weights on its edges, then the process with rank 1 will obtain the following result from calling the function\n\njulia> Dist_graph_neighbors_count(graph_comm)\n(1,2,false)\n\nExternal links\n\nMPI_Dist_graph_neighbors_count man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#MPI.Dist_graph_neighbors!","page":"Topology","title":"MPI.Dist_graph_neighbors!","text":"Dist_graph_neighbors!(graph_comm::MPI.Comm,\n sources::Vector{Cint}, source_weights::Union{Vector{Cint}, Unweighted},\n destinations::Vector{Cint}, destination_weights::Union{Vector{Cint}, Unweighted},\n)\nDist_graph_neighbors!(graph_comm::Comm, sources::Vector{Cint}, destinations::Vector{Cint})\n\nQuery the neighbors and edge weights (optional) of the calling process in a distributed graph topology.\n\nArguments\n\ngraph_comm::Comm: The 
communicator of the distributed graph topology.\nsources: A preallocated Vector{Cint}, which will be filled with the ranks of the processes whose edges point towards the calling process. The length is exactly the indegree returned by MPI.Dist_graph_neighbors_count.\nsource_weights: A preallocated Vector{Cint}, which will be filled with the weights associated to the edges pointing towards the calling process. The length is exactly the indegree returned by MPI.Dist_graph_neighbors_count. Alternatively, MPI.UNWEIGHTED can be used if weight information is not required.\ndestinations: A preallocated Vector{Cint}, which will be filled with the ranks of the processes towards which the edges of the calling process point. The length is exactly the outdegree returned by MPI.Dist_graph_neighbors_count.\ndestination_weights: A preallocated Vector{Cint}, which will be filled with the weights associated to the outgoing edges of the calling process. The length is exactly the outdegree returned by MPI.Dist_graph_neighbors_count. Alternatively, MPI.UNWEIGHTED can be used if weight information is not required.\n\nExample\n\nLet us assume the following graph:\n\n rank 0 <-----> rank 1 ------> rank 2\nweights: 3 4\n\nthen the process with rank 1 will need to preallocate sources and source_weights as vectors of length 1, and destinations and destination_weights as vectors of length 2.\n\nThe call will fill the vectors as follows:\n\njulia> MPI.Dist_graph_neighbors!(graph_comm, sources, source_weights, destinations, destination_weights);\njulia> sources\n[0]\njulia> source_weights\n[3]\njulia> destinations\n[0,2]\njulia> destination_weights\n[3,4]\n\nNote that the edge between ranks 0 and 1 can have a different weight depending on whether it is the incoming edge 0 --> 1 or the outgoing one 0 <-- 1.\n\nSee also\n\nDist_graph_neighbors\n\nExternal links\n\nMPI_Dist_graph_neighbors man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/topology/#MPI.Dist_graph_neighbors","page":"Topology","title":"MPI.Dist_graph_neighbors","text":"sources, source_weights, destinations, destination_weights = Dist_graph_neighbors(graph_comm::MPI.Comm)\n\nReturn (sources, source_weights, destinations, destination_weights) of the graph communicator graph_comm. For unweighted graphs source_weights and destination_weights are returned as MPI.UNWEIGHTED.\n\nThis function is a wrapper around MPI.Dist_graph_neighbors_count and MPI.Dist_graph_neighbors! that automatically handles the allocation of the result vectors.\n\n\n\n\n\n","category":"function"},{"location":"examples/05-job_schedule/","page":"Job Scheduling","title":"Job Scheduling","text":"EditURL = \"https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/05-job_schedule.jl\"","category":"page"},{"location":"examples/05-job_schedule/#Job-Scheduling","page":"Job Scheduling","title":"Job Scheduling","text":"","category":"section"},{"location":"examples/05-job_schedule/","page":"Job Scheduling","title":"Job Scheduling","text":"# examples/05-job_schedule.jl\n# This example demonstrates job scheduling by adding the\n# number 100 to every component of the vector data. 
The root\n# assigns one element to each worker to compute the operation.\n# When the worker is finished, the root sends another element\n# until each element is added 100\n# Inspired by https://www.hpc.ntnu.no/vilje/software/mpi-and-mpi-io-training-tutorial/\n# https://www.hpc.ntnu.no/vilje/software/mpi-and-mpi-io-training-tutorial/basic-mpi/job-queue/\n# an updated job_queue.c is available in the basic_mpi/04_job_queue/src subdirectory of\n# the extracted https://www.hpc.ntnu.no/wp-content/uploads/2019/09/mpiexamples.tar.gz\n\nusing MPI\n\nfunction job_queue(data,f)\n MPI.Init()\n\n comm = MPI.COMM_WORLD\n rank = MPI.Comm_rank(comm)\n world_size = MPI.Comm_size(comm)\n nworkers = world_size - 1\n\n root = 0\n\n MPI.Barrier(comm)\n T = eltype(data)\n N = size(data)[1]\n send_mesg = Array{T}(undef, 1)\n recv_mesg = Array{T}(undef, 1)\n\n if rank == root # I am root\n\n idx_recv = 0\n idx_sent = 1\n\n new_data = Array{T}(undef, N)\n # Array of workers requests\n sreqs_workers = Array{MPI.Request}(undef,nworkers)\n # -1 = start, 0 = channel not available, 1 = channel available\n status_workers = ones(nworkers).*-1\n\n # Send message to workers\n for dst in 1:nworkers\n if idx_sent > N\n break\n end\n send_mesg[1] = data[idx_sent]\n sreq = MPI.Isend(send_mesg, comm; dest=dst, tag=dst+32)\n idx_sent += 1\n sreqs_workers[dst] = sreq\n status_workers[dst] = 0\n print(\"Root: Sent number $(send_mesg[1]) to Worker $dst\\n\")\n end\n\n # Send and receive messages until all elements are added\n while idx_recv != N\n # Check to see if there is an available message to receive\n for dst in 1:nworkers\n if status_workers[dst] == 0\n flag = MPI.Test(sreqs_workers[dst])\n if flag\n status_workers[dst] = 1\n end\n end\n end\n for dst in 1:nworkers\n if status_workers[dst] == 1\n ismessage = MPI.Iprobe(comm; source=dst, tag=dst+32)\n if ismessage\n # Receives message\n MPI.Recv!(recv_mesg, comm; source=dst, tag=dst+32)\n idx_recv += 1\n new_data[idx_recv] = recv_mesg[1]\n print(\"Root: Received number $(recv_mesg[1]) from Worker $dst\\n\")\n if idx_sent <= N\n send_mesg[1] = data[idx_sent]\n # Sends new message\n sreq = MPI.Isend(send_mesg, comm; dest=dst, tag=dst+32)\n idx_sent += 1\n sreqs_workers[dst] = sreq\n status_workers[dst] = 1\n print(\"Root: Sent number $(send_mesg[1]) to Worker $dst\\n\")\n end\n end\n end\n end\n end\n\n for dst in 1:nworkers\n # Termination message to worker\n send_mesg[1] = -1\n sreq = MPI.Isend(send_mesg, comm; dest=dst, tag=dst+32)\n sreqs_workers[dst] = sreq\n status_workers[dst] = 0\n print(\"Root: Finish Worker $dst\\n\")\n end\n\n MPI.Waitall(sreqs_workers)\n print(\"Root: New data = $new_data\\n\")\n else # If rank == worker\n # -1 = start, 0 = channel not available, 1 = channel available\n status_worker = -1\n while true\n sreqs_workers = Array{MPI.Request}(undef,1)\n ismessage = MPI.Iprobe(comm; source=root, tag=rank+32)\n\n if ismessage\n # Receives message\n MPI.Recv!(recv_mesg, comm; source=root, tag=rank+32)\n # Termination message from root\n if recv_mesg[1] == -1\n print(\"Worker $rank: Finish\\n\")\n break\n end\n print(\"Worker $rank: Received number $(recv_mesg[1]) from root\\n\")\n # Apply function (add number 100) to array\n send_mesg = f(recv_mesg)\n sreq = MPI.Isend(send_mesg, comm; dest=root, tag=rank+32)\n sreqs_workers[1] = sreq\n status_worker = 0\n end\n # Check to see if there is an available message to receive\n if status_worker == 0\n flag = MPI.Test(sreqs_workers[1])\n if flag\n status_worker = 1\n end\n end\n end\n end\n MPI.Barrier(comm)\n 
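# All ranks synchronize at the barrier above before shutting down MPI,\n # so no process reaches MPI.Finalize() while another may still be\n # completing its final sends or receives.\n 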
MPI.Finalize()\nend\n\nf = x -> x.+100\ndata = collect(1:10)\njob_queue(data,f)","category":"page"},{"location":"examples/05-job_schedule/","page":"Job Scheduling","title":"Job Scheduling","text":"> mpiexecjl -n 4 julia examples/05-job_schedule.jl\nRoot: Sent number 1 to Worker 1\nWorker 1: Received number 1 from root\nRoot: Sent number 2 to Worker 2\nRoot: Sent number 3 to Worker 3\nRoot: Received number 101 from Worker 1\nRoot: Sent number 4 to Worker 1\nWorker 1: Received number 4 from root\nRoot: Received number 104 from Worker 1\nRoot: Sent number 5 to Worker 1\nWorker 1: Received number 5 from root\nRoot: Received number 105 from Worker 1\nRoot: Sent number 6 to Worker 1\nWorker 1: Received number 6 from root\nRoot: Received number 106 from Worker 1\nRoot: Sent number 7 to Worker 1\nWorker 1: Received number 7 from root\nRoot: Received number 107 from Worker 1\nRoot: Sent number 8 to Worker 1\nWorker 1: Received number 8 from root\nRoot: Received number 108 from Worker 1\nRoot: Sent number 9 to Worker 1\nWorker 1: Received number 9 from root\nRoot: Received number 109 from Worker 1\nRoot: Sent number 10 to Worker 1\nWorker 1: Received number 10 from root\nRoot: Received number 110 from Worker 1\nWorker 2: Received number 2 from root\nRoot: Received number 102 from Worker 2\nWorker 3: Received number 3 from root\nRoot: Received number 103 from Worker 3\nRoot: Finish Worker 1\nWorker 1: Finish\nRoot: Finish Worker 2\nWorker 2: Finish\nRoot: Finish Worker 3\nWorker 3: Finish\nRoot: New data = [101, 104, 105, 106, 107, 108, 109, 110, 102, 103]","category":"page"},{"location":"reference/advanced/#Advanced","page":"Advanced","title":"Advanced","text":"","category":"section"},{"location":"reference/advanced/#Object-handling","page":"Advanced","title":"Object handling","text":"","category":"section"},{"location":"reference/advanced/","page":"Advanced","title":"Advanced","text":"MPI.free","category":"page"},{"location":"reference/advanced/#MPI.free","page":"Advanced","title":"MPI.free","text":"MPI.free(obj)\n\nFree the MPI object handle obj. This is typically used as the finalizer, and so need not be called directly unless otherwise noted.\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#Datatype-objects","page":"Advanced","title":"Datatype objects","text":"","category":"section"},{"location":"reference/advanced/","page":"Advanced","title":"Advanced","text":"MPI.Datatype\nMPI.to_type\nMPI.Types.extent\nMPI.Types.create_contiguous\nMPI.Types.create_vector\nMPI.Types.create_hvector\nMPI.Types.create_subarray\nMPI.Types.create_struct\nMPI.Types.create_resized\nMPI.Types.commit!\nMPI.Types.duplicate","category":"page"},{"location":"reference/advanced/#MPI.Datatype","page":"Advanced","title":"MPI.Datatype","text":"Datatype\n\nA Datatype represents the layout of the data in memory.\n\nUsage\n\nDatatype(T)\n\nEither return the predefined Datatype corresponding to T, or create a new Datatype for the Julia type T, calling Types.commit! 
so that it can be used for communication operations.\n\nNote that this can only be called on types for which isbitstype(T) is true.\n\n\n\n\n\n","category":"type"},{"location":"reference/advanced/#MPI.to_type","page":"Advanced","title":"MPI.to_type","text":"to_type(datatype::Datatype)\n\nReturn the Julia type corresponding to the MPI Datatype datatype, or nothing if it doesn't correspond directly.\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#MPI.Types.extent","page":"Advanced","title":"MPI.Types.extent","text":"lb, extent = MPI.Types.extent(dt::MPI.Datatype)\n\nGets the lowerbound lb and the extent extent in bytes.\n\nExternal links\n\nMPI_Type_get_extent man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#MPI.Types.create_contiguous","page":"Advanced","title":"MPI.Types.create_contiguous","text":"MPI.Types.create_contiguous(count::Integer, oldtype::MPI.Datatype)\n\nCreate a derived Datatype that replicates oldtype into count contiguous locations.\n\nNote that MPI.Types.commit! must be used before the datatype can be used for communication.\n\nExternal links\n\nMPI_Type_contiguous man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#MPI.Types.create_vector","page":"Advanced","title":"MPI.Types.create_vector","text":"MPI.Types.create_vector(count::Integer, blocklength::Integer, stride::Integer, oldtype::MPI.Datatype)\n\nCreate a derived Datatype that replicates oldtype into locations that consist of equally spaced blocks.\n\nNote that MPI.Types.commit! must be used before the datatype can be used for communication.\n\nExample\n\ndatatype = MPI.Types.create_vector(3, 2, 5, MPI.Datatype(Int64))\nMPI.Types.commit!(datatype)\n\nwill create a datatype with the following layout\n\n|<----->| block length\n\n+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+\n| X | X | | | | X | X | | | | X | X | | | |\n+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+\n\n|<---- stride ----->|\n\nwhere each segment represents an Int64.\n\n(image by Jonathan Dursi, https://stackoverflow.com/a/10788351/392585)\n\nExternal links\n\nMPI_Type_vector man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#MPI.Types.create_hvector","page":"Advanced","title":"MPI.Types.create_hvector","text":"MPI.Types.create_hvector(count::Integer, blocklength::Integer, stride::Integer, oldtype::MPI.Datatype)\n\nCreate a derived Datatype that replicates oldtype into locations that consist of equally spaced (bytes) blocks.\n\nNote that MPI.Types.commit! must be used before the datatype can be used for communication.\n\nExample\n\ndatatype = MPI.Types.create_hvector(3, 2, 5, MPI.Datatype(Int64))\nMPI.Types.commit!(datatype)\n\nExternal links\n\nMPI_Type_create_hvector man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#MPI.Types.create_subarray","page":"Advanced","title":"MPI.Types.create_subarray","text":"MPI.Types.create_subarray(sizes, subsizes, offset, oldtype::Datatype;\n rowmajor=false)\n\nCreates a derived Datatype describing an N-dimensional subarray of size subsizes of an N-dimensional array of size sizes and element type oldtype, with the first element offset by offset (i.e. the 0-based index of the first element).\n\nColumn-major indexing (used by Julia and Fortran) is assumed; use the keyword rowmajor=true to specify row-major layout (used by C and numpy).\n\nNote that MPI.Types.commit! 
must be used before the datatype can be used for communication.\n\nExternal links\n\nMPI_Type_create_subarray man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#MPI.Types.create_struct","page":"Advanced","title":"MPI.Types.create_struct","text":"MPI.Types.create_struct(blocklengths, displacements, types)\n\nCreates a derived Datatype describing a struct layout.\n\nNote that MPI.Types.commit! must be used before the datatype can be used for communication.\n\nExternal links\n\nMPI_Type_create_struct man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#MPI.Types.create_resized","page":"Advanced","title":"MPI.Types.create_resized","text":"MPI.Types.create_resized(oldtype::Datatype, lb::Integer, extent::Integer)\n\nCreates a new Datatype that is identical to oldtype, except that the lower bound of this new datatype is set to be lb, and its upper bound is set to be lb + extent.\n\nNote that MPI.Types.commit! must be used before the datatype can be used for communication.\n\nSee also\n\nMPI.Types.extent\n\nExternal links\n\nMPI_Type_create_resized man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#MPI.Types.commit!","page":"Advanced","title":"MPI.Types.commit!","text":"MPI.Types.commit!(newtype::Datatype)\n\nCommits the Datatype newtype so that it can be used for communication. Returns newtype.\n\nExternal links\n\nMPI_Type_commit man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#MPI.Types.duplicate","page":"Advanced","title":"MPI.Types.duplicate","text":"MPI.Types.duplicate(oldtype::Datatype)\n\nDuplicates the datatype oldtype.\n\nExternal links\n\nMPI_Type_dup man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#Operator-objects","page":"Advanced","title":"Operator objects","text":"","category":"section"},{"location":"reference/advanced/","page":"Advanced","title":"Advanced","text":"MPI.Op","category":"page"},{"location":"reference/advanced/#MPI.Op","page":"Advanced","title":"MPI.Op","text":"Op\n\nAn MPI reduction operator, for use with Reduce/Scan collective operations to wrap binary operators. MPI.jl will perform this conversion automatically.\n\nUsage\n\nOp(op, T=Any; iscommutative=false)\n\nWrap the Julia reduction function op for arguments of type T. 
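As a minimal sketch (the combiner below is an invented example, not part of the API), an associative Julia function with no predefined MPI operator can be wrapped and handed to a collective call:\n\nusing MPI\nMPI.Init()\ncomm = MPI.COMM_WORLD\n\n# combine(x, y) = (1 + x)*(1 + y) - 1 is associative and commutative,\n# but has no predefined MPI operator, so wrap it in an MPI.Op\ncombine(x, y) = (1 + x) * (1 + y) - 1\nop = MPI.Op(combine, Float64; iscommutative=true)\n\nr = Float64(MPI.Comm_rank(comm)) / 100\ntotal = MPI.Allreduce(r, op, comm)\n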
op is assumed to be associative, and if iscommutative is true, assumed to be commutative as well.\n\nSee also\n\nReduce!/Reduce\nAllreduce!/Allreduce\nScan!/Scan\nExscan!/Exscan\n\n\n\n\n\n","category":"type"},{"location":"reference/advanced/#Info-objects","page":"Advanced","title":"Info objects","text":"","category":"section"},{"location":"reference/advanced/","page":"Advanced","title":"Advanced","text":"MPI.Info\nMPI.infoval","category":"page"},{"location":"reference/advanced/#MPI.Info","page":"Advanced","title":"MPI.Info","text":"Info <: AbstractDict{Symbol,String}\n\nMPI.Info objects store key-value pairs, and are typically used for passing optional arguments to MPI functions.\n\nUsage\n\nThese will typically be hidden from user-facing APIs by splatting keywords, e.g.\n\nfunction f(args...; kwargs...)\n info = Info(kwargs...)\n # pass `info` object to `ccall`\nend\n\nFor manual usage, Info objects act like Julia Dict objects:\n\ninfo = Info(init=true) # keyword argument is required\ninfo[key] = value\nx = info[key]\ndelete!(info, key)\n\nIf init=false is used in the constructor (the default), a \"null\" Info object will be returned: no keys can be added to such an object.\n\n\n\n\n\n","category":"type"},{"location":"reference/advanced/#MPI.infoval","page":"Advanced","title":"MPI.infoval","text":"infoval(x)\n\nConvert Julia object x to a string representation for storing in an Info object.\n\nThe MPI specification allows passing strings, Boolean values, integers, and lists.\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#Error-handler-objects","page":"Advanced","title":"Error handler objects","text":"","category":"section"},{"location":"reference/advanced/","page":"Advanced","title":"Advanced","text":"MPI.Errhandler\nMPI.get_errorhandler\nMPI.set_errorhandler!\nMPI.set_default_error_handler_return","category":"page"},{"location":"reference/advanced/#MPI.Errhandler","page":"Advanced","title":"MPI.Errhandler","text":"MPI.Errhandler\n\nAn MPI error handler object. Currently only two are supported:\n\nERRORS_ARE_FATAL (default): program will immediately abort\nERRORS_RETURN: program will throw an MPIError.\n\n\n\n\n\n","category":"type"},{"location":"reference/advanced/#MPI.get_errorhandler","page":"Advanced","title":"MPI.get_errorhandler","text":"MPI.get_errorhandler(comm::MPI.Comm)\nMPI.get_errorhandler(win::MPI.Win)\nMPI.get_errorhandler(file::MPI.File.FileHandle)\n\nGet the current Errhandler for the relevant MPI object.\n\nSee also\n\nset_errorhandler!\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#MPI.set_errorhandler!","page":"Advanced","title":"MPI.set_errorhandler!","text":"MPI.set_errorhandler!(comm::MPI.Comm, errh::Errhandler)\nMPI.set_errorhandler!(win::MPI.Win, errh::Errhandler)\nMPI.set_errorhandler!(file::MPI.File.FileHandle, errh::Errhandler)\n\nSet the Errhandler for the relevant MPI object.\n\nSee also\n\nget_errorhandler\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#MPI.set_default_error_handler_return","page":"Advanced","title":"MPI.set_default_error_handler_return","text":"MPI.set_default_error_handler_return()\n\nSet the error handler for MPI_COMM_SELF and MPI_COMM_WORLD to MPI_ERRORS_RETURN. This will cause certain MPI errors to appear as Julia exceptions.\n\nThis function is executed automatically by MPI.Init() but may be invoked manually if MPI has been initialized externally by a direct call to MPI_Init(). 
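A minimal sketch of the externally-initialized scenario described above (assuming the embedding application has already called MPI_Init directly, so MPI.Init() is intentionally not called here):\n\nusing MPI\n# MPI was initialized by the host application with a direct MPI_Init() call,\n# so only the default error handlers need to be installed from the Julia side.\nMPI.set_default_error_handler_return()\n# Failing MPI calls on MPI_COMM_WORLD and MPI_COMM_SELF now throw an MPIError\n# exception instead of aborting the program.\n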
It is safe to call this function multiple times.\n\n\n\n\n\n","category":"function"},{"location":"reference/advanced/#Miscellaneous","page":"Advanced","title":"Miscellaneous","text":"","category":"section"},{"location":"reference/advanced/","page":"Advanced","title":"Advanced","text":"MPI.API.@const_ref","category":"page"},{"location":"reference/advanced/#MPI.API.@const_ref","page":"Advanced","title":"MPI.API.@const_ref","text":"@const_ref name T expr\n\nDefines an constant binding\n\nconst name = Ref{T}()\n\nand adds a hook to execute\n\nname[] = expr\n\nat module initialization time.\n\n\n\n\n\n","category":"macro"},{"location":"examples/06-scatterv/","page":"Scatterv and Gatherv","title":"Scatterv and Gatherv","text":"EditURL = \"https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/06-scatterv.jl\"","category":"page"},{"location":"examples/06-scatterv/#Scatterv-and-Gatherv","page":"Scatterv and Gatherv","title":"Scatterv and Gatherv","text":"","category":"section"},{"location":"examples/06-scatterv/","page":"Scatterv and Gatherv","title":"Scatterv and Gatherv","text":"# examples/06-scatterv.jl\n# This example shows how to use MPI.Scatterv! and MPI.Gatherv!\n# roughly based on the example from\n# https://stackoverflow.com/a/36082684/392585\n\nusing MPI\n\n\"\"\"\n split_count(N::Integer, n::Integer)\n\nReturn a vector of `n` integers which are approximately equally sized and sum to `N`.\n\"\"\"\nfunction split_count(N::Integer, n::Integer)\n q,r = divrem(N, n)\n return [i <= r ? q+1 : q for i = 1:n]\nend\n\n\nMPI.Init()\n\ncomm = MPI.COMM_WORLD\nrank = MPI.Comm_rank(comm)\ncomm_size = MPI.Comm_size(comm)\n\nroot = 0\n\nif rank == root\n M, N = 4, 7\n\n test = Float64[i for i = 1:M, j = 1:N]\n output = similar(test)\n \n # Julia arrays are stored in column-major order, so we need to split along the last dimension\n # dimension\n M_counts = [M for i = 1:comm_size]\n N_counts = split_count(N, comm_size)\n\n # store sizes in 2 * comm_size Array\n sizes = vcat(M_counts', N_counts')\n size_ubuf = UBuffer(sizes, 2)\n\n # store number of values to send to each rank in comm_size length Vector\n counts = vec(prod(sizes, dims=1))\n\n test_vbuf = VBuffer(test, counts) # VBuffer for scatter\n output_vbuf = VBuffer(output, counts) # VBuffer for gather\nelse\n # these variables can be set to `nothing` on non-root processes\n size_ubuf = UBuffer(nothing)\n output_vbuf = test_vbuf = VBuffer(nothing)\nend\n\nif rank == root\n println(\"Original matrix\")\n println(\"================\")\n @show test sizes counts\n println()\n println(\"Each rank\")\n println(\"================\")\nend \nMPI.Barrier(comm)\n\nlocal_size = MPI.Scatter(size_ubuf, NTuple{2,Int}, root, comm)\nlocal_test = MPI.Scatterv!(test_vbuf, zeros(Float64, local_size), root, comm)\n\nfor i = 0:comm_size-1\n if rank == i\n @show rank local_test\n end\n MPI.Barrier(comm)\nend\n\nMPI.Gatherv!(local_test, output_vbuf, root, comm)\n\nif rank == root\n println()\n println(\"Final matrix\")\n println(\"================\")\n @show output\nend ","category":"page"},{"location":"examples/06-scatterv/","page":"Scatterv and Gatherv","title":"Scatterv and Gatherv","text":"> mpiexecjl -n 4 julia examples/06-scatterv.jl\nOriginal matrix\n================\ntest = [1.0 1.0 1.0 1.0 1.0 1.0 1.0; 2.0 2.0 2.0 2.0 2.0 2.0 2.0; 3.0 3.0 3.0 3.0 3.0 3.0 3.0; 4.0 4.0 4.0 4.0 4.0 4.0 4.0]\nsizes = [4 4 4 4; 2 2 2 1]\ncounts = [8, 8, 8, 4]\n\nEach rank\n================\nrank = 0\nlocal_test = [1.0 1.0; 2.0 2.0; 3.0 3.0; 4.0 4.0]\nrank = 1\nlocal_test = 
[1.0 1.0; 2.0 2.0; 3.0 3.0; 4.0 4.0]\nrank = 2\nlocal_test = [1.0 1.0; 2.0 2.0; 3.0 3.0; 4.0 4.0]\nrank = 3\nlocal_test = [1.0; 2.0; 3.0; 4.0;;]\n\nFinal matrix\n================\noutput = [1.0 1.0 1.0 1.0 1.0 1.0 1.0; 2.0 2.0 2.0 2.0 2.0 2.0 2.0; 3.0 3.0 3.0 3.0 3.0 3.0 3.0; 4.0 4.0 4.0 4.0 4.0 4.0 4.0]","category":"page"},{"location":"reference/mpipreferences/#MPIPreferences.jl","page":"MPIPreferences.jl","title":"MPIPreferences.jl","text":"","category":"section"},{"location":"reference/mpipreferences/","page":"MPIPreferences.jl","title":"MPIPreferences.jl","text":"MPIPreferences.jl is a small package based on Preferences.jl for selecting MPI implementations. These choices are compile-time constants, and so any changes will require a Julia restart.","category":"page"},{"location":"reference/mpipreferences/#Consts","page":"MPIPreferences.jl","title":"Consts","text":"","category":"section"},{"location":"reference/mpipreferences/","page":"MPIPreferences.jl","title":"MPIPreferences.jl","text":"MPIPreferences.binary\nMPIPreferences.abi","category":"page"},{"location":"reference/mpipreferences/#MPIPreferences.binary","page":"MPIPreferences.jl","title":"MPIPreferences.binary","text":"MPIPreferences.binary :: String\n\nThe currently selected binary. The possible values are\n\n\"MPICH_jll\": use the binary provided by MPICH_jll\n\"OpenMPI_jll\": use the binary provided by OpenMPI_jll\n\"MicrosoftMPI_jll\": use binary provided by MicrosoftMPI_jll\n\"MPItrampoline_jll\": use the binary provided by MPItrampoline_jll\n\"system\": use a system-provided binary.\n\n\n\n\n\n","category":"constant"},{"location":"reference/mpipreferences/#MPIPreferences.abi","page":"MPIPreferences.jl","title":"MPIPreferences.abi","text":"MPIPreferences.abi :: String\n\nThe ABI (application binary interface) of the currently selected binary. Supported values are:\n\n\"MPICH\": MPICH-compatible ABI (https://www.mpich.org/abi/)\n\"OpenMPI\": Open MPI compatible ABI (Open MPI, IBM Spectrum MPI, Fujitsu MPI)\n\"MicrosoftMPI\": Microsoft MPI\n\"MPItrampoline\": MPItrampoline\n\"HPE MPT\": HPE MPT\n\n\n\n\n\n","category":"constant"},{"location":"reference/mpipreferences/#Changing-implementations","page":"MPIPreferences.jl","title":"Changing implementations","text":"","category":"section"},{"location":"reference/mpipreferences/","page":"MPIPreferences.jl","title":"MPIPreferences.jl","text":"MPIPreferences.use_system_binary\nMPIPreferences.use_jll_binary","category":"page"},{"location":"reference/mpipreferences/#MPIPreferences.use_system_binary","page":"MPIPreferences.jl","title":"MPIPreferences.use_system_binary","text":"use_system_binary(;\n library_names = [\"libmpi\", \"libmpi_ibm\", \"msmpi\", \"libmpich\", \"libmpi_cray\", \"libmpitrampoline\"],\n extra_paths = String[],\n mpiexec = \"mpiexec\",\n abi = nothing,\n vendor = nothing,\n export_prefs = false,\n force = true)\n\nSwitches the underlying MPI implementation to a system provided one. A restart of Julia is required for the changes to take effect.\n\nOptions:\n\nlibrary_names: a name or collection of names of the MPI library, passed to Libdl.find_library. If the library isn't in the library search path, you can specify the full path to the library.\nextra_paths: indicate extra directories where to search for the MPI library, besides the default ones of the dynamic linker.\nmpiexec: the MPI launcher executable. The default is mpiexec, but some clusters require using the scheduler launcher interface (e.g. srun on Slurm, aprun on PBS). 
It is also possible to pass a Cmd object to include specific command line options.\nabi: the ABI of the MPI library. By default this is determined automatically using identify_abi. See abi for currently supported values.\nvendor: can be either nothing or a vendor name (such as \"cray\"). If vendor has the value \"cray\", then the output from cc --cray-print-opts=all is parsed for which libraries are linked by the Cray Compiler Wrappers. Note that if mpi_gtl_* is present, then this .so will be added to the preloads. Also note that the inputs to library_names will be overwritten by the library name used by the compiler wrapper.\nexport_prefs: if true, the preferences are written to Project.toml instead of LocalPreferences.toml.\nforce: if true, the preferences are set even if they are already set.\n\n\n\n\n\n","category":"function"},{"location":"reference/mpipreferences/#MPIPreferences.use_jll_binary","page":"MPIPreferences.jl","title":"MPIPreferences.use_jll_binary","text":"use_jll_binary([binary]; export_prefs=false, force=true)\n\nSwitches the underlying MPI implementation to one provided by JLL packages. A restart of Julia is required for the changes to take effect.\n\nAvailable options are:\n\n\"MicrosoftMPI_jll\" (Only option and default on Windows)\n\"MPICH_jll\" (Default on all other platforms)\n\"OpenMPI_jll\"\n\"MPItrampoline_jll\"\n\nThe export_prefs option determines whether the preferences being set should be stored within LocalPreferences.toml or Project.toml.\n\n\n\n\n\n","category":"function"},{"location":"reference/mpipreferences/#Utils","page":"MPIPreferences.jl","title":"Utils","text":"","category":"section"},{"location":"reference/mpipreferences/","page":"MPIPreferences.jl","title":"MPIPreferences.jl","text":"MPIPreferences.check_unchanged\nMPIPreferences.identify_abi\nMPIPreferences.dlopen_preloads","category":"page"},{"location":"reference/mpipreferences/#MPIPreferences.check_unchanged","page":"MPIPreferences.jl","title":"MPIPreferences.check_unchanged","text":"MPIPreferences.check_unchanged()\n\nThrows an error if the preferences have been modified in the current Julia session, or if they are modified after this function is called.\n\nThis should be called from the __init__() function of any package which relies on the values of MPIPreferences.\n\n\n\n\n\n","category":"function"},{"location":"reference/mpipreferences/#MPIPreferences.identify_abi","page":"MPIPreferences.jl","title":"MPIPreferences.identify_abi","text":"identify_abi(libmpi)\n\nIdentify the MPI implementation from the library version string\n\n\n\n\n\n","category":"function"},{"location":"reference/mpipreferences/#MPIPreferences.Preloads.dlopen_preloads","page":"MPIPreferences.jl","title":"MPIPreferences.Preloads.dlopen_preloads","text":"dlopen_preloads()\n\ndlopen's all preloads specified in the preloads section of MPIPreferences\n\n\n\n\n\n","category":"function"},{"location":"reference/mpipreferences/#Preferences-schema","page":"MPIPreferences.jl","title":"Preferences schema","text":"","category":"section"},{"location":"reference/mpipreferences/","page":"MPIPreferences.jl","title":"MPIPreferences.jl","text":"MPIPreferences utilizes the following keys to store information in the Preferences key-value store.","category":"page"},{"location":"reference/mpipreferences/","page":"MPIPreferences.jl","title":"MPIPreferences.jl","text":"_format: the version number of the schema. Currently only \"1.0\" is supported.\nbinary: the choice of binary. 
This should be one of the strings listed in MPIPreferences.binary.","category":"page"},{"location":"reference/mpipreferences/","page":"MPIPreferences.jl","title":"MPIPreferences.jl","text":"If binary == \"system\", then the following keys are also required (otherwise they have no effect):","category":"page"},{"location":"reference/mpipreferences/","page":"MPIPreferences.jl","title":"MPIPreferences.jl","text":"libmpi: the filename or path of the MPI dynamic library.\nabi: The ABI of the MPI implementation. This should be one of the strings listed in MPIPreferences.abi.\nmpiexec: either\na string corresponding to the MPI launcher executable\nan array of strings, with the first entry being the executable and remaining entries being additional flags that should be used with the executable.","category":"page"},{"location":"configuration/#Configuration","page":"Configuration","title":"Configuration","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"By default, MPI.jl will download and link against the following MPI implementations:","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Microsoft MPI on Windows\nMPICH on all other platforms","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"This is suitable for most single-node use cases, but for larger systems, such as HPC clusters or multi-GPU machines, you will probably want to configure against a system-provided MPI implementation in order to exploit features such as fast network interfaces and CUDA-aware or ROCm-aware MPI interfaces.","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"The MPIPreferences.jl package allows the user to choose which MPI implementation to use in MPI.jl. It uses Preferences.jl to configure the MPI backend for each project separately. This provides a single source of truth that can be used for JLL packages (Julia packages providing C libraries) that link against MPI. It can be installed by","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"julia --project -e 'using Pkg; Pkg.add(\"MPIPreferences\")'","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"note: Note\nThe way MPI.jl is configured has changed with MPI.jl v0.20. See Migration from MPI.jl v0.19 or earlier for more information on how to migrate your configuration from earlier MPI.jl versions.","category":"page"},{"location":"configuration/#using_system_mpi","page":"Configuration","title":"Using a system-provided MPI backend","text":"","category":"section"},{"location":"configuration/#Requirements","page":"Configuration","title":"Requirements","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"MPI.jl requires a shared library installation of a C MPI library, supporting the MPI 3.0 standard or later. 
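One quick way to check this requirement against whichever library is currently configured is to query the constants exposed by MPI.jl itself (a small sketch; the reported version string differs between systems):\n\nusing MPI\nMPI.Init()\n# MPI_VERSION is the version of the MPI standard supported by the linked library\n@assert MPI.MPI_VERSION >= v\"3.0\"\nprintln(MPI.MPI_LIBRARY_VERSION_STRING)\n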
The following MPI implementations should work out-of-the-box with MPI.jl:","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Open MPI\nMPICH (v3.1 or later)\nIntel MPI\nMicrosoft MPI\nIBM Spectrum MPI\nMVAPICH\nCray MPICH\nFujitsu MPI\nHPE MPT/HMPT","category":"page"},{"location":"configuration/#configure_system_binary","page":"Configuration","title":"Configuration","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Run MPIPreferences.use_system_binary(). This will attempt to locate and to identify any available MPI implementation, and create a file called LocalPreferences.toml adjacent to the current Project.toml.","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"julia --project -e 'using MPIPreferences; MPIPreferences.use_system_binary()'","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"If the implementation is changed, you will need to call this function again. See the MPIPreferences.use_system_binary documentation for specific options.","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"note: Note\nYou can copy LocalPreferences.toml to a different project folder, but you must list MPIPreferences in the [extras] or [deps] section of the Project.toml for the settings to take effect.","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"note: Note\nDue to a bug in Julia (until v1.6.5 and v1.7.1), getting preferences from transitive dependencies is broken (Preferences.jl#24). To fix this update your version of Julia, or add MPIPreferences as a direct dependency to your project.","category":"page"},{"location":"configuration/#Notes-to-HPC-cluster-administrators","page":"Configuration","title":"Notes to HPC cluster administrators","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Preferences are merged across the Julia load path, such that it is feasible to provide a module file that appends a path to JULIA_LOAD_PATH variable that contains system-wide preferences. The steps are as follows:","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Run MPIPreferences.use_system_binary(), which will generate a file LocalPreferences.toml containing something like the following:\n[MPIPreferences]\n_format = \"1.0\"\nabi = \"OpenMPI\"\nbinary = \"system\"\nlibmpi = \"/software/mpi/lib/libmpi.so\"\nmpiexec = \"/software/mpi/bin/mpiexec\"\nCreate a file called Project.toml or JuliaProject.toml in a central location (for example /software/mpi/julia, or in the same directory as the MPI module file), and add the following contents:\n[extras]\nMPIPreferences = \"3da0fdf6-3ccc-4f1b-acd9-58baa6c99267\"\n\n[preferences.MPIPreferences]\n_format = \"1.0\"\nabi = \"OpenMPI\"\nbinary = \"system\"\nlibmpi = \"/software/mpi/lib/libmpi.so\"\nmpiexec = \"/software/mpi/bin/mpiexec\"\nupdating the contents of the [preferences.MPIPreferences] section match those of the [MPIPreferences] in LocalPreferences.toml.\nAppend the directory containing the file to the JULIA_LOAD_PATH environment variable, with a colon (:) separator.\nnote: Note\nIf this variable is not already set, it should be prefixed with a colon to ensure correct behavior of the Julia load path (e.g. 
JULIA_LOAD_PATH=\":/software/mpi/julia\")\nIf using environment modules, this can be achieved with\nappend-path -d {} JULIA_LOAD_PATH :/software/mpi/julia\nor if using an older version of environment modules\nif { ![info exists ::env(JULIA_LOAD_PATH)] } {\n append-path JULIA_LOAD_PATH \"\"\n}\nappend-path JULIA_LOAD_PATH /software/mpi/julia\nin the corresponding module file (preferably the module file for the MPI installation or for Julia).\nThe user can still provide differing MPI configurations for each Julia project that will take precedent by modifying the local Project.toml or by providing a LocalPreferences.toml file.","category":"page"},{"location":"configuration/#Notes-about-vendor-provided-MPI-backends","page":"Configuration","title":"Notes about vendor-provided MPI backends","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"MPIPreferences can load vendor-specific libraries and settings using the vendor parameter, eg MPIPreferences.use_system_binary(mpiexec=\"srun\", vendor=\"cray\") configures MPIPreferences for use on Cray systems with srun.","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"note: Note\nCurrently vendor only supports Cray systems.","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"This populates the library_names, preloads, preloads_env_switch and cclibs preferences. These are determined by parsing cc --cray-print-opts=all emitted from the Cray Compiler Wrappers. Therefore use_system_binary needs to be run on the target system, with the corresponding PrgEnv loaded.","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"The function of these settings are as follows:","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"preloads specifies a list of libraries that are to be loaded (in order) before libmpi.\npreloads_env_switch specifies the name of an environment variable that, if set to 0, can disable the preloads\ncclibs is a list of libraries also linked by the compiler wrappers. 
This is recorded mainly for debugging purposes, and the libraries listed here are not explicitly loaded by MPI.jl.","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"If these are set, the _format key will be set to \"1.1\".","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"An example of running MPIPreferences.use_system_library(vendor=\"cray\") in PrgEnv-gnu is:","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"[MPIPreferences]\n_format = \"1.1\"\nabi = \"MPICH\"\nbinary = \"system\"\ncclibs = [\"cupti\", \"cudart\", \"cuda\", \"sci_gnu_82_mpi\", \"sci_gnu_82\", \"dl\", \"dsmml\", \"xpmem\"]\nlibmpi = \"libmpi_gnu_91.so\"\nmpiexec = \"mpiexec\"\npreloads = [\"libmpi_gtl_cuda.so\"]\npreloads_env_switch = \"MPICH_GPU_SUPPORT_ENABLED\"","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"This is an example of CrayMPICH requiring libmpi_gtl_cuda.so to be preloaded, unless MPICH_GPU_SUPPORT_ENABLED=0 (the latter allowing MPI-enabled code to run on a non-GPU enabled node without needing a separate LocalPreferences.toml).","category":"page"},{"location":"configuration/#configure_jll_binary","page":"Configuration","title":"Using an alternative JLL-provided MPI library","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"The following MPI implementations are provided as JLL packages and automatically obtained when installing MPI.jl:","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"MicrosoftMPI_jll: Microsoft MPI Default for Windows\nMPICH_jll: MPICH. 
Default for all other systems\nOpenMPI_jll: Open MPI\nMPItrampoline_jll: MPItrampoline: an MPI forwarding layer.","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Call MPIPreferences.use_jll_binary, for example","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"julia --project -e 'using MPIPreferences; MPIPreferences.use_jll_binary(\"MPItrampoline_jll\")'","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"If you omit the JLL binary name, the default is selected for the respective operating system.","category":"page"},{"location":"configuration/#Configuration-of-the-MPI.jl-testsuite","page":"Configuration","title":"Configuration of the MPI.jl testsuite","text":"","category":"section"},{"location":"configuration/#Testing-against-a-different-MPI-implementation","page":"Configuration","title":"Testing against a different MPI implementation","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"The LocalPreferences.toml must be located within the test folder, you can either create it in place or copy it into place.","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"~/MPI> julia --project=test\njulia> using MPIPreferences\njulia> MPIPreferences.use_system_binary()\n~/MPI> rm test/Manifest.toml\n~/MPI> julia --project\n(MPI) pkg> test","category":"page"},{"location":"configuration/#Testing-GPU-aware-buffers","page":"Configuration","title":"Testing GPU-aware buffers","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"The test suite can target CUDA-aware interface with CUDA.CuArray and the ROCm-aware interface with AMDGPU.ROCArray upon selecting the corresponding test_args kwarg when calling Pkg.test.","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Run Pkg.test with --backend=CUDA to test CUDA-aware MPI buffers","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"import Pkg; Pkg.test(\"MPI\"; test_args=[\"--backend=CUDA\"])","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"and with --backend=AMDGPU to test ROCm-aware MPI buffers","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"import Pkg; Pkg.test(\"MPI\"; test_args=[\"--backend=AMDGPU\"])","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"note: Note\nThe JULIA_MPI_TEST_ARRAYTYPE environment variable has no effect anymore.","category":"page"},{"location":"configuration/#Environment-variables","page":"Configuration","title":"Environment variables","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"The test suite can also be modified by the following variables:","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"JULIA_MPI_TEST_NPROCS: How many ranks to use within the tests\nJULIA_MPI_TEST_BINARY: Check that the specified MPI binary is used for the tests\nJULIA_MPI_TEST_ABI: Check that the specified MPI ABI is used for the 
tests","category":"page"},{"location":"configuration/#Migration-from-MPI.jl-v0.19-or-earlier","page":"Configuration","title":"Migration from MPI.jl v0.19 or earlier","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"For MPI.jl v0.20, environment variables were used to configure which MPI library to use. These have been removed and no longer have any effect. The following subsections explain how to the same effects can be achieved with v0.20 or later.","category":"page"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"note: Note\nPlease refer to Notes to HPC cluster administrators if you want to migrate your MPI.jl preferences on a cluster with a centrally managed MPI.jl configuration.","category":"page"},{"location":"configuration/#JULIA_MPI_BINARY","page":"Configuration","title":"JULIA_MPI_BINARY","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Use MPIPreferences.use_system_binary to use a system-provided MPI binary as described here. To switch back or select a different JLL-provided MPI binary, use MPIPreferences.use_jll_binary as described here.","category":"page"},{"location":"configuration/#JULIA_MPI_PATH","page":"Configuration","title":"JULIA_MPI_PATH","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Removed without replacement.","category":"page"},{"location":"configuration/#JULIA_MPI_LIBRARY","page":"Configuration","title":"JULIA_MPI_LIBRARY","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Use MPIPreferences.use_system_binary with keyword argument library_names to specify possible, non-standard library names. Alternatively, you can also specify the full path to the library.","category":"page"},{"location":"configuration/#JULIA_MPI_ABI","page":"Configuration","title":"JULIA_MPI_ABI","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Use MPIPreferences.use_system_binary with keyword argument abi to specify which ABI to use. See MPIPreferences.abi for possible values.","category":"page"},{"location":"configuration/#JULIA_MPIEXEC","page":"Configuration","title":"JULIA_MPIEXEC","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Use MPIPreferences.use_system_binary with keyword argument mpiexec to specify the MPI launcher executable.","category":"page"},{"location":"configuration/#JULIA_MPIEXEC_ARGS","page":"Configuration","title":"JULIA_MPIEXEC_ARGS","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Use MPIPreferences.use_system_binary with keyword argument mpiexec, and pass a Cmd object to set the MPI launcher executable and to include specific command line options.","category":"page"},{"location":"configuration/#JULIA_MPI_INCLUDE_PATH","page":"Configuration","title":"JULIA_MPI_INCLUDE_PATH","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Removed without replacement. Automatic generation of a constants file for unknown MPI ABIs is not supported anymore. 
See also #574.","category":"page"},{"location":"configuration/#JULIA_MPI_CFLAGS","page":"Configuration","title":"JULIA_MPI_CFLAGS","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Removed without replacement. Automatic generation of a constants file for unknown MPI ABIs is not supported anymore. See also #574.","category":"page"},{"location":"configuration/#JULIA_MPICC","page":"Configuration","title":"JULIA_MPICC","text":"","category":"section"},{"location":"configuration/","page":"Configuration","title":"Configuration","text":"Removed without replacement. Automatic generation of a constants file for unknown MPI ABIs is not supported anymore. See also #574.","category":"page"},{"location":"refindex/#Index","page":"Index","title":"Index","text":"","category":"section"},{"location":"refindex/","page":"Index","title":"Index","text":"","category":"page"},{"location":"examples/02-broadcast/","page":"Broadcast","title":"Broadcast","text":"EditURL = \"https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/02-broadcast.jl\"","category":"page"},{"location":"examples/02-broadcast/#Broadcast","page":"Broadcast","title":"Broadcast","text":"","category":"section"},{"location":"examples/02-broadcast/","page":"Broadcast","title":"Broadcast","text":"# examples/02-broadcast.jl\nimport MPI\nMPI.Init()\n\ncomm = MPI.COMM_WORLD\nN = 5\nroot = 0\n\nif MPI.Comm_rank(comm) == root\n print(\" Running on $(MPI.Comm_size(comm)) processes\\n\")\nend\nMPI.Barrier(comm)\n\nif MPI.Comm_rank(comm) == root\n A = [i*(1.0 + im*2.0) for i = 1:N]\nelse\n A = Array{ComplexF64}(undef, N)\nend\n\nMPI.Bcast!(A, root, comm)\n\nprint(\"rank = $(MPI.Comm_rank(comm)), A = $A\\n\")\n\nif MPI.Comm_rank(comm) == root\n B = Dict(\"foo\" => \"bar\")\nelse\n B = nothing\nend\n\nB = MPI.bcast(B, root, comm)\nprint(\"rank = $(MPI.Comm_rank(comm)), B = $B\\n\")\n\nif MPI.Comm_rank(comm) == root\n f = x -> x^2 + 2x - 1\nelse\n f = nothing\nend\n\nf = MPI.bcast(f, root, comm)\nprint(\"rank = $(MPI.Comm_rank(comm)), f(3) = $(f(3))\\n\")","category":"page"},{"location":"examples/02-broadcast/","page":"Broadcast","title":"Broadcast","text":"> mpiexecjl -n 4 julia examples/02-broadcast.jl\n Running on 4 processes\nrank = 0, A = ComplexF64[1.0 + 2.0im, 2.0 + 4.0im, 3.0 + 6.0im, 4.0 + 8.0im, 5.0 + 10.0im]\nrank = 1, A = ComplexF64[1.0 + 2.0im, 2.0 + 4.0im, 3.0 + 6.0im, 4.0 + 8.0im, 5.0 + 10.0im]\nrank = 3, A = ComplexF64[1.0 + 2.0im, 2.0 + 4.0im, 3.0 + 6.0im, 4.0 + 8.0im, 5.0 + 10.0im]\nrank = 2, A = ComplexF64[1.0 + 2.0im, 2.0 + 4.0im, 3.0 + 6.0im, 4.0 + 8.0im, 5.0 + 10.0im]\nrank = 0, B = Dict(\"foo\" => \"bar\")\nrank = 1, B = Dict(\"foo\" => \"bar\")\nrank = 3, B = Dict(\"foo\" => \"bar\")\nrank = 2, B = Dict(\"foo\" => \"bar\")\nrank = 0, f(3) = 14\nrank = 2, f(3) = 14\nrank = 1, f(3) = 14\nrank = 3, f(3) = 14","category":"page"},{"location":"examples/03-reduce/","page":"Reduce","title":"Reduce","text":"EditURL = \"https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/03-reduce.jl\"","category":"page"},{"location":"examples/03-reduce/#Reduce","page":"Reduce","title":"Reduce","text":"","category":"section"},{"location":"examples/03-reduce/","page":"Reduce","title":"Reduce","text":"# examples/03-reduce.jl\n# This example shows how to use custom datatypes and reduction operators\n# It computes the variance in parallel in a numerically stable way\n\nusing MPI, Statistics\n\nMPI.Init()\nconst comm = MPI.COMM_WORLD\nconst root = 0\n\n# Define a custom struct\n# 
This contains the summary statistics (mean, variance, length) of a vector\nstruct SummaryStat\n mean::Float64\n var::Float64\n n::Float64\nend\nfunction SummaryStat(X::AbstractArray)\n m = mean(X)\n v = varm(X,m, corrected=false)\n n = length(X)\n SummaryStat(m,v,n)\nend\n\n# Define a custom reduction operator\n# this computes the pooled mean, pooled variance and total length\nfunction pool(S1::SummaryStat, S2::SummaryStat)\n n = S1.n + S2.n\n m = (S1.mean*S1.n + S2.mean*S2.n) / n\n v = (S1.n * (S1.var + S1.mean * (S1.mean-m)) +\n S2.n * (S2.var + S2.mean * (S2.mean-m)))/n\n SummaryStat(m,v,n)\nend\n\nX = randn(10,3) .* [1,3,7]'\n\n# Perform a scalar reduction\nsumm = MPI.Reduce(SummaryStat(X), pool, root, comm)\n\nif MPI.Comm_rank(comm) == root\n @show summ.var\nend\n\n# Perform a vector reduction:\n# the reduction operator is applied elementwise\ncol_summ = MPI.Reduce(mapslices(SummaryStat,X,dims=1), pool, root, comm)\n\nif MPI.Comm_rank(comm) == root\n col_var = map(summ -> summ.var, col_summ)\n @show col_var\nend","category":"page"},{"location":"examples/03-reduce/","page":"Reduce","title":"Reduce","text":"> mpiexecjl -n 4 julia examples/03-reduce.jl\nsumm.var = 18.551614170296823\ncol_var = [1.0190455189783263 9.001082421094319 45.60033633641226]","category":"page"},{"location":"usage/#Usage","page":"Usage","title":"Usage","text":"","category":"section"},{"location":"usage/","page":"Usage","title":"Usage","text":"MPI is based on a single program, multiple data (SPMD) model, where multiple processes are launched running independent programs, which then communicate as necessary via messages.","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"As the main entry point for users, MPI.jl provides a high-level interface which loosely follows the MPI C API and is described in details in the following sections. The syntax should look familiar if you know MPI already, but some arguments may not be needed (e.g. the type or the number of elements of arrays, which are inferred automatically), others may be placed slightly differently, and others may be optional keyword arguments (e.g. for the index of the root process, or the source and destination of point-to-point communication functions).","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"In addition to the high-level interface, MPI.jl provides a low-level API which closely matches the MPI C API and from which it has been automatically generated. 
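For instance, the rank of the calling process can be obtained either through the high-level wrapper or through the corresponding low-level binding (a sketch: the low-level call mirrors the C signature and writes its result into a Ref):\n\nusing MPI\nMPI.Init()\ncomm = MPI.COMM_WORLD\n\nrank = MPI.Comm_rank(comm)             # high-level wrapper\n\nrank_ref = Ref{Cint}()\nMPI.API.MPI_Comm_rank(comm, rank_ref)  # low-level, C-style call\n@assert rank_ref[] == rank\n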
This is not intended for general usage, but it can be employed if a high-level wrapper is not yet available.","category":"page"},{"location":"usage/#Basic-example","page":"Usage","title":"Basic example","text":"","category":"section"},{"location":"usage/","page":"Usage","title":"Usage","text":"A script should include using MPI and MPI.Init() statements before calling any MPI operations, for example","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"# examples/01-hello.jl\nusing MPI\nMPI.Init()\n\ncomm = MPI.COMM_WORLD\nprintln(\"Hello world, I am $(MPI.Comm_rank(comm)) of $(MPI.Comm_size(comm))\")\nMPI.Barrier(comm)","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"Calling MPI.Finalize() at the end of the program is optional, as it will be called automatically when Julia exits.","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"The program can then be launched via an MPI launch command (typically mpiexec, mpirun or srun), e.g.","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"$ mpiexec -n 3 julia --project examples/01-hello.jl\nHello world, I am rank 0 of 3\nHello world, I am rank 2 of 3\nHello world, I am rank 1 of 3","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"The mpiexec function is provided for launching MPI programs from Julia itself.","category":"page"},{"location":"usage/#Julia-wrapper-for-mpiexec","page":"Usage","title":"Julia wrapper for mpiexec","text":"","category":"section"},{"location":"usage/","page":"Usage","title":"Usage","text":"Since you can configure MPI.jl to use one of several MPI implementations, you may have different Julia projects using different implementation. Thus, it may be cumbersome to find out which mpiexec executable is associated to a specific project. To make this easy, on Unix-based systems MPI.jl comes with a thin project-aware wrapper around mpiexec, called mpiexecjl.","category":"page"},{"location":"usage/#Installation","page":"Usage","title":"Installation","text":"","category":"section"},{"location":"usage/","page":"Usage","title":"Usage","text":"You can install mpiexecjl with MPI.install_mpiexecjl(). The default destination directory is joinpath(DEPOT_PATH[1], \"bin\"), which usually translates to ~/.julia/bin, but check the value on your system. You can also tell MPI.install_mpiexecjl to install to a different directory.","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"$ julia\njulia> using MPI\njulia> MPI.install_mpiexecjl()","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"To quickly call this wrapper we recommend you to add the destination directory to your PATH environment variable.","category":"page"},{"location":"usage/#Usage-2","page":"Usage","title":"Usage","text":"","category":"section"},{"location":"usage/","page":"Usage","title":"Usage","text":"mpiexecjl has the same syntax as the mpiexec binary that will be called, but it takes in addition a --project option to call the specific binary associated to the MPI.jl version in the given project. 
If no --project flag is used, the MPI.jl in the global Julia environment will be used instead.","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"After installing mpiexecjl and adding its directory to PATH, you can run it with:","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"$ mpiexecjl --project=/path/to/project -n 20 julia script.jl","category":"page"},{"location":"usage/#CUDA-aware-MPI-support","page":"Usage","title":"CUDA-aware MPI support","text":"","category":"section"},{"location":"usage/","page":"Usage","title":"Usage","text":"If your MPI implementation has been compiled with CUDA support, then CUDA.CuArrays (from the CUDA.jl package) can be passed directly as send and receive buffers for point-to-point and collective operations (they may also work with one-sided operations, but these are not often supported).","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"Successfully running the alltoall_test_cuda.jl should confirm your MPI implementation to have the CUDA support enabled. Moreover, successfully running the alltoall_test_cuda_multigpu.jl should confirm your CUDA-aware MPI implementation to use multiple Nvidia GPUs (one GPU per rank).","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"If using OpenMPI, the status of CUDA support can be checked via the MPI.has_cuda() function.","category":"page"},{"location":"usage/#ROCm-aware-MPI-support","page":"Usage","title":"ROCm-aware MPI support","text":"","category":"section"},{"location":"usage/","page":"Usage","title":"Usage","text":"If your MPI implementation has been compiled with ROCm support (AMDGPU), then AMDGPU.ROCArrays (from the AMDGPU.jl package) can be passed directly as send and receive buffers for point-to-point and collective operations (they may also work with one-sided operations, but these are not often supported).","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"Successfully running the alltoall_test_rocm.jl should confirm your MPI implementation to have the ROCm support (AMDGPU) enabled. 
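A small sketch of what passing GPU arrays directly looks like in practice (assuming a ROCm-aware MPI build with one AMD GPU per rank; the ring exchange below is purely illustrative):\n\nusing MPI, AMDGPU\nMPI.Init()\ncomm = MPI.COMM_WORLD\nrank = MPI.Comm_rank(comm)\nnranks = MPI.Comm_size(comm)\n\nsend = AMDGPU.fill(Float64(rank), 1024)   # device buffer passed directly to MPI\nrecv = AMDGPU.zeros(Float64, 1024)\nMPI.Sendrecv!(send, recv, comm; dest=mod(rank+1, nranks), source=mod(rank-1, nranks))\n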
Moreover, successfully running the alltoall_test_rocm_multigpu.jl should confirm your ROCm-aware MPI implementation to use multiple AMD GPUs (one GPU per rank).","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"If using OpenMPI, the status of ROCm support can be checked via the MPI.has_rocm() function.","category":"page"},{"location":"usage/#Writing-MPI-tests","page":"Usage","title":"Writing MPI tests","text":"","category":"section"},{"location":"usage/","page":"Usage","title":"Usage","text":"It is recommended to use the mpiexec() wrapper when writing your package tests in runtests.jl:","category":"page"},{"location":"usage/","page":"Usage","title":"Usage","text":"# test/runtests.jl\nusing MPI\nusing Test\n\n@testset \"hello\" begin\n n = 2 # number of processes\n run(`$(mpiexec()) -n $n $(Base.julia_cmd()) [...]/01-hello.jl`)\n # alternatively:\n # p = run(ignorestatus(`$(mpiexec()) ...`))\n # @test success(p)\n end\nend","category":"page"},{"location":"reference/library/#Library-information","page":"Library information","title":"Library information","text":"","category":"section"},{"location":"reference/library/#Constants","page":"Library information","title":"Constants","text":"","category":"section"},{"location":"reference/library/","page":"Library information","title":"Library information","text":"MPI.MPI_VERSION\nMPI.MPI_LIBRARY\nMPI.MPI_LIBRARY_VERSION\nMPI.MPI_LIBRARY_VERSION_STRING","category":"page"},{"location":"reference/library/#MPI.MPI_VERSION","page":"Library information","title":"MPI.MPI_VERSION","text":"MPI_VERSION :: VersionNumber\n\nThe supported version of the MPI standard.\n\nExternal links\n\nMPI_Get_version man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"constant"},{"location":"reference/library/#MPI.MPI_LIBRARY","page":"Library information","title":"MPI.MPI_LIBRARY","text":"MPI_LIBRARY :: String\n\nThe current MPI implementation: this is determined by\n\n\n\n\n\n","category":"constant"},{"location":"reference/library/#MPI.MPI_LIBRARY_VERSION","page":"Library information","title":"MPI.MPI_LIBRARY_VERSION","text":"MPI_LIBRARY_VERSION :: VersionNumber\n\nThe version of the MPI library\n\n\n\n\n\n","category":"constant"},{"location":"reference/library/#MPI.MPI_LIBRARY_VERSION_STRING","page":"Library information","title":"MPI.MPI_LIBRARY_VERSION_STRING","text":"MPI_LIBRARY_VERSION_STRING :: String\n\nThe full version string provided by the library\n\nExternal links\n\nMPI_Get_library_version man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"constant"},{"location":"reference/library/#Functions","page":"Library information","title":"Functions","text":"","category":"section"},{"location":"reference/library/","page":"Library information","title":"Library information","text":"MPI.versioninfo\nMPI.has_cuda\nMPI.has_rocm\nMPI.has_gpu\nMPI.identify_implementation","category":"page"},{"location":"reference/library/#MPI.versioninfo","page":"Library information","title":"MPI.versioninfo","text":"MPI.versioninfo(io::IO=stdout)\n\nPrint a summary of the current MPI configuration.\n\n\n\n\n\n","category":"function"},{"location":"reference/library/#MPI.has_cuda","page":"Library information","title":"MPI.has_cuda","text":"MPI.has_cuda()\n\nCheck if the MPI implementation is known to have CUDA support. Currently only Open MPI provides a mechanism to check, so it will return false with other implementations (unless overridden). 
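For example, the check can be used to decide at runtime whether device or host buffers are handed to MPI (a hedged sketch; the buffer size is arbitrary):\n\nusing MPI, CUDA\nMPI.Init()\n# Fall back to host memory when CUDA-aware support cannot be confirmed\nbuf = MPI.has_cuda() ? CUDA.zeros(Float64, 1024) : zeros(Float64, 1024)\n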
For \"IBMSpectrumMPI\" it will return true.\n\nThis can be overridden by setting the JULIA_MPI_HAS_CUDA environment variable to true or false.\n\nnote: Note\nFor OpenMPI or OpenMPI-based implementations you first need to call Init().\n\nSee also MPI.has_rocm for ROCm support.\n\n\n\n\n\n","category":"function"},{"location":"reference/library/#MPI.has_rocm","page":"Library information","title":"MPI.has_rocm","text":"MPI.has_rocm()\n\nCheck if the MPI implementation is known to have ROCm support. Currently only Open MPI provides a mechanism to check, so it will return false with other implementations (unless overridden).\n\nThis can be overridden by setting the JULIA_MPI_HAS_ROCM environment variable to true or false.\n\nSee also MPI.has_cuda for CUDA support.\n\n\n\n\n\n","category":"function"},{"location":"reference/library/#MPI.has_gpu","page":"Library information","title":"MPI.has_gpu","text":"MPI.has_gpu()\n\nChecks if the MPI implementation is known to have GPU support. Currently this checks for the following GPUs:\n\nCUDA: via MPI.has_cuda\nROCm: via MPI.has_rocm\n\nSee also MPI.has_cuda and MPI.has_rocm for more fine-grained checks.\n\n\n\n\n\n","category":"function"},{"location":"reference/library/#MPI.identify_implementation","page":"Library information","title":"MPI.identify_implementation","text":"impl, version = identify_implementation()\n\nAttempt to identify the MPI implementation based on MPI_LIBRARY_VERSION_STRING. Returns a pair of values:\n\nimpl: a String with the name of the MPI implementation, or \"unknown\" if it cannot be determined,\nversion: a VersionNumber of the library, or nothing if it cannot be determined.\n\nThis function is only intended for internal use. Users should use MPI_LIBRARY and MPI_LIBRARY_VERSION.\n\n\n\n\n\n","category":"function"},{"location":"examples/01-hello/","page":"Hello world","title":"Hello world","text":"EditURL = \"https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/01-hello.jl\"","category":"page"},{"location":"examples/01-hello/#Hello-world","page":"Hello world","title":"Hello world","text":"","category":"section"},{"location":"examples/01-hello/","page":"Hello world","title":"Hello world","text":"# examples/01-hello.jl\nusing MPI\nMPI.Init()\n\ncomm = MPI.COMM_WORLD\nprint(\"Hello world, I am rank $(MPI.Comm_rank(comm)) of $(MPI.Comm_size(comm))\\n\")\nMPI.Barrier(comm)","category":"page"},{"location":"examples/01-hello/","page":"Hello world","title":"Hello world","text":"> mpiexecjl -n 4 julia examples/01-hello.jl\nHello world, I am rank 0 of 4\nHello world, I am rank 1 of 4\nHello world, I am rank 2 of 4\nHello world, I am rank 3 of 4","category":"page"},{"location":"reference/misc/#Miscellanea","page":"Miscellanea","title":"Miscellanea","text":"","category":"section"},{"location":"reference/misc/#Functions","page":"Miscellanea","title":"Functions","text":"","category":"section"},{"location":"reference/misc/","page":"Miscellanea","title":"Miscellanea","text":"MPI.Get_processor_name","category":"page"},{"location":"reference/misc/#MPI.Get_processor_name","page":"Miscellanea","title":"MPI.Get_processor_name","text":"Get_processor_name()\n\nReturn the name of the processor, as a String.\n\nExternal links\n\nMPI_Get_processor_name man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/api/#Low-level-API","page":"Low-level API","title":"Low-level API","text":"","category":"section"},{"location":"reference/api/","page":"Low-level API","title":"Low-level API","text":"The MPI.API submodule provides 
a low-level interface which closely matches the MPI C API. While these functions are not intended for general usage, they are useful for calling MPI routines not yet available in the MPI.jl main interface, and they form the basis for the high-level wrappers. The methods suffixed with _c accept MPI_Count-typed arguments (vs. int for the standard ones). The size of MPI_Count depends on the implementation, but usually allows 64-bit integer offsets.","category":"page"},{"location":"reference/api/","page":"Low-level API","title":"Low-level API","text":"Modules = [MPI.API]\nOrder = [:function]","category":"page"},{"location":"reference/api/#MPI.API.MPI_Abort-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Abort","text":"MPI_Abort(comm, errorcode)\n\nMPI_Abort man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Accumulate-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Accumulate","text":"MPI_Accumulate(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, op, win)\n\nMPI_Accumulate man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Accumulate_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Accumulate_c","text":"MPI_Accumulate_c(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, op, win)\n\nMPI_Accumulate_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Add_error_class-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Add_error_class","text":"MPI_Add_error_class(errorclass)\n\nMPI_Add_error_class man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Add_error_code-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Add_error_code","text":"MPI_Add_error_code(errorclass, errorcode)\n\nMPI_Add_error_code man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Add_error_string-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Add_error_string","text":"MPI_Add_error_string(errorcode, string)\n\nMPI_Add_error_string man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Address-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Address","text":"MPI_Address(location, address)\n\nMPI_Address man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Aint_add-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Aint_add","text":"MPI_Aint_add(base, disp)\n\nMPI_Aint_add man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Aint_diff-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Aint_diff","text":"MPI_Aint_diff(addr1, addr2)\n\nMPI_Aint_diff man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Allgather-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Allgather","text":"MPI_Allgather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)\n\nMPI_Allgather man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Allgather_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Allgather_c","text":"MPI_Allgather_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)\n\nMPI_Allgather_c man page: 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Allgather_init-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Allgather_init","text":"MPI_Allgather_init(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, info, request)\n\nMPI_Allgather_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Allgather_init_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Allgather_init_c","text":"MPI_Allgather_init_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, info, request)\n\nMPI_Allgather_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Allgatherv-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Allgatherv","text":"MPI_Allgatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm)\n\nMPI_Allgatherv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Allgatherv_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Allgatherv_c","text":"MPI_Allgatherv_c(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm)\n\nMPI_Allgatherv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Allgatherv_init-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Allgatherv_init","text":"MPI_Allgatherv_init(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm, info, request)\n\nMPI_Allgatherv_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Allgatherv_init_c-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Allgatherv_init_c","text":"MPI_Allgatherv_init_c(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm, info, request)\n\nMPI_Allgatherv_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alloc_mem-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Alloc_mem","text":"MPI_Alloc_mem(size, info, baseptr)\n\nMPI_Alloc_mem man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Allreduce-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Allreduce","text":"MPI_Allreduce(sendbuf, recvbuf, count, datatype, op, comm)\n\nMPI_Allreduce man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Allreduce_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Allreduce_c","text":"MPI_Allreduce_c(sendbuf, recvbuf, count, datatype, op, comm)\n\nMPI_Allreduce_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Allreduce_init-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Allreduce_init","text":"MPI_Allreduce_init(sendbuf, recvbuf, count, datatype, op, comm, info, request)\n\nMPI_Allreduce_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Allreduce_init_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Allreduce_init_c","text":"MPI_Allreduce_init_c(sendbuf, recvbuf, count, datatype, op, comm, info, request)\n\nMPI_Allreduce_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alltoall-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Alltoall","text":"MPI_Alltoall(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)\n\nMPI_Alltoall man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alltoall_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Alltoall_c","text":"MPI_Alltoall_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)\n\nMPI_Alltoall_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alltoall_init-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Alltoall_init","text":"MPI_Alltoall_init(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, info, request)\n\nMPI_Alltoall_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alltoall_init_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Alltoall_init_c","text":"MPI_Alltoall_init_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, info, request)\n\nMPI_Alltoall_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alltoallv-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Alltoallv","text":"MPI_Alltoallv(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm)\n\nMPI_Alltoallv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alltoallv_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Alltoallv_c","text":"MPI_Alltoallv_c(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm)\n\nMPI_Alltoallv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alltoallv_init-NTuple{11, Any}","page":"Low-level API","title":"MPI.API.MPI_Alltoallv_init","text":"MPI_Alltoallv_init(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, info, request)\n\nMPI_Alltoallv_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alltoallv_init_c-NTuple{11, Any}","page":"Low-level API","title":"MPI.API.MPI_Alltoallv_init_c","text":"MPI_Alltoallv_init_c(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, info, request)\n\nMPI_Alltoallv_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alltoallw-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Alltoallw","text":"MPI_Alltoallw(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm)\n\nMPI_Alltoallw man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alltoallw_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Alltoallw_c","text":"MPI_Alltoallw_c(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm)\n\nMPI_Alltoallw_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alltoallw_init-NTuple{11, Any}","page":"Low-level API","title":"MPI.API.MPI_Alltoallw_init","text":"MPI_Alltoallw_init(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, info, request)\n\nMPI_Alltoallw_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Alltoallw_init_c-NTuple{11, Any}","page":"Low-level API","title":"MPI.API.MPI_Alltoallw_init_c","text":"MPI_Alltoallw_init_c(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, info, request)\n\nMPI_Alltoallw_init_c man page: 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Attr_delete-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Attr_delete","text":"MPI_Attr_delete(comm, keyval)\n\nMPI_Attr_delete man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Attr_get-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Attr_get","text":"MPI_Attr_get(comm, keyval, attribute_val, flag)\n\nMPI_Attr_get man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Attr_put-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Attr_put","text":"MPI_Attr_put(comm, keyval, attribute_val)\n\nMPI_Attr_put man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Barrier-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Barrier","text":"MPI_Barrier(comm)\n\nMPI_Barrier man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Barrier_init-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Barrier_init","text":"MPI_Barrier_init(comm, info, request)\n\nMPI_Barrier_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Bcast-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Bcast","text":"MPI_Bcast(buffer, count, datatype, root, comm)\n\nMPI_Bcast man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Bcast_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Bcast_c","text":"MPI_Bcast_c(buffer, count, datatype, root, comm)\n\nMPI_Bcast_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Bcast_init-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Bcast_init","text":"MPI_Bcast_init(buffer, count, datatype, root, comm, info, request)\n\nMPI_Bcast_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Bcast_init_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Bcast_init_c","text":"MPI_Bcast_init_c(buffer, count, datatype, root, comm, info, request)\n\nMPI_Bcast_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Bsend-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Bsend","text":"MPI_Bsend(buf, count, datatype, dest, tag, comm)\n\nMPI_Bsend man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Bsend_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Bsend_c","text":"MPI_Bsend_c(buf, count, datatype, dest, tag, comm)\n\nMPI_Bsend_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Bsend_init-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Bsend_init","text":"MPI_Bsend_init(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Bsend_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Bsend_init_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Bsend_init_c","text":"MPI_Bsend_init_c(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Bsend_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Buffer_attach-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Buffer_attach","text":"MPI_Buffer_attach(buffer, size)\n\nMPI_Buffer_attach man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Buffer_attach_c-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Buffer_attach_c","text":"MPI_Buffer_attach_c(buffer, size)\n\nMPI_Buffer_attach_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Buffer_detach-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Buffer_detach","text":"MPI_Buffer_detach(buffer_addr, size)\n\nMPI_Buffer_detach man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Buffer_detach_c-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Buffer_detach_c","text":"MPI_Buffer_detach_c(buffer_addr, size)\n\nMPI_Buffer_detach_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Cancel-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Cancel","text":"MPI_Cancel(request)\n\nMPI_Cancel man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Cart_coords-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Cart_coords","text":"MPI_Cart_coords(comm, rank, maxdims, coords)\n\nMPI_Cart_coords man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Cart_create-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Cart_create","text":"MPI_Cart_create(comm_old, ndims, dims, periods, reorder, comm_cart)\n\nMPI_Cart_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Cart_get-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Cart_get","text":"MPI_Cart_get(comm, maxdims, dims, periods, coords)\n\nMPI_Cart_get man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Cart_map-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Cart_map","text":"MPI_Cart_map(comm, ndims, dims, periods, newrank)\n\nMPI_Cart_map man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Cart_rank-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Cart_rank","text":"MPI_Cart_rank(comm, coords, rank)\n\nMPI_Cart_rank man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Cart_shift-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Cart_shift","text":"MPI_Cart_shift(comm, direction, disp, rank_source, rank_dest)\n\nMPI_Cart_shift man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Cart_sub-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Cart_sub","text":"MPI_Cart_sub(comm, remain_dims, newcomm)\n\nMPI_Cart_sub man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Cartdim_get-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Cartdim_get","text":"MPI_Cartdim_get(comm, ndims)\n\nMPI_Cartdim_get man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Close_port-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Close_port","text":"MPI_Close_port(port_name)\n\nMPI_Close_port man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_accept-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_accept","text":"MPI_Comm_accept(port_name, info, root, comm, newcomm)\n\nMPI_Comm_accept man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_call_errhandler-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_call_errhandler","text":"MPI_Comm_call_errhandler(comm, errorcode)\n\nMPI_Comm_call_errhandler man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_compare-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_compare","text":"MPI_Comm_compare(comm1, comm2, result)\n\nMPI_Comm_compare man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_connect-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_connect","text":"MPI_Comm_connect(port_name, info, root, comm, newcomm)\n\nMPI_Comm_connect man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_create-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_create","text":"MPI_Comm_create(comm, group, newcomm)\n\nMPI_Comm_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_create_errhandler-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_create_errhandler","text":"MPI_Comm_create_errhandler(comm_errhandler_fn, errhandler)\n\nMPI_Comm_create_errhandler man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_create_from_group-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_create_from_group","text":"MPI_Comm_create_from_group(group, stringtag, info, errhandler, newcomm)\n\nMPI_Comm_create_from_group man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_create_group-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_create_group","text":"MPI_Comm_create_group(comm, group, tag, newcomm)\n\nMPI_Comm_create_group man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_create_keyval-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_create_keyval","text":"MPI_Comm_create_keyval(comm_copy_attr_fn, comm_delete_attr_fn, comm_keyval, extra_state)\n\nMPI_Comm_create_keyval man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_delete_attr-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_delete_attr","text":"MPI_Comm_delete_attr(comm, comm_keyval)\n\nMPI_Comm_delete_attr man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_disconnect-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_disconnect","text":"MPI_Comm_disconnect(comm)\n\nMPI_Comm_disconnect man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_dup-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_dup","text":"MPI_Comm_dup(comm, newcomm)\n\nMPI_Comm_dup man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_dup_with_info-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_dup_with_info","text":"MPI_Comm_dup_with_info(comm, info, newcomm)\n\nMPI_Comm_dup_with_info man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_free-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_free","text":"MPI_Comm_free(comm)\n\nMPI_Comm_free man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_free_keyval-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_free_keyval","text":"MPI_Comm_free_keyval(comm_keyval)\n\nMPI_Comm_free_keyval man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_get_attr-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_get_attr","text":"MPI_Comm_get_attr(comm, comm_keyval, attribute_val, flag)\n\nMPI_Comm_get_attr man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_get_errhandler-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_get_errhandler","text":"MPI_Comm_get_errhandler(comm, errhandler)\n\nMPI_Comm_get_errhandler man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_get_info-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_get_info","text":"MPI_Comm_get_info(comm, info_used)\n\nMPI_Comm_get_info man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_get_name-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_get_name","text":"MPI_Comm_get_name(comm, comm_name, resultlen)\n\nMPI_Comm_get_name man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_get_parent-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_get_parent","text":"MPI_Comm_get_parent(parent)\n\nMPI_Comm_get_parent man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_group-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_group","text":"MPI_Comm_group(comm, group)\n\nMPI_Comm_group man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_idup-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_idup","text":"MPI_Comm_idup(comm, newcomm, request)\n\nMPI_Comm_idup man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_idup_with_info-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_idup_with_info","text":"MPI_Comm_idup_with_info(comm, info, newcomm, request)\n\nMPI_Comm_idup_with_info man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_join-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_join","text":"MPI_Comm_join(fd, intercomm)\n\nMPI_Comm_join man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_rank-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_rank","text":"MPI_Comm_rank(comm, rank)\n\nMPI_Comm_rank man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_remote_group-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_remote_group","text":"MPI_Comm_remote_group(comm, group)\n\nMPI_Comm_remote_group man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_remote_size-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_remote_size","text":"MPI_Comm_remote_size(comm, size)\n\nMPI_Comm_remote_size man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_set_attr-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_set_attr","text":"MPI_Comm_set_attr(comm, 
comm_keyval, attribute_val)\n\nMPI_Comm_set_attr man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_set_errhandler-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_set_errhandler","text":"MPI_Comm_set_errhandler(comm, errhandler)\n\nMPI_Comm_set_errhandler man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_set_info-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_set_info","text":"MPI_Comm_set_info(comm, info)\n\nMPI_Comm_set_info man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_set_name-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_set_name","text":"MPI_Comm_set_name(comm, comm_name)\n\nMPI_Comm_set_name man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_size-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_size","text":"MPI_Comm_size(comm, size)\n\nMPI_Comm_size man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_spawn-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_spawn","text":"MPI_Comm_spawn(command, argv, maxprocs, info, root, comm, intercomm, array_of_errcodes)\n\nMPI_Comm_spawn man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_spawn_multiple-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_spawn_multiple","text":"MPI_Comm_spawn_multiple(count, array_of_commands, array_of_argv, array_of_maxprocs, array_of_info, root, comm, intercomm, array_of_errcodes)\n\nMPI_Comm_spawn_multiple man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_split-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_split","text":"MPI_Comm_split(comm, color, key, newcomm)\n\nMPI_Comm_split man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_split_type-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_split_type","text":"MPI_Comm_split_type(comm, split_type, key, info, newcomm)\n\nMPI_Comm_split_type man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Comm_test_inter-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Comm_test_inter","text":"MPI_Comm_test_inter(comm, flag)\n\nMPI_Comm_test_inter man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Compare_and_swap-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Compare_and_swap","text":"MPI_Compare_and_swap(origin_addr, compare_addr, result_addr, datatype, target_rank, target_disp, win)\n\nMPI_Compare_and_swap man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Dims_create-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Dims_create","text":"MPI_Dims_create(nnodes, ndims, dims)\n\nMPI_Dims_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Dist_graph_create-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Dist_graph_create","text":"MPI_Dist_graph_create(comm_old, n, sources, degrees, destinations, weights, info, reorder, comm_dist_graph)\n\nMPI_Dist_graph_create man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Dist_graph_create_adjacent-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Dist_graph_create_adjacent","text":"MPI_Dist_graph_create_adjacent(comm_old, indegree, sources, sourceweights, outdegree, destinations, destweights, info, reorder, comm_dist_graph)\n\nMPI_Dist_graph_create_adjacent man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Dist_graph_neighbors-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Dist_graph_neighbors","text":"MPI_Dist_graph_neighbors(comm, maxindegree, sources, sourceweights, maxoutdegree, destinations, destweights)\n\nMPI_Dist_graph_neighbors man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Dist_graph_neighbors_count-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Dist_graph_neighbors_count","text":"MPI_Dist_graph_neighbors_count(comm, indegree, outdegree, weighted)\n\nMPI_Dist_graph_neighbors_count man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Errhandler_create-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Errhandler_create","text":"MPI_Errhandler_create(comm_errhandler_fn, errhandler)\n\nMPI_Errhandler_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Errhandler_free-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Errhandler_free","text":"MPI_Errhandler_free(errhandler)\n\nMPI_Errhandler_free man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Errhandler_get-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Errhandler_get","text":"MPI_Errhandler_get(comm, errhandler)\n\nMPI_Errhandler_get man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Errhandler_set-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Errhandler_set","text":"MPI_Errhandler_set(comm, errhandler)\n\nMPI_Errhandler_set man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Error_class-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Error_class","text":"MPI_Error_class(errorcode, errorclass)\n\nMPI_Error_class man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Error_string-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Error_string","text":"MPI_Error_string(errorcode, string, resultlen)\n\nMPI_Error_string man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Exscan-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Exscan","text":"MPI_Exscan(sendbuf, recvbuf, count, datatype, op, comm)\n\nMPI_Exscan man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Exscan_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Exscan_c","text":"MPI_Exscan_c(sendbuf, recvbuf, count, datatype, op, comm)\n\nMPI_Exscan_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Exscan_init-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Exscan_init","text":"MPI_Exscan_init(sendbuf, recvbuf, count, datatype, op, comm, info, request)\n\nMPI_Exscan_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Exscan_init_c-NTuple{8, 
Any}","page":"Low-level API","title":"MPI.API.MPI_Exscan_init_c","text":"MPI_Exscan_init_c(sendbuf, recvbuf, count, datatype, op, comm, info, request)\n\nMPI_Exscan_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Fetch_and_op-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Fetch_and_op","text":"MPI_Fetch_and_op(origin_addr, result_addr, datatype, target_rank, target_disp, op, win)\n\nMPI_Fetch_and_op man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_c2f-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_File_c2f","text":"MPI_File_c2f(file)\n\nMPI_File_c2f man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_call_errhandler-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_call_errhandler","text":"MPI_File_call_errhandler(fh, errorcode)\n\nMPI_File_call_errhandler man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_close-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_File_close","text":"MPI_File_close(fh)\n\nMPI_File_close man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_create_errhandler-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_create_errhandler","text":"MPI_File_create_errhandler(file_errhandler_fn, errhandler)\n\nMPI_File_create_errhandler man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_delete-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_delete","text":"MPI_File_delete(filename, info)\n\nMPI_File_delete man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_f2c-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_File_f2c","text":"MPI_File_f2c(file)\n\nMPI_File_f2c man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_get_amode-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_get_amode","text":"MPI_File_get_amode(fh, amode)\n\nMPI_File_get_amode man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_get_atomicity-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_get_atomicity","text":"MPI_File_get_atomicity(fh, flag)\n\nMPI_File_get_atomicity man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_get_byte_offset-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_get_byte_offset","text":"MPI_File_get_byte_offset(fh, offset, disp)\n\nMPI_File_get_byte_offset man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_get_errhandler-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_get_errhandler","text":"MPI_File_get_errhandler(file, errhandler)\n\nMPI_File_get_errhandler man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_get_group-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_get_group","text":"MPI_File_get_group(fh, group)\n\nMPI_File_get_group man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_get_info-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_get_info","text":"MPI_File_get_info(fh, info_used)\n\nMPI_File_get_info man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_get_position-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_get_position","text":"MPI_File_get_position(fh, offset)\n\nMPI_File_get_position man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_get_position_shared-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_get_position_shared","text":"MPI_File_get_position_shared(fh, offset)\n\nMPI_File_get_position_shared man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_get_size-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_get_size","text":"MPI_File_get_size(fh, size)\n\nMPI_File_get_size man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_get_type_extent-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_get_type_extent","text":"MPI_File_get_type_extent(fh, datatype, extent)\n\nMPI_File_get_type_extent man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_get_type_extent_c-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_get_type_extent_c","text":"MPI_File_get_type_extent_c(fh, datatype, extent)\n\nMPI_File_get_type_extent_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_get_view-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_get_view","text":"MPI_File_get_view(fh, disp, etype, filetype, datarep)\n\nMPI_File_get_view man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iread-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iread","text":"MPI_File_iread(fh, buf, count, datatype, request)\n\nMPI_File_iread man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iread_all-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iread_all","text":"MPI_File_iread_all(fh, buf, count, datatype, request)\n\nMPI_File_iread_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iread_all_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iread_all_c","text":"MPI_File_iread_all_c(fh, buf, count, datatype, request)\n\nMPI_File_iread_all_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iread_at-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iread_at","text":"MPI_File_iread_at(fh, offset, buf, count, datatype, request)\n\nMPI_File_iread_at man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iread_at_all-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iread_at_all","text":"MPI_File_iread_at_all(fh, offset, buf, count, datatype, request)\n\nMPI_File_iread_at_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iread_at_all_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iread_at_all_c","text":"MPI_File_iread_at_all_c(fh, offset, buf, count, datatype, request)\n\nMPI_File_iread_at_all_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iread_at_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iread_at_c","text":"MPI_File_iread_at_c(fh, offset, buf, count, 
datatype, request)\n\nMPI_File_iread_at_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iread_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iread_c","text":"MPI_File_iread_c(fh, buf, count, datatype, request)\n\nMPI_File_iread_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iread_shared-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iread_shared","text":"MPI_File_iread_shared(fh, buf, count, datatype, request)\n\nMPI_File_iread_shared man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iread_shared_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iread_shared_c","text":"MPI_File_iread_shared_c(fh, buf, count, datatype, request)\n\nMPI_File_iread_shared_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iwrite-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iwrite","text":"MPI_File_iwrite(fh, buf, count, datatype, request)\n\nMPI_File_iwrite man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iwrite_all-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iwrite_all","text":"MPI_File_iwrite_all(fh, buf, count, datatype, request)\n\nMPI_File_iwrite_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iwrite_all_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iwrite_all_c","text":"MPI_File_iwrite_all_c(fh, buf, count, datatype, request)\n\nMPI_File_iwrite_all_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iwrite_at-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iwrite_at","text":"MPI_File_iwrite_at(fh, offset, buf, count, datatype, request)\n\nMPI_File_iwrite_at man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iwrite_at_all-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iwrite_at_all","text":"MPI_File_iwrite_at_all(fh, offset, buf, count, datatype, request)\n\nMPI_File_iwrite_at_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iwrite_at_all_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iwrite_at_all_c","text":"MPI_File_iwrite_at_all_c(fh, offset, buf, count, datatype, request)\n\nMPI_File_iwrite_at_all_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iwrite_at_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iwrite_at_c","text":"MPI_File_iwrite_at_c(fh, offset, buf, count, datatype, request)\n\nMPI_File_iwrite_at_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iwrite_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iwrite_c","text":"MPI_File_iwrite_c(fh, buf, count, datatype, request)\n\nMPI_File_iwrite_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iwrite_shared-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_iwrite_shared","text":"MPI_File_iwrite_shared(fh, buf, count, datatype, request)\n\nMPI_File_iwrite_shared man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_iwrite_shared_c-NTuple{5, Any}","page":"Low-level 
API","title":"MPI.API.MPI_File_iwrite_shared_c","text":"MPI_File_iwrite_shared_c(fh, buf, count, datatype, request)\n\nMPI_File_iwrite_shared_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_open-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_open","text":"MPI_File_open(comm, filename, amode, info, fh)\n\nMPI_File_open man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_preallocate-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_preallocate","text":"MPI_File_preallocate(fh, size)\n\nMPI_File_preallocate man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read","text":"MPI_File_read(fh, buf, count, datatype, status)\n\nMPI_File_read man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_all-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_all","text":"MPI_File_read_all(fh, buf, count, datatype, status)\n\nMPI_File_read_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_all_begin-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_all_begin","text":"MPI_File_read_all_begin(fh, buf, count, datatype)\n\nMPI_File_read_all_begin man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_all_begin_c-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_all_begin_c","text":"MPI_File_read_all_begin_c(fh, buf, count, datatype)\n\nMPI_File_read_all_begin_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_all_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_all_c","text":"MPI_File_read_all_c(fh, buf, count, datatype, status)\n\nMPI_File_read_all_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_all_end-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_all_end","text":"MPI_File_read_all_end(fh, buf, status)\n\nMPI_File_read_all_end man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_at-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_at","text":"MPI_File_read_at(fh, offset, buf, count, datatype, status)\n\nMPI_File_read_at man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_at_all-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_at_all","text":"MPI_File_read_at_all(fh, offset, buf, count, datatype, status)\n\nMPI_File_read_at_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_at_all_begin-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_at_all_begin","text":"MPI_File_read_at_all_begin(fh, offset, buf, count, datatype)\n\nMPI_File_read_at_all_begin man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_at_all_begin_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_at_all_begin_c","text":"MPI_File_read_at_all_begin_c(fh, offset, buf, count, datatype)\n\nMPI_File_read_at_all_begin_c man page: 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_at_all_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_at_all_c","text":"MPI_File_read_at_all_c(fh, offset, buf, count, datatype, status)\n\nMPI_File_read_at_all_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_at_all_end-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_at_all_end","text":"MPI_File_read_at_all_end(fh, buf, status)\n\nMPI_File_read_at_all_end man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_at_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_at_c","text":"MPI_File_read_at_c(fh, offset, buf, count, datatype, status)\n\nMPI_File_read_at_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_c","text":"MPI_File_read_c(fh, buf, count, datatype, status)\n\nMPI_File_read_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_ordered-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_ordered","text":"MPI_File_read_ordered(fh, buf, count, datatype, status)\n\nMPI_File_read_ordered man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_ordered_begin-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_ordered_begin","text":"MPI_File_read_ordered_begin(fh, buf, count, datatype)\n\nMPI_File_read_ordered_begin man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_ordered_begin_c-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_ordered_begin_c","text":"MPI_File_read_ordered_begin_c(fh, buf, count, datatype)\n\nMPI_File_read_ordered_begin_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_ordered_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_ordered_c","text":"MPI_File_read_ordered_c(fh, buf, count, datatype, status)\n\nMPI_File_read_ordered_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_ordered_end-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_ordered_end","text":"MPI_File_read_ordered_end(fh, buf, status)\n\nMPI_File_read_ordered_end man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_shared-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_shared","text":"MPI_File_read_shared(fh, buf, count, datatype, status)\n\nMPI_File_read_shared man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_read_shared_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_read_shared_c","text":"MPI_File_read_shared_c(fh, buf, count, datatype, status)\n\nMPI_File_read_shared_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_seek-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_seek","text":"MPI_File_seek(fh, offset, whence)\n\nMPI_File_seek man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_seek_shared-Tuple{Any, Any, Any}","page":"Low-level 
API","title":"MPI.API.MPI_File_seek_shared","text":"MPI_File_seek_shared(fh, offset, whence)\n\nMPI_File_seek_shared man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_set_atomicity-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_set_atomicity","text":"MPI_File_set_atomicity(fh, flag)\n\nMPI_File_set_atomicity man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_set_errhandler-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_set_errhandler","text":"MPI_File_set_errhandler(file, errhandler)\n\nMPI_File_set_errhandler man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_set_info-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_set_info","text":"MPI_File_set_info(fh, info)\n\nMPI_File_set_info man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_set_size-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_set_size","text":"MPI_File_set_size(fh, size)\n\nMPI_File_set_size man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_set_view-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_set_view","text":"MPI_File_set_view(fh, disp, etype, filetype, datarep, info)\n\nMPI_File_set_view man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_sync-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_File_sync","text":"MPI_File_sync(fh)\n\nMPI_File_sync man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write","text":"MPI_File_write(fh, buf, count, datatype, status)\n\nMPI_File_write man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_all-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_all","text":"MPI_File_write_all(fh, buf, count, datatype, status)\n\nMPI_File_write_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_all_begin-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_all_begin","text":"MPI_File_write_all_begin(fh, buf, count, datatype)\n\nMPI_File_write_all_begin man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_all_begin_c-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_all_begin_c","text":"MPI_File_write_all_begin_c(fh, buf, count, datatype)\n\nMPI_File_write_all_begin_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_all_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_all_c","text":"MPI_File_write_all_c(fh, buf, count, datatype, status)\n\nMPI_File_write_all_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_all_end-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_all_end","text":"MPI_File_write_all_end(fh, buf, status)\n\nMPI_File_write_all_end man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_at-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_at","text":"MPI_File_write_at(fh, offset, buf, count, datatype, 
status)\n\nMPI_File_write_at man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_at_all-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_at_all","text":"MPI_File_write_at_all(fh, offset, buf, count, datatype, status)\n\nMPI_File_write_at_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_at_all_begin-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_at_all_begin","text":"MPI_File_write_at_all_begin(fh, offset, buf, count, datatype)\n\nMPI_File_write_at_all_begin man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_at_all_begin_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_at_all_begin_c","text":"MPI_File_write_at_all_begin_c(fh, offset, buf, count, datatype)\n\nMPI_File_write_at_all_begin_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_at_all_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_at_all_c","text":"MPI_File_write_at_all_c(fh, offset, buf, count, datatype, status)\n\nMPI_File_write_at_all_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_at_all_end-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_at_all_end","text":"MPI_File_write_at_all_end(fh, buf, status)\n\nMPI_File_write_at_all_end man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_at_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_at_c","text":"MPI_File_write_at_c(fh, offset, buf, count, datatype, status)\n\nMPI_File_write_at_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_c","text":"MPI_File_write_c(fh, buf, count, datatype, status)\n\nMPI_File_write_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_ordered-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_ordered","text":"MPI_File_write_ordered(fh, buf, count, datatype, status)\n\nMPI_File_write_ordered man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_ordered_begin-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_ordered_begin","text":"MPI_File_write_ordered_begin(fh, buf, count, datatype)\n\nMPI_File_write_ordered_begin man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_ordered_begin_c-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_ordered_begin_c","text":"MPI_File_write_ordered_begin_c(fh, buf, count, datatype)\n\nMPI_File_write_ordered_begin_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_ordered_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_ordered_c","text":"MPI_File_write_ordered_c(fh, buf, count, datatype, status)\n\nMPI_File_write_ordered_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_ordered_end-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_ordered_end","text":"MPI_File_write_ordered_end(fh, buf, status)\n\nMPI_File_write_ordered_end man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_shared-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_shared","text":"MPI_File_write_shared(fh, buf, count, datatype, status)\n\nMPI_File_write_shared man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_File_write_shared_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_File_write_shared_c","text":"MPI_File_write_shared_c(fh, buf, count, datatype, status)\n\nMPI_File_write_shared_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Finalize-Tuple{}","page":"Low-level API","title":"MPI.API.MPI_Finalize","text":"MPI_Finalize()\n\nMPI_Finalize man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Finalized-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Finalized","text":"MPI_Finalized(flag)\n\nMPI_Finalized man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Free_mem-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Free_mem","text":"MPI_Free_mem(base)\n\nMPI_Free_mem man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Gather-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Gather","text":"MPI_Gather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm)\n\nMPI_Gather man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Gather_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Gather_c","text":"MPI_Gather_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm)\n\nMPI_Gather_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Gather_init-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Gather_init","text":"MPI_Gather_init(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, info, request)\n\nMPI_Gather_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Gather_init_c-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Gather_init_c","text":"MPI_Gather_init_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, info, request)\n\nMPI_Gather_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Gatherv-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Gatherv","text":"MPI_Gatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm)\n\nMPI_Gatherv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Gatherv_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Gatherv_c","text":"MPI_Gatherv_c(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm)\n\nMPI_Gatherv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Gatherv_init-NTuple{11, Any}","page":"Low-level API","title":"MPI.API.MPI_Gatherv_init","text":"MPI_Gatherv_init(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm, info, request)\n\nMPI_Gatherv_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Gatherv_init_c-NTuple{11, Any}","page":"Low-level API","title":"MPI.API.MPI_Gatherv_init_c","text":"MPI_Gatherv_init_c(sendbuf, sendcount, sendtype, recvbuf, 
recvcounts, displs, recvtype, root, comm, info, request)\n\nMPI_Gatherv_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Get","text":"MPI_Get(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win)\n\nMPI_Get man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get_accumulate-NTuple{12, Any}","page":"Low-level API","title":"MPI.API.MPI_Get_accumulate","text":"MPI_Get_accumulate(origin_addr, origin_count, origin_datatype, result_addr, result_count, result_datatype, target_rank, target_disp, target_count, target_datatype, op, win)\n\nMPI_Get_accumulate man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get_accumulate_c-NTuple{12, Any}","page":"Low-level API","title":"MPI.API.MPI_Get_accumulate_c","text":"MPI_Get_accumulate_c(origin_addr, origin_count, origin_datatype, result_addr, result_count, result_datatype, target_rank, target_disp, target_count, target_datatype, op, win)\n\nMPI_Get_accumulate_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get_address-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Get_address","text":"MPI_Get_address(location, address)\n\nMPI_Get_address man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Get_c","text":"MPI_Get_c(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win)\n\nMPI_Get_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get_count-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Get_count","text":"MPI_Get_count(status, datatype, count)\n\nMPI_Get_count man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get_count_c-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Get_count_c","text":"MPI_Get_count_c(status, datatype, count)\n\nMPI_Get_count_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get_elements-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Get_elements","text":"MPI_Get_elements(status, datatype, count)\n\nMPI_Get_elements man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get_elements_c-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Get_elements_c","text":"MPI_Get_elements_c(status, datatype, count)\n\nMPI_Get_elements_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get_elements_x-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Get_elements_x","text":"MPI_Get_elements_x(status, datatype, count)\n\nMPI_Get_elements_x man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get_library_version-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Get_library_version","text":"MPI_Get_library_version(version, resultlen)\n\nMPI_Get_library_version man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get_processor_name-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Get_processor_name","text":"MPI_Get_processor_name(name, resultlen)\n\nMPI_Get_processor_name man 
page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Get_version-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Get_version","text":"MPI_Get_version(version, subversion)\n\nMPI_Get_version man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Graph_create-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Graph_create","text":"MPI_Graph_create(comm_old, nnodes, indx, edges, reorder, comm_graph)\n\nMPI_Graph_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Graph_get-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Graph_get","text":"MPI_Graph_get(comm, maxindex, maxedges, indx, edges)\n\nMPI_Graph_get man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Graph_map-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Graph_map","text":"MPI_Graph_map(comm, nnodes, indx, edges, newrank)\n\nMPI_Graph_map man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Graph_neighbors-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Graph_neighbors","text":"MPI_Graph_neighbors(comm, rank, maxneighbors, neighbors)\n\nMPI_Graph_neighbors man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Graph_neighbors_count-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Graph_neighbors_count","text":"MPI_Graph_neighbors_count(comm, rank, nneighbors)\n\nMPI_Graph_neighbors_count man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Graphdims_get-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Graphdims_get","text":"MPI_Graphdims_get(comm, nnodes, nedges)\n\nMPI_Graphdims_get man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Grequest_complete-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Grequest_complete","text":"MPI_Grequest_complete(request)\n\nMPI_Grequest_complete man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Grequest_start-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Grequest_start","text":"MPI_Grequest_start(query_fn, free_fn, cancel_fn, extra_state, request)\n\nMPI_Grequest_start man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Group_compare-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Group_compare","text":"MPI_Group_compare(group1, group2, result)\n\nMPI_Group_compare man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Group_difference-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Group_difference","text":"MPI_Group_difference(group1, group2, newgroup)\n\nMPI_Group_difference man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Group_excl-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Group_excl","text":"MPI_Group_excl(group, n, ranks, newgroup)\n\nMPI_Group_excl man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Group_free-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Group_free","text":"MPI_Group_free(group)\n\nMPI_Group_free man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Group_incl-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Group_incl","text":"MPI_Group_incl(group, n, ranks, newgroup)\n\nMPI_Group_incl man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Group_intersection-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Group_intersection","text":"MPI_Group_intersection(group1, group2, newgroup)\n\nMPI_Group_intersection man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Group_range_excl-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Group_range_excl","text":"MPI_Group_range_excl(group, n, ranges, newgroup)\n\nMPI_Group_range_excl man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Group_range_incl-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Group_range_incl","text":"MPI_Group_range_incl(group, n, ranges, newgroup)\n\nMPI_Group_range_incl man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Group_rank-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Group_rank","text":"MPI_Group_rank(group, rank)\n\nMPI_Group_rank man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Group_size-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Group_size","text":"MPI_Group_size(group, size)\n\nMPI_Group_size man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Group_translate_ranks-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Group_translate_ranks","text":"MPI_Group_translate_ranks(group1, n, ranks1, group2, ranks2)\n\nMPI_Group_translate_ranks man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Group_union-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Group_union","text":"MPI_Group_union(group1, group2, newgroup)\n\nMPI_Group_union man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iallgather-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Iallgather","text":"MPI_Iallgather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, request)\n\nMPI_Iallgather man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iallgather_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Iallgather_c","text":"MPI_Iallgather_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, request)\n\nMPI_Iallgather_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iallgatherv-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Iallgatherv","text":"MPI_Iallgatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm, request)\n\nMPI_Iallgatherv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iallgatherv_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Iallgatherv_c","text":"MPI_Iallgatherv_c(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm, request)\n\nMPI_Iallgatherv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iallreduce-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Iallreduce","text":"MPI_Iallreduce(sendbuf, recvbuf, count, 
datatype, op, comm, request)\n\nMPI_Iallreduce man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iallreduce_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Iallreduce_c","text":"MPI_Iallreduce_c(sendbuf, recvbuf, count, datatype, op, comm, request)\n\nMPI_Iallreduce_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ialltoall-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Ialltoall","text":"MPI_Ialltoall(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, request)\n\nMPI_Ialltoall man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ialltoall_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Ialltoall_c","text":"MPI_Ialltoall_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, request)\n\nMPI_Ialltoall_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ialltoallv-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Ialltoallv","text":"MPI_Ialltoallv(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, request)\n\nMPI_Ialltoallv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ialltoallv_c-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Ialltoallv_c","text":"MPI_Ialltoallv_c(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, request)\n\nMPI_Ialltoallv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ialltoallw-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Ialltoallw","text":"MPI_Ialltoallw(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, request)\n\nMPI_Ialltoallw man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ialltoallw_c-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Ialltoallw_c","text":"MPI_Ialltoallw_c(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, request)\n\nMPI_Ialltoallw_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ibarrier-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Ibarrier","text":"MPI_Ibarrier(comm, request)\n\nMPI_Ibarrier man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ibcast-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Ibcast","text":"MPI_Ibcast(buffer, count, datatype, root, comm, request)\n\nMPI_Ibcast man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ibcast_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Ibcast_c","text":"MPI_Ibcast_c(buffer, count, datatype, root, comm, request)\n\nMPI_Ibcast_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ibsend-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Ibsend","text":"MPI_Ibsend(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Ibsend man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ibsend_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Ibsend_c","text":"MPI_Ibsend_c(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Ibsend_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iexscan-NTuple{7, 
Any}","page":"Low-level API","title":"MPI.API.MPI_Iexscan","text":"MPI_Iexscan(sendbuf, recvbuf, count, datatype, op, comm, request)\n\nMPI_Iexscan man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iexscan_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Iexscan_c","text":"MPI_Iexscan_c(sendbuf, recvbuf, count, datatype, op, comm, request)\n\nMPI_Iexscan_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Igather-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Igather","text":"MPI_Igather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, request)\n\nMPI_Igather man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Igather_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Igather_c","text":"MPI_Igather_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, request)\n\nMPI_Igather_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Igatherv-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Igatherv","text":"MPI_Igatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm, request)\n\nMPI_Igatherv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Igatherv_c-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Igatherv_c","text":"MPI_Igatherv_c(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm, request)\n\nMPI_Igatherv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Improbe-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Improbe","text":"MPI_Improbe(source, tag, comm, flag, message, status)\n\nMPI_Improbe man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Imrecv-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Imrecv","text":"MPI_Imrecv(buf, count, datatype, message, request)\n\nMPI_Imrecv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Imrecv_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Imrecv_c","text":"MPI_Imrecv_c(buf, count, datatype, message, request)\n\nMPI_Imrecv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ineighbor_allgather-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Ineighbor_allgather","text":"MPI_Ineighbor_allgather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, request)\n\nMPI_Ineighbor_allgather man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ineighbor_allgather_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Ineighbor_allgather_c","text":"MPI_Ineighbor_allgather_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, request)\n\nMPI_Ineighbor_allgather_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ineighbor_allgatherv-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Ineighbor_allgatherv","text":"MPI_Ineighbor_allgatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm, request)\n\nMPI_Ineighbor_allgatherv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ineighbor_allgatherv_c-NTuple{9, Any}","page":"Low-level 
API","title":"MPI.API.MPI_Ineighbor_allgatherv_c","text":"MPI_Ineighbor_allgatherv_c(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm, request)\n\nMPI_Ineighbor_allgatherv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ineighbor_alltoall-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Ineighbor_alltoall","text":"MPI_Ineighbor_alltoall(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, request)\n\nMPI_Ineighbor_alltoall man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ineighbor_alltoall_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Ineighbor_alltoall_c","text":"MPI_Ineighbor_alltoall_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, request)\n\nMPI_Ineighbor_alltoall_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ineighbor_alltoallv-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Ineighbor_alltoallv","text":"MPI_Ineighbor_alltoallv(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, request)\n\nMPI_Ineighbor_alltoallv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ineighbor_alltoallv_c-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Ineighbor_alltoallv_c","text":"MPI_Ineighbor_alltoallv_c(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, request)\n\nMPI_Ineighbor_alltoallv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ineighbor_alltoallw-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Ineighbor_alltoallw","text":"MPI_Ineighbor_alltoallw(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, request)\n\nMPI_Ineighbor_alltoallw man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ineighbor_alltoallw_c-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Ineighbor_alltoallw_c","text":"MPI_Ineighbor_alltoallw_c(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, request)\n\nMPI_Ineighbor_alltoallw_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Info_create-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Info_create","text":"MPI_Info_create(info)\n\nMPI_Info_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Info_create_env-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Info_create_env","text":"MPI_Info_create_env(argc, argv, info)\n\nMPI_Info_create_env man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Info_delete-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Info_delete","text":"MPI_Info_delete(info, key)\n\nMPI_Info_delete man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Info_dup-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Info_dup","text":"MPI_Info_dup(info, newinfo)\n\nMPI_Info_dup man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Info_free-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Info_free","text":"MPI_Info_free(info)\n\nMPI_Info_free man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Info_get-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Info_get","text":"MPI_Info_get(info, key, valuelen, value, flag)\n\nMPI_Info_get man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Info_get_nkeys-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Info_get_nkeys","text":"MPI_Info_get_nkeys(info, nkeys)\n\nMPI_Info_get_nkeys man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Info_get_nthkey-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Info_get_nthkey","text":"MPI_Info_get_nthkey(info, n, key)\n\nMPI_Info_get_nthkey man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Info_get_string-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Info_get_string","text":"MPI_Info_get_string(info, key, buflen, value, flag)\n\nMPI_Info_get_string man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Info_get_valuelen-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Info_get_valuelen","text":"MPI_Info_get_valuelen(info, key, valuelen, flag)\n\nMPI_Info_get_valuelen man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Info_set-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Info_set","text":"MPI_Info_set(info, key, value)\n\nMPI_Info_set man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Init-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Init","text":"MPI_Init(argc, argv)\n\nMPI_Init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Init_thread-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Init_thread","text":"MPI_Init_thread(argc, argv, required, provided)\n\nMPI_Init_thread man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Initialized-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Initialized","text":"MPI_Initialized(flag)\n\nMPI_Initialized man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Intercomm_create-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Intercomm_create","text":"MPI_Intercomm_create(local_comm, local_leader, peer_comm, remote_leader, tag, newintercomm)\n\nMPI_Intercomm_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Intercomm_create_from_groups-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Intercomm_create_from_groups","text":"MPI_Intercomm_create_from_groups(local_group, local_leader, remote_group, remote_leader, stringtag, info, errhandler, newintercomm)\n\nMPI_Intercomm_create_from_groups man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Intercomm_merge-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Intercomm_merge","text":"MPI_Intercomm_merge(intercomm, high, newintracomm)\n\nMPI_Intercomm_merge man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iprobe-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Iprobe","text":"MPI_Iprobe(source, tag, comm, flag, status)\n\nMPI_Iprobe man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Irecv-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Irecv","text":"MPI_Irecv(buf, count, datatype, source, tag, comm, request)\n\nMPI_Irecv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Irecv_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Irecv_c","text":"MPI_Irecv_c(buf, count, datatype, source, tag, comm, request)\n\nMPI_Irecv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ireduce-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Ireduce","text":"MPI_Ireduce(sendbuf, recvbuf, count, datatype, op, root, comm, request)\n\nMPI_Ireduce man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ireduce_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Ireduce_c","text":"MPI_Ireduce_c(sendbuf, recvbuf, count, datatype, op, root, comm, request)\n\nMPI_Ireduce_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ireduce_scatter-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Ireduce_scatter","text":"MPI_Ireduce_scatter(sendbuf, recvbuf, recvcounts, datatype, op, comm, request)\n\nMPI_Ireduce_scatter man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ireduce_scatter_block-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Ireduce_scatter_block","text":"MPI_Ireduce_scatter_block(sendbuf, recvbuf, recvcount, datatype, op, comm, request)\n\nMPI_Ireduce_scatter_block man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ireduce_scatter_block_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Ireduce_scatter_block_c","text":"MPI_Ireduce_scatter_block_c(sendbuf, recvbuf, recvcount, datatype, op, comm, request)\n\nMPI_Ireduce_scatter_block_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ireduce_scatter_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Ireduce_scatter_c","text":"MPI_Ireduce_scatter_c(sendbuf, recvbuf, recvcounts, datatype, op, comm, request)\n\nMPI_Ireduce_scatter_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Irsend-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Irsend","text":"MPI_Irsend(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Irsend man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Irsend_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Irsend_c","text":"MPI_Irsend_c(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Irsend_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Is_thread_main-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Is_thread_main","text":"MPI_Is_thread_main(flag)\n\nMPI_Is_thread_main man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iscan-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Iscan","text":"MPI_Iscan(sendbuf, recvbuf, count, datatype, op, comm, request)\n\nMPI_Iscan man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iscan_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Iscan_c","text":"MPI_Iscan_c(sendbuf, recvbuf, count, datatype, op, comm, request)\n\nMPI_Iscan_c man 
page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iscatter-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Iscatter","text":"MPI_Iscatter(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, request)\n\nMPI_Iscatter man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iscatter_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Iscatter_c","text":"MPI_Iscatter_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, request)\n\nMPI_Iscatter_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iscatterv-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Iscatterv","text":"MPI_Iscatterv(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm, request)\n\nMPI_Iscatterv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Iscatterv_c-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Iscatterv_c","text":"MPI_Iscatterv_c(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm, request)\n\nMPI_Iscatterv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Isend-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Isend","text":"MPI_Isend(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Isend man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Isend_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Isend_c","text":"MPI_Isend_c(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Isend_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Isendrecv-NTuple{12, Any}","page":"Low-level API","title":"MPI.API.MPI_Isendrecv","text":"MPI_Isendrecv(sendbuf, sendcount, sendtype, dest, sendtag, recvbuf, recvcount, recvtype, source, recvtag, comm, request)\n\nMPI_Isendrecv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Isendrecv_c-NTuple{12, Any}","page":"Low-level API","title":"MPI.API.MPI_Isendrecv_c","text":"MPI_Isendrecv_c(sendbuf, sendcount, sendtype, dest, sendtag, recvbuf, recvcount, recvtype, source, recvtag, comm, request)\n\nMPI_Isendrecv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Isendrecv_replace-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Isendrecv_replace","text":"MPI_Isendrecv_replace(buf, count, datatype, dest, sendtag, source, recvtag, comm, request)\n\nMPI_Isendrecv_replace man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Isendrecv_replace_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Isendrecv_replace_c","text":"MPI_Isendrecv_replace_c(buf, count, datatype, dest, sendtag, source, recvtag, comm, request)\n\nMPI_Isendrecv_replace_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Issend-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Issend","text":"MPI_Issend(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Issend man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Issend_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Issend_c","text":"MPI_Issend_c(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Issend_c man page: 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Keyval_create-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Keyval_create","text":"MPI_Keyval_create(copy_fn, delete_fn, keyval, extra_state)\n\nMPI_Keyval_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Keyval_free-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Keyval_free","text":"MPI_Keyval_free(keyval)\n\nMPI_Keyval_free man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Lookup_name-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Lookup_name","text":"MPI_Lookup_name(service_name, info, port_name)\n\nMPI_Lookup_name man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Mprobe-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Mprobe","text":"MPI_Mprobe(source, tag, comm, message, status)\n\nMPI_Mprobe man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Mrecv-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Mrecv","text":"MPI_Mrecv(buf, count, datatype, message, status)\n\nMPI_Mrecv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Mrecv_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Mrecv_c","text":"MPI_Mrecv_c(buf, count, datatype, message, status)\n\nMPI_Mrecv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_allgather-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_allgather","text":"MPI_Neighbor_allgather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)\n\nMPI_Neighbor_allgather man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_allgather_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_allgather_c","text":"MPI_Neighbor_allgather_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)\n\nMPI_Neighbor_allgather_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_allgather_init-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_allgather_init","text":"MPI_Neighbor_allgather_init(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, info, request)\n\nMPI_Neighbor_allgather_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_allgather_init_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_allgather_init_c","text":"MPI_Neighbor_allgather_init_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, info, request)\n\nMPI_Neighbor_allgather_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_allgatherv-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_allgatherv","text":"MPI_Neighbor_allgatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm)\n\nMPI_Neighbor_allgatherv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_allgatherv_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_allgatherv_c","text":"MPI_Neighbor_allgatherv_c(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm)\n\nMPI_Neighbor_allgatherv_c man page: 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_allgatherv_init-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_allgatherv_init","text":"MPI_Neighbor_allgatherv_init(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm, info, request)\n\nMPI_Neighbor_allgatherv_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_allgatherv_init_c-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_allgatherv_init_c","text":"MPI_Neighbor_allgatherv_init_c(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm, info, request)\n\nMPI_Neighbor_allgatherv_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_alltoall-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_alltoall","text":"MPI_Neighbor_alltoall(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)\n\nMPI_Neighbor_alltoall man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_alltoall_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_alltoall_c","text":"MPI_Neighbor_alltoall_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm)\n\nMPI_Neighbor_alltoall_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_alltoall_init-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_alltoall_init","text":"MPI_Neighbor_alltoall_init(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, info, request)\n\nMPI_Neighbor_alltoall_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_alltoall_init_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_alltoall_init_c","text":"MPI_Neighbor_alltoall_init_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, info, request)\n\nMPI_Neighbor_alltoall_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_alltoallv-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_alltoallv","text":"MPI_Neighbor_alltoallv(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm)\n\nMPI_Neighbor_alltoallv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_alltoallv_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_alltoallv_c","text":"MPI_Neighbor_alltoallv_c(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm)\n\nMPI_Neighbor_alltoallv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_alltoallv_init-NTuple{11, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_alltoallv_init","text":"MPI_Neighbor_alltoallv_init(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, info, request)\n\nMPI_Neighbor_alltoallv_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_alltoallv_init_c-NTuple{11, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_alltoallv_init_c","text":"MPI_Neighbor_alltoallv_init_c(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, info, request)\n\nMPI_Neighbor_alltoallv_init_c man page: 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_alltoallw-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_alltoallw","text":"MPI_Neighbor_alltoallw(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm)\n\nMPI_Neighbor_alltoallw man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_alltoallw_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_alltoallw_c","text":"MPI_Neighbor_alltoallw_c(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm)\n\nMPI_Neighbor_alltoallw_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_alltoallw_init-NTuple{11, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_alltoallw_init","text":"MPI_Neighbor_alltoallw_init(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, info, request)\n\nMPI_Neighbor_alltoallw_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Neighbor_alltoallw_init_c-NTuple{11, Any}","page":"Low-level API","title":"MPI.API.MPI_Neighbor_alltoallw_init_c","text":"MPI_Neighbor_alltoallw_init_c(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, info, request)\n\nMPI_Neighbor_alltoallw_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Op_commutative-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Op_commutative","text":"MPI_Op_commutative(op, commute)\n\nMPI_Op_commutative man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Op_create-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Op_create","text":"MPI_Op_create(user_fn, commute, op)\n\nMPI_Op_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Op_create_c-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Op_create_c","text":"MPI_Op_create_c(user_fn, commute, op)\n\nMPI_Op_create_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Op_free-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Op_free","text":"MPI_Op_free(op)\n\nMPI_Op_free man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Open_port-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Open_port","text":"MPI_Open_port(info, port_name)\n\nMPI_Open_port man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Pack-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Pack","text":"MPI_Pack(inbuf, incount, datatype, outbuf, outsize, position, comm)\n\nMPI_Pack man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Pack_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Pack_c","text":"MPI_Pack_c(inbuf, incount, datatype, outbuf, outsize, position, comm)\n\nMPI_Pack_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Pack_external-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Pack_external","text":"MPI_Pack_external(datarep, inbuf, incount, datatype, outbuf, outsize, position)\n\nMPI_Pack_external man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Pack_external_c-NTuple{7, 
Any}","page":"Low-level API","title":"MPI.API.MPI_Pack_external_c","text":"MPI_Pack_external_c(datarep, inbuf, incount, datatype, outbuf, outsize, position)\n\nMPI_Pack_external_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Pack_external_size-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Pack_external_size","text":"MPI_Pack_external_size(datarep, incount, datatype, size)\n\nMPI_Pack_external_size man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Pack_external_size_c-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Pack_external_size_c","text":"MPI_Pack_external_size_c(datarep, incount, datatype, size)\n\nMPI_Pack_external_size_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Pack_size-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Pack_size","text":"MPI_Pack_size(incount, datatype, comm, size)\n\nMPI_Pack_size man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Pack_size_c-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Pack_size_c","text":"MPI_Pack_size_c(incount, datatype, comm, size)\n\nMPI_Pack_size_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Parrived-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Parrived","text":"MPI_Parrived(request, partition, flag)\n\nMPI_Parrived man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Pready-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Pready","text":"MPI_Pready(partition, request)\n\nMPI_Pready man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Pready_list-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Pready_list","text":"MPI_Pready_list(length, array_of_partitions, request)\n\nMPI_Pready_list man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Pready_range-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Pready_range","text":"MPI_Pready_range(partition_low, partition_high, request)\n\nMPI_Pready_range man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Precv_init-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Precv_init","text":"MPI_Precv_init(buf, partitions, count, datatype, dest, tag, comm, info, request)\n\nMPI_Precv_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Probe-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Probe","text":"MPI_Probe(source, tag, comm, status)\n\nMPI_Probe man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Psend_init-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Psend_init","text":"MPI_Psend_init(buf, partitions, count, datatype, dest, tag, comm, info, request)\n\nMPI_Psend_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Publish_name-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Publish_name","text":"MPI_Publish_name(service_name, info, port_name)\n\nMPI_Publish_name man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Put-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Put","text":"MPI_Put(origin_addr, 
origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win)\n\nMPI_Put man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Put_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Put_c","text":"MPI_Put_c(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win)\n\nMPI_Put_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Query_thread-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Query_thread","text":"MPI_Query_thread(provided)\n\nMPI_Query_thread man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Raccumulate-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Raccumulate","text":"MPI_Raccumulate(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, op, win, request)\n\nMPI_Raccumulate man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Raccumulate_c-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Raccumulate_c","text":"MPI_Raccumulate_c(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, op, win, request)\n\nMPI_Raccumulate_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Recv-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Recv","text":"MPI_Recv(buf, count, datatype, source, tag, comm, status)\n\nMPI_Recv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Recv_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Recv_c","text":"MPI_Recv_c(buf, count, datatype, source, tag, comm, status)\n\nMPI_Recv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Recv_init-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Recv_init","text":"MPI_Recv_init(buf, count, datatype, source, tag, comm, request)\n\nMPI_Recv_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Recv_init_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Recv_init_c","text":"MPI_Recv_init_c(buf, count, datatype, source, tag, comm, request)\n\nMPI_Recv_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce","text":"MPI_Reduce(sendbuf, recvbuf, count, datatype, op, root, comm)\n\nMPI_Reduce man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_c","text":"MPI_Reduce_c(sendbuf, recvbuf, count, datatype, op, root, comm)\n\nMPI_Reduce_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_init-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_init","text":"MPI_Reduce_init(sendbuf, recvbuf, count, datatype, op, root, comm, info, request)\n\nMPI_Reduce_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_init_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_init_c","text":"MPI_Reduce_init_c(sendbuf, recvbuf, count, datatype, op, root, comm, info, request)\n\nMPI_Reduce_init_c man page: 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_local-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_local","text":"MPI_Reduce_local(inbuf, inoutbuf, count, datatype, op)\n\nMPI_Reduce_local man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_local_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_local_c","text":"MPI_Reduce_local_c(inbuf, inoutbuf, count, datatype, op)\n\nMPI_Reduce_local_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_scatter-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_scatter","text":"MPI_Reduce_scatter(sendbuf, recvbuf, recvcounts, datatype, op, comm)\n\nMPI_Reduce_scatter man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_scatter_block-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_scatter_block","text":"MPI_Reduce_scatter_block(sendbuf, recvbuf, recvcount, datatype, op, comm)\n\nMPI_Reduce_scatter_block man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_scatter_block_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_scatter_block_c","text":"MPI_Reduce_scatter_block_c(sendbuf, recvbuf, recvcount, datatype, op, comm)\n\nMPI_Reduce_scatter_block_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_scatter_block_init-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_scatter_block_init","text":"MPI_Reduce_scatter_block_init(sendbuf, recvbuf, recvcount, datatype, op, comm, info, request)\n\nMPI_Reduce_scatter_block_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_scatter_block_init_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_scatter_block_init_c","text":"MPI_Reduce_scatter_block_init_c(sendbuf, recvbuf, recvcount, datatype, op, comm, info, request)\n\nMPI_Reduce_scatter_block_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_scatter_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_scatter_c","text":"MPI_Reduce_scatter_c(sendbuf, recvbuf, recvcounts, datatype, op, comm)\n\nMPI_Reduce_scatter_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_scatter_init-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_scatter_init","text":"MPI_Reduce_scatter_init(sendbuf, recvbuf, recvcounts, datatype, op, comm, info, request)\n\nMPI_Reduce_scatter_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Reduce_scatter_init_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Reduce_scatter_init_c","text":"MPI_Reduce_scatter_init_c(sendbuf, recvbuf, recvcounts, datatype, op, comm, info, request)\n\nMPI_Reduce_scatter_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Register_datarep-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Register_datarep","text":"MPI_Register_datarep(datarep, read_conversion_fn, write_conversion_fn, dtype_file_extent_fn, extra_state)\n\nMPI_Register_datarep man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Register_datarep_c-NTuple{5, Any}","page":"Low-level 
API","title":"MPI.API.MPI_Register_datarep_c","text":"MPI_Register_datarep_c(datarep, read_conversion_fn, write_conversion_fn, dtype_file_extent_fn, extra_state)\n\nMPI_Register_datarep_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Request_free-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Request_free","text":"MPI_Request_free(request)\n\nMPI_Request_free man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Request_get_status-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Request_get_status","text":"MPI_Request_get_status(request, flag, status)\n\nMPI_Request_get_status man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Rget-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Rget","text":"MPI_Rget(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win, request)\n\nMPI_Rget man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Rget_accumulate-NTuple{13, Any}","page":"Low-level API","title":"MPI.API.MPI_Rget_accumulate","text":"MPI_Rget_accumulate(origin_addr, origin_count, origin_datatype, result_addr, result_count, result_datatype, target_rank, target_disp, target_count, target_datatype, op, win, request)\n\nMPI_Rget_accumulate man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Rget_accumulate_c-NTuple{13, Any}","page":"Low-level API","title":"MPI.API.MPI_Rget_accumulate_c","text":"MPI_Rget_accumulate_c(origin_addr, origin_count, origin_datatype, result_addr, result_count, result_datatype, target_rank, target_disp, target_count, target_datatype, op, win, request)\n\nMPI_Rget_accumulate_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Rget_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Rget_c","text":"MPI_Rget_c(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win, request)\n\nMPI_Rget_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Rput-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Rput","text":"MPI_Rput(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win, request)\n\nMPI_Rput man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Rput_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Rput_c","text":"MPI_Rput_c(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win, request)\n\nMPI_Rput_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Rsend-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Rsend","text":"MPI_Rsend(buf, count, datatype, dest, tag, comm)\n\nMPI_Rsend man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Rsend_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Rsend_c","text":"MPI_Rsend_c(buf, count, datatype, dest, tag, comm)\n\nMPI_Rsend_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Rsend_init-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Rsend_init","text":"MPI_Rsend_init(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Rsend_init man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Rsend_init_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Rsend_init_c","text":"MPI_Rsend_init_c(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Rsend_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Scan-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Scan","text":"MPI_Scan(sendbuf, recvbuf, count, datatype, op, comm)\n\nMPI_Scan man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Scan_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Scan_c","text":"MPI_Scan_c(sendbuf, recvbuf, count, datatype, op, comm)\n\nMPI_Scan_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Scan_init-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Scan_init","text":"MPI_Scan_init(sendbuf, recvbuf, count, datatype, op, comm, info, request)\n\nMPI_Scan_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Scan_init_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Scan_init_c","text":"MPI_Scan_init_c(sendbuf, recvbuf, count, datatype, op, comm, info, request)\n\nMPI_Scan_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Scatter-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Scatter","text":"MPI_Scatter(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm)\n\nMPI_Scatter man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Scatter_c-NTuple{8, Any}","page":"Low-level API","title":"MPI.API.MPI_Scatter_c","text":"MPI_Scatter_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm)\n\nMPI_Scatter_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Scatter_init-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Scatter_init","text":"MPI_Scatter_init(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, info, request)\n\nMPI_Scatter_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Scatter_init_c-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Scatter_init_c","text":"MPI_Scatter_init_c(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, info, request)\n\nMPI_Scatter_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Scatterv-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Scatterv","text":"MPI_Scatterv(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm)\n\nMPI_Scatterv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Scatterv_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Scatterv_c","text":"MPI_Scatterv_c(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm)\n\nMPI_Scatterv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Scatterv_init-NTuple{11, Any}","page":"Low-level API","title":"MPI.API.MPI_Scatterv_init","text":"MPI_Scatterv_init(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm, info, request)\n\nMPI_Scatterv_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Scatterv_init_c-NTuple{11, 
Any}","page":"Low-level API","title":"MPI.API.MPI_Scatterv_init_c","text":"MPI_Scatterv_init_c(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm, info, request)\n\nMPI_Scatterv_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Send-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Send","text":"MPI_Send(buf, count, datatype, dest, tag, comm)\n\nMPI_Send man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Send_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Send_c","text":"MPI_Send_c(buf, count, datatype, dest, tag, comm)\n\nMPI_Send_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Send_init-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Send_init","text":"MPI_Send_init(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Send_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Send_init_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Send_init_c","text":"MPI_Send_init_c(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Send_init_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Sendrecv-NTuple{12, Any}","page":"Low-level API","title":"MPI.API.MPI_Sendrecv","text":"MPI_Sendrecv(sendbuf, sendcount, sendtype, dest, sendtag, recvbuf, recvcount, recvtype, source, recvtag, comm, status)\n\nMPI_Sendrecv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Sendrecv_c-NTuple{12, Any}","page":"Low-level API","title":"MPI.API.MPI_Sendrecv_c","text":"MPI_Sendrecv_c(sendbuf, sendcount, sendtype, dest, sendtag, recvbuf, recvcount, recvtype, source, recvtag, comm, status)\n\nMPI_Sendrecv_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Sendrecv_replace-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Sendrecv_replace","text":"MPI_Sendrecv_replace(buf, count, datatype, dest, sendtag, source, recvtag, comm, status)\n\nMPI_Sendrecv_replace man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Sendrecv_replace_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Sendrecv_replace_c","text":"MPI_Sendrecv_replace_c(buf, count, datatype, dest, sendtag, source, recvtag, comm, status)\n\nMPI_Sendrecv_replace_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ssend-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Ssend","text":"MPI_Ssend(buf, count, datatype, dest, tag, comm)\n\nMPI_Ssend man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ssend_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Ssend_c","text":"MPI_Ssend_c(buf, count, datatype, dest, tag, comm)\n\nMPI_Ssend_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ssend_init-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Ssend_init","text":"MPI_Ssend_init(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Ssend_init man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Ssend_init_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Ssend_init_c","text":"MPI_Ssend_init_c(buf, count, datatype, dest, tag, comm, request)\n\nMPI_Ssend_init_c man page: 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Start-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Start","text":"MPI_Start(request)\n\nMPI_Start man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Startall-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Startall","text":"MPI_Startall(count, array_of_requests)\n\nMPI_Startall man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Status_c2f-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Status_c2f","text":"MPI_Status_c2f(c_status, f_status)\n\nMPI_Status_c2f man page: OpenMPI\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Status_f2c-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Status_f2c","text":"MPI_Status_f2c(f_status, c_status)\n\nMPI_Status_f2c man page: OpenMPI\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Status_set_cancelled-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Status_set_cancelled","text":"MPI_Status_set_cancelled(status, flag)\n\nMPI_Status_set_cancelled man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Status_set_elements-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Status_set_elements","text":"MPI_Status_set_elements(status, datatype, count)\n\nMPI_Status_set_elements man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Status_set_elements_x-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Status_set_elements_x","text":"MPI_Status_set_elements_x(status, datatype, count)\n\nMPI_Status_set_elements_x man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Test-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Test","text":"MPI_Test(request, flag, status)\n\nMPI_Test man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Test_cancelled-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Test_cancelled","text":"MPI_Test_cancelled(status, flag)\n\nMPI_Test_cancelled man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Testall-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Testall","text":"MPI_Testall(count, array_of_requests, flag, array_of_statuses)\n\nMPI_Testall man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Testany-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Testany","text":"MPI_Testany(count, array_of_requests, indx, flag, status)\n\nMPI_Testany man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Testsome-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Testsome","text":"MPI_Testsome(incount, array_of_requests, outcount, array_of_indices, array_of_statuses)\n\nMPI_Testsome man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Topo_test-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Topo_test","text":"MPI_Topo_test(comm, status)\n\nMPI_Topo_test man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_commit-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Type_commit","text":"MPI_Type_commit(datatype)\n\nMPI_Type_commit man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_contiguous-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_contiguous","text":"MPI_Type_contiguous(count, oldtype, newtype)\n\nMPI_Type_contiguous man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_contiguous_c-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_contiguous_c","text":"MPI_Type_contiguous_c(count, oldtype, newtype)\n\nMPI_Type_contiguous_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_darray-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_darray","text":"MPI_Type_create_darray(size, rank, ndims, array_of_gsizes, array_of_distribs, array_of_dargs, array_of_psizes, order, oldtype, newtype)\n\nMPI_Type_create_darray man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_darray_c-NTuple{10, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_darray_c","text":"MPI_Type_create_darray_c(size, rank, ndims, array_of_gsizes, array_of_distribs, array_of_dargs, array_of_psizes, order, oldtype, newtype)\n\nMPI_Type_create_darray_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_f90_complex-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_f90_complex","text":"MPI_Type_create_f90_complex(p, r, newtype)\n\nMPI_Type_create_f90_complex man page: OpenMPI\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_f90_integer-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_f90_integer","text":"MPI_Type_create_f90_integer(r, newtype)\n\nMPI_Type_create_f90_integer man page: OpenMPI\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_f90_real-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_f90_real","text":"MPI_Type_create_f90_real(p, r, newtype)\n\nMPI_Type_create_f90_real man page: OpenMPI\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_hindexed-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_hindexed","text":"MPI_Type_create_hindexed(count, array_of_blocklengths, array_of_displacements, oldtype, newtype)\n\nMPI_Type_create_hindexed man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_hindexed_block-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_hindexed_block","text":"MPI_Type_create_hindexed_block(count, blocklength, array_of_displacements, oldtype, newtype)\n\nMPI_Type_create_hindexed_block man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_hindexed_block_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_hindexed_block_c","text":"MPI_Type_create_hindexed_block_c(count, blocklength, array_of_displacements, oldtype, newtype)\n\nMPI_Type_create_hindexed_block_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_hindexed_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_hindexed_c","text":"MPI_Type_create_hindexed_c(count, array_of_blocklengths, array_of_displacements, oldtype, newtype)\n\nMPI_Type_create_hindexed_c man page: 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_hvector-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_hvector","text":"MPI_Type_create_hvector(count, blocklength, stride, oldtype, newtype)\n\nMPI_Type_create_hvector man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_hvector_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_hvector_c","text":"MPI_Type_create_hvector_c(count, blocklength, stride, oldtype, newtype)\n\nMPI_Type_create_hvector_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_indexed_block-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_indexed_block","text":"MPI_Type_create_indexed_block(count, blocklength, array_of_displacements, oldtype, newtype)\n\nMPI_Type_create_indexed_block man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_indexed_block_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_indexed_block_c","text":"MPI_Type_create_indexed_block_c(count, blocklength, array_of_displacements, oldtype, newtype)\n\nMPI_Type_create_indexed_block_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_keyval-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_keyval","text":"MPI_Type_create_keyval(type_copy_attr_fn, type_delete_attr_fn, type_keyval, extra_state)\n\nMPI_Type_create_keyval man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_resized-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_resized","text":"MPI_Type_create_resized(oldtype, lb, extent, newtype)\n\nMPI_Type_create_resized man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_resized_c-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_resized_c","text":"MPI_Type_create_resized_c(oldtype, lb, extent, newtype)\n\nMPI_Type_create_resized_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_struct-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_struct","text":"MPI_Type_create_struct(count, array_of_blocklengths, array_of_displacements, array_of_types, newtype)\n\nMPI_Type_create_struct man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_struct_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_struct_c","text":"MPI_Type_create_struct_c(count, array_of_blocklengths, array_of_displacements, array_of_types, newtype)\n\nMPI_Type_create_struct_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_subarray-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_subarray","text":"MPI_Type_create_subarray(ndims, array_of_sizes, array_of_subsizes, array_of_starts, order, oldtype, newtype)\n\nMPI_Type_create_subarray man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_create_subarray_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_create_subarray_c","text":"MPI_Type_create_subarray_c(ndims, array_of_sizes, array_of_subsizes, array_of_starts, order, oldtype, newtype)\n\nMPI_Type_create_subarray_c man 
page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_delete_attr-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_delete_attr","text":"MPI_Type_delete_attr(datatype, type_keyval)\n\nMPI_Type_delete_attr man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_dup-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_dup","text":"MPI_Type_dup(oldtype, newtype)\n\nMPI_Type_dup man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_extent-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_extent","text":"MPI_Type_extent(datatype, extent)\n\nMPI_Type_extent man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_free-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Type_free","text":"MPI_Type_free(datatype)\n\nMPI_Type_free man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_free_keyval-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Type_free_keyval","text":"MPI_Type_free_keyval(type_keyval)\n\nMPI_Type_free_keyval man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_get_attr-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_get_attr","text":"MPI_Type_get_attr(datatype, type_keyval, attribute_val, flag)\n\nMPI_Type_get_attr man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_get_contents-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_get_contents","text":"MPI_Type_get_contents(datatype, max_integers, max_addresses, max_datatypes, array_of_integers, array_of_addresses, array_of_datatypes)\n\nMPI_Type_get_contents man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_get_contents_c-NTuple{9, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_get_contents_c","text":"MPI_Type_get_contents_c(datatype, max_integers, max_addresses, max_large_counts, max_datatypes, array_of_integers, array_of_addresses, array_of_large_counts, array_of_datatypes)\n\nMPI_Type_get_contents_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_get_envelope-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_get_envelope","text":"MPI_Type_get_envelope(datatype, num_integers, num_addresses, num_datatypes, combiner)\n\nMPI_Type_get_envelope man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_get_envelope_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_get_envelope_c","text":"MPI_Type_get_envelope_c(datatype, num_integers, num_addresses, num_large_counts, num_datatypes, combiner)\n\nMPI_Type_get_envelope_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_get_extent-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_get_extent","text":"MPI_Type_get_extent(datatype, lb, extent)\n\nMPI_Type_get_extent man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_get_extent_c-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_get_extent_c","text":"MPI_Type_get_extent_c(datatype, lb, extent)\n\nMPI_Type_get_extent_c man page: 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_get_extent_x-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_get_extent_x","text":"MPI_Type_get_extent_x(datatype, lb, extent)\n\nMPI_Type_get_extent_x man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_get_name-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_get_name","text":"MPI_Type_get_name(datatype, type_name, resultlen)\n\nMPI_Type_get_name man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_get_true_extent-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_get_true_extent","text":"MPI_Type_get_true_extent(datatype, true_lb, true_extent)\n\nMPI_Type_get_true_extent man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_get_true_extent_c-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_get_true_extent_c","text":"MPI_Type_get_true_extent_c(datatype, true_lb, true_extent)\n\nMPI_Type_get_true_extent_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_get_true_extent_x-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_get_true_extent_x","text":"MPI_Type_get_true_extent_x(datatype, true_lb, true_extent)\n\nMPI_Type_get_true_extent_x man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_hindexed-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_hindexed","text":"MPI_Type_hindexed(count, array_of_blocklengths, array_of_displacements, oldtype, newtype)\n\nMPI_Type_hindexed man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_hvector-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_hvector","text":"MPI_Type_hvector(count, blocklength, stride, oldtype, newtype)\n\nMPI_Type_hvector man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_indexed-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_indexed","text":"MPI_Type_indexed(count, array_of_blocklengths, array_of_displacements, oldtype, newtype)\n\nMPI_Type_indexed man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_indexed_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_indexed_c","text":"MPI_Type_indexed_c(count, array_of_blocklengths, array_of_displacements, oldtype, newtype)\n\nMPI_Type_indexed_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_lb-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_lb","text":"MPI_Type_lb(datatype, displacement)\n\nMPI_Type_lb man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_match_size-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_match_size","text":"MPI_Type_match_size(typeclass, size, datatype)\n\nMPI_Type_match_size man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_set_attr-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_set_attr","text":"MPI_Type_set_attr(datatype, type_keyval, attribute_val)\n\nMPI_Type_set_attr man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_set_name-Tuple{Any, 
Any}","page":"Low-level API","title":"MPI.API.MPI_Type_set_name","text":"MPI_Type_set_name(datatype, type_name)\n\nMPI_Type_set_name man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_size-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_size","text":"MPI_Type_size(datatype, size)\n\nMPI_Type_size man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_size_c-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_size_c","text":"MPI_Type_size_c(datatype, size)\n\nMPI_Type_size_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_size_x-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_size_x","text":"MPI_Type_size_x(datatype, size)\n\nMPI_Type_size_x man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_struct-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_struct","text":"MPI_Type_struct(count, array_of_blocklengths, array_of_displacements, array_of_types, newtype)\n\nMPI_Type_struct man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_ub-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_ub","text":"MPI_Type_ub(datatype, displacement)\n\nMPI_Type_ub man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_vector-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_vector","text":"MPI_Type_vector(count, blocklength, stride, oldtype, newtype)\n\nMPI_Type_vector man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Type_vector_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Type_vector_c","text":"MPI_Type_vector_c(count, blocklength, stride, oldtype, newtype)\n\nMPI_Type_vector_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Unpack-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Unpack","text":"MPI_Unpack(inbuf, insize, position, outbuf, outcount, datatype, comm)\n\nMPI_Unpack man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Unpack_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Unpack_c","text":"MPI_Unpack_c(inbuf, insize, position, outbuf, outcount, datatype, comm)\n\nMPI_Unpack_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Unpack_external-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Unpack_external","text":"MPI_Unpack_external(datarep, inbuf, insize, position, outbuf, outcount, datatype)\n\nMPI_Unpack_external man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Unpack_external_c-NTuple{7, Any}","page":"Low-level API","title":"MPI.API.MPI_Unpack_external_c","text":"MPI_Unpack_external_c(datarep, inbuf, insize, position, outbuf, outcount, datatype)\n\nMPI_Unpack_external_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Unpublish_name-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Unpublish_name","text":"MPI_Unpublish_name(service_name, info, port_name)\n\nMPI_Unpublish_name man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Wait-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Wait","text":"MPI_Wait(request, 
status)\n\nMPI_Wait man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Waitall-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Waitall","text":"MPI_Waitall(count, array_of_requests, array_of_statuses)\n\nMPI_Waitall man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Waitany-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Waitany","text":"MPI_Waitany(count, array_of_requests, indx, status)\n\nMPI_Waitany man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Waitsome-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Waitsome","text":"MPI_Waitsome(incount, array_of_requests, outcount, array_of_indices, array_of_statuses)\n\nMPI_Waitsome man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_allocate-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_allocate","text":"MPI_Win_allocate(size, disp_unit, info, comm, baseptr, win)\n\nMPI_Win_allocate man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_allocate_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_allocate_c","text":"MPI_Win_allocate_c(size, disp_unit, info, comm, baseptr, win)\n\nMPI_Win_allocate_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_allocate_shared-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_allocate_shared","text":"MPI_Win_allocate_shared(size, disp_unit, info, comm, baseptr, win)\n\nMPI_Win_allocate_shared man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_allocate_shared_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_allocate_shared_c","text":"MPI_Win_allocate_shared_c(size, disp_unit, info, comm, baseptr, win)\n\nMPI_Win_allocate_shared_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_attach-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_attach","text":"MPI_Win_attach(win, base, size)\n\nMPI_Win_attach man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_call_errhandler-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_call_errhandler","text":"MPI_Win_call_errhandler(win, errorcode)\n\nMPI_Win_call_errhandler man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_complete-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Win_complete","text":"MPI_Win_complete(win)\n\nMPI_Win_complete man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_create-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_create","text":"MPI_Win_create(base, size, disp_unit, info, comm, win)\n\nMPI_Win_create man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_create_c-NTuple{6, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_create_c","text":"MPI_Win_create_c(base, size, disp_unit, info, comm, win)\n\nMPI_Win_create_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_create_dynamic-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_create_dynamic","text":"MPI_Win_create_dynamic(info, comm, win)\n\nMPI_Win_create_dynamic man page: OpenMPI, 
MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_create_errhandler-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_create_errhandler","text":"MPI_Win_create_errhandler(win_errhandler_fn, errhandler)\n\nMPI_Win_create_errhandler man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_create_keyval-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_create_keyval","text":"MPI_Win_create_keyval(win_copy_attr_fn, win_delete_attr_fn, win_keyval, extra_state)\n\nMPI_Win_create_keyval man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_delete_attr-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_delete_attr","text":"MPI_Win_delete_attr(win, win_keyval)\n\nMPI_Win_delete_attr man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_detach-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_detach","text":"MPI_Win_detach(win, base)\n\nMPI_Win_detach man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_fence-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_fence","text":"MPI_Win_fence(assert, win)\n\nMPI_Win_fence man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_flush-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_flush","text":"MPI_Win_flush(rank, win)\n\nMPI_Win_flush man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_flush_all-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Win_flush_all","text":"MPI_Win_flush_all(win)\n\nMPI_Win_flush_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_flush_local-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_flush_local","text":"MPI_Win_flush_local(rank, win)\n\nMPI_Win_flush_local man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_flush_local_all-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Win_flush_local_all","text":"MPI_Win_flush_local_all(win)\n\nMPI_Win_flush_local_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_free-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Win_free","text":"MPI_Win_free(win)\n\nMPI_Win_free man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_free_keyval-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Win_free_keyval","text":"MPI_Win_free_keyval(win_keyval)\n\nMPI_Win_free_keyval man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_get_attr-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_get_attr","text":"MPI_Win_get_attr(win, win_keyval, attribute_val, flag)\n\nMPI_Win_get_attr man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_get_errhandler-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_get_errhandler","text":"MPI_Win_get_errhandler(win, errhandler)\n\nMPI_Win_get_errhandler man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_get_group-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_get_group","text":"MPI_Win_get_group(win, 
group)\n\nMPI_Win_get_group man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_get_info-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_get_info","text":"MPI_Win_get_info(win, info_used)\n\nMPI_Win_get_info man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_get_name-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_get_name","text":"MPI_Win_get_name(win, win_name, resultlen)\n\nMPI_Win_get_name man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_lock-NTuple{4, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_lock","text":"MPI_Win_lock(lock_type, rank, assert, win)\n\nMPI_Win_lock man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_lock_all-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_lock_all","text":"MPI_Win_lock_all(assert, win)\n\nMPI_Win_lock_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_post-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_post","text":"MPI_Win_post(group, assert, win)\n\nMPI_Win_post man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_set_attr-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_set_attr","text":"MPI_Win_set_attr(win, win_keyval, attribute_val)\n\nMPI_Win_set_attr man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_set_errhandler-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_set_errhandler","text":"MPI_Win_set_errhandler(win, errhandler)\n\nMPI_Win_set_errhandler man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_set_info-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_set_info","text":"MPI_Win_set_info(win, info)\n\nMPI_Win_set_info man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_set_name-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_set_name","text":"MPI_Win_set_name(win, win_name)\n\nMPI_Win_set_name man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_shared_query-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_shared_query","text":"MPI_Win_shared_query(win, rank, size, disp_unit, baseptr)\n\nMPI_Win_shared_query man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_shared_query_c-NTuple{5, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_shared_query_c","text":"MPI_Win_shared_query_c(win, rank, size, disp_unit, baseptr)\n\nMPI_Win_shared_query_c man page: MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_start-Tuple{Any, Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_start","text":"MPI_Win_start(group, assert, win)\n\nMPI_Win_start man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_sync-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Win_sync","text":"MPI_Win_sync(win)\n\nMPI_Win_sync man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_test-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_test","text":"MPI_Win_test(win, 
flag)\n\nMPI_Win_test man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_unlock-Tuple{Any, Any}","page":"Low-level API","title":"MPI.API.MPI_Win_unlock","text":"MPI_Win_unlock(rank, win)\n\nMPI_Win_unlock man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_unlock_all-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Win_unlock_all","text":"MPI_Win_unlock_all(win)\n\nMPI_Win_unlock_all man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Win_wait-Tuple{Any}","page":"Low-level API","title":"MPI.API.MPI_Win_wait","text":"MPI_Win_wait(win)\n\nMPI_Win_wait man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Wtick-Tuple{}","page":"Low-level API","title":"MPI.API.MPI_Wtick","text":"MPI_Wtick()\n\nMPI_Wtick man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/api/#MPI.API.MPI_Wtime-Tuple{}","page":"Low-level API","title":"MPI.API.MPI_Wtime","text":"MPI_Wtime()\n\nMPI_Wtime man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"method"},{"location":"reference/collective/#Collective-communication","page":"Collective communication","title":"Collective communication","text":"","category":"section"},{"location":"reference/collective/#Synchronization","page":"Collective communication","title":"Synchronization","text":"","category":"section"},{"location":"reference/collective/","page":"Collective communication","title":"Collective communication","text":"MPI.Barrier\nMPI.Ibarrier","category":"page"},{"location":"reference/collective/#MPI.Barrier","page":"Collective communication","title":"MPI.Barrier","text":"Barrier(comm::Comm)\n\nBlocks until comm is synchronized.\n\nIf comm is an intracommunicator, then it blocks until all members of the group have called it.\n\nIf comm is an intercommunicator, then it blocks until all members of the other group have called it.\n\nExternal links\n\nMPI_Barrier man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Ibarrier","page":"Collective communication","title":"MPI.Ibarrier","text":"Ibarrier(comm::Comm[, req::AbstractRequest = Request()])\n\nStarts a nonblocking barrier synchronization on comm, returning a request.\n\nIf comm is an intracommunicator, the request completes once all members of the group have called it.\n\nIf comm is an intercommunicator, the request completes once all members of the other group have called it.\n\nExternal links\n\nMPI_Ibarrier man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#Broadcast","page":"Collective communication","title":"Broadcast","text":"","category":"section"},{"location":"reference/collective/","page":"Collective communication","title":"Collective communication","text":"MPI.Bcast!\nMPI.Bcast\nMPI.bcast","category":"page"},{"location":"reference/collective/#MPI.Bcast!","page":"Collective communication","title":"MPI.Bcast!","text":"Bcast!(buf, comm::Comm; root::Integer=0)\n\nBroadcast the buffer buf from root to all processes in comm.\n\nSee also\n\nbcast\n\nExternal links\n\nMPI_Bcast man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Bcast","page":"Collective communication","title":"MPI.Bcast","text":"Bcast(obj, root::Integer, comm::Comm)\n\nBroadcast the object obj from root to all processes in comm. Returns the object. Currently obj must be isbits, i.e. 
isbitstype(typeof(obj)) == true.\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.bcast","page":"Collective communication","title":"MPI.bcast","text":"bcast(obj, comm::Comm; root::Integer=0)\n\nBroadcast the object obj from rank root to all processes on comm. This is able to handle arbitrary data.\n\nSee also\n\nBcast!\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#Gather/Scatter","page":"Collective communication","title":"Gather/Scatter","text":"","category":"section"},{"location":"reference/collective/#Gather","page":"Collective communication","title":"Gather","text":"","category":"section"},{"location":"reference/collective/","page":"Collective communication","title":"Collective communication","text":"MPI.Gather!\nMPI.Gather\nMPI.gather\nMPI.Gatherv!\nMPI.Allgather!\nMPI.Allgather\nMPI.Allgatherv!\nMPI.Neighbor_allgather!\nMPI.Neighbor_allgatherv!","category":"page"},{"location":"reference/collective/#MPI.Gather!","page":"Collective communication","title":"MPI.Gather!","text":"Gather!(sendbuf, recvbuf, comm::Comm; root::Integer=0)\n\nEach process sends the contents of the buffer sendbuf to the root process. The root process stores elements in rank order in the buffer recvbuf.\n\nsendbuf should be a Buffer object, or any object for which Buffer_send is defined, and should be the same length on all processes.\n\nOn the root process, sendbuf can be MPI.IN_PLACE, in which case the corresponding entries in recvbuf are assumed to be already in place (this corresponds to the behaviour of MPI_IN_PLACE in MPI_Gather). For example:\n\nif root == MPI.Comm_rank(comm)\n MPI.Gather!(MPI.IN_PLACE, UBuffer(buf, count), comm; root=root)\nelse\n MPI.Gather!(buf, nothing, comm; root=root)\nend\n\nrecvbuf on the root process should be a UBuffer, or can be an AbstractArray if the length can be determined from sendbuf. On non-root processes it is ignored and can be nothing.\n\nSee also\n\nGather for the allocating operation.\nGatherv! if the number of elements varies between processes.\nAllgather! to send the result to all processes.\n\nExternal links\n\nMPI_Gather man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Gather","page":"Collective communication","title":"MPI.Gather","text":"Gather(sendbuf, comm::Comm; root=0)\n\nEach process sends the contents of the buffer sendbuf to the root process. The root allocates the output buffer and stores elements in rank order.\n\nsendbuf can be an AbstractArray or a scalar, and should be the same length on all processes.\n\nSee also\n\nGather! for the mutating operation.\nGatherv! if the number of elements varies between processes.\nAllgather!/Allgather to send the result to all processes.\n\nExternal links\n\nMPI_Gather man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.gather","page":"Collective communication","title":"MPI.gather","text":"gather(obj, comm::Comm; root::Integer=0)\n\nGather the objects obj from all ranks on comm to rank root. This is able to handle arbitrary data. On root, it returns a vector of the objects, and nothing otherwise.\n\nSee also\n\nGather!\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Gatherv!","page":"Collective communication","title":"MPI.Gatherv!","text":"Gatherv!(sendbuf, recvbuf, comm::Comm; root::Integer=0)\n\nEach process sends the contents of the buffer sendbuf to the root process. 
The root stores elements in rank order in the buffer recvbuf.\n\nsendbuf should be a Buffer object, or any object for which Buffer_send is defined; the length may differ between processes.\n\nOn the root process, sendbuf can be MPI.IN_PLACE, in which case the corresponding entries in recvbuf are assumed to be already in place. For example\n\nif root == MPI.Comm_rank(comm)\n Gatherv!(MPI.IN_PLACE, VBuffer(buf, counts), comm; root=root)\nelse\n Gatherv!(buf, nothing, comm; root=root)\nend\n\nrecvbuf on the root process should be a VBuffer, or can be an AbstractArray if the length can be determined from sendbuf. On non-root processes it is ignored and can be nothing.\n\nSee also\n\nGather! if the number of elements is the same between processes.\nAllgatherv! to send the result to all processes.\n\nExternal links\n\nMPI_Gatherv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Allgather!","page":"Collective communication","title":"MPI.Allgather!","text":"Allgather!(sendbuf, recvbuf::UBuffer, comm::Comm)\nAllgather!(sendrecvbuf::UBuffer, comm::Comm)\n\nEach process sends the contents of sendbuf to the other processes, the result of which is stored in rank order into recvbuf.\n\nsendbuf can be a Buffer object, or any object for which Buffer_send is defined, and should be the same length on all processes.\n\nrecvbuf can be a UBuffer, or can be an AbstractArray if the length can be determined from sendbuf.\n\nIf only one buffer sendrecvbuf is provided, then on each process the data to send is assumed to be in the area where it would receive its own contribution.\n\nSee also\n\nAllgather for the allocating operation\nAllgatherv! if the number of elements varies between processes.\nGather! to send only to a single root process\n\nExternal links\n\nMPI_Allgather man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Allgather","page":"Collective communication","title":"MPI.Allgather","text":"Allgather(sendbuf, comm)\n\nEach process sends the contents of sendbuf to the other processes, each of which allocates the output buffer and stores the results in rank order.\n\nsendbuf can be an AbstractArray or a scalar, and should be the same size on all processes.\n\nSee also\n\nAllgather! for the mutating operation\nAllgatherv! if the number of elements varies between processes.\nGather! to send only to a single root process\n\nExternal links\n\nMPI_Allgather man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Allgatherv!","page":"Collective communication","title":"MPI.Allgatherv!","text":"Allgatherv!(sendbuf, recvbuf::VBuffer, comm::Comm)\nAllgatherv!(sendrecvbuf::VBuffer, comm::Comm)\n\nEach process sends the contents of sendbuf to all other processes. Each process stores the received data in the VBuffer recvbuf.\n\nsendbuf can be a Buffer object, or any object for which Buffer_send is defined.\n\nIf only one buffer sendrecvbuf is provided, then for each process, the data to be sent is taken from the interval of recvbuf where it would store its own data.\n\nSee also\n\nGatherv! 
to send the result to a single process\n\nExternal links\n\nMPI_Allgatherv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Neighbor_allgather!","page":"Collective communication","title":"MPI.Neighbor_allgather!","text":"Neighbor_allgather!(sendbuf::Buffer, recvbuf::UBuffer, comm::Comm)\n\nPerform an all-gather communication along the directed edges of the graph.\n\nSee also MPI.Allgather!.\n\nExternal links\n\nMPI_Neighbor_allgather man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Neighbor_allgatherv!","page":"Collective communication","title":"MPI.Neighbor_allgatherv!","text":"Neighbor_allgatherv!(sendbuf::Buffer, recvbuf::VBuffer, comm::Comm)\n\nPerform an all-gather communication along the directed edges of the graph with variable sized data.\n\nSee also MPI.Allgatherv!.\n\nExternal links\n\nMPI_Neighbor_allgatherv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#Scatter","page":"Collective communication","title":"Scatter","text":"","category":"section"},{"location":"reference/collective/","page":"Collective communication","title":"Collective communication","text":"MPI.Scatter!\nMPI.Scatter\nMPI.scatter\nMPI.Scatterv!","category":"page"},{"location":"reference/collective/#MPI.Scatter!","page":"Collective communication","title":"MPI.Scatter!","text":"Scatter!(sendbuf::Union{UBuffer,Nothing}, recvbuf, comm::Comm;\n root::Integer=0)\n\nSplits the buffer sendbuf in the root process into Comm_size(comm) chunks, sending the j-th chunk to the process of rank j-1 into the recvbuf buffer.\n\nsendbuf on the root process should be a UBuffer (an Array can also be passed directly if the sizes can be determined from recvbuf). On non-root processes it is ignored, and nothing can be passed instead.\n\nrecvbuf is a Buffer object, or any object for which Buffer(recvbuf) is defined. On the root process, it can also be MPI.IN_PLACE, in which case it is unmodified. For example:\n\nif root == MPI.Comm_rank(comm)\n MPI.Scatter!(UBuffer(buf, count), MPI.IN_PLACE, comm; root=root)\nelse\n MPI.Scatter!(nothing, buf, comm; root=root)\nend\n\nSee also\n\nScatterv! if the number of elements varies between processes.\n\nExternal links\n\nMPI_Scatter man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Scatter","page":"Collective communication","title":"MPI.Scatter","text":"Scatter(sendbuf, T, comm::Comm; root::Integer=0)\n\nSplits the buffer sendbuf in the root process into Comm_size(comm) chunks, sending the j-th chunk to the process of rank j-1 as an object of type T.\n\nSee also\n\nScatter!\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.scatter","page":"Collective communication","title":"MPI.scatter","text":"scatter(objs::Union{AbstractVector, Nothing}, comm::Comm; root::Integer=0)\n\nSends the j-th element of objs in the root process to rank j-1 and returns it. On root, objs is expected to be a Comm_size(comm)-element vector. 
On the other ranks, it is ignored and can be nothing.\n\nThis method can handle arbitrary data.\n\nSee also\n\nScatter!\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Scatterv!","page":"Collective communication","title":"MPI.Scatterv!","text":"Scatterv!(sendbuf, recvbuf, comm::Comm; root::Integer=0)\n\nSplits the buffer sendbuf in the root process into Comm_size(comm) chunks and sends the jth chunk to the process of rank j-1 into the recvbuf buffer.\n\nsendbuf on the root process should be a VBuffer. On non-root processes it is ignored, and nothing can be passed instead.\n\nrecvbuf is a Buffer object, or any object for which Buffer(recvbuf) is defined. On the root process, it can also be MPI.IN_PLACE, in which case it is unmodified. For example:\n\nif root == MPI.Comm_rank(comm)\n MPI.Scatterv!(VBuffer(buf, counts), MPI.IN_PLACE, comm; root=root)\nelse\n MPI.Scatterv!(nothing, buf, comm; root=root)\nend\n\nSee also\n\nScatter! if the number of elements are the same for all processes\n\nExternal links\n\nMPI_Scatterv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#All-to-all","page":"Collective communication","title":"All-to-all","text":"","category":"section"},{"location":"reference/collective/","page":"Collective communication","title":"Collective communication","text":"MPI.Alltoall!\nMPI.Alltoall\nMPI.Alltoallv!\nMPI.Neighbor_alltoall!\nMPI.Neighbor_alltoallv!","category":"page"},{"location":"reference/collective/#MPI.Alltoall!","page":"Collective communication","title":"MPI.Alltoall!","text":"Alltoall!(sendbuf::UBuffer, recvbuf::UBuffer, comm::Comm)\nAlltoall!(sendrecvbuf::UBuffer, comm::Comm)\n\nEvery process divides the UBuffer sendbuf into Comm_size(comm) chunks of equal size, sending the j-th chunk to the process of rank j-1. Every process stores the data received from rank j-1 process in the j-th chunk of the buffer recvbuf.\n\nrank send buf recv buf\n---- -------- --------\n 0 a,b,c,d,e,f Alltoall a,b,A,B,α,β\n 1 A,B,C,D,E,F ----------------> c,d,C,D,γ,ψ\n 2 α,β,γ,ψ,η,ν e,f,E,F,η,ν\n\nIf only one buffer sendrecvbuf is used, then data is overwritten.\n\nSee also\n\nAlltoall for the allocating operation\n\nExternal links\n\nMPI_Alltoall man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Alltoall","page":"Collective communication","title":"MPI.Alltoall","text":"Alltoall(sendbuf::UBuffer, comm::Comm)\n\nEvery process divides the UBuffer sendbuf into Comm_size(comm) chunks of equal size, sending the j-th chunk to the process of rank j-1. Every process allocates the output buffer and stores the data received from the process on rank j-1 in the j-th chunk.\n\nrank send buf recv buf\n---- -------- --------\n 0 a,b,c,d,e,f Alltoall a,b,A,B,α,β\n 1 A,B,C,D,E,F ----------------> c,d,C,D,γ,ψ\n 2 α,β,γ,ψ,η,ν e,f,E,F,η,ν\n\nSee also\n\nAlltoall! 
for the mutating operation\n\nExternal links\n\nMPI_Alltoall man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Alltoallv!","page":"Collective communication","title":"MPI.Alltoallv!","text":"Alltoallv!(sendbuf::VBuffer, recvbuf::VBuffer, comm::Comm)\n\nSimilar to Alltoall!, except with different size chunks per process.\n\nSee also\n\nVBuffer\n\nExternal links\n\nMPI_Alltoallv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Neighbor_alltoall!","page":"Collective communication","title":"MPI.Neighbor_alltoall!","text":"Neighbor_alltoall!(sendbuf::UBuffer, recvbuf::UBuffer, comm::Comm)\n\nPerform an all-to-all communication along the directed edges of the graph with fixed size messages.\n\nSee also MPI.Alltoall!.\n\nExternal links\n\nMPI_Neighbor_alltoall man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Neighbor_alltoallv!","page":"Collective communication","title":"MPI.Neighbor_alltoallv!","text":"Neighbor_alltoallv!(sendbuf::VBuffer, recvbuf::VBuffer, graph_comm::Comm)\n\nPerform an all-to-all communication along the directed edges of the graph with variable size messages.\n\nSee also MPI.Alltoallv!.\n\nExternal links\n\nMPI_Neighbor_alltoallv man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#Reduce/Scan","page":"Collective communication","title":"Reduce/Scan","text":"","category":"section"},{"location":"reference/collective/","page":"Collective communication","title":"Collective communication","text":"MPI.Reduce!\nMPI.Reduce\nMPI.Allreduce!\nMPI.Allreduce\nMPI.Scan!\nMPI.Scan\nMPI.Exscan!\nMPI.Exscan","category":"page"},{"location":"reference/collective/#MPI.Reduce!","page":"Collective communication","title":"MPI.Reduce!","text":"Reduce!(sendbuf, recvbuf, op, comm::Comm; root::Integer=0)\nReduce!(sendrecvbuf, op, comm::Comm; root::Integer=0)\n\nPerforms elementwise reduction using the operator op on the buffer sendbuf and stores the result in recvbuf on the process of rank root.\n\nOn non-root processes recvbuf is ignored, and can be nothing.\n\nTo perform the reduction in place, provide a single buffer sendrecvbuf.\n\nSee also\n\nReduce to handle allocation of the output buffer.\nAllreduce!/Allreduce to send reduction to all ranks.\nOp for details on reduction operators.\n\nExternal links\n\nMPI_Reduce man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Reduce","page":"Collective communication","title":"MPI.Reduce","text":"recvbuf = Reduce(sendbuf, op, comm::Comm; root::Integer=0)\n\nPerforms elementwise reduction using the operator op on the buffer sendbuf, returning the result recvbuf on the process of rank root, and nothing on non-root processes.\n\nsendbuf can also be a scalar, in which case recvbuf will be a value of the same type.\n\nSee also\n\nReduce! for mutating and in-place operations\nAllreduce!/Allreduce to send reduction to all ranks.\nOp for details on reduction operators.\n\nExternal links\n\nMPI_Reduce man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Allreduce!","page":"Collective communication","title":"MPI.Allreduce!","text":"Allreduce!(sendbuf, recvbuf, op, comm::Comm)\nAllreduce!(sendrecvbuf, op, comm::Comm)\n\nPerforms elementwise reduction using the operator op on the buffer sendbuf, storing the result in the recvbuf of all processes in the group.\n\nAllreduce! 
is equivalent to a Reduce! operation followed by a Bcast!, but can lead to better performance.\n\nIf only one sendrecvbuf buffer is provided, then the operation is performed in-place.\n\nSee also\n\nAllreduce, to handle allocation of the output buffer.\nReduce!/Reduce to send reduction to a single rank.\nOp for details on reduction operators.\n\nExternal links\n\nMPI_Allreduce man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Allreduce","page":"Collective communication","title":"MPI.Allreduce","text":"recvbuf = Allreduce(sendbuf, op, comm)\n\nPerforms elementwise reduction using the operator op on the buffer sendbuf, returning the result in the recvbuf of all processes in the group.\n\nsendbuf can also be a scalar, in which case recvbuf will be a value of the same type.\n\nSee also\n\nAllreduce! for mutating or in-place operations.\nReduce!/Reduce to send reduction to a single rank.\nOp for details on reduction operators.\n\nExternal links\n\nMPI_Allreduce man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Scan!","page":"Collective communication","title":"MPI.Scan!","text":"Scan!(sendbuf, recvbuf, op, comm::Comm)\nScan!(sendrecvbuf, op, comm::Comm)\n\nInclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i.\n\nIf only a single buffer sendrecvbuf is provided, then operations will be performed in-place.\n\nSee also\n\nScan to handle allocation of the output buffer\nExscan!/Exscan for exclusive scan\nOp for details on reduction operators.\n\nExternal links\n\nMPI_Scan man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Scan","page":"Collective communication","title":"MPI.Scan","text":"recvbuf = Scan(sendbuf, op, comm::Comm)\n\nInclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i.\n\nsendbuf can also be a scalar, in which case recvbuf will also be a scalar of the same type.\n\nSee also\n\nScan! for mutating or in-place operations\nExscan!/Exscan for exclusive scan\nOp for details on reduction operators.\n\nExternal links\n\nMPI_Scan man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Exscan!","page":"Collective communication","title":"MPI.Exscan!","text":"Exscan!(sendbuf, recvbuf, op, comm::Comm)\nExscan!(sendrecvbuf, op, comm::Comm)\n\nExclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i-1. The recvbuf on rank 0 is ignored, and the recvbuf on rank 1 will contain the contents of sendbuf on rank 0.\n\nIf only a single sendrecvbuf is provided, then operations are performed in-place, and buf on rank 0 will remain unchanged.\n\nSee also\n\nExscan to handle allocation of the output buffer\nScan!/Scan for inclusive scan\nOp for details on reduction operators.\n\nExternal links\n\nMPI_Exscan man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"reference/collective/#MPI.Exscan","page":"Collective communication","title":"MPI.Exscan","text":"recvbuf = Exscan(sendbuf, op, comm::Comm)\n\nExclusive prefix reduction (analogous to accumulate in Julia): recvbuf on rank i will contain the result of reducing sendbuf by op from ranks 0:i-1. 
The recvbuf on rank 0 is undefined, and the recvbuf on rank 1 will contain the contents of sendbuf on rank 0.\n\nSee also\n\nExscan! for mutating and in-place operations\nScan!/Scan for inclusive scan\nOp for details on reduction operators.\n\nExternal links\n\nMPI_Exscan man page: OpenMPI, MPICH\n\n\n\n\n\n","category":"function"},{"location":"#MPI.jl","page":"MPI.jl","title":"MPI.jl","text":"","category":"section"},{"location":"","page":"MPI.jl","title":"MPI.jl","text":"This is a basic Julia wrapper for the portable message passing system Message Passing Interface (MPI). Inspiration is taken from mpi4py, although we generally follow the C and not the C++ MPI API. (The C++ MPI API is deprecated.)","category":"page"},{"location":"","page":"MPI.jl","title":"MPI.jl","text":"If you use MPI.jl in your work, please cite the following paper:","category":"page"},{"location":"","page":"MPI.jl","title":"MPI.jl","text":"Simon Byrne, Lucas C. Wilcox, and Valentin Churavy (2021) \"MPI.jl: Julia bindings for the Message Passing Interface\". JuliaCon Proceedings, 1(1), 68, doi: 10.21105/jcon.00068","category":"page"}]
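To complement the Scatterv! entry above, the following is a minimal, self-contained sketch of distributing unequal chunks from the root rank. The script name, launch command, element counts, and values are illustrative assumptions; only the MPI.Scatterv!/MPI.VBuffer calls come from the documented API.

```julia
# Hedged sketch: assumes an MPI launcher is available, e.g. `mpiexec -n 3 julia scatterv.jl`
using MPI
MPI.Init()

comm  = MPI.COMM_WORLD
rank  = MPI.Comm_rank(comm)
nproc = MPI.Comm_size(comm)
root  = 0

counts = [i + 1 for i in 0:nproc-1]              # rank j receives j+1 elements
recv   = Vector{Float64}(undef, counts[rank + 1])

if rank == root
    send = collect(1.0:sum(counts))              # the full data set lives only on the root
    MPI.Scatterv!(MPI.VBuffer(send, counts), recv, comm; root=root)
else
    MPI.Scatterv!(nothing, recv, comm; root=root) # non-root ranks pass nothing as sendbuf
end

println("rank $rank received $recv")
```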
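The fixed-size all-to-all exchange documented above can be exercised with a short script like the sketch below; the process count of 3 and chunk size of 2 mirror the table in the docstring, while the script name and launch command are assumptions.

```julia
# Hedged sketch: run with e.g. `mpiexec -n 3 julia alltoall.jl`
using MPI
MPI.Init()

comm  = MPI.COMM_WORLD
rank  = MPI.Comm_rank(comm)
nproc = MPI.Comm_size(comm)

# Each rank contributes 2 elements for every destination rank.
send = [10.0 * rank + i for i in 1:2*nproc]

# Allocating form: chunk j of `send` goes to rank j-1; chunk j of the result came from rank j-1.
recv = MPI.Alltoall(MPI.UBuffer(send, 2), comm)

println("rank $rank sent $send, received $recv")
```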
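A minimal sketch of the reduction entries above: a scalar Allreduce visible on every rank, and an array Reduce collected only on the root. The values and the choice of the `+` operator are arbitrary illustrations.

```julia
# Hedged sketch: any number of ranks works, e.g. `mpiexec -n 4 julia reduce.jl`
using MPI
MPI.Init()

comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

total = MPI.Allreduce(rank, +, comm)   # sum of all rank ids, available on every rank

v = fill(Float64(rank), 4)
s = MPI.Reduce(v, +, comm; root=0)     # elementwise sum on rank 0, `nothing` elsewhere

if rank == 0
    println("sum of ranks = $total, reduced vector = $s")
end
```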
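Finally, the inclusive and exclusive prefix reductions can be compared side by side. The sketch below prints both, keeping in mind that the Exscan result on rank 0 is undefined; the launch command is again an assumption.

```julia
# Hedged sketch: e.g. `mpiexec -n 4 julia scan.jl`
using MPI
MPI.Init()

comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

incl = MPI.Scan(rank + 1, +, comm)    # reduces ranks 0:i   -> 1, 3, 6, 10, ...
excl = MPI.Exscan(rank + 1, +, comm)  # reduces ranks 0:i-1 -> (undefined on rank 0), 1, 3, 6, ...

println("rank $rank: inclusive = $incl, exclusive = $excl")
```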