MPI_Comm_spawn performance regression on Mac ARM #7707

@scott-routledge2

Description

While upgrading from MPICH 4.1.3 to 4.3.2, I noticed that MPI_Comm_spawn got significantly slower on my M2 Mac.

I created a small C++ reproducer:

// spawner.cpp
#include <mpi.h>
#include <vector>
#include <chrono>
#include <iostream>

int main(int argc, char** argv) {
    auto start = std::chrono::high_resolution_clock::now();

    MPI_Init(&argc, &argv);

    const int num_workers = 2;
    MPI_Comm intercomm;
    std::vector<int> errcodes(num_workers);

    MPI_Comm_spawn(
        "./worker",
        MPI_ARGV_NULL,
        num_workers,
        MPI_INFO_NULL,
        0,
        MPI_COMM_WORLD,
        &intercomm,
        errcodes.data()
    );
    MPI_Finalize();
    auto end = std::chrono::high_resolution_clock::now();
    std::chrono::duration<double> elapsed = end - start;
    std::cout << "Spawned " << num_workers << " workers in " << elapsed.count() << " seconds" << std::endl;
    return 0;
}
// worker.cpp
#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    std::cout << "Hello from worker process!" << std::endl;

    MPI_Finalize();
    return 0;
}
mpicxx -O2 -o spawner spawner.cpp
mpicxx -O2 -o worker worker.cpp
./spawner

Output (4.3.2):

Hello from worker process!
Hello from worker process!
Spawned 2 workers in 11.7658 seconds

Compared to (4.1.3):

Hello from worker process!
Hello from worker process!
Spawned 2 workers in 0.378856 seconds

For both the new and old versions, I am installing from a release tarball using the default configuration options, i.e.
./configure --prefix=$MPI_HOME. The output of mpiexec --version is below:

% mpiexec --version
HYDRA build details:
    Version:                                 4.3.2
    Release Date:                            Mon Oct  6 11:14:20 AM CDT 2025
    CC:                              gcc        
    Configure options:                       '--disable-option-checking' '--prefix=/Users/scottroutledge/dev/bodo-org/mpich-install' '--with-hwloc=embedded' '--cache-file=/dev/null' '--srcdir=.' 'CC=gcc' 'CFLAGS= -fno-common -O2' 'LDFLAGS= -framework Foundation -framework IOKit' 'LIBS= ' 'CPPFLAGS= -DNETMOD_INLINE=__netmod_inline_ofi__ -DPOSIX_EAGER_INLINE=__posix_eager_inline_iqueue__ -I/Users/scottroutledge/dev/bodo-org/mpich-4.3.2/src/mpl/include -I/Users/scottroutledge/dev/bodo-org/mpich-4.3.2/modules/json-c -I/Users/scottroutledge/dev/bodo-org/mpich-4.3.2/modules/hwloc/include -D_REENTRANT -I/Users/scottroutledge/dev/bodo-org/mpich-4.3.2/src/mpi/romio/include -I/Users/scottroutledge/dev/bodo-org/mpich-4.3.2/src/pmi/include -I/Users/scottroutledge/dev/bodo-org/mpich-4.3.2/modules/yaksa/src/frontend/include -I/Users/scottroutledge/dev/bodo-org/mpich-4.3.2/modules/libfabric/include'
    Process Manager:                         pmi
    Launchers available:                     ssh rsh fork slurm ll lsf sge manual persist
    Topology libraries available:            hwloc
    Resource management kernels available:   user slurm ll lsf sge pbs cobalt
    Demux engines available:                 poll select
