User's guide

Running MPC

  1. Source the mpcvars script (located inside the bin/ directory of the MPC installation) to update environment variables (e.g., PATH and LD_LIBRARY_PATH). For csh and tcsh:

    source $HOME/mpc-install/bin/mpcvars.csh
    

    For bash and sh:

    source $HOME/mpc-install/bin/mpcvars.sh
    

    Check that everything went well at this point by running

    which mpcrun
    which mpc_cc
    

     

  2. To compile your first MPC program, you may execute the mpc_cc compiler:

    mpc_cc mpc/MPC_Tests/parallel/MPC_Message_Passing/hello_world.c -o hello_world
    

     

    This command uses the default patched GCC to compile the code. If you want to use your favorite compiler instead (losing some features such as OpenMP support and global-variable removal), you may use the mpc_cflags and mpc_ldflags commands:

    $CC MPC_Tests/parallel/MPC_Message_Passing/hello_world.c -o hello_world `mpc_cflags` `mpc_ldflags`
    

     

  3. To execute your MPC program, use the mpcrun command:
    mpcrun -n=4 hello_world
    

 

The mpcrun script drives the launch of MPC programs with different types of parallelism. Its usage is as follows; an example invocation is given after the option list:

Usage: mpcrun [option] [--] binary [user args]

Information:
    --help,-h Display this help
    --show, Display command line
    --version-details, Print version of each module used
    --report, Print report
    --verbose=n,-v,-vv,-vvv Verbose mode (level 1 to 3)
    --verbose Verbose level 1

Topology:
    --task-nb=n,-n=n Total number of tasks
    --process-nb=n,-p=n Total number of processes
    --cpu-nb=n,-c=n Number of cpus per process
    --node-nb=n,-N=n Total number of nodes
    --enable-smt Enable SMT capabilities (disabled by default)
    --disable-share-node Do not restrict on CPU number to share node

Multithreading:
    --multithreading=n,-m=n Define multithreading mode
        modes: pthread ethread_mxn pthread_ng ethread_mxn_ng ethread ethread_ng

Network:
    --network=n,-net=n Define Network mode (TCP + SHM by default)
        modes:  none ib tcp
        modes (experimental): 

Checkpoint/Restart and Migration:
    --checkpoint Enable checkpoint
    --migration Enable migration
    --restart Enable restart

Launcher:
    --launcher=n,-l=n Define launcher
    --opt=<options> launcher specific options
    --launch_list print available launch methods
    --config=<file> Configuration file to load.
    --profiles=<p1,p2> List of profiles to enable in config.

Debugger:
    --dbg=<debugger_name> to use a debugger
    
Profiling (if compiled with --enable-MPC_Profiler) :
        --profiling=AA,BB,...
        
        example : --profiling=stdout,html,latex
        
        With the following output modules :
                * file : Outputs to file indented profile with time in standard units
                * file-raw : Outputs to file unindented profile in ticks (gnuplot compliant)
                * file-noindent : Same as "text" with no indentation
                * stdout : "text" to stdout
                * stdout-raw : "text_raw" to stdout
                * stdout-noindent : "text_noindent" to stdout
                * latex : Outputs profile to a latex source file
                * html : Outputs profile to an html file
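
For instance, the long-form options listed above can be combined on a single command line. A minimal illustration, assuming the hello_world binary built earlier (adjust the task, process and CPU counts to your machine):

mpcrun --show --task-nb=8 --process-nb=2 --cpu-nb=4 ./hello_world

Here --show displays the generated command line, and the topology options request 8 tasks spread over 2 processes with 4 CPUs per process.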

 

Running MPC with a specific environment

Compiling and running your application using GNU compilers (v4.8)

 

% mpc_cc foo.c -o foo
% mpcrun -n=6 -p=2 --network=tcp ./foo

This executes the program foo with 6 MPI ranks spread across 2 processes (3 MPI ranks, running as threads, within each process).

Compiling and running your application using Intel v14+ compilers

 

% module load intel/14.0   (or a similar system setting to add the Intel compilers to your path)
% mpc_icc foo.c -o foo
% mpcrun -n=6 -p=2 --network=tcp ./foo

You may use --network=ib (default) for InfiniBand.

Launcher options

 

Options passed to the launcher should be compatible with the launch mode chosen at configure time. For more information, read the documentation of mpiexec (for Hydra) or srun (for SLURM). Both cases are illustrated after the list below.

  • Hydra: If MPC is configured with Hydra, mpcrun should be used with the -l=mpiexec argument. Note that this launcher is used by default if none is specified.

  • SLURM: If MPC is configured with SLURM, mpcrun should be used with the -l=srun argument.
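
For example, the launcher can be selected explicitly on the command line (a sketch; foo is a placeholder binary built with mpc_cc):

mpcrun -l=mpiexec -n=4 -p=2 -net=tcp ./foo
mpcrun -l=srun    -n=4 -p=2 -net=tcp ./foo

The first line goes through Hydra, the second through SLURM; the remaining options keep their usual meaning.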

Mono-process job

 

In order to run an MPC job in a single process with Hydra, you should use one of the following methods (depending on the thread type you want to use).

mpcrun -m=ethread      -n=4 hello_world
mpcrun -m=ethread_mxn  -n=4 hello_world
mpcrun -m=pthread      -n=4 hello_world

See Supported APIs for details on the ’-m’ option.

To use one of the above methods with SLURM, just add -l=srun to the command line.

Multi-process job on a single node

 

In order to run an MPC job with Hydra in a two-process single-node manner with the shared memory module enabled (SHM), you should use one of the following methods (depending on the thread type you want to use). Note that on a single node, even if the TCP module is explicitly used, MPC automatically uses the SHM module for all process communications.

mpcrun -m=ethread      -n=4 -p=2 -net=tcp hello_world
mpcrun -m=ethread_mxn  -n=4 -p=2 -net=tcp hello_world
mpcrun -m=pthread      -n=4 -p=2 -net=tcp hello_world

 

To use one of the above methods with SLURM, just add -l=srun to the command line.

Of course, this mode supports both MPI and OpenMP standards, enabling the use of hybrid programming.

There are different implementations of inter-process communication. A call to mpcrun --help details all the available implementations.

Multi-process job on multiple nodes

 

In order to run an MPC job on two nodes with eight processes communicating with TCP, you should use one of the following methods (depending on the thread type you want to use). Note that on multiple nodes, MPC automatically switches to the MPC SHared Memory module (SHM) when a communication between processes on the same node occurs. This behavior is available with all inter-process communication modules (TCP included).

mpcrun -m=ethread      -n=8 -p=8 -net=tcp -N=2 hello_world
mpcrun -m=ethread_mxn  -n=8 -p=8 -net=tcp -N=2 hello_world
mpcrun -m=pthread      -n=8 -p=8 -net=tcp -N=2 hello_world

 

Of course, this mode supports both MPI and OpenMP standards, enabling the use of hybrid programming. There are different implementations of inter-process communication and different launch methods. A call to mpcrun --help details all of them.

Launch with Hydra

 

In order to execute an MPC job on multiple nodes using Hydra, you need to provide the list of nodes in a host file and set the HYDRA_HOST_FILE environment variable to the path of that file, or pass the host file as a parameter of the launcher. Both approaches are shown below:

mpcrun -m=ethread -n=8 -p=8 -net=tcp -N=2 --opt='-f hosts' hello_world
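
Equivalently, the host file can be supplied through the HYDRA_HOST_FILE environment variable instead of the --opt argument. A minimal sketch in bash, where node01 and node02 are placeholder hostnames:

cat > hosts << EOF
node01
node02
EOF
export HYDRA_HOST_FILE=$PWD/hosts
mpcrun -m=ethread -n=8 -p=8 -net=tcp -N=2 hello_world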

 

See Using the Hydra Process Manager for more information about the Hydra host file.