A Basic Introduction to MPI
MPI is a message-passing library specification proposed as a standard by a
committee of vendors, implementers, and users. It is designed to permit the
development of parallel software libraries.
WHAT IT'S NOT!
- A compiler
- A specific product
The concept of message passing, in which processes communicate with other
processes by sending and receiving messages, is the core of the Message
Passing Interface (MPI).
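As a minimal sketch of this idea (added here for illustration; it must be run with at least two processes), two ranks can exchange a single integer with the point-to-point calls MPI_Send and MPI_Recv:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        value = 42;  /* message payload chosen for the example */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", value);
    }
    MPI_Finalize();
    return 0;
}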
MPI is used heavily in distributed systems; in our project it is the lowest-level
platform on which all message sending is managed. I have been working with it for
some time now, but in practice I still call only a small part of MPI. The following
is a simple application example.
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, myn, i, N = 0;
    double *vector = NULL, *myvec, mysum, total;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* In the root process read the vector length, initialize
       the vector and determine the sub-vector size
       (assumes N is evenly divisible by the number of processes) */
    if (rank == 0) {
        printf("Enter the vector length : ");
        scanf("%d", &N);
        vector = (double *)malloc(sizeof(double) * N);
        for (i = 0; i < N; i++)
            vector[i] = 1.0;
        myn = N / size;
    }

    /* Broadcast the local vector size to every process */
    MPI_Bcast(&myn, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Allocate the local vector in each process */
    myvec = (double *)malloc(sizeof(double) * myn);

    /* Scatter the vector to all the processes */
    MPI_Scatter(vector, myn, MPI_DOUBLE, myvec, myn, MPI_DOUBLE,
                0, MPI_COMM_WORLD);

    /* Find the sum of all the elements of the local vector */
    for (i = 0, mysum = 0; i < myn; i++)
        mysum += myvec[i];

    /* Find the global sum across all processes */
    MPI_Allreduce(&mysum, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    /* Multiply the local part of the vector by the global sum */
    for (i = 0; i < myn; i++)
        myvec[i] *= total;

    /* Gather the local vectors back into the root process */
    MPI_Gather(myvec, myn, MPI_DOUBLE, vector, myn, MPI_DOUBLE,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        for (i = 0; i < N; i++)
            printf("[%d] %f\n", rank, vector[i]);

    free(myvec);
    if (rank == 0)
        free(vector);
    MPI_Finalize();
    return 0;
}
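To try the example, build it with the MPI compiler wrapper and launch it through the MPI runtime, for instance mpicc vecsum.c -o vecsum followed by mpirun -np 4 ./vecsum (the file name here is just illustrative, and the exact launch command varies by MPI implementation). Note that the program assumes the vector length is evenly divisible by the number of processes.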
What is OpenMP
OpenMP is a specification for a set of compiler directives, library routines, and
environment variables that are used to specify shared-memory parallelism.
It supports Fortran (77, 90, and 95), C, and C++.
MP = Multi Processing
OpenMP stands for Open specifications for Multi Processing, developed through
collaborative work with interested parties from the hardware and software
industry, government, and academia.
What OpenMP isn't:
- A specific language or compiler
- Meant for distributed-memory parallel systems (without help)
- Implemented the same by every vendor
- Guaranteed to make the most efficient use of shared memory
PARALLEL Region Construct
Specifies a block of code that will be executed by multiple threads.
This is the fundamental OpenMP parallel construct.
#pragma omp parallel [clause ...] newline
    if (scalar_expression)
    private (list)
    shared (list)
    default (shared | none)
    firstprivate (list)
    reduction (operator: list)
    copyin (list)
{
    structured code block
}
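For instance, here is a minimal sketch (added for illustration, not from the original notes) that opens a parallel region with the private and reduction clauses; each thread executes the structured block once. With GCC it can be built using the -fopenmp flag:

#include <stdio.h>
#include <omp.h>

int main(void)
{
    int i, sum = 0;

    /* Every thread runs this block; i is private to each thread, and
       the per-thread values of sum are combined with + at the end */
    #pragma omp parallel private(i) reduction(+:sum)
    {
        i = omp_get_thread_num();  /* this thread's id */
        sum += i;
        printf("hello from thread %d\n", i);
    }
    printf("sum of thread ids = %d\n", sum);
    return 0;
}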
Work-Sharing Constructs
The following directives are designed specifically for distributing the
execution of the enclosed code among the members of the thread team that
encounters them:
- FOR
- SECTIONS
- SINGLE
SECTIONS and SINGLE
SECTIONS divides the enclosed code into separate sections, each of which is
executed once by one thread of the team.
SINGLE specifies that only one thread of the team is to execute the enclosed
block of code.
These directives can also be combined with the PARALLEL directive; a sketch
using all three work-sharing constructs follows.
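As a minimal combined sketch (added for illustration, not from the original notes), the fragment below uses for, sections, and single inside one parallel region:

#include <stdio.h>
#include <omp.h>

int main(void)
{
    int i, a[8];

    #pragma omp parallel
    {
        /* for: the loop iterations are divided among the threads */
        #pragma omp for
        for (i = 0; i < 8; i++)
            a[i] = i * i;

        /* sections: each section is executed once, by one thread */
        #pragma omp sections
        {
            #pragma omp section
            printf("section 1 run by thread %d\n", omp_get_thread_num());
            #pragma omp section
            printf("section 2 run by thread %d\n", omp_get_thread_num());
        }

        /* single: exactly one thread executes this block */
        #pragma omp single
        printf("single block run by thread %d\n", omp_get_thread_num());
    }

    for (i = 0; i < 8; i++)
        printf("a[%d] = %d\n", i, a[i]);
    return 0;
}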
For examples and further reading:
www.openmp.org
www.llnl.gov/computing/tutorials/openMP
www.openmp.org/presentations/sc99/sc99_tutorial_files/frame.htm