I ran into a problem when sending very large messages with MPI_Send: there are several processes, and the total number of ints we need to transfer is 2^25. I tested with a size of 1000 and my code works fine, but when I set it to the size my professor requires, it hangs for a long time and then returns some messages like the following:
2 more processes have sent help message help-mpi-btl-base.txt / btl:no-nics
Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
mpiexec noticed that process rank 0 with PID 0 on node srv-p22-13 exited on signal 24 (CPU time limit exceeded).
I put a `cout` after every line of code, and I'm sure it gets stuck right before the MPI_Send line once the size of Si exceeds 20,000,000. I'm not sure whether that is the cause. I've read that the maximum count for MPI_Send is 2^32-1, which is much larger than 2^25, so I'm confused.
Here is the main part of my code:
//This is the send part
for(int i = 0; i < 5; i++){
    if(i != my_rank){ //my_rank is from MPI_Comm_rank(MPI_COMM_WORLD, &my_rank)
        int n = A.size(); //A is a vector of int
        int* Si = new int[n]; //I want to convert the vector to an int array
        std::copy(A.begin(), A.end(), Si);
        //pass Si itself (not &Si, which is the address of the pointer), with MPI_INT as the datatype
        MPI_Send(Si, n, MPI_INT, i, my_rank, MPI_COMM_WORLD); //**The code gets stuck here and says CPU time limit exceeded
        delete[] Si;
    }
}
MPI_Barrier(MPI_COMM_WORLD); //I want all the processes to finish the send part, then start receiving and saving into a vector
//This is the receive part
for(int i = 0; i < 5; i++){
    if(i != my_rank){
        MPI_Status status;
        MPI_Probe(i, i, MPI_COMM_WORLD, &status); //sender i used its own rank as the tag
        int rn = 0;
        MPI_Get_count(&status, MPI_INT, &rn);
        int* Ri = new int[rn];
        MPI_Recv(Ri, rn, MPI_INT, i, i, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        //Save the received elements into vector A
        for(int j = 0; j < rn; j++){
            A.push_back(Ri[j]);
        }
        delete[] Ri;
    }
}