I have a problem with MPI_Isend and MPI_Irecv: the receive vectors never arrive correctly. The code is written in Fortran.
Every process has a number of contact processes to which I want to send values. The values I want to send are part of 4 vectors that belong to a type called variables in every process.
Here is the code I use:
program isend_test
use mpi
real,dimension(:,:,:),allocatable :: receivedValues
real,dimension(:,:),allocatable :: sendReals
integer,dimension(:,:),allocatable :: requestSend
integer,dimension(:,:),allocatable :: requestReceive
integer,dimension(:),allocatable :: neighbours
integer,dimension(mpi_status_size) :: status
integer :: ierr,currentNeighbour,k,me,nTasks,nValues,nNeighbours,addedNeighbours
call MPI_init(ierr)
call MPI_COMM_RANK(MPI_COMM_WORLD,me,ierr)
call MPI_COMM_SIZE(MPI_COMM_WORLD,nTasks,ierr)
nNeighbours = 2
! Only 3 values for each variable to keep it simple
nValues = 3
allocate(receivedValues(nNeighbours,4,nValues))
allocate(sendReals(4,nValues))
allocate(requestSend(4,nNeighbours))
allocate(requestReceive(4,nNeighbours))
allocate(neighbours(2))
receivedValues = -9999
! Initializing neighbours - Every process is adjacent to every other process in this example
addedNeighbours = 0
do j = 0,2
if (j == me) then
cycle
endif
addedNeighbours = addedNeighbours + 1
neighbours(addedNeighbours) = j
enddo
! fill in some values to send
do j = 1,4
do i=1,nValues
sendReals(j,i) = j + 10*me + 100*i
enddo
enddo
do j = 1,4
do i = 1,nNeighbours
call mpi_isend(sendReals(j,:),nValues,mpi_real,neighbours(i),j,MPI_COMM_WORLD,requestSend(j,i),ierr)
call mpi_irecv(receivedValues(i,j,:),nValues,mpi_real,neighbours(i),j,MPI_COMM_WORLD,requestReceive(j,i),ierr)
enddo
enddo
do j = 1,4
do i = 1,nNeighbours
call mpi_wait(requestSend(j,i),status,ierr)
call mpi_wait(requestReceive(j,i),status,ierr)
enddo
enddo
write(*,*)receivedValues
call MPI_finalize(ierr)
end
I know the datatypes are correct (they work with MPI_Send and MPI_Recv), and the matching of neighbours and tags is correct too, because the code runs through without errors. However, if I set receivedValues = -9999 before the communication, the values are never changed.
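For comparison, the blocking version that does work for me looks roughly like this sketch (same buffers and tags as in the code above; ordering the send/recv pairs by rank to avoid deadlock is an assumption in this sketch, not necessarily exactly what my real code does):

```fortran
! Blocking exchange sketch: replaces the isend/irecv/wait part above.
! Each pair of processes agrees on who sends first, based on rank order,
! so the blocking calls cannot deadlock.
do j = 1,4
    do i = 1,nNeighbours
        if (me < neighbours(i)) then
            call mpi_send(sendReals(j,:),nValues,mpi_real,neighbours(i),j, &
                          MPI_COMM_WORLD,ierr)
            call mpi_recv(receivedValues(i,j,:),nValues,mpi_real,neighbours(i),j, &
                          MPI_COMM_WORLD,status,ierr)
        else
            call mpi_recv(receivedValues(i,j,:),nValues,mpi_real,neighbours(i),j, &
                          MPI_COMM_WORLD,status,ierr)
            call mpi_send(sendReals(j,:),nValues,mpi_real,neighbours(i),j, &
                          MPI_COMM_WORLD,ierr)
        endif
    enddo
enddo
```

With this blocking version, receivedValues ends up with the expected values on every process, so the buffers and datatypes themselves seem fine.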
I know the code could be written more efficiently, but I have changed a lot of things trying to find the error, without success... Does anyone have an idea? It is probably something with the buffers, I just can't find it...
By the way: sending and receiving sendReals(j,1) and receivedValues(i,j,1) doesn't work either...