■ Inter-process communication (IPC) is at the heart of how processes exchange data, both within a single system and between systems across a network.
■ Without an efficient inter-process communication API, it is hard for processes across a distributed system to implicitly or explicitly share data in order to complete and coordinate their tasks.
■ This is very important, as a distributed system should operate at a level of transparency high enough that it appears as a single system to end users.
■ This sparked the idea for our mini project, in which we develop a few small programs to evaluate the inter-process communication RTT and OWD of one of the IPC facilities in UNIX-based systems, the UNIX domain socket (Socket API), against a well-known parallel programming API, OpenMPI.
PROBLEM STATEMENT
■ We were curious whether the different semantics of UNIX domain sockets and OpenMPI have a major effect on inter-process communication RTT and OWD.
■ We were also unsure whether the message size affects the communication RTT of an asynchronous communication event in MPI, since an asynchronous send returns immediately.
OBJECTIVES
■ To find out whether the different semantics used by the UNIX Socket API and OpenMPI have an effect on inter-process communication RTT and OWD.
■ To find out whether the size of the message has an effect on synchronous and asynchronous inter-process communication events.
PROJECT SCOPE
Communication..
start = clock();
/* ... communication takes place here ... */
end = clock();
double elapsed = (end - start) / (double)CLOCKS_PER_SEC;
BUFFER & ARRAY INITIALIZATION
■ Buffers for both programs are defined in Heap Space, instead of Stack Space, in
order to avoid stack overflowing when the number of array size exceeds certain
amount.
■ Consequently our program that can barely transmit 4MB managed to jump to
hundreds more message size in MB.
■ Arrays are initialized using the function below; the buffer itself is allocated in heap space and freed after use.
void Array_Init(int *Array_Buff, int ARRAY_SIZE)
{
    /* Initializing the array with random
       integers, with 999999 as maximum */
    unsigned int seed = 184611;
    srand(seed);
    for(int i = 0; i < ARRAY_SIZE; i++)
    {
        *(Array_Buff++) = rand() % 999999;
    }
}

/* Allocating on heap space */
int *Array_Buff = malloc(sizeof(int) * array_len);
...
free(Array_Buff);
RTT (Round-trip time)
■ Round-trip time is the length of time it takes for a message to be sent, plus the time needed for the message to be processed at the receiving side, plus the time needed to send the message back to the sending side.
■ A timer is started before “send() / MPI_Ssend()” is invoked, and stopped when the same message is received back at “recv() / MPI_Recv()”.
■ OWD only accounts for the time needed for the message to travel from source to destination.
■ In our mini project we divide RTT by 2 to get an approximation of OWD, as we assume that the forward and backward paths are the same in terms of speed, congestion, etc.
ACCURACY TEST
■ To test that the message received at the receiving site matches the one that was sent, we wrote a function that writes the received message to an output file, “xxx_recv_arr.txt”.
■ By comparing the content of the message sent with the one received back at the sending site, we can see whether the contents are the same.
■ To achieve this, we reset the memory buffer at the sending site once the message has been sent, to make sure that a future received message cannot simply be the same content left over from before.
void Write_to_File(FILE *outfile, int *Array_Buff, int array_len)
{
for(int i = 0; i<array_len; i++)
{
fprintf(outfile, " %d ", *(Array_Buff++));
}
printf("[CLIENT PROCESS] File Write Completed.. \n");
}
RESULTS & ANALYSIS
■ The message size acts as the test sample: 10 MB, 20 MB, 40 MB, 80 MB, 160 MB, and 320 MB; these message sizes are kept consistent for both programs.
■ For an integer array to reach a size of 10 MB, 20 MB, 40 MB, 80 MB, 160 MB, and 320 MB, it requires array lengths of about 2,500,000, 5,000,000, 10,000,000, 20,000,000, 40,000,000, and 80,000,000 respectively.
RESULTS FOR SYNCHRONOUS COMMUNICATION

Message Size (MB) | Integer Array Length | RTT Socket API (s) | RTT OpenMPI (s) | OWD Socket API (s) | OWD OpenMPI (s)
10                | 2,500,000            | 0.005141           | 0.005665        | 0.002571           | 0.002833
20                | 5,000,000            | 0.010804           | 0.011267        | 0.005402           | 0.005634
40                | 10,000,000           | 0.024944           | 0.025387        | 0.012472           | 0.012694
80                | 20,000,000           | 0.032137           | 0.058114        | 0.016069           | 0.029057
160               | 40,000,000           | 0.078453           | 0.122296        | 0.039227           | 0.061148
320               | 80,000,000           | 0.181116           | 0.266867        | 0.090558           | 0.133434
■ Table 3 shows the time reduction from using asynchronous OpenMPI against the UNIX domain socket and synchronous OpenMPI.
DISCUSSION
■ Figure 10 shows the rich set of routines provided by OpenMPI in comparison to the Socket API.
■ Moreover, based on the results from Section 6.6.2 for the asynchronous communication implemented in OpenMPI, we can see that there are major differences.
■ Refer to Table 3 and Graph 3, where the results and the differences in terms of time reduction are stated.
■ For our programs this might not seem useful, as the processes in our programs have no computation to perform; however, for a program that cannot tolerate the CPU sitting idle waiting for communication events to complete, synchronous communication will severely slow the program down.
■ This is even worse for a program that is finely grained in terms of its computation-to-communication ratio.
CONCLUSION