
SYNCHRONOUS & ASYNCHRONOUS INTER-PROCESS COMMUNICATION USING OPENMPI & UNIX DOMAIN SOCKET

MUHAMMAD HAFIZUL HAZMI WAHAB (184611)
MOHAMAD SYAHMI BIN SAID (184334)
INTRODUCTION

■ Inter-process communication (IPC) has been at the heart of how processes exchange data within a system, and between systems.
■ Without an efficient inter-process communication API, it is hard for processes across a distributed system to implicitly or explicitly share data in order to complete and coordinate tasks.
■ This is very important, as a distributed system should operate at a level of transparency where it appears as a single system to end users.
■ This sparked the idea for our mini project, in which we develop a few small programs to evaluate the inter-process communication RTT & OWD of one of the IPC facilities in UNIX-based systems, the UNIX Domain Socket (Socket API), against a well-known parallel programming API, OpenMPI.
PROBLEM STATEMENT

■ We were curious whether the different semantics of UNIX Domain Sockets and OpenMPI have a major effect on inter-process communication RTT and OWD.
■ We were not sure whether the size of the message in an asynchronous communication event in MPI affects the communication RTT, since the call returns immediately.
OBJECTIVES

■ To find out whether the different semantics used by the UNIX Socket API and OpenMPI have an effect on inter-process communication RTT & OWD.
■ To find out whether the size of the message has an effect on synchronous and asynchronous inter-process communication events.
PROJECT SCOPE

■ We presumed that asynchronous communication routines complete a communication event faster.
■ The asynchronous communication evaluation is only implemented in the OpenMPI program due to time constraints.
■ The timers used to calculate RTT and OWD have to come from the same library, "#include <time.h>", in order for us to obtain consistent results.
■ Instead of using OpenMPI's provided "double time_start = MPI_Wtime();", both the OpenMPI program and the UNIX Domain Socket program use "clock_t start = clock();". Refer to Figure 3, where the timer is implemented in the Socket program.
■ The size of the messages sent and received is determined by the integer array length. For example, an integer array of length 10,000 = 10,000 * 4 bytes = 40,000 bytes = 40 KB = 0.04 MB.
■ There should be at most one operation between communication routines, to get results as accurate as possible.
LITERATURE REVIEW

■ Figure 4 shows the round-trip latency (s) benchmarked by Qureshi & Rashid (2005).
■ Figure 5 shows the transfer rate (MB/s) against data size (MB) of different IPCs.
METHODOLOGY
Timer, Buffer & Array Initialization, RTT, OWD, Accuracy Test,
Results & Analysis, Discussion
TIMER
■ Both programs use the same standard C timer.
■ This is to avoid inconsistency, where different time functions might take a different number of CPU clock cycles to finish.
■ The timer yields a decimal value (double floating point), as the clock difference is divided by the macro "(double)CLOCKS_PER_SEC".

clock_t start, end;

start = clock();
/*
    Communication..
*/
end = clock();
double elapsed = (end - start) / (double)CLOCKS_PER_SEC;
BUFFER & ARRAY INITIALIZATION
■ Buffers for both programs are allocated in heap space, instead of stack space, in order to avoid overflowing the stack when the array size exceeds a certain amount.
■ Consequently, our programs, which previously could barely transmit 4 MB, managed to jump to message sizes hundreds of MB larger.
■ Arrays are initialized using the function below.
void Array_Init(int *Array_Buff, int ARRAY_SIZE)
{
    /* Initializing array with random
       integers with 999999 as maximum */
    unsigned int seed = 184611;
    srand(seed);
    for(int i = 0; i < ARRAY_SIZE; i++)
    {
        *(Array_Buff++) = rand() % 999999;
    }
}

int *Array_Buff = malloc(sizeof(int) * array_len);
/*
    Allocating on Heap Space
    ...
*/
free(Array_Buff);
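■ To show how these pieces fit together, below is a minimal, hypothetical driver; the names Array_Init, Array_Buff, and array_len follow the snippets above, and the length 2,500,000 corresponds to the ~10 MB case used in our results. This is a sketch, not our exact benchmark program.

#include <stdio.h>
#include <stdlib.h>

void Array_Init(int *Array_Buff, int ARRAY_SIZE);  /* defined above */

int main(void)
{
    int array_len = 2500000;  /* 2,500,000 ints * 4 bytes = ~10 MB */

    /* Allocate on heap space to avoid overflowing the stack */
    int *Array_Buff = malloc(sizeof(int) * array_len);
    if(Array_Buff == NULL)
    {
        perror("malloc");
        return EXIT_FAILURE;
    }

    Array_Init(Array_Buff, array_len);  /* fill with random integers */

    /* ... communication and timing happen here ... */

    free(Array_Buff);
    return 0;
}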
RTT (Round-trip time)
■ Round-trip time is the length of time it takes for a message to be sent, plus the time needed for the message to be processed at the receiving side, plus the time needed to send the message back to the sending side.
■ A timer is set to start before "send(), MPI_Ssend()" is invoked, and stopped when the same message is received back at "recv(), MPI_Recv()".

if((rv = send(…)) < 0){
    exit(EXIT_FAILURE);
}
else{
    if((rv = recv(…)) < 0){
        exit(EXIT_FAILURE);
    }
    else{
        printf("Completed %d receive bytes", rv);
    }
}
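■ For comparison, below is a minimal sketch of the equivalent RTT timing on the OpenMPI side, assuming (as in the earlier snippets) an int buffer Array_Buff of length array_len and a peer process at rank 1 that echoes the message back; it is a sketch under those assumptions, not our exact program.

#include <time.h>
#include <mpi.h>

clock_t start = clock();
/* Synchronous send: blocks until the receiver has begun receiving */
MPI_Ssend(Array_Buff, array_len, MPI_INT, 1, 0, MPI_COMM_WORLD);
/* Receive the same message echoed back by the peer */
MPI_Recv(Array_Buff, array_len, MPI_INT, 1, 0, MPI_COMM_WORLD,
         MPI_STATUS_IGNORE);
clock_t end = clock();
double rtt = (end - start) / (double)CLOCKS_PER_SEC;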
OWD (One-way delay)

■ OWD only takes into account the length of time for a message to be sent from source to destination.
■ In our mini project we divide RTT by 2 to get an approximation of OWD (OWD ≈ RTT / 2), as we assume that the forward and backward paths are the same in terms of speed, congestion, etc.
ACCURACY TEST
■ To test the accuracy of the received message against the one that was sent to the receiving site, we made a function that writes the received message to an output file, "xxx_recv_arr.txt".
■ By comparing the content of the message sent and the message received back at the sending site, we can see whether the contents are the same.
■ To achieve this, we have to reset the memory buffer at the sending site once the message is sent, to make sure that a later received message does not simply contain the content sent before.
void Write_to_File(FILE *outfile, int *Array_Buff, int array_len)
{
    for(int i = 0; i < array_len; i++)
    {
        fprintf(outfile, " %d ", *(Array_Buff++));
    }
    printf("[CLIENT PROCESS] File Write Completed.. \n");
}
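■ The buffer reset described above could be as simple as the following sketch; the names again follow the earlier snippets, and the exact reset logic in our programs may differ.

#include <string.h>

/* Clear the send buffer so a later received message cannot be
   mistaken for the content that was just sent */
memset(Array_Buff, 0, sizeof(int) * array_len);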
RESULTS & ANALYSIS

■ The message size acts as the test sample: 10 MB, 20 MB, 40 MB, 80 MB, 160 MB, and 320 MB; these message sizes are the same for both programs.
■ For an integer array to reach sizes of 10 MB, 20 MB, 40 MB, 80 MB, 160 MB, and 320 MB, it requires array lengths of about 2,500,000, 5,000,000, 10,000,000, 20,000,000, 40,000,000, and 80,000,000 respectively (e.g. 2,500,000 * 4 bytes = 10,000,000 bytes ≈ 10 MB).
RESULTS FOR SYNCHRONOUS COMMUNICATION

Message Size (MB)   Integer Array Length   RTT (s), Socket API   RTT (s), OpenMPI   OWD (s), Socket API   OWD (s), OpenMPI
10                  2,500,000              0.005141              0.005665           0.002571              0.002833
20                  5,000,000              0.010804              0.011267           0.005402              0.005634
40                  10,000,000             0.024944              0.025387           0.012472              0.012694
80                  20,000,000             0.032137              0.058114           0.016069              0.029057
160                 40,000,000             0.078453              0.122296           0.039227              0.061148
320                 80,000,000             0.181116              0.266867           0.090558              0.133434

■ Table 1 shows the results of the synchronous programs.
■ Figure 7 shows an output sample of the OpenMPI program.
■ Graph 1 shows the comparison of RTT between the Socket API implementation using UNIX Domain Sockets vs. OpenMPI.
■ Graph 2 shows the comparison of OWD/end-to-end delay between the Socket API implementation using UNIX Domain Sockets vs. OpenMPI.
RESULTS FOR ASYNCHRONOUS COMMUNICATION

Message Size (MB)   Integer Array Length   RTT (s), OpenMPI   OWD (s), OpenMPI
10                  2,500,000              0.000249           0.000125
20                  5,000,000              0.000368           0.000184
40                  10,000,000             0.000624           0.000312
80                  20,000,000             0.001341           0.000671
160                 40,000,000             0.003097           0.001549
320                 80,000,000             0.005341           0.002671

■ Table 2 shows the results of running an asynchronous program using OpenMPI, where asynchronous routines have been implemented.
■ Figure 8 shows an output sample of the OpenMPI program using asynchronous communications.
■ Graph 3 shows the round-trip-time delay (s) differences between the Asynchronous OpenMPI implementation and the Synchronous OpenMPI & Synchronous Socket communication routine implementations; a sketch of the asynchronous timing follows below.
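■ To illustrate why these times are so small, below is a minimal sketch of timing a non-blocking send in OpenMPI, assuming (as in the earlier snippets) an int buffer Array_Buff of length array_len, a peer at rank 1, and a timer that brackets only the posting of the request; this is a sketch under those assumptions, not our exact program. MPI_Isend returns as soon as the request is posted, before the transfer completes.

#include <time.h>
#include <mpi.h>

MPI_Request request;

clock_t start = clock();
/* Non-blocking send: posts the request and returns immediately */
MPI_Isend(Array_Buff, array_len, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);
clock_t end = clock();
double elapsed = (end - start) / (double)CLOCKS_PER_SEC;

/* Computation could overlap the transfer here; MPI_Wait blocks
   until the buffer is safe to reuse */
MPI_Wait(&request, MPI_STATUS_IGNORE);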
%Reduction = ((Time_b − Time_a) / Time_b) × 100

Message Size (MB)   %Reduction over Synchronous OpenMPI   %Reduction over Synchronous Socket API
10                  95.604                                95.157
20                  96.734                                96.594
40                  97.542                                97.498
80                  97.692                                95.827
160                 97.468                                96.052
320                 97.997                                97.027

■ Table 3 shows the time reduction of using Asynchronous OpenMPI against the UNIX Domain Socket and Synchronous OpenMPI implementations.
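■ As a worked check of the formula, take the 10 MB row: with Time_b = 0.005665 s (synchronous OpenMPI RTT, Table 1) and Time_a = 0.000249 s (asynchronous RTT, Table 2), %Reduction = (0.005665 − 0.000249) / 0.005665 × 100 ≈ 95.604, matching Table 3; against the Socket API, (0.005141 − 0.000249) / 0.005141 × 100 ≈ 95.157.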
DISCUSSION

■ Based on the results of the synchronous communication in Section 6.6.1, from Figure 9 below we can see a steady rise in RTT for both OpenMPI and the Socket API up to a message size of 40 MB.
■ Once the programs have to transmit 80 MB messages, we can see a high deviation of the OpenMPI program from the steady line, where the Socket API requires 0.032137 seconds to complete the RTT whilst OpenMPI takes 0.058114 seconds.
■ Moreover, the trend continues for the 160 MB and 320 MB message sizes. The only points where the two programs stay consistent with each other are at the 10 MB, 20 MB, and 40 MB message sizes.
■ As the message size increases, the RTT of the OpenMPI program grows more than that of the Socket API program. This shows that even with larger messages, in terms of RTT, the Socket API program remains better, due to its ability to cope with the message sizes.
■ However, the trend lines for both programs rise rapidly once the message size increases from 80 MB to 320 MB, although the trend line for the Socket API is slightly better.
■ Figure 9 shows a snippet from Graph 1.
■ Figure 9 shows a snippet from Graph 1
■ Based on the results obtained in Section 6.6.1, we conclude that OpenMPI takes a longer time transmitting larger messages compared to the Socket API. This is because of OpenMPI's rich set of communication routines, where many parameters are involved in delivering a message compared to the Socket API. For instance, refer to Figure 10, where, to simply send a message in the Socket API and OpenMPI, routines (1) & (2) are called respectively.

■ Figure 10 shows the rich set of routines of OpenMPI in comparison to the Socket API.
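■ As a rough illustration of the parameter difference Figure 10 refers to (the buffer, socket descriptor, and rank values here are our own stand-ins, not taken from the figure):

#include <sys/socket.h>
#include <mpi.h>

/* (1) Socket API: four arguments */
send(sockfd, Array_Buff, sizeof(int) * array_len, 0);

/* (2) OpenMPI: six arguments, adding an explicit datatype,
   destination rank, tag, and communicator */
MPI_Ssend(Array_Buff, array_len, MPI_INT, 1, 0, MPI_COMM_WORLD);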

■ Moreover, based on the results from Section 6.6.2, for the asynchronous communication implemented in OpenMPI, we can see that there are major differences.
■ Refer to Table 3 & Graph 3, where the results and the differences in terms of time reduction are stated.
■ For our programs this might not seem useful, as the processes in our programs do not have any computation to perform; however, for a program that cannot tolerate the CPU idling while waiting for communication events to complete, blocking would severely slow the program down.
■ This is worse for a program that is finely grained in terms of its computation-to-communication ratio.
CONCLUSION

■ In conclusion, we can say that even when the communication is asynchronous, there is still a small increase in time as the message size increases.
■ This makes our earlier presumption wrong: we thought asynchronous communication would return immediately for all message sizes, which is now disproven by the results from Section 6.6.2, specifically Graph 3.
■ Asynchronous communication routines are still affected by the size of the messages they transmit, but not as much as synchronous communication, where the sender has to wait for the buffer to be ready for reuse.
■ The effect of message size on asynchronous communication is relatively small compared to synchronous communication for both OpenMPI and the Socket API.
REFERENCES
■ Barney, B. (2009). Message Passing Interface (MPI). Lawrence Livermore National Laboratory. Retrieved from https://computing.llnl.gov/tutorials/mpi/ on 7 December 2017.
■ Khan, M., & Shah, M. A. (2017, September). Inter-process communication, MPI and MPICH in microkernel environment: A comparative analysis. In 2017 23rd International Conference on Automation and Computing (ICAC) (pp. 1-7). IEEE.
■ Qureshi, K., & Rashid, H. (2005). A performance evaluation of RPC, Java RMI, MPI and PVM. Malaysian Journal of Computer Science, 18(2), 38-44.
■ Silberschatz, A., Galvin, P. B., & Gagne, G. (2014). Operating system concepts essentials. John Wiley & Sons, Inc.
■ Venkataraman, A., & Jagadeesha, K. K. Evaluation of Inter-Process Communication Mechanisms. Architecture, 86, 64. Unpublished article.
■ Wright, K., & Kang, K. (2007). Performance analysis of various mechanisms for inter-process communication. Operating Systems and Networks Lab, Dept. of Computer Science, Binghamton University.
