Computer Communications
journal homepage: www.elsevier.com/locate/comcom
Keywords: Internet of Things; Parallel broadcasting; Communication protocols; Permutation routing; Collision-free; Energy-efficiency

Abstract. The Internet of Things (IoT) is a wireless network composed of a variety of heterogeneous objects such as Connected Wearable Devices (sensors, smartwatches, smartphones, PDAs, ...), Connected Cars, Connected Homes, etc. These things generally use wireless communication to interact and cooperate with each other to reach common goals. IoT(T, n) is a network of things composed of T things with n items (packets) distributed randomly over it. The aim of permutation routing is to route to each thing its own items, so that it can accomplish its task. In this paper, we propose two agent-based broadcast protocols for mobile IoT that use a limited number of communication channels. The main idea is to partition the things into groups, where an agent in each group manages a group of things. This partitioning is based on the memory capacities of these heterogeneous nodes. The first protocol uses a few communication channels to perform parallel broadcasting and requires O(n/k) memory space, where k is the number of communication channels. The second protocol uses an optimal amount of memory space for each thing to achieve the permutation routing with parallel broadcasting using fewer channels. We give an estimation of the upper and lower bounds on the number of broadcast rounds in the worst case, and we discuss experimental results.
1. Introduction

The Internet of Things (IoT) consists of a great number of heterogeneous nodes such as Connected Wearable Devices (sensors, MEMS, robots, smartwatches, smartphones, PDAs, ...), Connected Smart Cars, Connected Smart Homes, Connected Smart Cities, and the Industrial Internet. These things are equipped with data processing and communication capabilities which give them the ability of sensing, computation, and wireless communication [1–4]. IoT is an attractive research subject that has received growing attention from the research and engineering communities in recent years. The nodes in an IoT may be mobile or static, deployed in an ad hoc manner in an area of interest. These things are useful in a wide range of applications of our everyday life, including smart energy, smart health, distributed intelligent MEMS, smart buildings, smart transport, smart industry, smart cities, facilitating/conducting urban search and rescue, and tasks in unattended and rough environments [5–9]. Roughly speaking, IoT is making our daily life easier and smarter.

The Internet of Things generally employs a large number of distributed heterogeneous things, which may be miniaturized devices that cooperate and collaborate with each other using wireless communication to achieve common goals and objectives. Each thing has an onboard radio that can be used to receive messages from its neighbors and to send information to them. That is, each thing needs to receive information available in the local memories of other things using routing protocols. We refer the reader to Fig. 1, which depicts a 15-thing IoT. Such technological development has encouraged practitioners to envision aggregating the limited capabilities of the individual things in a large-scale network that may operate unattended [1,5,10,11].

As said before, IoT will occupy a prominent place in our day-to-day life. However, the design of protocols to control these things so that they achieve a common goal is far from a simple task. Indeed, due to resource limitations, a solution for an application in the Internet of Things should take into account the restrained capabilities (limited battery power, processing power and memory storage) of these heterogeneous devices by using as little memory and energy as possible while maximizing the lifetime of the network [12–14]. Furthermore, the number of radio channels is limited. In addition, transmissions through wireless channels can suffer errors due to both channel and interference conditions [15–17,19]. It is well known that, in real applications, communication channels are subject to channel noise and faults, and thus errors may be introduced during transmission from the source to a receiver. Besides, in some cases, some channels are damaged and are no longer used. These facts illuminate the need and the importance
⁎ Corresponding author.
E-mail addresses: hlakhlef@utc.fr (H. Lakhlef), bouabdal@utc.fr (A. Bouabdallah), raynal@irisa.fr (M. Raynal), jbourgeois@femto-st.fr (J. Bourgeois).
https://doi.org/10.1016/j.comcom.2017.10.020
Received 29 September 2016; Received in revised form 19 September 2017; Accepted 31 October 2017
Available online 07 November 2017
0140-3664/ © 2017 Elsevier B.V. All rights reserved.
H. Lakhlef et al. Computer Communications 115 (2018) 51–63
algorithm appeared in [36]. All these approaches assume that the nodes of the network are homogeneous. The solution in [37] presents a randomized algorithm for the same problem in multi-hop networks with high probability. These solutions are designed for radio or sensor networks, and each node within these protocols stores exactly one item or n/p items.

Some solutions for permutation routing in wireless node networks use clustering to divide the network into groups of single-hop networks, where the cluster head routes the items to another cluster. The protocol in [38] is designed for multi-hop homogeneous sensors. It needs (n/p)(HUBmax)(k + 1) + O(HUBmax) + k² + k broadcast rounds, where n is the number of data items stored in the network, p is the number of sensors, HUBmax is the number of sensors in the clique of maximum size, and k is the number of cliques after the first clustering. The drawback of this protocol is that there is a high probability of collision and conflict on the communication channels, because no mechanism to manage the broadcasting has been shown. In [39], a secure permutation routing protocol for multi-hop networks is presented. It uses clustering and secures the procedures of clustering and routing.

Contributions. We consider a wireless network of heterogeneous things with n items (packets) and T things, IoT(T, n) for short. The n items are distributed over the T things, so each thing t has Mt items in its local memory. In this paper, we propose two agent-based, distributed and parallel protocols for permutation routing in the Internet of Things. We partition the things into groups, where in each group there is a set of agents. The role of the agents is to manage groups of things and also to manage the broadcasting on the communication channels, so as to provide protocols that run without collision or conflict on the communication channels. Both protocols aim to make maximal use of the available communication channels.

The aim of the first protocol is to use a few communication channels and perform broadcasting in parallel to optimize the number of broadcast rounds. The main idea of this protocol is to partition the things into k groups, where k is the number of communication channels, which allows using O(n/k) of memory space. In this protocol the grouping is based on the number k. The aim of the second protocol is to use a minimum amount of memory for each thing while broadcasting in parallel remains possible. In this protocol, a thing t that stores Mt items uses O(Mt) of memory space. Contrary to the first protocol, here the partitioning is based on the memory capacities of the nodes in the groups. Therefore, the number of communication channels used is smaller compared to the first protocol.

The proposed permutation routing protocols are distributed: nodes make autonomous decisions without any centralized control. We give an estimation of the upper bound on the number of broadcast rounds in the worst case, together with experimental results. Our solutions are the first to give efficient and collision-free solutions to the permutation routing problem for the Internet of Things.

Outline of the paper. The paper is made up of 5 sections. The rest of the paper is organized as follows: Section 2 presents the model of the network and some definitions. Section 3 presents and discusses the proposed protocols. Section 4 details the simulation results. Finally, our conclusions and suggestions for future work are given in Section 5.

2. Model and definitions

The network considered is single-hop (pairs of things can communicate directly). The IoT includes a large number of mobile thing nodes. The things communicate with each other using bidirectional links. The computation and communication capabilities differ from one thing node to another. Furthermore, the things have different memory capacities. IoT(T, n) is a thing network with n items (information or packets) and T things. Fig. 2 depicts IoT(15, 56). In IoT(T, n), each thing ti has Mi items in its local memory. Each item has a unique destination thing. Time is divided into slots, and all packet transmissions take place at slot boundaries in IoT(T, n). The permutation routing problem is to route the items in such a way that each thing ti receives its Mi items. Fig. 2(a) presents an example where the items are distributed in the network and each thing holds items that must be routed to their destinations. Fig. 2(b) presents the network after the permutation routing, so each node has its information and can proceed to perform its task.

As in [16,17,19], the term broadcast in this paper refers to one-to-many and to point-to-point transmission. The set of all broadcast operations that take place in the same time slot is referred to as a broadcast round.

Let LT = {t1, t2, …, tT} be the list of T things in IoT(T, n) and let L = {M1, M2, …, MT} be the list of the memory spaces that hold the items of the things in the network, where, for each 1 ≤ i ≤ T, Mi is the memory space (number of items) of thing ti. We note:

- MIN(Mi): the lowest memory space for items that a thing has in the network, i.e., the index i with Mj > Mi for all j ≠ i; if Mj = Mi, then i < j.
- MAX(Mi): the largest memory space for items that a thing has in the network, i.e., the index i with Mj < Mi for all j ≠ i; if Mj = Mi, then i < j.

In the presentation of the proposed protocols, we work mainly on the memory spaces for items M1, M2, …, MT; when an action (add into a group, delete from a group, ...) is applied to a given memory space Mi, this means that the action is applied to ti, assuming that there is a function ID(Mi) that gives the thing ti.

3. Proposed protocols

In this section we present and discuss our proposed protocols. The section is divided into two subsections. In the first one, we present an efficient protocol that uses O(n/k) of memory space for each thing; in the second one, we present a protocol that uses O(Mi) for each thing ti. We discuss the advantages of each one.

3.1. Protocol with O(n/k) memory on IoT(T, n)

The aim of this section is to show that the permutation routing on an IoT(T, n, k) of T things, n items (information) and k channels, where each thing has a memory space of O(n/k), is possible if:

k ≤ ⌊ (∑_{i=1}^{T} Mi) / MAX(Mi) ⌋.   (1)

The procedure is to divide the channels among the T things, so as to perform broadcasting in parallel and to use each channel optimally. Clearly, the aim is to distribute the broadcasting over the channels. Contrary to the related works, we can see clearly that this grouping will not depend on the number of things, because the things store different numbers of items. Therefore, this grouping depends on the list L of the memory spaces of the things that hold the items. The procedure is to divide the things into k groups. In each group there are k agents. The role of each agent in a given group is to receive the elements that belong to a given group (one group of the k groups). In the first step each node sends its items to the agents, and in the second step the agents broadcast the items to their final destinations. Note that each agent in each group may be the destination of all things in its group, because the items in group of things G(1) may all have their destinations in group of things G(2). Based on this, we can observe that an agent should have a memory capacity large enough to store a set of elements of a group. Therefore its memory space must be equal to the size of a group. To satisfy this we can choose:

k ≤ ⌊ (∑_{i=1}^{T} Mi) / MAX(Mi) ⌋.   (2)

This condition depends on the memory capacities of the things. With this condition an agent will use in the worst case O(n/k) of memory space, because in each group the overall number of items will be
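Condition (1) fixes the largest number of channels/groups for which the first protocol applies. As a quick illustration, the bound can be computed directly from the list of memory spaces (a sketch in Python; the list `M` below is a made-up example, not taken from the paper's figures):

```python
# Condition (1): k <= floor( sum_i M_i / MAX(M_i) ).
# Intuition: every group must be able to reach at least the size of the
# largest single memory space, so at most this many groups fit.
def max_channels(memory_spaces):
    return sum(memory_spaces) // max(memory_spaces)

M = [8, 7, 5, 4, 4, 2]      # hypothetical memory spaces M_i
k = max_channels(M)          # floor(30 / 8) = 3
```

With this k, each of the k groups can be filled to roughly (∑ Mi)/k items, which is how the O(n/k) per-agent memory bound arises.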
The first step is to partition the network into k groups, G(1), G(2), …, G(k), such that in each group G(j), 0 < j ≤ k, there will be δj things. The size s(j) of a group is defined as the sum of all the Mi, i ∈ {1, 2, …, T}, that a group G(j), 0 < j ≤ k, contains. We use the Grouping Algorithm presented in Fig. 3 to apply the grouping of the things in the network. This algorithm takes as input a list of memory spaces L = {M1, M2, …, MT}, which represents the memory spaces that contain the items, and an integer k, which represents the number of channels and the number of groups. It outputs the groups G(1), G(2), …, G(k).

1. INPUT: A set L and k
2. OUTPUT: A list of groups,
3.           LG ← {G(1), G(2), ..., G(k)}
4. begin
5.   X ← L; b ← 1;
6.   while (b ≤ k) do
7.     G(b) ← MAX(X);
8.     X ← X − G(b);
9.     b ← b + 1
10.  end while;
11.  Ln ← L − {G(1), ..., G(k − 1), G(k)};
12.  LG ← {G(1), G(2), ..., G(k)};
13.  while (Ln ≠ ∅) do
14.    MIN(LG) ← MIN(LG) ∪ MAX(Ln);
15.    Ln ← Ln − {MAX(Ln)}
16.  end while;
17.  Return (LG)
18. end

Fig. 3. Grouping Algorithm.

Recall that K = {m1, m2, …, mk} denotes a set as in Definition 3.1, i.e., such that:

- for all 0 < i ≤ k and 0 < j ≤ T, mi ≥ Mj for Mj ∈ L − K, and
- k ≤ ⌊ (∑_{i=1}^{T} Mi) / m1 ⌋.

When applying the Grouping Algorithm of Fig. 3 on L and k, we call a full iteration an iteration in which one or several integers Mi, 1 ≤ i ≤ T, from a set H ⊆ L are added into each mj, 1 ≤ j ≤ k. IT_{L‖K}(γ) denotes full iteration number γ obtained by applying the grouping algorithm on L and k.

Lemma 3.1. By applying the grouping algorithm, whatever the values Mi, i ∈ {1, 2, …, T}, and k satisfying inequation (1), the minimum size s(j) is always:

s(j) ≥ (MAX(Mi) × (k − 1)) / (2k − 3).   (3)

Proof. IT_{L‖K}(1) is the maximal full iteration that allows having the minimum s(j), because if there is IT_{L‖K}(γ) with γ > 1, this means that one or several Mi, including MAX(Mi), were added to every mi. Therefore, in some previous step we had mi + Ci ≥ MAX(Mi) for all mi, where Ci is the sum of all values from L added to mi. Consequently, IT_{L‖K}(γ), γ > 1, will not give the minimum s(j). To apply IT_{L‖K}(1), the sum ∑_{i=1}^{T} Mi must be as small as possible while satisfying inequation (1). This sum is ∑_{i=1}^{T} Mi = MAX(Mi) × k, because (MAX(Mi) × k) − ϵ does not satisfy inequation (1). No integer will be added to m1, because ∑_{i=2}^{T} Mi = MAX(Mi)(k − 1). Therefore, by applying the grouping algorithm, the sum ∑_{i=1}^{T} Mi = MAX(Mi) × k will give the minimum s(j).

To obtain the minimum, we should find the k-tuple (m1, m2, m3, …, mk) where m2 is the minimum such that, if chosen, no number from L − K will be added to it. That is, the k-tuple allows, following the grouping algorithm, adding the elements of L − K to m3, …, mk without adding any value to m2. Note that m1 is the maximum and it is
invariable, so it is out of the computation. To make the difference between m2 and m3, …, mk maximal after adding the sum, say X, to m3, …, mk, we must have m2 = m3 = … = mk. This means that the sum X to be added to m3, …, mk, divided by m2, must be less than or equal to k − 2, because if X/m2 > k − 2, then, following the grouping algorithm, a positive value will be added to m2. To assemble our conclusions, the condition not to add any element to m2 is:

X/m2 ≤ (k − 2).   (4)

As we can write:

((m1 × k) − (m1 + (k − 1)m2)) / m2 ≤ (k − 2),   (5)

and (5) ⇔ (MAX(Mi) × (k − 1))/(2k − 3) ≤ m2. □

Lemma 3.1 does not always give integer values. To deal with each case and to obtain the exact size, we give the following equations, which can be deduced from inequation (3):

if k is odd:

s(j) ≥ MAX(Mi) − (MAX(Mi)/k × (k − 1)/2), if MAX(Mi) mod k = 0,   (6)

or

s(j) ≥ MAX(Mi) − (⌊MAX(Mi)/k⌋ × (k − 1)/2 + (k − 1)/2), if MAX(Mi) mod k ≥ (k − 1)/2,   (7)

or

s(j) ≥ MAX(Mi) − (⌊MAX(Mi)/k⌋ × (k − 1)/2), if MAX(Mi) mod k < (k − 1)/2;   (8)

if k is even:

s(j) ≥ MAX(Mi) − (MAX(Mi)/(k + 1) × k/2), if MAX(Mi) mod (k + 1) = 0,   (9)

or

s(j) ≥ MAX(Mi) − (⌊MAX(Mi)/(k + 1)⌋ × k/2), if MAX(Mi) mod (k + 1) ≤ k/2,   (10)

or

s(j) ≥ MAX(Mi) − (⌊MAX(Mi)/(k + 1)⌋ × k/2 + 1), if MAX(Mi) mod (k + 1) > k/2.   (11)

Lemma 3.2. Let L = {M1, M2, …, MT} be a set of positive integers and let K = {m1, m2, …, mk} be a set of positive integers as in Definition 3.1, and let mk be the maximum value of k that can be chosen with respect to inequation (1). Then, if we apply the grouping algorithm on L with mk, the largest group, lg, will have a size S(lg) ≤ 2MAX(Mi) − 1.

Proof. From inequation (1), (∑_{i=1}^{T} Mi)/m1 ≥ k. If the chosen k is the maximum possible, then mk = ⌊(∑_{i=1}^{T} Mi)/m1⌋. To keep mk maximal, we may add up to u = MAX(Mi) − 1 to ∑_{i=1}^{T} Mi. If u > MAX(Mi) − 1, the value (∑_{i=1}^{T} Mi + u)/MAX(Mi) ≥ (k + 1); therefore mk would not be the maximum, which is a contradiction. Therefore, we conclude so far that to keep mk maximal we may add up to u = MAX(Mi) − 1 to ∑_{i=1}^{T} Mi.

We obtain the largest group if MAX(Mi) = m1 = m2 = … = mk, because, following the grouping algorithm, the value u will be added to MIN(Mi). Therefore, the largest group will be MAX(Mi) + MAX(Mi) − 1 = 2MAX(Mi) − 1. Consequently, for different values of m1, m2, …, and mk, S(lg) ≤ 2MAX(Mi) − 1. □

Lemma 3.3. Let L = {M1, M2, …, Mm} be a set of positive integers and let K = {m1, m2, …, mk} be a set of positive integers as in Definition 3.1. Then, if we apply the grouping algorithm on L with k < mk, the largest group, lg, will have a size of:

Case 1: S(lg) ≤ (mk/k + 1) × MAX(Mi) − 1, if mk mod k = 0,
or
Case 2: S(lg) ≤ (⌊mk/k⌋ + 1) × MAX(Mi), if mk mod k ≠ 0.

Proof. We first prove Case 1. For mk = k, the size is (1 + 1) × MAX(Mi) − 1; this is true from the previous lemma. From the previous lemma, to keep mk maximal we may add up to u = MAX(Mi) − 1 to ∑_{i=1}^{m} Mi. Let G(j), 1 ≤ j ≤ k, be initialized to 0. Each G(j) will contain one mi and the values to be added to mi. If mk mod k = 0, to each G(j) will be added (mk/k) × MAX(Mi), except one group, to which (mk/k) × MAX(Mi) + MAX(Mi) − 1 will be added. Therefore, in this case the largest group will be of size (mk/k) × MAX(Mi) + MAX(Mi) − 1 = (mk/k + 1) × MAX(Mi) − 1. Consequently, for different values of m1, m2, …, and mk, S(lg) ≤ (mk/k + 1) × MAX(Mi) − 1 if mk mod k = 0.

Case 2 can be observed directly from Case 1. In this case, to each G(j) will be added ((mk − mk mod k)/k) × MAX(Mi), as mk mod k ≠ 0, and there remains (mk mod k) × MAX(Mi). Given that (mk mod k) < k, to each of (mk mod k) of the groups G(j), 1 ≤ j ≤ k, an extra MAX(Mi) will be added, which gives S(lg) ≤ (⌊mk/k⌋ + 1) × MAX(Mi). □

Example

We run the grouping algorithm on the network of Fig. 2(b), assuming the number of channels is 2 (k = 2). We have X = L = {M1, M2, …, M15} (line 5). As k = 2, the algorithm creates two groups, G(1) and G(2) (line 3). The algorithm first adds into G(1) the thing MAX(X), which is the thing t9 that has M9 = 8 (line 7). After, it adds t14 into G(2) (line 7). We now have Ln = L − {t9, t14} (line 11). After, in instruction 14, the algorithm adds into G(2) the maximum in Ln, so G(2) = {t14, t1}; at line 15, Ln = L − {t14, t1, t9}. After, in line 14, we add t4 into group G(1) because |G(1)| < |G(2)|, so G(1) = {t9, t4}. After, we add t10 into group G(2) because |G(2)| < |G(1)| (line 14), so G(2) = {t14, t1, t10}. After, we add t6 into group G(1) because |G(1)| < |G(2)|, so G(1) = {t9, t4, t6}. After, we add t2 into group G(2) because |G(2)| < |G(1)|, so G(2) = {t14, t1, t10, t2}. After, we add t3 into group G(1) because |G(1)| < |G(2)|, so G(1) = {t9, t4, t6, t3}. After, we add t5 into group G(2) because |G(2)| < |G(1)|, so G(2) = {t14, t1, t10, t2, t5}. After, we add t11 into group G(1) because |G(1)| < |G(2)| (line 14), so G(1) = {t9, t4, t6, t3, t11}. After, we add t12 into group G(2) because |G(2)| < |G(1)| (line 14), so G(2) = {t14, t1, t10, t2, t5, t12}. After, we add t13 into group G(1) because |G(1)| < |G(2)| (line 14), so G(1) = {t9, t4, t6, t3, t11, t13}. After, we add t15 into group G(2) because |G(2)| < |G(1)| (line 14), so G(2) = {t14, t1, t10, t2, t5, t12, t15}. After, we add t7 into group G(1) because |G(1)| < |G(2)| (line 14), so G(1) = {t9, t4, t6, t3, t11, t13, t7}. After, we add t8 into group G(2) because |G(2)| < |G(1)| (line 14), so G(2) = {t14, t1, t10, t2, t5, t12, t15, t8}. At this point the list Ln is empty and the algorithm ends, giving two groups of sizes s(1) = 28 and s(2) = 29.
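The Grouping Algorithm of Fig. 3 and the walk-through above admit a compact transcription: seed the k groups with the k largest memory spaces, then repeatedly move the largest remaining space into the group of smallest total size. A sketch in Python (our transcription; the input list is a made-up example, not the network of Fig. 2):

```python
def grouping(L, k):
    """Grouping Algorithm (Fig. 3): returns k groups of memory spaces."""
    remaining = sorted(L, reverse=True)
    groups = [[m] for m in remaining[:k]]   # lines 6-10: G(b) <- MAX(X)
    for m in remaining[k:]:                 # lines 13-16: drain Ln
        smallest = min(groups, key=sum)     # MIN(LG), by group size s(j)
        smallest.append(m)                  # ... union MAX(Ln)
    return groups

G = grouping([8, 7, 5, 4, 4, 2], 2)          # hypothetical M_i values, k = 2
sizes = sorted(sum(g) for g in G)             # -> [14, 16]
```

Note that ties between equal-size groups are broken by list order here; the paper's algorithm leaves tie-breaking unspecified, so traces may differ on ties without affecting the size bounds.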
Algorithm for each list H(j), 1 < j < k:
begin
  for a ∈ {1, ..., k − 1} do
    ta(G(j)) broadcasts |ITEM(a, j, d)| on C(j)
  end for
  a ← 1
  while (a < k) do
    Thing ta(G(j)) broadcasts at time Time(a);
    Thing td copies the item ITEM(s, j, d) in its local memory;
    a ← a + 1
  end while;
end

Fig. 7. Broadcast to the final destinations Algorithm.

Procedure: Time(a)
begin
  X ← 0; h ← 1;
  while (h < a) do
    X ← X + |ITEM(h, j, d)|;
    h ← h + 1
  end while;
  Return(X)
end

Fig. 8. Procedure Time(a).

… for rounds during which, in reality, there were no broadcast operations. Afterwards, each agent tx(G(j)) can know its turn to broadcast by calculating the sum of the numbers of items to be broadcast by agents t1(G(j)), t2(G(j)), …, tx−1(G(j)).

Lemma 3.5. This phase clearly takes k − 1 broadcast rounds, because in each group, k − 1 agents broadcast the number of items on the channels in parallel.

B) Phase 2

As in Phase 1, a channel C(j), 1 ≤ j ≤ k, is assigned to a group H(j) to let the agents broadcast the items to the final destinations. With the sums computed by each agent in Phase 1, each of them broadcasts its items on the channels at the appropriate time. For each item ITEM(s, a, d) broadcast on the channel of group G(a), the final destination thing t = d copies it in its memory.

Lemma 3.6. As the broadcasts take place in parallel for the k groups, the group that contains the most items will be the last group to finish the broadcast. Consequently, from Lemma 3.3, the number of broadcast rounds for Broadcast to the final destinations is:

≤ (mk/k + 1) × MAX(Mi) − 1, if mk mod k = 0,
or
≤ (⌊mk/k⌋ + 1) × MAX(Mi), if mk mod k ≠ 0.

Theorem 3.7. From Lemmas 3.2, 3.3, 3.4, 3.5 and 3.6, the total number of broadcast rounds for this protocol using O(n/k) memory has an upper bound:

≤ 2(mk/k + 1) × MAX(Mi) + k, if mk mod k = 0,
or
≤ 2(⌊mk/k⌋ + 1) × MAX(Mi) + k − 1, if mk mod k ≠ 0.

Theorem 3.8. From Lemmas 3.1, 3.4, 3.5 and 3.6, the total number of broadcast rounds for this protocol using O(n/k) memory has a lower bound:

≥ 2MAX(Mi) − (⌊MAX(Mi)/k⌋ × (k − 1)), if k is odd,
or
≥ 2MAX(Mi) − ((⌊MAX(Mi)/(k + 1)⌋ − 1) × k + 3), if k is even.

3.2. Protocol with O(Mi) memory, 1 ≤ i ≤ T, for each ti in IoT(T, n)

In this section, we present an agent-based broadcasting solution for the permutation routing problem in IoT(T, n) that uses an optimal memory space. In this protocol, each thing with memory space for items Mi uses O(Mi) of memory space.

In this protocol, an agent Xj1,j2 may be composed of a single thing or a set of things. Xj1,j2 is the agent in group G(j1) that acts for group G(j2). Xj1,j2 stores only items ITEM(j1, j2, d) that have G(j1) as the source group and G(j2) as the destination group. In this protocol, each agent Xj1,j2 that has Mi of memory space for items does not store more than 3Mi items.

Let x(j1, j2)1, x(j1, j2)2, …, x(j1, j2)β(j1,j2) be the β(j1, j2) memory spaces for items of the things composing the agent Xj1,j2 if |Xj1,j2| > 1.

The main idea to implement the protocol with O(Mi) of memory space is to set up, in each group, relief agents. These relief agents are called to store the items when an agent becomes full. Each agent Xj1,j2 may have one or several relief agents. Zj1,j2,N, 1 ≤ N ≤ J − 2, is relief agent number N for agent Xj1,j2 in group G(j1), where J is the number of groups after applying the grouping algorithm (Section 3.2.1). Each relief agent Zj1,j2,N is called to store the items that have G(j1) as source group and G(j2) as destination group when the agent Xj1,j2 is full (i.e., it stores 3Mi items). Zj1,j2,N is also composed of a single thing or a set of things.

Let z(j1, j2, N)1, z(j1, j2, N)2, …, z(j1, j2, N)β(j1,j2,N) be the β(j1, j2, N) memory spaces for items of the things composing the relief agent Zj1,j2,N if |Zj1,j2,N| > 1.

Observe that J − 2 relief agents for each agent are necessary and sufficient in each group, because if there are J groups, there are J agents in each group. Let us assume that in a given group G(j), all items have G(1) as destination group. X(j, 1) will be able to store its items and the items of X(j, 2), because following the grouping algorithm X(j, 2) < 3X(j, 1). Notice that there remain J − 2 agents that have G(1) as destination group. Therefore, J − 2 relief agents are necessary and sufficient for the agent X(j, 1).

Definition 3.2. Let S = {int1, int2, …, intm} be a set of m positive integers, where inta ≥ intb for a < b, 0 < a, b ≤ m, and let X1 and X2 be two subsets of S, where X1 ∩ X2 = ∅. We say that sf(X1) ≥ s(X2) if X1 = {e1 ∪ e2 ∪ … ∪ ev}, with v ≤ m, e1 + e2 + … + ev ≥ s(X2) and e1 + e2 + … + ev−1 < s(X2), where s(X2) is the size of X2 (the sum of the elements in X2).

This protocol is composed of three algorithms: Grouping and agents assigning, Broadcast to agents and Broadcast to the final destinations. The details of each algorithm are discussed next.

3.2.1. Grouping and agents assigning

The algorithm for grouping and assigning agents and relief agents is presented in Fig. 9. This algorithm groups the things so as to obtain a protocol that uses an optimal memory space for the permutation routing. Indeed, this protocol requires O(Mi) memory space for each thing ti. The algorithm assumes that there are enough memory spaces for items to have at least two groups (two groups is the minimum number that allows broadcasting in parallel). Fig. 10 depicts the model of grouping and agents assigning, with agents, relief agents and remaining things. The algorithm sets at the beginning the number of groups j = 2 (instruction (2)). From instruction (3) to instruction (12), the algorithm makes the first two groups. From instruction (13) to instruction (58), the algorithm, at each step, checks whether it is possible to add a new group. If the rest of the memory spaces for items is not enough to make a new group, the algorithm proceeds to distribute these memory spaces over the current j groups following the Grouping Algorithm in Fig. 3. This is done
1.  begin algorithm
2.    j ← 2;
3.    if (|L| ≥ 4) then
4.      Add X1,1 = MAX(L) into group G(1);
5.      if (∃X1,2 = x(1, 2)1 ∪ x(1, 2)2 ∪ ... ∪ x(1, 2)β(1,2) in L − X1,1, with sf(X1,2) ≥ s(X1,1)) then
6.        Add X1,2 into G(1); Add X2,1 = MAX(L − {X1,1 ∪ X1,2}) into group G(2)
7.      end if
8.      if (∃X2,2 = x(2, 2)1 ∪ x(2, 2)2 ∪ ... ∪ x(2, 2)β(2,2) in L − {X1,1 ∪ X1,2 ∪ X2,1}, with sf(X2,2) ≥ s(X2,1)) then
9.        Add X2,2 into G(2);
10.       L ← L − {X1,1 ∪ X1,2 ∪ X2,1 ∪ X2,2}
11.     end if
12.   end if
13.   while (L ≠ ∅) do
14.     j ← j + 1; S ← L;
15.     for (y = 1 to j) do
16.       if ((|S| ≥ 2) and (y == 1)) then
17.         Add Xj,y = MAX(S) to G(j);
18.         S ← S − Xj,y
19.       end if
20.       if (∃Xj,y = x(j, y)1 ∪ x(j, y)2 ∪ ... ∪ x(j, y)β(j,y) in S, with sf(Xj,y) ≥ s(Xj,y−1)) then
21.         Add Xj,y into G(j);
22.         S ← S − Xj,y;
23.       else GOTO A;
24.       end if
25.     end for
26.     for (e = 1 to j) do
27.       for (y = 1 to j − 2) do
28.         if (∃Zj,e,y = z(j, e, y)1 ∪ z(j, e, y)2 ∪ ... ∪ z(j, e, y)β(j,e,y) in S, with sf(Zj,e,y) ≥ s(Zj,e,y−1)) then
29.           Add Zj,e,y into G(j);
30.           S ← S − Zj,e,y;
31.         else GOTO A
32.         end if
33.       end for
34.     end for
35.     for (y = 1 to j − 1) do
36.       for (e = 1 to j − 1) do
37.         if (∃Zy,e,j−2 = z(y, e, j − 2)1 ∪ z(y, e, j − 2)2 ∪ ... ∪ z(y, e, j − 2)β(y,e,j−2) in S, with sf(Zy,e,j−2) ≥ s(Zy,e,j−3)) then
38.           Add Zy,e,j−2 into G(y);
39.           S ← S − Zy,e,j−2
40.         else GOTO A
41.         end if
42.       end for
43.     end for
44.     for (y = 1 to j − 1) do
45.       if (∃Xy,j = x(y, j)1 ∪ x(y, j)2 ∪ ... ∪ x(y, j)β(y,j) in S, with sf(Xy,j) ≥ s(Xy,j−1)) then
46.         Add Xy,j into G(y);
47.         for (e = 1 to j − 2) do
48.           if (∃Zj,e,y = z(j, e, y)1 ∪ z(j, e, y)2 ∪ ... ∪ z(j, e, y)β(j,e,y) in L, with sf(Zj,e,y) ≥ s(Zj,e,y−1)) then
49.             Add Zy,j,e into group G(y);
50.             S ← S − Zj,e,y;
51.           else GOTO A
52.           end if
53.         end for
54.       else GOTO A
55.       end if
56.     end for
57.     L ← S
58.   end while;
59.   GOTO B;
60.   A: j ← j − 1;
61.   B: if (L ≠ ∅) then Grouping(L, j, LG = {G(1), G(2), ..., G(j)});
62.  end algorithm
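The tests of the form sf(X) ≥ s(Y) that appear throughout the listing above (Definition 3.2) select the fewest largest remaining memory spaces whose running sum first reaches the size of the previously formed agent. One possible reading in Python (a sketch; the function name `sf_pick` is ours, not the paper's):

```python
def sf_pick(candidates, target):
    """Take elements in decreasing order until their sum first reaches
    `target` (so dropping the last pick would fall below `target`).
    Returns the picked elements, or None if the total is insufficient."""
    picked, total = [], 0
    for e in sorted(candidates, reverse=True):
        if total >= target:
            break
        picked.append(e)
        total += e
    return picked if total >= target else None

# Forming an agent at least as large as a previous agent of size 15:
agent = sf_pick([13, 12, 7, 3], 15)   # -> [13, 12]
```

The minimality condition of Definition 3.2 (e1 + … + ev−1 < s(X2)) is what keeps each agent within a constant factor of the agent it backs up, which the relief-agent capacity argument of Section 3.2 relies on.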
from instruction (60) to instruction (61).

We detail the explanation of the algorithm for grouping and assigning agents and relief agents using the memory spaces for items of 42 things, in IoT(42, 326), presented in Fig. 11.

15, 15, 13, 13, 13, 13, 12, 12, 12, 12, 11, 11, 11,
10, 10, 10, 9, 9, 9, 8, 8, 8, 7, 7, 7, 7, 6, 6, 6, 6, 6,
5, 4, 4, 3, 3, 3, 3, 3, 3, 3, 3

Fig. 11. An example of memory spaces for items of IoT(42, 326).

Example: With the memory spaces for items in Fig. 11, we have:

L = {15, 15, 13, 13, 13, 13, 12, 12, 12, 12, 12, 12, 11, 11, 11, 10} ∪ {10, 10, 9, 9, 9, 8, 8, 8, 7, 7, 7, 7, 6, 6, 6, 6, 6, 5, 4, 4, 3, 3, 3, 3} ∪ {3, 3, 3, 3}.

Applying the algorithm, we add into G(1) the agent X1,1, which is the thing having the maximum memory space, i.e., the first thing ti that has Mi = 15, so G(1) = {15} (instruction (4)). After, we add the agent X1,2 = {15}, because sf(X1,2) ≥ s(X1,1), so G(1) = {15, 15} (instruction (6)). In the same instruction (6), we add the first agent X2,1 = MAX(L − {X1,1 ∪ X1,2}) = 13 into group G(2), so G(2) becomes G(2) = {13}. After, we add the second agent X2,2 = {13} because sf(X2,2) ≥ s(X2,1) (instruction (9)). Therefore, we have G(2) = {13, 13}. The list L becomes L = L − {15, 15, 13, 13} (instruction (10)). So, at this stage, the grouping into two groups is possible, because for each group there are two agents that act for it, one agent in G(1) and the other in G(2). At this stage, relief agents are not needed, because each agent can store all the items that are in its group.

In the next step, we enter the while loop to check whether we may add a new group (instruction (13)). If that is possible, we add it, and in the next iteration of the loop we check whether the grouping into four groups is possible, and so on, as long as the updated list L is not empty. Typically, we can add a new group if there are enough memory spaces for items to save the items for the new group; this is done by setting up agents for the new group in each group.

In instruction (14), we assume that adding a new group G(3) is possible, so we increment the number of groups; therefore, in our example, j = 3. Also, we record the list L in another list S, because if the grouping into j groups is not possible, we need the memory spaces that cannot make a new group in order to distribute them over the current j − 1 groups.

From instruction (15) to instruction (25), we add the agents into the new group G(3). We add 3 (y ≤ 3) agents, in instructions (17) and (21) (to each group an agent that acts for it). At the last iteration of the for loop (instruction (25)), we get G(3) = X3,1 ∪ X3,2 ∪ X3,3 = {13, 13, 12, 12}, where X3,3 = {12, 12}, and we get S = L − {13, 13, 12, 12}. Each memory space added is deleted from the list S (instructions (18) and (22)). If the memory spaces for items cannot make agents in the new group, the algorithm jumps to label A to get out of the loop, because the grouping into j groups is not possible (instruction (23)).

From instruction (26) to instruction (34), we add to each agent in group G(j), j − 2 relief agents. At the end of the for loop (instruction (34)), we get Z3,1,1 = {12, 12}, Z3,2,1 = {12, 12}, Z3,3,1 = {11, 11, 11}; therefore, G(3) = G(3) ∪ Z3,1,1 ∪ Z3,2,1 ∪ Z3,3,1 and S = S − G(3).
H. Lakhlef et al. Computer Communications 115 (2018) 51–63
From instruction (35) to instruction (43), we add to each agent in each group G(u), u < j, of the previous groups a relief agent, which may be called from an agent to store the items that have as destination group G(j). These relief agents are: Z1,1,1 = {10, 10}, Z1,2,1 = {10, 9, 9}, Z2,1,1 = {9, 8}, Z2,2,1 = {8, 8, 7}. From instruction (44) to instruction (56), to each group G(u), u < j, an agent which acts for the group G(j) is added, and for each agent added, we add its relief agent. From instruction (44) to instruction (56), we get:

For G(1): X1,3 = {7, 7, 7}, so S = S − {7, 7, 7}; Z1,3,1 = {6, 6, 6, 6, 6}, so S = S − {6, 6, 6, 6, 6}.

For G(2): X2,3 = {5, 4, 4}, Z2,3,1 = {3, 3, 3, 3, 3, 3, 3, 3}, Z2,2,1 = {5, 5, 5, 4}, Z2,3,1 = {4, 4, 4, 4, 4}.

So the grouping into three groups, G(1), G(2) and G(3), is possible, where there are 3 agents in each group and each agent has 2 relief agents.

Instruction (60) is executed if the grouping into j groups is not possible. In this case, the remaining memory spaces for items, which are in L (recovered in instruction (57)), are distributed on the groups following Algorithm 3 (instruction (61)). RTc,d is the remaining thing number d added into the group c, 1 ≤ c ≤ j, see Fig. 10.

Algorithm for each set H(j), 1 ≤ j ≤ J:
begin
  for i ∈ {1, ..., J} do
    if (|Xi,j| = 1) then
      Broadcast items one by one on C(j);
      Thing td copies ITEM(i, j, d) in its local memory;
    else
      v ← 1
      while (v < β(i, j)) do
        Agent x(i, j)v ∈ Xi,j broadcasts items ITEM(i, j, d) on C(j);
        Thing td copies ITEM(i, j, d) in its local memory;
        v ← v + 1
      end while;
    end if
    t ← 1
    while (t ≤ J − 2) do
      if (|Zi,j,t| = 1) then
        Broadcast items one by one on C(j);
        Thing td copies ITEM(i, j, d) in its local memory;
      else
        for (v = 1; v < β(i, j, t); v++) do
          Agent z(i, j, t)v ∈ Zi,j,t broadcasts items ITEM(i, j, d) on C(j);
          Thing td copies ITEM(i, j, d) in its local memory;
        end for
      end if
      t ← t + 1
    end while;
  end for
end

Fig. 12. Broadcast to final destinations algorithm.

3.2.2. Broadcast to agents

Let J be the final number of groups after applying the grouping algorithm of the previous step. In this step, each thing in each group G(j), 1 ≤ j ≤ J, broadcasts its items in the channel C(j). In each group, the agent Xj,j2 copies from the channel C(j) the items ITEM(j, j2, d) that have G(j) as source group and have as destination the group G(j2). Proceeding sequentially, the things in each group broadcast one by one
on the channel; each agent with |Xj,j2| = 1 copies the items without any problem. However, in each agent that has |Xj,j2| > 1, the things in this agent should know the time at which each thing should start copying the items from the channel C(j).

The mechanism used for agents that have |Xj,j2| > 1 to know the exact moment when each thing x(j, j2)i, 1 ≤ j, j2 ≤ J, 0 ≤ i ≤ β(j, j2), should start copying the items from channel C(j) is the following. Let M(x(j, j2)i) be the memory capacity of x(j, j2)i. Each thing x(j, j2)i ∈ Xj,j2, 1 ≤ j, j2 ≤ J, has a counter c(j, j2)i. For each item ITEM(j, j2, d) broadcast in the channel, x(j, j2)i increments c(j, j2)i. The station x(j, j2)1 is the first to start copying the items. Each thing x(j, j2)i, i ≠ 1, starts copying the items ITEM(j, j2, d) after c(j, j2)i = 3(M(x(j, j2)1) + M(x(j, j2)2) + … + M(x(j, j2)i−1)), with 0 ≤ i ≤ β(j, j2), x(j, j2)i ∈ Xj,j2.

We recall that, for each group G(j), the relief agents Zj,j2,h, 1 ≤ j2 ≤ J, 1 ≤ h ≤ J − 2, take turns copying when the agent Xj,j2 is full. Therefore, each Zj,j2,h should first know when the agent Xj,j2 and the relief agents Zj,j2,h1, 1 ≤ h1 < h, are full. That is, each relief agent Zj,j2,h should know the exact moment when it starts copying, and therefore each z(j, j2, h)i, 0 ≤ i ≤ β(j, j2, h), in Zj,j2,h should know the moment when it should start copying. The mechanism used for this purpose is the following. Each relief agent Zj,j2,h, 1 ≤ h ≤ J − 2, for the agent Xj,j2 has a counter C(j, j2, h). For each item ITEM(j, j2, d) broadcast in the channel C(j), Zj,j2,h increments C(j, j2, h). Each agent Zj,j2,h starts copying after C(j, j2, h) = 3(M(Xj,j2) + M(Zj,j2,1) + M(Zj,j2,2) + … + M(Zj,j2,h−1)). We now give the moment when each thing z(j, j2, h)i ∈ Zj,j2,h starts copying. Each z(j, j2, h)i, 1 ≤ h ≤ J − 2, has a counter c(j, j2, h)i. For each item ITEM(j, j2, d) broadcast in the channel C(j), z(j, j2, h)i increments c(j, j2, h)i. The station z(j, j2, h)1 ∈ Zj,j2,h is the first to start copying the items, at the time C(j, j2, h). Each thing z(j, j2, h)i, i ≠ 1, starts copying after c(j, j2, h)i = C(j, j2, h) + 3(M(z(j, j2, h)1) + M(z(j, j2, h)2) + … + M(z(j, j2, h)i−1)).

3.2.3. Broadcast to final destinations

Using a principle similar to the one in Section 3.1.3, in this step the agents broadcast each item ITEM(j, j2, d) to its final destination td. To implement this step, we now put the agents that store items with the same destination group in a new group.

Let H(j), 1 ≤ j ≤ J, be the group of agents that store items that have as destination the group G(j). Namely, H(j) = {X1,j ∪ Z1,j,1 ∪ … ∪ Z1,j,J−2} ∪ {X2,j ∪ Z2,j,1 ∪ … ∪ Z2,j,J−2} ∪ … ∪ {XJ,j ∪ ZJ,j,1 ∪ … ∪ ZJ,j,J−2} is the set of agents and relief agents that act for group G(j). In this last step, each channel C(j) is assigned to the agents that have items to be broadcast to destinations in group G(j). The principle of this step is to allow each agent in H(j) to know the exact time at which it should broadcast its items in the channel C(j). The algorithm for this step is presented in Fig. 12. This algorithm is executed in parallel in each group, using a channel C(j) for each set of agents H(j), 1 ≤ j ≤ J.

4. Experimental results

To demonstrate the effectiveness and the performance of the proposed protocols and to compare them to the related works, we implemented them using C++. We simulated the two protocols on a laptop with an Intel(R) Core(TM) i5 processor at 2.53 GHz and 4 GB of memory. The following simulation results are the average of 100 tests (for each number of nodes) on the topology of connected and randomly generated networks of 100, 200, 400, 600, 800 and 1000 things in a 50 × 50 (m2) simulation area.

The points addressed in this section are:

• Demonstrating through different tests that the number of broadcast rounds agrees with the theorems proved theoretically,
• Comparing the two protocols with each other and with protocols that use other grouping procedures,
• Comparing the protocols with other protocols existing in the state of the art, applied to the permutation routing in IoT.

Figs. 13–15 present the number of broadcast rounds for our first
proposed algorithm with k = 3, k = 6 and k = 7, respectively, with a simulation on 1000 things for different values of MAX(Mi) (35, 60, 45, 55, 65, 75, 85, 95, 105), where

f1(MAX(Mi)) = 2((mk/k) + 1) × MAX(Mi) + k, if mk mod k = 0,
f2(MAX(Mi)) = 2(⌊mk/k⌋ + 1) × MAX(Mi) + k − 1, if mk mod k ≠ 0, and
f3(MAX(Mi)) = (2MAX(Mi) − (⌊MAX(Mi)/(k + 1)⌋ − 1)) × k + 3.

These results confirm our theoretical studies on the number of broadcast rounds. We see in Figs. 13–15 that, for the different experiments, the number of broadcast rounds is always greater than or equal to f1(MAX(Mi)) or f2(MAX(Mi)), and less than or equal to f3(MAX(Mi)).

The number of broadcast rounds depends on the group that has the highest number of items (from Theorem 3.8). Therefore, to show the superiority of the grouping procedures used in the first step of our protocols (abbreviated as MaMG, for MAX MAX Grouping, where MaMG1 denotes the protocol that uses O(n/k) memory space and MaMG2 the protocol that uses O(Mi) memory space for each thing ti), we compare the number of broadcast rounds of our protocols with that of protocols that use other grouping procedures. The principle of the first grouping to compare with is to add the memory spaces that contain the items randomly to the k groups (abbreviated as RRG, for Random Random Grouping); this protocol is important to compare with because it adds the elements directly to the groups, without spending time seeking the min or the MAX. The principle of the second grouping procedure is to add at each step to the minimum of the k groups the maximum of the memory spaces

Fig. 13. Number of broadcast rounds with k = 3 (f1(MAX(Mi)), f2(MAX(Mi)), f3(MAX(Mi)) and the measured number of broadcast rounds, as functions of MAX(Mi)).

Fig. 14. Number of broadcast rounds with k = 6.

Fig. 15. Number of broadcast rounds with k = 7.

Fig. 16. Comparison of the number of broadcast rounds with other protocols that use other procedures of grouping (MaMG1, MaMG2, MiMG and RRG, as functions of (number of things, k)).

Fig. 17. Comparison of the number of broadcast rounds of protocols that use O(n/k) memory space: our protocol and the protocol in [17], as functions of (number of things, k).
Fig. 18. Comparison of the number of broadcast rounds of our protocol that uses O(Mi) memory space, as a function of (number of things, k).

• Proposing solutions that deal with the sleep/wakeup states for things to save energy,
• Studying the different attacks that may occur and proposing secure protocols.

References
Measurement, San Diego, 2007, pp. 327–332.
[24] S. Park, S. Cho, J. Lee, Energy-efficient probabilistic routing algorithm for internet of things, J. Appl. Math. (2014).
[25] K. Akkaya, F. Senel, B. McLaughlan, Clustering of wireless sensor and actor networks based on sensor distribution and connectivity, J. Parallel Distrib. Comput. 69 (6) (2009) 573–587.
[26] J.L. Träff, A. Ripke, Optimal broadcast for fully connected processor-node networks, J. Parallel Distrib. Comput. 68 (2008) 887–901.
[27] C. Zhan, V.C.S. Lee, J. Wang, Y. Xu, Coding-based data broadcast scheduling in on-demand broadcast, IEEE Trans. Wireless Commun. 10 (11) (2011).
[28] M. Raynal, J. Stainer, J. Cao, W. Wu, A simple broadcast algorithm for recurrent dynamic systems, 28th IEEE International Conference on Advanced Information Networking and Applications, AINA 2014, Victoria, BC, Canada, 2014, pp. 13–16.
[29] R. Narasimhan, Cooperative broadcast channels with hybrid ARQ, IEEE 22nd International Symposium on Personal Indoor and Mobile Radio Communications (PIMRC), Toronto, ON, Canada, 2011, pp. 1579–1584.
[30] J.F. Myoupo, Concurrent broadcasts-based permutation routing algorithms in radio networks, IEEE Symposium Comput. Commun. (2003) 1272–1278.
[31] A. Datta, A fault-tolerant protocol for energy-efficient permutation routing in wireless networks, IEEE Trans. Comput. 54 (11) (2005) 421.
[32] S. Rajasekaran, D. Sharma, R. Ammar, N. Lownes, An efficient randomized routing protocol for single-hop radio networks, Proc. of the 39th International Conference on Parallel Processing (ICPP), 2010, pp. 160–167.
[33] D. Karimou, J.F. Myoupo, A fault tolerant permutation routing algorithm in mobile ad hoc networks, International Conference on Networks – Part II, 2005, pp. 107–115.
[34] A. Datta, A.Y. Zomaya, An energy-efficient permutation routing protocol for single-hop radio networks, IEEE Trans. Parallel Distrib. Syst. 15 (4) (2005) 331–338.
[35] I.S. Walls, J. Žerovnik, Optimal permutation routing on mesh networks, International Network Optimization Conference, Belgium, (2008) 22–25.
[36] D. Karimou, J.F. Myoupo, An application of an initialization protocol to permutation routing in single-hop mobile ad-hoc networks, J. Supercomput. 31 (3) (2005) 215–226.
[37] D. Karimou, J.F. Myoupo, Randomized permutation routing in multi-hop ad hoc networks with unknown destinations, IFIP Int. Federation Inf. Process. 212 (2006) 47–59.
[38] A.B. Bomgni, J.F. Myoupo, A deterministic protocol for permutation routing in dense multi-hop sensor networks, Wireless Sensor Netw. 2 (4) (2010) 293–299.
[39] H. Lakhlef, A.B. Bomgni, J.F. Myoupo, An efficient permutation routing protocol in multi-hop wireless sensor networks, Int. J. Adv. Comput. Technol. (3) (2011) 207–214.

Hicham Lakhlef is an associate professor at the University of Technology of Compiegne (UMR CNRS 7253). During the year 2015/2016 he was a temporary researcher (postdoctoral) at IRISA, University of Rennes 1 (UMR CNRS 6074) in France. During the year 2014/2015 he was a temporary teaching assistant and researcher at the University of Franche-Comté / FEMTO-ST institute (UMR CNRS 6174) in France. He was General Chair of IEEE ATC 2018, and served as a program committee member for IEEE ISPA 2017, IEEE ATC 2017 and IEEE Eurocon 2015. He co-authored more than 25 international publications. He obtained his Ph.D. degree at the University of Franche-Comté in 2014 in France, and his Master's degree from the University of Picardie Jules Verne in 2011 in France. In 2010 he was a student at the University of Setif in Algeria; in 2009 he was a student at the University of Oum Albouaghi in Algeria, where he obtained a license diploma. In 2007 and 2008 he was a student at the University of Bordj Bou Arreridj in Algeria. His research interests are in parallel and distributed algorithms, WSNs, clustering, self-reconfiguration, security, routing, and Internet of Things.

Abdelmadjid Bouabdallah received the Master (DEA) degree and Ph.D. from the University of Paris-Sud Orsay (France) in 1988 and 1991, respectively. From 1992 to 1996 he was Assistant Professor at the University of Evry-Val d'Essonne (France), and since 1996 he has been Professor at the University of Technology of Compiegne (UTC), where he is leading the Networking and Security research group and the Interaction and Cooperation research of the Excellence Research Center LABEX MS2T. His research interests include Internet QoS, security, unicast/multicast communication, Wireless Sensor Networks, and fault tolerance in wired/wireless networks. He conducted several large-scale research projects funded by Motorola Labs, Orange Labs, ANR-RNRT, CNRS, and ANR-Carnot.

Michel Raynal is a Professor of computing science, IRISA, University of Rennes, France. His main research interests are the basic principles of distributed computing systems. He is a world-leading researcher in distributed computing, and the author of numerous papers on this topic (more than 140 in int'l scientific journals, more than 300 papers in int'l conferences). He is also well known for his books on distributed computing. Michel Raynal chaired the program committees of the major conferences on the topic (e.g., ICDCS, DISC, SIROCCO, OPODIS, ICDCN, etc.) and served on the program committees of more than 180 int'l conferences, including all the most prestigious ones. He is the recipient of several "Best Paper" awards (including ICDCS 1999, 2000 and 2001, SSS 2009 and 2011, Europar 2010, DISC 2010, PODC 2014) and has supervised more than 45 PhD students. He is also the winner of the 2015 Int'l Award "Innovation in Distributed Computing" (also known as the SIROCCO Prize). He gave lectures on distributed computing in many universities all over the world. In the recent past, Michel Raynal has written four books: "Communication and Agreement Abstractions for Fault-Tolerant Asynchronous Distributed Systems", Morgan & Claypool, 251 pages, 2010 (ISBN 978-1-60845-293-4); "Fault-Tolerant Agreement in Synchronous Distributed Systems", 165 pages, Morgan & Claypool, September 2010 (ISBN 978-1-60845-525-6); "Concurrent Programming: Algorithms, Principles and Foundations", Springer, 515 pages, 2012 (ISBN 978-3-642-32026-2); and "Distributed Algorithms for Message-passing Systems", Springer, 510 pages, 2013 (ISBN 978-3-642-32026-2). Since 2010, Michel Raynal is a senior member of the prestigious "Institut Universitaire de France".

Julien Bourgeois is professor of computer science at the University of Franche-Comté (UFC) in France. He is part of the FEMTO-ST institute (UMR CNRS 6174), where he is leading the complex networks team. His research interests are in distributed intelligent MEMS, P2P networks and security management for complex networks. He has been invited professor at Carnegie Mellon University (US) in 2012/2013, at Emory University (US) in 2011, and at Hong Kong Polytechnic University in 2010 and 2011. He is currently leading the topic System architecture, communication, networking in the LABEX ACTION funded program, which aims at building integrated smart systems (http://www.labexaction.fr/). He created and then co-led the Smart Surface project. In 2011, he created the Smart Blocks project, which aims at building a self-reconfigurable conveying modular platform composed of MEMS sensors and actuators, and in 2013 he created the computation and coordination for DiMEMS project. All these projects are funded by research agencies. He has also worked in the Centre for Parallel Computing at the University of Westminster (UK) and in the Consiglio Nazionale delle Ricerche (CNR) in Geneva. He collaborated with several other institutions (Lawrence Livermore National Lab, Oak Ridge National Lab, etc.). He is a member of more than 10 program committees of international conferences on these topics and has organized many events. He has worked for more than 10 years on these topics, has co-authored more than 100 international publications and communications, and has served as PC member and chaired various conferences. Apart from his research activities, he is acting as a consultant for the French government and for companies.