
Compound Multiple Access Channel with Confidential Messages

Hassan Zivari-Fard, Bahareh Akhbari, Mahmoud Ahmadian-Attari, Mohammad Reza Aref
Information Systems and Security Lab (ISSL), Sharif University of Technology, Tehran, Iran
Email: hassan_zivari@ee.kntu.ac.ir, aref@sharif.edu
Email: {akhbari, mahmoud}@eetd.kntu.ac.ir

Abstract—In this paper, we study the problem of secret communication over a Compound Multiple Access Channel (MAC). In this channel, we assume that one of the transmitted messages is confidential: it is decoded only by its corresponding receiver and is kept secret from the other receiver. For this setting (compound MAC with confidential messages), we derive general inner and outer bounds on the secrecy capacity region. As examples, we also investigate the less noisy and Gaussian versions of this channel, and extend the results of the discrete memoryless version to these cases. Moreover, providing numerical examples for the Gaussian case, we compare the achievable rate regions of the compound MAC and the compound MAC with confidential messages.

I. INTRODUCTION

The wire-tap channel was first introduced by Wyner in 1975 [1]. His model consisted of a transmitter, a receiver and an eavesdropper, where the eavesdropper's channel was a degraded version of the legitimate receiver's channel. Csiszár and Körner extended the wire-tap channel to a more general model called the broadcast channel with confidential messages [2]. More recently, sending a confidential message over multiple-user channels has been studied under various models [3]-[8]; we refer the reader to [9] for a recent survey of the research progress in this area. The discrete memoryless compound Multiple Access Channel (MAC), the Gaussian compound MAC with a common message and conferencing decoders, and the compound MAC in which both encoders and decoders cooperate via conferencing links were considered in [10].

In [3], the authors studied the effect of user cooperation in the multiple access channel when transmitting a confidential message; there, active cooperation between two trusted users is attained through a generalized feedback channel. In [4], a discrete memoryless multiple access channel with confidential messages was studied, where each user exploits the output of the generalized feedback to eavesdrop on the other user's private message. Ekrem and Ulukus derived n-letter inner and outer bounds for the multiple access wire-tap channel with no common message [5]. In [7], the authors studied this model assuming that there exists a common message that the eavesdropper is unable to decode, and they derived a rate region under the strong secrecy criterion.

In this paper, we consider the Compound Multiple Access Channel with Confidential Messages (CMAC-CM). In wireless networks, there may be a scenario in which some of the users have confidential information that they wish to keep secret from illegal users. In fact, in terms of information, the users can be divided into legitimate and illegal users: legitimate users are allowed to decode all the transmitted information (including the common and private messages of all the transmitters), while illegal users are allowed to decode only the messages of their intended transmitters. Motivated by this scenario, we consider the CMAC-CM as a building block of this setting. In this model, each of the transmitters sends its own private message, and both of them share a common message. One of the transmitters' private messages ($W_1$) is confidential; it is decoded only by the first receiver and kept secret from the second receiver. The common message $W_0$ and the private message $W_2$ are decoded by both receivers (see Fig. 1). For this model we derive single-letter inner and outer bounds on the secrecy capacity region. We also consider two examples of this channel: the less noisy and the Gaussian CMAC-CM.

Footnote 1: This work was partially supported by Iran Telecom Research Center under contract no. 17175/500.

Fig. 1. Compound Multiple Access Channel with Confidential Messages (CMAC-CM).

This paper is organized as follows. In Section II, the system

model is described. In Section III, an outer bound on the

secrecy capacity region of CMAC-CM and also an achievable

secrecy rate region for CMAC-CM are derived. Two examples

are given in Section IV. The paper is concluded in Section V.

II. SYSTEM MODEL

Consider a discrete memoryless CMAC-CM with four terminals, as shown in Fig. 1. The finite sets $\mathcal{X}_1, \mathcal{X}_2, \mathcal{Y}_1, \mathcal{Y}_2$ and the transition probability distribution $p(y_1, y_2 | x_1, x_2)$ are the constitutive components of this channel. Here, $X_1$ and $X_2$ are the channel inputs from the transmitters, and $Y_1$ and $Y_2$ are the channel outputs at receiver 1 and receiver 2, respectively. Throughout this paper, random variables are denoted by capital letters, e.g., $X, Y$, and their realizations by lower case letters, e.g., $x, y$. The set of strongly jointly typical sequences of length $n$ on the joint distribution $p(x, y)$ is denoted by $A_\epsilon^n(P_{X,Y})$. We use $X_i^n$ to indicate the vector $(X_{i,1}, X_{i,2}, \ldots, X_{i,n})$, and $X_{i,j}^k$ to indicate the vector $(X_{i,j}, X_{i,j+1}, \ldots, X_{i,k})$.
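As a numerical aside (not part of the paper), the strong typicality check behind $A_\epsilon^n(P_{X,Y})$ can be sketched in a few lines of Python. The sketch below uses the variant of the definition that requires every empirical joint frequency to lie within $\epsilon$ of the true probability, and the toy distribution is an arbitrary choice.

```python
import itertools
import numpy as np

def strongly_jointly_typical(x, y, p_xy, eps):
    """Return True if (x, y) lies in A_eps^n(P_{X,Y}): every empirical
    joint frequency is within eps of the true probability (one common
    variant of the strong typicality definition)."""
    n = len(x)
    x, y = np.asarray(x), np.asarray(y)
    for a, b in itertools.product(range(p_xy.shape[0]), range(p_xy.shape[1])):
        freq = np.sum((x == a) & (y == b)) / n
        if abs(freq - p_xy[a, b]) > eps:
            return False
    return True

# Toy usage: draw (X, Y) i.i.d. from a doubly symmetric binary source.
rng = np.random.default_rng(0)
p_xy = np.array([[0.4, 0.1], [0.1, 0.4]])
flat = rng.choice(4, size=2000, p=p_xy.ravel())
print(strongly_jointly_typical(flat // 2, flat % 2, p_xy, eps=0.05))  # True w.h.p.
```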

Before discussing the achievable rates, we first define a code for the channel as follows.

Definition 1: An $(M_0, M_1, M_2, n, P_e^n)$ code for the CMAC-CM (Fig. 1) consists of the following. i) Two message sets $(W_0, W_1)$ and $(W_0, W_2)$ that are uniformly distributed over $[1:M_0] \times [1:M_1]$ and $[1:M_0] \times [1:M_2]$, respectively, where the messages satisfy $W_u \in \mathcal{W}_u = \{1, 2, \ldots, M_u\}$ for $u = 0, 1, 2$. Note that $W_0$, $W_1$ and $W_2$ are independent. ii) A stochastic encoder $f$ for transmitter 1, specified by a matrix of conditional probabilities $f(x_1^n | w_0, w_1)$, where $x_1^n \in \mathcal{X}_1^n$, $w_0 \in \mathcal{W}_0$ and $w_1 \in \mathcal{W}_1$, with $\sum_{x_1^n} f(x_1^n | w_0, w_1) = 1$; here $f(x_1^n | w_0, w_1)$ is the probability of encoding the message pair $(w_0, w_1)$ to the channel input $x_1^n$. iii) A deterministic encoder $g$ for transmitter 2, i.e., the mapping $g: \mathcal{W}_0 \times \mathcal{W}_2 \to \mathcal{X}_2^n$ generating codewords $x_2^n = g(w_0, w_2)$. iv) A decoding function $\varphi_1: \mathcal{Y}_1^n \to \mathcal{W}_0 \times \mathcal{W}_1 \times \mathcal{W}_2$ at receiver 1 that assigns $(\hat{W}_{01}, \hat{W}_{11}, \hat{W}_{21}) \in [1:M_0] \times [1:M_1] \times [1:M_2]$ to the received sequence $y_1^n$. v) A decoding function $\varphi_2: \mathcal{Y}_2^n \to \mathcal{W}_0 \times \mathcal{W}_2$ at receiver 2 that assigns $(\hat{W}_{02}, \hat{W}_{22}) \in [1:M_0] \times [1:M_2]$ to the received sequence $y_2^n$. The probability of error is defined as

$$P_e^n = \Pr\big(\hat{W}_{0j} \neq W_0 \text{ for } j = 1, 2, \text{ or } \hat{W}_{11} \neq W_1, \text{ or } \hat{W}_{2j} \neq W_2 \text{ for } j = 1, 2\big). \quad (1)$$

The ignorance level of receiver 2 with respect to the confidential message is measured by the normalized equivocation $\frac{1}{n} H(W_1 | Y_2^n)$.

Definition 2: A rate tuple $(R_0, R_1, R_2)$ is said to be achievable for the CMAC-CM if, for any $\epsilon > 0$, there exists an $(M_0, M_1, M_2, n, P_e^n)$ code such that

$$P_e^n < \epsilon \quad (2)$$

$$M_0 \geq 2^{nR_0}, \quad M_1 \geq 2^{nR_1}, \quad M_2 \geq 2^{nR_2} \quad (3)$$

$$R_1 - \frac{1}{n} H(W_1 | Y_2^n) \leq \epsilon. \quad (4)$$
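Since $W_1$ is uniform, condition (4) is the standard weak secrecy constraint; taking $H(W_1) = nR_1$ (which holds when $M_1 = 2^{nR_1}$), it can be read equivalently as a cap on the normalized information leaked to receiver 2 (a one-line rewriting, not spelled out in the paper):

$$R_1 - \frac{1}{n}H(W_1|Y_2^n) \leq \epsilon \;\Longleftrightarrow\; \frac{1}{n}I(W_1; Y_2^n) = \frac{1}{n}H(W_1) - \frac{1}{n}H(W_1|Y_2^n) \leq \epsilon.$$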

III. MAIN RESULTS

A. Outer Bound

Theorem 1 (Outer bound): The secrecy capacity region of the CMAC-CM is included in the set of rate tuples satisfying

$$R_0 \leq \min\{I(U; Y_1), I(U; Y_2)\} \quad (5)$$

$$R_1 \leq I(V_1; Y_1 | U, V_2) - I(V_1; Y_2 | U, V_2) \quad (6)$$

$$R_2 \leq \min\{I(V_2; Y_1), I(V_2; Y_2)\} \quad (7)$$

$$R_1 + R_2 \leq I(V_1, V_2; Y_1) - I(V_1; Y_2 | U, V_2) \quad (8)$$

for some joint distribution

$$p(u)\, p(v_1, v_2 | u)\, p(x_1 | v_1)\, p(x_2 | v_2)\, p(y_1, y_2 | x_1, x_2). \quad (9)$$

Remark 1: If we set $W_2 = \emptyset$ and thus $R_2 = 0$ in Theorem 1, the region reduces to the region of the broadcast channel with confidential messages derived by Csiszár and Körner [2].

Proof (Theorem 1): We show that any achievable rate tuple satisfies (5)-(8) for some distribution factorized as (9). Consider an $(M_0, M_1, M_2, n, P_e^n)$ code for the CMAC-CM. Applying Fano's inequality [11] results in

$$H(W_0, W_1, W_2 | Y_1^n) \leq n\epsilon_1 \quad (10)$$

$$H(W_0, W_2 | Y_2^n) \leq n\epsilon_2. \quad (11)$$

We first derive the bound on $R_1$. Note that the perfect secrecy condition (4) implies that

$$nR_1 - n\epsilon \leq H(W_1 | Y_2^n). \quad (12)$$

Hence, we bound $H(W_1 | Y_2^n)$ as follows:

$$\begin{aligned}
H(W_1|Y_2^n) &= H(W_1|Y_2^n, W_0, W_2) + I(W_1; W_0, W_2|Y_2^n) \\
&= H(W_1|Y_2^n, W_0, W_2) + H(W_0, W_2|Y_2^n) - H(W_0, W_2|Y_2^n, W_1) \\
&\leq H(W_1|Y_2^n, W_0, W_2) + n\epsilon_2 \\
&\leq H(W_1|Y_2^n, W_0, W_2) - H(W_1|Y_1^n, W_0, W_2) + n\epsilon_1 + n\epsilon_2 \quad (13)
\end{aligned}$$

where the first and the second inequalities are due to Fano's inequality. Now, based on (13), we have

$$\begin{aligned}
H(W_1|Y_2^n) &\leq I(W_1; Y_1^n | W_0, W_2) - I(W_1; Y_2^n | W_0, W_2) + n\epsilon \\
&= \sum_{i=1}^n \big[ I(W_1; Y_{1,i} | Y_1^{i-1}, W_0, W_2) - I(W_1; Y_{2,i} | Y_{2,i+1}^n, W_0, W_2) \big] + n\epsilon \\
&= \sum_{i=1}^n \big[ I(W_1, Y_{2,i+1}^n; Y_{1,i} | Y_1^{i-1}, W_0, W_2) - I(Y_{2,i+1}^n; Y_{1,i} | Y_1^{i-1}, W_0, W_1, W_2) \\
&\qquad - I(W_1, Y_1^{i-1}; Y_{2,i} | Y_{2,i+1}^n, W_0, W_2) + I(Y_1^{i-1}; Y_{2,i} | Y_{2,i+1}^n, W_0, W_1, W_2) \big] + n\epsilon \\
&= \sum_{i=1}^n \big[ I(Y_{2,i+1}^n; Y_{1,i} | Y_1^{i-1}, W_0, W_2) + I(W_1; Y_{1,i} | Y_{2,i+1}^n, Y_1^{i-1}, W_0, W_2) \\
&\qquad - I(Y_{2,i+1}^n; Y_{1,i} | Y_1^{i-1}, W_0, W_1, W_2) - I(Y_1^{i-1}; Y_{2,i} | Y_{2,i+1}^n, W_0, W_2) \\
&\qquad - I(W_1; Y_{2,i} | Y_{2,i+1}^n, Y_1^{i-1}, W_0, W_2) + I(Y_1^{i-1}; Y_{2,i} | Y_{2,i+1}^n, W_0, W_1, W_2) \big] + n\epsilon \\
&= \sum_{i=1}^n \big[ I(W_1; Y_{1,i} | Y_{2,i+1}^n, Y_1^{i-1}, W_0, W_2) - I(W_1; Y_{2,i} | Y_{2,i+1}^n, Y_1^{i-1}, W_0, W_2) \big] + n\epsilon
\end{aligned}$$

where $\epsilon = \epsilon_1 + \epsilon_2$. The last equality is due to [2, Lemma 7], where

$$\sum_{i=1}^n I(Y_{2,i+1}^n; Y_{1,i} | Y_1^{i-1}, W_0, W_2) = \sum_{i=1}^n I(Y_1^{i-1}; Y_{2,i} | Y_{2,i+1}^n, W_0, W_2) \quad (14)$$

and

$$\sum_{i=1}^n I(Y_{2,i+1}^n; Y_{1,i} | Y_1^{i-1}, W_0, W_1, W_2) = \sum_{i=1}^n I(Y_1^{i-1}; Y_{2,i} | Y_{2,i+1}^n, W_0, W_1, W_2). \quad (15)$$

So, we have

$$H(W_1|Y_2^n) \leq \sum_{i=1}^n \big[ I(W_1; Y_{1,i}|U_i, V_{2,i}) - I(W_1; Y_{2,i}|U_i, V_{2,i}) \big] + n\epsilon = \sum_{i=1}^n \big[ I(V_{1,i}; Y_{1,i}|U_i, V_{2,i}) - I(V_{1,i}; Y_{2,i}|U_i, V_{2,i}) \big] + n\epsilon \quad (16)$$

where the equalities result from the following definitions of the random variables:

$$U_i = (Y_{2,i+1}^n, Y_1^{i-1}, W_0) \quad (17)$$

$$V_{1,i} = (U_i, W_1) \quad (18)$$

$$V_{2,i} = (U_i, W_2). \quad (19)$$

Now, we have

$$\begin{aligned}
H(W_1|Y_2^n) &\leq n \sum_{i=1}^n \frac{1}{n} \big[ I(V_{1,Q}; Y_{1,Q} | U_Q, V_{2,Q}, Q = i) - I(V_{1,Q}; Y_{2,Q} | U_Q, V_{2,Q}, Q = i) \big] + n\epsilon \\
&= n \sum_{i=1}^n p(Q = i) \big[ I(V_{1,Q}; Y_{1,Q} | U_Q, V_{2,Q}, Q = i) - I(V_{1,Q}; Y_{2,Q} | U_Q, V_{2,Q}, Q = i) \big] + n\epsilon \\
&= n \big[ I(V_{1,Q}; Y_{1,Q} | U_Q, V_{2,Q}, Q) - I(V_{1,Q}; Y_{2,Q} | U_Q, V_{2,Q}, Q) \big] + n\epsilon \\
&= n \big[ I(V_1; Y_1 | U, V_2) - I(V_1; Y_2 | U, V_2) \big] + n\epsilon \quad (20)
\end{aligned}$$

where $V_{1,Q} = V_1$, $V_{2,Q} = V_2$, $Y_{1,Q} = Y_1$, $Y_{2,Q} = Y_2$, $(U_Q, Q) = U$, and $Q$ has a uniform distribution over $\{1, 2, \ldots, n\}$. Now, we derive the bound on $R_2$ as follows:

$$\begin{aligned}
nR_2 &= H(W_2) = H(W_2|W_0) \\
&= I(W_2; Y_1^n|W_0) + H(W_2|Y_1^n, W_0) \\
&\leq I(W_2; Y_1^n|W_0) + n\epsilon_1 \quad (21)
\end{aligned}$$

where the second equality results from the independence of $W_0, W_1, W_2$, and the inequality is due to Fano's inequality.

Now, based on (21),

$$\begin{aligned}
nR_2 &\leq \sum_{i=1}^n I(W_2; Y_{1,i} | Y_1^{i-1}, W_0) + n\epsilon_1 \\
&\leq \sum_{i=1}^n I(W_2, Y_{2,i+1}^n; Y_{1,i} | Y_1^{i-1}, W_0) + n\epsilon_1 \\
&\leq \sum_{i=1}^n I(W_2, W_0, Y_{2,i+1}^n, Y_1^{i-1}; Y_{1,i}) + n\epsilon_1 \\
&= \sum_{i=1}^n I(V_{2,i}, U_i; Y_{1,i}) + n\epsilon_1 \quad (22)
\end{aligned}$$

where the last equality follows from (17) and (19). Now, we have

$$\begin{aligned}
R_2 &\leq \sum_{i=1}^n \frac{1}{n} I(V_{2,i}, U_i; Y_{1,i}) + \epsilon_1 \\
&= \sum_{i=1}^n \frac{1}{n} I(V_{2,Q}, U_Q; Y_{1,Q} | Q = i) + \epsilon_1 \\
&= \sum_{i=1}^n p(Q = i)\, I(V_{2,Q}, U_Q; Y_{1,Q} | Q = i) + \epsilon_1 \\
&= I(V_{2,Q}, U_Q; Y_{1,Q} | Q) + \epsilon_1 \\
&= I(V_{2,Q}, U_Q, Q; Y_{1,Q}) + \epsilon_1 \\
&= I(V_{2,Q}; Y_{1,Q}) + I(U_Q; Y_{1,Q} | V_{2,Q}) + I(Q; Y_{1,Q} | V_{2,Q}, U_Q) + \epsilon_1 \\
&= I(V_{2,Q}; Y_{1,Q}) + \epsilon_1 \\
&= I(V_2; Y_1) + \epsilon_1 \quad (23)
\end{aligned}$$

where the fourth equality results from the independence of $Q$ and $Y_Q$. The sixth equality is due to $I(U_Q; Y_{1,Q} | V_{2,Q}) = 0$ (see (17) and (19)), and $I(Q; Y_{1,Q} | V_{2,Q}, U_Q) = 0$ since $Q$ is independent of $Y_{1,Q}$. The last equality results from setting $V_{2,Q} = V_2$ and $Y_{1,Q} = Y_1$.

On the other hand, we have

$$\begin{aligned}
nR_2 &= H(W_2) = H(W_2|W_0) \\
&= I(W_2; Y_2^n|W_0) + H(W_2|Y_2^n, W_0) \\
&\leq I(W_2; Y_2^n|W_0) + n\epsilon_2 \quad (24)
\end{aligned}$$

where the inequality is due to Fano's inequality. Next, we have

$$\begin{aligned}
nR_2 &\leq \sum_{i=1}^n I(W_2; Y_{2,i} | Y_{2,i+1}^n, W_0) + n\epsilon_2 \\
&\leq \sum_{i=1}^n I(W_2, Y_1^{i-1}; Y_{2,i} | Y_{2,i+1}^n, W_0) + n\epsilon_2 \\
&\leq \sum_{i=1}^n I(W_2, Y_{2,i+1}^n, Y_1^{i-1}, W_0; Y_{2,i}) + n\epsilon_2 \\
&= \sum_{i=1}^n I(U_i, V_{2,i}; Y_{2,i}) + n\epsilon_2 \quad (25)
\end{aligned}$$

where the last equality follows from (17) and (19). Now, by applying the same time-sharing strategy as in (23), we have

$$R_2 \leq I(V_2; Y_2). \quad (26)$$

Now, we derive the bound on $n(R_1 + R_2)$ as follows:

$$\begin{aligned}
n(R_1 + R_2) &= H(W_1, W_2) = H(W_1, W_2 | W_0) \\
&= I(W_1, W_2; Y_1^n | W_0) + H(W_1, W_2 | Y_1^n, W_0) \\
&\leq I(W_1, W_2; Y_1^n | W_0) + n\epsilon_1
\end{aligned}$$

where the inequality is due to Fano's inequality. Now, we have

$$\begin{aligned}
n(R_1 + R_2) &\leq I(W_1, W_2; Y_1^n | W_0) - \big( H(W_1) - H(W_1|Y_2^n) - n\epsilon \big) + n\epsilon_1 \\
&= I(W_1, W_2; Y_1^n | W_0) - H(W_1 | W_0, W_2) + H(W_1|Y_2^n) + n(\epsilon_1 + \epsilon) \\
&= I(W_1, W_2; Y_1^n | W_0) - H(W_1 | W_0, W_2) + H(W_1 | Y_2^n, W_0, W_2) \\
&\qquad + I(W_1; W_0, W_2 | Y_2^n) + n(\epsilon_1 + \epsilon) \\
&\leq I(W_1, W_2; Y_1^n | W_0) - I(W_1; Y_2^n | W_0, W_2) + n\epsilon_2 + n(\epsilon_1 + \epsilon)
\end{aligned}$$

where the first inequality is a consequence of (4) and the last inequality is due to Fano's inequality. We also define $\epsilon_3 = \epsilon_1 + \epsilon_2 + \epsilon$. So, we have

$$\begin{aligned}
n(R_1 + R_2) &\leq \sum_{i=1}^n \big[ I(W_1, W_2; Y_{1,i} | Y_1^{i-1}, W_0) - I(W_1; Y_{2,i} | Y_{2,i+1}^n, W_0, W_2) \big] + n\epsilon_3 \\
&= \sum_{i=1}^n \big[ I(W_1, W_2, Y_{2,i+1}^n; Y_{1,i} | Y_1^{i-1}, W_0) - I(Y_{2,i+1}^n; Y_{1,i} | Y_1^{i-1}, W_0, W_1, W_2) \\
&\qquad - I(W_1, Y_1^{i-1}; Y_{2,i} | Y_{2,i+1}^n, W_0, W_2) + I(Y_1^{i-1}; Y_{2,i} | Y_{2,i+1}^n, W_0, W_1, W_2) \big] + n\epsilon_3 \\
&= \sum_{i=1}^n \big[ I(W_1, W_2, Y_{2,i+1}^n; Y_{1,i} | Y_1^{i-1}, W_0) - I(W_1, Y_1^{i-1}; Y_{2,i} | Y_{2,i+1}^n, W_0, W_2) \big] + n\epsilon_3
\end{aligned}$$

where the last equality is a consequence of (15). Hence, we have

$$\begin{aligned}
n(R_1 + R_2) &\leq \sum_{i=1}^n \big[ I(W_1, W_2, Y_{2,i+1}^n; Y_{1,i} | Y_1^{i-1}, W_0) - I(W_1; Y_{2,i} | Y_{2,i+1}^n, Y_1^{i-1}, W_0, W_2) \big] + n\epsilon_3 \\
&\leq \sum_{i=1}^n \big[ I(W_0, W_1, W_2, Y_1^{i-1}, Y_{2,i+1}^n; Y_{1,i}) - I(W_1; Y_{2,i} | Y_{2,i+1}^n, Y_1^{i-1}, W_0, W_2) \big] + n\epsilon_3.
\end{aligned}$$

Now, we have

$$n(R_1 + R_2) \leq \sum_{i=1}^n \big[ I(U_i, V_{1,i}, V_{2,i}; Y_{1,i}) - I(V_{1,i}; Y_{2,i} | U_i, V_{2,i}) \big] + n\epsilon_3$$

where the inequality follows from (17)-(19). Now, by applying the same time-sharing strategy as before, we have

$$R_1 + R_2 \leq I(V_1, V_2; Y_1) - I(V_1; Y_2 | U, V_2). \quad (27)$$

Now, we derive the bound on $R_0$ as follows:

$$\begin{aligned}
nR_0 &= H(W_0) = I(W_0; Y_1^n) + H(W_0 | Y_1^n) \\
&\leq I(W_0; Y_1^n) + n\epsilon_1 \\
&= \sum_{i=1}^n I(W_0; Y_{1,i} | Y_1^{i-1}) + n\epsilon_1 \\
&= \sum_{i=1}^n \big[ I(W_0, Y_1^{i-1}; Y_{1,i}) - I(Y_1^{i-1}; Y_{1,i}) \big] + n\epsilon_1.
\end{aligned}$$

So, we have

$$\begin{aligned}
nR_0 &\leq \sum_{i=1}^n I(W_0, Y_1^{i-1}; Y_{1,i}) + n\epsilon_1 \\
&= \sum_{i=1}^n \big[ I(W_0, Y_1^{i-1}, Y_{2,i+1}^n; Y_{1,i}) - I(Y_{2,i+1}^n; Y_{1,i} | W_0, Y_1^{i-1}) \big] + n\epsilon_1 \\
&\leq \sum_{i=1}^n I(W_0, Y_1^{i-1}, Y_{2,i+1}^n; Y_{1,i}) + n\epsilon_1 \\
&= \sum_{i=1}^n I(U_i; Y_{1,i}) + n\epsilon_1.
\end{aligned}$$

Now, by applying the same time-sharing strategy as before, we have

$$R_0 \leq I(U; Y_1) + \epsilon_1. \quad (28)$$

Similarly,

$$R_0 \leq I(U; Y_2) + \epsilon_2. \quad (29)$$

Therefore,

$$R_0 \leq \min\{I(U; Y_1), I(U; Y_2)\}. \quad (30)$$

Considering (12), (20), (23), (26), (27) and (30), the region in (5)-(8) is obtained. This completes the proof.

B. Achievability

Theorem 2: An inner bound on the secrecy capacity region is given by

$$\begin{aligned}
& R_0 \geq 0, \quad R_1 \geq 0, \quad R_2 \geq 0 \\
& R_1 \leq I(V_1; Y_1 | X_2, U) - I(V_1; Y_2 | X_2, U) \\
& R_2 \leq \min\{I(X_2; Y_1 | V_1, U), I(X_2; Y_2 | U)\} \\
& R_0 + R_2 \leq I(U, X_2; Y_2) \\
& R_1 + R_2 \leq I(V_1, X_2; Y_1 | U) - I(V_1; Y_2 | X_2, U) \\
& R_0 + R_1 + R_2 \leq I(V_1, X_2; Y_1) - I(V_1; Y_2 | X_2, U)
\end{aligned} \quad (31)$$

where the union is taken over all probability distributions of the form $p(u, v_1, x_1, x_2, y_1, y_2) = p(u)\, p(v_1|u)\, p(x_1|v_1)\, p(x_2|u)\, p(y_1, y_2|x_1, x_2)$.

Remark 2: If we convert our model to a multiple access channel with correlated sources by setting $Y_2 = \emptyset$ and $V_1 = X_1$ in Theorem 2, the region reduces to the region of the multiple access channel with correlated sources derived by Slepian and Wolf [12].

Remark 3: If we set $X_2 = \emptyset$ in Theorem 2, the region includes the region of the broadcast channel with confidential messages derived by Csiszár and Körner [2].

Proof (Theorem 2): Fix $p(u)$, $p(v_1|u)$, $p(x_1|v_1)$ and $p(x_2|u)$.

1) Codebook generation:
i) Generate $2^{nR_0}$ codewords $u^n$, each drawn uniformly from the set $A_\epsilon^n(P_U)$, and index them as $u^n(w_0)$, $w_0 \in \{1, \ldots, 2^{nR_0}\}$.
ii) For each codeword $u^n(w_0)$, generate $2^{n\tilde{R}}$ codewords $v_1^n$, each drawn uniformly from the set $A_\epsilon^{(n)}(P_{V_1|U})$, where $\tilde{R} = R_1 + I(V_1; Y_2 | X_2, U)$. Then, randomly bin the $2^{n\tilde{R}}$ codewords into $2^{nR_1}$ bins and label them as $v_1^n(w_0, w_1, l)$. Here, $w_1$ is the bin number and $l \in L = \{1, \ldots, 2^{nI(V_1; Y_2|X_2,U)}\}$ is the index of the codeword within bin number $w_1$.
iii) For each codeword $u^n(w_0)$, generate $2^{nR_2}$ codewords $x_2^n(w_0, w_2)$, each drawn uniformly from the set $A_\epsilon^{(n)}(P_{X_2|U})$, and label them as $x_2^n(w_0, w_2)$, $w_2 \in \{1, \ldots, 2^{nR_2}\}$.
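To make step ii) concrete, here is a minimal sketch (not from the paper) of Wyner-type random binning over a binary alphabet; the toy rates, the i.i.d. Bernoulli(1/2) codeword generation (in place of uniform drawing from the typical set), and the fixed $u^n(w_0)$ are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 16          # block length (toy value)
R1 = 0.25       # secret-message rate (assumption)
R_leak = 0.25   # stands in for I(V1; Y2 | X2, U) (assumption)

num_bins = 2 ** int(n * R1)        # one bin per secret message w1
cw_per_bin = 2 ** int(n * R_leak)  # protection index l within a bin

# For a fixed u^n(w0): 2^{n(R1 + R_leak)} codewords v1^n arranged into
# bins; v1n[w1][l] plays the role of v1^n(w0, w1, l).
v1n = rng.integers(0, 2, size=(num_bins, cw_per_bin, n))

def encode(w1):
    """Stochastic encoding step of Theorem 2: pick l uniformly at random;
    in the full scheme, x1^n would then be drawn from p(x1 | v1)."""
    l = rng.integers(cw_per_bin)
    return v1n[w1, l]

print(encode(w1=3))  # one random codeword from bin number 3
```

The extra $nI(V_1; Y_2|X_2, U)$ bits of randomness carried by $l$ are exactly what is sacrificed to confuse receiver 2, as the equivocation computation below makes precise.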

2) Encoding: To send the message pair $(w_0, w_1)$, the encoder $f$ first randomly chooses the index $l$ corresponding to $(w_0, w_1)$ and then generates a codeword $x_1^n$ at random according to $\prod_{i=1}^n p(x_{1,i} | v_{1,i})$. Transmitter 2 uses the deterministic encoder to send $(w_0, w_2)$ and transmits the codeword $x_2^n(w_0, w_2)$.

3) Decoding and probability of error: Receiver 1 declares that the indices $(\hat{w}_{01}, \hat{w}_{11}, \hat{w}_{21})$ have been sent if there is a unique tuple of indices $(\hat{w}_{01}, \hat{w}_{11}, \hat{w}_{21})$ such that $(u^n(\hat{w}_{01}), v_1^n(\hat{w}_{01}, \hat{w}_{11}, l), x_2^n(\hat{w}_{01}, \hat{w}_{21}), y_1^n) \in A_\epsilon^n(P_{U V_1 X_2 Y_1})$. Receiver 2 declares that the index pair $(\hat{w}_{02}, \hat{w}_{22})$ has been sent if there is a unique pair of indices $(\hat{w}_{02}, \hat{w}_{22})$ such that $(u^n(\hat{w}_{02}), x_2^n(\hat{w}_{02}, \hat{w}_{22}), y_2^n) \in A_\epsilon^n(P_{U X_2 Y_2})$.

Using joint decoding [11], it can be shown that the probability of error goes to zero as $n \to \infty$ if we choose

$$R_1 \leq I(V_1; Y_1 | X_2, U) - I(V_1; Y_2 | X_2, U) \quad (32)$$

$$R_2 \leq \min\{I(X_2; Y_1 | V_1, U), I(X_2; Y_2 | U)\} \quad (33)$$

$$R_0 + R_2 \leq I(U, X_2; Y_2) \quad (34)$$

$$R_1 + R_2 \leq I(V_1, X_2; Y_1 | U) - I(V_1; Y_2 | X_2, U) \quad (35)$$

$$R_0 + R_1 + R_2 \leq I(U, V_1, X_2; Y_1) - I(V_1; Y_2 | X_2, U). \quad (36)$$

4) Equivocation computation:

$$\begin{aligned}
H(W_1|Y_2^n) &\geq H(W_1|Y_2^n, X_2^n, U^n) \\
&= H(W_1, Y_2^n|X_2^n, U^n) - H(Y_2^n|X_2^n, U^n) \\
&= H(W_1, Y_2^n, V_1^n|X_2^n, U^n) - H(V_1^n|W_1, Y_2^n, X_2^n, U^n) - H(Y_2^n|X_2^n, U^n) \\
&= H(W_1, V_1^n|X_2^n, U^n) + H(Y_2^n|W_1, V_1^n, X_2^n, U^n) \\
&\qquad - H(V_1^n|W_1, Y_2^n, X_2^n, U^n) - H(Y_2^n|X_2^n, U^n) \\
&\geq H(V_1^n|X_2^n, U^n) + H(Y_2^n|V_1^n, X_2^n, U^n) \\
&\qquad - H(V_1^n|W_1, Y_2^n, X_2^n, U^n) - H(Y_2^n|X_2^n, U^n) \\
&= H(V_1^n|U^n) - H(V_1^n|W_1, Y_2^n, X_2^n, U^n) - I(V_1^n; Y_2^n|X_2^n, U^n) \quad (37)
\end{aligned}$$

where the last inequality is due to the fact that $V_1^n$ is a function of $W_1$, and the last equality follows from the Markov chain $V_1^n - U^n - X_2^n$. The first term in (37) is given by

$$H(V_1^n|U^n) = n\tilde{R}. \quad (38)$$

We then show that $H(V_1^n|W_1, Y_2^n, X_2^n, U^n) \leq n\delta_1$, where $\delta_1 \to 0$ as $n \to \infty$. Based on Fano's inequality, we have

$$H(V_1^n|W_1 = w_1, Y_2^n, X_2^n, U^n) \leq 1 + p_{e,w_1}(n\tilde{R} - nR_1) \triangleq n\delta_1$$

where $p_{e,w_1}$ denotes the average error probability for decoding $v_1^n(w_0, w_1, l)$ given $W_1 = w_1$ and $W_0 = w_0$. Hence,

$$H(V_1^n|W_1, Y_2^n, X_2^n, U^n) = \sum_{w_1 \in \mathcal{W}_1} p(W_1 = w_1)\, H(V_1^n|W_1 = w_1, Y_2^n, X_2^n, U^n) \leq n\delta_1. \quad (39)$$

The last term in (37) is bounded as

$$I(V_1^n; Y_2^n|X_2^n, U^n) \leq nI(V_1; Y_2|X_2, U) + n\delta_2 \quad (40)$$

where $\delta_2 \to 0$ as $n \to \infty$ [1, Lemma 8]. By replacing (38)-(40) in equation (37), and setting $\delta = \delta_1 + \delta_2$, we have

$$\begin{aligned}
H(W_1|Y_2^n) &\geq n\tilde{R} - n\delta_1 - nI(V_1; Y_2|X_2, U) - n\delta_2 \\
&= n\big(\tilde{R} - I(V_1; Y_2|X_2, U)\big) - n\delta = nR_1 - n\delta. \quad (41)
\end{aligned}$$

This completes the proof of Theorem 2.

IV. EXAMPLES

In this section, we consider the less noisy and Gaussian CMAC-CM.

A. Less Noisy CMAC-CM

We define the less noisy model as a CMAC-CM in which $I(V_2; Y_1) \geq I(V_2; Y_2)$ for all $p(v_2, x_2)$.

Theorem 3 (Outer Bound): The secrecy capacity region of the less noisy CMAC-CM is included in the set of rate pairs satisfying

$$R_1 \leq I(V_1; Y_1|V_2) - I(V_1; Y_2|V_2) \quad (42)$$

$$R_2 \leq I(V_2; Y_2) \quad (43)$$

$$R_1 + R_2 \leq I(V_1, V_2; Y_1) - I(V_1; Y_2|V_2) \quad (44)$$

for some joint distribution $p(v_1, v_2)\, p(x_1|v_1)\, p(x_2|v_2)\, p(y_1, y_2|x_1, x_2)$.

Proof: The proof is similar to that of Theorem 1: if we set $R_0 = 0$ and define $V_{1,i} = (Y_{2,i+1}^n, Y_1^{i-1}, W_1)$ and $V_{2,i} = (Y_{2,i+1}^n, Y_1^{i-1}, W_2)$, we can derive (42)-(44).

Fig. 2. Gaussian compound MAC with confidential messages.

Theorem 4 (Achievability): An inner bound on the secrecy capacity region is given by

$$\begin{aligned}
R_1 &\leq I(V_1; Y_1|V_2) - I(V_1; Y_2|V_2) \\
R_2 &\leq I(V_2; Y_2) \\
R_1 + R_2 &\leq I(V_1, V_2; Y_1) - I(V_1; Y_2|V_2)
\end{aligned} \quad (45)$$

where the union is taken over all joint distributions factorized as $p(v_1)\, p(v_2)\, p(x_1|v_1)\, p(x_2|v_2)\, p(y_1, y_2|x_1, x_2)$.

Proof: The proof follows from Theorem 2 by setting $U$ to a constant and replacing $X_2$ by $V_2$, where $V_2$ is obtained by passing $X_2$ through a DMC $p(v_2|x_2)$. Note that

$$\begin{aligned}
I(V_2; Y_1|V_1) &= H(V_2|V_1) - H(V_2|Y_1, V_1) \\
&\stackrel{(a)}{=} H(V_2) - H(V_2|Y_1, V_1) = I(V_2; Y_1, V_1) \\
&\geq I(V_2; Y_1) \stackrel{(b)}{\geq} I(V_2; Y_2)
\end{aligned}$$

where (a) holds since $V_1$ and $V_2$ are independent, and (b) results from the less noisy assumption. Therefore, we have $\min\{I(V_2; Y_1|V_1), I(V_2; Y_2)\} = I(V_2; Y_2)$.

Note that although the outer and inner bounds for this model have the same characterization, they are taken over two different classes of joint distributions.

B. Gaussian CMAC-CM

In this section, we consider the Gaussian CMAC-CM depicted in Fig. 2, and extend the achievable rate region of the discrete memoryless CMAC-CM to the Gaussian case. The relationships between the inputs and outputs of the channel are given by

$$Y_1 = \sqrt{h_1}\, X_1 + \sqrt{h_2}\, X_2 + N_1 \quad (46)$$

$$Y_2 = \sqrt{g_1}\, X_1 + \sqrt{g_2}\, X_2 + N_2 \quad (47)$$

where $h_i, g_i$ (for $i = 1, 2$) are known channel gains, as shown in Fig. 2. The noise terms $N_1$ and $N_2$ are independent zero-mean unit-variance Gaussian random variables, independent of the channel inputs $X_1, X_2$. The inputs of the channel are also subject to the average power constraints

$$\frac{1}{n} \sum_{i=1}^n E[X_{k,i}^2] \leq P_k, \quad k = 1, 2. \quad (48)$$

Let $V_1, V_2, X_1$, and $X_2$ be jointly Gaussian random variables with

$$V_1 = \sqrt{P_{U_1}}\, U + \sqrt{P_{U'}}\, U', \qquad V_2 = \sqrt{P_{U_2}}\, U + \sqrt{P_{U''}}\, U'' \quad (49)$$

$$X_1 = V_1, \qquad X_2 = V_2 \quad (50)$$

where $U$, $U'$ and $U''$ are independent zero-mean unit-variance random variables. The terms $P_{U_1}, P_{U_2}, P_{U'}$, and $P_{U''}$ denote the corresponding power allocations, where

$$P_1 = P_{U_1} + P_{U'}, \qquad P_2 = P_{U_2} + P_{U''}. \quad (51)$$

Following the achievability proof for the discrete memoryless channel, we obtain the following result for the Gaussian CMAC-CM.

Fig. 3. Achievable rate region of the Gaussian CMAC-CM and the capacity region of the compound MAC for $P_1 = P_2 = 1$, $h_1 = h_2 = 0.6$, $g_1 = 0.4$ and $g_2 = 0.5$.

Theorem 5: An inner bound on the secrecy capacity region of the Gaussian CMAC-CM is

$$\begin{aligned}
R_1 &\leq C(h_1 P_{U'}) - C(g_1 P_{U'}) \\
R_2 &\leq \min\Big\{ C(h_2 P_{U''}),\; C\Big(\frac{g_2 P_{U''}}{1 + g_1 P_{U'}}\Big) \Big\} \\
R_0 + R_2 &\leq C\Big(\frac{g_1 P_{U_1} + g_2 P_2 + 2\sqrt{g_1 g_2 P_{U_1} P_{U_2}}}{1 + g_1 P_{U'}}\Big) \\
R_1 + R_2 &\leq C(h_1 P_{U'} + h_2 P_{U''}) - C(g_1 P_{U'}) \\
R_0 + R_1 + R_2 &\leq C\big(h_1 P_1 + h_2 P_2 + 2\sqrt{h_1 h_2 P_{U_1} P_{U_2}}\big) - C(g_1 P_{U'})
\end{aligned} \quad (52)$$

where $C(x) = (1/2)\log(1 + x)$ and the union is taken over all $0 \leq P_{U_1} + P_{U'} \leq P_1$ and $0 \leq P_{U_2} + P_{U''} \leq P_2$.

Proof: Following Theorem 2 and choosing the random variables as in (46)-(51), the region in (52) is derived.
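To reproduce curves like those in Fig. 3, one can sweep the power splits and evaluate the bounds in (52); below is a minimal sketch under the Fig. 3 parameters (the grid resolution and the base-2 logarithm are implementation choices, not from the paper).

```python
import numpy as np

def C(x):
    """Gaussian capacity function C(x) = (1/2) log(1 + x), here in bits."""
    return 0.5 * np.log2(1.0 + x)

P1 = P2 = 1.0
h1 = h2 = 0.6
g1, g2 = 0.4, 0.5

# Sweep the common/private power split at each transmitter:
# P1 = P_U1 + P_Up and P2 = P_U2 + P_Upp, as in (51).
for alpha in np.linspace(0.0, 1.0, 5):      # fraction of P1 spent on W1
    for beta in np.linspace(0.0, 1.0, 5):   # fraction of P2 spent on W2
        P_Up, P_U1 = alpha * P1, (1.0 - alpha) * P1
        P_Upp, P_U2 = beta * P2, (1.0 - beta) * P2

        R1 = max(0.0, C(h1 * P_Up) - C(g1 * P_Up))
        R2 = min(C(h2 * P_Upp), C(g2 * P_Upp / (1.0 + g1 * P_Up)))
        R0_R2 = C((g1 * P_U1 + g2 * P2 + 2.0 * np.sqrt(g1 * g2 * P_U1 * P_U2))
                  / (1.0 + g1 * P_Up))
        print(f"alpha={alpha:.2f} beta={beta:.2f}  "
              f"R1<={R1:.3f}  R2<={R2:.3f}  R0+R2<={R0_R2:.3f}")
```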

Theorem 6: The capacity region of the compound Gaussian MAC is given by

$$\begin{aligned}
0 \leq R_1 &\leq \min\{C(h_1 P_{U'}),\; C(g_1 P_{U'})\} \\
0 \leq R_2 &\leq \min\{C(h_2 P_{U''}),\; C(g_2 P_{U''})\} \\
0 \leq R_1 + R_2 &\leq \min\{C(h_1 P_{U'} + h_2 P_{U''}),\; C(g_1 P_{U'} + g_2 P_{U''})\} \\
0 \leq R_0 + R_1 + R_2 &\leq \min\big\{ C\big(h_1 P_1 + h_2 P_2 + 2\sqrt{h_1 h_2 P_{U_1} P_{U_2}}\big),\; C\big(g_1 P_1 + g_2 P_2 + 2\sqrt{g_1 g_2 P_{U_1} P_{U_2}}\big) \big\}
\end{aligned} \quad (53)$$

where $C(x) = (1/2)\log(1 + x)$ and the union is taken over all $0 \leq P_{U_1} + P_{U'} \leq P_1$ and $0 \leq P_{U_2} + P_{U''} \leq P_2$.

Proof: Consider Propositions 6.1 and 6.2 in [10], which give outer and inner bounds on the capacity region of the compound MAC with conferencing links. Ignoring the conferencing links in that model (i.e., setting $C_{12} = C_{21} = 0$ in [10]), and considering (46)-(51), the region in (53) is derived.

As an example, for the values $P_1 = P_2 = 1$, $h_1 = h_2 = 0.6$, $g_1 = 0.4$ and $g_2 = 0.5$, the achievable rate region of Theorem 5 and the capacity region of Theorem 6 are depicted in Fig. 3. As can be seen in Fig. 3, for this scenario the achievable rate $R_0$ (the rate of $W_0$, which is decoded by both receivers) is nearly the same for the CMAC-CM and compound MAC models, and the achievable rate $R_2$ (the rate of $W_2$, which is also decoded by both receivers) is equal for both models. Moreover, for these channel gain parameters the achievable rate $R_1$ for the CMAC-CM is less than that for the compound MAC, due to the secrecy constraint on message $W_1$.

According to (53), if the channel gain between transmitter 1 and receiver 2 (i.e., $g_1$) decreases, the capacity region of the compound MAC does not increase (i.e., it remains the same or shrinks). On the other hand, there exist scenarios for the CMAC-CM (see (52)) in which decreasing $g_1$ enlarges its achievable rate region. For comparison, assume $g_1 = 0.4$ is changed to $g_1 = 0.1$ in the above example. As can be seen in Fig. 4, the achievable rate region of the CMAC-CM is larger than the capacity region of the compound MAC for the new parameters. This can be interpreted as follows: the transmitted signal of transmitter 1 is strongly attenuated at receiver 2, so in this case, not forcing receiver 2 to decode $W_1$ (keeping $W_1$ secret from receiver 2) can increase the achievable rate region.

Fig. 4. Achievable rate region of the Gaussian CMAC-CM and the capacity region of the compound MAC for $P_1 = P_2 = 1$, $h_1 = h_2 = 0.6$, $g_1 = 0.1$ and $g_2 = 0.5$.

Also, it should be noted that according to (52), for scenarios where $g_1$ is equal to or greater than $h_1$, the achievable rate $R_1$ is zero. This happens because in these scenarios the channel from transmitter 1 to receiver 2 (the illegal user for message $W_1$) is at least as good as the channel from transmitter 1 to receiver 1 (the legitimate user for message $W_1$), which forces $R_1$ to zero.
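This observation follows directly from the first bound in (52), since $C(\cdot)$ is strictly increasing:

$$R_1 \leq C(h_1 P_{U'}) - C(g_1 P_{U'}) = \frac{1}{2}\log\frac{1 + h_1 P_{U'}}{1 + g_1 P_{U'}} \leq 0 \qquad \text{whenever } g_1 \geq h_1.$$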

V. CONCLUSIONS

In this paper, we have studied the secrecy capacity region of the compound MAC with confidential messages (CMAC-CM). We have obtained inner and outer bounds on the secrecy capacity region of the general CMAC-CM, and we have further studied the less noisy and Gaussian CMAC-CM. Providing numerical examples for the Gaussian case, we have shown that there are scenarios in which keeping some messages secret can increase the rate region, compared to being required to communicate those messages reliably to the other receiver.

REFERENCES

[1] A. D. Wyner, "The wire-tap channel," Bell System Technical Journal, vol. 54, no. 8, pp. 1355-1387, Oct. 1975.
[2] I. Csiszár and J. Körner, "Broadcast channels with confidential messages," IEEE Trans. Inf. Theory, vol. 24, no. 3, pp. 339-348, May 1978.
[3] X. Tang, R. Liu, P. Spasojević, and H. V. Poor, "Multiple access channels with generalized feedback and confidential messages," in Proc. IEEE Inf. Theory Workshop (ITW), CA, USA, Sep. 2007, pp. 608-613.
[4] Y. Liang and H. V. Poor, "Multiple-access channels with confidential messages," IEEE Trans. Inf. Theory, vol. 54, no. 3, pp. 976-1002, Mar. 2008.
[5] E. Ekrem and S. Ulukus, "On the secrecy of multiple access wiretap channel," in Proc. 46th Annual Allerton Conf. on Communication, Control, and Computing, Monticello, IL, Sep. 2008, pp. 1014-1021.
[6] M. Yassaee and M. Aref, "Multiple access wiretap channels with strong secrecy," in Proc. IEEE Inf. Theory Workshop (ITW), Dublin, Ireland, Sep. 2010, pp. 1-5.
[7] M. Wiese and H. Boche, "An achievable region for the wiretap multiple-access channel with common message," in Proc. IEEE Int. Symp. on Inf. Theory (ISIT), Cambridge, MA, Jul. 2012, pp. 249-253.
[8] R. Liu, I. Marić, R. D. Yates, and P. Spasojević, "The discrete memoryless multiple access channel with confidential messages," in Proc. IEEE Int. Symp. on Inf. Theory (ISIT), Seattle, WA, Sep. 2006, pp. 957-961.
[9] Y. Liang, H. V. Poor, and S. Shamai, Information Theoretic Security, 1st ed. Hanover, MA, USA: Now Publishers Inc., 2009.
[10] O. Simeone, D. Gündüz, H. V. Poor, A. J. Goldsmith, and S. Shamai, "Compound multiple-access channels with partial cooperation," IEEE Trans. Inf. Theory, vol. 55, no. 6, pp. 2425-2441, Jun. 2009.
[11] A. El Gamal and Y.-H. Kim, Network Information Theory, 1st ed. Cambridge, U.K.: Cambridge University Press, 2012.
[12] D. Slepian and J. K. Wolf, "A coding theorem for multiple access channels with correlated sources," Bell System Technical Journal, vol. 52, no. 7, pp. 1037-1076, Sep. 1973.
