
4. PREFERENTIAL ATTACHMENT

The rich get richer


Empirical evidence

Many large networks are scale free

The degree distribution has a power-law behavior for large k (far from a Poisson distribution)
Random graph theory and the Watts-Strogatz model cannot reproduce this feature
We can construct power-law networks by hand
What is the mechanism that makes scale-free networks emerge as they grow?
Emphasis: network dynamics, rather than constructing a graph with given topological features
Topology is a result of the dynamics
But is random growth alone enough?
In that case the degree distribution is exponential!
Barabasi-Albert model (1999)

Two generic mechanisms common in many real networks:
– Growth (www, research literature, ...)
– Preferential attachment (idem): attractiveness of popularity
Both mechanisms are necessary
Growth

t=0: m0 nodes
At each time step we add a new node with m (m ≤ m0) edges that link the new node to m different nodes already present in the system
Preferential attachment

When choosing the nodes to which the new node connects, the probability Π that the new node will be connected to node i depends on the degree ki of node i:

Π(ki) = ki / Σj kj

Linear attachment (more general models exist); the sum runs over all existing nodes.
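A minimal sketch of how the two rules can be simulated together (the function name ba_network and the ring-shaped initial graph are assumptions of this sketch, not part of the model's definition; the repeated-nodes list makes uniform sampling automatically proportional to degree):

import random

def ba_network(m0, m, T):
    """Grow a Barabasi-Albert network: start from m0 nodes and add
    one new node with m <= m0 edges at each of T time steps."""
    assert m <= m0
    # Assumed initial condition: a ring over the m0 seed nodes,
    # so every node starts with nonzero degree.
    edges = [(i, (i + 1) % m0) for i in range(m0)]
    # Each node appears once per unit of degree; drawing uniformly
    # from this list realizes Pi(ki) = ki / sum_j kj.
    targets = [u for e in edges for u in e]
    for new in range(m0, m0 + T):
        chosen = set()
        while len(chosen) < m:            # m distinct endpoints
            chosen.add(random.choice(targets))
        for node in chosen:
            edges.append((new, node))
            targets.extend([new, node])   # update both degrees
    return edges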
Numerical simulations

Power law: P(k) ~ k^(-γ), with γSF = 3

The exponent does not depend on m (the only parameter of the model)
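One rough way to check the exponent on a simulated network (only a sketch: a log-log least-squares slope is a crude estimator, and a careful study would use maximum likelihood; ba_network is the hypothetical generator from the sketch above):

import collections, math

def degree_exponent(edges, kmin=5):
    """Crude estimate of gamma in P(k) ~ k^-gamma via a log-log fit."""
    deg = collections.Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    hist = collections.Counter(deg.values())   # k -> number of nodes
    n = sum(hist.values())
    pts = [(math.log(k), math.log(c / n))
           for k, c in hist.items() if k >= kmin]
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    slope = (sum((x - mx) * (y - my) for x, y in pts)
             / sum((x - mx) ** 2 for x, _ in pts))
    return -slope

edges = ba_network(m0=5, m=5, T=20000)
print(degree_exponent(edges))   # should come out roughly 3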
Preferential attachment but no
growth
t=0, N nodes, no links
ki
 ( ki ) 
j
kj
Power-laws at early times
P(k) not stationary, all nodes get connected
ki(t)=2t/N
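A sketch of this no-growth variant (one assumption here: strictly linear attachment is undefined while no links exist, so the first endpoints are drawn uniformly; self-loops are not excluded):

import random

def no_growth(N, T):
    """Fixed set of N nodes; at each time step a randomly chosen node
    attaches one link preferentially: P(i) = ki / sum_j kj."""
    k = [0] * N
    tokens = []                    # node listed once per unit of degree
    for _ in range(T):
        src = random.randrange(N)
        # Uniform draw when no links exist yet (assumed tie-breaking).
        dst = random.choice(tokens) if tokens else random.randrange(N)
        k[src] += 1
        k[dst] += 1
        tokens.extend([src, dst])
    return k                       # each entry ~ 2*T/N for large T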
Average shortest-path

[Plot: average path length l vs N for the SF model at fixed <k>]

l = A ln(N − B) + C (just a fit)

No theoretical estimate exists up to now
The growth introduces nontrivial corrections, whereas random graphs with a power-law degree distribution are uncorrelated
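How such a fit might be reproduced numerically (a sketch assuming networkx, numpy, and scipy are available; the fit form comes from the slide, while the network sizes are arbitrary choices):

import numpy as np
import networkx as nx
from scipy.optimize import curve_fit

sizes = [200, 400, 800, 1600, 3200]
ells = [nx.average_shortest_path_length(nx.barabasi_albert_graph(N, 3))
        for N in sizes]

# Fit the purely empirical form l = A*ln(N - B) + C.
def model(N, A, B, C):
    return A * np.log(N - B) + C

(A, B, C), _ = curve_fit(model, np.array(sizes, float), ells, p0=[1, 0, 1])
print(A, B, C)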
Clustering coefficient

About 5 times larger than for a random graph:

CSF ~ N^(-0.75)
CRG = <k> N^(-1)

SW: C is independent of N

No analytical prediction for the SF model
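A quick numerical comparison (sketch; relies on networkx generators with arbitrary N and m, not on any code from the course):

import networkx as nx

N, m = 2000, 3
sf = nx.barabasi_albert_graph(N, m)
rg = nx.gnm_random_graph(N, sf.number_of_edges())   # same N and <k>

avg_k = 2 * sf.number_of_edges() / N
print("C_SF:", nx.average_clustering(sf))
print("C_RG:", nx.average_clustering(rg), " <k>/N =", avg_k / N)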


Algorithm

A book is being written, up to N words

fN(i): number of different words that have each occurred exactly i times in the text
Continue adding words:
With probability p we add a new word
With probability 1−p the word is one that has already been written
The probability that the (n+1)th word has already appeared i times is proportional to i·fN(i) [the total number of words that have occurred i times]
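A sketch of this word process (simon_process is a made-up name; words are represented by integer labels):

import random

def simon_process(p, N):
    """Write N words: with prob p a brand-new word, with prob 1-p a
    repetition of an existing one, chosen in proportion to its count."""
    text = [0]                       # the book starts with one word
    next_word = 1
    while len(text) < N:
        if random.random() < p:
            text.append(next_word)   # new word
            next_word += 1
        else:
            # A uniform draw from the text repeats a word that has
            # occurred i times with probability prop. to i*fN(i).
            text.append(random.choice(text))
    return text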
Mapping into a network model

With probability p a new node is added

With probability 1−p a directed link is added. The starting point is randomly selected. The endpoint is selected such that the probability that a node belonging to the Nk nodes with k incoming links will be chosen is

Π(class k) ∝ k·Nk
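The same process as a directed network (a sketch; the single seed token per node is an assumption made so that k = 0 nodes can ever be chosen, mirroring the fact that a new word has already occurred once):

import random

def simon_network(p, T):
    """With prob p add a node; with prob 1-p add a directed link whose
    endpoint falls in the class of in-degree-k nodes with probability
    proportional to k*Nk."""
    nodes = [0]
    edges = []
    # One token per incoming link; the seed token per node is an
    # assumption of this sketch (otherwise k = 0 nodes would never
    # receive a link under strictly linear attachment).
    tokens = [0]
    for _ in range(T):
        if random.random() < p:
            new = len(nodes)
            nodes.append(new)
            tokens.append(new)
        else:
            src = random.choice(nodes)     # random starting point
            dst = random.choice(tokens)    # class chosen prop. to k*Nk
            edges.append((src, dst))
            tokens.append(dst)
    return edges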
