
Workshop · 21.05.2014 / 13.40h / Classroom S07
CENTRE INTEGRAT PÚBLIC DE FORMACIÓ PROFESSIONAL
Departamento de Informática y Comunicaciones
CIPFP AUSIÀS MARCH
Bare-Metal Hypervisors and High Availability Systems
José Ramón Ruiz

Index
Workshop goals
Type I (bare-metal) hypervisors
An example: Proxmox
Beyond virtualization
Maintenance tasks: VM migration
Setting up an HA environment

Workshop goals
To know how production systems (really) work
To know and implement production virtualization: type I (or bare-metal)
To know a good (and free) virtualization platform: Proxmox
To test this platform by setting up an approach to a production environment

Workshop goals
Why?
In my opinion, most of us have never worked with this kind of system
It is important to know how they work in order to give our pupils a valid view of production systems
It would be an interesting end-of-year project shared between different subjects
Type I (bare-metal) hypervisors

Type I (bare-metal) hypervisors
Type I hypervisor structure
[Diagram: guest OS 1, OS 2, …, OS N run on the HYPERVISOR (really OS + hypervisor), which runs directly on the HARDWARE]

Type I (bare-metal) hypervisors
Advantages
  Performance
  Behaviour (fewer points of failure)
  Production structures allowed
Weak points
  Non-obvious configuration
  A dedicated server is needed (of course)

Type I (bare-metal) hypervisors
Main examples
  VMware ESXi
    Difficult to configure
    Expensive licenses
  Proxmox
    Good performance/effort balance
    Free
  Microsoft Hyper-V
    Poor performance
    Easy configuration
  Parallels Server Bare Metal
  XenServer
Proxmox

Proxmox
OS: Debian
Virtualization platform: KVM + containers
Graphical remote access: Java required

Proxmox. Installation
Downloaded from www.proxmox.org
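
The installer is distributed as an ISO image; a minimal sketch for writing it to a USB stick before booting the server from it (the file name and the /dev/sdX device are only illustrative — double-check the device before running dd):
# Write the installer image to a USB stick (destroys whatever is on /dev/sdX)
dd if=proxmox-ve-installer.iso of=/dev/sdX bs=1M
sync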

Installation. Key screens
[Installer screenshots]
e.g. ausiasHA

After installation. Web access
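
In case it helps when following along: the management interface is served over HTTPS on port 8006, so a URL of the form below should work (the IP address is just an example); log in as root with the password chosen during installation — the certificate is self-signed, so the browser will warn about it:
https://192.168.1.10:8006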

Node 1
Our first VM

Our first VM
Structure
[Diagram: VM1, VM2, …, VMn running on the node]

Our first VM. Upload an ISO
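
Besides the upload dialog in the GUI, an installation ISO can simply be copied to the node over SSH; /var/lib/vz/template/iso/ is the default location for ISO images on the "local" storage (image name and IP address are only examples):
scp debian-7.iso root@192.168.1.10:/var/lib/vz/template/iso/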

Our first VM. Settings

Our first VM
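
For reference, the same kind of machine can also be created from the node's shell with the qm tool; a rough sketch in which the VMID, names, sizes and the ISO file are all made-up example values:
# Create a KVM guest: 1 GiB RAM, a 10 GB disk on the "local" storage, boot from the uploaded ISO
qm create 100 --name firstvm --memory 1024 --ostype l26 \
  --net0 e1000,bridge=vmbr0 --ide0 local:10 --cdrom local:iso/debian-7.iso
qm start 100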

Our first VM. Console
Our first CT

Our first CT
What is a CT?
An OpenVZ container
Instead of running an entire guest OS, container virtualization isolates the guests; it does not try to virtualize the hardware
Recommended for running GNU/Linux
Fastest approach
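
Because containers share the host kernel, they can also be managed straight from the node's shell with the OpenVZ tools; a minimal sketch, assuming a container with CTID 101 already exists:
vzlist -a          # list all containers on this node, running or not
vzctl start 101    # boot the container
vzctl enter 101    # open a root shell inside it
vzctl stop 101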

Our first CT. Download
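
Container templates downloaded through the GUI end up in the cache directory of the "local" storage; if needed, a template can also be fetched by hand — the URL and file name below are only illustrative, check the OpenVZ template repository for current names:
cd /var/lib/vz/template/cache
wget http://download.openvz.org/template/precreated/debian-7.0-x86_64-minimal.tar.gz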

Our first CT. Settings

Our first CT

Statistics

Our first CT. Working
Our first cluster

Let's create a cluster
Update packages
On each node:
aptitude update && aptitude full-upgrade
Create a cluster
Master node: pvecm create NameCluster
Node2: pvecm add IPMaster
Node3: pvecm add IPMaster
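
With concrete (made-up) values and a quick membership check at the end, the whole sequence might look like this; the cluster name and the master's IP address are assumptions:
# On the master node
pvecm create ausiasHA
# On node2 and on node3 (IP address of the master node)
pvecm add 192.168.1.10
# On any node: verify that all nodes joined and that the cluster is quorate
pvecm status
pvecm nodes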

Our first cluster
Structure
[Diagram: the three Proxmox nodes joined in a single cluster]

CT Migration

CT Migration process

CT Migration
Hot migration: it keeps working
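
A migration can also be started from the command line; the sketch below uses qm, the KVM guest management tool, since that is the syntax I am sure of (VMID 100 and the target node name are assumptions) — the CT migration in these slides was done through the GUI:
# Live-migrate guest 100 to node2 while it keeps running
qm migrate 100 node2 --online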

CT Migration
This is not HA
Too much meatware (manual intervention)
HA automates the process
Our first HA cluster

Our first HA cluster
Structure
[Diagram: the HA cluster nodes, the network, the shared storage and a management device]
There are several critical points

Implementing HA
Before starting
Remove any previous VM
Add the NAS to the cluster
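
The NAS is added through the web GUI (the "Adding the NAS" slides that follow); a rough CLI equivalent, assuming the NAS exports an NFS share — the storage name, IP address and export path are made up:
pvesm add nfs nas1 --server 192.168.1.200 --export /volume1/proxmox \
  --content images,rootdir,iso,vztmpl
pvesm status    # the new storage should show up on every node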

Adding the NAS

Fencing
Fencing?

Fencing
If a node does not respond within a given time threshold, it is considered non-operational
Two types of fencing:
  Disabling the node itself (STONITH)
  Disallowing access to resources such as shared disks (resource fencing)

Fencing
On every node:
nano /etc/default/redhat-cluster-pve
Uncomment the line
FENCE_JOIN="yes"
Join the fencing domain
fence_tool join
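
To check that the nodes really joined the fence domain, fence_tool can list the members; a quick sketch:
# Run on any node after "fence_tool join" has been issued everywhere
fence_tool ls    # the member list should contain all cluster nodes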

Fencing. Only on the master
cp /etc/pve/cluster.conf /etc/pve/cluster.conf.new
nano /etc/pve/cluster.conf.new
Increase the version number
<cluster config_version="8" name="h1">
Validate the configuration
ccs_config_validate -v -f /etc/pve/cluster.conf.new
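
One way to confirm that the new configuration was really taken up after activating it (the version number 8 comes from the example cluster.conf.new shown later in these slides; cman_tool ships with the same Red Hat cluster stack):
ccs_config_validate -v -f /etc/pve/cluster.conf.new
# After activating the new file, every node should report the bumped config version:
cman_tool version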

Fencing. Activate

HA managed CT

HA managed CT
On each node:
/etc/init.d/rgmanager start
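
Once rgmanager is running everywhere, the state of the HA-managed guests can be followed from any node with clustat, which comes with the Red Hat cluster stack used here; pvevm:100 assumes the guest with VMID 100 was added to HA management as in the cluster.conf example below:
clustat    # shows quorum, member nodes and the owner node of the pvevm:100 service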

HA managed CT
Fencing devices
  Managed switches
  PS (power) switches
  Manual fencing
  Scripting + pseudo-manual fencing

/etc/pve/cluster.conf.new
<?xml version="1.0"?>
<cluster config_version="8" name="h1">
  <cman keyfile="/var/lib/pve-cluster/corosync.authkey" expected_votes="1"/>
  <fencedevices>
    <fencedevice name="human" agent="fence_manual"/>
  </fencedevices>
  <clusternodes>
    <clusternode name="node1" nodeid="1" votes="1">
      <fence>
        <method name="single">
          <device name="human" nodename="node1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2" nodeid="2" votes="1">
      <fence>
        <method name="single">
          <device name="human" nodename="node2"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node3" nodeid="3" votes="1">
      <fence>
        <method name="single">
          <device name="human" nodename="node3"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm>
    <pvevm autostart="1" vmid="100"/>
  </rm>
</cluster>

Does it work?
Start VM 100 on node1
Power off node1 (or disable the network)
Go to node2 or node3
Manual fencing:
fence_ack_manual node1
Confirm with: absolutely
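
A rough way to follow the failover from one of the surviving nodes (clustat comes from the same Red Hat cluster tools used above; everything else is the sequence from this slide):
# On node2 or node3, after node1 has been powered off
clustat                   # pvevm:100 still lists node1 as its owner
fence_ack_manual node1    # type "absolutely" when asked for confirmation
clustat                   # shortly afterwards the guest should be restarted on a surviving node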

Thanks for your attendance
Questions?
Slides available at:
http://bit.ly/JRRuiz-HA
