Contents

Preface
    Intended Audience
    Documentation History
astute.yaml
    Usage
    File Format
    Usage
    File Format
engine.yaml
    Usage
    Description
dnsmasq.template
    Usage
    File Format
    See also
network_1.yaml
    Usage
    File Format
    See also
openstack.yaml
    Usage
    modes-metadata section
    networks-metadata section
    volumes-metadata section
settings.yaml
    Usage
    File Format
Preface
This documentation provides information on how to use Mirantis Fuel to deploy OpenStack environments. The
information is for reference purposes and is subject to change.
Intended Audience
This documentation is intended for OpenStack administrators and developers; it assumes that you have
experience with network and cloud concepts.
Documentation History
The following table lists the released revisions of this documentation:
Revision Date       Description
October, 2014
December, 2014      6.0 GA
Warning
Be very careful when modifying the configuration files. A simple typo when editing these files may
severely damage your environment.
When you modify the YAML files, you will receive a warning that some attributes were modified from the
outside. Some features may become inaccessible from the UI after you do this.
These pages are under development; the information presented here has been reviewed but may not be
complete.
File                Node                    Description
astute.yaml         Fuel Master, Target
dnsmasq.template    Fuel Master
engine.yaml         Fuel Master
network_1.yaml      Fuel Master             Network Groups
openstack.yaml      Fuel Master
settings.yaml       Fuel Master
astute.yaml
Fuel Master Node: /etc/fuel/astute.yaml
Fuel uses the astute.yaml file to pass configuration attributes to puppet.
Usage
The /etc/fuel/astute.yaml file is installed on the Fuel Master node and must not be deleted.
File Format
The astute.yaml file <detailed-description>
Target Node: /etc/astute.yaml

Usage
The /etc/astute.yaml file is placed on each target node when it is deployed by mcollective and must not be
deleted. The Facter extension reads data from this file and uses it to create the $::fuel_settings data structure. This
structure contains all variables as a single hash and supports embedding of other rich structures such as the nodes
hash or arrays.
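To illustrate that structure, here is a short Python sketch (sample values taken from the excerpts in this section, not actual Fuel code) showing how the single hash embeds a nodes array that manifests can filter by role:

```python
# Sketch: astute.yaml parses into a single hash with nested structures;
# the sample values below mirror excerpts shown in this section.
fuel_settings = {
    "libvirt_type": "qemu",
    "management_vip": "10.108.22.2",
    "nodes": [
        {"role": "primary-controller", "name": "node-9",
         "internal_address": "10.108.22.3"},
        {"role": "controller", "name": "node-10",
         "internal_address": "10.108.22.4"},
    ],
}

def nodes_with_role(settings, role):
    """Return the node hashes whose role matches, the way Puppet manifests
    filter $::fuel_settings['nodes']."""
    return [n for n in settings["nodes"] if n["role"] == role]

print([n["name"] for n in nodes_with_role(fuel_settings, "controller")])
# prints ['node-10']
```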
File Format
The astute.yaml file <detailed-description>
Basic networking configuration
libvirt_type: qemu
disable_offload: true
network_scheme:
roles:
management: br-mgmt
private: br-prv
fw-admin: br-fw-admin
storage: br-storage
provider: ovs
version: "1.0"
interfaces:
eth4:
L2:
vlan_splinters: "off"
eth3:
L2:
vlan_splinters: "off"
eth2:
L2:
vlan_splinters: "off"
eth1:
L2:
vlan_splinters: "off"
eth0:
L2:
vlan_splinters: "off"
endpoints:
br-prv:
IP: none
br-mgmt:
other_nets: []
IP:
- 10.108.22.6/24
br-storage:
other_nets: []
IP:
- 10.108.24.5/24
br-fw-admin:
other_nets:
- 10.108.20.0/24
IP:
- 10.108.20.7/24
default_gateway: true
gateway: 10.108.20.2
transformations:
- action: add-br
name: br-eth0
- bridge: br-eth0
action: add-port
name: eth0
- action: add-br
name: br-eth1
- bridge: br-eth1
action: add-port
name: eth1
- action: add-br
name: br-eth2
- bridge: br-eth2
action: add-port
name: eth2
- action: add-br
name: br-eth3
- bridge: br-eth3
action: add-port
name: eth3
- action: add-br
name: br-eth4
- bridge: br-eth4
action: add-port
name: eth4
- action: add-br
name: br-mgmt
- action: add-br
name: br-storage
- action: add-br
name: br-fw-admin
- trunks:
- 0
action: add-patch
bridges:
- br-eth4
- br-storage
- trunks:
- 0
action: add-patch
bridges:
- br-eth2
- br-mgmt
- trunks:
- 0
action: add-patch
bridges:
- br-eth0
- br-fw-admin
- action: add-br
name: br-prv
- action: add-patch
bridges:
- br-eth3
- br-prv
Nova configuration
nova:
db_password: Ns08DOge
state_path: /var/lib/nova
user_password: z8sJBhvw
Swift configuration
swift:
user_password: Li9DPL0d
mp configuration
mp:
- point: "1"
weight: "1"
- point: "2"
weight: "2"
Glance configuration
glance:
db_password: DgVvco7J
image_cache_max_size: "5368709120"
user_password: sRX4ksp6
role: primary-mongo
deployment_mode: ha_compact
Mellanox configuration
neutron_mellanox:
plugin: disabled
metadata:
label: Mellanox Neutron components
enabled: true
toggleable: false
weight: 50
vf_num: "16"
mongo:
enabled: false
auth_key: ""
NTP configuration
external_ntp:
ntp_list: 0.pool.ntp.org, 1.pool.ntp.org
metadata:
label: Upstream NTP
weight: 100
Zabbix configuration
zabbix:
db_password: 7hQFiVYa
db_root_password: xB33AjUw
password: zabbix
metadata:
label: Zabbix Access
restrictions:
- condition: not ('experimental' in version:feature_groups)
action: hide
weight: 70
username: admin
Definition of puppet tasks
2014, Mirantis Inc.
tasks:
- type: puppet
priority: 100
parameters:
puppet_modules: /etc/puppet/modules
cwd: /
timeout: 3600
puppet_manifest: /etc/puppet/manifests/site.pp
uids:
- "12"
auto_assign_floating_ip: false
Ceilometer configuration
ceilometer:
db_password: ReBB1hdT
metering_secret: jzHL7r76
enabled: true
user_password: p0JVzpHv
Public networking configuration
public_vip: 10.108.21.2
public_network_assignment:
assign_to_all_nodes: false
metadata:
label: Public network assignment
restrictions:
- condition: cluster:net_provider != 'neutron'
action: hide
weight: 50
Heat configuration
heat:
db_password: Vv6vslci
enabled: true
rabbit_password: TOYQuiwH
auth_encryption_key: 3775079699142c1bcd7bd8b814648b01
user_password: s54JsapR
Fuel version
fuel_version: "6.1"
NSX configuration
nsx_plugin:
nsx_password: ""
nsx_username: admin
packages_url: ""
l3_gw_service_uuid: ""
transport_zone_uuid: ""
connector_type: stt
metadata:
label: VMware NSX
enabled: false
restrictions:
- condition: cluster:net_provider != 'neutron' or networking_parameters:net_l23_provide
action: hide
weight: 20
replication_mode: true
nsx_controllers: ""
Controller nodes configuration
nodes:
- role: primary-controller
internal_netmask: 255.255.255.0
storage_netmask: 255.255.255.0
internal_address: 10.108.22.3
uid: "9"
swift_zone: "9"
public_netmask: 255.255.255.0
public_address: 10.108.21.3
name: node-9
storage_address: 10.108.24.2
fqdn: node-9.test.domain.local
- role: controller
internal_netmask: 255.255.255.0
storage_netmask: 255.255.255.0
internal_address: 10.108.22.4
uid: "10"
swift_zone: "10"
public_netmask: 255.255.255.0
public_address: 10.108.21.4
name: node-10
storage_address: 10.108.24.3
fqdn: node-10.test.domain.local
- role: controller
internal_netmask: 255.255.255.0
storage_netmask: 255.255.255.0
internal_address: 10.108.22.5
uid: "11"
swift_zone: "11"
public_netmask: 255.255.255.0
public_address: 10.108.21.5
name: node-11
storage_address: 10.108.24.4
fqdn: node-11.test.domain.local
MongoDB nodes configuration
Each OpenStack environment that uses Ceilometer and MongoDB must have a definition for each MongoDB node
in the astute.yaml file; one node is designated the primary-mongo node and all other nodes have just mongo
specified for the role. Ideally, you should have one MongoDB node for each Controller node in the environment.
You can use the Fuel Web UI to deploy as many MongoDB nodes as you like when you initially create your
environment. You must edit this file and use command line tools to add MongoDB nodes to a deployed
environment; see Add a MongoDB node for instructions.
The configuration for the primary MongoDB node is:
- role: primary-mongo
internal_netmask: 255.255.255.0
storage_netmask: 255.255.255.0
internal_address: 10.108.22.6
uid: "12"
swift_zone: "12"
name: node-12
storage_address: 10.108.24.5
fqdn: node-12.test.domain.local
The fields are:

internal_netmask:    Netmask for the internal (management) network
storage_netmask:     Netmask for the storage network
internal_address:    Address of the node on the internal (management) network
uid:                 Unique identifier of the node
swift_zone:          Swift zone for the node (matches the uid in these examples)
name:                Name of the node (node-<uid>)
storage_address:     Address of the node on the storage network
fqdn:                Fully qualified domain name of the node
The configuration for each non-primary MongoDB node has the same fields. The astute.yaml file includes one
section like this for each configured MongoDB node:
- role: mongo
internal_netmask: 255.255.255.0
storage_netmask: 255.255.255.0
internal_address: 10.108.22.7
uid: "13"
swift_zone: "13"
name: node-13
storage_address: 10.108.24.6
fqdn: node-13.test.domain.local
Sahara configuration
sahara:
db_password: 0VDkceJQ
enabled: false
user_password: 4zs7JZaY
deployment_id: 9
Provisioning configuration
provision:
method: cobbler
metadata:
label: Provision
restrictions:
- condition: not ('experimental' in version:feature_groups)
action: hide
weight: 80
image_data:
/:
uri: http://10.108.20.2:8080/targetimages/ubuntu_1204_amd64.img.gz
format: ext4
container: gzip
/boot:
uri: http://10.108.20.2:8080/targetimages/ubuntu_1204_amd64-boot.img.gz
format: ext2
container: gzip
nova_quota: false
uid: "12"
repo_metadata:
2014.2-6.0: http://10.108.20.2:8080/2014.2-6.0/ubuntu/x86_64 precise main
Storage configuration
storage:
objects_ceph: false
pg_num: 128
vc_user: ""
iser: false
images_ceph: false
ephemeral_ceph: false
vc_datastore: ""
vc_password: ""
osd_pool_size: "2"
volumes_vmdk: false
metadata:
label: Storage
weight: 60
vc_host: ""
volumes_lvm: true
images_vcenter: false
vc_image_dir: /openstack_glance
volumes_ceph: false
vc_datacenter: ""
Keystone configuration
keystone:
db_password: rwTdR4Vd
admin_token: YXauBQbY
priority: 200
Cinder configuration
cinder:
db_password: fv85YGzr
user_password: cIVtXdbp
Corosync configuration
corosync:
group: 226.94.1.1
verified: false
metadata:
label: Corosync
restrictions:
- condition: "true"
action: hide
weight: 50
port: "12000"
Miscellaneous configs to look at later
management_vip: 10.108.22.2
test_vm_image:
img_path: /usr/share/cirros-testvm/cirros-x86_64-disk.img
img_name: TestVM
min_ram: 64
public: "true"
glance_properties: "--property murano_image_info='{\"title\": \"Murano Demo\", \"type\":
os_name: cirros
disk_format: qcow2
container_format: bare
quantum: true
cobbler:
profile: ubuntu_1204_x86_64
status: discover
management_network_range: 10.108.22.0/24
fail_if_error: true
puppet_modules_source: rsync://10.108.20.2:/puppet/2014.2-6.0/modules/
master_ip: 10.108.20.2
puppet_manifests_source: rsync://10.108.20.2:/puppet/2014.2-6.0/manifests/
resume_guests_state_on_host_boot: true
Syslog configuration
syslog:
syslog_transport: tcp
syslog_port: "514"
metadata:
label: Syslog
weight: 50
syslog_server: ""
debug: false
online: true
metadata:
label: Common
weight: 30
access:
email: admin@localhost
user: admin
password: admin
metadata:
label: Access
weight: 10
tenant: admin
openstack_version_prev:
use_cow_images: true
last_controller: node-11
kernel_params:
kernel: console=ttyS0,9600 console=tty0 rootdelay=90 nomodeset
metadata:
label: Kernel parameters
weight: 40
mysql:
wsrep_password: 6JoYdvoz
root_password: ZtwW8gk8
external_dns:
dns_list: 8.8.8.8, 8.8.4.4
metadata:
label: Upstream DNS
weight: 90
rabbit:
password: GGcZVT4f
compute_scheduler_driver: nova.scheduler.filter_scheduler.FilterScheduler
openstack_version: 2014.2-6.0
External MongoDB configuration
external_mongo:
mongo_replset: ""
mongo_password: ceilometer
mongo_user: ceilometer
metadata:
label: External MongoDB
restrictions:
- condition: settings:additional_components.mongo.value == false
action: hide
weight: 20
hosts_ip: ""
mongo_db_name: ceilometer
Murano configuration
murano:
db_password: 0PVsOHo9
enabled: false
rabbit_password: FGjWVooK
user_password: crpWYkaY
metadata:
metadata_proxy_shared_secret: qoEcTup3
fqdn: node-12.test.domain.local
storage_network_range: 10.108.24.0/24
vCenter configuration
vcenter:
datastore_regex: ""
host_ip: ""
vc_user: ""
vlan_interface: ""
vc_password: ""
cluster: ""
metadata:
label: vCenter
restrictions:
- condition: settings:common.libvirt_type.value != 'vcenter'
action: hide
weight: 20
use_vcenter: true
Syslog configuration
base_syslog:
syslog_port: "514"
syslog_server: 10.108.20.2
engine.yaml
Fuel Master Node: /root/provisioning_1
The engine.yaml file defines the basic configuration of the target nodes that Fuel deploys for the OpenStack
environment. Initially, it contains Fuel defaults; these are adjusted in response to configuration choices the user
makes through the Fuel UI and then fed to Nailgun.
Usage
1. Dump the provisioning information using this fuel CLI command:
fuel --env 1 provisioning default
where --env 1 points to the specific environment (id=1 in this example).
2. Edit the file.
3. Upload the modified file:
fuel --env 1 provisioning upload
Description
The engine.yaml file defines the provisioning engine being used (Cobbler by default) along with the password and
URLs used to access it.
dnsmasq.template
Fuel Master Node: /etc/cobbler/dnsmasq.template
The dnsmasq.template file defines the DHCP networks used for Multiple Cluster Networks. The networks listed here
must match the fuelweb_admin networks that are defined in Fuel.
Usage
1. Log into the Cobbler Docker container using dockerctl:
dockerctl shell cobbler
2. Edit the file:
vi /etc/cobbler/dnsmasq.template
3. Rebuild the dnsmasq configuration and reload it:
cobbler sync
4. Exit the Cobbler docker container:
exit
File Format
Each fuelweb_admin network must be defined in this file:
dhcp-range=<name>,<start-IP-addr>,<end-IP-addr>,<netmask>,[<leasetime>]
dhcp-option=net:<name>,option:router,<IP-addr-of-gateway>
dhcp-boot=net:<name>,pxelinux.0,boothost,<Fuel-Master-IP-addr>
name:                  Name of the fuelweb_admin network
start-IP-addr:         First IP address in the DHCP range
end-IP-addr:           Last IP address in the DHCP range
netmask:               Netmask for the DHCP range
leasetime:             Optional lease time for addresses in the range (for example, 120m)
IP-addr-of-gateway:    IP address of the gateway (router) for the network
Fuel-Master-IP-addr:   IP address of the Fuel Master node that serves PXE boot
For example:
dhcp-range=alpha,10.110.1.68,10.110.1.127,255.255.255.192,120m
dhcp-option=net:alpha,option:router,10.110.1.65
dhcp-boot=net:alpha,pxelinux.0,boothost,10.110.0.2
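The values in a dhcp-range line can be sanity-checked before running cobbler sync; the following is a short illustrative script (not part of Fuel), using the example values above:

```python
import ipaddress

def check_dhcp_range(start, end, netmask, gateway):
    """Verify that a dnsmasq dhcp-range is internally consistent: the start,
    end, and gateway addresses must all fall inside the network implied by
    the start address and netmask, and start must not exceed end."""
    net = ipaddress.ip_network(f"{start}/{netmask}", strict=False)
    s, e, g = (ipaddress.ip_address(a) for a in (start, end, gateway))
    return s in net and e in net and g in net and s <= e

# Values from the example above (network alpha)
print(check_dhcp_range("10.110.1.68", "10.110.1.127",
                       "255.255.255.192", "10.110.1.65"))  # prints True
```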
The network must forward the DHCP packets for the other logical networks that are defined in the network_1.yaml
file. It is also possible to set up a proxy using the Linux dhcp-helper program so that the target nodes can boot.
Note that the dnsmasq.template file is managed by Puppet, so any changes are overwritten or removed when the
Puppet container is restarted or the Fuel Master node is rebooted.
See also
Configuring Multiple Cluster Networks
Implementing Multiple Cluster Networks
network_1.yaml
network_1.yaml
Fuel Master Node: /root/network_1.yaml
The network_1.yaml file contains the network configuration information for the environment.
To implement Multiple Cluster Networks, follow the instructions in Configuring Multiple Cluster Networks to create
additional Node Groups, then download this file and configure the new Network Group(s).
Usage
1. Dump the network information using this fuel CLI command:
fuel --env 1 network --download
where --env 1 points to the specific environment (id=1 in this example).
2. Edit the file and add information about the new Network Group(s).
3. Upload the modified file:
fuel --env 1 network --upload
If you make a mistake when populating this file, it seems to upload normally, but no network data changes are
applied; if you then download the file again, the unmodified file may overwrite the modifications you made. To
protect yourself, we recommend the following process:
1. After you edit the file but before you upload it, make a copy in another location.
2. Upload the file.
3. Download the file again.
4. Compare the current file to the copy you saved. If they match, you successfully configured your networks.
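The comparison in the last step can be automated; for example, with Python's filecmp module (the file paths in the usage note are hypothetical):

```python
import filecmp

def upload_verified(saved_copy, redownloaded):
    """Return True when the re-downloaded file is byte-identical to the copy
    saved before uploading, i.e. the network changes were accepted."""
    return filecmp.cmp(saved_copy, redownloaded, shallow=False)

# Hypothetical usage after the download step:
# upload_verified("/root/network_1.yaml.saved", "/root/network_1.yaml")
```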
If you configure your networking by editing this file, you should create and configure the rest of your
environment using the Fuel CLI rather than the Web UI. In particular, do not attempt to configure your networking
using the Web UI screens.
File Format
The network_1.yaml file contains global settings and the networks section.
Note that the network_1.yaml file is dumped in dictionary order, so the sections may appear in a different order
than documented here.
Global settings

Global settings are mostly at the beginning of the file, but one (public_vip) is at the end of the file. When
configuring a new environment, you must set values for the management_vip, floating_ranges, and public_vip
parameters.
management_vip: 10.108.37.2
networking_parameters:
base_mac: fa:16:3e:00:00:00
dns_nameservers:
- 8.8.4.4
- 8.8.8.8
floating_ranges:
- - 10.108.36.128
- 10.108.36.254
gre_id_range:
- 2
- 65535
internal_cidr: 192.168.111.0/24
internal_gateway: 192.168.111.1
net_l23_provider: ovs
segmentation_type: gre
vlan_range:
- 1000
- 1030
. . .
public_vip: 10.108.36.2
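When setting floating_ranges, each range should fall inside the CIDR of the public network defined in the networks section; the following is a short illustrative check (not Fuel code), using the sample values from this page:

```python
import ipaddress

def range_in_cidr(start, end, cidr):
    """Check that an address range (e.g. one floating_ranges or ip_ranges
    pair) lies entirely inside a network's CIDR."""
    net = ipaddress.ip_network(cidr)
    s, e = ipaddress.ip_address(start), ipaddress.ip_address(end)
    return s in net and e in net and s <= e

# floating_ranges above, against the public network's cidr (10.108.36.0/24)
print(range_in_cidr("10.108.36.128", "10.108.36.254", "10.108.36.0/24"))  # prints True
```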
networks section
The networks section contains the configurations of each Network Group that has been created.
You must set values for the cidr, gateway, and ip_ranges parameters for each logical network in the group. This is
what the configuration of one logical network (public) looks like. A similar section is provided for each of the
logical networks that belong to the Node Group.
networks:
- cidr: 10.108.36.0/24
gateway: 10.108.36.1
group_id: 1
id: 1
ip_ranges:
- - 10.108.36.2
- 10.108.36.127
meta:
assign_vip: true
cidr: 172.16.0.0/24
configurable: true
floating_range_var: floating_ranges
ip_range:
- 172.16.0.2
- 172.16.0.126
map_priority: 1
name: public
notation: ip_ranges
render_addr_mask: public
render_type: null
use_gateway: true
vlan_start: null
name: public
vlan_start: null
. . .
If you create additional Node Groups, the file contains segments for each Node Group, each identified by a unique
group_id, with configuration blocks for each of the four logical networks associated with that Node Group.
See also
Configuring Multiple Cluster Networks
Implementing Multiple Cluster Networks
openstack.yaml
Fuel Master Node: /usr/lib/python2.6/site-packages/nailgun/fixtures/openstack.yaml
The openstack.yaml file defines the basic configuration of the target nodes that Fuel deploys for the OpenStack
environment. Initially, it contains Fuel defaults; these are adjusted in response to configuration choices the user
makes through the Fuel UI and then fed to Nailgun.
Usage
1. Log into the Nailgun Docker container using dockerctl:
dockerctl shell nailgun
2. Edit the file.
3. Run the following commands to force Nailgun to reread its settings and restart:
manage.py dropdb && manage.py syncdb && manage.py loaddefault
killall nailgund
4. Exit the Nailgun docker container:
exit
File Format
The openstack.yaml file contains a number of blocks, each of which may contain multiple parameters. The major
ones are described here.
The file has two major sections:
The first is for VirtualBox and other limited deployments.
The second is for full bare-metal deployments.
modes-metadata section
Lists each of the roles available on the Assign a role or roles to each node server screen, along with its
description. Note that there are two roles-metadata sections in the file:
The limited deployments section lists only the Controller, Compute, and Cinder LVM roles.
The "full_release" section lists the Controller, Compute, Cinder LVM, Ceph-OSD, MongoDB, and Zabbix
Server roles.
Roles that should not be deployed on the same server are identified with "conflicts" statements, such as the
following, which prevents a Compute role from being installed on a Controller node:
controller:
  name: "Controller"
  conflicts:
    - compute
Warning
Deploying Fuel on VirtualBox is a much better way to install Fuel on minimal hardware for demonstration
purposes than using this procedure. Be extremely careful when using this "all-in-one" deployment; if you
create too many VM instances, they may consume all the available CPUs, causing serious problems
accessing the MySQL database. Resource-intensive services such as Ceilometer with MongoDB, Zabbix,
and Ceph are also apt to cause problems when OpenStack is deployed on a single server.
networks-metadata section
volumes-metadata section
settings.yaml
Fuel Master Node: /root/settings_x.yaml
The settings.yaml file contains the current values for the information on the Settings page of the Fuel UI.
Usage
1. Dump the current settings using this fuel CLI command:
fuel --env 1 settings default
where --env 1 points to the specific environment (id=1 in this example).
2. Edit the file.
3. Upload the modified file:
fuel --env 1 settings upload
File Format
Warning
You should usually modify these values using the Settings tab of the Fuel UI.