
Installation

Controller node interface configuration


IP address assignment

IP address verification

Network node interface configuration


IP address assignment

IP address verification

Compute node interface configuration


IP address assignment
IP address verification

Connectivity tests
Controller

Network

Compute

NTP installation
apt-get install ntp
Controller

Network and Compute

Updating OpenStack packages on all nodes


# apt-get install ubuntu-cloud-keyring
# echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu" \
"trusty-updates/juno main" > /etc/apt/sources.list.d/cloudarchivejuno.list

Updating packages on all nodes


apt-get update && apt-get dist-upgrade

Installing the MySQL database on the controller node


apt-get install mariadb-server python-mysqldb

Database configuration
Edit the file /etc/mysql/my.cnf
and add

[mysqld]
bind-address = 10.0.0.11
[mysqld]
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

Restart the database
service mysql restart

Installing the RabbitMQ messaging server


apt-get install rabbitmq-server

Once installed, set the new password


rabbitmqctl change_password guest RABBIT_PASS

Installing the Identity service (Keystone)


Connect to the database
$ mysql -u root -p

Create the database


CREATE DATABASE keystone;

Grant access to the database


GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';

Install the keystone module


Token generation

openssl rand -hex 10

Commands
apt-get install keystone python-keystoneclient

Then edit /etc/keystone/keystone.conf with the following changes

[DEFAULT]
admin_token = ADMIN_TOKEN

Replace ADMIN_TOKEN with the token generated above


[database]
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone
[token]
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.sql.Token
verbose = True

Initialize the database
su -s /bin/sh -c "keystone-manage db_sync" keystone

Restart the keystone service


service keystone restart

Set up a cron job to flush expired tokens
(crontab -l -u keystone 2>&1 | grep -q token_flush) || \
echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' \
>> /var/spool/cron/crontabs/keystone

Configure the service token
export OS_SERVICE_TOKEN=ADMIN_TOKEN
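The keystone client also needs to know where to send admin requests while this token is in use. The upstream Juno guide typically exports the admin endpoint alongside it; assuming the admin URL configured later in this section:
export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0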

Keystone configuration
Create the admin tenant
keystone tenant-create --name admin --description "Admin Tenant"

Create a user
keystone user-create --name admin --pass ADMIN_PASS --email EMAIL_ADDRESS

Create an administrator role


keystone role-create --name admin

Assign the role
keystone user-role-add --tenant admin --user admin --role admin

Create the service tenant


keystone tenant-create --name service --description "Service Tenant"

Create the service

keystone service-create --name keystone --type identity \
--description "OpenStack Identity"

Create the endpoint
$ keystone endpoint-create \
--service-id $(keystone service-list | awk '/ identity / {print $2}') \
--publicurl http://controller:5000/v2.0 \
--internalurl http://controller:5000/v2.0 \
--adminurl http://controller:35357/v2.0 \
--region regionOne

Verify the connection
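No command is listed here; a minimal check, assuming the admin user and password created above, is to unset the service token variables and request a token and the tenant list directly:
unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
keystone --os-tenant-name admin --os-username admin --os-password ADMIN_PASS \
--os-auth-url http://controller:35357/v2.0 token-get
keystone --os-tenant-name admin --os-username admin --os-password ADMIN_PASS \
--os-auth-url http://controller:35357/v2.0 tenant-list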

Installing the Image service (Glance)


Log in to mysql
mysql -u root -p

Create the Glance database


CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';

Create the credentials in Keystone


keystone user-create --name glance --pass GLANCE_PASS

Link the glance user to the tenant and role


keystone user-role-add --user glance --tenant service --role admin

Creating the Glance service


$ keystone service-create --name glance --type image \
--description "OpenStack Image Service"

Creating the endpoint
keystone endpoint-create \
--service-id $(keystone service-list | awk '/ image / {print $2}') \
--publicurl http://controller:9292 \
--internalurl http://controller:9292 \
--adminurl http://controller:9292 \
--region regionOne

Install the glance packages


apt-get install glance python-glanceclient

Edit the file /etc/glance/glance-api.conf with the following parameters


[database]
connection = mysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = glance
admin_password = GLANCE_PASS
[paste_deploy]
flavor = keystone
[glance_store]
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[DEFAULT]
verbose = True

Edit the file /etc/glance/glance-registry.conf with the following parameters


[database]
connection = mysql://glance:GLANCE_DBPASS@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = glance
admin_password = GLANCE_PASS
[paste_deploy]
flavor = keystone
[DEFAULT]
verbose = True

Populate the database


su -s /bin/sh -c "glance-manage db_sync" glance

Restart the services


# service glance-registry restart
# service glance-api restart

Remove the SQLite database file


rm -f /var/lib/glance/glance.sqlite

Verify the installation
To test, download an image with the following command
wget http://cdn.download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img

Source the credentials file
source admin-openrc.sh
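The admin-openrc.sh file is not listed in this document; a minimal sketch, assuming the admin tenant, user and password created in the Keystone section, would contain:
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v2.0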

Upload the image with the command


glance image-create --name "cirros-0.3.3-x86_64" --file cirros-0.3.3-x86_64-disk.img \
--disk-format qcow2 --container-format bare --is-public True --progress

To list the images, use


glance image-list

Creating the Compute service (Nova)


Connect to MySQL
mysql -u root -p

Create the Nova database


CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';

Create the nova user


keystone user-create --name nova --pass NOVA_PASS

Link the user to the service tenant and role


keystone user-role-add --user nova --tenant service --role admin

Create the Nova service


keystone service-create --name nova --type compute \
--description "OpenStack Compute"

Create the endpoint
$ keystone endpoint-create \
--service-id $(keystone service-list | awk '/ compute / {print $2}') \
--publicurl http://controller:8774/v2/%\(tenant_id\)s \
--internalurl http://controller:8774/v2/%\(tenant_id\)s \
--adminurl http://controller:8774/v2/%\(tenant_id\)s \
--region regionOne

Install the Nova packages


apt-get install nova-api nova-cert nova-conductor nova-consoleauth \
nova-novncproxy nova-scheduler python-novaclient

Edit the file /etc/nova/nova.conf and add


[database]
connection = mysql://nova:NOVA_DBPASS@controller/nova
[DEFAULT]
rpc_backend = rabbit

rabbit_host = controller
rabbit_password = RABBIT_PASS
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS
[DEFAULT]
my_ip = 10.0.0.11
[DEFAULT]
vncserver_listen = 10.0.0.11
vncserver_proxyclient_address = 10.0.0.11
[glance]
host = controller
[DEFAULT]
verbose = True

Populate the database
su -s /bin/sh -c "nova-manage db sync" nova

Restart the services


# service nova-api restart
# service nova-cert restart
# service nova-consoleauth restart
# service nova-scheduler restart
# service nova-conductor restart
# service nova-novncproxy restart

Remove the SQLite database file


rm -f /var/lib/nova/nova.sqlite

Configuring the compute node


Install the packages
apt-get install nova-compute sysfsutils

Edit the file /etc/nova/nova.conf and add


[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS
[DEFAULT]
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
[DEFAULT]
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = MANAGEMENT_INTERFACE_IP_ADDRESS
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
host = controller
[DEFAULT]
verbose = True

Enable virtualization
Edit /etc/nova/nova-compute.conf with
[libvirt]
virt_type = qemu

Restart nova
service nova-compute restart

Remove the SQLite database file


# rm -f /var/lib/nova/nova.sqlite

List the components with the command


nova service-list

List the images
nova image-list

Installing Neutron on the controller


Connect to MySQL
mysql -u root -p

Create the Neutron database


CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';

Create the neutron user

keystone user-create --name neutron --pass NEUTRON_PASS

Link the user to the role and tenant


keystone user-role-add --user neutron --tenant service --role admin

Create the Neutron service


keystone service-create --name neutron --type network \
--description "OpenStack Networking"

Create the endpoint
keystone endpoint-create \
--service-id $(keystone service-list | awk '/ network / {print $2}') \
--publicurl http://controller:9696 \
--adminurl http://controller:9696 \
--internalurl http://controller:9696 \
--region regionOne

Install the Neutron packages


apt-get install neutron-server neutron-plugin-ml2 python-neutronclient

Edit the file /etc/neutron/neutron.conf with the following lines


[database]
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
[DEFAULT]
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
nova_admin_auth_url = http://controller:35357/v2.0
nova_region_name = regionOne
nova_admin_username = nova

nova_admin_tenant_id = SERVICE_TENANT_ID
nova_admin_password = NOVA_PASS
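SERVICE_TENANT_ID must be replaced with the actual ID of the service tenant created earlier; it can be obtained, for example, with:
keystone tenant-get service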

Edit the file /etc/neutron/plugins/ml2/ml2_conf.ini with the following lines


[ml2]
...
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
...
tunnel_id_ranges = 1:1000
[securitygroup]
...
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Edit the file /etc/nova/nova.conf again


[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[neutron]
...
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = NEUTRON_PASS

Populate the database with the following command


su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron

Restart the Compute services


# service nova-api restart
# service nova-scheduler restart
# service nova-conductor restart

Restart the Neutron service


service neutron-server restart

List the neutron extensions


neutron ext-list

Configuring the network node


Edit the file /etc/sysctl.conf with the following parameters
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

To apply the changes, run the following command in the terminal


sysctl -p

Install the required networking packages


apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent \
neutron-l3-agent neutron-dhcp-agent

Edit the file /etc/neutron/neutron.conf

Comment out the connection line in the [database] section
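On Ubuntu's Juno packages the default connection typically points at SQLite; once commented out it would look roughly like this (path assumed):
[database]
# connection = sqlite:////var/lib/neutron/neutron.sqlite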

Add the following lines


[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS
[DEFAULT]
core_plugin = ml2

service_plugins = router
allow_overlapping_ips = True
[DEFAULT]
verbose = True

Edit the file /etc/neutron/plugins/ml2/ml2_conf.ini with the ML2 plug-in parameters

[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
flat_networks = external
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
...
local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
enable_tunneling = True
bridge_mappings = external:br-ex
[agent]
tunnel_types = gre
Now configure the Layer 3 (L3) agent
Edit the file /etc/neutron/l3_agent.ini with the following lines

[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
external_network_bridge = br-ex
[DEFAULT]
verbose = True

Configure the DHCP agent


Edit the file /etc/neutron/dhcp_agent.ini and add the lines
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True

[DEFAULT]
verbose = True

Configure the metadata agent

Edit the file /etc/neutron/metadata_agent.ini with the following lines


[DEFAULT]
auth_url = http://controller:5000/v2.0
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS
[DEFAULT]
nova_metadata_ip = controller
[DEFAULT]
metadata_proxy_shared_secret = METADATA_SECRET
[DEFAULT]
verbose = True

On the controller node, edit /etc/nova/nova.conf and add


[neutron]
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET

Restart the Compute API service with


service nova-api restart

Configure Open vSwitch
Restart the service with the command
service openvswitch-switch restart

Add an external bridge


ovs-vsctl add-br br-ex

Add a port that connects to the physical interface


ovs-vsctl add-port br-ex INTERFACE_NAME
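To confirm that the bridge and port were created, the Open vSwitch layout can be inspected with:
ovs-vsctl show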

Restart the services


# service neutron-plugin-openvswitch-agent restart
# service neutron-l3-agent restart
# service neutron-dhcp-agent restart
# service neutron-metadata-agent restart

To verify that the agents are up, run the command


neutron agent-list

Compute node configuration


Edit the file /etc/sysctl.conf with the following lines

net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

Apply the changes


sysctl -p

Install the required packages


apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent

Edit the file /etc/neutron/neutron.conf
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron

admin_password = NEUTRON_PASS
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
[DEFAULT]
verbose = True
Enable the Layer 2 (ML2) plug-in by editing /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
enable_tunneling = True
[agent]
tunnel_types = gre

Restart Open vSwitch
service openvswitch-switch restart

Configure Nova
Edit the file /etc/nova/nova.conf
[DEFAULT]
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[neutron]
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = NEUTRON_PASS

Restart the Compute service

service nova-compute restart

Restart the Open vSwitch agent
service neutron-plugin-openvswitch-agent restart

Verify with the command


$ neutron agent-list

Configure the external network


Create the network with the command
neutron net-create ext-net --shared --router:external True \
--provider:physical_network external --provider:network_type flat

Create the external subnet


neutron subnet-create ext-net --name ext-subnet \
--allocation-pool start=FLOATING_IP_START,end=FLOATING_IP_END \
--disable-dhcp --gateway EXTERNAL_NETWORK_GATEWAY EXTERNAL_NETWORK_CIDR
Example
neutron subnet-create ext-net --name ext-subnet \
--allocation-pool start=203.0.113.101,end=203.0.113.200 \
--disable-dhcp --gateway 203.0.113.1 203.0.113.0/24

The tenant network provides internal network access for instances. The architecture isolates this type of
network from other tenants. The demo tenant owns this network, since it only provides network
access for the instances that belong to it.

Create the network
neutron net-create demo-net

Create the subnet
neutron subnet-create demo-net --name demo-subnet \
--gateway TENANT_NETWORK_GATEWAY TENANT_NETWORK_CIDR
Example

neutron subnet-create demo-net --name demo-subnet \
--gateway 192.168.1.1 192.168.1.0/24

Create the router
neutron router-create demo-router

Attach the router to the subnet


neutron router-interface-add demo-router demo-subnet

Attach the router to the external network


neutron router-gateway-set demo-router ext-net

Installing the Dashboard on the controller node


Install the packages
apt-get install openstack-dashboard apache2 libapache2-mod-wsgi \
memcached python-memcache

Edit the file /etc/openstack-dashboard/local_settings.py

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
TIME_ZONE = "TIME_ZONE"

Restart the services


# service apache2 restart
# service memcached restart

Access OpenStack by entering the address http://controller/horizon in a browser

Block Storage service (Cinder)


Connect to MySQL
$ mysql -u root -p

Create the database


CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';

Create the cinder user


$ keystone user-create --name cinder --pass CINDER_PASS

Link the user to the role and tenant


keystone user-role-add --user cinder --tenant service --role admin

Create the cinder services


keystone service-create --name cinder --type volume \
--description "OpenStack Block Storage"
keystone service-create --name cinderv2 --type volumev2 \
--description "OpenStack Block Storage"

Create the Cinder endpoints


keystone endpoint-create \
--service-id $(keystone service-list | awk '/ volume / {print $2}') \
--publicurl http://controller:8776/v1/%\(tenant_id\)s \
--internalurl http://controller:8776/v1/%\(tenant_id\)s \
--adminurl http://controller:8776/v1/%\(tenant_id\)s \
--region regionOne
keystone endpoint-create \
--service-id $(keystone service-list | awk '/ volumev2 / {print $2}') \
--publicurl http://controller:8776/v2/%\(tenant_id\)s \
--internalurl http://controller:8776/v2/%\(tenant_id\)s \
--adminurl http://controller:8776/v2/%\(tenant_id\)s \
--region regionOne

Package installation
apt-get install cinder-api cinder-scheduler python-cinderclient

Edit the file /etc/cinder/cinder.conf


[database]
connection = mysql://cinder:CINDER_DBPASS@controller/cinder
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = cinder
admin_password = CINDER_PASS
[DEFAULT]
my_ip = 10.0.0.11
[DEFAULT]
verbose = True

Populate the database
su -s /bin/sh -c "cinder-manage db sync" cinder

Restart the services


# service cinder-scheduler restart
# service cinder-api restart

Remove the SQLite database file


rm -f /var/lib/cinder/cinder.sqlite

Orchestration service (Heat)
Connect to MySQL
mysql -u root -p

Create the database


CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' \
IDENTIFIED BY 'HEAT_DBPASS';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' \
IDENTIFIED BY 'HEAT_DBPASS';

Create the Orchestration user


keystone user-create --name heat --pass HEAT_PASS

Link the user, the role, and the tenant


keystone user-role-add --user heat --tenant service --role admin

Create two new roles


$ keystone role-create --name heat_stack_user
$ keystone role-create --name heat_stack_owner

Create two services for Orchestration


keystone service-create --name heat --type orchestration \
--description "Orchestration"
keystone service-create --name heat-cfn --type cloudformation \
--description "Orchestration"

Create the endpoints for the services


keystone endpoint-create \
--service-id $(keystone service-list | awk '/ orchestration / {print $2}') \
--publicurl http://controller:8004/v1/%\(tenant_id\)s \
--internalurl http://controller:8004/v1/%\(tenant_id\)s \
--adminurl http://controller:8004/v1/%\(tenant_id\)s \
--region regionOne
keystone endpoint-create \
--service-id $(keystone service-list | awk '/ cloudformation / {print $2}') \
--publicurl http://controller:8000/v1 \
--internalurl http://controller:8000/v1 \
--adminurl http://controller:8000/v1 \
--region regionOne

Install the packages required for Orchestration


apt-get install heat-api heat-api-cfn heat-engine python-heatclient

Edit the file /etc/heat/heat.conf with the required parameters

[database]
connection = mysql://heat:HEAT_DBPASS@controller/heat
[DEFAULT]

rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = heat
admin_password = HEAT_PASS
[ec2authtoken]
auth_uri = http://controller:5000/v2.0

[DEFAULT]
heat_metadata_server_url = http://controller:8000
heat_waitcondition_server_url =
http://controller:8000/v1/waitcondition

Populate the database


su -s /bin/sh -c "heat-manage db_sync" heat

Restart the services
# service heat-api restart
# service heat-api-cfn restart
# service heat-engine restart

Remove the SQLite database file


rm -f /var/lib/heat/heat.sqlite

Using the demo credentials
source demo-openrc.sh
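Like admin-openrc.sh, the demo-openrc.sh file is not listed here; a minimal sketch, assuming a demo tenant and user were created analogously to the admin ones (DEMO_PASS is a placeholder), would be:
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v2.0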

Create a stack from a template
$ NET_ID=$(nova net-list | awk '/ demo-net / { print $2 }')
$ heat stack-create -f test-stack.yml \
-P "ImageID=cirros-0.3.3-x86_64;NetID=$NET_ID" testStack
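The test-stack.yml template is not included in this document; a minimal HOT sketch compatible with the ImageID and NetID parameters passed above could look like this:
heat_template_version: 2014-10-16
description: A simple server.
parameters:
  ImageID:
    type: string
    description: Image used to boot the server
  NetID:
    type: string
    description: Network ID for the server
resources:
  server:
    type: OS::Nova::Server
    properties:
      image: { get_param: ImageID }
      flavor: m1.tiny
      networks:
        - network: { get_param: NetID }
outputs:
  private_ip:
    description: IP address of the server on the tenant network
    value: { get_attr: [ server, first_address ] }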

To confirm that the stack was created, use the command


heat stack-list

Telemetry service (Ceilometer)
Install the MongoDB packages for Telemetry
apt-get install mongodb-server

Edit the file /etc/mongodb.conf


bind_ip = 10.0.0.11
smallfiles = true

Stop the service, remove the pre-allocated journal files, and start it again with the following commands


service mongodb stop
rm /var/lib/mongodb/journal/prealloc.*
service mongodb start

Restart the service
service mongodb restart

Create the database


mongo --host controller --eval ' db = db.getSiblingDB("ceilometer");
db.addUser({user: "ceilometer", pwd: "CEILOMETER_DBPASS", roles: [
"readWrite", "dbAdmin" ]})'

Create the ceilometer user


keystone user-create --name ceilometer --pass CEILOMETER_PASS

Link the user to the role and tenant


keystone user-role-add --user ceilometer --tenant service --role admin

Create the service
keystone service-create --name ceilometer --type metering \
--description "Telemetry"
Create the endpoint

keystone endpoint-create \
--service-id $(keystone service-list | awk '/ metering / {print $2}') \
--publicurl http://controller:8777 \
--internalurl http://controller:8777 \
--adminurl http://controller:8777 \
--region regionOne

Install the packages


apt-get install ceilometer-api ceilometer-collector ceilometer-agent-central \
ceilometer-agent-notification ceilometer-alarm-evaluator ceilometer-alarm-notifier \
python-ceilometerclient

Edit the file /etc/ceilometer/ceilometer.conf


[database]
connection = mongodb://ceilometer:CEILOMETER_DBPASS@controller:27017/ceilometer
[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS

[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = ceilometer
admin_password = CEILOMETER_PASS
[service_credentials]
...
os_auth_url = http://controller:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = CEILOMETER_PASS
[publisher]
...
metering_secret = METERING_SECRET

Restart the services


# service ceilometer-agent-central restart
# service ceilometer-agent-notification restart
# service ceilometer-api restart
# service ceilometer-collector restart
# service ceilometer-alarm-evaluator restart
# service ceilometer-alarm-notifier restart

Install on the compute node


Install the required packages
apt-get install ceilometer-agent-compute

Edit the file /etc/nova/nova.conf


[DEFAULT]
instance_usage_audit = True
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
notification_driver = nova.openstack.common.notifier.rpc_notifier
notification_driver = ceilometer.compute.nova_notifier

Restart the service
service nova-compute restart

Edit the file /etc/ceilometer/ceilometer.conf


[publisher]
# Secret value for signing metering messages (string value)
metering_secret = CEILOMETER_TOKEN
[DEFAULT]
rabbit_host = controller
rabbit_password = RABBIT_PASS
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = ceilometer
admin_password = CEILOMETER_PASS
[service_credentials]
os_auth_url = http://controller:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = CEILOMETER_PASS
os_endpoint_type = internalURL
[DEFAULT]
log_dir = /var/log/ceilometer

Restart the service
service ceilometer-agent-compute restart

Configure Glance for Telemetry on the controller node by adding the following to the Glance configuration files (/etc/glance/glance-api.conf and /etc/glance/glance-registry.conf)


notification_driver = messaging
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS

Restart the services


# service glance-registry restart
# service glance-api restart

Edit the Block Storage configuration file /etc/cinder/cinder.conf


control_exchange = cinder
notification_driver = cinder.openstack.common.notifier.rpc_notifier

Restart the services


service cinder-api restart
service cinder-scheduler restart
service cinder-volume restart

Test the Telemetry installation

Use the command
ceilometer meter-list

Download the image
glance image-download "cirros-0.3.3-x86_64" > cirros.img

Verify that Telemetry recorded the download


ceilometer meter-list

Image download details

ceilometer statistics -m image.download -p 60

Database service (Trove)


Install the required packages
apt-get install python-trove python-troveclient python-glanceclient \
trove-common trove-api trove-taskmanager

Create the trove user and link it to the role and tenant


$ keystone user-create --name trove --pass TROVE_PASS
$ keystone user-role-add --user trove --tenant service --role admin

Edit the files

trove.conf
trove-taskmanager.conf
trove-conductor.conf

Adding the following lines


[DEFAULT]
log_dir = /var/log/trove
trove_auth_url = http://controller:5000/v2.0
nova_compute_url = http://controller:8774/v2
cinder_url = http://controller:8776/v1
swift_url = http://controller:8080/v1/AUTH_
sql_connection = mysql://trove:TROVE_DBPASS@controller/trove
notifier_queue_hostname = controller
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS

Edit the file api-paste.ini


[filter:authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_user = trove
admin_password = ADMIN_PASS
admin_tenant_name = service
signing_dir = /var/cache/trove

Edit the file trove.conf


[DEFAULT]

default_datastore = mysql
# Config option for showing the IP address that nova doles out
add_addresses = True
network_label_regex = ^NETWORK_LABEL$
api_paste_config = /etc/trove/api-paste.ini

Edit the file trove-taskmanager.conf


[DEFAULT]
# Configuration options for talking to nova via the novaclient.
# These options are for an admin user in your keystone config.
# It proxies the token received from the user to send to nova via this
# admin user's creds, basically acting like the client via that proxy token.
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASS
nova_proxy_admin_tenant_name = service
taskmanager_manager = trove.taskmanager.manager.Manager

Create the database


$ mysql -u root -p
mysql> CREATE DATABASE trove;
mysql> GRANT ALL PRIVILEGES ON trove.* TO trove@'localhost' \
IDENTIFIED BY 'TROVE_DBPASS';
mysql> GRANT ALL PRIVILEGES ON trove.* TO trove@'%' \
IDENTIFIED BY 'TROVE_DBPASS';
Initialize the database
su -s /bin/sh -c "trove-manage db_sync" trove

Load the datastore data


su -s /bin/sh -c "trove-manage datastore_update mysql ''" trove

Edit the file trove-guestagent.conf


rabbit_host = controller
rabbit_password = RABBIT_PASS
nova_proxy_admin_user = admin
nova_proxy_admin_pass = ADMIN_PASS
nova_proxy_admin_tenant_name = service
trove_auth_url = http://controller:35357/v2.0

Create a MySQL 5.5 datastore


trove-manage --config-file /etc/trove/trove.conf datastore_version_update \
mysql mysql-5.5 mysql glance_image_ID mysql-server-5.5 1
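glance_image_ID refers to the ID of a MySQL guest image previously uploaded to Glance; it can be looked up, for example, with:
glance image-list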

Create the service and the endpoint


keystone service-create --name trove --type database \
--description "OpenStack Database Service"
$ keystone endpoint-create \
--service-id $(keystone service-list | awk '/ trove / {print $2}') \
--publicurl http://controller:8779/v1.0/%\(tenant_id\)s \
--internalurl http://controller:8779/v1.0/%\(tenant_id\)s \
--adminurl http://controller:8779/v1.0/%\(tenant_id\)s \
--region regionOne

Restart the services


service trove-api restart
service trove-taskmanager restart
service trove-conductor restart

To create a database instance


trove create name 2 --size=2 --databases DBNAME \
--users USER:PASSWORD --datastore_version mysql-5.5 \
--datastore mysql

Data Processing service (Sahara)


Edit the file /etc/sahara/sahara.conf (the [mysqld] max_allowed_packet option shown below is a MySQL server setting, normally placed in /etc/mysql/my.cnf)
connection = mysql://sahara:SAHARA_DBPASS@controller/sahara

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
[mysqld]
max_allowed_packet = 256M

Create the database schema
sahara-db-manage --config-file /etc/sahara/sahara.conf upgrade head

Create the service and the endpoint


keystone service-create --name sahara --type data_processing \
--description "Data processing service"
$ keystone endpoint-create \
--service-id $(keystone service-list | awk '/ sahara / {print $2}') \
--publicurl http://controller:8386/v1.1/%\(tenant_id\)s \
--internalurl http://controller:8386/v1.1/%\(tenant_id\)s \
--adminurl http://controller:8386/v1.1/%\(tenant_id\)s \
--region regionOne

Start the service
systemctl start openstack-sahara-all
systemctl enable openstack-sahara-all

Verify with the command


sahara cluster-list

Launching an instance with Neutron


Generate the keys
ssh-keygen

Add the public key to the environment


nova keypair-add --pub-key ~/.ssh/id_rsa.pub demo-key

List the key pairs


nova keypair-list

List the flavors (instance templates)


nova flavor-list

Launch the instance
nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64 --nic net-id=DEMO_NET_ID \
--security-group default --key-name demo-key demo-instance1
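DEMO_NET_ID is the ID of the demo-net tenant network created earlier; it can be obtained, for example, with:
neutron net-list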

Get a URL to access the instance through the web interface


nova get-vnc-console demo-instance1 novnc

Add security rules

Allow ICMP
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

Allow SSH access


nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

Create a floating IP


neutron floatingip-create ext-net

Associate the IP with the instance


nova floating-ip-associate demo-instance1 203.0.113.102

Check the status of the IP
nova list

Attach the volume to the instance


nova volume-attach demo-instance1 158bea89-07db-4ac2-8115-66c0d6a4bb48
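The volume ID used here must belong to an existing volume; assuming the Block Storage service configured earlier has a working cinder-volume backend, a 1 GB test volume could be created beforehand with something like:
cinder create --display-name demo-volume1 1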

List the volumes


nova volume-list

Access via SSH


ssh cirros@203.0.113.102
