@rafael-leo-almeida
Created February 14, 2017 12:31
---
# You can use this file to override _any_ variable throughout Kolla.
# Additional options can be found in the 'kolla/ansible/group_vars/all.yml' file.
# The default values of the commented parameters are shown here. To override
# a default, uncomment the parameter and change its value.
###################
# Kolla options
###################
# Valid options are [ COPY_ONCE, COPY_ALWAYS ]
config_strategy: "COPY_ALWAYS"
# Valid options are [ centos, oraclelinux, ubuntu ]
kolla_base_distro: "centos"
# Valid options are [ binary, source ]
kolla_install_type: "binary"
# The valid value is a Docker repository tag
openstack_release: "master"
# Location of configuration overrides
node_custom_config: "/etc/kolla/config"
# This should be a VIP, an unused IP on your network that will float between
# the hosts running keepalived for high-availability. When running an All-In-One
# without haproxy and keepalived, this should be the first IP on your
# 'network_interface' as set in the Networking section below.
kolla_internal_vip_address: "10.10.0.250"
# This is the DNS name that maps to the kolla_internal_vip_address VIP. By
# default it is the same as kolla_internal_vip_address.
#kolla_internal_fqdn: "{{ kolla_internal_vip_address }}"
# This should be a VIP, an unused IP on your network that will float between
# the hosts running keepalived for high-availability. It defaults to the
# kolla_internal_vip_address, allowing internal and external communication to
# share the same address. Specify a kolla_external_vip_address to separate
# internal and external requests between two VIPs.
# kolla_external_vip_address: "XXX.XXX.XXX.XXX"
# The public address used to communicate with OpenStack, as set in the
# public_url of the endpoints that will be created. This DNS name should map
# to kolla_external_vip_address.
#kolla_external_fqdn: "mydomain.com"
####################
# Docker options
####################
# Below is an example of a private repository with authentication. Note the
# Docker registry password can also be set in the passwords.yml file.
#docker_registry: "172.16.0.10:4000"
#docker_namespace: "companyname"
#docker_registry_username: "sam"
#docker_registry_password: "correcthorsebatterystaple"
###############################
# Neutron - Networking Options
###############################
# This interface is what all of your API services will be bound to by default.
# Additionally, all vxlan/tunnel and storage network traffic will go over this
# interface by default. This interface must have an IPv4 address.
# Hosts may have differently named interfaces; these can be set per host or
# per group in an inventory file, or stored separately - see
# http://docs.ansible.com/ansible/intro_inventory.html
# Another way to work around the naming problem is to create a bond for the
# interface on all hosts and give the bond name here. A similar strategy can
# be followed for other types of interfaces.
network_interface: "eth0"
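As a hedged illustration of the per-host override mentioned above (the hostnames and interface names here are placeholders, not taken from this deployment), interface names can be set next to each host in the Ansible inventory:

```shell
# Hypothetical inventory fragment with per-host network_interface overrides.
# Hostnames (controller1/controller2) and interfaces are placeholders.
cat > ./inventory.sample <<'EOF'
[control]
controller1 network_interface=eth0
controller2 network_interface=ens3
EOF
cat ./inventory.sample
```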
# These can be adjusted for even more customization. The default is the same as
# the 'network_interface'. These interfaces must contain an IPv4 address.
#kolla_external_vip_interface: "{{ network_interface }}"
#api_interface: "{{ network_interface }}"
#storage_interface: "{{ network_interface }}"
#cluster_interface: "{{ network_interface }}"
#tunnel_interface: "{{ network_interface }}"
# This is the raw interface given to neutron as its external network port. Even
# though an IP address can exist on this interface, it will be unusable in most
# configurations. It is recommended this interface not be configured with any IP
# addresses for that reason.
neutron_external_interface: "eth1"
# Valid options are [ openvswitch, linuxbridge ]
#neutron_plugin_agent: "openvswitch"
####################
# keepalived options
####################
# Arbitrary unique number from 1..255 (the VRRP virtual router ID)
#keepalived_virtual_router_id: "51"
####################
# TLS options
####################
# To provide encryption and authentication on the kolla_external_vip_interface,
# TLS can be enabled. When TLS is enabled, certificates must be provided to
# allow clients to perform authentication.
#kolla_enable_tls_external: "no"
#kolla_external_fqdn_cert: "{{ node_config_directory }}/certificates/haproxy.pem"
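When TLS is enabled, the certificate file handed to HAProxy is a single PEM containing both the certificate and its private key. A minimal sketch using a throwaway self-signed certificate (the CN and file names are placeholders, not values from this deployment):

```shell
# Create a throwaway self-signed certificate (placeholder CN), then
# concatenate cert + key into one PEM, the format haproxy.pem expects.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=mydomain.com" -keyout haproxy.key -out haproxy.crt 2>/dev/null
cat haproxy.crt haproxy.key > haproxy.pem
openssl x509 -in haproxy.pem -noout -subject
```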
####################
# OpenStack options
####################
# Use these options to set the various log levels across all OpenStack projects
# Valid options are [ True, False ]
#openstack_logging_debug: "False"
# Valid options are [ novnc, spice ]
#nova_console: "novnc"
# OpenStack services can be enabled or disabled with these options
#enable_aodh: "no"
#enable_barbican: "no"
#enable_ceilometer: "no"
#enable_central_logging: "no"
#enable_ceph: "no"
#enable_ceph_rgw: "no"
#enable_chrony: "no"
#enable_cinder: "no"
#enable_cinder_backend_hnas_iscsi: "no"
#enable_cinder_backend_hnas_nfs: "no"
#enable_cinder_backend_iscsi: "no"
#enable_cinder_backend_lvm: "no"
#enable_cinder_backend_nfs: "no"
#enable_cloudkitty: "no"
#enable_congress: "no"
#enable_designate: "no"
#enable_destroy_images: "no"
#enable_etcd: "no"
#enable_freezer: "no"
#enable_gnocchi: "no"
#enable_grafana: "no"
#enable_heat: "yes"
#enable_horizon: "yes"
#enable_horizon_cloudkitty: "{{ enable_cloudkitty | bool }}"
#enable_horizon_ironic: "{{ enable_ironic | bool }}"
#enable_horizon_karbor: "{{ enable_karbor | bool }}"
#enable_horizon_magnum: "{{ enable_magnum | bool }}"
#enable_horizon_manila: "{{ enable_manila | bool }}"
#enable_horizon_mistral: "{{ enable_mistral | bool }}"
#enable_horizon_murano: "{{ enable_murano | bool }}"
#enable_horizon_neutron_lbaas: "{{ enable_neutron_lbaas | bool }}"
#enable_horizon_sahara: "{{ enable_sahara | bool }}"
#enable_horizon_searchlight: "{{ enable_searchlight | bool }}"
#enable_horizon_senlin: "{{ enable_senlin | bool }}"
#enable_horizon_solum: "{{ enable_solum | bool }}"
#enable_horizon_trove: "{{ enable_trove | bool }}"
#enable_horizon_watcher: "{{ enable_watcher | bool }}"
#enable_influxdb: "no"
#enable_ironic: "no"
#enable_karbor: "no"
#enable_kuryr: "no"
#enable_magnum: "no"
#enable_manila: "no"
#enable_manila_backend_generic: "no"
#enable_manila_backend_hnas: "no"
#enable_mistral: "no"
#enable_mongodb: "no"
#enable_murano: "no"
#enable_multipathd: "no"
#enable_neutron_dvr: "no"
#enable_neutron_lbaas: "no"
#enable_neutron_fwaas: "no"
#enable_neutron_qos: "no"
#enable_neutron_agent_ha: "no"
#enable_neutron_vpnaas: "no"
#enable_nova_serialconsole_proxy: "no"
#enable_octavia: "no"
#enable_panko: "no"
#enable_rally: "no"
#enable_sahara: "no"
#enable_searchlight: "no"
#enable_senlin: "no"
#enable_solum: "no"
#enable_swift: "no"
#enable_telegraf: "no"
#enable_tacker: "no"
#enable_tempest: "no"
#enable_trove: "no"
#enable_vmtp: "no"
#enable_watcher: "no"
###################
# Ceph options
###################
# Ceph can be set up with a cache tier to improve performance. To use the
# cache you must provide disks separate from those used for the OSDs.
#ceph_enable_cache: "no"
# Valid options are [ forward, none, writeback ]
#ceph_cache_mode: "writeback"
# Using erasure-coded pools requires a cache tier to be set up.
# Valid options are [ erasure, replicated ]
#ceph_pool_type: "replicated"
# Integrate the Ceph RADOS Object Gateway with OpenStack Keystone
#enable_ceph_rgw_keystone: "no"
##############################
# Keystone - Identity Options
##############################
# Valid options are [ uuid, fernet ]
#keystone_token_provider: 'uuid'
# Interval at which to rotate fernet keys (in seconds). Must be one of
# 60(1 min), 120(2 min), 180(3 min), 240(4 min), 300(5 min), 360(6 min),
# 600(10 min), 720(12 min), 900(15 min), 1200(20 min), 1800(30 min),
# 3600(1 hour), 7200(2 hour), 10800(3 hour), 14400(4 hour), 21600(6 hour),
# 28800(8 hour), 43200(12 hour), 86400(1 day), 604800(1 week).
#fernet_token_expiry: 86400
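A candidate value can be sanity-checked against the list of valid intervals above before deploying; a small sketch (the helper name is made up for illustration, not part of Kolla):

```shell
# Hypothetical helper: check a candidate fernet_token_expiry against the
# valid intervals listed in the comment above.
valid_fernet_expiries="60 120 180 240 300 360 600 720 900 1200 1800 3600 7200 10800 14400 21600 28800 43200 86400 604800"
is_valid_fernet_expiry() {
  for v in $valid_fernet_expiries; do
    [ "$1" = "$v" ] && return 0
  done
  return 1
}
is_valid_fernet_expiry 86400 && echo "86400 is a valid expiry"
```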
#########################
# Glance - Image Options
#########################
# Configure image backend.
#glance_backend_file: "yes"
#glance_backend_ceph: "no"
#######################
# Ceilometer options
#######################
# Valid options are [ mongodb, mysql, gnocchi ]
#ceilometer_database_type: "mongodb"
# Valid options are [ mongodb, gnocchi, panko ]
#ceilometer_event_type: "mongodb"
#######################
## Panko options
#######################
# Valid options are [ mongodb, mysql ]
#panko_database_type: "mysql"
#######################
# Gnocchi options
#######################
# Valid options are [ file, ceph ]
#gnocchi_backend_storage: "{{ 'ceph' if enable_ceph|bool else 'file' }}"
#################################
# Cinder - Block Storage Options
#################################
# Enable / disable Cinder backends
#cinder_backend_ceph: "{{ enable_ceph }}"
#cinder_volume_group: "cinder-volumes"
#cinder_backup_driver: "nfs"
#cinder_backup_share: ""
#cinder_backup_mount_options_nfs: ""
#######################
# Designate options
#######################
# Valid options are [ bind9 ]
designate_backend: "bind9"
designate_ns_record: "sample.openstack.org"
#########################
# Nova - Compute Options
#########################
#nova_backend_ceph: "{{ enable_ceph }}"
##############################
# Horizon - Dashboard Options
##############################
#horizon_backend_database: "no"
#######################################
# Manila - Shared File Systems Options
#######################################
# HNAS backend configuration
#hnas_ip:
#hnas_user:
#hnas_password:
#hnas_evs_id:
#hnas_evs_ip:
#hnas_file_system_name:
##################################
# Swift - Object Storage Options
##################################
# Swift expects block devices to be available for storage. Two types of storage
# are supported: 1 - storage device with a special partition name and filesystem
# label, 2 - unpartitioned disk with a filesystem. The label of this filesystem
# is used to detect the disk which Swift will be using.
# Swift supports two matching modes, valid options are [ prefix, strict ]
#swift_devices_match_mode: "strict"
# This parameter defines the matching pattern: if "strict" mode was selected
# for swift_devices_match_mode, then swift_devices_name should specify the
# exact name of the special Swift partition, for example "KOLLA_SWIFT_DATA";
# if "prefix" mode was selected, then swift_devices_name should specify a
# prefix matching the labels of the filesystems prepared for Swift.
#swift_devices_name: "KOLLA_SWIFT_DATA"
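The difference between the two matching modes can be sketched as a plain label comparison (a hypothetical helper, not Kolla code; Kolla's actual detection reads filesystem labels from the block devices):

```shell
# Hypothetical helper mirroring the description above: "strict" requires an
# exact label match, "prefix" accepts any label starting with the pattern.
matches_swift_device() {  # $1=mode $2=pattern $3=filesystem label
  case "$1" in
    strict) [ "$3" = "$2" ] ;;
    prefix) case "$3" in "$2"*) return 0 ;; *) return 1 ;; esac ;;
  esac
}
matches_swift_device strict KOLLA_SWIFT_DATA KOLLA_SWIFT_DATA && echo "strict: match"
matches_swift_device prefix KOLLA_SWIFT KOLLA_SWIFT_DATA_1 && echo "prefix: match"
```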
################################################
# Tempest - The OpenStack Integration Test Suite
################################################
# The following values must be set when Tempest is enabled
tempest_image_id:
tempest_flavor_ref_id:
tempest_public_network_id:
tempest_floating_network_name:
# tempest_image_alt_id: "{{ tempest_image_id }}"
# tempest_flavor_ref_alt_id: "{{ tempest_flavor_ref_id }}"
{"Payload":"Feb 14 10:24:22 haproxy[11]: Proxy stats started."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Proxy rabbitmq_management started."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Proxy keystone_internal started."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Proxy keystone_external started."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Proxy keystone_admin started."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Proxy glance_registry started."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Proxy glance_api started."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Proxy glance_api_external started."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Proxy nova_api started."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Proxy nova_metadata started."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Proxy nova_novncproxy started."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Proxy nova_api_external started."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Proxy nova_metadata_external started."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Proxy nova_novncproxy_external started."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Proxy neutron_server started."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Proxy neutron_server_external started."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Proxy horizon started."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Proxy horizon_external started."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Proxy heat_api started."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Proxy heat_api_cfn started."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Proxy heat_api_external started."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Proxy heat_api_cfn_external started."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Proxy mariadb started."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Server rabbitmq_management/controller is DOWN, reason: Layer4 connection problem, info: \"Connection refused\", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: proxy rabbitmq_management has no server available!"}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Server keystone_internal/controller is DOWN, reason: Layer4 connection problem, info: \"Connection refused\", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: proxy keystone_internal has no server available!"}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Server keystone_external/controller is DOWN, reason: Layer4 connection problem, info: \"Connection refused\", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: proxy keystone_external has no server available!"}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Server keystone_admin/controller is DOWN, reason: Layer4 connection problem, info: \"Connection refused\", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: proxy keystone_admin has no server available!"}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Server glance_registry/controller is DOWN, reason: Layer4 connection problem, info: \"Connection refused\", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: proxy glance_registry has no server available!"}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Server glance_api/controller is DOWN, reason: Layer4 connection problem, info: \"Connection refused\", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: proxy glance_api has no server available!"}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Server glance_api_external/controller is DOWN, reason: Layer4 connection problem, info: \"Connection refused\", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: proxy glance_api_external has no server available!"}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Server nova_api/controller is DOWN, reason: Layer4 connection problem, info: \"Connection refused\", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: proxy nova_api has no server available!"}
{"Payload":"Feb 14 10:24:22 haproxy[11]: Server nova_metadata/controller is DOWN, reason: Layer4 connection problem, info: \"Connection refused\", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue."}
{"Payload":"Feb 14 10:24:22 haproxy[11]: proxy nova_metadata has no server available!"}
{"Payload":"Feb 14 10:24:23 haproxy[12]: Server nova_novncproxy/controller is DOWN, reason: Layer4 connection problem, info: \"Connection refused\", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue."}
{"Payload":"Feb 14 10:24:23 haproxy[12]: proxy nova_novncproxy has no server available!"}
{"Payload":"Feb 14 10:24:23 haproxy[12]: Server nova_api_external/controller is DOWN, reason: Layer4 connection problem, info: \"Connection refused\", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue."}
{"Payload":"Feb 14 10:24:23 haproxy[12]: proxy nova_api_external has no server available!"}
{"Payload":"Feb 14 10:24:23 haproxy[12]: Server nova_metadata_external/controller is DOWN, reason: Layer4 connection problem, info: \"Connection refused\", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue."}
{"Payload":"Feb 14 10:24:23 haproxy[12]: proxy nova_metadata_external has no server available!"}
{"Payload":"Feb 14 10:24:23 haproxy[12]: Server nova_novncproxy_external/controller is DOWN, reason: Layer4 connection problem, info: \"Connection refused\", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue."}
{"Payload":"Feb 14 10:24:23 haproxy[12]: proxy nova_novncproxy_external has no server available!"}
{"Payload":"Feb 14 10:24:23 haproxy[12]: Server neutron_server/controller is DOWN, reason: Layer4 connection problem, info: \"Connection refused\", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue."}
{"Payload":"Feb 14 10:24:23 haproxy[12]: proxy neutron_server has no server available!"}
{"Payload":"Feb 14 10:24:23 haproxy[12]: Server neutron_server_external/controller is DOWN, reason: Layer4 connection problem, info: \"Connection refused\", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue."}
{"Payload":"Feb 14 10:24:23 haproxy[12]: proxy neutron_server_external has no server available!"}
{"Payload":"Feb 14 10:24:23 haproxy[12]: Server horizon/controller is DOWN, reason: Layer4 connection problem, info: \"Connection refused\", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue."}
{"Payload":"Feb 14 10:24:23 haproxy[12]: proxy horizon has no server available!"}
{"Payload":"Feb 14 10:24:23 haproxy[12]: Server horizon_external/controller is DOWN, reason: Layer4 connection problem, info: \"Connection refused\", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue."}
{"Payload":"Feb 14 10:24:23 haproxy[12]: proxy horizon_external has no server available!"}
{"Payload":"Feb 14 10:24:23 haproxy[12]: Server heat_api/controller is DOWN, reason: Layer4 connection problem, info: \"Connection refused\", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue."}
{"Payload":"Feb 14 10:24:23 haproxy[12]: proxy heat_api has no server available!"}
{"Payload":"Feb 14 10:24:23 haproxy[12]: Server heat_api_cfn/controller is DOWN, reason: Layer4 connection problem, info: \"Connection refused\", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue."}
{"Payload":"Feb 14 10:24:23 haproxy[12]: proxy heat_api_cfn has no server available!"}
{"Payload":"Feb 14 10:24:23 haproxy[12]: Server heat_api_external/controller is DOWN, reason: Layer4 connection problem, info: \"Connection refused\", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue."}
{"Payload":"Feb 14 10:24:23 haproxy[12]: proxy heat_api_external has no server available!"}
{"Payload":"Feb 14 10:24:24 haproxy[12]: Server heat_api_cfn_external/controller is DOWN, reason: Layer4 connection problem, info: \"Connection refused\", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue."}
{"Payload":"Feb 14 10:24:24 haproxy[12]: proxy heat_api_cfn_external has no server available!"}
{"Payload":"Feb 14 10:24:24 haproxy[12]: Server mariadb/controller is DOWN, reason: Layer4 connection problem, info: \"Connection refused\", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue."}
{"Payload":"Feb 14 10:24:24 haproxy[12]: proxy mariadb has no server available!"}
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:0c:29:89:ee:9c brd ff:ff:ff:ff:ff:ff
inet 10.10.0.199/24 brd 10.10.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe89:ee9c/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:0c:29:89:ee:a6 brd ff:ff:ff:ff:ff:ff
inet6 fe80::20c:29ff:fe89:eea6/64 scope link
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:d3:a3:a9:51 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
Keepalived_vrrp[14]: bogus VRRP packet received on eth0 !!!
Keepalived_vrrp[14]: VRRP_Instance(kolla_internal_vip_51) ignoring received advertisment...
Keepalived_vrrp[14]: ip address associated with VRID not present in received packet : 10.10.0.250
Keepalived_vrrp[14]: one or more VIP associated with VRID mismatch actual MASTER advert
[the four Keepalived_vrrp messages above repeat continuously]
TASK [haproxy : Starting haproxy container] ************************************
changed: [controller] => {"changed": true, "result": false}
TASK [haproxy : Starting keepalived container] *********************************
changed: [controller] => {"changed": true, "result": false}
TASK [haproxy : Ensuring latest haproxy config is used] ************************
ok: [controller] => {"changed": false, "cmd": ["docker", "exec", "haproxy", "/usr/local/bin/kolla_ensure_haproxy_latest_config"], "delta": "0:00:00.057160", "end": "2017-02-14 10:24:23.102365", "rc": 0, "start": "2017-02-14 10:24:23.045205", "stderr": "", "stdout": "", "stdout_lines": [], "warnings": []}
TASK [haproxy : Waiting for virtual IP to appear] ******************************
fatal: [controller]: FAILED! => {"changed": false, "elapsed": 301, "failed": true, "msg": "Timeout when waiting for 10.10.0.250:3306"}
to retry, use: --limit @/usr/share/kolla/ansible/site.retry
docker ps on my controller node:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
730e7c79d1f5 kolla/centos-binary-keepalived:master "kolla_start" 23 seconds ago Up 23 seconds keepalived
ef45371e0d15 kolla/centos-binary-haproxy:master "kolla_start" 23 seconds ago Up 23 seconds haproxy
2b33d4fa4ed5 kolla/centos-binary-cron:master "kolla_start" 29 seconds ago Up 29 seconds cron
4703b01ba847 kolla/centos-binary-kolla-toolbox:master "kolla_start" 31 seconds ago Up 31 seconds kolla_toolbox
e0e03f24e7fb kolla/centos-binary-fluentd:master "kolla_start" 32 seconds ago Up 31 seconds fluentd
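The keepalived log above ("bogus VRRP packet", "ip address associated with VRID not present in received packet") typically means another VRRP speaker on the same LAN is already advertising virtual router ID 51, so the VIP 10.10.0.250 never comes up on eth0 and HAProxy times out waiting for 10.10.0.250:3306. A hedged way to confirm (the interface name comes from this deployment; the replacement ID 101 is only an example of an unused value):

```shell
# Capture VRRP advertisements (IP protocol 112) on the API interface to see
# which other host on the LAN is advertising VRID 51:
tcpdump -nni eth0 proto 112
# If a conflicting advertisement appears, pick an unused ID in globals.yml:
#   keepalived_virtual_router_id: "101"
# and redeploy the haproxy/keepalived containers.
```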