In part one of the OpenContrail deployment series, we looked at Installing OpenContrail with Ansible on CentOS 7 and Kubernetes. The problem with that approach is that the deployment was done on a physical server, which is hard to scale if you have limited resources. Another disadvantage is that if you break the setup, it’s not easy to clean up and redeploy OpenContrail; a full OS re-installation might be necessary.
Using Virtual Machines comes with some advantages:
- You can take a snapshot of a VM and restore it to its initial state if you break things.
- You can test the HA functions of OpenContrail using a single server and multiple VMs.
- With KVM nested virtualization, you can use a VM to test the hypervisor functionality of the OpenContrail OpenStack integration.
- Contrail Ansible will deploy and configure the VMs for you; no manual work is required.
This setup is done entirely on a CentOS 7 server. You can do the same on an Ubuntu hypervisor; the OpenContrail VMs themselves will still run CentOS 7.
Step 1: Prepare Host system(s)
Let’s start by installing the packages required on the hypervisors that will run KVM. You can use a dedicated deployment VM for deploying the OpenContrail services, but this is not necessary since a single host system can deploy the services with Ansible.
Install Ansible on CentOS 7
Install the EPEL repository, Git, and Ansible:
sudo yum -y install epel-release
sudo yum -y install git ansible
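Optionally, confirm the versions that were installed:
ansible --version
git --version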
For git v2, check:
How to install git 2.x on CentOS 7
Disable SELinux and Stop firewalld
To avoid any further configuration of firewalld and SELinux, you can disable both as shown below:
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
sudo systemctl disable firewalld && sudo systemctl stop firewalld
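You can quickly verify that both are off. Note that getenforce will report Permissive until the config change takes full effect at the next reboot:
getenforce
systemctl is-active firewalld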
Install KVM and hypervisor tools
Let’s install KVM and the other tools required for this setup:
sudo yum install -y python-urllib3 libguestfs-tools net-tools \
libvirt-python virt-install libvirt git ansible python-pip qemu-kvm bridge-utils
Confirm that you have virtualization enabled and start the libvirtd service.
$ lsmod | grep kvm
kvm_intel 170086 0
kvm 566340 1 kvm_intel
irqbypass 13503 1 kvm
$ sudo systemctl enable libvirtd && sudo systemctl start libvirtd
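If you intend to test hypervisor functionality inside the VMs (the nested virtualization use case mentioned earlier), also check whether nested KVM is enabled on the host. This assumes an Intel CPU, matching the kvm_intel module in the lsmod output above:
cat /sys/module/kvm_intel/parameters/nested
If it prints N (or 0), enable nesting and reload the module while no VMs are running:
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
sudo modprobe -r kvm_intel && sudo modprobe kvm_intel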
Configure Host Bridge for OpenContrail VMs
Since we installed bridge-utils in the previous step, we can configure a management bridge to plug into the VMs. My bridge configuration looks like the following; you can modify it to suit your environment.
$ cat /etc/sysconfig/network-scripts/ifcfg-em1
TYPE="Ethernet"
NAME="em1"
DEVICE="em1"
ONBOOT="yes"
BRIDGE=br-mgmt
The bridge interface configuration:
$ cat /etc/sysconfig/network-scripts/ifcfg-br-mgmt
TYPE="Bridge"
BOOTPROTO="static"
NAME="br-mgmt"
DEVICE="br-mgmt"
ONBOOT="yes"
IPADDR=192.168.10.235
PREFIX=24
GATEWAY=192.168.10.1
DNS1=8.8.8.8
ZONE=public
Bring up the interfaces and confirm that the bridge is working fine.
$ sudo ifup br-mgmt
$ sudo brctl show
bridge name     bridge id               STP enabled     interfaces
br-mgmt         8000.c6240d70921d       no              em1
docker0         8000.0242c6420fbf       no
virbr0          8000.525400466fa2       yes             virbr0-nic
From the output, we can confirm that the bridge br-mgmt is configured with the correct interface. You can also run a network connectivity test using ping or similar tools.
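For example, pinging the default gateway from the host confirms that traffic flows through the bridge (192.168.10.1 is the gateway in my network; adjust for yours):
$ ping -c 3 192.168.10.1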
Clone contrail-ansible-deployer repository
Now that we have all the prerequisites satisfied, we can clone the repository and get ready to start deploying OpenContrail on KVM.
git clone http://github.com/Juniper/contrail-ansible-deployer
Change to the contrail-ansible-deployer directory:
cd contrail-ansible-deployer
Edit config/instances.yaml and fill in appropriate values. Here is my sample file for 3 node KVM install.
provider_config:
  kvm:
    image: CentOS-7-x86_64-GenericCloud-1901.qcow2.xz
    image_url: https://cloud.centos.org/centos/7/images/
    ssh_pwd: Password123
    ssh_user: root
    ssh_public_key: ~/.ssh/id_rsa.pub
    ssh_private_key: ~/.ssh/id_rsa
    vcpu: 8
    vram: 32000
    vdisk: 50G
    subnet_prefix: 192.168.10.0
    subnet_netmask: 255.255.255.0
    gateway: 192.168.10.1
    nameserver: 192.168.10.1
    ntpserver: 192.168.10.1
    domainsuffix: local
instances:
  contail-controller-01:
    provider: kvm
    host: 192.168.10.235
    bridge: br-mgmt
    ip: 192.168.10.234
    roles:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
  contail-controller-02:
    provider: kvm
    host: 192.168.10.235
    bridge: br-mgmt
    ip: 192.168.10.233
    roles:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
  contail-compute-01:
    provider: kvm
    host: 192.168.10.235
    bridge: br-mgmt
    ip: 192.168.10.232
    roles:
      vrouter:
contrail_configuration:
  CONTAINER_REGISTRY: opencontrailnightly
  CONTRAIL_VERSION: latest
  VROUTER_GATEWAY: 192.168.10.1
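Note that provider_config points at an SSH key pair on the deployment host (~/.ssh/id_rsa and ~/.ssh/id_rsa.pub). If you don’t have one yet, generate it before provisioning; this is a plain ssh-keygen call, nothing specific to contrail-ansible-deployer:
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa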
My inventory/hosts file doesn’t have any custom settings. It looks like this:
localhost:
  hosts:
    localhost:
      config_file: ../config/instances.yaml
      connection: local
      ansible_connection: local
      python_interpreter: python
      ansible_python_interpreter: python
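Before provisioning anything, you can optionally confirm that Ansible parses the inventory correctly (ansible-inventory ships with Ansible 2.4 and later):
$ ansible-inventory -i inventory/ --list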
Provision VMs with Ansible
I got an error that the defined bridge could not be found, so I had to modify the file:
playbooks/roles/create_kvm_instances/tasks/build_and_start_container_hosts.yml
The task in question looks like this:
- name: Install container vm {{ container_vm_hostname }} without portgroup
  command: |
    virt-install --name {{ container_vm_hostname }} \
      --disk /var/lib/libvirt/images/{{ container_vm_hostname }}.qcow2 \
      --cpu host-passthrough \
      --vcpus={{ vcpu }} \
      --ram={{ vram }} \
      --network network={{ item.value.bridge }},model=virtio \
      --network network=default,model=virtio \
      --virt-type kvm \
      --import \
      --os-variant rhel7 \
      --graphics vnc \
      --serial pty \
      --noautoconsole \
      --console pty,target_type=virtio
I modified the first --network line to use bridge= instead of network=, since network= expects a named libvirt network while br-mgmt is a plain host bridge:
      --network bridge={{ item.value.bridge }},model=virtio \
Start the VM provisioning using the command:
# ansible-playbook -i inventory/ playbooks/provision_instances.yml | tee /root/provision-instances.log
This will log everything to the file /root/provision-instances.log. The provisioning step defines three VMs on KVM, named after the instance keys in config/instances.yaml (here contail-controller-01, contail-controller-02, and contail-compute-01); you can change the names there before deploying.
After successful execution, confirm if the VMs were successfully created.
# virsh list
contail-controller-01 running
contail-controller-02 running
contail-compute-01 running
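If a VM comes up but is not reachable over SSH, you can attach to its serial console, since the virt-install task shown earlier enables a pty serial console for each VM. Exit the console with Ctrl+]:
# virsh console contail-controller-01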
The next step is to configure the created instances. Run the command:
# ansible-playbook -i inventory/ playbooks/configure_instances.yml | tee -a /root/configure_instances.log
This may take a while to finish executing. After a successful finish, you should get a message like this:
PLAY RECAP *********************************************************************
192.168.10.232 : ok=34 changed=26 unreachable=0 failed=0
192.168.10.233 : ok=34 changed=26 unreachable=0 failed=0
192.168.10.234 : ok=34 changed=26 unreachable=0 failed=0
localhost : ok=9 changed=4 unreachable=0 failed=0
Then run the last playbook to get OpenContrail services installed.
# ansible-playbook -i inventory/ playbooks/install_contrail.yml | tee /root/install_contrail.log
All output will be logged to the file /root/install_contrail.log. During execution, I hit the error below:
ERROR! Unable to retrieve file contents
Could not find or access '/root/contrail-kolla-ansible/ansible/post-deploy-contrail.yml'
To work around it, I copied /root/contrail-kolla-ansible/ansible/post-deploy.yml to /root/contrail-kolla-ansible/ansible/post-deploy-contrail.yml:
# cp /root/contrail-kolla-ansible/ansible/post-deploy.yml \
/root/contrail-kolla-ansible/ansible/post-deploy-contrail.yml
After re-running the install playbook, successful execution should give you output similar to this.
PLAY RECAP *********************************************************************************************************************
192.168.10.232 : ok=71 changed=26 unreachable=0 failed=0
192.168.10.233 : ok=98 changed=43 unreachable=0 failed=0
192.168.10.234 : ok=109 changed=48 unreachable=0 failed=0
localhost : ok=2 changed=2 unreachable=0 failed=0
Log in to one of the controller nodes as the root user configured in instances.yaml and check the running containers, for example:
# ssh root@192.168.10.234
# docker ps
On both OpenContrail controller VMs, you should have these containers running:
# docker ps --format '{{.Names}}'
kubemanager_kubemanager_1
analytics_query-engine_1
analytics_topology_1
analytics_api_1
analytics_snmp-collector_1
analytics_collector_1
analytics_alarm-gen_1
analytics_nodemgr_1
analyticsdatabase_cassandra_1
analyticsdatabase_zookeeper_1
analyticsdatabase_nodemgr_1
analyticsdatabase_kafka_1
control_named_1
control_dns_1
control_control_1
control_nodemgr_1
webui_redis_1
webui_job_1
webui_web_1
config_nodemgr_1
config_svcmonitor_1
config_devicemgr_1
config_api_1
config_schema_1
configdatabase_cassandra_1
configdatabase_rabbitmq_1
configdatabase_zookeeper_1
The total number of containers is 26:
# docker ps --format '{{.Names}}' | wc -l
26
On the compute VM, which runs the vRouter, I have the following containers running:
# docker ps --format '{{.Names}}'
Some of the containers, the Kubernetes ones, have long names.
k8s_POD_kube-dns-6f4fd4bdf-kj4lc_kube-system_28ddfbb8-3273-11e8-ae30-5254007554af_32
k8s_kube-proxy_kube-proxy-fwn7p_kube-system_5a9648cd-3273-11e8-ae30-5254007554af_0
k8s_POD_kube-proxy-fwn7p_kube-system_5a9648cd-3273-11e8-ae30-5254007554af_0
vrouter_vrouter-agent_1
vrouter_vrouter-net-watchdog_1
vrouter_nodemgr_1
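You can also confirm that the vrouter kernel module is loaded on the compute VM; it is inserted as part of the vRouter installation and is simply named vrouter:
# lsmod | grep vrouter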
At the end of the procedure, you should also be able to connect to the Contrail web UI. The dashboard should be accessible at:
https://192.168.10.233:8143/
https://192.168.10.234:8143/
Username: admin
Password: contrail123
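If you want to verify that the port answers before opening a browser, a quick curl against either controller works; -k is needed because the web UI presents a self-signed certificate in my deployment:
$ curl -sk -o /dev/null -w '%{http_code}\n' https://192.168.10.233:8143/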
To get a shell in any container, use the docker exec command.
# docker exec -it webui_web_1 /bin/bash
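Similarly, docker logs is handy when troubleshooting a specific service, for example the web UI container:
# docker logs --tail 50 webui_web_1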
TIP: Empty /root/.ssh/known_hosts if you need to redeploy OpenContrail, since the re-created VMs will present new SSH host keys.
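Alternatively, remove only the entries for the VM IPs used in this setup:
# ssh-keygen -R 192.168.10.232
# ssh-keygen -R 192.168.10.233
# ssh-keygen -R 192.168.10.234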
Further reading:
HAproxy configuration for contrail WebUI