Virtualized Lab Setup for Red Hat OpenStack 12

I believe most stackers are just like me - scratching their heads over how to get a hands-on lab so they can practice and keep up with the latest development of the fast-evolving OpenStack technology. The challenge is that not everyone has a luxury lab filled with multiple servers and switches. Without multiple nodes, it is difficult to understand and test OpenStack, especially a deployment tool like Red Hat OSP Director (which is based on the upstream TripleO project).

I'm quite lucky to have access to a shared demo machine: a Supermicro chassis with four Xeon-D nodes and a built-in IPMI switch. As the latest Red Hat OpenStack Platform 12 (a.k.a. RHOSP12) had just been released on 13 Dec 2017, I decided to build a virtualized environment to get a taste of RHOSP12.

Why did I choose a virtualized environment rather than a bare-metal one? The main considerations are:

  • First, it is a shared environment. A virtualized approach gives us the flexibility to run several different testing environments in parallel.
  • Second, it is easy to back up - as simple as taking a snapshot or copying the VM image to my portable hard disk (the internal disk of the demo machine is quite limited).

So in the following note, I would like to share the steps to prepare a virtualized environment for an RHOSP12 deployment via OSP Director (which uses Ironic's bare-metal provisioning approach).

Environment Setup

First of all, a standard RHOSP deployment needs several networks (i.e. provisioning, internal API, external, storage & storage management, etc.). To simplify the deployment and reduce the network cost, I keep only two networks: one internal network carrying both the provisioning and IPMI traffic, and one external network connecting to the office switch so the nodes can reach the Red Hat CDN for online installation. I took my retired home switch and WiFi router and connected the network as follows.
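For illustration, the two networks look roughly like this (the subnets below are just examples from my lab; pick whatever suits your environment):

  • internal (provisioning + IPMI): e.g. 192.168.100.0/24 on the home switch, with no DHCP server of its own (the undercloud later provides PXE/DHCP on this network)
  • external: the office LAN, with DHCP and Internet access to reach the Red Hat CDN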

In this setup, I create three VMs on top of RHEL7 KVM:

  • undercloud (a.k.a. RHOSP Director): the bare-metal provisioning and OSP deployment node.
  • overcloud controller - ctrl01-12: for the simplest deployment, I run only one virtual controller (a limitation of the hardware resources, as other VMs run on these hosts as well).
  • overcloud compute - comp01-12: for the same reason, I run only one virtual compute node.

To run a Director-based OpenStack deployment on RHEL7.4/KVM and get decent performance (especially for the virtual nova compute node), I enable two features:

  • Nested Virtualization: nested virtualization has been a Technology Preview for KVM guests since RHEL 7.3. With this feature, a guest virtual machine (also referred to as level 1, or L1) running on a physical host (level 0, or L0) can itself act as a hypervisor and create its own (L2) guest virtual machines. This is a must for testing an OpenStack compute node inside an L1 VM (without nested virtualization, the compute performance would be quite unbearable).
  • Virtual Bare Metal Controller (VBMC): VBMC enables OSP Director to perform power management on KVM virtual machines through emulated IPMI devices. This also makes it possible to drive fully automatic bare-metal provisioning against plain VMs. A short installation sketch follows this list.
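If VBMC is not already present on the KVM hosts, it can be installed from the OSP/RDO repositories or from PyPI; a minimal sketch (the exact package name may vary by repository):

[root@server2 ~]# yum install -y python-virtualbmc
# or, alternatively, from PyPI:
[root@server2 ~]# pip install virtualbmc
[root@server2 ~]# vbmc --help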

Reference Steps

The reference steps to set up this lab are explained below.

Assumptions for the physical servers:

  • RHEL 7.4 with the KVM virtualization environment has been set up.
  • Two Linux bridges have been configured for the KVM guests, as follows: br-ex is bridged to the external network, and br-internal is bridged to the internal network, matching the network layout described above:
[root@server1 ~]# brctl show
bridge name	bridge id		STP enabled	interfaces
br-ex		8000.002590f142e7	no		enp0s20f1
							vnet1
br-internal		8000.002590f142e6	no		enp0s20f0
							vnet0
virbr0		8000.525400575db5	yes		virbr0-nic
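For reference, each bridge is defined by a standard pair of ifcfg files; a minimal sketch for br-internal (the IP address is just an example from my internal subnet):

[root@server1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-br-internal
DEVICE=br-internal
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
# example host address on the internal network
IPADDR=192.168.100.11
PREFIX=24
[root@server1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-enp0s20f0
DEVICE=enp0s20f0
TYPE=Ethernet
ONBOOT=yes
BRIDGE=br-internal

br-ex follows the same pattern with enp0s20f1.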

Enable nested KVM and prepare the VM nodes as follows:

  • KVM Servers 1-3: Enable nested virtualization:
[root@server1 ~]# vi /etc/modprobe.d/kvm-nested.conf
# enable nested virtualization
options kvm_intel nested=1
[root@server1 ~]# modprobe -r kvm_intel
[root@server1 ~]# modprobe kvm_intel
[root@server1 ~]# cat /sys/module/kvm_intel/parameters/nested
Y
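Once an L1 guest is up, you can sanity-check that the CPU virtualization extensions are actually exposed to it (this is also why the compute VM further below is created with --cpu host-passthrough):

# inside an L1 guest, e.g. the virtual compute node:
[root@l1-guest ~]# grep -c vmx /proc/cpuinfo
4
# a non-zero count means nested virtualization is usable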
  • KVM Server 1: Copy the RHEL Server 7.4 qcow2 cloud image (assuming you have already downloaded it), enable a root password, and enlarge the disk size (the default is only 10GB):
# cp /tmp/rhel-server-7.4-x86_64-kvm.qcow2 /tmp/rhel-server-7.4-x86_64-kvm-changeme.qcow2
# cd /tmp
# virt-customize -a rhel-server-7.4-x86_64-kvm-changeme.qcow2 --root-password password:changeme
# qemu-img resize rhel-server-7.4-x86_64-kvm-changeme.qcow2 +70G
# cp rhel-server-7.4-x86_64-kvm-changeme.qcow2 /var/lib/libvirt/images/undercloud12.qcow2
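A quick check confirms the resize took effect (output abridged; the guest filesystem itself is grown by cloud-init on first boot, as verified in the df output below):

# qemu-img info /var/lib/libvirt/images/undercloud12.qcow2
image: /var/lib/libvirt/images/undercloud12.qcow2
file format: qcow2
virtual size: 80G (85899345920 bytes)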
  • KVM Server 1: Import this virtual machine into KVM Server 1 (as the OSP Director VM):
[root@server1 ~]# virt-install --name undercloud12 --memory 8192 \
--arch x86_64 --vcpus 2 \
--os-type linux --os-variant rhel7 \
--disk /var/lib/libvirt/images/undercloud12.qcow2,format=qcow2,bus=virtio \
--vnc --noautoconsole \
--network bridge=br-internal,model=virtio \
--network bridge=br-ex,model=virtio \
--import
  • KVM Server 1: Wait for the undercloud12 VM to finish booting. After confirming that cloud-init has completed the XFS filesystem expansion, remove cloud-init:
[root@server1 images]# virsh console undercloud12
Red Hat Enterprise Linux Server 7.4 (Maipo)
Kernel 3.10.0-693.el7.x86_64 on an x86_64
  localhost login: root
  Password:
  Last login: Mon Jan  1 22:46:51 on ttyS0
  [root@localhost ~]# df -h
  Filesystem      Size  Used Avail Use% Mounted on
  /dev/vda1        80G  887M   80G   2% /
  devtmpfs        3.9G     0  3.9G   0% /dev
  tmpfs           3.9G     0  3.9G   0% /dev/shm
  tmpfs           3.9G  8.4M  3.9G   1% /run
  tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
  tmpfs           783M     0  783M   0% /run/user/0
[root@localhost ~]# yum remove cloud-init
  • KVM Server 2 & 3: Copy the qcow2 image to servers 2 and 3 (or create a dummy qcow2 image file), naming it ctrl01-12.qcow2 and comp01-12.qcow2 under /var/lib/libvirt/images/ respectively.
  • KVM Server 2: Import the controller VM, create the VBMC endpoint, and open the VBMC port:
[root@server2 images]# sudo virt-install --name ctrl01-12 --memory 8192 \
> --arch x86_64 --vcpus 6,sockets=1 \
> --os-type linux --os-variant rhel7 \
> --disk /var/lib/libvirt/images/ctrl01-12.qcow2,format=qcow2,bus=virtio \
> --vnc --noautoconsole \
> --network bridge=br-internal,model=virtio \
> --network bridge=br-ex,model=virtio \
> --import
Starting install...
Domain creation completed.
[root@server2 images]# virsh shutdown ctrl01-12
Domain ctrl01-12 is being shutdown
[root@server2 images]# vbmc add ctrl01-12 --port 6330 --username admin --password redhat123!
 Exception TypeError: "'NoneType' object is not callable" in <function _removeHandlerRef at 0x1985cf8> ignored (ps. you can ignore this minor bug)
[root@server2 images]# vbmc list
+-------------+--------+---------+------+
| Domain name | Status | Address | Port |
+-------------+--------+---------+------+
|    ctrl01   |  down  |    ::   | 6230 |
|  ctrl01-12  |  down  |    ::   | 6330 |
+-------------+--------+---------+------+
 Exception TypeError: "'NoneType' object is not callable" in <function _removeHandlerRef at 0x142fcf8> ignored (ps. you can ignore this minor bug)
[root@server2 images]# firewall-cmd --add-port=6330/udp
 success
[root@server2 images]#  firewall-cmd --add-port=6330/udp --permanent
 success
[root@server2 images]# vbmc start ctrl01-12
2018-01-03 10:53:33,282.282 7316 INFO VirtualBMC [-] Virtual BMC for domain ctrl01-12 started
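With VBMC running and the firewall port open, the emulated BMC can be verified from any host with plain ipmitool (the placeholder IP is the same one used in instackenv.json later):

$ ipmitool -I lanplus -H <KVM Server2 host IP> -p 6330 -U admin -P 'redhat123!' power status
Chassis Power is off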
  • KVM Server 3: Import the compute VM, create the VBMC endpoint, and open the VBMC port:
[root@server3 images]# sudo virt-install --name comp01-12 --memory 16384 \
--arch x86_64 --vcpus 4,sockets=2 \
--os-type linux --os-variant rhel7 \
--cpu host-passthrough \
--disk /var/lib/libvirt/images/comp01-12.qcow2,format=qcow2,bus=virtio \
--vnc --noautoconsole \
--network bridge=br-internal,model=virtio \
--network bridge=br-ex,model=virtio \
--import
[root@server3 images]# virsh shutdown comp01-12
[root@server3 images]# vbmc add comp01-12 --port 6330 --username admin --password redhat123!
Exception TypeError: "'NoneType' object is not callable" in <function _removeHandlerRef at 0x2292cf8> ignored (ps. you can ignore this minor bug)
[root@server3 images]# firewall-cmd --add-port=6330/udp
success
[root@server3 images]# firewall-cmd --add-port=6330/udp --permanent
success
[root@server3 images]# vbmc start comp01-12
2018-01-03 11:06:42,153.153 6645 INFO VirtualBMC [-] Virtual BMC for domain comp01-12 started
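Before moving on, note down the MAC address of each VM's provisioning NIC (the first interface, attached to br-internal); these MACs go into instackenv.json below. Example output (the interface names show as '-' while the domains are shut off):

[root@server2 images]# virsh domiflist ctrl01-12
Interface  Type     Source       Model   MAC
-----------------------------------------------------
-          bridge   br-internal  virtio  52:54:00:14:a7:05
-          bridge   br-ex        virtio  52:54:00:xx:xx:xx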

Follow the OSP12 Director Installation and Usage guide to install OSP Director. I'll skip the details of the Director installation here (assuming you are already familiar with it).
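For orientation only, the high-level flow on the undercloud VM is roughly the following (consult the guide for the full undercloud.conf contents; the repository names below are the OSP12-on-RHEL-7.4 ones):

[root@undercloud12 ~]# subscription-manager register
[root@undercloud12 ~]# subscription-manager repos --enable=rhel-7-server-rpms \
  --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-rh-common-rpms \
  --enable=rhel-ha-for-rhel-7-server-rpms --enable=rhel-7-server-openstack-12-rpms
[root@undercloud12 ~]# useradd stack && passwd stack
[root@undercloud12 ~]# yum install -y python-tripleoclient
[stack@undercloud12 ~]$ cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
[stack@undercloud12 ~]$ openstack undercloud install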

After the OSP12 Director installation, here is an example "instackenv.json" that includes the two virtual controller/compute nodes:

 (undercloud) [stack@undercloud12 ~]$ vim instackenv.json
{
  "nodes": [
    {
      "pm_type": "pxe_ipmitool",
      "mac": [
        "52:54:00:14:a7:05"
      ],
      "pm_user": "admin",
      "pm_password": "redhat123!",
      "pm_addr": "<KVM Server2 host IP>",
      "pm_port": "6330",
      "name": "ctrl01-12"
    },
    {
      "pm_type": "pxe_ipmitool",
      "mac": [
        "52:54:00:68:e1:8b"
      ],
      "pm_user": "admin",
      "pm_password": "redhat123!",
      "pm_addr": "<KVM Server 3 host IP>",
      "pm_port": "6330",
      "name": "comp01-12"
    }
  ]
}

Introspection for VBMC-backed nodes works just like for normal bare-metal nodes; the following example imports the virtual nodes and runs introspection on them.

(undercloud) [stack@undercloud12 ~]$ openstack overcloud node import ~/instackenv.json
Started Mistral Workflow tripleo.baremetal.v1.register_or_update. Execution ID: 8e66838f-c537-45d8-b2ac-fb19f6751394
Waiting for messages on queue '7ff2baa7-f845-477a-9cf3-ff7081712ee7' with no timeout.
(undercloud) [stack@undercloud12 ~]$ openstack baremetal node list
+--------------------------------------+-----------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name      | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+-----------+---------------+-------------+--------------------+-------------+
| 9eb3b26a-9e54-46a1-ab0f-6bda3694345c | ctrl01-12 | None          | power off   | manageable         | False       |
| 240e8584-727e-4632-82f7-c562bdea7a17 | comp01-12 | None          | power off   | manageable         | False       |
+--------------------------------------+-----------+---------------+-------------+--------------------+-------------+
(undercloud) [stack@undercloud12 ~]$ openstack overcloud node introspect --all-manageable --provide
Successfully set nodes state to available.
(undercloud) [stack@undercloud12 ~]$ mkdir swift-data
(undercloud) [stack@undercloud12 ~]$ cd swift-data
(undercloud) [stack@undercloud12 swift-data]$ for node in $(ironic node-list | grep -v UUID| awk '{print $2}'); do \
> openstack baremetal introspection data save $node | jq . > $node.json; done
(undercloud) [stack@undercloud12 swift-data]$ ll
total 24
-rw-rw-r--. 1 stack stack 11183 Jan  3 12:24 240e8584-727e-4632-82f7-c562bdea7a17.json
-rw-rw-r--. 1 stack stack 10469 Jan  3 12:23 9eb3b26a-9e54-46a1-ab0f-6bda3694345c.json
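The saved introspection data is handy for a quick sanity check of what Ironic Inspector discovered, for example (the field names follow the OSP12 introspection data format):

(undercloud) [stack@undercloud12 swift-data]$ jq '{cpus, memory_mb, local_gb}' 9eb3b26a-9e54-46a1-ab0f-6bda3694345c.json
{
  "cpus": 6,
  "memory_mb": 8192,
  "local_gb": 79
}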

After introspection, you can treat these two virtual nodes exactly like normal bare-metal nodes and proceed with the virtual OSP12 installation using the standard procedure; see the sketch below for a starting point.
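For the deployment step itself, a minimal single-controller, single-compute run can be kicked off as follows (a sketch only; network-isolation and NIC-config templates are environment-specific and omitted here):

(undercloud) [stack@undercloud12 ~]$ openstack overcloud deploy --templates \
  --control-scale 1 --compute-scale 1 \
  --ntp-server pool.ntp.org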

Hopefully this small article is helpful for anyone who would like to try a virtual Red Hat OpenStack lab. Have fun!
