Building an ONAP Lab – ONAP Amsterdam using OpenStack Ocata and OPNFV Euphrates

Welcome to part 1 of 3 in our new blog series covering ONAP, OPNFV, OpenStack, and the components of Open Networking. This post is about building and deploying a working ONAP lab to test out its components and services. It has been five months in the making – not five months to write these two pages, but five months to deploy, test, troubleshoot, and document, and then reach a stage where we can look back with a smile and talk about what we accomplished and how.

One may ask why open source. Proprietary software and the ecosystems built around it are what the mainstream world has adopted, so why go open source? The answer lies in the limited visibility you get into proprietary software and in how little of it you are allowed to modify.

You cannot simply make changes to a commercially available solution without breaking a handful of laws and/or voiding the support you receive on it. Nor can you fix the bugs and defects you find in a commercial product yourself: a fix has to travel through bug reports, incident reports, research, analysis, candidate fixes, a developer release, alpha, beta, and test releases, and only then a commercial release.

With open source licensed software, on the other hand, a developer or an end user who finds a bug or defect can attempt to address it themselves, and if their fix holds up under their own testing, choose to push the code upstream and share it with the global community.

The Open Platform for Network Function Virtualization (OPNFV) is one such open source project, started by The Linux Foundation to facilitate carrier-grade networking and performance analysis, and, based on that analysis, to drive changes in the carrier’s network that improve the Key Performance Indicators that matter to network service providers. In essence, OPNFV is OpenStack with additional SDN and NFV plug-ins built in.

The Open Network Automation Platform (ONAP) provides scaling for NFV over OpenStack through automated orchestration. ONAP uses the underlay provided by OpenStack to build its own overlay network, in which its orchestration components interwork.

We deployed ONAP Amsterdam using two methods: one on plain OpenStack Ocata, and one on OPNFV Euphrates. The two setups ran on separate infrastructures in a lab environment, but we treated both as production deployments when it came to modifying the scripts.

The first method was to deploy ONAP Amsterdam using a modified Heat template (YAML) and its corresponding environment file over a “vanilla” OpenStack Ocata deployment. This OpenStack Ocata deployment was realised with the Ansible-based installer. Here we readied the deployment host (a JumpHost of sorts) and the target hosts, the bare metal servers that would serve as control, compute, or storage nodes for OpenStack: nine bare metal servers in total. One was the deployment host, three were controllers, four were compute nodes, and one was a combined rsyslog and storage node. Once OpenStack Ocata was running and all of its services had been verified, we used the Heat template and the environment file to deploy the stack with fourteen ONAP services, including Portal, Policy, AAI, and DNS. In this method, we were constrained by the compute resources at our disposal.
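
For context, and assuming the Ansible-based installer here refers to OpenStack-Ansible, the sketch below shows roughly how such a nine-node footprint maps onto an openstack_user_config.yml. Host names and addresses are placeholders rather than our actual lab values, and the real file carries several more host groups.

```yaml
# Abbreviated openstack_user_config.yml sketch (placeholder names and IPs).
# The full file also defines provider_networks plus the repo-infra, identity,
# network, haproxy and storage-infra host groups.
cidr_networks:
  container: 172.29.236.0/22
  tunnel: 172.29.240.0/22
  storage: 172.29.244.0/22

global_overrides:
  internal_lb_vip_address: 172.29.236.9
  external_lb_vip_address: 192.0.2.10      # placeholder public VIP
  management_bridge: "br-mgmt"

# three controller nodes carrying the shared infrastructure and API services
shared-infra_hosts:
  ctrl01: {ip: 172.29.236.11}
  ctrl02: {ip: 172.29.236.12}
  ctrl03: {ip: 172.29.236.13}

# four hypervisors that will host the ONAP VMs
compute_hosts:
  comp01: {ip: 172.29.236.21}
  comp02: {ip: 172.29.236.22}
  comp03: {ip: 172.29.236.23}
  comp04: {ip: 172.29.236.24}

# one node doubling as block storage and rsyslog target
storage_hosts:
  stor01: {ip: 172.29.236.31}
log_hosts:
  stor01: {ip: 172.29.236.31}
```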

For the second method, deploying ONAP Amsterdam over OPNFV Euphrates, we used Huawei’s Compass4NFV installer to get things rolling. We initially deployed OPNFV Euphrates with the ODL+SFC+HA scenario. OPNFV requires an odd number of controllers for an HA deployment, so three was the least (and the most) we could do. On completing this initial deployment with the Compass installer, we found that the compute resources available to us (especially the vCPU count) were too constrained to accomplish an ONAP installation close to what we had performed on the “vanilla” OpenStack Ocata. We therefore redeployed OPNFV with only one controller (a no-HA, or single-node HA, scenario), which gave us enough compute resources to deploy ONAP’s fourteen services. We then verified the service status of this OPNFV installation and deployed the stack using the same Heat template and environment file, tweaking the timeout values that had been preventing a successful ONAP deployment now that we had only one controller node. This final deployment of OPNFV running ONAP on top had six bare metal servers at its disposal: one JumpHost, one control node, and four compute nodes.
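
To illustrate the host layout of the single-controller redeployment, below is a rough sketch of a Compass4NFV DHA (hardware description) file. The MAC addresses and IPMI addresses are placeholders, and the exact field names can differ between Compass releases, so treat it as indicative rather than a copy-paste recipe.

```yaml
# Indicative Compass4NFV DHA sketch: one controller, four computes.
# All MACs and IPMI addresses below are placeholders.
TYPE: baremetal
FLAVOR: cluster
POWER_TOOL: ipmitool

hosts:
  - name: host1
    mac: '00:00:00:00:00:01'
    ipmiIp: 192.0.2.101
    roles:
      - controller        # the single control node in this scenario
  - name: host2
    mac: '00:00:00:00:00:02'
    ipmiIp: 192.0.2.102
    roles:
      - compute
  - name: host3
    mac: '00:00:00:00:00:03'
    ipmiIp: 192.0.2.103
    roles:
      - compute
  - name: host4
    mac: '00:00:00:00:00:04'
    ipmiIp: 192.0.2.104
    roles:
      - compute
  - name: host5
    mac: '00:00:00:00:00:05'
    ipmiIp: 192.0.2.105
    roles:
      - compute
```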

Ideally, in a scaled setup, the installation should be automated. With vanilla OpenStack, everything has to be installed and configured by hand, from the underlying operating system and additional software components to the network configuration and services such as HAProxy and rsyslog. This means a high level of manual intervention throughout the installation process.

For a production environment, as the number of hosts to install grows, it is preferable to opt for an automated deployment of OPNFV using any of the available installer flavours, namely Compass4NFV, Juju, Apex, and TripleO. We chose the Euphrates release of OPNFV with the Compass4NFV installer. Its configuration files for the No-SDN scenario and its networking files were relatively simple to modify, as sketched below.
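
On the networking side, the fragment we touched most was the public (floating IP) network definition in Compass4NFV’s network.yml. The sketch below uses placeholder addresses, and some field names are paraphrased, so check the template shipped with your Compass release before reusing it.

```yaml
# Indicative fragment of Compass4NFV's network.yml: the external network
# and floating IP range that ONAP later consumes. Addresses are placeholders.
public_net_info:
  enable: "True"
  network: ext-net
  type: flat
  provider_network: physnet
  subnet: 192.0.2.0/24
  external_gw: 192.0.2.1
  floating_ip_cidr: 192.0.2.0/24
  floating_ip_start: 192.0.2.100
  floating_ip_end: 192.0.2.200
```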

The process of deploying ONAP starts at the planning stage, which should focus on giving the OpenStack under-cloud computes the right amount of resources. ONAP can be deployed without the DCAE services present. The Amsterdam Heat templates have to be modified with the OpenStack-specific parameters, such as the Horizon URL, SSH keys, the OS flavours to be used for the VMs, and the quota limits. The Heat environment file takes care of the Docker container versions and their downloads; for older releases of OPNFV, the Docker images had to be downloaded and installed manually.
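
To make that concrete, below is an indicative excerpt of the kind of parameters we localised in the Amsterdam Heat environment file. The values are placeholders and some parameter names are paraphrased from memory, so verify them against the environment file shipped with the Amsterdam demo repository before use.

```yaml
# Indicative ONAP Amsterdam Heat environment excerpt (placeholder values;
# parameter names paraphrased - check the Amsterdam demo repository).
parameters:
  public_net_id: "00000000-0000-0000-0000-000000000000"
  ubuntu_1404_image: "ubuntu-14.04-server-cloudimg"
  ubuntu_1604_image: "ubuntu-16.04-server-cloudimg"
  flavor_small: m1.small
  flavor_medium: m1.medium
  flavor_large: m1.large
  flavor_xlarge: m1.xlarge
  key_name: onap_key
  pub_key: "ssh-rsa AAAA...replace-with-your-public-key"
  horizon_url: http://192.0.2.10/horizon
  keystone_url: http://192.0.2.10:5000
  openstack_tenant_id: "<project-uuid>"
  openstack_username: "<admin-user>"
  openstack_api_key: "<password>"
  nexus_docker_repo: nexus3.onap.org:10001   # where the ONAP containers come from
  oam_network_cidr: 10.0.0.0/16
```

With the template and environment file in place, the stack itself is launched with the standard Heat client, for example: openstack stack create -t onap_openstack.yaml -e onap_openstack.env onap (substitute the template and environment file names from your copy of the demo repository).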

To learn more about our experience and to engage our team, please reach out to us on engage@serro.com.