How to Build an Operational ONAP Amsterdam Lab, using Two Methods: OpenStack Ocata and OPNFV Euphrates – Part 3 of 3

Estimated Reading Time: 15 minutes

Word Count: 2,828

Welcome to this blog series covering ONAP, OPNFV, OpenStack, and the components of Open Networking. This blog (part 3 of 3) describes how we implemented and deployed a working ONAP Amsterdam lab. We deployed ONAP Amsterdam using two methods – one on plain OpenStack Ocata, and one on OPNFV Euphrates. The two setups were built on separate infrastructures in a lab environment, but were treated as production deployments when modifying the scripts.

The steps laid out in this blog are intended for an advanced audience with knowledge of networking and of Linux configuration, troubleshooting, and administration.


Part 1 of this blog series can be found at: Building an ONAP Lab – ONAP Amsterdam using OpenStack Ocata and OPNFV Euphrates

Part 2 of this blog series can be found at: How to Build an Operational ONAP Amsterdam Lab, using Two Methods: OpenStack Ocata and OPNFV Euphrates

 

Section 1: Installing ONAP over OpenStack Ocata

    Systems Layout
    Prerequisites for OpenStack Ocata installation 
    Prepare the Deployment Host (JumpHost)  
    Prepare the Target Hosts
    Configure the OpenStack deployment using Ansible playbooks
    Run the Ansible playbooks to install OpenStack Ocata
    Verifying OpenStack Ocata installation
    Deploying ONAP Amsterdam over OpenStack Ocata

 

Table of Contents

Section 2: Installing ONAP over OPNFV Euphrates

    Systems Layout
    Figure 1: Physical Topology: OPNFV Euphrates
    Figure 2: Logical Topology: OPNFV Euphrates
    Deploying OPNFV Euphrates
    Prerequisites to running the Compass4NFV deployment
    Installing OPNFV Euphrates using Compass installer
    Configuring the network_cfg.yaml file
    Installation of OPNFV using Compass
    Verifying OPNFV Installation
    Deploying ONAP Amsterdam over OPNFV Euphrates

Section 3: Appendices

    Appendix A: OpenStack Method: Network Configuration “/etc/network/interfaces” Sample
    Appendix B: OpenStack Method: “/etc/openstack_deploy/openstack_user_config.yml” Sample
    Appendix C: OPNFV Method: “compass4nfv/network.yml” Sample
    Appendix D: OPNFV Method: DHA file – “compass4nfv/os-nosdn-nofeature-ha.yml”
    Appendix E: OPNFV Method: deploy.sh file
    Appendix F: ONAP Amsterdam: “onap_openstack.env” ONAP Environment file

Section 4: References

    OpenStack Ocata Ansible installation method
    OPNFV Euphrates Installation using Compass4NFV method
    ONAP Amsterdam deployment

About the Authors

Section 2: Installing ONAP over OPNFV Euphrates:

Systems Layout:

OPNFV Euphrates

Hostname   System Role   OS Version
JumpHost   JumpHost      Ubuntu 14.04 LTS
Host1      Compute       Ubuntu 16.04 LTS
Host2      Compute       Ubuntu 16.04 LTS
Host3      Compute       Ubuntu 16.04 LTS
Host4      Compute       Ubuntu 16.04 LTS
Host5      Controller    Ubuntu 16.04 LTS

OPNFV Euphrates

Hostname   Interface   VLANs
All        IPMI        Mode: Access - 100
All        eth0        Mode: Access - 102
All        eth1        Mode: Trunk - 101, 103, 104

OPNFV Euphrates

VLAN Name   VLAN ID   VLAN Function
vlan-100    100       IPMI
vlan-101    101       Management
vlan-102    102       PXE Boot
vlan-103    103       Compute
vlan-104    104       Storage

The following diagrams show how we laid out the physical and logical topologies for our setup.

Figure 1: Physical Topology: OPNFV Euphrates:

Figure 2: Logical Topology: OPNFV Euphrates:

Deploying OPNFV Euphrates:

 

To deploy, we used the Compass4NFV installer for the Euphrates release of OPNFV. This process required a bare-metal host to act as the JumpHost, from which the entire deployment was controlled. The procedure uses the IPMI interfaces to PXE-boot the target hosts, install Ubuntu 16.04 LTS on them, and configure them as defined in the network.yml file for our scenario. After configuring the required files, we ran the modified deploy.sh script, which pulled the images and repositories needed for the PXE installation of the target host OS and the configuration of OPNFV Euphrates.
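Before starting, it helps to confirm that the JumpHost can actually drive the target hosts over IPMI. The following is a minimal sketch (assuming ipmitool is installed on the JumpHost) using the IPMI addresses and credentials we later record in the DHA file (Appendix D); adjust them to your lab.

### IPMI sanity check from the JumpHost (sketch) ###
for ip in 10.100.202.10 10.100.202.7 10.100.202.8 10.100.202.9 10.100.202.2; do
    echo "=== ${ip} ==="
    # Confirm the BMC is reachable and report the current power state
    ipmitool -I lanplus -H "${ip}" -U ADMIN -P ADMIN chassis power status
    # Optionally force the next boot to PXE, which the Compass provisioning relies on
    # ipmitool -I lanplus -H "${ip}" -U ADMIN -P ADMIN chassis bootdev pxe
done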

Prerequisites to running the Compass4NFV deployment:

  1. Retrieve the installation tarball. A stable release tarball can be retrieved from the OPNFV software download page (http://artifacts.opnfv.org/compass4nfv.html)
    • a. Search for the keyword “compass4nfv/Euphrates” to locate the tarball
    • b. Example – compass4nfv/euphrates/opnfv-2017-03-29_08-55-09.tar.gz
  2. Retrieve the deployment scripts from the Gerrit repository (a fetch sketch follows this list).
    • a. Command: “git clone https://gerrit.opnfv.org/gerrit/compass4nfv”
    • b. Do not clone the Git repository directly under the root directory structure; clone it inside a subfolder.
    • c. To be on the stable/euphrates release, use the command: git checkout opnfv-5.1.0
  3. JumpHost requirements:
    • a. Ubuntu 14.04 LTS Pre-installed with Root (sudo) access.
      • i. Use Ubuntu 14.04 LTS only on the JumpHost; the Compass installer scripts do not work smoothly on Xenial (16.04) and cause the Docker containers running the Compass components to fail
    • b. libvirt virtualization support.
    • c. Minimum 2 NICs.
      • i. PXE installation Network (Receiving PXE request from nodes and providing OS provisioning)
      • ii. IPMI Network (Nodes power control and set boot PXE first via IPMI interface)
      • iii. External Network
    • d. 16GB of RAM for a Bare Metal deployment
    • e. CPU cores: 32, Memory: 64 GB, Hard Disk: 500 GB
  4. Bare Metal Node requirements:
    • a. IPMI enabled for OOB or LOM power control.
    • b. BIOS boot priority should be set to PXE (first) then local hard disk (second).
    • c. Minimum 3 NICs – Refer to the Physical Topology diagram above in Figure 1.
      • i. eth0 – PXE installation Network (Broadcasting PXE request)
      • ii. ipmi – IPMI Network (Receiving IPMI commands from Jumphost)
      • iii. eth1 (trunked port) – OpenStack Management, External, Storage, Tenant networks
  5. Network requirements (on the Switching and Routing layer):
    • a. No DHCP or TFTP server running on networks used by OPNFV.
    • b. 2-6 separate networks with connectivity between Jumphost and nodes:
      • i. PXE installation Network
      • ii. IPMI Network
      • iii. [*] Openstack Management Network
      • iv. [*] Openstack External Network
      • v. [*] Openstack Storage Network
      • vi. [*] Openstack Tenant Network
    • c. Lights-out (OOB) network access from the JumpHost, with IPMI enabled on the nodes.
    • d. External network needs to have Internet access, meaning a gateway and DNS availability.
    • e. Note: The networks with [*] could be sharing one physical NIC, or use a dedicated physical NIC each – which is configurable through the network.yml file.
  6. Gather information on the following before starting execution of the script:
    • a. IPMI IP addresses of the nodes.
    • b. IPMI login credentials for the nodes – their usernames and passwords.
    • c. MAC address of Control Plane / Provisioning interfaces of the Bare Metal nodes.
  7. Compass installer needs the following three files modified:
    • a. network_cfg.yml – For OpenStack networks on the target hosts
    • b. dha file – For target host role, IPMI credentials and target host NIC information including the MAC addresses
    • c. deploy.sh – For declaring the target host OS and OpenStack versions
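As referenced in item 2 above, the scripts and tarball can be pulled onto the JumpHost roughly as follows. This is a minimal sketch assuming the directory layout we later reference in deploy.sh (Appendix E); take the exact tarball URL and file name from the OPNFV artifacts page.

### Fetching Compass4NFV on the JumpHost (sketch) ###
mkdir -p /home/opnfvadmin/compass && cd /home/opnfvadmin/compass
git clone https://gerrit.opnfv.org/gerrit/compass4nfv
cd compass4nfv
git checkout opnfv-5.1.0          # stable/euphrates release tag
# Save the stable Euphrates tarball where TAR_URL in deploy.sh expects it;
# the file name below is the example from prerequisite 1b
wget -O opnfv.tar.gz http://artifacts.opnfv.org/compass4nfv/euphrates/opnfv-2017-03-29_08-55-09.tar.gz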

Installing OPNFV Euphrates using Compass installer:

Configuring the network_cfg.yaml file:

  1. Configure the OpenStack network in the network_cfg.yaml file.
    • a. Note: All interface names in this file must be defined in the dha file through MAC Addresses.
  2. Configure the provider network – Do not use eth0, as eth0 is tied to the PXE Boot network.
  3. Configure OpenStack Management, Tenant and Storage networks.
    • a. Change the VLAN tags according to your setup.
  4. Assign IP address ranges to the networks in the “ip_settings” section
    • a. Including the external network
  5. Configure a public IP (VIP) for the OpenStack Ocata Horizon dashboard
  6. The sample network file we used is shown in Appendix C; a quick sanity check of the edited file is sketched below.
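Before moving on, it is worth confirming that the edited file still parses as YAML and that the VLAN tags and IP ranges match your lab. A minimal sketch, assuming Python with the PyYAML module (python-yaml) is available on the JumpHost:

### Sanity-checking the edited network file (sketch) ###
python -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1])); print(sys.argv[1] + " parses OK")' network.yml
# Eyeball the values that most often need changing per lab
grep -nE 'vlan_tag|cidr|ip_ranges|gw|public_vip|floating_ip' network.yml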

Installation of OPNFV using Compass:

  1. Open the DHA file – We used the one at “compass4nfv/deploy/conf/hardware_environment/huawei-pod1/dha.yml”. The DHA file is the inventory template that records the hosts, their IPMI IP addresses, and the corresponding MAC addresses.
  2. Set TYPE/FLAVOR and POWER TOOL
  3. Set ipmiUser/ipmiPass and ipmiVer
  4. Assign roles to the servers
    • a. NOTE: The “ha” role MUST be assigned along with the controller role, even if there is only one Controller node in the setup.
  5. A sample DHA file is shown in Appendix D.
  6. Modify the network configuration for Bare Metal deployment in the “compass4nfv/deploy/conf/hardware_environment/huawei-pod1/network.yml” file.
  7. Modify the “deploy.sh” script to reference the DHA and the network.yml files you modified.
    • a. Set the OS version for the deployment nodes. Compass4NFV supports Ubuntu-based OpenStack Ocata.
    • b. Set the tarball (TAR_URL) corresponding to your code
    • c. Set the JumpHost PXE NIC (INSTALL_NIC) for the hardware deployment.
    • d. Set the scenario that you want to deploy – We went with the scenario corresponding to the DHA file in Appendix D (os-nosdn-nofeature-ha)
  8. A sample deploy.sh file is shown in Appendix E.
  9. Create a new file “/etc/docker/daemon.json” that sets the Docker storage driver to devicemapper (see the sketch after this list – daemon.json must be valid JSON).
    • a. This works around the DB error we saw with the Compass containers.
    • b. The Compass containers do not support the overlay2 Docker storage driver – change it to devicemapper or aufs.
  10. Before running the deploy.sh script, make sure no other services (apache2, nginx, etc.) are installed or listening on port 80. The Compass installer needs port 80 free to start the Cobbler container.
  11. Run the deploy.sh script – ./deploy.sh
  12. Monitor progress by tailing the “compass4nfv/work/deploy/log/compass-deploy.log” file.
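The Docker storage-driver change from step 9, the port 80 check from step 10, and the kickoff from steps 11 and 12 look roughly like the following on the JumpHost. This is a sketch using our paths; Docker only needs a restart here if it is already installed.

### Pre-flight checks and kickoff on the JumpHost (sketch) ###
sudo mkdir -p /etc/docker
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "storage-driver": "devicemapper"
}
EOF
sudo service docker restart 2>/dev/null || true

# Nothing may listen on port 80 – Compass needs it for the Cobbler container
sudo netstat -tlnp | grep ':80 ' || echo "port 80 is free"

# Launch the deployment
cd /home/opnfvadmin/compass/compass4nfv
./deploy.sh
# From a second terminal, follow progress:
# tail -f /home/opnfvadmin/compass/compass4nfv/work/deploy/log/compass-deploy.log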

Verifying OPNFV Installation:

  1. Login to the Controller node
  2. Connect to the Linux Container running the Utility service
    • a. lxc-ls | grep utility
    • b. lxc-attach -n <utility container name from the previous command>
  3. Source the openrc file – source /root/openrc
  4. Get the credentials for the admin user – printenv | grep OS
  5. Log in to the Horizon dashboard as the “admin” user with the password acquired in the previous step
  6. Create a new subnet and floating IP ranges
  7. Create a test CirrOS instance and assign it a floating IP address (see the sketch after this list).
    • a. Confirm that you can log in to the instance and that the instance can reach the internet (issue a wget or curl against www.google.com)
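From inside the utility container, the verification steps above map to OpenStack CLI calls roughly like the following. The network, subnet, and instance names are only illustrative; ext-net comes from the public_net_info section of our network file (Appendix C), and the CirrOS image and m1.tiny flavor are assumed to exist.

### Post-install verification from the utility container (sketch) ###
source /root/openrc
openstack service list              # core services registered in Keystone
openstack compute service list      # nova-compute up on every compute host
openstack network agent list        # neutron agents alive
# Illustrative names below
openstack network create demo-net
openstack subnet create --network demo-net --subnet-range 192.168.50.0/24 demo-subnet
openstack floating ip create ext-net
openstack server create --image cirros --flavor m1.tiny --network demo-net test-vm
# Associate the floating IP printed above, SSH in, then wget/curl www.google.com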

Deploying ONAP Amsterdam over OPNFV Euphrates:

  1. Login to the Controller node (Host5 in our case)
  2. Connect to the Linux Container running the Utility service
    • a. lxc-ls | grep utility
    • b. lxc-attach -n <utility container name from the previous command>
  3. Source the openrc file – source /root/openrc
  4. Create the onap_openstack.yaml and onap_openstack.env files
  5. Modify them as per your environment
  6. Use OpenStack stack create command to deploy ONAP
    • a. Sample – openstack stack create -t onap_openstack.yaml -e onap_openstack.env onap
  7. Verify your ONAP deployment by checking the stack status – openstack stack list (a sketch of these commands follows this list)
  8. Check the status of the instances – openstack server list | grep onap
  9. Modify your local machine’s /etc/hosts file and add entries for the portal, policy, sdc, and vid services of ONAP, so that your local machine resolves those FQDNs to the assigned IPs

    ### Example ###
    10.100.202.101 policy.api.simpledemo.onap.org
    10.100.202.102 portal.api.simpledemo.onap.org
    10.100.202.103 sdc.api.simpledemo.onap.org
    10.100.202.104 vid.api.simpledemo.onap.org
    10.100.202.105 aai.api.simpledemo.onap.org
  10. Once done, connect to the ONAP Portal using the URL – http://portal.api.simpledemo.onap.org:8989/ONAPPORTAL/login.htm
    • a. The default username and password are demo / demo123456!
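Steps 6 through 8 above, run from the utility container (or wherever your OpenStack credentials are sourced), look roughly like this. Allow plenty of time: Heat only creates the VMs, and the ONAP components are then installed by cloud-init after the instances boot.

### Deploying and watching the ONAP stack (sketch) ###
source /root/openrc
openstack stack create -t onap_openstack.yaml -e onap_openstack.env onap
# Wait for CREATE_COMPLETE; CREATE_FAILED usually points at quotas, images, or keypair issues
watch -n 30 "openstack stack list"
openstack server list | grep onap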

Section 3: Appendices

Appendix A: OpenStack Method: Network Configuration “/etc/network/interfaces” Sample

### JumpHost / Deployment Host – Sample configuration example ###

auto eno1
iface eno1 inet static
address 172.16.12.10
netmask 255.255.252.0
network 172.16.12.0
broadcast 172.16.15.255
gateway 172.16.12.1
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers 8.8.8.8 8.8.4.4


### Target Host - Sample config example ###
auto eno1
iface eno1 inet manual
bond-master bond0
bond-primary eno1
auto eno2
iface eno2 inet manual
bond-master bond1
bond-primary eno2
auto bond0
iface bond0 inet manual
bond-slaves none
bond-mode active-backup
bond-miimon 100
bond-downdelay 200
bond-updelay 200
# Container/Host management bridge
auto br-mgmt
iface br-mgmt inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports bond0
address 172.16.12.11
netmask 255.255.252.0
gateway 172.16.12.1
dns-nameservers 8.8.8.8 8.8.4.4
# This bond will carry VLAN and VXLAN traffic to ensure isolation from control plane traffic on bond0.
auto bond1
iface bond1 inet manual
bond-slaves none
bond-mode active-backup
bond-miimon 100
bond-downdelay 250
bond-updelay 250


# OpenStack Networking VXLAN (tunnel/overlay) VLAN interface
auto bond1.1000
iface bond1.1000 inet manual
vlan-raw-device bond1


# Storage network VLAN interface (optional)
auto bond1.1001
iface bond1.1001 inet manual
vlan-raw-device bond1


auto bond1.1002
iface bond1.1002 inet manual
vlan-raw-device bond1
auto bond1.9
iface bond1.9 inet manual
vlan-raw-device bond1
# compute1 VXLAN (tunnel/overlay) bridge config
auto br-vxlan
iface br-vxlan inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports bond1.1000
address 172.29.240.11
netmask 255.255.252.0
# OpenStack Networking VLAN bridge
auto br-vlan
iface br-vlan inet manual
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports bond1.1002
# Create veth pair, do not abort if already exists
pre-up ip link add br-vlan-veth type veth peer name eth12 || true
# Set both ends UP
pre-up ip link set br-vlan-veth up
pre-up ip link set eth12 up
# Delete veth pair on DOWN
post-down ip link del br-vlan-veth || true
bridge_ports bond1 br-vlan-veth
auto br-storage
iface br-storage inet static
bridge_stp off
bridge_waitport 0
bridge_fd 0
bridge_ports bond1.1001
address 172.29.244.11
netmask 255.255.252.0
source /etc/network/interfaces.d/*.cfg
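After bringing the interfaces up on a target host, the bonds, bridges, and VLAN sub-interfaces can be checked with a few standard commands. This is a sketch; it assumes bridge-utils is installed and uses the management gateway from the sample above.

### Checking bonds, bridges, and VLANs on a target host (sketch) ###
cat /proc/net/bonding/bond0          # active-backup bond state and active slave
cat /proc/net/bonding/bond1
brctl show                           # br-mgmt, br-vxlan, br-vlan, br-storage and their ports
ip -d link show bond1.1000           # VLAN sub-interfaces exist and are UP
ping -c 3 172.16.12.1                # management gateway reachable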

Appendix B: OpenStack Method: “/etc/openstack_deploy/openstack_user_config.yml” Sample


---
cidr_networks:
container: 172.16.12.0/22
tunnel: 172.29.240.0/22
storage: 172.29.244.0/22
used_ips:
- "172.16.12.1,172.16.12.50"
- "172.29.240.1,172.29.240.50"
- "172.29.244.1,172.29.244.50"
- "172.29.248.1,172.29.248.50"
global_overrides:
internal_lb_vip_address: 172.16.12.14
# The below domain name must resolve to an IP address
# in the CIDR specified in haproxy_keepalived_external_vip_cidr.
# If using different protocols (https/http) for the public/internal
# endpoints the two addresses must be different.
external_lb_vip_address: 172.16.9.14
tunnel_bridge: "br-vxlan"
management_bridge: "br-mgmt"
provider_networks:
- network:
container_bridge: "br-mgmt"
container_type: "veth"
container_interface: "eth1"
ip_from_q: "container"
type: "raw"
group_binds:
- all_containers
- hosts
is_container_address: true
is_ssh_address: true
- network:
container_bridge: "br-vxlan"
container_type: "veth"
container_interface: "eth10"
ip_from_q: "tunnel"
type: "vxlan"
range: "1:1000"
net_name: "vxlan"
group_binds:
- neutron_linuxbridge_agent
- network:
container_bridge: "br-vlan"
container_type: "veth"
container_interface: "eth12"
host_bind_override: "eth12"
type: "flat"
net_name: "flat"
group_binds:
- neutron_linuxbridge_agent
- network:
container_bridge: "br-vlan"
container_type: "veth"
container_interface: "eth11"
type: "vlan"
range: "1:1"
net_name: "vlan"
group_binds:
- neutron_linuxbridge_agent
- network:
container_bridge: "br-storage"
container_type: "veth"
container_interface: "eth2"
ip_from_q: "storage"
type: "raw"
group_binds:
- glance_api
- cinder_api
- cinder_volume
- nova_compute


# Infrastructure
# galera, memcache, rabbitmq, utility
shared-infra_hosts:
infra1:
ip: 172.16.12.14
infra2:
ip: 172.16.12.15
infra3:
ip: 172.16.12.16
# repository (apt cache, python packages, etc)
repo-infra_hosts:
infra1:
ip: 172.16.12.14
infra2:
ip: 172.16.12.15
infra3:
ip: 172.16.12.16
# load balancer
# Ideally the load balancer should not use the Infrastructure hosts.
# Dedicated hardware is best for improved performance and security.
haproxy_hosts:
infra1:
ip: 172.16.12.14
infra2:
ip: 172.16.12.15
infra3:
ip: 172.16.12.16
# rsyslog server
log_hosts:
log1:
ip: 172.16.12.18
# OpenStack
# keystone
identity_hosts:
infra1:
ip: 172.16.12.14
infra2:
ip: 172.16.12.15
infra3:
ip: 172.16.12.16
# cinder api services
storage-infra_hosts:
infra1:
ip: 172.16.12.14
infra2:
ip: 172.16.12.15
infra3:
ip: 172.16.12.16


# glance
# The settings here are repeated for each infra host.
# They could instead be applied as global settings in
# user_variables, but are left here to illustrate that
# each container could have different storage targets.
image_hosts:
infra1:
ip: 172.16.12.14
container_vars:
limit_container_types: glance
glance_nfs_client:
- server: "172.29.244.18"
remote_path: "/images"
local_path: "/var/lib/glance/images"
type: "nfs"
options: "_netdev,auto"
infra2:
ip: 172.16.12.15
container_vars:
limit_container_types: glance
glance_nfs_client:
- server: "172.29.244.18"
remote_path: "/images"
local_path: "/var/lib/glance/images"
type: "nfs"
options: "_netdev,auto"
infra3:
ip: 172.16.12.16
container_vars:
limit_container_types: glance
glance_nfs_client:
- server: "172.29.244.18"
remote_path: "/images"
local_path: "/var/lib/glance/images"
type: "nfs"
options: "_netdev,auto"


# nova api, conductor, etc services
compute-infra_hosts:
infra1:
ip: 172.16.12.14
infra2:
ip: 172.16.12.15
infra3:
ip: 172.16.12.16
# heat
orchestration_hosts:
infra1:
ip: 172.16.12.14
infra2:
ip: 172.16.12.15
infra3:
ip: 172.16.12.16
# horizon
dashboard_hosts:
infra1:
ip: 172.16.12.14
infra2:
ip: 172.16.12.15
infra3:
ip: 172.16.12.16
# neutron server, agents (L3, etc)
network_hosts:
infra1:
ip: 172.16.12.14
infra2:
ip: 172.16.12.15
infra3:
ip: 172.16.12.16


# ceilometer (telemetry API)
metering-infra_hosts:
infra1:
ip: 172.16.12.14
infra2:
ip: 172.16.12.15
infra3:
ip: 172.16.12.16
# aodh (telemetry alarm service)
metering-alarm_hosts:
infra1:
ip: 172.16.12.14
infra2:
ip: 172.16.12.15
infra3:
ip: 172.16.12.16
# gnocchi (telemetry metrics storage)
metrics_hosts:
infra1:
ip: 172.16.12.14
infra2:
ip: 172.16.12.15
infra3:
ip: 172.16.12.16
# nova hypervisors
compute_hosts:
compute1:
ip: 172.16.12.11
compute2:
ip: 172.16.12.12
compute3:
ip: 172.16.12.13
compute4:
ip: 172.16.12.17
# ceilometer compute agent (telemetry)
metering-compute_hosts:
compute1:
ip: 172.16.12.11
compute2:
ip: 172.16.12.12
compute3:
ip: 172.16.12.13
compute4:
ip: 172.16.12.17


# cinder volume hosts (NFS-backed)
# The settings here are repeated for each infra host.
# They could instead be applied as global settings in
# user_variables, but are left here to illustrate that
# each container could have different storage targets.
storage_hosts:
infra1:
ip: 172.16.12.14
container_vars:
cinder_backends:
limit_container_types: cinder_volume
nfs_volume:
volume_backend_name: NFS_VOLUME1
volume_driver: cinder.volume.drivers.nfs.NfsDriver
nfs_mount_attempts: 3
nfs_mount_options: "rsize=65535,wsize=65535,timeo=1200,actimeo=120"
nfs_shares_config: /etc/cinder/nfs_shares
nas_secure_file_permissions: False
nas_secure_file_operations: False
shares:
- ip: "172.29.244.18"
share: "/vol/cinder"
infra2:
ip: 172.16.12.15
container_vars:
cinder_backends:
limit_container_types: cinder_volume
nfs_volume:
volume_backend_name: NFS_VOLUME1
volume_driver: cinder.volume.drivers.nfs.NfsDriver
nfs_mount_attempts: 3
nfs_mount_options: "rsize=65535,wsize=65535,timeo=1200,actimeo=120"
nfs_shares_config: /etc/cinder/nfs_shares
nas_secure_file_permissions: False
nas_secure_file_operations: False
shares:
- ip: "172.29.244.18"
share: "/vol/cinder"
infra3:
ip: 172.16.12.16
container_vars:
cinder_backends:
limit_container_types: cinder_volume
nfs_volume:
volume_backend_name: NFS_VOLUME1
volume_driver: cinder.volume.drivers.nfs.NfsDriver
nfs_mount_attempts: 3
nfs_mount_options: "rsize=65535,wsize=65535,timeo=1200,actimeo=120"
nfs_shares_config: /etc/cinder/nfs_shares
nas_secure_file_permissions: False
nas_secure_file_operations: False
shares:
- ip: "172.29.244.18"
share: "/vol/cinder"

Appendix C: OPNFV Method: “compass4nfv/network.yml” Sample


---
nic_mappings: []
bond_mappings: []

provider_net_mappings:
  - name: br-prv
    network: physnet
    interface: eth10
    type: ovs
    role:
      - controller

sys_intf_mappings:
  - name: mgmt
    interface: eth0
    type: normal
    vlan_tag: None
    role:
      - controller
      - compute

  - name: tenant
    interface: eth1
    type: normal
    vlan_tag: 152
    role:
      - controller
      - compute

  - name: storage
    interface: eth1
    type: normal
    vlan_tag: 153
    role:
      - controller
      - compute

  - name: external
    interface: eth1
    type: ovs
    vlan_tag: 103
    role:
      - controller
      - compute

ip_settings:
  - name: mgmt
    ip_ranges:
      - - "10.1.0.50"
        - "10.1.0.100"
    dhcp_ranges:
      - - "10.1.0.2"
        - "10.1.0.49"
    cidr: "10.1.0.0/24"
    gw: "10.1.0.1"
    role:
      - controller
      - compute

  - name: tenant
    ip_ranges:
      - - "172.16.1.1"
        - "172.16.1.50"
    cidr: "172.16.1.0/24"
    role:
      - controller
      - compute

  - name: storage
    ip_ranges:
      - - "172.16.2.1"
        - "172.16.2.50"
    cidr: "172.16.2.0/24"
    role:
      - controller
      - compute

  - name: external
    ip_ranges:
      - - "10.100.202.30"
        - "10.100.202.50"
    cidr: "10.100.202.0/24"
    gw: "10.100.202.252"
    role:
      - controller
      - compute

internal_vip:
  ip: 10.1.0.222
  netmask: "24"
  interface: mgmt

public_vip:
  ip: 10.100.202.222
  netmask: "24"
  interface: external

#onos_nic: eth2

tenant_net_info:
  type: vxlan
  range: "1:1000"
  provider_network: None

public_net_info:
  enable: "True"
  network: ext-net
  type: flat
  segment_id: 10
  subnet: ext-subnet
  provider_network: physnet
  router: router-ext
  enable_dhcp: "False"
  no_gateway: "False"
  external_gw: "10.100.202.252"
  floating_ip_cidr: "10.100.202.0/24"
  floating_ip_start: "10.100.202.150"
  floating_ip_end: "10.100.202.200"

Appendix D: OPNFV Method: DHA file – “compass4nfv/os-nosdn-nofeature-ha.yml”


---
TYPE: baremetal
FLAVOR: cluster
POWER_TOOL: ipmitool

ipmiUser: ADMIN
ipmiVer: '2.0'

hosts:
  - name: host1
    mac: '0C:C4:7A:8E:9D:80'
    interfaces:
      - eth1: '0C:C4:7A:8E:9D:81'
    ipmiIp: 10.100.202.10
    ipmiPass: ADMIN
    roles:
      - compute

  - name: host2
    mac: '0C:C4:7A:8F:AF:EA'
    interfaces:
      - eth1: '0C:C4:7A:8F:AF:EB'
    ipmiIp: 10.100.202.7
    ipmiPass: ADMIN
    roles:
      - compute

  - name: host3
    mac: '0C:C4:7A:8E:9D:8A'
    interfaces:
      - eth1: '0C:C4:7A:8E:9D:8B'
    ipmiIp: 10.100.202.8
    ipmiPass: ADMIN
    roles:
      - compute

  - name: host4
    mac: '0C:C4:7A:8E:9E:CA'
    interfaces:
      - eth1: '0C:C4:7A:8E:9E:CB'
    ipmiIp: 10.100.202.9
    ipmiPass: ADMIN
    roles:
      - compute

  - name: host5
    mac: '0C:C4:7A:88:79:E6'
    interfaces:
      - eth1: '0C:C4:7A:88:79:E7'
    ipmiIp: 10.100.202.2
    ipmiPass: ADMIN
    roles:
      - controller
      - ha

Appendix E: OPNFV Method: deploy.sh file


# Set OS version for target hosts
# Ubuntu16.04 or CentOS7
#export OS_VERSION=xenial/centos7
export OS_VERSION=xenial
# Set ISO image corresponding to your code
# export TAR_URL=file:///home/compass/compass4nfv.iso
export TAR_URL=file:///home/opnfvadmin/compass/compass4nfv/opnfv.tar.gz
# Set hardware deploy jumpserver PXE NIC
# You need to comment out it when virtual deploy.
export INSTALL_NIC=p1p1
# DHA is your dha.yml's path
# export DHA=/home/compass4nfv/deploy/conf/vm_environment/os-nosdn-nofeature-ha.yml
export DHA=/home/opnfvadmin/compass/compass4nfv/os-nosdn-nofeature-ha.yml
# NETWORK is your network.yml's path
# export NETWORK=/home/compass4nfv/deploy/conf/vm_environment/huawei-virtual1/network.yml
export NETWORK=/home/opnfvadmin/compass/compass4nfv/network.yml
export OPENSTACK_VERSION=${OPENSTACK_VERSION:-ocata}
export DEPLOY_FIRST_TIME="false"
export OFFLINE_DEPLOY=Enable
if [[ "x"$KUBERNETES_VERSION != "x" ]]; then
unset OPENSTACK_VERSION
fi
COMPASS_DIR=`cd ${BASH_SOURCE[0]%/*}/;pwd`
export COMPASS_DIR
if [[ -z $DEPLOY_COMPASS && -z $DEPLOY_HOST && -z $REDEPLOY_HOST ]]; then
export DEPLOY_COMPASS="true"
export DEPLOY_HOST="true"
fi
LOG_DIR=$COMPASS_DIR/work/deploy/log
export LOG_DIR
mkdir -p $LOG_DIR
$COMPASS_DIR/deploy/launch.sh $* 2>&1 | tee $LOG_DIR/compass-deploy.log
if [[ $(tail -1 $LOG_DIR/compass-deploy.log) != 'compass deploy success' ]]; then
exit 1
fi

Appendix F: ONAP Amsterdam: “onap_openstack.env” ONAP Environment file


parameters:
  ##############################################
  #                                            #
  # Parameters used across all ONAP components #
  #                                            #
  ##############################################
  public_net_id: ae5afe9c-8d45-4dd9-b002-7a8ec898125d
  public_net_name: ext-net
  ubuntu_1404_image: ubuntu-1404
  ubuntu_1604_image: ubuntu-1604
  flavor_small: m1.small
  flavor_medium: m1.medium
  flavor_large: m1.large
  flavor_xlarge: m1.xlarge
  flavor_xxlarge: m1.xxlarge
  vm_base_name: onap
  key_name: onap_key
  pub_key: < Insert your RSA Key here >
  nexus_repo: https://nexus.onap.org/content/sites/raw
  nexus_docker_repo: nexus3.onap.org:10001
  nexus_username: docker
  nexus_password: docker
  dmaap_topic: AUTO
  artifacts_version: 1.1.1
  openstack_tenant_id: 7890c8e804e84a779cd0ef406a07d386
  openstack_tenant_name: admin
  openstack_username: admin
  openstack_api_key: 49ef27251b38c5124378010e7be8758eb
  openstack_auth_method: password
  openstack_region: RegionOne
  horizon_url: https://10.100.202.222/
  keystone_url: https://10.100.202.222:5000/
  cloud_env: openstack

  ######################
  #                    #
  # Network parameters #
  #                    #
  ######################
  dns_list: 8.8.8.8
  external_dns: 8.8.8.8
  dns_forwarder: 10.0.100.1
  oam_network_cidr: 10.0.0.0/16

  ### Private IP addresses ###
  aai1_ip_addr: 10.0.1.1
  aai2_ip_addr: 10.0.1.2
  appc_ip_addr: 10.0.2.1

  sdc_branch: amsterdam
  sdnc_branch: amsterdam
  vid_branch: amsterdam
  clamp_branch: amsterdam
  vnfsdk_branch: amsterdam
  aai_docker: v1.1.1
  aai_sparky_docker: v1.1.1
  appc_docker: v1.2.0
  so_docker: v1.1.2
  dcae_docker: v1.1.2
  policy_docker: v1.1.3
  portal_docker: v1.3.0
  robot_docker: 1.1-STAGING-latest
  sdc_docker: v1.1.0
  sdnc_docker: v1.2.2
  vid_docker: v1.1.2
  clamp_docker: v1.1.0
  msb_docker: 1.0.0
  mvim_docker: v1.0.0
  uui_docker: v1.0.1
  esr_docker: v1.0.0
  dgbuilder_docker: v0.1.1
  cli_docker: v1.1.0
  vfc_nokia_docker: v1.0.2
  vfc_ztevmanagerdriver_docker: v1.0.2
  vfc_ztesdncdriver_docker: v1.0.0
  vfc_vnfres_docker: v1.0.1
  vfc_vnfmgr_docker: v1.0.1
  vfc_vnflcm_docker: v1.0.1
  vfc_resmanagement_docker: v1.0.0
  vfc_nslcm_docker: v1.0.2
  vfc_huawei_docker: v1.0.2
  vfc_jujudriver_docker: v1.0.0
  vfc_gvnfmdriver_docker: v1.0.1
  vfc_emsdriver_docker: v1.0.1
  vfc_catalog_docker: v1.0.2
  vfc_wfengine_mgrservice_docker: v1.0.0
  vfc_wfengine_activiti_docker: v1.0.0

  #####################
  #                   #
  # ONAP repositories #
  #                   #
  #####################
  aai_repo: http://gerrit.onap.org/r/aai/test-config
  appc_repo: http://gerrit.onap.org/r/appc/deployment.git
  mr_repo: http://gerrit.onap.org/r/dcae/demo/startup/message-router.git
  so_repo: http://gerrit.onap.org/r/so/docker-config.git
  policy_repo: http://gerrit.onap.org/r/policy/docker.git
  portal_repo: http://gerrit.onap.org/r/portal.git
  robot_repo: http://gerrit.onap.org/r/testsuite/properties.git
  sdc_repo: http://gerrit.onap.org/r/sdc.git
  sdnc_repo: http://gerrit.onap.org/r/sdnc/oam.git
  vid_repo: http://gerrit.onap.org/r/vid.git
  clamp_repo: http://gerrit.onap.org/r/clamp.git
  vnfsdk_repo: http://gerrit.onap.org/r/vnfsdk/refrepo.git

Section 4: References

OpenStack Ocata Ansible installation method:

https://docs.openstack.org/project-deploy-guide/openstack-ansible/ocata/

OPNFV Euphrates Installation using Compass4NFV method:

http://docs.opnfv.org/en/stable-euphrates/submodules/compass4nfv/docs/release/installation/index.html#compass4nfv-installation

ONAP Amsterdam deployment:

https://onap.readthedocs.io/en/amsterdam/guides/onap-developer/settingup/fullonap.html

For the full & complete 30 page Methods of Procedure (MOP) download below

To learn more about our experience and to engage our team, please reach out to us on engage@serro.com.