Jun 01, 2017

[ AWS Transit VPC ]: Scalability Tests on a Juniper vSRX-based AWS Transit VPC using Automation Tools


Welcome to the latest edition of our blog series “AWS Transit VPC inside AWS using Juniper vSRX”.

This is the final blog in a series of four blog posts on the topic of AWS Transit VPC. If you’re new to this series, here are the first three blog posts on this subject:

And here’s  a summary of what was previously covered:

  • Introduction to AWS Transit VPCs
  • Introduction to AWS CloudFormation and AWS Lambda
  • The way AWS Transit VPCs are configured
  • Why we chose Juniper vSRX as our hub firewall
  • How we pieced it all together


This brings us to the final post in the series: how we performed scale testing by deploying multiple Spoke VPCs across different AWS Regions, and the use cases of this AWS Transit VPC solution. To complete the solution from an end-to-end perspective, we created Spoke VPCs and launched instances inside them.


Each Spoke VPC is associated with the following:

  • 1 Spoke VPC CIDR block
  • 2 subnets – One public and one private
  • 2 Elastic Network Interfaces (two per Linux instance)
  • 1 Linux-based t2.micro instance with two ENIs
  • 1 Elastic IP address – associated with the instance's interface in the public subnet
  • 2 Security Groups – One public and one private
  • 1 Security Key Pair per region
  • 1 Internet Gateway
  • 2 Route Tables – One public and one private


Manually creating one or two Spoke VPCs per region is fairly simple. However, to test and deploy at scale, automation is the way to go. Let us walk through the automation tools the Serro team designed to accomplish the scale testing and deployment.


We wrote Python scripts to create the Spoke VPCs: they grab the user's AWS credentials, log into AWS using CLI-based API calls, create the Spoke VPCs with one VGW each, and add the desired tag to each VGW.
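
A minimal sketch of that creation step, assuming boto3 and an illustrative tag key (the real tag is whatever the VGW Poller is configured to watch for):

```python
# Sketch: create one Spoke VPC with an attached, tagged VGW. Assumes boto3 and
# default AWS credentials; the tag key/value are illustrative.
import boto3

def create_spoke_vpc(region, cidr, tag_key="transitvpc:spoke", tag_value="true"):
    ec2 = boto3.client("ec2", region_name=region)

    # Create the Spoke VPC with the desired CIDR block.
    vpc_id = ec2.create_vpc(CidrBlock=cidr)["Vpc"]["VpcId"]

    # Create a Virtual Private Gateway and attach it to the Spoke VPC.
    vgw_id = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]["VpnGatewayId"]
    ec2.attach_vpn_gateway(VpcId=vpc_id, VpnGatewayId=vgw_id)

    # Tag the VGW so the VGW Poller Lambda can detect it.
    ec2.create_tags(Resources=[vgw_id], Tags=[{"Key": tag_key, "Value": tag_value}])
    return vpc_id, vgw_id
```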


After creating the Spoke VPC artifacts, the VGW Poller (the Lambda function introduced in Blog #2) detected the Virtual Private Gateways carrying the desired tags and proceeded to create VPN Connections towards the vSRX instances' public IP addresses.
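
The VGW Poller's core behaviour can be pictured roughly as follows – a simplified boto3 sketch, where the tag key, BGP ASN, and vSRX public IP are placeholders rather than the solution's actual values:

```python
# Sketch of the VGW Poller's core behaviour (illustrative only): find VGWs
# carrying the spoke tag and create a VPN Connection towards the vSRX.
import boto3

def poll_and_connect(region, vsrx_public_ip, tag_key="transitvpc:spoke", asn=64512):
    ec2 = boto3.client("ec2", region_name=region)

    # Find Virtual Private Gateways that carry the desired tag.
    vgws = ec2.describe_vpn_gateways(
        Filters=[{"Name": f"tag:{tag_key}", "Values": ["true"]}]
    )["VpnGateways"]

    for vgw in vgws:
        # A Customer Gateway object represents the vSRX's public endpoint.
        cgw_id = ec2.create_customer_gateway(
            BgpAsn=asn, PublicIp=vsrx_public_ip, Type="ipsec.1"
        )["CustomerGateway"]["CustomerGatewayId"]

        # The VPN Connection towards the vSRX, using dynamic (BGP) routing.
        ec2.create_vpn_connection(
            CustomerGatewayId=cgw_id,
            VpnGatewayId=vgw["VpnGatewayId"],
            Type="ipsec.1",
            Options={"StaticRoutesOnly": False},
        )
```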


Creating the VPN Connections caused the IPSec VPN and BGP configuration to be placed in an S3 bucket using a PUT operation, which in turn invoked the Juniper Configurator Lambda function. The Juniper Configurator then pushed these configurations to the vSRX instances and committed them. Upon a successful commit, the IPSec VPN tunnels towards the Virtual Private Gateways came up, establishing BGP peering sessions over the IPSec VPN tunnels.
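
The configuration push itself can be pictured as a small Junos PyEZ routine – a sketch that assumes key-based SSH access and a 'set'-format configuration snippet retrieved from S3; the actual Juniper Configurator Lambda performs the equivalent:

```python
# Sketch: push a Junos configuration snippet to a vSRX and commit it.
# Assumes junos-eznc (PyEZ) and SSH key-based access; host, user, and key
# file are placeholders.
from jnpr.junos import Device
from jnpr.junos.utils.config import Config

def push_config(vsrx_host, config_text, user="admin", key_file="/tmp/vsrx_key.pem"):
    with Device(host=vsrx_host, user=user, ssh_private_key_file=key_file) as dev:
        cu = Config(dev)
        cu.load(config_text, format="set")   # IPSec VPN + BGP stanzas from S3
        cu.commit()                           # tunnels and BGP sessions come up
```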


The Scale Testing script takes the following parameters as inputs and creates Spoke VPCs in the desired AWS Regions (a sketch of the input handling follows the list):

  • Number of Spoke VPCs per Region
  • Tag to be applied to the Virtual Private Gateway
  • Spoke VPC CIDR from the 10.0.0.0/8 supernet
  • Interval between creation of each Spoke VPC
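
Here is a sketch of how those inputs might be wired up; the flag names and defaults are illustrative, not the script's actual interface:

```python
# Sketch of the Scale Testing script's input handling (flag names are illustrative).
import argparse

parser = argparse.ArgumentParser(description="Create Spoke VPCs at scale")
parser.add_argument("--vpcs-per-region", type=int, required=True,
                    help="Number of Spoke VPCs to create in each Region")
parser.add_argument("--vgw-tag", required=True,
                    help="Tag applied to each Virtual Private Gateway")
parser.add_argument("--base-cidr", default="10.0.0.0/24",
                    help="First Spoke VPC CIDR, taken from the 10.0.0.0/8 supernet")
parser.add_argument("--interval", type=float, default=30.0,
                    help="Seconds to wait between creating each Spoke VPC")
args = parser.parse_args()
```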


Based on the above parameters, the script ran through the following steps:

  1. Create Spoke VPCs starting from the CIDR block specified as a parameter, incrementing the third octet serially for each subsequent Spoke VPC.
  2. Create a new Virtual Private Gateway and attach it to the Spoke VPC.
  3. Add the desired tag to the Spoke VGW.
  4. Create a local log entry for the Spoke VPC and the Spoke VGW.

This all led to the compilation of a VPC-List file in the local directory from which the script was invoked.
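
Stepping the third octet and recording each Spoke VPC can be pictured roughly like this (the file name and log format here are illustrative):

```python
# Sketch: derive successive Spoke VPC CIDRs by stepping the third octet, and
# append each created VPC/VGW pair to a local VPC-List file (format illustrative).
import csv
import ipaddress

def next_cidrs(base_cidr, count):
    base = ipaddress.ip_network(base_cidr)
    for i in range(count):
        # Adding 256 addresses per step moves the third octet by one for a /24
        # base (e.g. 10.0.0.0/24 -> 10.0.1.0/24 -> 10.0.2.0/24).
        yield ipaddress.ip_network((int(base.network_address) + i * 256, base.prefixlen))

def log_spoke(vpc_id, vgw_id, cidr, path="vpc-list.csv"):
    # One local log entry per Spoke VPC / Spoke VGW pair.
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([vpc_id, vgw_id, str(cidr)])
```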


Once the creation of the Spoke VPCs completed – along with the establishment of BGP peering sessions from the VGWs towards the vSRX instances – we proceeded to perform data plane testing of the complete topology. For this we designed a new script, which ran as follows:

  1. The script read the VPC-List file produced by the Spoke VPC creation script.
  2. For each Spoke VPC listed, it created two subnets (one public, one private), two security groups (one public, one private), one Internet Gateway, two route tables (one public, one private), and one EC2 instance with two NICs.
  3. The EC2 instances were saved locally as a list on the user's machine.
  4. Performance monitoring tools – iperf3, fping, and mtr – were installed on each of these EC2 instances.
  5. Tests were run across the EC2 instances: each instance started iperf3 in server mode, listening on the default port, and initiated 10-second iperf3 tests towards the other (n-1) instances (see the sketch after this list).
  6. The results of these iperf3 tests were pushed to a central location in CSV format.
  7. Once the results were collected, the instances could be deleted by invoking the delete function of the data plane testing script – along with other artifacts such as security groups, route tables, subnets, and Elastic IP addresses – before moving on to delete the Spoke VPCs.
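
The full-mesh test boils down to running iperf3 between every ordered pair of instances, along the lines of the sketch below; the run_ssh() helper and result file are assumptions rather than the script's actual code:

```python
# Sketch of the full-mesh iperf3 run (illustrative). Assumes a run_ssh() helper
# that executes a command on an instance over SSH and returns its stdout.
import csv

def run_mesh_tests(instances, run_ssh, duration=10, out_path="iperf-results.csv"):
    # Start iperf3 in server mode on every instance (default port, daemonized).
    for inst in instances:
        run_ssh(inst["public_ip"], "iperf3 -s -D")

    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["source", "destination", "json_result"])
        # Each instance tests towards the other (n-1) instances for `duration` seconds.
        for src in instances:
            for dst in instances:
                if src is dst:
                    continue
                result = run_ssh(
                    src["public_ip"],
                    f"iperf3 -c {dst['private_ip']} -t {duration} -J",
                )
                writer.writerow([src["private_ip"], dst["private_ip"], result])
```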


The delete function of our Scale Testing script waited for the VPN Connections to be deleted from inside AWS, and then proceeded to delete the Spoke VPCs. At the end of a successful delete cycle, none of the Spoke VPCs we had created remained inside AWS; the test bed was completely cleared and ready for fresh use.


Here’s a simplified description of how it worked:

  1. The user ran the script to delete the data plane scale-testing artifacts.
  2. The delete function of the Scale Testing script was invoked; it read the VPC-List file and started the Spoke VPC delete process (sketched after this list).
  3. The tags on the Spoke VGWs were replaced with dummy values.
  4. The detection of the dummy value on the VGW tags triggered the VGW Poller Lambda function.
  5. The VGW Poller deleted the VPN Connections towards the vSRX instances and placed the corresponding delete configuration, in Junos CLI format, in the S3 bucket.
  6. The Juniper Configurator pushed the delete configuration – removing the IPSec VPN tunnels and the BGP peering sessions running over them – to the vSRX instances.
  7. The Juniper Configurator then committed the configuration changes on the vSRX CLI.
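
Put together, the delete path can be sketched as follows, assuming boto3; the dummy tag value and polling interval are illustrative, and the VGW detach/delete is shown for completeness:

```python
# Sketch of the delete path (illustrative): re-tag the VGW with a dummy value so
# the VGW Poller tears down the VPN, wait for the VPN Connections to disappear,
# then detach/delete the VGW and delete the Spoke VPC.
import time
import boto3

def delete_spoke(region, vpc_id, vgw_id, tag_key="transitvpc:spoke"):
    ec2 = boto3.client("ec2", region_name=region)

    # Replace the spoke tag with a dummy value; the VGW Poller reacts by deleting
    # the VPN Connections and queueing the Junos delete configuration.
    ec2.create_tags(Resources=[vgw_id], Tags=[{"Key": tag_key, "Value": "delete"}])

    # Wait until every VPN Connection on this VGW reports the 'deleted' state.
    while True:
        conns = ec2.describe_vpn_connections(
            Filters=[{"Name": "vpn-gateway-id", "Values": [vgw_id]}]
        )["VpnConnections"]
        if all(c["State"] == "deleted" for c in conns):
            break
        time.sleep(15)

    # Clean up the gateway and, finally, the Spoke VPC itself.
    ec2.detach_vpn_gateway(VpcId=vpc_id, VpnGatewayId=vgw_id)
    ec2.delete_vpn_gateway(VpnGatewayId=vgw_id)
    ec2.delete_vpc(VpcId=vpc_id)
```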


For more details about this successful deploy-at-scale process, reach out to us at engage@serro.com.


About Serro:

At SERRO, we design, deploy and operate the world's largest and most complex technology environments. We specialize in NFV / SDN system design, Workflow Automation, and the development of network-centric core code. Our global operational experience combined with our engineering heritage drives business outcomes that enable service automation and process efficiencies to power tomorrow's software-defined businesses. Follow us on Twitter @TeamSerro and visit us at www.serro.com. To find our development contributions to the open source community, visit Serro's GitHub Repository: https://github.com/serrollc.


About the Author:

Shreyans has been a Solutions Engineer at Serro since early 2014. He holds a Master of Science in Electrical Engineering from San Jose State University. His experience includes enterprise, data center and service provider routing, switching and security solutions across multiple vendors – Juniper Networks, Cisco, Palo Alto Networks, Brocade and Huawei – as well as cloud computing solutions like Amazon Web Services and OpenStack. In his free time, Shreyans takes pictures of landscapes around the Bay Area, with the Golden Gate Bridge being his muse. He can be found on LinkedIn and Instagram.
