Source: Pipeline Volume 13, Issue 4
Transformation is nothing new to the telecommunications industry. In fact, it’s pretty much a given. The breakup of the Bell System more than two decades ago encouraged competition among service providers, all striving to provide additional value in a commodity space. Technology later created a new competitive dimension. Wireless communications, mobility, and the advent of smartphones and tablets that began with the introduction of the first iPhone in 2007 all triggered drastic change within the industry. Today is no different, with major cloud providers homing in on telcos’ market share by providing services that compete directly with their core businesses. Businesses and consumers clamor for ever-faster connectivity, and a slew of advanced technologies like IoT, virtual reality and robotics are placing increased demands on existing telecommunications infrastructure and services. The question now is how to deliver more services, better and faster.
Consider Deloitte’s report, “2016 Telecommunications Industry Outlook,” which noted that carriers have to provide high-quality, reliable and affordable data and voice services in a market where there is increasing usage, declining rates, and scarce spectrum.
Usage alone is astronomical. According to Deloitte’s latest Global Mobile Consumer Survey (GMCS), U.S. consumers look at their mobile devices over 8 billion times a day in the aggregate. IoT is also having a big impact and will continue to do so as more consumers acquire and engage with a variety of smart, connected systems like home control and monitoring solutions, fitness bands and smart watches, and car-based connected systems. All of these are adding new layers of service, which telcos will have to support and even manage.
In the report, Deloitte & Touche’s Craig Wigginton, vice chairman and U.S. Telecommunications leader, highlights how businesses across industries are forming strategic partnerships with service providers to accelerate time-to-market for solutions such as connected cars or smart cities. He suggests carriers team up with key players in, for example, the retail, automotive or healthcare industries as a way to expand their business. Wigginton says that cross-sector mergers and acquisitions (M&A) between telcos and verticals such as media or technology companies could yield solutions to complex business scenarios. By leveraging each other’s natural strengths, he adds, the partners can reduce or even avoid the time and significant resources needed to develop capabilities themselves. Integration with key players in these industries lets carriers expand their business in a way that is both timely and less risky.
A critical challenge lies in the telcos’ own communications environments. Their legacy infrastructures, current cloud implementations and existing bandwidth have serious limitations that cannot be solved with standard equipment upgrade cycles, more bandwidth, or even the addition of software-defined networking (SDN) technologies against legacy infrastructures.
Instead, telcos should embrace the concepts of fog computing, leverage Network Functions Virtualization (NFV), and design architectures with automated services orchestration so they have the flexibility to move application resources—whether network, compute or storage—to wherever they are needed between the end user and the cloud. This flexibility and robustness pave the way for the concept of providing Zero Latency Computing (ZLC) to end users, where it makes contextual sense.
Similar to the way fog lies close to the Earth’s surface, fog computing is a resource (network, computing or storage) delivered near the point of consumption. More specifically, it is a concept that allows those resources to be positioned most logically and efficiently anywhere on the continuum from the data source to the cloud. NFV defines the standards for the compute, storage, and networking resources on which virtual networks are built. Virtualized network functions (VNFs) are then controlled by software-defined networking (SDN) and management and orchestration (MANO) tools.
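The layering described above can be sketched in a few lines of Python. This is purely illustrative: the class names (NfviNode, Vnf, Orchestrator) and the first-fit placement rule are invented stand-ins, not any vendor’s MANO API.

```python
# Toy model of the NFV layering: VNFs run on pooled NFV infrastructure
# (NFVI) resources, and a MANO-style orchestrator decides where each lands.
# All names and numbers here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class NfviNode:
    """A pool of compute capacity (the NFVI layer)."""
    name: str
    free_cpus: int
    hosted: list = field(default_factory=list)

@dataclass
class Vnf:
    """A virtualized network function, e.g. a firewall or load balancer."""
    name: str
    cpus_needed: int

class Orchestrator:
    """A toy MANO: places each VNF on the first node with spare capacity."""
    def __init__(self, nodes):
        self.nodes = nodes

    def instantiate(self, vnf):
        for node in self.nodes:
            if node.free_cpus >= vnf.cpus_needed:
                node.free_cpus -= vnf.cpus_needed
                node.hosted.append(vnf.name)
                return node.name
        raise RuntimeError(f"no capacity for {vnf.name}")

nodes = [NfviNode("edge-1", free_cpus=2), NfviNode("core-1", free_cpus=8)]
mano = Orchestrator(nodes)
print(mano.instantiate(Vnf("firewall", 2)))       # lands on edge-1
print(mano.instantiate(Vnf("load-balancer", 4)))  # edge-1 is full, so core-1
```

The point of the abstraction is that the caller asks for a function, not a box; the orchestrator, like MANO, decides where on the infrastructure it runs.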
Taking fog computing a step further, Zero Latency Computing is a concept in which little or no time is lost during the exchange of information between one interface and another, or in which a system is able to respond instantly to an input. It’s essential for applications such as medical robots, where lag or latency could delay the work between a surgeon and a patient at a geographically remote operating table. ZLC could also serve as a foundation for a new generation of services and applications like IoT, virtual reality and other next-gen applications where real-time interactions are required. Automated services orchestration is, of course, the automated arrangement and management of computing resources, and typically involves aligning business objectives and requests with applications, data and infrastructure through workflows, provisioning and change management.
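The placement decision behind ZLC can be made concrete with a short sketch: pick the site along the user-to-cloud continuum that satisfies a latency budget. The site names and latency figures below are invented for illustration.

```python
# Hypothetical sketch: choose where to run a latency-sensitive workload
# along the continuum from the end user to the cloud.

def place_workload(nodes, max_latency_ms):
    """Pick the closest site that meets the latency budget."""
    eligible = [n for n in nodes if n["latency_ms"] <= max_latency_ms]
    if not eligible:
        raise RuntimeError("no site can meet the latency budget")
    return min(eligible, key=lambda n: n["latency_ms"])["name"]

# Illustrative continuum, nearest to farthest from the user:
continuum = [
    {"name": "on-prem-gateway", "latency_ms": 2},
    {"name": "metro-fog-pop",   "latency_ms": 8},
    {"name": "regional-cloud",  "latency_ms": 45},
]

# A remote-surgery control loop cannot tolerate a 45 ms round trip,
# so only the fog sites near the user qualify:
print(place_workload(continuum, max_latency_ms=10))  # on-prem-gateway
```

The design choice this illustrates is that "where" becomes an output of orchestration rather than a fixed deployment decision, which is exactly the flexibility fog computing is meant to provide.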
All of these technologies and computing concepts will fundamentally change how telcos deliver services and operate their businesses. They also dovetail with telcos’ efforts to become a connectivity and services platform, as opposed to just providing the communications plumbing. Nonetheless, implementing fog computing, NFV, Zero Latency Computing and automated services orchestration will require sweeping changes in business models, design approaches, and operations. Service providers will also have to tackle technical and cultural challenges.
As an example, the silos that demarcate telcos’ network and applications operations should be eliminated. These silos impede agility, adding days or even weeks to service change requests that (more often than not) should happen in seconds. To boost network efficiency, operators are adopting SDN and NFV and moving away from proprietary, hardware-based network equipment, according to Deloitte’s telecommunications industry report. Service changes and requests that today spread across five departments and 30 people could be automated to involve a few people and a collaborative system that resolves them in one cohesive motion. NFV is taking hold. A market research study by IHS Markit in August found that all of the service providers queried will use NFV at some point, with 81 percent expecting to do so by 2017. For the survey, IHS Markit interviewed purchase-decision makers at 27 service providers around the world.
Inevitably, telcos are leveraging open source technologies, but there are still limitations around capabilities, compatibility, and support.
While much of the equipment that upholds the underlying infrastructure has been around for a few years, virtualizing and automating it all to work together is not an easy task. Because many of the capabilities in legacy equipment are topology- or vendor-specific, they can create bottlenecks when applications traverse multiple clouds. These incongruities hinder NFV’s goal of pooling network assets and then instructing the overall network on how to move services and data through it, regardless of topology. Open source technologies under development address this; Service Function Chaining (SFC), part of the OpenDaylight SDN framework, is designed to provide the ability to define an ordered list of network services (e.g., firewalls, load balancers) that can be stitched together in the network to create a service chain. These unique chained service combinations comprise the product to be sold and monetized to the consumer.
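The chaining idea itself can be illustrated with a short Python sketch: ordered functions stitched into one pipeline that each packet traverses. To be clear, this is a conceptual analogy, not the OpenDaylight SFC API, which operates at the network layer rather than as in-process function calls.

```python
# Conceptual sketch of a service chain: an ordered list of network
# functions is stitched into one pipeline. Function behavior, addresses,
# and the drop rule are all invented for illustration.

def firewall(packet):
    """Drop anything on port 23 (telnet, say); pass the rest through."""
    if packet.get("port") == 23:
        return None
    return packet

def load_balancer(packet):
    """Pin each source address to a backend (deterministic toy hash)."""
    backends = ["10.0.0.1", "10.0.0.2"]
    packet["backend"] = backends[sum(map(ord, packet["src"])) % len(backends)]
    return packet

def build_chain(*functions):
    """Stitch functions into a chain; a None result drops the packet."""
    def chain(packet):
        for fn in functions:
            packet = fn(packet)
            if packet is None:
                return None
        return packet
    return chain

service = build_chain(firewall, load_balancer)
print(service({"src": "198.51.100.7", "port": 443}))  # passes, gets a backend
print(service({"src": "198.51.100.9", "port": 23}))   # dropped by the firewall
```

Swapping, reordering, or adding functions in `build_chain` changes the product without touching the functions themselves, which is the monetization point the article makes: the chain, not the individual box, is what gets sold.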
It’s important to note that telcos will still have to support and use proprietary vendor platforms in order to run the mission-critical functions of their existing and near-term services. This is where automated services orchestration and robust management of all the combined elements – new and old – is paramount. Telcos will have to rely on flexible, adaptable and reusable software development, as well as lifecycle service orchestration for the identification, consumption, and release of resources at a high level. As next-generation applications become more entrenched in our society, telcos will have to ensure orchestration happens within the context of Zero Latency Computing. Orchestration will have to be adaptive, able to scale in time and not just in capacity.
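The identify-consume-release lifecycle can be sketched as a resource pool that services draw from and return to; the class and resource names below are hypothetical, and a real lifecycle service orchestrator would of course span both the new virtualized elements and the proprietary legacy platforms.

```python
# Illustrative sketch of lifecycle service orchestration:
# resources are identified, consumed for the life of a service,
# and released at teardown. All names are invented.

class ResourcePool:
    def __init__(self, inventory):
        self.free = set(inventory)   # identified, unclaimed resources
        self.in_use = {}             # service name -> claimed resources

    def provision(self, service, needed):
        """Consume resources for a service; fail if the pool can't cover it."""
        if needed > len(self.free):
            raise RuntimeError("insufficient resources")
        claimed = {self.free.pop() for _ in range(needed)}
        self.in_use[service] = claimed
        return claimed

    def teardown(self, service):
        """Release a service's resources back to the pool."""
        self.free |= self.in_use.pop(service)

pool = ResourcePool({"vm-1", "vm-2", "vm-3"})
pool.provision("vpn-service", needed=2)
print(len(pool.free))                # 1 resource remains
pool.teardown("vpn-service")
print(sorted(pool.free))             # all three returned to the pool
```

The release step is what makes orchestration "adaptive": capacity flows back for the next request instead of staying pinned to a service that no longer needs it.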
Delivering connectivity will increase in complexity, and it will not be just about capacity. The diverse ecosystem to deliver a service implies telcos will not ultimately control service levels because they, at times, will not have direct command over all of the infrastructure (physical, cloud, and virtual) between themselves and the end customers. And the interoperability issues that could arise between today’s virtualized infrastructures built for computing resources and other initiatives in implementing a software-defined wide area network (SD-WAN) present additional problems, especially considering how many assume that a virtualized infrastructure will always live within the data center. In fog computing, there might be thousands of interconnected data centers supporting virtualized infrastructure. Entering this new era, service providers must learn to effectively coordinate resources across the entire ecosystem.
Telecom leaders need to advance their strategies. They have successfully weathered many industry shakeups and transformed themselves in order to remain competitive. Now they need to rethink how they design and evolve their infrastructures. They must move away from proprietary systems and consider adopting open source where it makes sense, upgrade individual network components and automate them using NFV, and rethink the way they offer services. It’s a tall order and one that will likely require new partners – partners who understand the importance of fog computing and Zero Latency Computing and the impact the two technologies will have, and who also realize that this transformative evolution will need to happen without upending how telcos operate their businesses today.