Is Kubernetes the answer to the challenges in Multi-access Edge Computing – or is there more to this equation?
The market for Multi-access Edge Computing (MEC) is pegged at $4.25 billion in 2025. The drivers are many: the recent surge in AR/VR gaming, a growing preference for video calling and Ultra-High-Definition content, and, of course, the Internet of Things (IoT), which spawns SmartX applications across cities, manufacturing, agriculture, and logistics.
Much of the MEC opportunity both drives and is driven by 5G, and the two will grow hand in hand in the years to come.
For enterprises and service providers that rely on cutting-edge technology for day-to-day, business-critical operations, MEC offers significant opportunities in terms of:
- Enhanced Mobile Broadband (eMBB)
- Ultra-Reliable Low-Latency Communications (URLLC)
- Massive Internet of Things (mMTC)
What is MEC and what does it bring to the table?
MEC describes a new ecosystem that enables predictable, high-performing cloud-computing capabilities in an economical IT environment running at the network's edge. MEC's service characteristics are defined by ultra-low latency and high bandwidth that can interact, in real time, with RAN connectivity services.
MEC moves intelligence and service control from centralized data centers to the edge of the network, closer to the users. Instead of backhauling all the data to a central site for processing, it can be analyzed, processed, and stored locally, then shared upstream when needed.
For evolving digital adopters in sectors like healthcare, defense, and logistics, MEC offers significant benefits. To realize these advantages, however, mobile operators must first overcome certain challenges: network modernization, automation at scale, colocation, and security. Once they do, they stand to gain a competitive edge in terms of:
- Reduced service latency
- Built-in multi-tenancy
- Interactive feedback
- Architecture synergy
- All the advantages of distributed systems
To learn more about MEC and its service architecture, click here to download our white paper, Designing MEC Platforms.
Your ability to evolve and adapt faster is what matters!
Deploying regional, national, or international services is by no means a simple task. To maximize revenue per deployment while competing against agile specialists, traditional providers need additional flexibility.
- Neither a one-size-fits-all solution nor simple small/medium/large sizing will keep you competitive.
- What your customers need today will not be what they want tomorrow, and they will choose an operator based on its ability to accommodate that change.
- In future networks, resources and applications will be deployed and migrated across the network (far edge, edge, regional, core, and so on), where each environment has its own restrictions and configurations.
- Automation will greatly influence your ability to quickly roll out new revenue-generating services.
Is Kubernetes the way forward?
The key to taking advantage of change revolves around a flexible cloud platform and an orchestration toolset that makes it easy to manage your bare-metal infrastructure and move workloads around the network, reusing resource models, network models and existing workflows. Otherwise, you end up re-customizing, re-integrating and reinventing the wheel every time you make a change.
This brings us to Kubernetes, whose advantages for Multi-access Edge Computing include:
- Increased performance
- Efficient scale & self-healing
- Open source & multi-vendor
- Multi-cloud capabilities
- Increased developer productivity
- Real-world scale-out capabilities
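Self-healing and declarative scale both come from the same underlying idea: a Kubernetes controller runs a reconciliation loop that continuously compares declared state against observed state and acts on the difference. A minimal conceptual sketch, in Python (the `reconcile` function and its return format are invented for illustration, not any real Kubernetes API):

```python
# Minimal sketch of the declarative reconciliation loop behind Kubernetes'
# self-healing and scaling. Names here are illustrative only.

def reconcile(desired_replicas, running_replicas):
    """Return the actions a controller would take to converge
    the observed state toward the declared (desired) state."""
    if running_replicas < desired_replicas:
        return [("start", desired_replicas - running_replicas)]
    if running_replicas > desired_replicas:
        return [("stop", running_replicas - desired_replicas)]
    return []  # already converged: no action needed

# A crashed pod simply lowers the observed count; the next loop
# iteration starts a replacement -- that is "self-healing".
print(reconcile(desired_replicas=3, running_replicas=2))  # [('start', 1)]
print(reconcile(desired_replicas=3, running_replicas=3))  # []
```

Scaling out is then just editing the desired count; the same loop that replaces crashed instances also converges to the new size.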
However, while Kubernetes is the north star of edge computing, it was not originally designed for service providers, and not all Kubernetes platforms are equal. With most platforms, you must weigh the following challenges:
- Ease of use
- Ease of NF/application/service lifecycle-automation integration
- Container-CNF and VM-VNF roadmaps and silo implications
- Automated workload placement
- Advanced networking requirements
- NF and application performance tuning
- Declarative automation and orchestration
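Automated workload placement, in particular, is worth a concrete picture. The Kubernetes scheduler works in two phases, filtering out infeasible nodes and then scoring the survivors; a toy sketch of that pattern follows (node data, label names, and the headroom score are all invented for illustration):

```python
# Hypothetical filter-and-score placement, loosely modeled on the two
# phases of the Kubernetes scheduler. All data and weights are invented.

def place(workload, nodes):
    """Pick the node with the most free CPU among those that satisfy
    the workload's requirements (capacity plus required labels)."""
    feasible = [
        n for n in nodes
        if n["free_cpu"] >= workload["cpu"]
        and workload.get("labels", set()) <= n.get("labels", set())
    ]
    if not feasible:
        return None  # nothing fits: the workload stays pending
    # Score phase: prefer the node with the most headroom after placement.
    return max(feasible, key=lambda n: n["free_cpu"] - workload["cpu"])["name"]

nodes = [
    {"name": "edge-1", "free_cpu": 4, "labels": {"sriov"}},
    {"name": "edge-2", "free_cpu": 8, "labels": set()},
]
# An SR-IOV-dependent NF can only land on edge-1, despite edge-2's headroom.
print(place({"cpu": 2, "labels": {"sriov"}}, nodes))  # edge-1
```

Telco workloads make the filter phase much harder than this: NUMA alignment, SR-IOV, hugepages, and latency zones all become placement constraints, which is why the flexibility of the placement algorithm matters.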
When moving from the test lab to deployment and massive scale-out of services, it will eventually become evident that how you automate is just as important as what you automate.
There is a lot of hand-waving in the industry, where vendors and analysts praise the benefits of the cloud automation revolution as if it were a simple cure-all for any repetitive or scale-out task. However, there is more to making 5G services a success at scale.
That is why it is so important to choose an automated cloud solution that works for massive-scale deployments, such as end-to-end MEC and 5G. Automated cloud platforms and orchestration solutions differ significantly, and they must be chosen with great care, because the choice affects your time to outcome, resource utilization, solution costs, and opportunities.
When choosing cloud-native platform and orchestration tools, it is important to dig deep into functionality and ease of use:
- How do you unify bare metal with your service lifecycle? Can you streamline these lifecycles with a single tool, integrating bare-metal, cloud-native cluster, service, and appliance workflows?
- How do you onboard new applications and what environmental automation options do you have?
- How do you manage containers, VMs and appliances?
- How many steps does it take, if any, to reuse workflows and resource pools when deploying VMs and containers side-by-side?
- How are resources reserved for multi-tenant deployments? What views and tools are available on a per-tenant basis?
- How flexible and customizable is your cloud platform’s automated workload placement algorithm?
- How does it help you with resource and outage planning across multiple events?
- How customizable are your role-based access definitions?
- How easy and granular are your chargeback options, and what is the scope?
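To make the multi-tenancy and chargeback questions concrete, here is one possible shape for per-tenant reservation with usage-based billing. This is a toy model under assumed semantics (quota-capped admission, billing on reserved capacity); it is not Robin CNP's actual API:

```python
# Illustrative per-tenant reservation check with simple chargeback.
# The quota model, billing rule, and rate are assumptions.

class TenantPool:
    def __init__(self, quota_cpu, rate_per_cpu_hour):
        self.quota_cpu = quota_cpu
        self.used_cpu = 0
        self.rate = rate_per_cpu_hour

    def reserve(self, cpu):
        """Admit the request only if it fits the tenant's quota."""
        if self.used_cpu + cpu > self.quota_cpu:
            return False  # reject rather than borrow from other tenants
        self.used_cpu += cpu
        return True

    def chargeback(self, hours):
        """Bill reserved capacity, not just instantaneous consumption."""
        return self.used_cpu * hours * self.rate

pool = TenantPool(quota_cpu=16, rate_per_cpu_hour=0.05)
assert pool.reserve(10)       # fits the quota
assert not pool.reserve(8)    # 10 + 8 > 16: rejected
print(pool.chargeback(hours=24))  # 10 CPUs * 24 h * 0.05 = 12.0
```

Real platforms layer per-tenant views, granular rates, and reporting scope on top of this basic admission-and-metering loop, which is exactly what the questions above probe.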
The answer to these questions is Robin Cloud Native Platform (CNP).
What are Robin’s flagship products – CNP and MDCAP?
CNP is one system, built from the ground up to unify VM and container service operations. It runs both VNFs and CNFs on the same Kubernetes platform, with a unified operations model and fully shared resource pools.
In addition, there is the Robin Multi Data Center Automation Platform (MDCAP), which orchestrates and manages the lifecycle of any workflow, including bare-metal provisioning, cloud-platform instantiation, NFs, applications, Network Services (NS), and Methods of Procedure (MOPs), all of which can be auto-triggered through a policy engine.
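The "auto-triggered through a policy engine" idea amounts to binding workflows to events. A minimal event-driven sketch in Python, in that spirit (the event names, decorator, and workflows are all invented for illustration, not MDCAP's interface):

```python
# Hypothetical policy-engine sketch: workflows registered against
# triggering events, then run automatically when the event fires.

policies = {}  # event name -> list of workflow callables

def on(event):
    """Register a workflow to run when `event` fires."""
    def register(workflow):
        policies.setdefault(event, []).append(workflow)
        return workflow
    return register

def fire(event, **ctx):
    """Run every workflow bound to the event; return their results."""
    return [wf(**ctx) for wf in policies.get(event, [])]

@on("node.discovered")
def provision_bare_metal(node):
    return f"provisioned {node}"

@on("node.discovered")
def join_cluster(node):
    return f"{node} joined edge cluster"

print(fire("node.discovered", node="edge-7"))
# ['provisioned edge-7', 'edge-7 joined edge cluster']
```

A production policy engine adds ordering, retries, and conditions, but the core pattern is the same: declare the trigger once and let the engine drive the procedure.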
As it continues to innovate its services, Robin.io is working actively to further decrease costs for operators and to expand its work in areas like network slicing and AI-driven automation.
Enabling the future while modernizing the past
Context-aware Robin MDCAP and Robin CNP make integration easier and enable rapid scaling of operations across your containers, VMs, and legacy appliances. Robin products streamline operations processes, reduce time to outcome, and harmonize multiple generations of services, letting telecom operators make the best use of both new and incumbent assets. This flexibility allows them to rapidly deploy at scale from core to edge, ahead of the competition.
Interested in learning more? Read Robin's latest solution brief, "Designing O-MEC Platforms," for insights that help operators boost performance.
Invest a few minutes of your time and unlock a wide range of performance benefits. Connect with our experts to explore how our products can bring the agility, automation, and maximized benefits of MEC into your architecture.