Documentation

About Nephio

Our mission is “to deliver carrier-grade, simple, open, Kubernetes-based cloud native intent automation and common automation templates that materially simplify the deployment and management of multi-vendor cloud infrastructure and network functions across large scale edge deployments.” But what does that mean? With this release and the accompanying documentation, we hope to make that clear.

The mission outlines the basic goals, and the About Nephio page describes the high-level architecture of Nephio. It is important to understand that Nephio is about managing complex, inter-related workloads at scale. If we simply want to deploy a network function, existing methods such as Helm charts and scripts are sufficient. Similarly, if we want to deploy some infrastructure, existing Infrastructure-as-Code tools can accomplish that. Configuring running network functions can already be done today with element managers.

So, why do we need Nephio? The problems Nephio wants to solve start only once we try to operate at scale. “Scale” here does not simply mean “a large number of sites”; it can span many different dimensions: number of sites, number of services, number of workloads, size of the individual workloads, number of machines needed to operate the workloads, complexity of the organization running the workloads, and other factors. The fact that our infrastructure, workloads, and workload configurations are all interconnected dramatically increases the difficulty of managing these architectures at scale.

To address these challenges, Nephio follows a few basic principles that experience has shown to enable higher scaling with less management overhead:

  • Intent-driven to enable the user to specify “what they want” and let the system figure out “how to make that happen”.
  • Distributed actuation to increase reliability across widely distributed fleets.
  • Uniformity in systems to reduce redundant tooling and processes, and simplify operations.

Additionally, Nephio leverages the “configuration as data” principle. This methodology means that the “intent” is captured in a machine-manageable format that we can treat as data, rather than code. In Nephio, we use the Kubernetes Resource Model (KRM) to capture intent. As Kubernetes itself is already an intent-driven system, this model is well suited to our needs.
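To make this concrete, here is a minimal sketch, in Go using only the standard library, of intent captured as a KRM-style resource and handled as data. The UPFDeployment kind, its example.nephio.org/v1alpha1 group/version, and its fields are hypothetical, chosen only to illustrate the shape of such intent; they are not the actual Nephio APIs.

```go
// Configuration-as-data sketch: the desired state of a (hypothetical) UPF
// network function is expressed as a KRM-style resource and manipulated as
// plain, structured data rather than as templates or scripts.
package main

import (
	"encoding/json"
	"fmt"
)

// Metadata mirrors the part of standard KRM object metadata used here.
type Metadata struct {
	Name      string `json:"name"`
	Namespace string `json:"namespace,omitempty"`
}

// UPFSpec is a made-up spec holding a few of the many parameters an operator
// has to decide on for each instance.
type UPFSpec struct {
	MaxSessions int      `json:"maxSessions"`
	N3VLAN      int      `json:"n3Vlan"`
	DNNs        []string `json:"dnns"`
}

// UPFDeployment captures the intent: "what we want", not "how to make it happen".
type UPFDeployment struct {
	APIVersion string   `json:"apiVersion"`
	Kind       string   `json:"kind"`
	Metadata   Metadata `json:"metadata"`
	Spec       UPFSpec  `json:"spec"`
}

func main() {
	base := UPFDeployment{
		APIVersion: "example.nephio.org/v1alpha1", // hypothetical group/version
		Kind:       "UPFDeployment",
		Metadata:   Metadata{Name: "upf", Namespace: "free5gc"},
		Spec: UPFSpec{
			MaxSessions: 1000,
			N3VLAN:      100,
			DNNs:        []string{"internet"},
		},
	}

	// Because the intent is data, tooling can inspect and transform it
	// programmatically, for example stamping out per-cluster variants with
	// values owned by other teams (here, a VLAN from network planning).
	for i := 0; i < 3; i++ {
		variant := base
		variant.Metadata.Name = fmt.Sprintf("upf-cluster-%04d", i)
		variant.Spec.N3VLAN = 100 + i

		out, _ := json.MarshalIndent(variant, "", "  ")
		fmt.Println(string(out))
	}
}
```

The point is not the specific fields but that the intent is inspectable and transformable with ordinary data tooling; in Nephio, the intent itself is expressed as KRM resources.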

To understand why we need to treat configuration as data, let’s consider an example. A given instance of a network function may have, say, 100 parameters that need to be decided upon. When we run 100 such network functions in each of 10,000 clusters, we end up with 100,000,000 inputs to define and manage. Handling that sheer number of values, with their interdependencies and the need to keep them consistent, requires data management techniques, not code management techniques. This is why existing methodologies begin to break down at scale, particularly at edge-level scale.

Consider as well that no single human will understand all of those values. They relate not only to the workloads, but also to the infrastructure needed to support those workloads. They require different areas of expertise and cross different organizational boundaries of control. For example, you will need input from network planning (IP addresses, VLAN tags, ASNs, etc.), from compute infrastructure teams (the types of hardware or VMs available, the operating systems available), from Kubernetes platform teams, security teams, network function experts, and many, many other individuals and teams. Each of those teams will have their own systems for tracking the values they control, and their own processes for allocating and distributing those values. This coordination between teams is a fundamental organizational problem of operating at scale. The existing tools and methods do not even attempt to address these parts of the problem; they start once all of the “input” decisions are made.

The Nephio project believes that the organizational challenge of figuring out these values is actually one of the primary limiting factors to achieving efficient management of large, complex systems at scale. This gets even harder when we realize that we need to manage changes to these values over time, and understand how changes to some values imply the need to change other values. Today, this challenge is left to ad hoc processes that differ across organizations. Nephio is working on how to structure the intent so that it can be managed using data management techniques.

This release of Nephio focuses on:

  • Demonstrating the core Nephio principles such as Configuration-as-Data and leveraging the intent-driven, active-reconciliation nature of Kubernetes.
  • Infrastructure orchestration/automation using controllers based on the Cluster API. At this time, only KIND cluster creation is supported.
  • Orchestration/automation of the deployment and management of 5G Core and RAN network functions.

While the current release uses Cluster API, KIND, and free5gc/OAI for demonstration purposes, the same principles and even the same code can be used to manage other infrastructure and network functions. The uniformity in systems principle means that, as long as something is manageable via the Kubernetes Resource Model, it is manageable via Nephio.
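As a small illustration of that uniformity, the sketch below (plain Go, standard library only) handles two very different kinds of intent, a Cluster API-style cluster and a network-function deployment, with the same generic code, simply because both follow the common KRM shape of apiVersion, kind, and metadata. The Cluster kind and its cluster.x-k8s.io group do exist in Cluster API, but the objects here are illustrative stubs, and the UPFDeployment kind is hypothetical.

```go
// Uniformity sketch: because every KRM resource shares the same basic shape,
// the same generic tooling can reason about clusters and network functions
// alike. The objects below are illustrative stubs, not real manifests.
package main

import "fmt"

// resource is any KRM object decoded into plain data.
type resource map[string]any

// describe pulls the common identifying fields out of any KRM resource,
// without knowing anything about the specific kind.
func describe(r resource) string {
	meta, _ := r["metadata"].(map[string]any)
	name, _ := meta["name"].(string)
	return fmt.Sprintf("%v/%v %q", r["apiVersion"], r["kind"], name)
}

func main() {
	objects := []resource{
		{
			"apiVersion": "cluster.x-k8s.io/v1beta1", // Cluster API cluster intent
			"kind":       "Cluster",
			"metadata":   map[string]any{"name": "edge-cluster-0001"},
		},
		{
			"apiVersion": "example.nephio.org/v1alpha1", // hypothetical NF intent
			"kind":       "UPFDeployment",
			"metadata":   map[string]any{"name": "upf-cluster-0001"},
		},
	}

	// The same loop, diff, or policy check applies to both, which is what
	// lets one automation approach manage infrastructure and network
	// functions together.
	for _, o := range objects {
		fmt.Println(describe(o))
	}
}
```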


Release Notes

Release notes for each Nephio release.

Guides

A collection of step-by-step Nephio guides.

Abbreviations

A glossary of the abbreviations used in the Nephio documentation.

Models and APIs

Reference for the Nephio models and APIs.

Nephio Architecture

Reference for the Nephio architecture.