Demonstrating Synergies Between ETSI SDG OpenSlice (OSL) and LF Sylva: PART 3

November 04, 2024

This series is part of the demonstration that we will perform during the SNS4SNS event. More details at the end of the article.

The synergy between OpenSlice (OSL) and Sylva offers a powerful combination that addresses the increasing complexity of managing telco and edge cloud infrastructures. This blog explores how these two platforms—OpenSlice, an ETSI SDG-backed solution, and Sylva, a Linux Foundation project—can be integrated to optimize service orchestration and resource management for telecom operators.

Here is the first part of this series: https://osl.etsi.org/news/20241015_osl_sylva_part1/ and here is the second part: https://osl.etsi.org/news/20241022_osl_sylva_part2/

In the previous articles we explored the use of Identity and Access Management and the offering of a Sylva Workload Cluster as a Service. In this final article we explore how OpenSlice can become aware of the resources in a Sylva Workload Cluster, interact with operators, and deploy new services.

OpenSlice includes the CRIDGE microservice, which enables the platform to be aware of Kubernetes resources and to interact with operators. We therefore need to install and configure a new CRIDGE instance for each new Sylva Workload cluster. CRIDGE can be installed via a Helm chart. While OpenSlice could use many other applications, we use Argo CD to perform Helm installations in Kubernetes clusters. (See how to use an operator, as we do, for deploying Helm charts via OpenSlice: https://osl.etsi.org/documentation/latest/service_design/examples/helmInstallation_aaS_Example_Jenkins/HELM_Installation_aaS_Jenkins_Example/ )
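To illustrate the idea, an Argo CD Application along the following lines could drive such a Helm-based CRIDGE installation. This is a minimal sketch: the chart repository URL, chart name, and values shown are placeholders for illustration, not the official OSL chart coordinates.

```yaml
# Hypothetical Argo CD Application deploying a CRIDGE Helm chart.
# repoURL and chart are illustrative placeholders, not real OSL endpoints.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cridge-sylva-wc1
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.org/osl-helm-charts   # placeholder chart repository
    chart: cridge                                  # placeholder chart name
    targetRevision: "*"
  destination:
    server: https://kubernetes.default.svc         # the cluster where CRIDGE runs
    namespace: cridge
  syncPolicy:
    automated: {}
    syncOptions:
      - CreateNamespace=true
```

In practice, the Kubernetes Cluster Join operator described below generates this kind of configuration automatically, so the operator's user never writes it by hand.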

Kubernetes Cluster Join operator

To ease and automate the process of joining Kubernetes clusters under the management of OpenSlice, we created an operator that enables different resource-management scenarios. The source code is here: https://labs.etsi.org/rep/osl/code/addons/org.etsi.osl.controllers.kcj For this operator we need to provide different options. The specification is as follows:

apiVersion: controllers.osl.etsi.org/v1alpha1
kind: KCJResource
metadata:
  name: kcjexample
spec:
  clusterConfigBase64: # kubeconfig of the remote cluster, BASE64-encoded
  installArgoCDRemote: # if true, installs Argo CD in the remote cluster
  installCridgeLocal: # if true, installs CRIDGE in the local OSL management cluster via the local Argo CD; this CRIDGE will monitor the remote cluster remotely
  installCridgeRemote: # if true, installs CRIDGE in the remote cluster via the remote or local Argo CD; this CRIDGE will monitor the remote cluster and connect to the OSL management cluster
  cridgeAMQEndPoint: "tcp://osl-url:61616?jms.watchTopicAdvisories=false" # the URL of the OSL service bus
  cridgeAMQUsername: "artemis" # the username to connect to the OSL service bus
  cridgeAMQPassword: "artemis" # the password to connect to the OSL service bus
  addClusterToLocalArgoCD: false # if true, creates a service account in the remote cluster and adds the cluster to the local Argo CD in the OSL management cluster

The following figure presents the design of the operator as a service, exposed for service ordering.

For our scenario, we configured the operator to automatically install Argo CD in the remote cluster, and to install and configure a new CRIDGE service for managing and interfacing with the remote cluster.

Another option would be to install both Argo CD and CRIDGE automatically, via the operator, in the remote Sylva Workload cluster. The next figure also displays the equivalent request. This scenario depends on the connectivity options that are available.
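A request for this remote-installation scenario could look like the sketch below, following the KCJResource specification above. The resource name, the kubeconfig value, and the broker endpoint are placeholders; the credentials shown are the defaults from the specification example.

```yaml
# Illustrative KCJResource: join a Sylva Workload cluster, installing
# Argo CD and CRIDGE in the remote cluster. Values are placeholders.
apiVersion: controllers.osl.etsi.org/v1alpha1
kind: KCJResource
metadata:
  name: sylva-wc1-join
spec:
  clusterConfigBase64: "PGt1YmVjb25maWc+Li4u"  # placeholder: BASE64-encoded kubeconfig of the Sylva Workload cluster
  installArgoCDRemote: true     # install Argo CD in the remote cluster
  installCridgeLocal: false
  installCridgeRemote: true     # install CRIDGE in the remote cluster via Argo CD
  cridgeAMQEndPoint: "tcp://osl-url:61616?jms.watchTopicAdvisories=false"
  cridgeAMQUsername: "artemis"
  cridgeAMQPassword: "artemis"
  addClusterToLocalArgoCD: false
```

Applying such a resource (e.g. with `kubectl apply -f sylva-wc1-join.yaml`) triggers the operator to perform the join steps automatically.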

Deploying resources in the Workload cluster

Now that the cluster is ready and managed by OpenSlice, we can install further services. In our example we will install the Open5GS open-source 5G core, using the Helm charts provided by Gradiant (https://github.com/gradiant/5g-charts ). We first need to design the service, as the next figures display.
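Under the hood, the resulting deployment is roughly equivalent to an Argo CD Application referencing the Gradiant chart repository, as sketched below. The repository URL and chart name follow the Gradiant project's documentation; the target cluster address, namespace, and chart version are placeholders for illustration.

```yaml
# Illustrative Argo CD Application installing Open5GS from the
# Gradiant Helm charts into a remote Sylva Workload cluster.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: open5gs-sylva-wc1
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gradiant.github.io/5g-charts  # Gradiant Helm repository
    chart: open5gs
    targetRevision: "*"                            # placeholder: pin a version in practice
  destination:
    server: https://sylva-wc1.example:6443         # placeholder: remote Workload cluster API
    namespace: open5gs
  syncPolicy:
    automated: {}
    syncOptions:
      - CreateNamespace=true
```

In our setup, OpenSlice generates and applies this kind of configuration as part of the service order, so the user only interacts with the service catalog.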

After service design, we can expose it in our catalogs, ready to be ordered by users.

As soon as someone orders this service, it is installed in the new Sylva cluster. The CRIDGE service that manages the new Sylva Workload cluster handles the Open5GS installation via the remote Argo CD services.

After a service order is completed, the user can access the cluster and the related Open5GS services.

The next figure displays how the services are related, from the OpenSlice service order down to the namespace created in the new Sylva cluster (as shown through Rancher), along with detailed Grafana dashboards.

Conclusion and Future steps: Offer all services as a single service bundle

In this series, we designed and offered a set of services to users. In our setup, the user needs to:

  • Order a Sylva cluster first and retrieve the corresponding cluster key
  • Instrument OpenSlice to take over the management of the new cluster
  • Design services (like Open5GS) to be deployed in the new cluster

What would be interesting for service providers is to offer all of the above as a single service: a 5G core in a new Sylva cluster, in an end-to-end manner. The challenging part is instrumenting and designing an OpenSlice service that performs all these steps automatically. While there is still work to be done in the OpenSlice codebase to support such end-to-end scenarios, including any new LCM rules related to SLAs, scaling, etc., this use case is quite possible with the current solution.

But this is left for the readers 😊

Thank you for reading this far! If you are interested in learning more, join our OSL community.

This series is part of the demonstration that we will perform during the SNS4SNS event at ETSI, Sophia Antipolis, France, 12-14 November 2024. More information: https://www.etsi.org/events/2407-etsi-sns4sns-event#pane-6/
Stay tuned for more updates and happy coding! 🌟

#OpenSource #OpenSlice #ETSI #SYLVA #LINUXFOUNDATION