Demonstrating Synergies Between ETSI SDG OpenSlice (OSL) and LF Sylva: PART 2

October 22, 2024

This series is part of the demonstration that we will perform during the SNS4SNS event. More details at the end of the article.

The synergy between OpenSlice (OSL) and Sylva offers a powerful combination that addresses the increasing complexity of managing telco and edge cloud infrastructures. This blog explores how these two platforms—OpenSlice, an ETSI SDG-backed solution, and Sylva, a Linux Foundation project—can be integrated to optimize service orchestration and resource management for telecom operators.

Here is the 1st part of this article: https://osl.etsi.org/news/20241015_osl_sylva_part1/

In the previous article we explored Identity and Access Management. In this part we will explore requesting Sylva Workload Clusters, in effect offering Sylva Workload Cluster as a Service.

The OSL Management Cluster interfaces with the Sylva Management Cluster to request workload clusters. This is a crucial aspect of the integration, as OSL will provide a self-service capability for tenants (e.g., telecom operators) to easily request Sylva’s Kubernetes-based workload clusters. The following image explains our approach.

  • OpenSlice’s orchestrator (OSOM), via the CRIDGE service, sends requests for workload clusters to Sylva’s management cluster through the newly developed SylvaMD WC Resource Operator.
  • The Sylva management cluster creates and manages the requested workload clusters.
  • OpenSlice retrieves the workload cluster kubeconfig from Sylva, allowing it to manage the workloads in the newly created clusters.

## Introducing a new operator for managing Sylva WC

Since OpenSlice version 2024Q2 we have supported interaction with Kubernetes resources via operators, enabling complex processes to be offered as a service.

To support the Sylva WC as a Service scenario, we developed a prototype SylvaMD WC Resource Operator, which is responsible for interacting with the Sylva management cluster: it requests workload cluster resources and manages cluster status. The source code is available under OSL addons:

https://labs.etsi.org/rep/osl/code/addons/org.etsi.osl.controllers.sylva

The operator wraps the Sylva CLI, since Sylva does not currently offer an API. To install the operator in a Kubernetes cluster you need as input a kubeconfig for the OSL management cluster and a kubeconfig for the Sylva management cluster. You then need to install a couple of secrets: one with a default Kustomization, and another with access credentials for the underlying infrastructure (e.g. OpenStack). The operator has a very simple syntax. As an example, we can request a Sylva workload cluster with 1 master and 3 worker nodes:

apiVersion: controllers.osl.etsi.org/v1alpha1
kind: SylvaMDResource
metadata:
  name: wc12345-aaeeff
spec:
  clusterControlPlaneReplicas: "1"
  clusterMd0Replicas: "3"
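For completeness, the two secrets mentioned above could be created with manifests along these lines. This is only a sketch: the secret names, keys, and file contents here are assumptions for illustration; the exact names expected by the operator are documented in its repository.

```yaml
# Illustrative sketch only — secret names and keys are assumed, not the
# operator's actual contract.
apiVersion: v1
kind: Secret
metadata:
  name: sylva-default-kustomization   # assumed name
type: Opaque
stringData:
  kustomization.yaml: |
    # default Kustomization used for workload cluster requests
---
apiVersion: v1
kind: Secret
metadata:
  name: sylva-cloud-credentials       # assumed name
type: Opaque
stringData:
  clouds.yaml: |
    # credentials for the underlying infrastructure (e.g. OpenStack)
```

The SylvaMDResource itself is then applied like any custom resource, e.g. with kubectl apply -f.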

However, for more complex requests, we can specify the full equivalent of the values.yaml of a Sylva WC cluster:

apiVersion: controllers.osl.etsi.org/v1alpha1
kind: SylvaMDResource
metadata:
  name: wc67890-bbccdd
spec:
  valuesyaml: |
    ---
    cluster:

      capi_providers:
        infra_provider: capo
        bootstrap_provider: cabpr

      capo:
        image_key: ubuntu-jammy-plain-rke2-1-28-8  # OpenStack Glance image (key of image in sylva_diskimagebuilder_images/os_images)
        ssh_key_name: sylva # OpenStack Nova SSH keypair is provided by runner context in CI
        network_id: 7b490a0c-0f5c-4475-a436-e4bcbecc7f5e # OpenStack Neutron network id is provided by runner context in CI
        flavor_name: cpu8.m16384.d40g
        control_plane_az:
          - nova

      machine_deployments:
        md0:
          replicas: 3
          capo:
            failure_domain: nova
            flavor_name: cpu8.m16384.d40g

      control_plane_replicas: 1

    openstack:
      control_plane_affinity_policy: soft-anti-affinity
      storageClass:
        name: "rbd1"
        type: "rbd1"

    proxies:
      http_proxy: ""
      https_proxy: ""
      no_proxy: ""

    ntp:
      enabled: false
      servers:
      # - 1.2.3.4
      # - 1.2.3.5

    sylva_diskimagebuilder_images:
      ubuntu-jammy-plain-rke2-1-28-8:
        enabled: true

      ubuntu-jammy-plain-kubeadm-1-28-9:
        enabled: true

After installation, the operator definition will appear in OpenSlice as a Resource Specification, as in the following image.

Next, create a ResourceFacingServiceSpecification and a CustomerFacingServiceSpecification that take just the number of master and worker nodes as user input. (See how we use an operator to deploy Helm charts via OpenSlice: https://osl.etsi.org/documentation/latest/service_design/examples/helmInstallation_aaS_Example_Jenkins/HELM_Installation_aaS_Jenkins_Example/ )

Then you need to design a rule that injects the user-supplied values into the operator template.
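Conceptually, the rule fills the user-supplied characteristics into the custom resource that OSOM will create. The placeholder syntax below is purely illustrative; the actual value injection is performed by the OpenSlice rule designer, not by textual substitution:

```yaml
# Illustrative template — placeholders stand for values injected by the rule.
apiVersion: controllers.osl.etsi.org/v1alpha1
kind: SylvaMDResource
metadata:
  name: <generated-cluster-name>
spec:
  clusterControlPlaneReplicas: "<user input: master nodes>"
  clusterMd0Replicas: "<user input: worker nodes>"
```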

The operator can also provide us the kubeconfig of the new Sylva cluster. We retrieve it with a new rule, as the image shows:

Then we can expose it as a service in our catalog so users can make Service Orders:

As soon as the service order is completed, the services are active, which means that the cluster is alive. From the Secret characteristic we retrieve the kubeconfig to connect to the Sylva cluster.
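Behind the scenes, a kubeconfig delivered this way typically lives in a Kubernetes Secret, base64-encoded under a data key. A generic sketch, where the secret and key names are assumptions for illustration:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: wc12345-aaeeff-kubeconfig   # assumed name, derived from the cluster name
type: Opaque
data:
  kubeconfig: <base64-encoded kubeconfig>   # decode with: base64 -d
```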

We can also use the same account to connect to Rancher and monitor the activity of our cluster.

In the next and final part of this series, we will explore how OpenSlice can be aware of resources in a Sylva Workload Cluster, interact with operators, and deploy new services.

This series is part of the demonstration that we will perform during the SNS4SNS event at ETSI, Sophia Antipolis, France, 12-14 Nov. 2024. More information: https://www.etsi.org/events/2407-etsi-sns4sns-event#pane-6/
Stay tuned for more updates and happy coding! 🌟

#OpenSource #OpenSlice #ETSI #SYLVA #LINUXFOUNDATION