
OpenSlice Deployment Guide with Kubernetes

Intended Audience: OpenSlice administrators

Requirements

Hardware requirements:

Minimum Hardware Requirements    Recommended Hardware Requirements
4 CPU cores                      8 CPU cores
8 GB RAM                         16 GB RAM
30 GB storage                    50 GB storage

Software Requirements:

  • git: For cloning the project repository.
  • Kubernetes: A running cluster where OpenSlice will be deployed.
    • Disclaimer: The current manual setup of Persistent Volumes using hostPath is designed to operate with only a single worker node. This setup will not support data persistence if a pod is rescheduled to another node.
  • Helm: For managing the deployment of OpenSlice.
  • Ingress Controller: Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting. An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic. You must have an Ingress controller to satisfy an Ingress.
    • An Nginx ingress controller is required, which can be installed using this guide.
    • If you use another type of ingress controller, you'll need to modify [repo-root]/kubernetes/helm/openslice/templates/openslice-ingress.yaml to conform to your ingress controller's requirements.
  • Network Load Balancer: Required for exposing the service (e.g., GCP, AWS, Azure, MetalLB).
  • Domain/IP Address: Necessary for accessing the application. This should be configured in [repo-root]/kubernetes/helm/openslice/values.yaml under rooturl.
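For reference, one common way to install the NGINX ingress controller is via its upstream Helm chart (the chart name, repository URL, and namespace below are the upstream defaults, not OpenSlice-specific values; adapt them to your cluster):

```shell
# Install (or upgrade) the NGINX ingress controller from the upstream chart
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```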

Additional Configuration

  • Storage Class: In a production environment, specify your storageClass in [repo-root]/kubernetes/helm/openslice/values.yaml under storageClass. If not defined, PVs will be created and managed manually.
    • Disclaimer: Before deploying, confirm that your storage system supports claims of one 10G and two 1G volumes.
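If you rely on manually managed PVs, an indicative hostPath PersistentVolume for the 10G MySQL claim could look like the following sketch (the name, path, and reclaim policy are assumptions; match them to the claims the chart actually creates, and note that hostPath is single-node only):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: openslice-mysql-pv
spec:
  capacity:
    storage: 10Gi          # one 10G volume; two additional 1G PVs are also needed
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/openslice/mysql   # data stays on this host only
```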

Preparing the environment

1. Setting Up A Kubernetes Cluster

Refer to the official Kubernetes documentation for setting up a cluster. Ensure your cluster meets the hardware requirements specified above.

2. Installing Helm

Helm must be installed on your machine to deploy OpenSlice via Helm charts. Follow the official Helm installation guide.
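As a sketch, Helm 3 can be installed with the official installer script:

```shell
# Download and run the official Helm 3 installer script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm version   # verify the installation
```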

Downloading the project

1. Create a new folder to download the project

mkdir openslice
cd openslice

2. Download the project code

Clone the project code from the GitLab repository. Note: This process will be simplified once the charts are published in the GitLab registry, requiring only the chart to be pulled.

git clone https://labs.etsi.org/rep/osl/code/org.etsi.osl.main.git
cd org.etsi.osl.main/kubernetes/helm/openslice/

3. Prerequisites before deployment

Before deploying the Helm chart, ensure you have configured the necessary components as detailed in the following section, i.e. Configure Helm Chart Services. By default, the main branch is selected for deployment.

We recommend:

  • main branch for the most stable experience and
  • develop branch for the latest features (for a develop branch installation, it is strongly advised to also follow the develop documentation)
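If you opt for the develop branch, switch to it inside the cloned repository before deploying; a minimal sketch:

```shell
# Inside the cloned repository
git checkout develop   # omit this step to stay on the default main branch
```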

Configure Helm Chart Services

When deploying OpenSlice with Helm, service configurations are handled through the values.yaml file. This file allows you to define all necessary configurations for your deployment, including database credentials, service URLs, and logging levels. Below are examples of how to configure your services in Helm based on your provided values.

Configuring Services

1. Database Configuration

To configure MySQL and other related services, you can directly set the values in your values.yaml file under the oscreds and mysql sections. For example:

oscreds:
  mysql:
    username: "root"
    password: "letmein"
    openslicedb: "osdb"
    keycloak: 
      database: "keycloak"
      username: "keycloak"
      password: "password"
      adminpassword: "Pa55w0rd"
    portal:
      database: "osdb"
      username: "portaluser"
      password: "12345"

2. Keycloak Configuration

Keycloak settings, including the database and admin password, are part of the oscreds.mysql.keycloak section. If you need to adjust Keycloak-specific settings like realms or client configurations, you'll likely need to customize your Helm chart further or manage these settings directly within Keycloak after deployment. The Keycloak realm configuration that is imported by default can be found under kubernetes/helm/openslice/files/keycloak-init/realm-export.json.

oscreds:
  mysql:
    keycloak: 
      database: "keycloak"
      username: "keycloak"
      password: "password"
      adminpassword: "Pa55w0rd"

3. CRIDGE Configuration

If you want to create and manage Kubernetes Custom Resources (CRs), you will have to provide:

  • a cluster-wide scope kubeconf file (typically located at /home/{user}/.kube directory of the Kubernetes Cluster's host)

You will have to copy the kubeconf file to the org.etsi.osl.main/kubernetes/helm/openslice/files/org.etsi.osl.cridge directory, prior to the deployment.

By default, the deployment process copies the org.etsi.osl.main/kubernetes/helm/openslice/files/org.etsi.osl.cridge/config file into the /root/.kube directory of the CRIDGE container.

The above configuration works for the default kubeconf file names. It explicitly expects a file named config within the org.etsi.osl.main/kubernetes/helm/openslice/files/org.etsi.osl.cridge directory. If you are working with custom kubeconf file names, you will have to rename them.
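For the default single-cluster case, the copy step can be performed as follows (the source path assumes the default kubeconfig location for your user):

```shell
# Run from the directory containing the cloned repository
cp ~/.kube/config org.etsi.osl.main/kubernetes/helm/openslice/files/org.etsi.osl.cridge/config
```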

OpenSlice also offers management support of multiple Kubernetes Clusters simultaneously. For this, you will have to:

  • add all the respective kubeconf files into the org.etsi.osl.main/kubernetes/helm/openslice/files/org.etsi.osl.cridge directory.
  • create a copy of the cridge.yaml and cridge-config.yaml in the org.etsi.osl.main/kubernetes/helm/openslice/templates directory for every cluster. Mind the need for different naming.
  • update every cridge-config.yaml file to reference the appropriate kubeconf file for every cluster.

Below you may find an indicative example that only references the affected fields of each cridge-config.yaml file:

data:
  config: |-
    {{- .Files.Get "files/org.etsi.osl.cridge/config-clusterX" | nindent 4 }}

4. External Services Configuration

For configuring optional external services like Bugzilla and CentralLog, specify their URLs and credentials in the values.yaml file:

bugzillaurl: "example.com:443/bugzilla"
bugzillakey: "VH2Vw0iI5aYgALFFzVDWqhACwt6Hu3bXla9kSC1Z"
main_operations_product: "Main Site Operations" # the default product to issue tickets
centrallogurl: "http://elk_ip:elk_port/index_name/_doc"

Bugzilla should have the following components under the specified product:

  • NSD Deployment Request: Component used to schedule deployment req
  • Onboarding: Issues related to VNF/NSD Onboarding
  • Operations Support: Default component for operations support
  • Validation: Used to track validation processes of VNFs and NSDs
  • VPN Credentials/Access: Used for requesting VPN Credentials/Access

Also in the 'Main Site Operations' product, a version named 'unspecified' must be created.

5. Application and Logging Configuration

Application-specific configurations, such as OAuth client secrets, can be set in the spring section:

spring:
  oauthClientSecret: "secret"

6. Ingress and Root URL

To configure the ingress controller and root URL for OpenSlice, update the rooturl field with your ingress load balancer IP or domain. This setting is crucial for external access to your application:

rooturl: "http://openslice.com" # Example domain
# or
rooturl: "http://3.15.198.35:8080" # Example IP with port

7. Persistent Volume for MySQL

For persistent storage, especially for MySQL, define the storage size under the mysql section. This ensures that your database retains data across pod restarts and deployments.

mysql:
  storage: "10Gi"

Configure Web UI

In the folder kubernetes/helm/openslice/files/org.etsi.osl.portal.web/src/js you must make a copy of the config.js.default file and rename it to config.js.

This is mandatory for the configuration file to be discoverable.

Edit the config.js configuration file with your static configuration, if needed.
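The copy step described above can be performed as follows:

```shell
# Starting from the root project directory
cd kubernetes/helm/openslice/files/org.etsi.osl.portal.web/src/js
cp config.js.default config.js
```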

{
  TITLE: "OpenSlice by ETSI",
  WIKI: "https://osl.etsi.org/documentation/",
  BUGZILLA: "{{ .Values.rooturl }}/bugzilla",
  STATUS: "{{ .Values.rooturl }}/status",
  APIURL: "{{ .Values.rooturl }}",
  WEBURL: "{{ .Values.rooturl }}/nfvportal",
  APIOAUTHURL: "{{ .Values.rooturl }}/auth/realms/openslice",
  APITMFURL: "{{ .Values.rooturl }}/tmf-api/serviceCatalogManagement/v4"
}

Configure TMF Web UI

In the folder kubernetes/helm/openslice/files/org.etsi.osl.tmf.web/src/assets/config there are 3 files available for configuration:

  • config.prod.default.json (Basic information + API configuration)
  • theming.default.scss (CSS color palette theming)
  • config.theming.default.json (HTML configuration - Logo, Favicon, Footer)

You must make a copy of files:

  • config.prod.default.json and rename it to config.prod.json
  • theming.default.scss and rename it to theming.scss

The 2 files above (i.e. config.prod.json, theming.scss) are essential for the successful deployment of OpenSlice, and executing the above steps is mandatory for the configuration files to be discoverable.

Ensure that you check the config.prod.json and theming.scss files and readjust to your deployment if needed.

# Starting from the root project directory
cd kubernetes/helm/openslice/files/org.etsi.osl.tmf.web/src/assets/config
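The renaming steps described above can then be performed in that directory, for example:

```shell
cp config.prod.default.json config.prod.json
cp theming.default.scss theming.scss
```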

E.g. You may edit "TITLE", "WIKI", etc properties with your domain title. Also configure TMF's API and Keycloak's location for the web application, if needed.

{         
    "TITLE": "OpenSlice by ETSI",
    "PORTALVERSION":"2024Q2",
    "WIKI": "https://osl.etsi.org/documentation",
    "BUGZILLA": "{BASEURL}/bugzilla/",
    "STATUS": "{BASEURL}/status/",
    "WEBURL": "{BASEURL}",
    "PORTAL_REPO_APIURL": "{BASEURL}/osapi",
    "ASSURANCE_SERVICE_MGMT_APIURL": "{BASEURL}/oas-api",
    "APITMFURL": "{BASEURL}/tmf-api",
    "OAUTH_CONFIG" : {
        "issuer": "{BASEURL}/auth/realms/openslice",
        "loginUrl": "{BASEURL}/auth/realms/openslice/protocol/openid-connect/auth",
        "tokenEndpoint": "{BASEURL}/auth/realms/openslice/protocol/openid-connect/token",
        "userinfoEndpoint": "{BASEURL}/auth/realms/openslice/protocol/openid-connect/userinfo",
        "redirectUri": "{BASEURL}/redirect",
        "logoutUrl": "{BASEURL}/auth/realms/openslice/protocol/openid-connect/logout", 
        "postLogoutRedirectUri": "{BASEURL}",

        "responseType": "code",
        "oidc": false,
        "clientId": "osapiWebClientId",
        "dummyClientSecret": "secret",

        "requireHttps": false,
        "useHttpBasicAuth": true,
        "clearHashAfterLogin": false,

        "showDebugInformation": true
    }
}

The {BASEURL} placeholder in the file automatically detects the Origin (Protocol://Domain:Port) of the deployment and applies it to every respective property. E.g. If you are attempting a local deployment of OpenSlice, then {BASEURL} is automatically translated to "http://localhost". Similarly, you may use {BASEURL} to translate to a public deployment configuration, e.g. "https://portal.openslice.eu".

If further customization beyond the provided defaults is needed for branding (Logo, Footer), then config.theming.json needs to be created in the kubernetes/helm/openslice/files/org.etsi.osl.tmf.web/src/assets/config directory, as follows:

# Starting from the root project directory
cd kubernetes/helm/openslice/files/org.etsi.osl.tmf.web/src/assets/config
sudo cp config.theming.default.json config.theming.json

Deploy the Helm Chart

After configuring the services and editing the values.yaml file accordingly, you can run the helm install command.

cd kubernetes/helm/openslice/
helm install myopenslice . --namespace openslice --create-namespace
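After the installation you can verify the release and, after any later change to values.yaml, roll it out with an upgrade (the release and namespace names follow the install command above):

```shell
# Check the release
helm status myopenslice -n openslice

# Apply configuration changes after editing values.yaml
helm upgrade myopenslice . --namespace openslice
```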

Validating deployments and container monitoring

In a Kubernetes environment, you can monitor the status of your deployments and containers using kubectl, the Kubernetes command-line tool, which provides powerful capabilities for inspecting the state of resources in your cluster.

Checking the Status of your application's deployment

To check the status of your deployment, use the following commands. The output should be similar:


kubectl get pods -n openslice

NAME                      READY   STATUS    RESTARTS   AGE
myopenslice-artemis       1/1     Running   0          6m28s
myopenslice-blockdiag     1/1     Running   0          6m28s
myopenslice-bugzilla      1/1     Running   0          6m28s
myopenslice-centrallog    1/1     Running   0          6m28s
myopenslice-cridge        1/1     Running   0          6m28s
myopenslice-keycloak      1/1     Running   0          6m28s
myopenslice-kroki         1/1     Running   0          6m28s
myopenslice-manoclient    1/1     Running   0          6m28s
myopenslice-oasapi        1/1     Running   0          6m28s
myopenslice-osom          1/1     Running   0          6m28s
myopenslice-osportalapi   1/1     Running   0          6m28s
myopenslice-osscapi       1/1     Running   0          6m28s
myopenslice-portalweb     1/1     Running   0          6m28s
myopenslice-tmfweb        1/1     Running   0          6m28s

kubectl get deployments -n openslice

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
myopenslice-artemis       1/1     1            1           7m17s
myopenslice-blockdiag     1/1     1            1           7m17s
myopenslice-bugzilla      1/1     1            1           7m17s
myopenslice-centrallog    1/1     1            1           7m17s
myopenslice-cridge        1/1     1            1           7m17s
myopenslice-keycloak      1/1     1            1           7m17s
myopenslice-kroki         1/1     1            1           7m17s
myopenslice-manoclient    1/1     1            1           7m17s
myopenslice-oasapi        1/1     1            1           7m17s
myopenslice-osom          1/1     1            1           7m17s
myopenslice-osportalapi   1/1     1            1           7m17s
myopenslice-osscapi       1/1     1            1           7m17s
myopenslice-portalweb     1/1     1            1           7m17s
myopenslice-tmfweb        1/1     1            1           7m17s

kubectl get services -n openslice

NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                        AGE
myopenslice-artemis       ClusterIP   10.101.128.223   <none>        8161/TCP,61616/TCP,61613/TCP   7m43s
myopenslice-blockdiag     ClusterIP   10.109.196.90    <none>        8001/TCP                       7m43s
myopenslice-bugzilla      ClusterIP   10.107.10.101    <none>        13010/TCP                      7m43s
myopenslice-centrallog    ClusterIP   10.109.84.33     <none>        13013/TCP                      7m43s
myopenslice-keycloak      ClusterIP   10.104.172.73    <none>        8080/TCP,8443/TCP              7m43s
myopenslice-kroki         ClusterIP   10.106.92.111    <none>        8000/TCP                       7m43s
myopenslice-manoclient    ClusterIP   10.100.143.154   <none>        13011/TCP                      7m43s
myopenslice-mysql         ClusterIP   10.108.206.75    <none>        3306/TCP                       7m43s
myopenslice-oasapi        ClusterIP   10.100.107.66    <none>        13101/TCP                      7m43s
myopenslice-osom          ClusterIP   10.97.88.133     <none>        13100/TCP                      7m43s
myopenslice-osportalapi   ClusterIP   10.111.212.76    <none>        13000/TCP                      7m43s
myopenslice-osscapi       ClusterIP   10.101.84.220    <none>        13082/TCP                      7m43s
myopenslice-portalweb     ClusterIP   10.101.16.112    <none>        80/TCP                         7m43s
myopenslice-tmfweb        ClusterIP   10.101.157.185   <none>        80/TCP                         7m43s

Accessing Logs for Troubleshooting

If a pod is not in the expected state, you can access its logs for troubleshooting:

kubectl logs <pod-name> -n openslice
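A couple of further indicative troubleshooting commands:

```shell
# Stream logs continuously
kubectl logs <pod-name> -n openslice --follow

# Inspect events and state when a pod does not start at all
kubectl describe pod <pod-name> -n openslice
```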

Post installation steps

After the successful deployment of OpenSlice, this section is mandatory to ensure the end-to-end user experience. It contains crucial configuration regarding authentication and user creation.

Configure Keycloak server

The Keycloak server manages authentication and runs in a container on port 8080. It is also proxied to your host via the ingress resource under http://your-domain/auth.

  • Navigate to http://your-domain/auth/ or https://your-domain/auth/ (http://ipaddress:8080/auth/ or https://ipaddress:8443/auth/ are directly accessible without the proxy)

  • Navigate to Administration Console

  • Login with the credentials from section Keycloak Configuration. Default values are:

    • user: admin
    • password: Pa55w0rd

If you are running over HTTP and get the message HTTPS required, resolve it as follows:

  • Select the master realm from top left corner
  • Go to login Tab and select "Require SSL": None
  • Repeat for realm Openslice
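As an indicative alternative to the console, the same change can be scripted with the Keycloak admin CLI from inside the Keycloak pod (the deployment name, binary path, and credentials below are assumptions that depend on your release name and Keycloak image):

```shell
kubectl exec -n openslice deploy/myopenslice-keycloak -- bash -c '
  /opt/jboss/keycloak/bin/kcadm.sh config credentials \
    --server http://localhost:8080/auth --realm master \
    --user admin --password Pa55w0rd
  /opt/jboss/keycloak/bin/kcadm.sh update realms/master -s sslRequired=NONE
  /opt/jboss/keycloak/bin/kcadm.sh update realms/openslice -s sslRequired=NONE
'
```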

If you are running over HTTPS, "Require SSL" can be left at its default value ("External requests").

1. Configure email

Keycloak allows new users to register. Subsequently, this will also allow new users to register to the OpenSlice portal.

Navigate to realm Openslice > Realm Settings > Login Tab > check User registration, Verify email, Forgot password etc.

Finally, enter the details of the mail server at the Email Tab.

Email configuration is optional for test runs, but if not provided the above functionalities (e.g. external user registration) will not be possible.

2. Add an OpenSlice admin user

This step is mandatory in order to access the OpenSlice Web UI. To add an OpenSlice admin user you must:

  • Navigate to realm Openslice > Users > Add user
  • Set a password
  • Upon creation, navigate to Role Mappings and add ADMIN to the Assigned Roles list

That user is different from the Keycloak admin user. It is required to log in and browse the OpenSlice Web UI. The ADMIN role grants full access through the OpenSlice UI, so such a user is always required.

NFV Orchestrator Configuration

After successfully deploying and configuring OpenSlice, you may configure its environment (e.g. the NFVO) that will facilitate the deployment of NFV artifacts.

See NFV Orchestrator Configuration.