OpenSlice Deployment Guide with Kubernetes
Intended Audience: OpenSlice administrators
Requirements
Hardware requirements
| Minimum Hardware Requirements | Recommended Hardware Requirements |
|---|---|
| 4 CPU cores | 8 CPU cores |
| 8 GB RAM | 16 GB RAM |
| 30 GB storage | 50 GB storage |
Software Requirements
- git: For cloning the project repository.
- Kubernetes: A running cluster where OpenSlice will be deployed.
  - Disclaimer: The current manual setup of Persistent Volumes using `hostPath` is designed to operate with only a single worker node. This setup will not support data persistence if a pod is rescheduled to another node.
- Helm: For managing the deployment of OpenSlice.
- Ingress Controller: Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting. An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic. You must have an Ingress controller to satisfy an Ingress.
- Nginx Ingress Controller (Kubernetes Community Edition): The ingress resource is configured to use an Nginx type ingress controller.
  - If you need to expose the message bus service (Artemis), which communicates over TCP, you must use version >= 1.9.13 of the Nginx Ingress Controller (also a prerequisite for managing multiple Kubernetes clusters), as this version or higher includes the required functionality to handle TCP services. Otherwise, earlier versions may suffice depending on your configuration.
  - To install or upgrade to the required version, run the following command:

    ```bash
    helm upgrade nginx-ingress ingress-nginx/ingress-nginx --namespace ingress \
      --set tcp.61616="<openslice-namespace>/<openslice-helm-release-name>-artemis:61616"
    ```

    Replace `<openslice-namespace>` and `<openslice-helm-release-name>` with the namespace and name of your OpenSlice Helm release. A concrete example follows this list.
  - More details regarding the Nginx Ingress Controller (Kubernetes Community Edition) can be found here.
- Other Ingress Controller: For non-Nginx ingress controllers, modify `[repo-root]/kubernetes/helm/openslice/templates/openslice-ingress.yaml` to meet your controller's requirements.
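As referenced above, a concrete example of the upgrade command, assuming the namespace `openslice` and the Helm release name `myopenslice` used later in this guide:

```bash
helm upgrade nginx-ingress ingress-nginx/ingress-nginx --namespace ingress \
  --set tcp.61616="openslice/myopenslice-artemis:61616"
```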
Exposure
Option 1 - Load balancer
- Network Load Balancer: Required for exposing the service (e.g., GCP, AWS, Azure, MetalLB).
- Domain/IP Address: Necessary for accessing the application. This should be configured in `[repo-root]/kubernetes/helm/openslice/values.yaml` under `rooturl`.
Option 2 - Ingress
- Ingress Controller with NodePort: You can expose the application using the NodePort of the Ingress Controller's service.
- IP Address and Port: Use the IP address of the master node and the assigned NodePort to access the application. This should be configured in `[repo-root]/kubernetes/helm/openslice/values.yaml` under `rooturl`. For example:

```yaml
rooturl: http://<master-node-ip>:<nodeport>
```
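To discover the assigned NodePort, you can inspect the ingress controller's service; a minimal sketch, assuming the controller runs in the `ingress` namespace as in the Nginx command above:

```bash
# The PORT(S) column shows mappings such as 80:31080/TCP; 31080 is the NodePort
kubectl get svc -n ingress
```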
Additional Configuration
- Storage Class: In a production environment, specify your `storageClass` in `[repo-root]/kubernetes/helm/openslice/values.yaml` under `storageClass`. If not defined, PVs will be created and managed manually.
  - Disclaimer: Before deploying, confirm that your storage system supports claims of one 10Gi and two 1Gi volumes.
Preparing the environment
1. Setting Up A Kubernetes Cluster
Refer to the official Kubernetes documentation for setting up a cluster. Ensure your cluster meets the hardware requirements specified above.
2. Installing Helm
Helm must be installed on your machine to deploy OpenSlice via Helm charts. Follow the official Helm installation guide.
Downloading the project
1. Create a new folder to download the project
```bash
mkdir openslice
cd openslice
```
2. Download the project code
Clone the project code from the GitLab repository. Note: This process will be simplified once the charts are published in the GitLab registry, requiring only the chart to be pulled.
```bash
git clone https://labs.etsi.org/rep/osl/code/org.etsi.osl.main.git
cd org.etsi.osl.main/kubernetes/helm/openslice/
```
3. Prerequisites before deployment
Before deploying the Helm chart, ensure you have configured the necessary components as detailed in the following section, i.e. Configure Helm Chart. By default, the main branch is selected for deployment.
We recommend:
- the main branch for the most stable experience, and
- the develop branch for the latest features (for a develop branch installation, it is strongly advised that you also follow the develop documentation); see the sketch below for switching branches.
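A sketch for switching branches, assuming you are inside the cloned org.etsi.osl.main directory:

```bash
# Switch from the default main branch to develop for the latest features
git checkout develop
```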
Configure Helm Chart
When deploying OpenSlice with Helm, service configurations are handled through the `values.yaml` file. This file allows you to define all necessary configurations for your deployment, including database credentials, service URLs, and logging levels. Below are examples of how to configure your services in Helm based on your provided values.
Configuring Services
1. Database Configuration
To configure MySQL and other related services, you can directly set the values in your `values.yaml` file under the `oscreds` and `mysql` sections. For example:
```yaml
oscreds:
  mysql:
    username: "root"
    password: "letmein"
    openslicedb: "osdb"
    keycloak:
      database: "keycloak"
      username: "keycloak"
      password: "password"
      adminpassword: "Pa55w0rd"
    portal:
      database: "osdb"
      username: "portaluser"
      password: "12345"
```
2. Keycloak Configuration
Keycloak settings, including the database and admin password, are part of the `oscreds.mysql.keycloak` section. If you need to adjust Keycloak-specific settings like realms or client configurations, you'll likely need to customize your Helm chart further or manage these settings directly within Keycloak after deployment. The Keycloak realm configuration that is imported by default can be found under `kubernetes/helm/openslice/files/keycloak-init/realm-export.json`.
```yaml
oscreds:
  mysql:
    keycloak:
      database: "keycloak"
      username: "keycloak"
      password: "password"
      adminpassword: "Pa55w0rd"
```
3. CRIDGE Configuration
To create and manage Kubernetes Custom Resources (CRs), you have to install and configure the CRIDGE component.
For CRIDGE to work properly, you need to provide a cluster-wide scope kubeconfig file (typically located in the `/home/{user}/.kube` directory of the Kubernetes Cluster's host). This kubeconfig file allows CRIDGE to communicate with your Kubernetes cluster.
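Before proceeding, you may want to verify that the kubeconfig actually grants access to the target cluster; a quick sketch:

```bash
# Should list the cluster nodes if the kubeconfig is valid and cluster-wide
kubectl --kubeconfig path/to/kubeconfig.yaml get nodes
```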
There are two ways to install CRIDGE:
3.1 Bundled CRIDGE deployment with the OpenSlice Helm chart (same cluster environment)
By default, the OpenSlice Helm chart also deploys CRIDGE alongside the bundle. To configure CRIDGE, there are three different ways to provide this kubeconfig file during deployment:
- Manual Copy to Helm Files Directory:
  - Copy the kubeconfig file to the directory `org.etsi.osl.main/kubernetes/helm/openslice/files/org.etsi.osl.cridge`.
  - The deployment process will automatically copy the file into the `/root/.kube` directory of the CRIDGE container.
  - Note: This method expects the kubeconfig file to be named exactly `kubeconfig.yaml` in the specified directory.
- Passing the Kubeconfig File Using Helm (`--set-file`):
  - If you do not wish to manually copy the file, you can pass it directly during the Helm installation using the `--set-file` option at the final deployment step:

    ```bash
    --set-file cridge.kubeconfig.raw=path/to/kubeconfig.yaml
    ```

  - This method reads the specified kubeconfig file and mounts it into the CRIDGE container during deployment.
- Passing a Base64-Encoded Kubeconfig Using Helm (`--set`):
  - Alternatively, you can pass the kubeconfig as a base64-encoded string during the Helm installation using the `--set` option at the final deployment step:

    ```bash
    --set cridge.kubeconfig.base64="$(base64 path/to/kubeconfig.yaml)"
    ```

  - This method encodes the kubeconfig content and passes it directly to the CRIDGE container.
Note: Regardless of the method you choose, if you're using a non-standard kubeconfig file name, make sure to adjust the references or rename the file as needed.
3.2 Standalone CRIDGE deployment
There can be cases where a separate deployment of CRIDGE, apart from the bundled OpenSlice deployment, may be needed. These cases comprise:
- management of a remote cluster, different from the one where OpenSlice is installed
- more control over the component (e.g. multiple component instances / clusters)
In this case, you first have to disable CRIDGE from deploying with the rest of OpenSlice. To do so, in the `values.yaml` of the OpenSlice Helm chart, change the `cridge.enabled` flag to `false`:
```yaml
cridge:
  enabled: false
```
Next, clone the CRIDGE project from GitLab, which also includes the respective standalone Helm chart.
```bash
git clone https://labs.etsi.org/rep/osl/code/org.etsi.osl.cridge.git
cd org.etsi.osl.cridge/helm/cridge/
```
Similarly, to configure CRIDGE, there are three different ways to provide this kubeconfig file during deployment:
- Manual Copy to Helm Files Directory:
  - Copy the kubeconfig file to the directory `org.etsi.osl.cridge/helm/cridge/files/org.etsi.osl.cridge`.
  - The deployment process will automatically copy the file into the `/root/.kube` directory of the CRIDGE container.
  - Note: This method expects the kubeconfig file to be named exactly `kubeconfig.yaml` in the specified directory.
- Passing the Kubeconfig File Using Helm (`--set-file`):
  - If you do not wish to manually copy the file, you can pass it directly during the Helm installation using the `--set-file` option:

    ```bash
    helm install cridge-release . --set-file kubeconfig.raw=path/to/kubeconfig.yaml
    ```

  - This method reads the specified kubeconfig file and mounts it into the CRIDGE container during deployment.
- Passing a Base64-Encoded Kubeconfig Using Helm (`--set`):
  - Alternatively, you can pass the kubeconfig as a base64-encoded string:

    ```bash
    helm install cridge-release . --set kubeconfig.base64="$(base64 path/to/kubeconfig.yaml)"
    ```

  - This method encodes the kubeconfig content and passes it directly to the CRIDGE container.
Note: Regardless of the method you choose, if you're using a non-standard kubeconfig file name, make sure to adjust the references or rename the file as needed.
Important Note: If you are deploying CRIDGE in the same cluster and namespace as OpenSlice, no additional configuration is required for the message bus broker URL, and OpenSlice communicates with CRIDGE directly. However, if CRIDGE is installed in a separate Kubernetes cluster from the one hosting OpenSlice, it is important to configure the `values.yaml` file of the CRIDGE Helm chart to point to the correct message bus broker URL. Please see the Nginx Ingress Controller (Kubernetes Community Edition) configuration above on how to properly expose the message bus in such a scenario.
In the `values.yaml` of the CRIDGE Helm chart, you must set `oscreds.activemq.brokerUrl` to point to the IP address of the ingress controller in the OpenSlice cluster, as shown below:
```yaml
oscreds:
  activemq:
    brokerUrl: "tcp://<openslice-rootURL>:61616?jms.watchTopicAdvisories=false"
```
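As a sanity check, you can test TCP reachability of the exposed broker from the CRIDGE cluster; a sketch assuming netcat is available:

```bash
# Replace <openslice-rootURL> with the OpenSlice ingress IP or domain
nc -vz <openslice-rootURL> 61616
```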
Management of multiple Kubernetes Clusters
OpenSlice also offers management support of multiple Kubernetes Clusters simultaneously.
For this, you will have to replicate the steps in Standalone CRIDGE deployment for every cluster. Each CRIDGE instance will be in charge of managing one Kubernetes Cluster.
4. External Services Configuration
For configuring optional external services like Bugzilla and CentralLog, specify their URLs and credentials in the `values.yaml` file:
```yaml
bugzillaurl: "example.com:443/bugzilla"
bugzillakey: "VH2Vw0iI5aYgALFFzVDWqhACwt6Hu3bXla9kSC1Z"
main_operations_product: "Main Site Operations" # the default product to issue tickets
centrallogurl: "http://elk_ip:elk_port/index_name/_doc"
```
Bugzilla should have the following components under the specified product:
- NSD Deployment Request: Component used to schedule deployment requests
- Onboarding: Issues related to VNF/NSD Onboarding
- Operations Support: Default component for operations support
- Validation: Used to track validation processes of VNFs and NSDs
- VPN Credentials/Access: Used for requesting VPN Credentials/Access
Also in the 'Main Site Operations' product, a version named 'unspecified' must be created.
5. Application and Logging Configuration
Application-specific configurations, such as OAuth client secrets, can be set in the `spring` section:
```yaml
spring:
  oauthClientSecret: "secret"
```
6. Ingress and Root URL
To configure the ingress controller and root URL for OpenSlice, update the `rooturl` field with your ingress load balancer IP or domain. This setting is crucial for external access to your application:
rooturl: "http://openslice.com" # Example domain
# or
rooturl: "http://3.15.198.35:8080" # Example IP with port
7. Persistent Volume for MySQL
For persistent storage, especially for MySQL, define the storage size under the `mysql` section. This ensures that your database retains data across pod restarts and deployments.
```yaml
mysql:
  storage: "10Gi"
```
8. Configuring TCP Forwarding for Artemis
To expose the message bus service (Artemis) via the ingress controller, it's essential to configure TCP traffic forwarding. Artemis listens on port `61616`, and this traffic needs to be directed to the Artemis service within your Kubernetes cluster.
In the Ingress Controller Setup section, you already configured the Nginx ingress controller to handle this TCP forwarding. By setting the rule for port `61616`, traffic arriving at the ingress will be forwarded to the Artemis service defined in your Helm release.
This setup ensures that the message bus service is accessible externally via the ingress controller, completing the necessary configuration for Artemis.
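You can verify the forwarding rule took effect by checking that the ingress controller's service now exposes port 61616; for example:

```bash
# Port 61616 should appear in the PORT(S) column of the controller service
kubectl get svc -n ingress | grep 61616
```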
Configure Web UI
In the folder `kubernetes/helm/openslice/files/org.etsi.osl.portal.web/src/js` you must make a copy of the `config.js.default` file and rename it to `config.js`.
This is mandatory for the configuration file to be discoverable.
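For example, starting from the root project directory:

```bash
cd kubernetes/helm/openslice/files/org.etsi.osl.portal.web/src/js
cp config.js.default config.js
```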
Edit the `config.js` configuration file with your static configuration, if needed.
```js
{
  TITLE: "OpenSlice by ETSI",
  WIKI: "https://osl.etsi.org/documentation/",
  BUGZILLA: "{{ .Values.rooturl }}/bugzilla",
  STATUS: "{{ .Values.rooturl }}/status",
  APIURL: "{{ .Values.rooturl }}",
  WEBURL: "{{ .Values.rooturl }}/nfvportal",
  APIOAUTHURL: "{{ .Values.rooturl }}/auth/realms/openslice",
  APITMFURL: "{{ .Values.rooturl }}/tmf-api/serviceCatalogManagement/v4"
}
```
Configure TMF Web UI
In the folder `kubernetes/helm/openslice/files/org.etsi.osl.tmf.web/src/assets/config` there are 3 files available for configuration:
- config.prod.default.json (Basic information + API configuration)
- theming.default.scss (CSS color palette theming)
- config.theming.default.json (HTML configuration - Logo, Favicon, Footer)
You must make a copy of the following files:
- `config.prod.default.json`, renamed to `config.prod.json`
- `theming.default.scss`, renamed to `theming.scss`
The 2 files above (i.e. `config.prod.json`, `theming.scss`) are essential for the successful deployment of OpenSlice, and executing the above steps is mandatory for the configuration files to be discoverable.
Ensure that you check the `config.prod.json` and `theming.scss` files and readjust to your deployment if needed.
```bash
# Starting from the root project directory
cd kubernetes/helm/openslice/files/org.etsi.osl.tmf.web/src/assets/config
```
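Then create the two required copies described above:

```bash
cp config.prod.default.json config.prod.json
cp theming.default.scss theming.scss
```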
E.g., you may edit the "TITLE", "WIKI", etc. properties with your domain title. Also configure TMF's API and Keycloak's location for the web application, if needed.
```json
{
  "TITLE": "OpenSlice by ETSI",
  "PORTALVERSION": "2024Q2",
  "WIKI": "https://osl.etsi.org/documentation",
  "BUGZILLA": "{BASEURL}/bugzilla/",
  "STATUS": "{BASEURL}/status/",
  "WEBURL": "{BASEURL}",
  "PORTAL_REPO_APIURL": "{BASEURL}/osapi",
  "ASSURANCE_SERVICE_MGMT_APIURL": "{BASEURL}/oas-api",
  "APITMFURL": "{BASEURL}/tmf-api",
  "OAUTH_CONFIG": {
    "issuer": "{BASEURL}/auth/realms/openslice",
    "loginUrl": "{BASEURL}/auth/realms/openslice/protocol/openid-connect/auth",
    "tokenEndpoint": "{BASEURL}/auth/realms/openslice/protocol/openid-connect/token",
    "userinfoEndpoint": "{BASEURL}/auth/realms/openslice/protocol/openid-connect/userinfo",
    "redirectUri": "{BASEURL}/redirect",
    "logoutUrl": "{BASEURL}/auth/realms/openslice/protocol/openid-connect/logout",
    "postLogoutRedirectUri": "{BASEURL}",
    "responseType": "code",
    "oidc": false,
    "clientId": "osapiWebClientId",
    "dummyClientSecret": "secret",
    "requireHttps": false,
    "useHttpBasicAuth": true,
    "clearHashAfterLogin": false,
    "showDebugInformation": true
  }
}
```
The {BASEURL} placeholder in the file automatically detects the Origin (Protocol://Domain:Port) of the deployment and applies it to every respective property. E.g. If you are attempting a local deployment of OpenSlice, then {BASEURL} is automatically translated to "http://localhost". Similarly, you may use {BASEURL} to translate to a public deployment configuration, e.g. "https://portal.openslice.eu".
If further customization, apart from the defaults provided, is needed for branding (Logo, Footer), then `config.theming.json` needs to be created in the `kubernetes/helm/openslice/files/org.etsi.osl.tmf.web/src/assets/config` directory, as follows:
```bash
# Starting from the root project directory
cd kubernetes/helm/openslice/files/org.etsi.osl.tmf.web/src/assets/config
sudo cp config.theming.default.json config.theming.json
```
Deploy the Helm Chart
After configuring the services and editing the `values.yaml` file accordingly, the helm install command can be performed.
```bash
cd kubernetes/helm/openslice/
helm install myopenslice . --namespace openslice --create-namespace
```
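If you opted to pass the CRIDGE kubeconfig at deployment time (see CRIDGE Configuration), append the corresponding flag to the same command; for example:

```bash
helm install myopenslice . --namespace openslice --create-namespace \
  --set-file cridge.kubeconfig.raw=path/to/kubeconfig.yaml
```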
Validating deployments and container monitoring
In a Kubernetes environment, you can monitor the status of your deployments and containers using `kubectl`, the Kubernetes command-line tool, which provides powerful capabilities for inspecting the state of resources in your cluster.
Checking the Status of your application's deployment
To check the status of your deployment, use the following commands. The output should be similar to the following:
```bash
kubectl get pods -n openslice
```

```
NAME                      READY   STATUS    RESTARTS   AGE
myopenslice-artemis       1/1     Running   0          6m28s
myopenslice-blockdiag     1/1     Running   0          6m28s
myopenslice-bugzilla      1/1     Running   0          6m28s
myopenslice-centrallog    1/1     Running   0          6m28s
myopenslice-cridge        1/1     Running   0          6m28s
myopenslice-keycloak      1/1     Running   0          6m28s
myopenslice-kroki         1/1     Running   0          6m28s
myopenslice-manoclient    1/1     Running   0          6m28s
myopenslice-oasapi        1/1     Running   0          6m28s
myopenslice-osom          1/1     Running   0          6m28s
myopenslice-osportalapi   1/1     Running   0          6m28s
myopenslice-osscapi       1/1     Running   0          6m28s
myopenslice-portalweb     1/1     Running   0          6m28s
myopenslice-tmfweb        1/1     Running   0          6m28s
```

(Actual pod names will include replica-set hash suffixes.)
```bash
kubectl get deployments -n openslice
```

```
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
myopenslice-artemis       1/1     1            1           7m17s
myopenslice-blockdiag     1/1     1            1           7m17s
myopenslice-bugzilla      1/1     1            1           7m17s
myopenslice-centrallog    1/1     1            1           7m17s
myopenslice-cridge        1/1     1            1           7m17s
myopenslice-keycloak      1/1     1            1           7m17s
myopenslice-kroki         1/1     1            1           7m17s
myopenslice-manoclient    1/1     1            1           7m17s
myopenslice-oasapi        1/1     1            1           7m17s
myopenslice-osom          1/1     1            1           7m17s
myopenslice-osportalapi   1/1     1            1           7m17s
myopenslice-osscapi       1/1     1            1           7m17s
myopenslice-portalweb     1/1     1            1           7m17s
myopenslice-tmfweb        1/1     1            1           7m17s
```
```bash
kubectl get services -n openslice
```

```
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                        AGE
myopenslice-artemis       ClusterIP   10.101.128.223   <none>        8161/TCP,61616/TCP,61613/TCP   7m43s
myopenslice-blockdiag     ClusterIP   10.109.196.90    <none>        8001/TCP                       7m43s
myopenslice-bugzilla      ClusterIP   10.107.10.101    <none>        13010/TCP                      7m43s
myopenslice-centrallog    ClusterIP   10.109.84.33     <none>        13013/TCP                      7m43s
myopenslice-keycloak      ClusterIP   10.104.172.73    <none>        8080/TCP,8443/TCP              7m43s
myopenslice-kroki         ClusterIP   10.106.92.111    <none>        8000/TCP                       7m43s
myopenslice-manoclient    ClusterIP   10.100.143.154   <none>        13011/TCP                      7m43s
myopenslice-mysql         ClusterIP   10.108.206.75    <none>        3306/TCP                       7m43s
myopenslice-oasapi        ClusterIP   10.100.107.66    <none>        13101/TCP                      7m43s
myopenslice-osom          ClusterIP   10.97.88.133     <none>        13100/TCP                      7m43s
myopenslice-osportalapi   ClusterIP   10.111.212.76    <none>        13000/TCP                      7m43s
myopenslice-osscapi       ClusterIP   10.101.84.220    <none>        13082/TCP                      7m43s
myopenslice-portalweb     ClusterIP   10.101.16.112    <none>        80/TCP                         7m43s
myopenslice-tmfweb        ClusterIP   10.101.157.185   <none>        80/TCP                         7m43s
```
Accessing Logs for Troubleshooting
If a pod is not in the expected state, you can access its logs for troubleshooting:
```bash
kubectl logs <pod-name> -n openslice
```
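For additional detail, you can stream the logs or inspect the pod's events:

```bash
# Stream logs continuously
kubectl logs -f <pod-name> -n openslice
# Show events, container state, and scheduling details
kubectl describe pod <pod-name> -n openslice
```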
Post installation steps
After the successful deployment of OpenSlice, this section is mandatory to ensure the E2E user experience. It contains crucial configuration regarding authentication and user creation.
Configure Keycloak server
The Keycloak server manages authentication and runs in a container on port 8080. It is also proxied to your host via the ingress resource under http://your-domain/auth.
- Navigate to http://your-domain/auth/ or https://your-domain/auth/ (http://ipaddress:8080/auth/ or https://ipaddress:8443/auth/ are directly accessible without the proxy)
- Navigate to Administration Console
- Login with the credentials from section Keycloak Configuration. Default values are:
  - user: admin
  - password: Pa55w0rd
The following applies only if you are running over HTTP and get the message: HTTPS required.
To resolve this issue when running in HTTP:
- Select the master realm from top left corner
- Go to login Tab and select "Require SSL": None
- Repeat for realm Openslice
If you are running over HTTPS, then "Require SSL" can be left at its default value ("external requests").
1. Configure email
Keycloak allows new users to register. Subsequently, this will also allow new users to register to the OpenSlice portal.
Navigate to realm Openslice > Realm Settings > Login Tab > check User registration, Verify email, Forgot password etc.
Finally, enter the details of the mail server at the Email Tab.
Email configuration is optional for test runs, but if not provided the above functionalities (e.g. external user registration) will not be possible.
2. Add an OpenSlice admin user
This step is mandatory so as to access the OpenSlice Web UI. To add an OpenSlice admin user you must:
- Navigate to realm Openslice > Users > Add user
- Set a password
- Upon creation, navigate to Role Mappings and add ADMIN to the Assigned Roles list
That user is different from the Keycloak admin user. It is required to login and browse the OpenSlice Web UI. The ADMIN role guarantees full access through the OpenSlice UI, thus such a user is always required.
NFV Orchestrator Configuration
After successfully deploying and configuring OpenSlice, you may configure its environment (e.g. the NFVO) that will facilitate the deployment of NFV artifacts.