
3 posts tagged with "helm"


· 8 min read
Andrew Block

Backstage is a framework for building developer portals and has become an important tool that complements establishing an Internal Developer Platform (IDP). Many of the organizations seeking the benefits that Backstage and an IDP can provide are also using Kubernetes as a platform for building and running containerized workloads. As described in previous articles (Part 1, Part 2), the Backstage Helm chart not only simplifies the process of deploying Backstage to Kubernetes, but is also flexible enough to adapt to a variety of conditions and constraints.

While Kubernetes has become the de facto container orchestration platform, there are a number of Kubernetes distributions on the market. The Janus Project is an upstream community sponsored by Red Hat, and OpenShift (along with its upstream, OKD) is Red Hat's Kubernetes distribution. The features and capabilities included within OpenShift benefit greatly from a framework like Backstage, as it enables end users to simplify their interactions with each of these services. This article describes the considerations that must be accounted for, and the process for deploying the Backstage Helm chart to OpenShift.

OpenShift Environment Considerations

As with any target environment, there are a variety of considerations that must be accounted for in order to ensure an integration is successful -- OpenShift is no different and the following are the areas that must be addressed prior to deploying the Helm chart:

  • Image Source and Content
  • Ingress Strategy
  • Security

Fortunately, as described in the prior articles, the Backstage Helm chart provides the necessary options to customize the deployment to suit the necessary requirements. These customizations are managed via Helm values and the following sections describe the significance of these areas as well as how they can be accomplished within the Backstage Helm chart.

Image Source and Content

OpenShift encompasses an entire container platform that is built upon certified and vetted content. The majority of this content is sourced from one of the Red Hat managed container registries and includes everything from the foundational platform services to content designed for application developers. A previous article described how the PostgreSQL instance supporting the persistence of Backstage was customized to make use of a PostgreSQL image from Software Collections instead of from Bitnami.

Using a similar approach, the PostgreSQL instance that is deployed as part of the Backstage Helm chart can be configured to use an image from the Red Hat Container Catalog (RHCC) to provide a greater level of security and assurance. Since the officially supported image from Red Hat is the downstream of the Software Collections PostgreSQL image, the only configuration that needs to be modified is the source location, as shown below:

postgresql:
  image:
    registry: registry.redhat.io
    repository: rhel9/postgresql-13
    tag: 1-73

Images originating from Red Hat container registries can be deployed to environments other than OpenShift. However, additional configuration to enable access to the image content needs to be applied, as standard Kubernetes environments do not include the Global Pull Secret that contains the credentials for accessing the image source. The steps for enabling this functionality are beyond the scope of this article, but the Backstage Helm chart does support this capability.
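While the detailed steps are out of scope, the general shape of that configuration can be sketched: a pull secret containing registry.redhat.io credentials is created in the target namespace (for example with kubectl create secret docker-registry), and the chart is pointed at it. The key names and the secret name below are illustrative assumptions and should be checked against the chart's values reference:

```yaml
# Hypothetical values sketch -- "rhcc-pull-secret" is an example secret name
# created separately, not something the chart provides.
backstage:
  image:
    pullSecrets:
      - rhcc-pull-secret
postgresql:
  image:
    pullSecrets:
      - rhcc-pull-secret
```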

Ingress Strategy

Exposing applications externally to enable access by end users or other systems is a common concern when operating in a Kubernetes environment. OpenShift saw the need for this feature from the beginning of its Kubernetes based distribution and has included a component called Routes to provide this capability. Since then, the Kubernetes community has introduced a similar concept called Ingress, which likewise provides support for exposing applications externally.

Given the wide adoption of Ingress in the Kubernetes community, and to give OpenShift users the freedom to choose between the existing Routes approach and the more Kubernetes native Ingress feature, support was added in OpenShift to “upconvert” any Ingress resource deployed within OpenShift to an OpenShift native Route resource. This provides the best of both worlds by giving end users the flexibility to choose the approach with which they feel most comfortable. In addition, the up-conversion can be customized to enable Route specific features, such as specifying the TLS termination type when exposing Ingress resources in a secure fashion. The feature is enabled by specifying the route.openshift.io/termination annotation on the Ingress object itself, and supports the edge, passthrough, and reencrypt termination types.
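For illustration, an Ingress carrying this annotation might look like the following (the resource name, host, and port shown are examples only); when applied to OpenShift, a corresponding edge terminated Route is generated automatically:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backstage
  annotations:
    # Instructs OpenShift to create the generated Route with edge termination
    route.openshift.io/termination: "edge"
spec:
  rules:
    - host: backstage.apps.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backstage
                port:
                  number: 7007
```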

For simplicity in this implementation, TLS is offloaded at the OpenShift router; edge termination can be specified by setting the following within the Backstage Helm values file:

ingress:
  enabled: true
  annotations:
    route.openshift.io/termination: "edge"

By setting this annotation, the resulting Route resource in OpenShift will be configured as a secure route with edge termination so that connections to the Backstage dashboard are secure.

Security

One of the most important aspects of OpenShift is its “secure by default” approach for managing the platform and all of its workloads. By default, OpenShift enforces that workloads conform to certain criteria, including not running with elevated permissions (specifically as the root user) and not requesting access to privileged resources, such as file systems on each container host. This posture is the inverse of a standard deployment of Kubernetes, which does not place such requirements upon workloads. While this does put additional onus on those implementing and managing workloads, it provides a more secure operating environment.

While the Backstage component of the Helm chart itself does not include any parameters that require modification from a security perspective, the included Bitnami postgres Helm chart does specify certain configurations that conflict with OpenShift’s default security profile; specifically, the securityContext properties of the StatefulSet. Fortunately, the Bitnami postgres chart contains options that can be used to modify the default configuration to enable a deployment into OpenShift without further changes. All that needs to be configured is to set enabled: false within the pod level, container level, and default securityContext properties of the values file, as shown below:

postgresql:
  primary:
    securityContext:
      enabled: false
    podSecurityContext:
      enabled: false
    containerSecurityContext:
      enabled: false

Deploying the Backstage Helm Chart

The previous sections covered each of the considerations that must be accounted for, as well as the baseline configurations that need to be applied to a Fedora based container -- whether it be from the upstream Software Collections or from Red Hat’s certified RHEL based images. The following is an encompassing set of Helm values which, when placed in a file called values-openshift.yaml, can be used to deploy a Red Hat based set of content (including both Backstage and PostgreSQL) in a manner that is compatible with an OpenShift environment:

values-openshift.yaml
backstage:
  image:
    registry: ghcr.io
    repository: janus-idp/redhat-backstage-build
    tag: latest
  extraEnvVars:
    - name: 'APP_CONFIG_app_baseUrl'
      value: 'https://{{ .Values.ingress.host }}'
    - name: 'APP_CONFIG_backend_baseUrl'
      value: 'https://{{ .Values.ingress.host }}'
    - name: 'APP_CONFIG_backend_cors_origin'
      value: 'https://{{ .Values.ingress.host }}'
    - name: 'APP_CONFIG_backend_database_client'
      value: pg
    - name: 'APP_CONFIG_backend_database_connection_host'
      value: '{{ include "backstage.postgresql.host" . }}'
    - name: 'APP_CONFIG_backend_database_connection_port'
      value: '5432'
    - name: 'APP_CONFIG_backend_database_connection_user'
      value: '{{ .Values.postgresql.auth.username }}'
    - name: 'APP_CONFIG_backend_database_connection_password'
      valueFrom:
        secretKeyRef:
          key: postgres-password
          name: '{{ include "backstage.postgresql.fullname" . }}'
  installDir: /opt/app-root/src

ingress:
  enabled: true
  host: backstage.apps.example.com
  annotations:
    route.openshift.io/termination: 'edge'

postgresql:
  enabled: true
  database: backstage
  postgresqlDataDir: /var/lib/pgsql/data/userdata
  auth:
    username: postgres
    database: backstage
  image:
    registry: registry.redhat.io
    repository: rhel9/postgresql-13
    tag: 1-73
  primary:
    securityContext:
      enabled: false
    podSecurityContext:
      enabled: false
    containerSecurityContext:
      enabled: false
    persistence:
      enabled: true
      mountPath: /var/lib/pgsql/data
    extraEnvVars:
      - name: POSTGRESQL_ADMIN_PASSWORD
        valueFrom:
          secretKeyRef:
            key: postgres-password
            name: backstage-postgresql

Be sure to update the ingress.host property with the desired hostname of the exposed Route.

Install the Backstage Helm chart by executing the following command that includes the location of the previously created Values file:

helm install backstage backstage/backstage -f values-openshift.yaml
Note: The prior command assumes that the Helm CLI is installed and that the Backstage Helm repository has been added on the local machine. Consult the prior articles for instructions on configuring these prerequisites.

Once the chart release is successful, confirm not only that both the Backstage and PostgreSQL pods are running, but also that an edge terminated Route has been created to enable external access to the Backstage user interface. Open a web browser to the hostname defined within the Route to confirm that the Backstage user interface can be accessed securely.

With only a few small steps, as demonstrated within this article, and thanks to the Backstage Helm chart, Backstage and its required dependencies can be deployed to an OpenShift environment. In no time at all, teams can begin building and consuming developer portals on a hardened and secure foundation, enabling organizations to realize the benefits offered by Internal Developer Platforms.

· 11 min read
Andrew Block

Getting Backstage up and running takes no time at all thanks to the Backstage Helm chart as described in the first article in this series. However, even though Backstage has been deployed, it is hardly ready in its current state for production use. There are several key factors that need to be addressed:

  1. Exposing Backstage properly outside the cluster
  2. Adding persistence

Ingress

Accessing the Backstage instance as part of the initial deployment made use of the kubectl port-forward command, which is a key tool used as part of the development process. However, to make the deployment more representative of how Backstage would be configured for production, a proper Ingress strategy should be implemented.

Minikube includes a set of features that extend the baseline configuration of Kubernetes, known as addons. Included in the collection of minikube addons is support for deploying the NGINX Ingress Controller to enable more native access to resources within Kubernetes.

Execute the following command to enable the ingress addon which will deploy the NGINX Ingress Controller into a namespace called ingress-nginx:

minikube addons enable ingress

Connecting through the Minikube Ingress Controller

Access to resources deployed within Kubernetes through the ingress controller varies depending on the host operating system. On Linux machines, which can run containers natively, access can be achieved through the IP address in use by the minikube virtual machine. This address can be obtained by running the minikube ip command.

On macOS machines, a tunnel through which connectivity to the ingress controller can be achieved is created using the minikube tunnel command. Since this exposes the tunnel on ports 80 and 443, elevated permissions are needed, and a password prompt will appear requesting permission to access these privileged ports.

Accessing Backstage through the Minikube Ingress Controller

To configure Backstage to expose an Ingress through the newly created ingress controller, update the content of the values that are used for the Backstage Helm chart by creating a new values file called values-minikube-ingress.yaml with the following content:

values-minikube-ingress.yaml
backstage:
  extraEnvVars:
    - name: 'APP_CONFIG_app_baseUrl'
      value: 'http://{{ .Values.ingress.host }}'
    - name: 'APP_CONFIG_backend_baseUrl'
      value: 'http://{{ .Values.ingress.host }}'
    - name: 'APP_CONFIG_backend_cors_origin'
      value: 'http://{{ .Values.ingress.host }}'

ingress:
  enabled: true
  host: localhost

The only noticeable difference between this content and the prior values-minikube-default.yaml file is that the creation of the Ingress resource is enabled, triggered by setting the enabled field within the ingress property to true.

For those making use of a Linux machine, since the ingress controller is accessed through the IP address of the minikube VM, a fake hostname (backstage.minikube.info) can be created by setting the following value in the /etc/hosts file.

$(minikube ip) backstage.minikube.info

Alternatively, a wildcard IP address DNS service, such as nip.io, can be used if there is a desire to avoid modifying the /etc/hosts file.
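As a sketch of the nip.io approach, the hostname given to the ingress.host value simply embeds the minikube IP address (the address shown below is illustrative; substitute the output of minikube ip):

```shell
# Hypothetical example: with a wildcard DNS service such as nip.io, the
# hostname itself embeds the IP address, so no /etc/hosts entry is needed.
ip="192.168.49.2"
host="backstage.${ip}.nip.io"
echo "$host"   # prints: backstage.192.168.49.2.nip.io
```

The resulting name resolves back to the embedded address through the nip.io service, so the Ingress host matches without any local DNS changes.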

Regardless of the approach, when using a Linux machine, update the value of the host field within the ingress property in the values-minikube-ingress.yaml file to the hostname configured above.

Create a new Helm release by performing an upgrade of the prior release, providing the updated values file as shown below:

helm upgrade -n backstage backstage backstage/backstage -f values-minikube-ingress.yaml

Once the release is complete, a new Ingress resource will have been created in the backstage namespace to expose Backstage outside the cluster.

Navigate to the hostname that was defined in the Ingress resource. Due to caching techniques employed by many web browsers, a “hard reload” of the page may be required to ensure the updated set of JavaScript resources from the Backstage instance is loaded. Consult the documentation of the browser being used for the necessary steps.

Exposing and accessing Backstage through a Kubernetes Ingress better aligns with how one would configure Backstage for a production deployment.

Persistence

To simplify the getting started experience, Backstage makes use of an in-memory SQLite database as its default datastore. While this reduces the initial barrier to entry, it also limits how far one can go in their Backstage journey. Limitations of this implementation include the inability to achieve high availability, as each instance of Backstage has its own independent store, and existing data is lost if the pod is restarted or deleted.

PostgreSQL is the database backend used by Backstage to store data persistently, and the Backstage Helm chart includes the ability to provision a deployment of PostgreSQL to support Backstage. The chart declares the postgres Helm chart from Bitnami as a dependency chart which is, as demonstrated previously, disabled by default. Similar to how ingress was enabled in the prior section, enabling and configuring the integration between PostgreSQL and Backstage can be achieved through the Backstage Helm chart.
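For context, a dependency of this kind is declared in the chart's Chart.yaml along the following lines (the version constraint shown is illustrative, not the chart's actual pin); the condition field is what ties installation of the subchart to the postgresql.enabled value:

```yaml
dependencies:
  - name: postgresql
    version: "12.x.x"                              # example version constraint
    repository: https://charts.bitnami.com/bitnami
    condition: postgresql.enabled                  # subchart installs only when true
```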

Create a new values file called values-minikube-persistent.yaml with the following content:

values-minikube-persistent.yaml
backstage:
  extraEnvVars:
    - name: 'APP_CONFIG_app_baseUrl'
      value: 'http://{{ .Values.ingress.host }}'
    - name: 'APP_CONFIG_backend_baseUrl'
      value: 'http://{{ .Values.ingress.host }}'
    - name: 'APP_CONFIG_backend_cors_origin'
      value: 'http://{{ .Values.ingress.host }}'
  args:
    - '--config'
    - '/app/app-config.yaml'
    - '--config'
    - '/app/app-config.production.yaml'

ingress:
  enabled: true
  host: localhost

postgresql:
  enabled: true
  auth:
    username: bn_backstage
    database: backstage
  primary:
    persistence:
      enabled: true

While no changes occurred within the set of extra environment variables, several new properties were added; specifically args and postgresql. Setting postgresql.enabled will trigger the installation of the postgresql chart as defined in the Chart.yaml file. The postgresql section also specifies the username and database that should be created on the PostgreSQL instance along with the creation of a PersistentVolumeClaim within Kubernetes that will serve as the backing storage for the database.

The backstage.args property specifies the container arguments that are injected into the Backstage container. When an installation of Backstage is created, several sample application configuration files are generated: one that contains the baseline configuration for Backstage, one featuring configurations designed for production use, and one for configurations used during local development. The production configuration file (app-config.production.yaml) includes the necessary configurations to enable Backstage to use PostgreSQL as its persistence store. The locations of these configuration files are added as arguments using the --config flag, as declared in the values file sample above.
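For reference, the portion of app-config.production.yaml relevant here looks roughly like the following (the exact file generated by create-app varies by Backstage version); it switches the database client to PostgreSQL and reads connection details from environment variables:

```yaml
backend:
  database:
    client: pg
    connection:
      host: ${POSTGRES_HOST}
      port: ${POSTGRES_PORT}
      user: ${POSTGRES_USER}
      password: ${POSTGRES_PASSWORD}
```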

Several options are available within the Bitnami postgres chart to set the password for the newly created database user, including providing it within a Kubernetes Secret, setting it explicitly as a Helm value, or having one automatically generated, which is the option chosen in this case. There is one disadvantage to having the chart automatically generate the password: the helm upgrade command cannot be used, as each invocation would result in a newly generated password, invalidating the prior value and causing issues for clients looking to connect.
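If retaining the ability to run helm upgrade is important, the explicit option can be used instead; as a sketch, the Bitnami chart accepts the password underneath the auth block (the value shown is a placeholder, and real credentials should not be committed to a values file):

```yaml
postgresql:
  auth:
    username: bn_backstage
    database: backstage
    password: example-not-for-production   # placeholder value
```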

As a result, uninstall the previously created chart from the minikube cluster by executing the following command:

helm uninstall -n backstage backstage

With the chart now uninstalled and the resources removed from the cluster, install a brand new instance of Backstage with persistence support by executing the following command. In addition, be sure to update the host field underneath the ingress property if a different value was used from the Ingress section of this article.

helm install -n backstage backstage backstage/backstage -f values-minikube-persistent.yaml

Once the release is complete, there will be two pods in the backstage namespace: one for the instance of backstage and the other for PostgreSQL. Confirm that the Backstage instance has connected and created the necessary tables within PostgreSQL by executing the following command:

kubectl exec -it -n backstage statefulset/backstage-postgresql -- sh -c "export PGPASSWORD=$(kubectl get secrets -n backstage backstage-postgresql  -o jsonpath='{.data.postgres-password}' | base64 -d) && psql --host localhost -U postgres -c '\l'"

Several databases prefixed with “backstage” should be present; once confirmed, one has the assurance that Backstage is making use of the PostgreSQL instance and storing data persistently.
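As a sketch of what the PGPASSWORD lookup in the command above is doing: Secret data is stored base64-encoded, so the jsonpath output must be decoded before it can be used (the password here is a stand-in value, not the real generated one):

```shell
# Simulate the value returned by the jsonpath query against the Secret.
encoded=$(printf 'supersecret' | base64)   # stands in for .data.postgres-password
# Decode it before handing it to psql as PGPASSWORD.
printf '%s' "$encoded" | base64 -d         # prints: supersecret
```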

Replacing the Default Images

One of the most common tasks for anyone working with Backstage is the customization of the components installed within the Backstage instance. This typically includes the addition of plugins that extend the baseline functionality of Backstage to enable integration with external systems. The Backstage Helm chart references a container image from the Backstage community which, in many cases, does not include all of the components that may be needed by consumers. As a result, those looking to use Backstage in their own environment will need to produce their own image of Backstage and store it in a container registry. The location of this image can then be specified as a set of Helm values.

The Janus community also produces a minimal container image, similar to the upstream Backstage community's, providing an instance of Backstage built from a Universal Base Image (UBI) base. Switching from the upstream Backstage image to the Janus project image serves to demonstrate the common task of replacing where the Backstage container image is sourced from.

The following values illustrate how to switch to the Janus provided image. Keep in mind that in practice, you will most likely need to use an image of your own with your specific customizations, but this provides a good example for understanding the process involved.

backstage:
  image:
    registry: ghcr.io
    repository: janus-idp/redhat-backstage-build
    tag: latest

Any of the Helm values files that were provided in the prior sections can be used to demonstrate substituting the location of the Backstage image.

Each Backstage image can feature a different database schema; therefore, if an existing Helm release was previously deployed with postgresql enabled, uninstall it so that the new configurations can be applied. In addition, if persistent storage was used to support PostgreSQL, the PersistentVolumeClaim that was created needs to be manually removed. This can be achieved using the following command:

kubectl delete pvc -n backstage -l=app.kubernetes.io/name=postgresql

Once all of the resources have been removed, use the Backstage Helm chart to deploy Backstage with the updated set of values. Confirm the image associated with the Backstage deployment is leveraging the custom image by executing the following command:

kubectl get deployment -n backstage backstage -o jsonpath='{ .spec.template.spec.containers[?(@.name=="backstage-backend")].image }'

Customizing the PostgreSQL Configuration

Similar to the Backstage image itself, the image associated with the PostgreSQL instance can also be customized if there is a desire to use an image other than the one provided by the Bitnami postgres Helm chart. Given that the Janus community is a Red Hat sponsored initiative, switching to a PostgreSQL image provided by Red Hat Software Collections is a common task. Fortunately, the combination of features provided by the Backstage and Bitnami postgres Helm charts enables not only customizing the image location, but also applying the additional configurations needed to support an alternate image.

Create a new values file called values-minikube-persistent-scl.yaml with the following content:

backstage:
  extraEnvVars:
    - name: "APP_CONFIG_app_baseUrl"
      value: "http://{{ .Values.ingress.host }}"
    - name: "APP_CONFIG_backend_baseUrl"
      value: "http://{{ .Values.ingress.host }}"
    - name: "APP_CONFIG_backend_cors_origin"
      value: "http://{{ .Values.ingress.host }}"
    - name: "APP_CONFIG_backend_database_connection_password" (1)
      valueFrom:
        secretKeyRef:
          key: postgres-password
          name: "{{ include \"backstage.postgresql.fullname\" . }}"
  args:
    - "--config"
    - "/app/app-config.yaml"
    - "--config"
    - "/app/app-config.production.yaml"

ingress:
  enabled: true
  host: localhost

postgresql:
  enabled: true
  database: backstage
  postgresqlDataDir: /var/lib/pgsql/data/userdata (2)
  auth: (3)
    username: postgres
    database: backstage
  image: (4)
    registry: quay.io
    repository: fedora/postgresql-13
    tag: "13"
  primary:
    securityContext:
      enabled: false
    podSecurityContext:
      enabled: false
    containerSecurityContext:
      enabled: false
    persistence:
      enabled: true
      mountPath: /var/lib/pgsql/data
    extraEnvVars:
      - name: POSTGRESQL_ADMIN_PASSWORD (5)
        valueFrom:
          secretKeyRef:
            key: postgres-password
            name: backstage-postgresql

There are quite a number of additional configurations that are included within this values file. Let's break down some of the most important configurations to illustrate their significance:

  1. The Software Collections PostgreSQL image manages permissions differently than the Bitnami PostgreSQL image, and the key within the generated Secret differs, so the password must be provided to the Backstage container explicitly
  2. Specifies the location of where PostgreSQL stores persistent content
  3. The name of the database to use and user to authenticate against
  4. The location of the Software Collections image
  5. The environment variable used by the Software Collections PostgreSQL image to signify the password for the postgres admin account

Uninstall and reinstall the Backstage Helm chart once again so that the Software Collections image will be used to support PostgreSQL.

As demonstrated throughout this article, the Helm chart for Backstage provides a robust set of capabilities in order to support customizing and orchestrating a deployment to a Kubernetes environment. By simplifying the steps that it takes to deploy Backstage, the benefits when establishing an Internal Developer Platform can be realized.

· 5 min read
Andrew Block

Red Hat recently announced its intention of joining the Backstage community to help shepherd the adoption of Internal Developer Platforms more broadly. While there are many aspects that one needs to consider when establishing an IDP, where and how the platform will be deployed is certainly near the top of the list. Backstage can be deployed on a variety of target systems ranging from traditional infrastructure (physical servers or virtual machines) to more cloud native options. Given the popularity of Kubernetes these days, it has become a common choice for running applications, and Backstage is no exception to the rule.

The Janus project is Red Hat’s upstream community for running Internal Developer Platforms, and in addition to a series of recently developed Backstage plugins, it has been working with the community to simplify the process of deploying Backstage on Kubernetes. Deploying an application in Kubernetes can take on many forms, and given that the Helm package manager has become the de facto standard for deploying applications on Kubernetes, the Janus project in conjunction with the greater Backstage community have come together to establish a canonical Helm chart for deploying and maintaining Backstage on Kubernetes. This article will describe how easy it is to get started with the Backstage Helm chart so that an instance of Backstage can be up and running on Kubernetes in no time.

Installing Helm

Helm is a versatile tool and has been integrated into a number of popular solutions as its adoption grows. However, the simplest way to demonstrate the use of the Backstage Helm chart is to utilize the standalone command line tool from a local machine. Download and install the Helm CLI from the Helm website using the method of your choosing for the target Operating System.

Once Helm has been installed, add the backstage Helm chart repository and its dependent repository using the following commands:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add backstage https://backstage.github.io/charts
Note: The Backstage Helm chart is also available as an OCI artifact. However, the steps described in this article will focus on installation from a Helm chart repository. Instructions on how to leverage the chart from an OCI registry can be found on the chart's GitHub project repository.

Deploying to Kubernetes

Several options are available for accessing a Kubernetes cluster, ranging from a managed cloud provider to running one locally. Let's use Minikube, a solution for running a Kubernetes cluster locally, as the target environment for deploying the Backstage Helm chart. First, install and configure Minikube on the local machine using the steps described on the Minikube website for the target operating system.

Once Minikube has been installed and configured, start an instance by executing the following command:

minikube start
Note: The Kubernetes CLI (kubectl) may be desired in order to perform commands against the minikube instance. By default, it is not installed when minikube is installed. Follow these steps to configure kubectl on the local machine.

Now that the minikube instance is up and running, the next step is to deploy the Backstage Helm chart to Kubernetes. Regardless of the operating environment that is used for Backstage, there are a few configuration details that need to be specified, particularly the baseUrl that will be used to access the platform. Backstage configuration properties can be provided in several ways, and the Backstage Helm chart (thanks to both Helm’s templating capabilities and the ability to specify parameterized values) includes support for many of the most common types, including environment variables, additional configuration files contained within ConfigMaps, and inline configuration files that are transformed into ConfigMaps.

The most straightforward method for the purposes of this article is to define any configuration properties as environment variables, which are then added to the Backstage container.

Following a similar pattern as described in the documentation related to deploying Backstage to Kubernetes, create a file called values-minikube-default.yaml with the following content:


backstage:
  extraEnvVars:
    - name: 'APP_CONFIG_app_baseUrl'
      value: 'http://{{ .Values.ingress.host }}:7007'
    - name: 'APP_CONFIG_backend_baseUrl'
      value: 'http://{{ .Values.ingress.host }}:7007'
    - name: 'APP_CONFIG_backend_cors_origin'
      value: 'http://{{ .Values.ingress.host }}:7007'

ingress:
  enabled: false
  host: localhost

Environment variables with the prefix APP_CONFIG are interpreted by Backstage as configuration properties, and any entry underneath the extraEnvVars property will be added to the Backstage container. The full list of ways Backstage configuration properties can be defined can be found here. Also note that, by default, the Backstage Helm chart creates an Ingress resource to expose Backstage outside of the Kubernetes cluster; however, minikube does not include an Ingress controller in its default state. To access Backstage, the port-forward capability of kubectl will be used instead.
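The naming convention can be illustrated with a small sketch (this is not Backstage's actual parser, just a demonstration of the mapping): each underscore after the APP_CONFIG_ prefix denotes one level of nesting in the configuration tree, so APP_CONFIG_backend_cors_origin sets the backend.cors.origin property.

```shell
# Strip the APP_CONFIG_ prefix, then treat underscores as nesting separators.
name="APP_CONFIG_backend_cors_origin"
key=$(printf '%s' "${name#APP_CONFIG_}" | tr '_' '.')
echo "$key"   # prints: backend.cors.origin
```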

Deploy Backstage to minikube by executing the following command, specifying the values file created previously:

helm install -n backstage --create-namespace backstage backstage/backstage -f values-minikube-default.yaml

The preceding command deploys Backstage in a new namespace called backstage. Confirm the Backstage pod is running by executing the following command:

kubectl get pods -n backstage

Now, forward a local port to gain access to the Backstage service from the local machine:

kubectl port-forward -n backstage svc/backstage 7007:7007

Open a web browser and navigate to http://localhost:7007 to view the deployed instance of Backstage.

And just like that, after only a few steps, Backstage has been deployed to Kubernetes. Establishing an instance of Backstage within a Kubernetes environment is just the beginning of the journey towards achieving a robust developer platform within an organization. With the help of the Backstage Helm chart, realizing this goal becomes much more attainable.