
Future of the Janus IDP community

· 2 min read
Bethany Griggs
Senior Software Engineer, Red Hat Inc.

We wanted to update the Janus community on the current state of the project, and on how our involvement with the upstream Backstage community will affect the Janus community moving forward.

The original intent of the Janus project was to provide a community where Red Hat could collaborate with other teams, customers, partners and interested parties to advance and evolve the Backstage upstream project. Through the Janus community, the project has successfully contributed to a number of key initiatives in the upstream project, such as dynamic plugin support, the upstream Helm chart, and the contribution of a number of plugins. Additionally, we are now an active partner in the Backstage Community Plugins SIG.

The Janus community has made impactful contributions that have shaped the wider platform. Based on this success, we now see a path forward where much of our effort can be consolidated with the upstream. This may, however, render aspects of the Janus project redundant or unnecessary. We anticipate no loss of content or features from the Janus project, but we do expect content to gradually move to new locations, and we will endeavor to keep our community and users updated on when and where content has moved.

The Digital Experience Platform (DXP) Adoption Journey with Red Hat Developer Hub - Part 1

· 4 min read
Rigin Oommen
Senior Software Engineer, DXP, Red Hat Inc.
Mayur Deshmukh
Senior Software Engineer, DXP, Red Hat Inc.
Nilesh Patil
Senior Manager, DXP, Red Hat Inc.
As of July 1, 2024, all Janus IDP blogs have been archived and will no longer be updated. Some information in these posts may be outdated and may not work as described.

The Digital Experience Platform (DXP) team, part of Red Hat Global Engineering, began its journey a year ago with the community version of Backstage. The Digital Experience portfolio includes more than 200 services and applications, many of which are of high and critical business importance and require a robust catalog that can give us insights into the services. The Backstage Service Catalog was a close match and provided a framework for further customization.

The community version of Backstage gave us a good start; however, it required dedicated time and effort to discover, configure, customize, and maintain the instance. This was a time-consuming process that diverted us from our core goal of building developer experience solutions. We were looking for a more stable distribution of Backstage that would free up bandwidth for developers to focus on catalog, plugin, and template development.

Red Hat Developer Hub (RHDH) meets these requirements and scales with our ecosystem. We are still in the process of adopting it, and we want to share our journey with the wider community.

Migration Goals

  1. Default functionalities should behave as expected.
  2. Customizations should be preserved.
  3. Internally developed plugins and interfaces should work.
  4. SCM Integrations should work.

Current Adoption Progress

1. Deployment of RHDH

With the certified RHDH Helm chart, we were able to deploy an out-of-the-box RHDH instance smoothly to Red Hat OpenShift with all the resources needed, such as Routes, Databases, and more. The complexity of manual infrastructure setup was eliminated from the start, which was a great relief.

The refined RHDH documentation helped us at every stage of tuning our changes.

2. SCM Integration

Our engineering ecosystem is distributed across two SCM platforms, GitHub and GitLab. We integrated both SCM systems with RHDH very easily and were able to onboard projects to RHDH.

3. Authentication with the Enterprise Systems

In our community Backstage instance, we used OIDC-based SSO. We were able to migrate this successfully, but some small challenges remain, especially with the login page, and we are still working on resolving them.

Future Plans

1. Dynamic Plugins

In our ongoing adoption journey, we are looking forward to implementing dynamic plugins to enhance the customization and extensibility of our developer ecosystem. The adoption of dynamic plugins will further empower our developers and users, offering new features and capabilities to meet evolving needs. We plan to convert some of our plugins into dynamic ones and integrate them into RHDH in the upcoming weeks.

2. Customization

We will be customizing RHDH to reflect our team identity, especially the Home page and the Plugins page, and will migrate these customizations to RHDH with the support of dynamic plugins.

3. Techdocs

We have spent some time enabling TechDocs, but we are still working on the complete migration of our documents to RHDH. This will be a continued effort for us.

4. Contributing to RHDH

Janus is the upstream community of Red Hat Developer Hub. There are many contribution opportunities, such as plugin development, fixes, and contributing to the IDP initiative. As a team, we have contributed plugin development and fixes to the Janus project.

Benefits and Outcomes

Since our adoption of the Red Hat Developer Hub (RHDH), we have observed several significant benefits and outcomes that have positively impacted our development ecosystem and streamlined our processes. Below are some of the key benefits and anticipated outcomes:

  • Great Support
  • Enhanced Stability and Reliability
  • Streamlined Deployment
  • Improved Documentation
  • Seamless SCM Integration

By sharing these benefits and outlining our adoption roadmap, we hope to provide a comprehensive view of the positive changes and future enhancements that Red Hat Developer Hub (RHDH) brings to an organization. We will share more in Part 2.

Creating and integrating a Backstage Frontend plugin with a Backend plugin

· 12 min read
Sandip Gahlot
Janus Blog Contributor

Introduction

This blog will show how to create a Backstage Frontend plugin and how to integrate it with an existing Backstage Backend plugin. The backend plugin we will use in this article is the Local System Information plugin described in another blog.

We will make some modifications to the backstage backend to wire the backend plugin into the Backstage app and then move on to create a new frontend plugin. Once everything is in place and we have verified the frontend plugin integration with the backend plugin, we will then analyze the code that makes it possible.

Prerequisites

To start, please make sure that the following tasks have already been performed:


Modifying the Backstage backend

The backend plugin exposes the /system-info endpoint that is only available using curl, and is not accessible through the Backstage app by any Frontend plugin. To expose and make use of this backend plugin in our app, we will need to wire the plugin's router into the Backstage backend router.

Exporting the backend plugin router

  • Create a new file named packages/backend/src/plugins/local-system-info.ts with the following content:

    packages/backend/src/plugins/local-system-info.ts
    import { createRouter } from '@internal/plugin-local-system-info-backend';
    import { Router } from 'express';
    import { PluginEnvironment } from '../types';

    export default async function createPlugin(env: PluginEnvironment): Promise<Router> {
      return await createRouter({
        logger: env.logger,
      });
    }
  • Wire the backend plugin router into the Backstage backend router by modifying packages/backend/src/index.ts:

    • Import the newly added file for our backend plugin into the packages/backend/src/index.ts.

    • Then add the plugin into the backend by adding the following line into the main function after the existing addPlugin calls.

      packages/backend/src/index.ts
      import ...
      import sysInfo from './plugins/local-system-info';
      ...
      async function main() {
        ...
        await addPlugin({ plugin: 'search', apiRouter, createEnv, router: search });
        await addPlugin({ plugin: 'sys-info', apiRouter, createEnv, router: sysInfo });
        ...
      }

Modifying the package.json to run the backend plugin

The command that starts Backstage (yarn start) starts both the app and the backend, as well as any other workspaces found in the plugins directory, which includes our new backend plugin.

Ideally, one would want to publish the backend and frontend plugins and then simply install that plugin to use it. But since we are setting up both the backend and frontend plugins in our local development environment, the yarn start command will try to start the app and the backend plugin along with the Backstage backend. Both the backend plugin as well as Backstage backend processes listen on port 7007. This will cause a port conflict when the Backstage app starts.

To resolve this issue, we can either modify the package.json in the root directory to add a new script to only run the app and backend, or run two commands to start the app and backend separately.

Modifying the package.json

Modify the package.json in the root directory by adding the following content in the scripts section (just after the start script entry):

package.json
{
  "name": "root",
  ...
  "scripts": {
    ...
    "start": "turbo run start --parallel",
    "start-dev": "turbo run start --filter=app --filter=backend",
    ...
  },
  ...
}

Alternate method to start Backstage

  • In case one does not want to modify the package.json, an alternative method to start Backstage is to run the following two commands in the root directory of the backstage-showcase repository, in two separate terminals:
yarn workspace app start
yarn workspace backend start

Modifying the backend plugin response

In the plugins/local-system-info-backend/src/service/router.ts file, move all the elements except cpus to the data element so that it is easier to parse and display in the frontend plugin:

plugins/local-system-info-backend/src/service/router.ts
const systemInfo = {
  data: {
    hostname: os.hostname(),
    operatingSystem: os.type(),
    platform: os.platform(),
    release: os.release(),
    uptime: os.uptime(),
    loadavg: os.loadavg(),
    totalMem: os.totalmem(),
    freeMem: os.freemem(),
  },
  cpus: os.cpus(),
};

Verifying that the backend plugin is available through the Backstage app

To verify the changes done to the backend plugin, run the Backstage backend with yarn start-backend in the root directory of the backstage-showcase repository.

Since we have yet to create a frontend plugin, we will still need to use a curl command to invoke the backend plugin. However, we will access the endpoint provided by the backstage backend router similar to how it would be accessed by a frontend plugin:

curl localhost:7007/api/sys-info/system-info | jq
  • When going through the Backstage backend, the endpoint is accessed with an /api prefix (from the CLI or the app)
  • The backend plugin is exposed on the /sys-info main route (which matches the plugin name when we registered the plugin in the packages/backend/src/index.ts file)
  • The API endpoint for providing system information is /system-info
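The URL composition described in the bullets above can be captured in a small helper. This is an illustrative sketch only; `buildPluginUrl` is our own name, not a Backstage API:

```typescript
// Compose the URL at which the Backstage backend exposes a backend
// plugin endpoint: <baseUrl>/api/<pluginId>/<route>.
// buildPluginUrl is a hypothetical helper for illustration only.
export function buildPluginUrl(
  baseUrl: string,
  pluginId: string,
  route: string,
): string {
  const base = baseUrl.replace(/\/+$/, ''); // drop trailing slashes
  const path = route.replace(/^\/+/, ''); // drop leading slashes
  return `${base}/api/${pluginId}/${path}`;
}

// For the sys-info plugin registered above:
console.log(buildPluginUrl('http://localhost:7007', 'sys-info', '/system-info'));
// http://localhost:7007/api/sys-info/system-info
```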

After verifying the endpoint, you can now stop the backend by killing the yarn process.


Creating the Frontend plugin

To create a new frontend plugin, please execute the following command in the root directory of the backstage-showcase repository in a terminal:

yarn new --select plugin

When prompted to enter the ID of the plugin, please provide system-info. Here's the output from running this command in my terminal:

Creating a new Plugin

Verifying the new Frontend plugin

The above command will create a new Backstage frontend plugin and will also add it to the Backstage app. To verify the new plugin, run the app with yarn start-dev in the root directory of the backstage-showcase repository, or by following these steps. Once the app starts up, the new plugin can be verified by navigating to http://localhost:3000/system-info

Integrating the frontend plugin with backend plugin

The newly generated frontend plugin contains some static data that is displayed when navigating to http://localhost:3000/system-info. In this section, we will modify the frontend plugin to invoke the backend plugin API and display the data returned by that API.

Please follow the steps given below to achieve this integration:

  • Delete the following directories:

    plugins/system-info/src/components/ExampleComponent
    plugins/system-info/src/components/ExampleFetchComponent
  • Create a new directory named plugins/system-info/src/components/SystemInfoPage

  • Create a new file named plugins/system-info/src/components/SystemInfoPage/types.ts with the following content:

    plugins/system-info/src/components/SystemInfoPage/types.ts
    import { TableColumn } from '@backstage/core-components';

    export const sysInfoCpuColumns: TableColumn[] = [
      { title: 'CPU Model', field: 'model' },
      { title: 'CPU Speed', field: 'speed' },
      { title: 'Times Idle', field: 'times.idle' },
      { title: 'Times IRQ', field: 'times.irq' },
      { title: 'Times Nice', field: 'times.nice' },
      { title: 'Times Sys', field: 'times.sys' },
      { title: 'Times User', field: 'times.user' },
    ];

    export const sysInfoMainDataColumns: TableColumn[] = [
      { title: 'Hostname', field: 'hostname', highlight: true },
      { title: 'OS', field: 'operatingSystem', width: '10%' },
      { title: 'Platform', field: 'platform', width: '10%' },
      { title: 'CPU Model', field: 'cpuModel', width: '10%' },
      { title: 'CPU Speed', field: 'cpuSpeed', width: '10%' },
      { title: 'Total Memory', field: 'totalMem', width: '20%' },
      { title: 'Free Memory', field: 'freeMem', width: '10%' },
      { title: 'Release', field: 'release', width: '10%' },
      { title: 'Uptime', field: 'uptime', width: '10%' },
    ];

    type CpuTimeData = {
      idle: number;
      irq: number;
      nice: number;
      sys: number;
      user: number;
    };

    type CpuData = {
      model: string;
      speed: number;
      times: CpuTimeData;
    };

    type SysInfoMainData = {
      cpuModel: string;
      cpuSpeed: number;
      freeMem: number;
      hostname: string;
      loadavg: Array<number>;
      operatingSystem: string;
      platform: string;
      release: string;
      totalMem: number;
      uptime: number;
    };

    export type SysInfoData = {
      cpus: Array<CpuData>;
      data: SysInfoMainData;
      mainDataAsArray: SysInfoMainData[];
    };
  • Create a new file named plugins/system-info/src/components/SystemInfoPage/SystemInfoPage.tsx with the following content:

    plugins/system-info/src/components/SystemInfoPage/SystemInfoPage.tsx
    import React from 'react';
    import useAsync from 'react-use/lib/useAsync';

    import { Table } from '@backstage/core-components';
    import { configApiRef, useApi } from '@backstage/core-plugin-api';

    import { Box, Grid, Typography } from '@material-ui/core';

    import { SysInfoData, sysInfoCpuColumns, sysInfoMainDataColumns } from './types';

    export const SystemInfoPage = () => {
      const config = useApi(configApiRef);
      const SYS_INFO_BACKEND_URL = 'backend.baseUrl';

      const { loading: isSysInfoLoading, value: sysInfoData } =
        useAsync(async (): Promise<SysInfoData> => {
          const backendUrl = config.getString(SYS_INFO_BACKEND_URL);
          const backendApiEndPoint = `${backendUrl}/api/sys-info/system-info`;
          const systemInfoData = await fetch(backendApiEndPoint)
            .then((res) => (res.ok ? res : Promise.reject(res)))
            .then((res) => res.json());

          // To display the main data in a table, prepare the array to contain the only data we have
          systemInfoData.mainDataAsArray = [];
          systemInfoData.mainDataAsArray[0] = systemInfoData.data;
          systemInfoData.mainDataAsArray[0].cpuModel = systemInfoData.cpus[0].model;
          systemInfoData.mainDataAsArray[0].cpuSpeed = systemInfoData.cpus[0].speed;

          return systemInfoData;
        }, []);

      return (
        <>
          <Grid style={{ marginTop: '1rem' }} container spacing={2}>
            <Grid item xs={10}>
              <Table
                title="System Info Details"
                columns={sysInfoMainDataColumns}
                isLoading={isSysInfoLoading}
                data={sysInfoData?.mainDataAsArray || []}
                options={{
                  padding: 'dense',
                  pageSize: 1,
                  emptyRowsWhenPaging: false,
                  search: false,
                }}
                emptyContent={
                  <Box style={{ textAlign: 'center', padding: '15px' }}>
                    <Typography variant="body1">Backend data NOT found</Typography>
                  </Box>
                }
              />
            </Grid>

            <Grid item xs={10}>
              <Table
                title="System Info Details - CPUs"
                columns={sysInfoCpuColumns}
                isLoading={isSysInfoLoading}
                data={sysInfoData?.cpus || []}
                options={{
                  padding: 'dense',
                  pageSize: 10,
                  emptyRowsWhenPaging: false,
                  search: false,
                }}
                emptyContent={
                  <Box style={{ textAlign: 'center', padding: '15px' }}>
                    <Typography variant="body1">Backend data NOT found</Typography>
                  </Box>
                }
              />
            </Grid>
          </Grid>
        </>
      );
    };
  • Create a new file named plugins/system-info/src/components/SystemInfoPage/index.ts with the following contents:

    plugins/system-info/src/components/SystemInfoPage/index.ts
    export { SystemInfoPage } from './SystemInfoPage';
  • Modify plugins/system-info/src/plugin.ts by replacing the occurrences of ExampleComponent with SystemInfoPage:

    plugins/system-info/src/plugin.ts
    import { createPlugin, createRoutableExtension } from '@backstage/core-plugin-api';

    import { rootRouteRef } from './routes';

    export const systemInfoPlugin = createPlugin({
      id: 'system-info',
      routes: {
        root: rootRouteRef,
      },
    });

    export const SystemInfoPage = systemInfoPlugin.provide(
      createRoutableExtension({
        name: 'SystemInfoPage',
        component: () => import('./components/SystemInfoPage').then((m) => m.SystemInfoPage),
        mountPoint: rootRouteRef,
      }),
    );
    • Before the change: import('./components/ExampleComponent').then(m => m.ExampleComponent),
    • After the change: import('./components/SystemInfoPage').then(m => m.SystemInfoPage),

Verifying the integration between the frontend plugin and backend plugin

Now that the plugin is all set, start the app if it is not already started. In case you see any errors, please restart the app with yarn start-dev in the root directory of the backstage-showcase repository, or by following these steps. Once the app starts up, navigate to http://localhost:3000/system-info. This page should provide the System Info Details as shown below:

System Info Details


Analyzing the code

Now that the frontend plugin is integrated with the backend plugin and can fetch the system information from the backend and display it in the UI, let us go over the code that makes it all possible.

The following files were created (or modified) to achieve this task:

  • plugins/system-info/src/components/SystemInfoPage/types.ts: This file contains the following types that either handle the JSON response coming from the backend plugin or are used as table columns when displaying data:
    • CpuTimeData: Type to contain times attribute of CPU data from the backend plugin JSON response
    • CpuData: Type to contain CPU data from the backend plugin JSON response
    • SysInfoMainData: Type to contain data element from the backend plugin JSON response
    • SysInfoData: Type to contain SysInfoMainData and list (Array) of CpuData data
    • sysInfoCpuColumns: This is a list of fields that are of type TableColumn and are used to display the header for CPU columns.
    • sysInfoMainDataColumns: This is a list of fields that are of type TableColumn and are used to display the header for main data columns.
      • The field property for each element in the list maps to the field that is used to display data from the object containing data for the table.
  • plugins/system-info/src/components/SystemInfoPage/SystemInfoPage.tsx: Main file that invokes the backend API and parses and displays the data in the UI:
    • Gets the backend baseUrl (backend.baseUrl) from the config. This property is automatically configured and is available to the frontend plugin with the use of configApiRef
      plugins/system-info/src/components/SystemInfoPage/SystemInfoPage.tsx
          const config = useApi(configApiRef);
      ...
      const backendUrl = config.getString(SYS_INFO_BACKEND_URL);
    • Invokes the backend plugin API (${backendUrl}/api/sys-info/system-info) and extracts the JSON from the response as SysInfoData
      plugins/system-info/src/components/SystemInfoPage/SystemInfoPage.tsx
      const backendUrl = config.getString(SYS_INFO_BACKEND_URL);
      const backendApiEndPoint = `${backendUrl}/api/sys-info/system-info`;
      const systemInfoData = await fetch(backendApiEndPoint)
      .then((res) => (res.ok ? res : Promise.reject(res)))
      .then((res) => res.json());
    • To display the system information data, we are using the following two table components:
      • The first table uses sysInfoMainDataColumns for the columns and sysInfoData?.mainDataAsArray for the main data (mainDataAsArray is set after fetching the data from the backend API).
        plugins/system-info/src/components/SystemInfoPage/SystemInfoPage.tsx
        systemInfoData.mainDataAsArray = [];
        systemInfoData.mainDataAsArray[0] = systemInfoData.data;
        systemInfoData.mainDataAsArray[0].cpuModel = systemInfoData.cpus[0].model;
        systemInfoData.mainDataAsArray[0].cpuSpeed = systemInfoData.cpus[0].speed;
        ...
        <Table
          title="System Info Details"
          columns={sysInfoMainDataColumns}
          isLoading={isSysInfoLoading}
          data={sysInfoData?.mainDataAsArray || []}
        />
      • The second table uses sysInfoCpuColumns for the columns and sysInfoData?.cpus for the CPU data.
        plugins/system-info/src/components/SystemInfoPage/SystemInfoPage.tsx
        <Table
          title="System Info Details - CPUs"
          columns={sysInfoCpuColumns}
          isLoading={isSysInfoLoading}
          data={sysInfoData?.cpus || []}
        />
    • Exports the SystemInfoPage component, which contains the Grid containing the two table components defined above:
      plugins/system-info/src/components/SystemInfoPage/SystemInfoPage.tsx
          export const SystemInfoPage = () => {
      ...
      }
  • plugins/system-info/src/components/SystemInfoPage/index.ts: exports the SystemInfoPage component.
  • plugins/system-info/src/plugin.ts: sets the component used by systemInfoPlugin to the SystemInfoPage component
    plugins/system-info/src/plugin.ts
    export const SystemInfoPage = systemInfoPlugin.provide(
      createRoutableExtension({
        name: 'SystemInfoPage',
        component: () => import('./components/SystemInfoPage').then((m) => m.SystemInfoPage),
        mountPoint: rootRouteRef,
      }),
    );
    • This SystemInfoPage is used when we hit the route /system-info (set in packages/app/src/App.tsx)
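Stripped of the React surroundings, the response handling above is a small, pure transformation. Here is a self-contained sketch with simplified types; `toMainDataRows` is our own name for illustration, not part of the plugin:

```typescript
// Simplified shapes of the backend plugin's JSON response.
type Cpu = { model: string; speed: number };
type SysInfoResponse = {
  data: Record<string, string | number>;
  cpus: Cpu[];
};
type MainRow = Record<string, string | number | undefined>;

// Build the one-row array the main table consumes, copying the first
// CPU's model and speed into the row, as SystemInfoPage.tsx does.
function toMainDataRows(res: SysInfoResponse): MainRow[] {
  return [
    {
      ...res.data,
      cpuModel: res.cpus[0]?.model,
      cpuSpeed: res.cpus[0]?.speed,
    },
  ];
}

const rows = toMainDataRows({
  data: { hostname: 'demo-host', platform: 'linux' },
  cpus: [{ model: 'Fake CPU', speed: 2400 }],
});
console.log(rows[0].hostname, rows[0].cpuModel); // demo-host Fake CPU
```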

Conclusion

We have now created a new Backstage frontend plugin and integrated it with the backend plugin to display the data coming from the backend. We used types to manage the JSON response and to define the table columns. This just scratches the surface of what can be done in a frontend plugin: since the backend plugin exposes its data through a REST API, you can return pretty much whatever you need, then process and display the result in your frontend plugin.

Hope you enjoyed this blog!


Creating your first Backstage Backend plugin

· 10 min read
Ramy ElEssawy
Developer Hub Advocate

Introduction

Plugins are the heart and soul of Backstage. They are the building blocks that make up the functionality of a Backstage app. Whether you want to integrate with your favorite tools or create new features, plugins allow you to create a personalized experience in Backstage.

Backend plugins are crucial in providing the necessary data and performing backend operations. They typically expose a REST API that frontend plugins can call, and they might also interact with databases, external APIs, or other resources. They are essential for enabling the full functionality of a Backstage app, allowing frontend plugins to be more than just static pages: they can be dynamic, interactive, and integrated with your existing infrastructure.

In this blog post, we'll guide you through creating a backend plugin for Backstage. We'll lay the groundwork for a Local System Information plugin, creating a backend plugin that fetches information about the local system, like CPU usage, memory usage, and disk space. In the second part of this blog, we'll build upon this foundation and create the frontend plugin to interact with it. Let's get started!

Understanding Backstage Backend

Before we dive into creating a backend plugin, it's important to understand the structure of the Backstage backend and the technologies it uses.

Backstage Backend Structure

The Backstage backend is a standalone, separate component from the frontend. It's designed to be a monolithic server that hosts all the backend plugins. Each plugin is its own isolated piece of code, but they all run in the same server process. This allows plugins to share common features and utilities while still keeping their codebases separate and focused.

The backend is structured around a core backend package, @backstage/backend-common, which provides common utilities and interfaces. Backend plugins are then added to the backend by creating a new instance of the plugin class and adding it to the backend's router.

Backend Router

The backend router is responsible for routing incoming HTTP requests to the appropriate backend plugin. Each backend plugin is essentially an Express router that gets mounted onto the main Express app.

When you create a new backend plugin, you define a new Express router. This router defines the API endpoints for your plugin, and it's where you implement the logic for handling incoming requests.
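To make the routing idea concrete, here is a toy model of prefix-based dispatch. This is a simplified sketch of the concept, not Backstage's actual implementation, and all names in it are ours:

```typescript
// Each plugin registers a handler under an /api/<pluginId> prefix;
// the backend router dispatches incoming paths by that prefix.
type Handler = (subPath: string) => string;

function dispatch(
  plugins: Map<string, Handler>,
  path: string,
): string | undefined {
  const match = path.match(/^\/api\/([^/]+)(\/.*)?$/);
  if (!match) return undefined; // not an API path
  const [, pluginId, subPath] = match;
  return plugins.get(pluginId)?.(subPath ?? '/');
}

const plugins = new Map<string, Handler>([
  ['sys-info', (p) => (p === '/system-info' ? 'system info payload' : 'not found')],
]);

console.log(dispatch(plugins, '/api/sys-info/system-info')); // system info payload
console.log(dispatch(plugins, '/api/unknown/x')); // undefined
```

In the real backend, each entry in the map would be an Express router rather than a plain function, but the prefix-based dispatch is the same.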

Role of NodeJS and Express in the Backstage Backend

The Backstage backend is built using Node.js, a JavaScript runtime that allows you to run JavaScript on the server. Node.js is event-driven and non-blocking, making it well-suited to serve inherently asynchronous web requests.

Express is a minimal and flexible Node.js web application framework that provides robust features for web and mobile applications. It's used as the web server framework in the Backstage backend.

In the context of our Local System Information plugin, the backend plugin will be an Express router that exposes a REST API for fetching system information. The frontend plugin will then call this API to fetch the needed data.

Setting Up Your Environment

We need to set up our development environment before we can start creating our backend plugin. Here's a step-by-step guide on how to do this:

Step 1: Install Node.js and Yarn

The first step is to install Node.js and Yarn. Node.js is the JavaScript runtime the Backstage backend runs on, and Yarn is the package manager Backstage uses.

You can download Node.js from the official website. Backstage requires Node.js version 14 or later. After installing Node.js, you can install Yarn by running the following command in your terminal.

npm install -g yarn

Step 2: Clone the Janus-Showcase Repository

In this tutorial, we will use Red Hat’s Project Janus to explain the steps for building our backend plugin. So the next step is to clone the Janus-Showcase repository. To clone the repository, run the following command in your terminal.

git clone https://github.com/janus-idp/backstage-showcase

Then, navigate into the cloned repository by running

cd backstage-showcase

Step 3: Start the Janus IDP Backend

Once you've cloned the repository, you need to install its dependencies. You can do this by running the following command in your terminal.

yarn install

At the time of writing, Janus IDP requires Yarn version 1; check your current version using the command

yarn --version

If needed, set the Yarn version using this command

yarn set version 1.22.19

Finally, you can start the Janus-Showcase application. To do this, navigate into the folder you cloned (backstage-showcase), then run the start command in your terminal.

yarn start

This will start the Janus-Showcase backend running at http://localhost:7007/

Janus IDP - Home

With your environment set up, you're ready to start creating your backend plugin. In the next section, we'll guide you through creating a new backend plugin.

Creating a Backend Plugin

Now that we've set up our development environment, we can start creating our backend plugin. In this section, we'll guide you through creating a new backend plugin using the Backstage CLI and explain the structure of the newly created plugin.

Creating a Backend Plugin Using the Backstage CLI

The Backstage CLI provides a command for creating a new backend plugin. To create a new backend plugin, navigate to the directory backstage-showcase, and in your terminal, run the following command:

yarn new --select backend-plugin

You'll be asked to supply a name for the plugin. This is an identifier that will be part of the NPM package name; you might choose an ID like local-system-info.

Creating a new Plugin

Structure of the Newly Created Plugin

When you create a new backend plugin using the yarn new --select backend-plugin command, the Backstage CLI does a few things behind the scenes to wire up the plugin to your Backstage app.

The CLI adds a new entry to the dependencies section of the package.json file in the packages/backend directory. This entry points to your new plugin, effectively installing it as a dependency of your Backstage app.

The CLI also creates a new directory for your plugin under the plugins/ directory. In our example, the CLI creates the directory backstage-showcase/plugins/local-system-info-backend containing the code for your plugin. Here's a brief overview of the structure of a newly created backend plugin:

  • src/: This directory contains the source code for your plugin. It includes an index.ts file, which is the entry point for your plugin and exports a function that creates an instance of your plugin's router. This function is called by the Backstage backend when it starts up.

  • src/service/: This directory contains the router.ts file, which defines the router for your plugin, and the router.test.ts file, which contains a sample test for your plugin's router.

  • package.json: This file defines the metadata and dependencies for your plugin. It includes the name of your plugin, its version, dependencies, and various configuration options. The main field points to the src/index.ts file, which is the entry point for your plugin.

In the context of our Local System Information plugin, the router.ts file will define a REST API for fetching system information, and the index.ts file will export a function that creates an instance of this router. In the next section, we'll start implementing our backend plugin by defining its API.

Implementing the Plugin API

Now that we have our plugin structure in place, it's time to implement the API for our plugin. This is where we define the endpoints our frontend plugin will call to fetch the system information. In this section, we'll guide you through defining these endpoints and implementing the logic for fetching the system information.

Defining API Endpoints in the router.ts File

The first step in implementing our plugin API is to define the endpoints in our router.ts file. This file is where we define an Express router for our plugin, and it's where we define the routes for our API.

Here's an example of what our router.ts file might look like:

import express from 'express';
import os from 'os';

export function createRouter(): express.Router {
  const router = express.Router();

  router.get('/system-info', (req, res) => {
    const systemInfo = {
      hostname: os.hostname(),
      operatingSystem: os.type(),
      platform: os.platform(),
      release: os.release(),
      uptime: os.uptime(),
      loadavg: os.loadavg(),
      totalMem: os.totalmem(),
      freeMem: os.freemem(),
      cpus: os.cpus(),
    };

    res.send(systemInfo);
  });

  return router;
}

In this example, we define a single GET endpoint at /system-info. When this endpoint is accessed, it fetches the system information using Node.js's built-in os module. The os module provides several methods for fetching system information, such as os.hostname() for fetching the hostname, os.type() for fetching the operating system type, and os.totalmem() for fetching the total memory.

We call these methods to fetch the system information, and then we send this data in the response using res.send(systemInfo).
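The raw values returned by the os module are machine-oriented: memory in bytes, uptime in seconds. If you want friendlier output, a small formatting layer can help. The helpers below are a hypothetical addition (not part of the generated scaffold) sketching one way to do it:

```typescript
import { hostname, totalmem, freemem, uptime } from 'os';

// Hypothetical helpers -- not part of the generated plugin scaffold.
// Convert a byte count into a human-readable GiB string.
export function bytesToGiB(bytes: number): string {
  return (bytes / 1024 ** 3).toFixed(2) + ' GiB';
}

// Convert an uptime in seconds into hours.
export function secondsToHours(seconds: number): string {
  return (seconds / 3600).toFixed(1) + ' h';
}

// Build a reader-friendly variant of the /system-info payload.
export function formatSystemInfo() {
  return {
    hostname: hostname(),
    totalMem: bytesToGiB(totalmem()),
    freeMem: bytesToGiB(freemem()),
    uptime: secondsToHours(uptime()),
  };
}
```

You could call formatSystemInfo() inside the route handler instead of sending the raw values; whether to format server-side or in the frontend is a design choice.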

With this, we have a fully functional backend plugin that fetches and provides system information. In the next section, we'll guide you through testing your plugin and ensuring everything works as expected.

Testing Your Backend Plugin

After implementing the API for your plugin, it's important to test it to ensure it works as expected. In this section, we'll guide you through the process of testing your backend plugin.

Standalone mode - Light Testing

A backend plugin can be started in standalone mode, which lets you run a quick first-light test of your service:

cd plugins/local-system-info-backend/
yarn start

After a few seconds, the plugin will be built successfully, and the backend will listen on port 7007. In a different terminal window, now run the following command:

curl localhost:7007/local-system-info/system-info | jq '.'

Curl plugin endpoint

Testing using Jest functions

Backstage uses Jest as its testing framework, so you can use any Jest functions in your tests. Replace the content of the router.test.ts file in the src/service directory with the following:

import request from 'supertest';
import express from 'express';
import { createRouter } from './router';

describe('router', () => {
  const app = express();
  app.use(createRouter());

  it('should return system info', async () => {
    const response = await request(app).get('/system-info');

    expect(response.status).toBe(200);
    expect(response.body).toHaveProperty('hostname');
    expect(response.body).toHaveProperty('operatingSystem');
    expect(response.body).toHaveProperty('platform');
    // ... add more assertions as needed
  });
});

In this example, we're using the supertest library to send a GET request to our /system-info endpoint, and then we're using Jest's expect function to assert that the response has a status of 200 and includes the expected properties.

Now you can run your test using the following command from the plugins/local-system-info-backend directory:

yarn test

This will run Jest, which will find and run all tests in your router.test.ts file.

Jest Test Result

Testing your plugin is an important step in the development process. It helps ensure that your plugin works as expected and can help you catch and fix any issues before they become problems in a production environment.

Conclusion

Congratulations! You've just created your first Backstage backend plugin. You've learned about the structure of the Backstage backend. You've also walked through creating a backend plugin using the Backstage CLI and implemented a REST API for your plugin.

In this tutorial, we've created a Local System Information plugin that fetches and provides system information. This is a simple yet powerful example of what you can achieve with Backstage backend plugins.

Remember, the power of Backstage comes from its extensibility. With plugins, you can integrate your existing tools and services or create entirely new features tailored to your needs. Don't be afraid to experiment and create your own plugins!

Exploring the Backstage Topology Plugin

· 5 min read
Divyanshi Gupta
Plugin Contributor
As of July 1, 2024, all Janus IDP blogs have been archived and will no longer be updated. Some information in these posts may be outdated and may not work as described.

The Janus community recently released the Backstage Topology plugin which provides an intuitive way to visualize and understand the workloads running on your Kubernetes cluster. Currently, the plugin is read-only, allowing you to view and analyze the workload distribution across clusters without making any modifications.

The intuitive graphical representation provides an at-a-glance understanding of the workload distribution across clusters, enabling you to spot issues or imbalances quickly. Whether it's Deployments, Jobs, Daemonsets, Statefulsets, CronJobs, or Pods, this plugin lets you gain insights into the components powering your applications or services.

Topology plugin intro

We introduced the plugin in a previous blog post, so in this post we will explore the unique features, installation, and usage of the Topology plugin in Backstage.

The Backstage Topology Plugin provides a lot of powerful features going beyond the graphical visualization of Kubernetes workloads. Let's explore the various features that make this plugin outstanding:

Filter workloads by cluster

The plugin empowers you to visualize the workloads of a specific cluster when your workloads are spread across multiple clusters. This functionality allows you to focus on a particular environment or segment of your application infrastructure. This targeted filtering capability makes issue troubleshooting easier and can help with the optimization of resources.

Filter workloads by cluster

Minikube topology

Group workloads

Organizing workloads into logical sets is made easy with the Topology plugin. By grouping related workloads, you can manage them collectively, enabling efficient monitoring, scaling, and resource allocation.

To display workloads in a visual group, add the following label to your workloads:

labels:
  app.kubernetes.io/part-of: <GROUP_NAME>
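For example, the label goes on the metadata of each workload you want grouped. A hypothetical Deployment fragment (the names "my-app" and "frontend" are illustrative):

```yaml
# Hypothetical Deployment fragment; names and image are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app.kubernetes.io/part-of: my-app # workloads sharing this value are grouped together
spec:
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: quay.io/example/frontend:latest
```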

Group workloads

Establish node connections and relationships

The plugin facilitates the establishment of connections between nodes to represent their relationships. This feature enhances your understanding of the dependencies and interactions between different workloads, fostering a comprehensive view of your infrastructure.

To display workloads with visual connectors, add the following annotation to your target workloads:

annotations:
  app.openshift.io/connects-to: '[{"apiVersion": <RESOURCE_APIVERSION>,"kind": <RESOURCE_KIND>,"name": <RESOURCE_NAME>}]'
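As a concrete (hypothetical) example, a frontend Deployment that talks to a PostgreSQL StatefulSet named "postgres" could declare:

```yaml
# Hypothetical fragment; the target resource names are illustrative.
metadata:
  annotations:
    app.openshift.io/connects-to: '[{"apiVersion": "apps/v1","kind": "StatefulSet","name": "postgres"}]'
```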

Establish node connections and relationships

Access the application in a single click

The Topology Plugin simplifies access to the running application associated with your workloads. With a single click, you can easily navigate to the relevant application, saving time and effort.

Access the application in a single click

Application

Get workload insights using the side panel

The plugin also provides a side panel that opens up on selecting a workload. The side panel shows the details of the workload and its connected resources. This level of granularity helps troubleshoot issues, find bottlenecks and fine-tune your workload configurations.

Sidepanel Details tab

Sidepanel Resources tab

Installation and configuration

To start leveraging the capabilities of the Backstage Topology Plugin, follow these steps for installation and configuration:

  1. Install the prerequisite Kubernetes plugin, including @backstage/plugin-kubernetes and @backstage/plugin-kubernetes-backend, by following the provided installation and configuration guides.

  2. Configure the Kubernetes plugin to connect to your cluster using a ServiceAccount. Ensure that the ServiceAccount accessing the cluster has the necessary ClusterRole granted. If you have the Backstage Kubernetes plugin configured, the ClusterRole is likely already granted.

  3. Annotate the entity's catalog-info.yaml file to identify whether an entity contains Kubernetes resources:

    annotations:
      backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>
  4. Optionally, add the backstage.io/kubernetes-namespace annotation to identify Kubernetes resources using the defined namespace:

    annotations:
      backstage.io/kubernetes-namespace: <RESOURCE_NS>
  5. Add a custom label selector to help Backstage find the Kubernetes resources. This label selector takes precedence over the ID annotations:

    annotations:
      backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end'
  6. Label the resources with the following label to allow the Kubernetes plugin to retrieve the Kubernetes resources from the requested entity:

    labels:
      backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>

    Note: When using the label selector, ensure that the mentioned labels are present on the resource.

  7. Install the Topology plugin using the following command:

    yarn workspace app add @janus-idp/backstage-plugin-topology
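Putting steps 3 through 6 together, an annotated catalog-info.yaml might look like the following sketch (the entity name, namespace, labels, and owner are illustrative):

```yaml
# Hypothetical catalog-info.yaml combining the annotations from the steps above.
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: my-app
  annotations:
    backstage.io/kubernetes-id: my-app
    backstage.io/kubernetes-namespace: my-namespace
    # Optional; takes precedence over the ID annotation.
    backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end'
spec:
  type: service
  owner: my-team
  lifecycle: production
```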

Enabling Topology plugin in Backstage app

Now that the Topology plugin is installed and configured, enable it in the UI by adding the following code to the packages/app/src/components/catalog/EntityPage.tsx file:

import { TopologyPage } from '@janus-idp/backstage-plugin-topology';

const serviceEntityPage = (
  <EntityPageLayout>
    {/* ... */}
    <EntityLayout.Route path="/topology" title="Topology">
      <TopologyPage />
    </EntityLayout.Route>
  </EntityPageLayout>
);

Using the Topology plugin in Backstage

Now that the plugin is fully set up, once you open your Backstage application and select an entity from the Catalog page, you should see a Topology tab on the entity page. Go to the Topology tab and you will be presented with a graphical view of your service’s workloads.

Next Steps

The Backstage Topology Plugin is a game-changer for managing Kubernetes workloads, offering a range of powerful features designed to simplify visualization, organization, and monitoring. We are also working on adding more cool new features to this plugin so be sure to keep an eye out for the latest updates.

To contribute to this plugin, report issues, seek guidance or provide feedback, please see our GitHub repository https://github.com/janus-idp/backstage-plugins/tree/main/plugins/topology.

Recommended Approach to Configuring TechDocs for Backstage on OpenShift

· 6 min read
Jason Froehlich
Developer Hub Maverick

Backstage includes a built-in techdocs builder that can be used to generate static HTML documentation from your codebase. However, the default basic setup of the "local" builder is not intended for production: it requires a running Backstage instance with the techdocs plugin installed on your local machine, and it relies on local storage.

In this blog post, we will show you the recommended approach to streamlining the configuration of TechDocs for Backstage on OpenShift. We will show you how to set up a fully automated process for building and publishing techdocs using GitHub Actions and the OpenShift Data Foundation operator. This will allow you to create an ObjectBucketClaim that mimics an AWS S3 bucket, which can then be used to store and serve your techdocs.

OpenShift Data Foundation (ODF) Installation

The TechDocs publisher stores generated files in cloud storage (Google GCS, AWS S3, Azure Blob Storage) or local storage. OpenShift Data Foundation (ODF) provides a custom resource called the ObjectBucketClaim, which can be used to request an S3-compatible bucket backend. To use this feature, you must install the ODF operator.

We can install this using the OperatorHub from the OpenShift Web Console:

Operator Find

  1. Navigate to the Operators -> OperatorHub menu
  2. Type ODF in the Filter by keyword... box
  3. Select the OpenShift Data Foundation operator and then select Install

Keep the default settings as shown below:

Operator Install

Click Install.

note

The operator installation can take several minutes to complete.

Once complete, click the Create StorageSystem button.

Operator StorageSystem

note

If the Create StorageSystem screen does not look like the screenshot below, wait a few minutes for the operator pods to start running. You can check the status of the pods in the openshift-storage namespace and wait for them to change to a Running state.

Operator Create StorageSystem

Take all the default settings. When you reach the Capacity and nodes section, make sure to select at least 3 nodes, preferably in 3 different zones.

ObjectBucketClaim Creation

Once the StorageSystem is complete, use the following YAML to create an ObjectBucketClaim in the same namespace where Janus is installed:

obc.yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: backstage-bucket-claim
spec:
  generateBucketName: backstage-bucket-
  storageClassName: openshift-storage.noobaa.io
note

It may take a few minutes after the StorageSystem is complete for the OBC to be fully created. If the status of the OBC is Lost, just wait a few minutes.

Once you have completed the steps, your ObjectBucketClaim will be ready for use. You can confirm this by checking the Claim Data section as seen below from the OpenShift Web Console.

ObjectBucketClaim Data

Configuring Backstage

Once the ODF Operator is installed and an ObjectBucketClaim is created, Backstage can be configured to use the ObjectBucketClaim as the TechDocs publisher.

Deployment

Update the Backstage Deployment to include the following:

envFrom:
  - configMapRef:
      name: backstage-bucket-claim
  - secretRef:
      name: backstage-bucket-claim
env:
  - name: BUCKET_URL
    value: 'VALUE_OF_S3_ROUTE_LOCATION'
  - name: AWS_REGION
    valueFrom:
      configMapKeyRef:
        name: backstage-bucket-claim
        key: BUCKET_REGION

Both a Secret and a ConfigMap are created with the same name and include information that Backstage needs to connect to the ObjectBucketClaim in order to read the TechDoc files.
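For reference, the objects generated by the ObjectBucketClaim controller look roughly like the following sketch. The key names follow ODF conventions, but the values here are purely illustrative:

```yaml
# Sketch of the generated ConfigMap and Secret; all values are illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: backstage-bucket-claim
data:
  BUCKET_HOST: s3.openshift-storage.svc
  BUCKET_NAME: backstage-bucket-27af1d1e
  BUCKET_PORT: '443'
  BUCKET_REGION: ''
---
apiVersion: v1
kind: Secret
metadata:
  name: backstage-bucket-claim
data:
  AWS_ACCESS_KEY_ID: <base64>
  AWS_SECRET_ACCESS_KEY: <base64>
```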

The VALUE_OF_S3_ROUTE_LOCATION placeholder above will need to be replaced. To do this, run the following command and replace the placeholder with its output:

oc get route s3 -n openshift-storage -o jsonpath='https://{.spec.host}'

The AWS Region is empty in this case because we are using OpenShift Data Foundation (ODF). Backstage will fail to start if we set this value to an empty string. By default, Backstage will look for an environment variable named AWS_REGION if no region is specified in the app-config.yaml file. This is why we are not setting this value in the app-config.yaml and setting the AWS_REGION from the ConfigMap instead.

Application Configuration

Update the app-config.yaml file to look like the following:

app-config.yaml
techdocs:
  builder: 'external'
  generator:
    runIn: 'local'
  publisher:
    type: 'awsS3'
    awsS3:
      bucketName: ${BUCKET_NAME}
      endpoint: ${BUCKET_URL}
      s3ForcePathStyle: true
      credentials:
        accessKeyId: ${AWS_ACCESS_KEY_ID}
        secretAccessKey: ${AWS_SECRET_ACCESS_KEY}
note

If you are using the Janus Backstage Showcase image, you will also need to add the following to the app-config.yaml file:

app-config.yaml
enabled:
  techdocs: true

TechDocs Builder

To generate the static files that will be published to our ObjectBucketClaim, we will need to set up a builder that utilizes the techdocs-cli. We can do this by creating a GitHub Action similar to the one found below. As you can see, this will run any time the mkdocs.yaml file or any file in the docs folder is modified. This regenerates the static content so that users always have access to the latest documentation.

techdocs.yaml
name: Publish TechDocs Site

on:
  push:
    branches:
      - main
    paths:
      - 'docs/**'
      - 'mkdocs.yaml'

jobs:
  publish-techdocs-site:
    name: Publish techdocs site
    runs-on: ubuntu-latest

    env:
      TECHDOCS_S3_BUCKET_NAME: ${{ secrets.BUCKET_NAME }}
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_REGION: ${{ secrets.AWS_REGION }}
      AWS_ENDPOINT: ${{ secrets.AWS_ENDPOINT }}
      ENTITY_NAMESPACE: 'default'
      ENTITY_KIND: 'Component'
      ENTITY_NAME: 'BACKSTAGE_COMPONENT_NAME'

    steps:
      - name: Checkout code
        uses: actions/checkout@v3
        with:
          fetch-depth: 0

      - name: Setup Node
        uses: actions/setup-node@v3
        with:
          node-version-file: '.nvmrc'
          cache: 'yarn'

      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.9'

      - name: Install techdocs-cli
        run: sudo npm install -g @techdocs/cli

      - name: Install mkdocs and mkdocs plugins
        run: python -m pip install mkdocs-techdocs-core==1.*

      - name: Generate docs site
        run: techdocs-cli generate --no-docker --verbose

      - name: Publish docs site
        run: techdocs-cli publish --publisher-type awsS3 --storage-name $TECHDOCS_S3_BUCKET_NAME --awsEndpoint $AWS_ENDPOINT --awsS3ForcePathStyle --entity $ENTITY_NAMESPACE/$ENTITY_KIND/$ENTITY_NAME
note

Replace BACKSTAGE_COMPONENT_NAME with the name of the Backstage component.

Although the default basic setup of the "local" builder in Backstage is a valuable starting point for development and testing, it is crucial to transition to an external builder and utilize a Cloud Storage provider to create a resilient and scalable solution for production environments. By doing so, we can guarantee optimal performance, reliability, and streamlined management of documentation within Backstage.

Resources

Topology plugin coming soon to a Backstage near you!

· 2 min read
Divyanshi Gupta
Plugin Contributor

The Janus community is thrilled to share details about our Backstage Topology Plugin. This powerful new tool simplifies the process of visualizing k8s workloads of your Backstage services. With this plugin, developers can get a clear and concise overview of their application's structure and workload status. This eliminates the stress and cognitive overload that often comes with working with Kubernetes.

Topology plugin coming soon!

With the Backstage Topology Plugin, you will be able to see a graphical visualization of your Backstage service's workloads and their pod statuses across clusters in real time, with the ability to filter workloads by a specific cluster.

So, what makes the Backstage Topology Plugin so special? For starters, beyond the graphical visualization of k8s workloads, it offers a range of powerful features: one-click access to the running application, the ability to group workloads into logical sets, the ability to connect nodes to represent their relationships, and a way to inspect the details of a workload and its related resources.

And best of all, the Backstage Topology Plugin is incredibly easy to use. Its intuitive interface and straightforward design mean that you won't have to waste time figuring out how to use it or struggling with complex settings. Instead, you can focus on getting your work done quickly and efficiently.

Next steps

Be on the lookout for a more in-depth overview of the Backstage Topology Plugin soon!

Learn more about other Backstage plugins in the Janus community here.

Janus Backstage Images Now Available on quay.io

· 4 min read
Andrew Block
Maintainer of Janus Helm Charts & Plugin Contributor

The Janus project produces container images to support many of the active initiatives within the community. These images, built on top of Red Hat certified content, help provide a stable and secure base suitable for most environments. Previously, these images were only available within the GitHub Container Registry service associated with the janus-idp GitHub organization. The Janus community is happy to announce that all images produced by the Janus project are now available on quay.io within the janus-idp organization. quay.io is a hosted registry service for storing and building container images and distributing other OCI artifacts. With this new offering, community members and consumers can take full advantage of the benefits provided by sourcing container content from quay.io.

The Significance of quay.io Integration

You might be wondering why serving content on quay.io is noteworthy. Let's expand on several of the reasons:

Security First

Security is a top-of-mind concern these days, and steps should be taken to ensure that all phases of development and deployment follow a security-first mentality. As described previously, the Janus Project images use certified Red Hat container images, specifically those based on the Universal Base Image (UBI). These freely available images contain the same enterprise-grade packages and content found in Red Hat Enterprise Linux (RHEL), so security and lifecycle management come built in.

Another security feature provided out of the box when using quay.io as a container registry is image scanning. Every image published to quay.io undergoes a scan from Clair to determine whether any vulnerabilities are present within the image. Knowing whether an image contains any current security concerns is important for both producers and consumers. Producers need to be able to determine whether the content they are producing contains any vulnerabilities and mitigate them appropriately. Consumers, more importantly, seek the ability to understand whether the content they are leveraging includes any risks. This is crucial information to have at one's fingertips, as up to half of the content in some publicly hosted registries contains at least one critical vulnerability (Reference). With the Janus images hosted on quay.io, these benefits are now available.

Support Within Enterprise Environments

Backstage, along with the concepts promoted by Internal Developer Platforms, is seeing adoption within many types of environments, including those with enterprise concerns. While every organization is unique, there are some common traits that they share - one of which is leveraging content from trusted sources. Many of these organizations forbid accessing external resources and operate in a fully disconnected mode. For those that use externally sourced content, steps are typically put in place to enable and allow access to these assets.

OpenShift, Red Hat's Kubernetes distribution, serves platform container images from quay.io. Given that any approval needed to access external content from quay.io may already be in place, no additional steps would be needed. Otherwise, adding another namespace (quay.io/janus-idp, for example) as an allowed content source may be easier to get approved, since other namespaces within the same registry set an existing precedent.

Continued Investment of Quay as a Container Registry

Hosting assets within quay.io is another example of the Janus Project supporting the Quay ecosystem. Content stored in Quay (either the hosted quay.io or the self-managed Red Hat Quay product) can be visualized thanks to the Quay Backstage plugin, which provides many of the same data points, including security-related data, all available within the Backstage dashboard. A full overview of the Quay Backstage plugin and its features can be found in this article. The Quay Backstage plugin is just one of many plugins developed by the Janus community and can be found in the backstage-plugins repository within the janus-idp GitHub organization.

Simplifying the experience surrounding the use of an Internal Developer Platform is one of the core tenets of the Janus Project, and one way to stay true to this mission is making content more readily accessible and as feature rich as possible. By serving Janus Project related OCI assets within quay.io, project contributors, community members, and consumers can take advantage of this globally hosted service and all of the features it provides.

Exposing your 3scale APIs through the Backstage catalog

· 2 min read
Francisco Meneses
Plugin Contributor

Backstage has many features that come out of the box, one of which is the API catalog. The API catalog is responsible for displaying API entities, which are defined in YAML format and can be stored in a Git repository and used as a source to register API entities in the catalog.

But, what happens when you already have an API Manager like 3scale that handles your API definitions? To better integrate 3scale and Backstage, the Janus community developed a 3scale backend plugin that imports APIs from 3scale into the Backstage catalog as API entities.

Installation

With this plugin, your APIs from multiple 3scale tenants will be available as API entities in the Backstage catalog.

The first step is to install the backend plugin. Navigate to the root directory of your Backstage instance and run the following command to add the plugin.

yarn workspace backend add @janus-idp/backstage-plugin-3scale-backend

Configuration

The 3scale Backstage plugin allows configuration of one or many providers using the app-config.yaml configuration file. Use a threeScaleApiEntity marker to start configuring them:

app-config.yaml
catalog:
  providers:
    threeScaleApiEntity:
      dev:
        baseUrl: https://<TENANT>-admin.3scale.net
        accessToken: <ACCESS_TOKEN>
        schedule: # optional; same options as in TaskScheduleDefinition
          # supports cron, ISO duration, "human duration" as used in code
          frequency: { minutes: 1 }
          # supports ISO duration, "human duration" as used in code
          timeout: { minutes: 1 }

Add the 3scale entity provider to the catalog builder at packages/backend/src/plugins/catalog.ts. Once done, the catalog plugin should be able to load 3scale products as entities in the Backstage catalog:

packages/backend/src/plugins/catalog.ts
import { ThreeScaleApiEntityProvider } from '@janus-idp/backstage-plugin-3scale-backend';

// ...
const builder = await CatalogBuilder.create(env);

/* ... other processors and/or providers ... */
builder.addEntityProvider(
  ThreeScaleApiEntityProvider.fromConfig(env.config, {
    logger: env.logger,
    scheduler: env.scheduler,
  }),
);

const { processingEngine, router } = await builder.build();
// ...

Verify

Now your API entities will be available in the Backstage catalog.

API entities in the Backstage catalog

Next steps

To contribute to this plugin, report issues, or provide feedback, visit our GitHub repository.

Exploring Quay registry in Backstage

· 4 min read
Tom Coufal
Maintainer of Janus Helm Charts & Plugins

The Janus IDP family of Backstage plugins is expanding! Please welcome our new member - a frontend plugin that enriches application view in Backstage with insights from a Quay hosted registry.

Backstage models the software ecosystem as Backstage Catalog entities. Users compose small individual service components into a bigger picture. In order to truly understand and fully describe the individual building blocks, Backstage users construct views to capture different aspects of these components: from technical documentation, through dependencies and relations, to deployment state, CI and CD. Part of this picture is understanding what is actually being deployed into live environments. In many cases the deployment artifact is a container image, and users want to view all of the available and deployed container images. This new Quay plugin for Backstage brings in that capability.

Quay is an OCI-compatible registry that allows users to build, store and distribute container images and other OCI artifacts. It is available as a hosted service on quay.io as well as a self-hosted environment deployable to any OpenShift cluster through a Quay operator.

Installation and setup

With this plugin, viewing available container images in a particular repository is easy. The following guide will elaborate in detail on individual steps for enabling the integration and all currently supported settings and options.

First, it is necessary to install the plugin. Please add it to the frontend of your Backstage instance:

yarn workspace app add @janus-idp/backstage-plugin-quay

Connecting to Quay registry via Proxy

The plugin leverages the Backstage native proxy capabilities to query the Quay API; therefore, some configuration needs to be added to app-config.yaml. In order to connect the Backstage instance to the publicly hosted Quay.io environment, the following configuration can be used:

app-config.yaml
proxy:
  '/quay/api':
    target: 'https://quay.io'
    changeOrigin: true
    headers:
      X-Requested-With: 'XMLHttpRequest'

When accessing private Quay repositories, it may be necessary to extend this configuration with an Authorization header and Quay API token. This token can be obtained by creating a Quay Application using the steps outlined in this documentation. Once a token is obtained, the header can be set by extending the app-config.yaml setup above with an additional header:

app-config.yaml
proxy:
  '/quay/api':
    target: 'https://quay.io'
    changeOrigin: true
    headers:
      X-Requested-With: 'XMLHttpRequest'
      Authorization: 'Bearer ${QUAY_TOKEN}'

Be aware that the QUAY_TOKEN is an environment variable that has to be available to the Backstage instance at runtime.

Another popular option is to target a self-hosted Quay deployment. This can be achieved by simply changing the target property in the settings above to the location of the Quay instance. In addition, if the self-hosted Quay registry is deployed with a certificate that is not in the certificate chain of trust for the Backstage instance, the secure option has to be set to false.

app-config.yaml
proxy:
  '/quay/api':
    target: '<SELF_HOSTED_QUAY_ENDPOINT>'
    changeOrigin: true
    headers:
      X-Requested-With: 'XMLHttpRequest'
      Authorization: 'Bearer ${QUAY_TOKEN}'
    secure: [true|false]

More details on available Backstage proxy settings can be found in the upstream documentation.

This plugin conforms to the pattern used by many other Backstage plugins that use the Backstage proxy and provides a mechanism to change the default proxy path /quay/api via the following app-config.yaml settings, if needed:

app-config.yaml
quay:
  proxyPath: /custom/quay/path

Enabling Quay plugin widget in UI

Now that the plugin is configured to access the desired Quay registry, enable the UI by adding an additional view in the frontend application (the packages/app/src/components/catalog/EntityPage.tsx file in the bootstrap application):

packages/app/src/components/catalog/EntityPage.tsx
import { QuayPage, isQuayAvailable } from '@janus-idp/backstage-plugin-quay';

const serviceEntityPage = (
  <EntityPageLayout>
    {/* ... */}
    <EntityLayout.Route if={isQuayAvailable} path="/quay" title="Quay">
      <QuayPage />
    </EntityLayout.Route>
  </EntityPageLayout>
);

Using the plugin

Finally, after the plugin is fully set up, it needs to be told what data to display for individual catalog entities. Extend the entity with an annotation, an experience Backstage users will recognize from other plugins:

metadata:
  annotations:
    'quay.io/repository-slug': '<ORGANIZATION>/<REPOSITORY>'

For example, if we annotate a Component with 'quay.io/repository-slug': 'janus-idp/redhat-backstage-build', we are presented with the following page:

Backstage view for janus-idp/redhat-backstage-build Quay repository
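The example above, written out as a full catalog-info.yaml sketch (only the annotation is required by the plugin; the entity type, owner, and lifecycle shown here are illustrative):

```yaml
# Hypothetical entity; spec fields are illustrative.
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: redhat-backstage-build
  annotations:
    'quay.io/repository-slug': 'janus-idp/redhat-backstage-build'
spec:
  type: service
  owner: janus-idp
  lifecycle: experimental
```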

Next steps

Although this plugin doesn't have the vast feature set available in the Quay UI, it brings much-needed value to Backstage users. In the future, we plan to iterate on this plugin and provide users with more insights into unique Quay functions like vulnerability scanning and detailed manifest views.

To contribute to this plugin, report issues, seek guidance or provide feedback, please see our GitHub repository https://github.com/janus-idp/backstage-plugins/tree/main/plugins/quay.