Vitality Getting Started

Maël Aubert edited this page Mar 12, 2025 · 39 revisions

This page provides a comprehensive guide to setting up, installing, and working on the Vitality project. Vitality is a centralized telemetry system designed to integrate multiple tools, enabling better debugging, performance tracking, and user experience improvements. Follow the instructions below to get started.


🔧 Main Technologies Used

🖥️ Frontend

🏢 Frontend Back Office (BO)

⚙ Backend


📥 Installation Guide

Prerequisites

  1. Install PostgreSQL for your OS.
  2. Install pgAdmin 4 for your OS.
  3. Install Node.js using nvm (minimum version 22.11.0).
  4. Install pnpm using:
    npm i -g pnpm
    Minimum version: 9.15.4.
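
The minimum versions above can be checked from a shell. A small sketch, assuming `node` and `pnpm` are already on the PATH after the installs:

```shell
# Compare installed versions against the documented minimums.
MIN_NODE="22.11.0"
MIN_PNPM="9.15.4"

# version_ge A B: true when version A >= version B (version-aware compare).
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

version_ge "$(node --version 2>/dev/null | tr -d v)" "$MIN_NODE" \
    && echo "node OK" || echo "node missing or too old"
version_ge "$(pnpm --version 2>/dev/null)" "$MIN_PNPM" \
    && echo "pnpm OK" || echo "pnpm missing or too old"
```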

πŸ—„οΈ Database Configuration

1. Create a Role in PostgreSQL

  • Open your terminal and type psql.

  • Use \du to list existing roles.

  • If your_name does not exist, create a role using the following command:

    CREATE ROLE "your_name" LOGIN PASSWORD 'your_password';

    Example:

    CREATE ROLE "john_dupon" LOGIN PASSWORD 'awesome_password';

    🔴 Note: The postgres role should exist by default as a superuser.

2. Set Up a New Connection in pgAdmin 4

  • Open pgAdmin 4 and create a new server connection.
  • Use the following details:
    • Name: v6y_database
    • Hostname: localhost

⚠ Common Errors

bash: psql: command not found

Fix by adding PostgreSQL to your system's PATH:

export PATH=/path/to/PostgreSQL/bin:$PATH

🚀 Project Installation

1. Clone the Repository

git clone https://github.com/ekino/v6y.git

2. Navigate to the Project Directory

cd v6y

3. Install Dependencies with PNPM

pnpm install

This installs dependencies for all monorepo modules.

4. Configure Environment Variables

  • In each of the following directories, create a .env file based on the corresponding env-template file: v6y (root), v6y-libs/core-logic, front, front-bo, bff, and the bfb-* modules.

  • Refer to GitLab Personal Access Tokens and GitHub Personal Access Tokens for generating tokens.

  • Initialize the database by running the following command from the root folder:

    pnpm run init-db

📡 Running the Application

🖥️ Frontend Only

The Frontend is responsible for displaying the user interface of the application, while the Backend for Frontend (BFF) acts as an intermediary layer between the frontend and backend systems. It handles data aggregation and communication with the backend services, ensuring that the frontend receives the required data in an optimized format.

To start these components:

  1. Start the Backend for Frontend:

    cd v6y-apps/bff
    pnpm start:dev
  2. Start the Frontend:

    cd v6y-apps/front
    pnpm start:dev

🔵 GraphQL Playground: Access the playground at http://localhost:4001/v6y/graphql for testing queries and mutations.
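
A quick way to confirm the BFF is reachable is a schema introspection call. The query below is standard GraphQL and assumes nothing about the Vitality schema itself:

```shell
# Standard GraphQL introspection query; valid against any GraphQL endpoint.
QUERY='{"query":"{ __schema { queryType { name } } }"}'

# Prints the query type name when the BFF is running on port 4001.
curl -s http://localhost:4001/v6y/graphql \
    -H 'Content-Type: application/json' \
    -d "$QUERY" || echo "BFF not reachable on :4001"
```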

🏢 Frontend Back Office (BO) Only

The Frontend Back Office (BO) is designed for administrative tasks, providing tools for managing application configurations, user accounts, and other backend settings. Like the frontend, it communicates with the Backend for Frontend (BFF) for optimized data handling.

To start these components:

  1. Start the Backend for Frontend:

    cd v6y-apps/bff
    pnpm start:dev
  2. Start the Frontend Back Office:

    cd v6y-apps/front-bo
    pnpm start:dev

🔵 GraphQL Playground: Access the playground at http://localhost:4001/v6y/graphql for testing queries and mutations.

⚙ Backend Only

The BFB Main Analyzer retrieves the list of configured applications from the database. Each application contains a Git repository URL (GitHub/GitLab) and a production URL.

  1. Main Analyzer Workflow:

    • The main analyzer checks out the repository as a ZIP file.

    • The ZIP file is extracted to:

      v6y-apps/code-analysis-workspace
      
    • Once the source code is checked out, further analysis does not require the main analyzer.

    • Alternatively, contributors can manually download and extract the ZIP file into code-analysis-workspace and directly start the static analyzer.

  2. Starting Static Analysis:

    • The main analyzer triggers the static analyzer (if required by the application type).

    • If a new analyzer needs to be attached, modify ApplicationManager.buildApplicationFrontendByBranch:

      try {
          await fetch(staticAuditorApiPath as string, {
              method: 'POST',
              headers: { 'Content-Type': 'application/json' },
              body: JSON.stringify({ applicationId, workspaceFolder }),
          });
      } catch (error) {
          AppLogger.info(
              `[ApplicationManager - buildApplicationFrontendByBranch - staticAuditor] error:  ${error}`
          );
      }
      
      try {
          await fetch(yourNewStaticAuditorUrl as string, {
              method: 'POST',
              headers: { 'Content-Type': 'application/json' },
              body: JSON.stringify({ applicationId, workspaceFolder }),
          });
      } catch (error) {
          AppLogger.info(
              `[ApplicationManager - buildApplicationFrontendByBranch - staticAuditor] error:  ${error}`
          );
      }
    • Each analyzer must have its own try-catch to avoid failures from blocking others.

  3. Dynamic Analysis Requires the Main Analyzer:

    • Dynamic analyzers run on production URLs and require real-time data.

    • To attach a new dynamic analyzer, update ApplicationManager.buildDynamicReports:

      const buildDynamicReports = async ({ application }: BuildApplicationParams) => {
          AppLogger.info('[ApplicationManager - buildDynamicReports] application: ', application?._id);
      
          if (!application) {
              return false;
          }
      
          try {
              await fetch(dynamicAuditorApiPath as string, {
                  method: 'POST',
                  headers: { 'Content-Type': 'application/json' },
                  body: JSON.stringify({ applicationId: application?._id, workspaceFolder: null }),
              });
          } catch (error) {
              AppLogger.info(
                  `[ApplicationManager - buildDynamicReports - dynamicAuditor] error:  ${error}`
              );
          }
      };
    • Each analyzer must have its own try-catch to avoid failures from blocking others.

  4. Automated Keyword and Evolution Management:

    • The system automatically updates keywords and evolutions based on audit results.

    • No manual changes are required to insert keywords or track evolutions.

  5. Ensuring Each Analyzer Runs in Isolation:

    • Each analyzer should run inside a worker process to prevent blocking the main thread:

      await forkWorker('./src/workers/LighthouseAnalysisWorker.ts', workerConfig);
      await forkWorker('./src/workers/CodeQualityAnalysisWorker.ts', workerConfig);
      await forkWorker('./src/workers/DependenciesAnalysisWorker.ts', workerConfig);
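
The project's forkWorker helper is not shown on this page. A minimal sketch of how such a helper could wrap Node's child_process.fork; the signature and config shape here are illustrative, not the actual implementation:

```typescript
// Illustrative sketch only: a forkWorker-style helper built on child_process.fork.
// The real helper in the repository may differ.
import { fork } from 'node:child_process';

interface WorkerConfig {
    applicationId?: number;
    workspaceFolder?: string | null;
}

const forkWorker = (workerPath: string, config: WorkerConfig = {}): Promise<void> =>
    new Promise((resolve, reject) => {
        // Run the worker in its own Node process so a crash or a long-running
        // analysis cannot block the main analyzer's event loop.
        const worker = fork(workerPath, [JSON.stringify(config)]);
        worker.on('error', reject);
        worker.on('exit', (code) =>
            code === 0
                ? resolve()
                : reject(new Error(`Worker ${workerPath} exited with code ${code}`)),
        );
    });
```

Each analyzer then runs in isolation: a rejected promise surfaces the failing worker without stopping its siblings.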

🔵 It's also possible to start any task or script from the root package.json:

...
  "scripts": {
    "start:frontend": "nx run @v6y/front:start",
    "start:frontend:bo": "nx run @v6y/front-bo:start",
    "start:bff": "nx run @v6y/bff:start",
    "start:bfb:main": "nx run @v6y/bfb-main-analyzer:start",
    "start:bfb:static": "nx run @v6y/bfb-static-code-auditor:start",
    "start:bfb:dynamic": "nx run @v6y/bfb-url-dynamic-auditor:start",
    "start:bfb:devops": "nx run @v6y/bfb-devops-auditor:start",
    "start:all": "nx run-many --target=start --all",
    "stop:all": "nx run-many --target=stop --all",
    "build:tsc": "nx run-many --target=build:tsc --all",
    "build": "nx run-many --target=build --all",
    "lint": "nx run-many --target=lint --all",
    "lint:fix": "nx run-many --target=lint:fix --all --verbose",
    "format": "nx run-many --target=format --all",
    "verify:code:duplication": "jscpd --config .jscpd.json",
    "ts-coverage": "typescript-coverage-report",
    "test": "nx run-many --target=test --all",
    "init-db": "nx run-many --target=init-db --all",
    "nx:analyze:graph": "nx graph",
    "nx:analyze:graph:affected": "nx graph --affected",
    "nx:clear:cache": "nx reset",
    "prepare": "husky"
  },
...

🗄 Initial Database Data

The initial database data is critical for the application to function correctly as it provides the foundational data structures and default configurations required for core features. For example, it may include:

  • Default Roles and Permissions: Ensuring proper access control mechanisms are in place.
  • Configuration Settings: Predefined settings to bootstrap the application.
  • Demo or Sample Data: Allows you to test and verify features during development.

Steps to Load Initial Data:

  1. Import the tar file into your PostgreSQL database.

  2. To use the Front or Front-BO, or to query the BFF, create a superadmin account in the database. Use the following SQL command to insert the account:

    INSERT INTO accounts (username, email, password, role, created_at, updated_at, applications)
    VALUES (
      'superadmin', 
      'superadmin@example.com', 
      '$2a$10$fSDUAlp4s8gJNc7HtZdMdeevQHAyRgCy6knbL1QQz3pHstXSbWm0W', 
      'SUPERADMIN', 
      NOW(), 
      NOW(), 
      ARRAY[]::integer[]
    );
  3. Once the superadmin account is created, log in with the superadmin credentials.

🔵 Note: The login process generates an authentication token, which must be included in every request sent to the BFF.

  4. To create additional user accounts, log in to Front-BO and create them with the appropriate roles and privileges.

🎨 Theming Guidelines (Adding a New Theme)

export const ThemeTypes = {
    ADMIN_DEFAULT: 'admin-default',
    APP_DEFAULT: 'app-default',
    // Add here your new theme value
};

export const ThemeModes = {
    LIGHT: 'light',
    DARK: 'dark',
};

/**
 * Load theme based on the theme type
 * @param theme
 */
export const loadTheme = ({ theme }: ThemeProps) => {
    if (theme === ThemeTypes.ADMIN_DEFAULT) {
        return AdminTheme;
    }

    if (theme === ThemeTypes.APP_DEFAULT) {
        return AppTheme;
    }


    // add here your theme case

    return {};
};
  • Add the necessary variants (e.g., admin, app).
  • All theme-specific changes should reside within ui-kit and ui-guide.
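
Putting the steps together, wiring in a hypothetical partner-default theme would look like this; PartnerTheme stands in for a real theme object from ui-kit:

```typescript
// Hypothetical example: adding a "partner-default" theme to the pattern above.
const PartnerTheme = { name: 'partner-default' }; // placeholder for a real theme object

const ThemeTypes = {
    ADMIN_DEFAULT: 'admin-default',
    APP_DEFAULT: 'app-default',
    PARTNER_DEFAULT: 'partner-default', // new theme value
};

const loadTheme = ({ theme }: { theme: string }) => {
    if (theme === ThemeTypes.PARTNER_DEFAULT) {
        return PartnerTheme; // new theme case
    }
    // ...existing ADMIN_DEFAULT and APP_DEFAULT cases go here...
    return {};
};
```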

πŸ” Interaction with Repositories

  • Vitality seamlessly integrates with GitHub and GitLab repositories to fetch repository details, file contents, deployments, and merge requests.
  • This interaction is fully centralized in RepositoryApi.ts, ensuring consistent API calls across all Vitality applications.
/**
 * Builds the configuration for the Github API.
 * @param organization
 * @constructor
 */
const GithubConfig = (organization: string): GithubConfigType => ({
    baseURL: 'https://api.github.com',
    api: '',

    urls: {
        fileContentUrl: (repoName: string, fileName: string) =>
            `https://api.github.com/repos/${organization}/${repoName}/contents/${fileName}`,
        repositoryDetailsUrl: (repoName: string) =>
            `https://api.github.com/repos/${organization}/${repoName}`,
    },

    headers: {
        Authorization: `Bearer ${process.env.GITHUB_PRIVATE_TOKEN}`,
        Accept: 'application/vnd.github+json',
        'Content-Type': 'application/json',
        'User-Agent': 'V6Y',
    },
});

/**
 * Builds the configuration for the Gitlab API.
 * @param organization
 * @constructor
 */
const GitlabConfig = (organization: string | null): GitlabConfigType => {
    const baseURL = organization ? `https://gitlab.${organization}.com` : 'https://gitlab.com';
    return {
        baseURL,
        api: 'api/v4',

        urls: {
            repositoryDetailsUrl: (repoName: string) =>
                `${baseURL}/api/v4/projects?search=${repoName}`,
            fileContentUrl: (repoName: string, fileName: string) =>
                `${baseURL}/api/v4/projects?search=${repoName}/${fileName}`,
            repositoryDeploymentsUrl: (repoId: string) =>
                `${baseURL}/api/v4/projects/${repoId}/deployments`,
            repositoryMergeRequestsUrl: (repoId: string) =>
                `${baseURL}/api/v4/projects/${repoId}/merge_requests`,
        },

        headers: {
            'PRIVATE-TOKEN': process.env.GITLAB_PRIVATE_TOKEN || '',
            'Content-Type': 'application/json',
        },
    };
};

/**
 * Builds the query options for the API.
 * @param organization
 * @param type
 */
const buildQueryOptions = ({
    organization,
    type = 'gitlab',
}: BuildQueryOptions): GithubConfigType | GitlabConfigType =>
    type === 'gitlab' ? GitlabConfig(organization!) : GithubConfig(organization!);

⚠️ Handling GitHub and GitLab Incompatibilities: If any incompatibility between GitHub and GitLab APIs is detected, it should be handled exclusively within RepositoryApi.ts.

This ensures:

  • A consistent API interface for all Vitality applications.
  • Seamless Git provider switching without modifying application logic.
  • Centralized maintenance, reducing the risk of API inconsistencies.

All Vitality applications must use RepositoryApi.ts for making any request to a Git repository.
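
The value of this pattern is that callers never branch on the provider themselves. A self-contained sketch with simplified types (the real GithubConfigType and GitlabConfigType live in the project):

```typescript
// Simplified, self-contained version of the provider-switch pattern above.
type RepoConfig = {
    baseURL: string;
    repositoryDetailsUrl: (repoName: string) => string;
};

const githubConfig = (organization: string): RepoConfig => ({
    baseURL: 'https://api.github.com',
    repositoryDetailsUrl: (repoName) =>
        `https://api.github.com/repos/${organization}/${repoName}`,
});

const gitlabConfig = (organization: string | null): RepoConfig => {
    const baseURL = organization ? `https://gitlab.${organization}.com` : 'https://gitlab.com';
    return {
        baseURL,
        repositoryDetailsUrl: (repoName) => `${baseURL}/api/v4/projects?search=${repoName}`,
    };
};

const buildQueryOptions = ({
    organization,
    type = 'gitlab',
}: {
    organization: string | null;
    type?: 'github' | 'gitlab';
}): RepoConfig =>
    type === 'gitlab' ? gitlabConfig(organization) : githubConfig(organization ?? '');

// The caller is provider-agnostic:
const cfg = buildQueryOptions({ organization: 'ekino', type: 'github' });
console.log(cfg.repositoryDetailsUrl('v6y')); // https://api.github.com/repos/ekino/v6y
```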


📈 Interaction with Monitoring Events

Vitality uses DataDog to collect monitoring events. However, you can easily integrate your own monitoring platform by following these steps:

  1. Create a function named fetch[your-monitoring-platform]Events(), similar to the existing fetchDataDogEvents(), in MonitoringApi.ts.
  2. Convert the fetched data into MonitoringEvent as defined in MonitoringType.ts. This conversion function should be placed in MonitoringUtils.ts.
  3. Call the fetch and conversion functions within getMonitoringEvents() in MonitoringApi.ts.
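
The steps above can be sketched for a hypothetical "Acme" monitoring platform. MonitoringEvent's real shape is defined in MonitoringType.ts, so the fields below are illustrative only:

```typescript
// Illustrative only: an "Acme" monitoring platform integration.
// The real MonitoringEvent type lives in MonitoringType.ts.
type MonitoringEvent = { title: string; timestamp: number };

type AcmeEvent = { name: string; occurredAtMs: number };

// Step 1: fetcher (would live in MonitoringApi.ts). A real implementation
// would call the platform's HTTP API using its own auth configuration.
const fetchAcmeEvents = async (): Promise<AcmeEvent[]> => [
    { name: 'deployment finished', occurredAtMs: 1700000000000 },
];

// Step 2: converter (would live in MonitoringUtils.ts).
const convertAcmeEventsToMonitoringEvents = (events: AcmeEvent[]): MonitoringEvent[] =>
    events.map((event) => ({ title: event.name, timestamp: event.occurredAtMs }));

// Step 3: getMonitoringEvents() would then call both, mirroring the existing
// fetchDataDogEvents / convertDataDogEventsToMonitoringEvents pair.
```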

🛠 Example Implementation

As a result, your getMonitoringEvents() function should look like this:

const getMonitoringEvents = async ({ application, dateStartStr, dateEndStr }: GetEventsOptions) => {
    try {
        AppLogger.info(
            `[EventApi - getEvents] Fetching events for application: ${application?._id}`,
        );

        if (!application || !application.configuration?.dataDog) {
            AppLogger.error(
                `[EventApi - getEvents] Application or DataDog configuration is missing`,
            );
            return [];
        }

        const dateStartTimeStamp = formatStringToTimeStamp(dateStartStr, 'ms');
        const dateEndTimeStamp = formatStringToTimeStamp(dateEndStr, 'ms');

        const dataDogEvents = await fetchDataDogEvents({
            dataDogConfig: application.configuration.dataDog,
            dateStartTimeStamp,
            dateEndTimeStamp,
        });

        const convertedEvents = convertDataDogEventsToMonitoringEvents({
            dataDogEvents,
            dateStartTimeStamp,
            dateEndTimeStamp,
        });

        AppLogger.info(
            `[EventApi - getEvents] Events fetched successfully: ${convertedEvents.length}`,
        );

        return convertedEvents;
    } catch (error) {
        AppLogger.error(
            `[EventApi - getEvents] An exception occurred while fetching events: ${error}`,
        );
        return [];
    }
};

⚠️ Configuration Update

If your monitoring platform requires API keys or additional configurations, make sure to:

  • Store them in the database under the configuration column.
  • Update the schema in ApplicationModel.ts.

👥 Contribution Guide

  1. Check GitHub Issues for good first issue or help wanted tags.

  2. Follow the Contribution Guide.


📜 License

This project is licensed under the MIT License. See the LICENSE file for details.


📩 Contact

For further assistance, contact our support team or open an issue on GitHub.