diff --git a/catalog/systems/OnePlatform.yml b/catalog/systems/OnePlatform.yml index 75622b83b..c8ddb08bf 100644 --- a/catalog/systems/OnePlatform.yml +++ b/catalog/systems/OnePlatform.yml @@ -6,6 +6,9 @@ metadata: title: One Platform namespace: devex description: One Platform provides a single place for all internal applications and services, supports consistent User experience by providing standard platform for service hosting and data integration, efficient resource management, real time metrics availability, cross-team collaboration and unified documentation. + annotations: + github.com/project-slug: '1-Platform/one-platform' + backstage.io/techdocs-ref: dir:. tags: - digital-experience - javascript diff --git a/catalog/systems/docs/code-of-conduct.md b/catalog/systems/docs/code-of-conduct.md new file mode 100644 index 000000000..1baecabbe --- /dev/null +++ b/catalog/systems/docs/code-of-conduct.md @@ -0,0 +1,80 @@ +--- +id: code-of-conduct +title: Code of Conduct +sidebar_label: Code of Conduct +--- + +## Our Pledge + +In the interest of fostering an open and welcoming environment, we as +contributors and maintainers pledge to making participation in our project and +our community a harassment-free experience for everyone, regardless of age, body +size, disability, ethnicity, sex characteristics, gender identity and expression, +level of experience, education, socio-economic status, nationality, personal +appearance, race, religion, or sexual identity and orientation. 
+ +## Our Standards + +Examples of behavior that contributes to creating a positive environment +include: + +- Using welcoming and inclusive language +- Being respectful of differing viewpoints and experiences +- Gracefully accepting constructive criticism +- Focusing on what is best for the community +- Showing empathy towards other community members + +Examples of unacceptable behavior by participants include: + +- The use of sexualized language or imagery and unwelcome sexual attention or + advances +- Trolling, insulting/derogatory comments, and personal or political attacks +- Public or private harassment +- Publishing others' private information, such as a physical or electronic + address, without explicit permission +- Other conduct which could reasonably be considered inappropriate in a + professional setting + +## Our Responsibilities + +Project maintainers are responsible for clarifying the standards of acceptable +behavior and are expected to take appropriate and fair corrective action in +response to any instances of unacceptable behavior. + +Project maintainers have the right and responsibility to remove, edit, or +reject comments, commits, code, wiki edits, issues, and other contributions +that are not aligned to this Code of Conduct, or to ban temporarily or +permanently any contributor for other behaviors that they deem inappropriate, +threatening, offensive, or harmful. + +## Scope + +This Code of Conduct applies both within project spaces and in public spaces +when an individual is representing the project or its community. Examples of +representing a project or community include using an official project e-mail +address, posting via an official social media account, or acting as an appointed +representative at an online or offline event. Representation of a project may be +further defined and clarified by project maintainers. 
+
+## Enforcement
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be
+reported by contacting the project team at one-platform@redhat.com. All
+complaints will be reviewed and investigated and will result in a response that
+is deemed necessary and appropriate to the circumstances. The project team is
+obligated to maintain confidentiality with regard to the reporter of an incident.
+Further details of specific enforcement policies may be posted separately.
+
+Project maintainers who do not follow or enforce the Code of Conduct in good
+faith may face temporary or permanent repercussions as determined by other
+members of the project's leadership.
+
+## Attribution
+
+This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
+available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
+
+[homepage]: https://www.contributor-covenant.org
+
+For answers to common questions about this code of conduct, see
+https://www.contributor-covenant.org/faq
diff --git a/catalog/systems/docs/how-to-contribute.md b/catalog/systems/docs/how-to-contribute.md
new file mode 100644
index 000000000..8542f81bb
--- /dev/null
+++ b/catalog/systems/docs/how-to-contribute.md
@@ -0,0 +1,35 @@
+---
+id: how-to-contribute
+title: How to Contribute
+sidebar_label: How to Contribute
+---
+
+First of all, thank you for your effort to improve One Platform. This guide will help you with the various aspects of contributing, like filing issues, contributing a feature, etc.
+
+## Code of Conduct
+
+This project and everyone participating in it is governed by the [Code of Conduct](./code-of-conduct.md). By participating, you are expected to uphold this code. Please read the [full text](./code-of-conduct.md) so that you know which actions will and will not be tolerated.
+
+---
+
+## Before Submitting a Pull Request
+
+**Before submitting your pull request** make sure the following requirements are fulfilled:
+
+- Fork the repository
+- Run `npm install` in the repository root
+- Create a branch from `master`
+- Add the required envs
+- Make the necessary code changes for the bug fix or new feature
+- Run the linter and formatter
+- Make sure all tests are passing
+
+```bash
+npm run test
+```
+
+## Reporting an issue
+
+Before submitting an issue, please make sure to:
+
+- Provide an adequate description and a clear title
diff --git a/catalog/systems/docs/index.md b/catalog/systems/docs/index.md
new file mode 100644
index 000000000..440d14354
--- /dev/null
+++ b/catalog/systems/docs/index.md
@@ -0,0 +1,42 @@
+---
+id: overview
+title: What's One Platform
+sidebar_label: Overview
+slug: /
+---
+
+One Platform provides a single place for all internal applications and services. It supports a consistent user experience by providing a standard platform for service hosting and data integration, efficient resource management, real-time metrics availability, cross-team collaboration, and unified documentation.
+
+## Why One Platform?
+
+If we observe the key challenges any organization faces with regard to application platforms for developers, they are providing a consistent app development, deployment & delivery experience, and a single place to access information. These can stem from the lack of a shared services platform for developers and unawareness of similar apps' availability (search), which invites major duplication of code and effort.
+
+The One Platform team has been developing a solution to this problem since the start of this year. The goal was to build a shared services platform that helps developers increase development speed using easy-to-integrate microservices, in-built SPAs, and a simple component library, and to provide seamless app delivery.
The intent is to provide a platform that connects users, developers, and stakeholders across the organization and allows them to exchange value by sharing apps.
+
+Developers have good reasons to choose from the many feasible platforms that provide a good development and deployment experience, and One Platform works with these platforms and their respective tools to ease a developer's job even further. While One Platform provides integrations and microservices that save developers time and effort, developers can use the saved time to strengthen the core functionality of their individual apps.
+
+## One Platform Benefits
+
+One Platform has been built on one key principle/mantra, i.e. **Develop fast, Deliver faster**.
+
+One Platform does not interfere in the developer's business and provides the flexibility to develop an app using their favorite framework, language, and tools. It provides on-demand integrations with internal tools and applications to easily access/share content. The following diagram shows where One Platform really comes into the picture.
+
+![OP Overview](/img/getting-started/op-overview.jpeg)
+
+The applications deployed in One Platform are:
+
+1. Open to all, by choice
+2. Hosted under a single domain, i.e. one.redhat.com/yourapp
+3. Easy to search (including app contents) using the common search service
+4. Have access to the core microservices (we will discuss them in detail in the next part)
+   - Authentication - Red Hat Single Sign-On enabled (subject to IT regulations)
+   - Authorization - Rover integration to view & grant user access
+   - Feedback - Collect feedback in 3 clicks, generate tickets (Jira, GitLab) for improvements
+   - Notifications - Inter-SPA communication, toasters, banners, subscriptions, etc.
+   - Search - App & content search utilizing the inbuilt integration with Apache Solr
+5. Deployable within minutes using SPAship.
+6. 
PatternFly compliant and can easily utilize the One Platform component library
+
+In the end, One Platform would like to change the developer's mindset through its services from **"I work on a product/process"** to
"I connect users to an experience"
diff --git a/catalog/systems/docs/op-architecture.md b/catalog/systems/docs/op-architecture.md
new file mode 100644
index 000000000..9b3777267
--- /dev/null
+++ b/catalog/systems/docs/op-architecture.md
@@ -0,0 +1,75 @@
+---
+id: op-architecture
+title: One Platform Architecture
+sidebar_label: One Platform Architecture
+slug: /architecture
+---
+
+One Platform entirely focuses on:
+
+- Core interactions of native & non-native apps/microservices.
+- Generating maximum value for both developers and users.
+- Enhancing and providing a consistent developer experience.
+
+![OP Arch](/img/getting-started/op-arch.png)
+
+The OP architecture majorly consists of two components:
+
+1. The SPA deployment base provided by SPAship
+
+2. The unified core API service provided by a federated GraphQL gateway
+
+## Core Services
+
+Core services provide the basic interactions required by any application, like Authentication, Feedback, Notification, etc.
+
+Some of the core services provided by One Platform are:
+
+### Authentication
+
+One Platform has different strategies for authentication. On the client side, internal auth (auth.redhat.com) is supported, and at the API gateway, JWT tokens from internal auth as well as API keys are supported. SPAs are authenticated by default through SSI authentication support.
+
+### SSI Components
+
+One Platform provides global components, including a navigation bar, feedback action button, etc. These are provided as pluggable web components, published under the `one-platform` namespace on npm. They provide flexibility and extensibility to the SSI.
+
+### User Group
+
+The user-group microservice is a simple wrapper over Rover that uses LDAP to authorize users. The plan is to give complete ownership control to developers so the platform can stay out of the authorization business. The user-group microservice plays the role of middleware which talks to the organization's user data store. It is primarily integrated with LDAP and Rover. 
Also, it manages a minimal data store for faster data processing. This data store is updated daily with sync scripts. Native/non-native microservices/SPAs can benefit from this for managing user information.
+
+### Feedback
+
+The goal of the feedback microservice is to let users submit feedback in 3 easy steps, i.e. select an app, select an experience, and describe the problem, to help developers and the Platform team (in the case of core services) build context for better decisions. The feedback service is integrated with ticketing systems (Jira & GitLab) so developers can follow up with end-users and record satisfaction. This provides complete transparency, as the data is visible to all visitors and helps to increase the value of the applications.
+
+### Notification
+
+It is the core communication microservice of the platform for native (inbuilt) and non-native apps. It enables developers to select & configure the mode of communication for individual apps. The need for microservices to communicate with each other, much of which does not necessitate real-time communication, demanded an engine that can notify users of an event without worrying about the health and responsiveness of the receivers. We kept it lightweight to ensure a quick response.
+
+### Search
+
+One Platform's goal is to consolidate applications and make them searchable in real time. It should be a single point of contact for end-users when they are looking for an app. The Search microservice not only resolves the app search problem but also extends the search to app contents. This helps in designing & developing a native app search.
+
+### Developer Console
+
+The developer's dashboard, or rather the control plane, of One Platform. A single point to manage all your SPAs and their corresponding OP service utilization.
+
+### API Gateway
+
+The responsibility of the API gateway is to record “which service is communicating, with whom, and is it allowed to do so”. 
Access control is implemented on top of the API Gateway, which enables the authorization and permission model in the data flow. It is also the single source of truth for the entire One Platform backend. WebSocket support is also provided in the gateway.
+
+The supported authorization models are:
+
+- JWT tokens from auth.redhat.com
+- API key
+
+## Hosted Services
+
+There are hosted services maintained by the One Platform team to enhance the developer experience even further. Some of these are:
+
+### Lighthouse
+
+Lighthouse is a Google open source webpage audit tool that measures various parameters like SEO, PWA, accessibility, etc. One Platform hosts the Lighthouse CI server for CI testing, along with an interactive yet simple UI to track your SPA's Lighthouse progress.
+
+### API Catalog
+
+API Catalog is One Platform's effort to resolve API discoverability in an organization. In simple terms, it's a catalog to discover the various APIs provided by various teams. It helps developers to manage, promote, and share APIs with their users. Users can get various information regarding an API, like its owners or maintainers, the various pre-prod and prod instances available, etc. API Catalog also provides toolsets to play around with the APIs.
diff --git a/catalog/systems/docs/service-deployment-guideline.md b/catalog/systems/docs/service-deployment-guideline.md
new file mode 100644
index 000000000..e2421d753
--- /dev/null
+++ b/catalog/systems/docs/service-deployment-guideline.md
@@ -0,0 +1,50 @@
+---
+id: service-deployment-guideline
+title: OP Service Deployment Guideline
+slug: /deployments/service
+sidebar_label: Service Guideline
+---
+
+This document shares the steps for deploying a microservice on the Kubernetes cluster in the OpenShift environment.
+
+## Workflow
+
+1. Once the PR/MR is merged, run the GitHub Actions workflow configured with the repository with the proper tags.
+2. GitHub Actions containerizes the code and pushes the images to the GitHub Container Registry (GHCR). 
+3. Once the image is published in GHCR, update the imagestream of the respective microservice in OpenShift.
+4. Roll out the microservice deployment and restart the One Platform API Gateway if required.
+5. Now your changes are live.
+
+## Building an Image
+
+1. To build the image after the PR, navigate to GitHub Actions and select the action you want to perform. Click the Run workflow button for the selected action.
+2. Once the GitHub Action is completed, you will be able to see the new/updated image in the packages section of the One Platform repository.
+
+![GH Workflow Trigger](/img/service-deploymeny-guide/step1.png)
+
+3. Details of the new/updated image are available on the package page of the GitHub repository, along with the update history.
+
+![GH Workflow Progress](/img/service-deploymeny-guide/step2.png)
+
+4. Log in to the OpenShift console and copy the login command for the oc CLI.
+
+```sh
+oc login --token=token-test --server=https://test.openshiftapps.com:6442
+```
+
+![GH Workflow History](/img/service-deploymeny-guide/step3.png)
+
+5. Switch to the project in OpenShift to update the imagestream.
+
+```sh
+oc project <project-name>
+```
+
+6. Update the imagestream with the new image.
+
+```sh
+oc import-image <imagestream-name>:<tag>
+```
+
+7. Under the Imagestreams section of the OpenShift web UI, you can see that the new image has rolled out.
+8. 
Navigate to the respective DeploymentConfig and redeploy the microservice through the web UI to roll out the changes.
diff --git a/catalog/systems/docs/spa-deployment-guidelines.md b/catalog/systems/docs/spa-deployment-guidelines.md
new file mode 100644
index 000000000..749a86b49
--- /dev/null
+++ b/catalog/systems/docs/spa-deployment-guidelines.md
@@ -0,0 +1,10 @@
+---
+id: spa-deployment-guidelines
+title: SPA Deployment Guidelines
+slug: /deployments/spa
+sidebar_label: SPA Guidelines
+---
+
+## Guide
+
+#### Please head over to the [SPAship quickstart deployment guide](https://spaship.io/docs/guide/user-guide/Quickstart/)
diff --git a/catalog/systems/mkdocs.yml b/catalog/systems/mkdocs.yml
new file mode 100644
index 000000000..390502c15
--- /dev/null
+++ b/catalog/systems/mkdocs.yml
@@ -0,0 +1,13 @@
+site_name: 'One Platform'
+
+nav:
+  - What's One Platform: index.md
+  - One Platform Architecture: op-architecture.md
+  - OP Service Deployment Guideline: service-deployment-guideline.md
+  - SPA Deployment Guideline: spa-deployment-guidelines.md
+  - How to Contribute: how-to-contribute.md
+  - Code of Conduct: code-of-conduct.md
+
+plugins:
+  - techdocs-core
diff --git a/packages/analytics-service/catalog-info.yml b/packages/analytics-service/catalog-info.yml
index 86b615e03..0ce2fedcc 100644
--- a/packages/analytics-service/catalog-info.yml
+++ b/packages/analytics-service/catalog-info.yml
@@ -8,6 +8,7 @@ metadata:
   description: Analytics microservice is used for providing analytics api information for SPAs deployed in One Platform by connecting with Sentry and Pendo.
   annotations:
     github.com/project-slug: '1-Platform/one-platform'
+    backstage.io/techdocs-ref: dir:. 
    servicenow.com/appcode: ONEP-006
  tags:
    - microservice
diff --git a/packages/analytics-service/docs/index.md b/packages/analytics-service/docs/index.md
new file mode 100644
index 000000000..054afa8cb
--- /dev/null
+++ b/packages/analytics-service/docs/index.md
@@ -0,0 +1,55 @@
+# Analytics Microservice
+
+The Analytics microservice provides analytics API information for SPAs deployed in One Platform by connecting with Sentry and Pendo.
+
+## Features
+
+1. Total rate timeline
+2. Unique error rate timeline
+3. Total count of errors on an interval
+4. Total unique errors on an interval
+5. Error outcome based timeline (accepted errors, invalid errors, etc.)
+6. API to connect the analytics service with an existing project and an app, or create a new one in One Platform
+
+## Local Development
+
+### 1. Switch to the working directory
+
+1. Switch to the working directory `cd analytics-service`
+2. Copy `.env.example` to `.env`
+3. Change the values as needed, keeping the unneeded values as undefined
+
+### 2. Start Microservice
+
+Install the required modules using `npm install`
+
+Run `npm start` to run your microservice in the dev env
+
+To build the microservice, use `npm run build`.
+
+## Using docker-compose (Recommended)
+
+1. Follow the first 2 steps from above
+2. Then execute the following command to start a standalone instance of `analytics-service`
+
+   ```bash
+   docker-compose up -d analytics-service
+   ```
+
+   **Note:** Some features of the Analytics Service might not work without the API Gateway.
+
+3. 
To start the entire cluster of microservices, use the following command:
+
+   ```bash
+   docker-compose up -d api-gateway
+   ```
+
+## Running Tests
+
+```bash
+npm test
+```
+
+## Contributors:
+
+👤 **Akhil Mohan** [@akhilmhdh](https://github.com/akhilmhdh)
diff --git a/packages/analytics-service/mkdocs.yml b/packages/analytics-service/mkdocs.yml
new file mode 100644
index 000000000..fb703262e
--- /dev/null
+++ b/packages/analytics-service/mkdocs.yml
@@ -0,0 +1,7 @@
+site_name: 'Analytics Service'
+
+nav:
+  - Getting Started: index.md
+
+plugins:
+  - techdocs-core
diff --git a/packages/api-catalog-service/catalog-info.yml b/packages/api-catalog-service/catalog-info.yml
index 1d81dde83..1549d6d82 100644
--- a/packages/api-catalog-service/catalog-info.yml
+++ b/packages/api-catalog-service/catalog-info.yml
@@ -8,6 +8,7 @@ metadata:
   description: Microservice based GraphQL API Backend ecosystem for API Catalog.
   annotations:
     github.com/project-slug: '1-Platform/one-platform'
+    backstage.io/techdocs-ref: dir:.
    servicenow.com/appcode: ONEP-007
  tags:
    - microservice
diff --git a/packages/api-catalog-service/docs/api-reference.md b/packages/api-catalog-service/docs/api-reference.md
new file mode 100644
index 000000000..7a3eabf6b
--- /dev/null
+++ b/packages/api-catalog-service/docs/api-reference.md
@@ -0,0 +1,10 @@
+---
+id: api-ref
+title: API Reference
+slug: /api-catalog/api-ref
+sidebar_label: API Reference
+---
+
+# API Reference
+
+Coming soon!!!
diff --git a/packages/api-catalog-service/docs/faqs.md b/packages/api-catalog-service/docs/faqs.md
new file mode 100644
index 000000000..b1f294ce9
--- /dev/null
+++ b/packages/api-catalog-service/docs/faqs.md
@@ -0,0 +1,20 @@
+---
+id: faqs
+title: API Catalog FAQs
+slug: /api-catalog/faqs
+sidebar_label: FAQs
+---
+
+# FAQ
+
+> Can API Catalog be used to create OpenAPI Schema for a REST API?
+
+Right now API Catalog doesn’t support schema creation. Developers need to provide a URL that contains the schema hosted somewhere. 
API Catalog uses that to monitor and provide tools that use the schema.
+
+> What are the API types supported by API Catalog?
+
+API Catalog supports both REST and GraphQL APIs. We provide tools based on these two.
+
+> Our team has multiple APIs. Does API Catalog support grouping?
+
+Yes, API Catalog has a concept called namespaces to allow grouping. Your team can be considered a namespace, and you can add multiple types of APIs to that namespace.
diff --git a/packages/api-catalog-service/docs/index.md b/packages/api-catalog-service/docs/index.md
new file mode 100644
index 000000000..ab843302e
--- /dev/null
+++ b/packages/api-catalog-service/docs/index.md
@@ -0,0 +1,31 @@
+API Catalog Microservice
+=================================================
+
+API Catalog Microservice consists of the APIs required for the API Catalog SPA. The modules associated with the API Catalog microservice are mentioned below:
+  1. Namespace API Manager
+  1. Notifications Manager
+
+Switch to the working directory
+------------
+
+  `cd api-catalog-service`.
+
+Copy Certificates
+------------
+
+  1. Copy the SSL certificate paths to the `.env` file of the `api-catalog` microservice.
+
+Start Microservice
+------------
+
+  1. Run `npm run build:dev` to generate a build for the dev env and `npm run build` for a production build.
+  2. Run `npm start` to run your microservice in the dev env.
+
+Testing
+------------
+
+  1. Run `npm test` to run the default tests.
+
+Contributors
+------------
+👤 **Rigin Oommen** [@riginoommen](https://github.com/riginoommen)
diff --git a/packages/api-catalog-service/docs/onboarding-guide.md b/packages/api-catalog-service/docs/onboarding-guide.md
new file mode 100644
index 000000000..3dfaac123
--- /dev/null
+++ b/packages/api-catalog-service/docs/onboarding-guide.md
@@ -0,0 +1,30 @@
+---
+id: guides
+title: Guides
+slug: /api-catalog/guides
+sidebar_label: Guides
+---
+
+## Onboarding An API
+
+All APIs in API Catalog must belong to a namespace. 
A namespace is like a container to hold the multiple APIs of a group in an organization. The APIs can be REST or GraphQL. As an example, One Platform can be considered a namespace, and the One Platform GraphQL API can be one of the APIs in that namespace.
+
+Onboarding an API in API Catalog is an easy process. All you need to follow is a simple multi-step wizard form.
+
+1. Go to the [API Creation Page in API Catalog](https://one.redhat.com/developers/api-catalog/apis/new)
+2. Step 1 asks for namespace details like the name, description, and owners' mailing list.
+3. Step 2 is where you add the APIs that belong to your namespace
+   a. Fill in the required information like the name, description, API type, documentation URL, application URL, etc.
+   b. Provide the instances in which the API can be accessed, like QA, Stage, etc.
+   c. Fill in other information, such as whether the instance is VPN protected.
+4. You can add more APIs for your namespace here.
+5. Step 3 is a review step in which you can review all the data you have inserted.
+6. That's it! Submit the information, and tadaaa, your APIs are onboarded.
+
+## How does the API subscription work?
+
+1. To subscribe to an API, first head over to the detailed section of an API.
+2. Under each schema, there will be a subscription button.
+3. When you click on subscription, you can select the instances, like QA and Stage, to which you want to subscribe.
+4. That's it. The API Catalog will monitor the provided schema and, when it changes, notify you of the change by email.
+5. The email will contain all the breaking changes and non-breaking changes.
It helps developers to manage, promote, and share APIs with their users. Users can get various information regarding an API, like its owners or maintainers, the various pre-prod and prod instances available, etc. API Catalog also provides toolsets to play around with the APIs.
+
+## Features
+
+1. Supports REST and GraphQL APIs
+2. API metadata, like:
+   a. Information regarding the owners/maintainers and a mailing list to connect with them
+   b. Pre-prod environments available
+   c. Documentation link and application link
+   d. Deprecation notice for schemas getting deprecated
+   e. Whether an instance is VPN protected or not
+3. Each type of API has a set of tools for users to try out
+   a. REST API: Swagger and Redoc
+   b. GraphQL API: GraphQL Playground
+4. Link it with the status component to get to know whether it's operational or not
+5. CMDB information
+6. Change subscription
+
+## Subscription
+
+API Catalog also helps you to get notified of an API change. Users can subscribe to get notifications via mail for each instance of an API. When a schema change happens, the API Catalog will send a detailed email describing the breaking and non-breaking changes.
+
+## Tools
+
+These are the tools in API Catalog for users to play around with an API.
+
+- Swagger (REST)
+
+  It's a popular tool that helps you generate interactive documentation from an OpenAPI schema. Developers need to provide a valid OpenAPI schema to use Swagger.
+
+- Redoc (REST)
+
+  Redoc is another popular tool that generates user-friendly static reference documentation from the provided OpenAPI spec.
+
+- GraphQL Playground (GraphQL)
+
+  It's a popular tool among the GraphQL community. Playground connects with a GraphQL server and, based on the GraphQL introspection received, generates IntelliSense and a dashboard to try out various queries. In API Catalog you need to provide an introspection URL, or the server URL itself if introspection is allowed. 
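To illustrate what these tools consume, the snippet below builds a minimal GraphQL introspection payload of the kind Playground sends under the hood; the endpoint in the commented `curl` line is a placeholder, not a real One Platform URL:

```shell
# Minimal introspection payload (the same mechanism GraphQL Playground uses).
PAYLOAD='{"query":"{ __schema { queryType { name } } }"}'
echo "$PAYLOAD"
# To POST it to a server that allows introspection (placeholder URL):
# curl -s -X POST https://example.com/graphql \
#   -H 'Content-Type: application/json' \
#   -d "$PAYLOAD"
```

If the server answers this request, its URL can be used directly as the introspection URL when onboarding the API.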
diff --git a/packages/api-catalog-service/mkdocs.yml b/packages/api-catalog-service/mkdocs.yml new file mode 100644 index 000000000..514ee621f --- /dev/null +++ b/packages/api-catalog-service/mkdocs.yml @@ -0,0 +1,11 @@ +site_name: 'API Catalog Service' + +nav: + - Getting Started: index.md + - API Reference: api-reference.md + - Guide: onboarding-guide.md + - Overview: overview.md + - FAQs: faqs.md + +plugins: + - techdocs-core diff --git a/packages/api-gateway-service/catalog-info.yml b/packages/api-gateway-service/catalog-info.yml index f5bee0fb6..22eab3b70 100644 --- a/packages/api-gateway-service/catalog-info.yml +++ b/packages/api-gateway-service/catalog-info.yml @@ -8,6 +8,7 @@ metadata: description: GraphQL API Gateway with single-defined schema and source data from across many different microservices. annotations: github.com/project-slug: '1-Platform/one-platform' + backstage.io/techdocs-ref: dir:. servicenow.com/appcode: ONEP-001 tags: - api-gateway diff --git a/packages/api-gateway-service/docs/index.md b/packages/api-gateway-service/docs/index.md new file mode 100644 index 000000000..a8ceae5b4 --- /dev/null +++ b/packages/api-gateway-service/docs/index.md @@ -0,0 +1,26 @@ +# API Gateway + +API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, CORS support, authorization and access control, throttling, monitoring. + +## Local Development using docker-compose (recommended) + +1. Copy the `.env.example` to `.env`. Change/modify the variable values as required. +2. Change the URIs in `config.json` and add/remove the microservices as needed +3. Then run the docker-compose service using + ```bash + docker-compose up api-gateway + ``` + The above command will start all the dependent services from this project. 
(Make sure any external microservices added in step 2 are running and accessible) + +*Note:* Before starting the gateway, also make sure the microservices in this project are configured properly. + +## Running Tests + +```bash +npm test +``` + +## Contributors: + +👤 **Rigin Oommen** [@riginoommen](https://github.com/riginoommen) +👤 **Mayur Deshmukh** [@deshmukhmayur](https://github.com/deshmukhmayur) diff --git a/packages/api-gateway-service/mkdocs.yml b/packages/api-gateway-service/mkdocs.yml new file mode 100644 index 000000000..480d3b923 --- /dev/null +++ b/packages/api-gateway-service/mkdocs.yml @@ -0,0 +1,6 @@ +site_name: 'API Gateway Service' + +nav: + - Getting Started: index.md +plugins: + - techdocs-core diff --git a/packages/apps-service/catalog-info.yml b/packages/apps-service/catalog-info.yml index 102454fd9..4360ceab6 100644 --- a/packages/apps-service/catalog-info.yml +++ b/packages/apps-service/catalog-info.yml @@ -8,6 +8,7 @@ metadata: description: Microservice based GraphQL API Backend ecosystem for Apps Service. annotations: github.com/project-slug: '1-Platform/one-platform' + backstage.io/techdocs-ref: dir:. servicenow.com/appcode: ONEP-001 tags: - microservice diff --git a/packages/apps-service/docs/api-reference.md b/packages/apps-service/docs/api-reference.md new file mode 100644 index 000000000..19ce71ab9 --- /dev/null +++ b/packages/apps-service/docs/api-reference.md @@ -0,0 +1,28 @@ +--- +id: api-ref +title: API Reference +slug: /apps-service/api-ref +sidebar_label: API Reference +--- + +# API Reference + +You can test drive the available APIs on the [QA testing playground](https://qa.one.redhat.com/api/graphql). 
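For example, a `myApps` query can be sent to the playground endpoint above from the command line; the `id` and `name` selection fields and the exact auth header are assumptions for illustration, so adjust them to your schema and credentials:

```shell
# Hypothetical myApps call against the Apps Service through the gateway.
QUERY='{"query":"{ myApps { id name } }"}'
echo "$QUERY"
# With a JWT or API key (the header shape depends on your gateway setup):
# curl -s -X POST https://qa.one.redhat.com/api/graphql \
#   -H 'Content-Type: application/json' \
#   -H "Authorization: Bearer $TOKEN" \
#   -d "$QUERY"
```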
+
+Some of the queries and mutations provided by the Apps Service are:
+
+## Queries
+
+| Query | Description |
+| ----------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ |
+| myApps | Returns a list of all Apps you have registered. The user id is deduced from the Authorization header (i.e. JWT Token or the API Key) |
+| app(appId: String!) | Returns the metadata of a single app by its unique appId |
+| findApps(selectors: FindAppsInput!) | Used to find an app by any of the available fields |
+
+## Mutations
+
+| Mutation | Description |
+| ---------------------------------------- | ------------------------------------------------------- |
+| createApp(app: CreateAppInput!) | Creates/registers a new app/project |
+| updateApp(id: ID!, app: UpdateAppInput!) | Update/modify the metadata of your existing app/project |
+| deleteApp(id: ID!) | Delete an existing app/project |
diff --git a/packages/apps-service/docs/faqs.md b/packages/apps-service/docs/faqs.md
new file mode 100644
index 000000000..b7bf42d61
--- /dev/null
+++ b/packages/apps-service/docs/faqs.md
@@ -0,0 +1,22 @@
+---
+id: faqs
+title: Apps Service FAQs
+slug: /apps-service/faqs
+sidebar_label: FAQs
+---
+
+# FAQ
+
+> Who are the App Service APIs for?
+
+The App Service APIs are for any developer who wants to use the metadata of an app they have registered on the One Platform.
+
+> Can anyone create an App? How do I create a new App?
+
+Yes. Anyone can create an app as long as you have an internal Red Hat account (i.e. as long as you are a Red Hat employee).
+
+To create a new App, it is recommended to use the [Developer Console](https://one.redhat.com/console) as it provides a graphical interface for quickly and easily creating an app/project on One Platform.
+
+> I already have an app created, can I still register an existing app?
+
+Yes. 
If you want to integrate your app with any of the One Platform Microservices, registering your app in the Apps Service (via the Developer Console or the Apps Service APIs) is the easiest way to do so.
diff --git a/packages/apps-service/docs/index.md b/packages/apps-service/docs/index.md
new file mode 100644
index 000000000..fdf8c316d
--- /dev/null
+++ b/packages/apps-service/docs/index.md
@@ -0,0 +1,49 @@
+# Apps Microservice
+
+Apps Microservice provides the essential GraphQL APIs required for the Developer Console. It provides GraphQL queries for creating and managing apps.
+
+## Features
+
+- Allows creation of apps/projects
+- Allows configuration of databases (requires a CouchDB instance)
+
+## Local Development
+
+### 1. Switch to the working directory
+
+1. Switch to the working directory `cd apps-service`
+2. Copy the `.env.example` to the `.env`
+3. Change the values as needed, keeping the unneeded values as undefined
+
+### 2. Start Microservice
+
+Run `npm start` to run the microservice in a dev environment.
+
+To build the microservice, use `npm run build`.
+
+## Using docker-compose (Recommended)
+
+1. Follow the first 2 steps from above
+2. Then execute the following command to start a standalone instance of `apps-service`
+
+   ```bash
+   docker-compose up -d apps-service
+   ```
+
+   **Note:** Some features of the App Service might not work without the API Gateway.
+
+3. 
To start the entire cluster of microservices, use the following command

+   ```bash
+   docker-compose up -d api-gateway
+   ```
+
+## Running Tests
+
+```bash
+npm test
+```
+
+## Contributors:
+
+👤 **Mayur Deshmukh** [@deshmukhmayur](https://github.com/deshmukhmayur)
diff --git a/packages/apps-service/docs/onboarding-guide.md b/packages/apps-service/docs/onboarding-guide.md
new file mode 100644
index 000000000..e94e4ef3b
--- /dev/null
+++ b/packages/apps-service/docs/onboarding-guide.md
@@ -0,0 +1,14 @@
+---
+id: guides
+title: Guides
+slug: /apps-service/guides
+sidebar_label: Guides
+---
+
+Any developer can use the Apps Service to view the metadata for their apps, and to configure the One Platform Service integrations.
+
+To get started with the Apps Service, follow the steps below:
+
+1. Register your app on the [One Platform Developer Console](https://one.redhat.com/console)
+2. Set up an API Key for your app/project
+3. Use the API Key for authenticated access to the One Platform API Gateway and any of its APIs
diff --git a/packages/apps-service/docs/overview.md b/packages/apps-service/docs/overview.md
new file mode 100644
index 000000000..bc76c503a
--- /dev/null
+++ b/packages/apps-service/docs/overview.md
@@ -0,0 +1,28 @@
+---
+id: overview
+title: What is Apps Service
+slug: /apps-service
+sidebar_label: Overview
+---
+
+# Overview
+
+Apps Service is the indexing service which allows you to create and manage individual apps/projects on One Platform. It is the entrypoint to start developing and deploying your application on One Platform. Users can manage and set up access levels and permissions for their apps and integrations with the One Platform services.
+
+## Features
+
+1. Create and manage your project
+2. Set up access for your project
+3. Configure integrations with the One Platform Services
+
+## Service integrations
+
+The Apps Service also provides APIs to configure service integrations. 
Users can configure any of the following services using the Apps Service:

+1. Database
+2. Feedback
+3. Hosting
+4. Lighthouse
+5. Notifications
+6. Reporting
+7. Search, etc.
diff --git a/packages/apps-service/mkdocs.yml b/packages/apps-service/mkdocs.yml
new file mode 100644
index 000000000..3b5e3680b
--- /dev/null
+++ b/packages/apps-service/mkdocs.yml
@@ -0,0 +1,10 @@
+site_name: 'Apps Service'
+
+nav:
+  - Getting Started: index.md
+  - API Reference: api-reference.md
+  - Guide: onboarding-guide.md
+  - Overview: overview.md
+  - FAQs: faqs.md
+plugins:
+  - techdocs-core
diff --git a/packages/component-catalog-spa/catalog-info.yml b/packages/component-catalog-spa/catalog-info.yml
index 2b6db5d08..c617c0344 100644
--- a/packages/component-catalog-spa/catalog-info.yml
+++ b/packages/component-catalog-spa/catalog-info.yml
@@ -9,6 +9,7 @@ metadata:
   annotations:
     lighthouse.com/website-url: https://one.redhat.com/components
     github.com/project-slug: '1-Platform/one-platform'
+    backstage.io/techdocs-ref: dir:.
     servicenow.com/appcode: ONEP-001
     spaship.io/property-id: 'one-platform'
     spaship.io/app-id: 'components'
diff --git a/packages/component-catalog-spa/docs/index.md b/packages/component-catalog-spa/docs/index.md
new file mode 100644
index 000000000..66fe2b11c
--- /dev/null
+++ b/packages/component-catalog-spa/docs/index.md
@@ -0,0 +1,51 @@
+---
+id: component-lib-overview
+title: OP Components
+slug: /op-components
+sidebar_label: Overview
+---
+
+The One Platform component library is a collection of web components built with LitElement, with each component following the Red Hat brand guidelines. LitElement follows the Web Components standards, so the components will work with any framework.
+
+## Guides
+
+### Component creation
+
+1. Go to `https://github.com/1-Platform/op-components` and fork this repository
+
+2. Clone the forked repository
+
+```sh
+git clone git@github.com:1-Platform/op-components.git
+```
+
+3. Install Packages
+
+```sh
+npm i
+```
+
+4. 
To create a new package, run

+```sh
+npm run new
+```
+
+5. To run a component in dev mode, run
+
+```sh
+npm run dev
+```
+
+6. OP-Styles
+
+OP-Styles includes the common styles, which can be used in other components.
+Documentation Link
+
+### Quick Start Guide
+
+To use a component from the library, go to its package in the GitHub repo or its corresponding [npm package](https://1-platform.github.io/op-components).
+
+All of the packages are published on [npmjs](https://www.npmjs.com/org/one-platform)
+
+The usage for each component is described in its respective README, which is also available on npmjs. [(example)](https://www.npmjs.com/package/@one-platform/opc-footer)
diff --git a/packages/component-catalog-spa/mkdocs.yml b/packages/component-catalog-spa/mkdocs.yml
new file mode 100644
index 000000000..2b3bae858
--- /dev/null
+++ b/packages/component-catalog-spa/mkdocs.yml
@@ -0,0 +1,6 @@
+site_name: 'Component Catalog'
+
+nav:
+  - Getting Started: index.md
+plugins:
+  - techdocs-core
diff --git a/packages/feedback-service/catalog-info.yml b/packages/feedback-service/catalog-info.yml
index 98b48cb34..3434bff33 100644
--- a/packages/feedback-service/catalog-info.yml
+++ b/packages/feedback-service/catalog-info.yml
@@ -8,6 +8,7 @@ metadata:
   description: Microservice based GraphQL API Backend integrated with JIRA, GitHub and GitLab.
   annotations:
     github.com/project-slug: '1-Platform/one-platform'
+    backstage.io/techdocs-ref: dir:.
    servicenow.com/appcode: ONEP-004
   tags:
     - microservice
diff --git a/packages/feedback-service/docs/api-reference.md b/packages/feedback-service/docs/api-reference.md
new file mode 100644
index 000000000..6d2821254
--- /dev/null
+++ b/packages/feedback-service/docs/api-reference.md
@@ -0,0 +1,34 @@
+---
+id: api-ref
+title: API Reference
+slug: /feedback/api-ref
+sidebar_label: API Reference
+---
+
+# API Reference
+
+You can test drive the available APIs on the [QA testing playground](https://qa.one.redhat.com/api/graphql)
+
+Feedback Microservice provides a set of GraphQL Queries and Mutation APIs to allow developers to perform CRUD operations on their Feedback Configs and feedbacks.
+
+## Queries
+
+| Query | Description |
+| --- | --- |
+| listFeedbackConfigs | Returns a list of all feedback configs |
+| getFeedbackConfigById(id: ID!) | Returns a feedbackConfig with matching id |
+| getFeedbackConfigByAppId(appId: String) | Finds a feedbackConfig with the respective appId |
+| listFeedbacks(search: String, limit: Int, offset: Int, category: [FeedbackCategory], appId: [String], createdBy: String, status: FeedbackStatus, sortBy: FeedbackSortType) | Returns a list of feedbacks with pagination support. It can also filter the data by category, appId, createdBy and status |
+| getFeedbackById(id: ID!) | Returns the feedback with the matching id |
+
+## Mutations
+
+| Mutation | Description |
+| --- | --- |
+| createFeedbackConfig(payload: FeedbackConfigInput!) | Creates a new feedback configuration |
+| updateFeedbackConfig(id: ID, payload: FeedbackConfigInput!) 
| Modifies a feedback configuration |
+| deleteFeedbackConfig(id: ID!) | Deletes the feedback configuration with the matching id |
+| createFeedback(input: FeedbackInput!) | Creates a new feedback |
+| updateFeedback(id: ID!, input: FeedbackInput!) | Updates the feedback with the respective id |
+| deleteFeedback(id: ID!) | Deletes the feedback with the respective id |
+| updateFeedbackIndex | Updates and syncs the Feedback search index |
diff --git a/packages/feedback-service/docs/faqs.md b/packages/feedback-service/docs/faqs.md
new file mode 100644
index 000000000..e667e48d7
--- /dev/null
+++ b/packages/feedback-service/docs/faqs.md
@@ -0,0 +1,18 @@
+---
+id: faqs
+title: Feedback Service FAQs
+slug: /feedback/faqs
+sidebar_label: FAQs
+---
+
+# FAQ
+
+> How do I use the feedback web component in my SPA?
+
+The web component is published on npm, and a usage guide is included in the package documentation on npm. It supports all frameworks.
+
+[Refer NPM for more details](https://www.npmjs.com/package/@one-platform/opc-feedback)
+
+> How do I manage the preferences and configurations for feedback?
+
+Feedback has dedicated configuration management which stores the preferences for each feedback configuration. This is managed within the Developer Console for each application.
diff --git a/packages/feedback-service/docs/index.md b/packages/feedback-service/docs/index.md
new file mode 100644
index 000000000..f27101c39
--- /dev/null
+++ b/packages/feedback-service/docs/index.md
@@ -0,0 +1,41 @@
+# Feedback Microservice
+
+One Platform's Feedback GraphQL microservice supports managing feedback with JIRA/GitHub/GitLab.
+
+## Local Development
+
+### 1. Switch to the working directory
+
+1. Switch to the working directory `cd feedback-service`
+2. Copy the `.env.example` to the `.env`
+3. Change the values as needed, keeping the unneeded values as undefined
+
+### 2. Start Microservice
+
+1. 
Run `npm run build:dev` to generate a dev build and `npm run build` for a production build
+2. Run `npm start` to run the microservice in a dev environment
+
+## Using docker-compose (Recommended)
+
+1. Follow the first 2 steps from above
+2. Then execute the following command to start a standalone instance of `feedback-service`
+
+   ```bash
+   docker-compose up -d feedback-service
+   ```
+
+3. To start the entire cluster of microservices, use the following command
+
+   ```bash
+   docker-compose up -d api-gateway
+   ```
+
+## Running Tests
+
+```bash
+npm test
+```
+
+## Contributors:
+
+👤 **Rigin Oommen** [@riginoommen](https://github.com/riginoommen)
diff --git a/packages/feedback-service/docs/onboarding-guide.md b/packages/feedback-service/docs/onboarding-guide.md
new file mode 100644
index 000000000..3b9b25fb0
--- /dev/null
+++ b/packages/feedback-service/docs/onboarding-guide.md
@@ -0,0 +1,18 @@
+---
+id: guides
+title: Guides
+slug: /feedback/guides
+sidebar_label: Guides
+---
+
+## How do I use the Feedback component?
+
+The Feedback component is a UI element which can be integrated into Single Page Applications (SPAs). It is published on [npm](https://www.npmjs.com/package/@one-platform/opc-feedback) as a web component. You can use `npm`, `yarn` or `pnpm` to install it into your SPA. SPA-specific configurations are provided in the [opc-feedback](https://www.npmjs.com/package/@one-platform/opc-feedback) documentation.
+
+## What is Feedback SPA?
+
+Feedback SPA is the app which provides a consolidated view of the feedback received. Users can see the feedback collected from multiple sources such as JIRA, GitHub, GitLab and Email, along with the updated status and assignee. This helps users track their feedback more effectively. The Feedback App is deployed on [One Platform](https://one.redhat.com/feedback)
+
+## What is Feedback Microservice?
+
+Feedback Microservice is the integration layer that connects the data sources for CRUD operations in feedback management. It also provides the GraphQL APIs for managing the data. JIRA, Email, GitLab and GitHub integrations are currently supported.
diff --git a/packages/feedback-service/docs/overview.md b/packages/feedback-service/docs/overview.md
new file mode 100644
index 000000000..902704e9f
--- /dev/null
+++ b/packages/feedback-service/docs/overview.md
@@ -0,0 +1,28 @@
+---
+id: overview
+title: What is Feedback Service
+slug: /feedback
+sidebar_label: Overview
+---
+
+# Overview
+
+The feedback ecosystem enables One Platform to communicate collectively with users, developers and stakeholders. This ecosystem is integrated with JIRA, GitLab and GitHub instances.
+
+1. Feedback Web Component
+
+   Users can share feedback using a plug-and-play web component published on [npmjs](https://www.npmjs.com/package/@one-platform/opc-feedback). This web component is compatible with any web framework.
+
+2. Feedback SPA
+
+   Feedback SPA provides a consolidated UI showing the feedback submitted by users along with its current status.
+
+3. Feedback Microservice
+
+   This is the core of the feedback ecosystem, powering the integrations and the APIs.
+
+## Features
+
+1. Consolidated view of the existing feedback
+2. Plug & Play web component for collecting feedback
+3. Integrations with JIRA, GitLab and GitHub are available.
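As a rough sketch of how the microservice's documented `listFeedbacks` query can be called from JavaScript (the `limit`/`offset` arguments come from the API reference; the selected fields are illustrative assumptions — verify them against the GraphQL schema in the playground):

```javascript
// Sketch: build a request for the documented listFeedbacks query.
// The limit/offset arguments are documented; the selected fields
// (summary, status) are illustrative assumptions -- verify them
// against the GraphQL schema in the playground.
const LIST_FEEDBACKS = `
  query ListFeedbacks($limit: Int, $offset: Int) {
    listFeedbacks(limit: $limit, offset: $offset) {
      summary
      status
    }
  }
`;

// Build fetch() options for a GraphQL POST request with pagination.
function buildListFeedbacksRequest(limit, offset) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: LIST_FEEDBACKS,
      variables: { limit, offset },
    }),
  };
}
```

Pass the returned options to `fetch()` against the API Gateway's GraphQL endpoint, adding your Authorization header as described in the guides.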
diff --git a/packages/feedback-service/mkdocs.yml b/packages/feedback-service/mkdocs.yml
new file mode 100644
index 000000000..9a9ad4e8a
--- /dev/null
+++ b/packages/feedback-service/mkdocs.yml
@@ -0,0 +1,11 @@
+site_name: 'Feedback Service'
+
+nav:
+  - Getting Started: index.md
+  - API Reference: api-reference.md
+  - Guide: onboarding-guide.md
+  - Overview: overview.md
+  - FAQs: faqs.md
+
+plugins:
+  - techdocs-core
diff --git a/packages/lighthouse-service/catalog-info.yml b/packages/lighthouse-service/catalog-info.yml
index a12c92c78..9fda914d3 100644
--- a/packages/lighthouse-service/catalog-info.yml
+++ b/packages/lighthouse-service/catalog-info.yml
@@ -8,6 +8,7 @@ metadata:
   description: Microservice based GraphQL API Backend ecosystem for Lighthouse.
   annotations:
     github.com/project-slug: '1-Platform/one-platform'
+    backstage.io/techdocs-ref: dir:.
     servicenow.com/appcode: ONEP-008
   tags:
     - microservice
diff --git a/packages/lighthouse-service/docs/api-reference.md b/packages/lighthouse-service/docs/api-reference.md
new file mode 100644
index 000000000..380aa6cba
--- /dev/null
+++ b/packages/lighthouse-service/docs/api-reference.md
@@ -0,0 +1,6 @@
+---
+id: api-ref
+title: API Reference
+slug: /lighthouse/api-ref
+sidebar_label: API Reference
+---
diff --git a/packages/lighthouse-service/docs/faqs.md b/packages/lighthouse-service/docs/faqs.md
new file mode 100644
index 000000000..6be3a46a6
--- /dev/null
+++ b/packages/lighthouse-service/docs/faqs.md
@@ -0,0 +1,20 @@
+---
+id: faqs
+title: Lighthouse FAQs
+slug: /lighthouse/faqs
+sidebar_label: FAQs
+---
+
+# FAQ
+
+> Can I get a weekly report about our project’s progress?
+
+The Lighthouse service doesn’t support this feature yet. We provide the ability to export the data from the SPA. We will soon work on this with our new reporting service.
+
+> Why does the performance score vary in my Lighthouse audits?
+
+Performance is a relative metric, as it depends on the auditing server. 
To reduce this variance, Lighthouse executes the audit multiple times and averages the results. Some variation will still occur, due to server load.
+
+> How do I audit an app if it's protected by a login?
+
+In the Lighthouse configuration, you can set up a Puppeteer script to bypass the login.
diff --git a/packages/lighthouse-service/docs/index.md b/packages/lighthouse-service/docs/index.md
new file mode 100644
index 000000000..d13b9ab58
--- /dev/null
+++ b/packages/lighthouse-service/docs/index.md
@@ -0,0 +1,25 @@
+Lighthouse Microservice
+=================================================
+
+One Platform's Lighthouse Microservice enables auditing a web property with the support of Lighthouse CI.
+
+Switch to the working directory
+------------
+
+ `cd lighthouse-service`.
+
+Copy Certificates
+------------
+
+ 1. Copy the SSL paths to the `.env` file of the `lighthouse` microservice.
+
+Start Microservice
+------------
+
+ 1. Run `npm run build:dev` to generate a dev build and `npm run build` for a production build.
+ 2. Run `npm start` to run the microservice in a dev environment.
+
+Testing
+------------
+
+ 1. Run `npm test` to run default tests.
diff --git a/packages/lighthouse-service/docs/onboarding-guide.md b/packages/lighthouse-service/docs/onboarding-guide.md
new file mode 100644
index 000000000..49a71f907
--- /dev/null
+++ b/packages/lighthouse-service/docs/onboarding-guide.md
@@ -0,0 +1,114 @@
+---
+id: guides
+title: Guides
+slug: /lighthouse/guides
+sidebar_label: Guides
+---
+
+# Onboarding Guides
+
+## Build Tokens and Admin Tokens
+
+LHCI has two built-in authentication mechanisms enabled by default: the build token and the admin token.
+
+The build token allows a user to upload new data to a particular project, but does not allow the destruction or editing of any historical data. If your project is open source and you want to collect Lighthouse data on pull requests from external contributors, then you should consider the build token to be public knowledge.
+
+The admin token allows a user to edit or delete data from a particular project. The admin token should only be shared with trusted users and never placed in the CI environment, even in open source projects with external contributors. Anyone with the admin token can delete the entire project's data.
+
+All other actions on the server, including listing projects, viewing project and build data, and creating new projects, are open to anyone with HTTP access. If you'd like to protect these actions, see the other two authentication mechanisms.
+
+If you forget either of these tokens, you will need to connect directly to the storage of the server to reset them using the lhci wizard command.
+
+## Registering a property with Lighthouse CI
+
+To register a web property, go through the interactive steps of the `lhci wizard` command.
+
+E.g.:
+
+```bash
+➜ ~ lhci wizard
+? Which wizard do you want to run? new-project
+? What is the URL of your LHCI server? https://lighthouse.one.redhat.com
+? What would you like to name the project? platforms
+? Where is the project's code hosted? https://github.com//
+? What branch is considered the repo's trunk or main branch? master
+Created project platforms (cee828ca-2531-4b5e-87d8-cb9ba2ca2d4c)!
+Use build token c26c97e3-82fe-4e5f-a49b-004761a7eed4 to add data.
+Use admin token 2alsM7lA7cl9WQav0R1tf92yASNVR78CoVdSJMNp to manage data. KEEP THIS SECRET!
+➜ ~
+```
+
+After obtaining the build and admin tokens, we can upload and manage the Lighthouse reports on the Lighthouse CI server.
+
+## Setting up the auditing configuration
+
+Lighthouse CI configuration can be managed through a config file, environment variables, and CLI flag overrides. 
Lighthouse CI will automatically look for a configuration file in the current working directory in the following priority order: + +- .lighthouserc.js +- lighthouserc.js +- .lighthouserc.json +- lighthouserc.json +- .lighthouserc.yml +- lighthouserc.yml +- .lighthouserc.yaml +- lighthouserc.yaml + +The structure of the config file is segmented by command. Any options you see for a particular command can be set by the property of the same name in the config file. + +### Collect + +Runs Lighthouse n times and stores the LHRs in a local `.lighthouseci/` folder. + +| Options | Description | +| ----------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| method | The method used to run Lighthouse. There are currently two options available, "node" which runs Lighthouse locally via node, and "psi" which runs Lighthouse by making a request to the PageSpeed Insights API. The PageSpeed Insights method has the major limitation that only sites publicly available over the internet can be tested and no other collection options will be respected. | +| headful | Boolean that controls whether Chrome is launched in headless or headful mode. This flag does not apply when using puppeteerScript. | +| additive | Boolean that controls whether the .lighthouseci directory is cleared before starting. By default, this directory is cleared on every invocation of lhci collect to start fresh. | +| url | An array of the URLs that you'd like Lighthouse CI to collect results from. | +| staticDistDir | The path to the directory where the project's productionized static assets are kept. 
Lighthouse CI uses this to spin up a static server on your behalf that will be used to load your site. | +| isSinglePageApplication | Boolean that controls whether the static server started in staticDistDir should act like a single-page application that serves index.html instead of a 404 for unrecognized paths. This flag has no function when staticDistDir is not set. | +| chromePath | The path of the Chrome executable to use for puppeteerScript and running Lighthouse | +| puppeteerScript | An optional path to a JavaScript file that exports a function that uses puppeteer to login to your page, setup cache data, or otherwise manipulate the browser before Lighthouse is run. | +| puppeteerLaunchOptions | An object of options to pass to puppeteer's launch method. Only used when puppeterScript is set. | +| psiApiKey | The API key to use for making PageSpeed Insights requests. Required if using method=psi. You can obtain a PSI API key from Google APIs. | +| psiApiEndpoint | The API endpoint to hit for making a PageSpeed Insights request. It is very unlikely you should need to use this option. Only use this if you have self-hosted a custom version of the PSI API. | +| psiStrategy | Use this option to change the strategy to use for PageSpeed Insights runner method. Use mobile or desktop. The default value is mobile. | +| startServerCommand | The shell command to use to start the project's webserver. LHCI will use this command to start the server before loading the urls and automatically shut it down once collection is complete. | +| startServerReadyPattern | The regex pattern to look for in the server command's output before considering the server ready for requests. Only used when startServerCommand is set. | +| startServerReadyTimeout | The maximum amount of time in milliseconds to wait for startServerCommand to print the startServerReadyPattern before continuing anyway. Only used when startServerCommand is set. 
| +| settings | The Lighthouse CLI flags to pass along to Lighthouse. This can be used to change configuration of Lighthouse itself.
E.g.: Port, auditMode, gatherMode, output, outputPath, channel, cli-flags-path |
+| |
+| numberOfRuns | The number of times to collect Lighthouse results on each URL. This option helps mitigate fluctuations due to natural page variability. |
+
+### Assert
+
+Asserts the conditions in the Lighthouse CI config and exits with the appropriate status code if there were any failures.
+
+| Options | Description |
+| --- | --- |
+| preset | The assertions preset to extend [choices: "lighthouse:all", 
"lighthouse:recommended", "lighthouse:no-pwa"]
**lighthouse:all** - Asserts that every audit received a perfect score. This is extremely difficult to do. Only use as a base on very high quality, greenfield projects and lower the thresholds as needed.
**lighthouse:recommended** - Asserts that every audit outside performance received a perfect score, that no resources were flagged for performance opportunities, and warns when metric values drop below a score of 90. This is a more realistic base that disables hard failures for flaky audits.
**lighthouse:no-pwa** - lighthouse:recommended but without any of the PWA audits enabled. | +| assertions | The assertions to use
**Categories** - The score of any category in Lighthouse can also be asserted. Assertions are keyed by categories: `categoryId` and follow the same eslint-style format as audit assertions. Note that this just affects the category score and will not affect any assertions on individual audits within the category.
**Levels** - There are three Lighthouse CI assertion levels [choices: "off", "warn", "error"].
**Properties** - The score, details.items.length, and numericValue properties of audit results can all be checked against configurable thresholds. Use minScore, maxLength, and maxNumericValue properties, respectively, in the options object to control the assertion.
**Aggregation Methods** - There are multiple strategies for aggregating the results before asserting the threshold.
**Median** - Use the median value from all runs.
**Optimistic** - Use the value that is most likely to pass from all runs.
**Pessimistic** - Use the value that is least likely to pass from all runs.
**Median-Run** - Use the value of the run that was determined to be "most representative" of all runs based on key performance metrics. Note that this differs from median because the audit you're asserting might not be the performance metric that was used to select the median-run.
**User Timings** - Your custom user timings using performance.mark and performance.measure can be asserted against as well. | +| assertMatrix | can be used to assert against multiple URLs at the same time. When checking the results of runs against multiple URLs, different assertions can be made for different URL patterns. | +| budgetsFile | Instead of configuring using Lighthouse CI assertions against Lighthouse audits, a budget.json file can be used instead. | +| includePassedAssertions | Boolean that controls whether passed assertions should be included in the output. | + +### Upload + +Saves the runs in the `.lighthouseci/` folder to desired target and sets a GitHub status check when the GitHub token is available. + +| Options | Description | +| --------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| target | The type of target to upload the data to. If set to anything other than "lhci", some of the options will not apply.
When to use **target=temporary-public-storage:**
- You want to set up Lighthouse CI as quickly as possible without any costs.
- You're OK with your reports being available to anyone on the internet with the link.
- You're OK with your reports being automatically deleted after a few days.
- You're OK with your reports being stored on GCP Cloud Storage.
When to use **target=lhci:**
- You want to store Lighthouse reports for longer than a few days.
- You want to control access to your Lighthouse reports.
- You've set up a Lighthouse CI server.
When to use **target=filesystem:**
- You want to process the raw Lighthouse results yourself locally.
- You want access to the report files on the local filesystem.
- You don't want to upload the results to a custom location that isn't supported by Lighthouse CI. |
+| token | The build token for your Lighthouse CI project. Required when using target=lhci. This token should be given to you by lhci wizard --wizard=new-project. If you've forgotten your token, connect directly to your server and run lhci wizard --wizard=reset-build-token. |
+| ignoreDuplicateBuildFailure | Boolean flag that controls whether upload failures due to duplicate build hashes should be ignored. |
+| githubToken | The GitHub token to use when setting a status check on a GitHub PR. Use this when the project is hosted on GitHub and not using the official GitHub App. |
+| githubApiHost | The GitHub API host to use when attempting to set a status check. Use this when the project is hosted on a private GitHub enterprise server and not using the public GitHub API. |
+| githubAppToken | The GitHub App token returned when installing the GitHub App. Use this to set status checks on GitHub PRs when using the official GitHub App. |
+| githubStatusContextSuffix | The suffix to use when setting the status check on a GitHub PR. |
+| extraHeaders | A map of additional headers to add to the requests made to the LHCI server. Useful for adding bespoke auth tokens. |
+| basicAuth | An object containing a username and password pair for authenticating with a Basic auth protected LHCI server. Use this setting when you've protected your LHCI server with Basic auth. |
+| serverBaseUrl | The base URL of the LHCI server to upload to. Required when using target=lhci. |
+| uploadUrlMap | Boolean that controls whether to update the latest build in temporary public storage associated with this repo. If you use master as your default branch, DO NOT use this option. If you don't use master as your default branch, set this option when you upload results from your actual default branch. 
| +| urlReplacementPatterns | A list of replacement patterns that will mask differences in tested URLs that you wish to hide for display or treat as the same. The urlReplacementPatterns are used to identify the same URLs for diff comparisons and as preprocessing for GitHub status check labels. | +| outputDir | The directory relative to the current working directory in which to output a manifest.json along with the Lighthouse reports collected. | +| reportFilenamePattern | The pattern to use for report filenames when writing the reports to the filesystem. | diff --git a/packages/lighthouse-service/docs/overview.md b/packages/lighthouse-service/docs/overview.md new file mode 100644 index 000000000..dc92d4fea --- /dev/null +++ b/packages/lighthouse-service/docs/overview.md @@ -0,0 +1,32 @@ +--- +id: overview +title: What is Lighthouse +slug: /lighthouse +sidebar_label: Overview +--- + +# Overview + +Lighthouse is an open-source tool developed by Google to measure the quality of web pages. Google Lighthouse audits the performance, accessibility, and search engine optimization of web pages. One Platform Lighthouse is a hosted Lighthouse service provided for developers to audit their SPAs. One Platform also offers an additional SPA to access your scores and compare them with others in your organization. + +1. [Lighthouse CI Server](https://lighthouse.one.redhat.com) + + The Lighthouse CI enables running Lighthouse from various CI environments like GitLab, Jenkins, GitHub, etc. + +2. [Lighthouse SPA](https://one.redhat.com/lighthouse) + + Lighthouse SPA is an app built on top of Lighthouse CI that offers a consolidated view of the Lighthouse reports. It also includes a leaderboard where web properties are ranked by their Lighthouse scores. + +## Features + +1. Leaderboard to compare your project's score with others in your organization +2. A consolidated dashboard that shows your project's progress +3. Get your project's latest score +4.
Get an insight into your project's score compared to others +5. Export scores into CSV +6. Live playground to audit a webpage on the fly +7. Lighthouse server for more complex visualization + +## Lighthouse CI Server + +Lighthouse CI Server is a hosted service where developers can submit their reports. It's a centralized unit where all the reports get aggregated and later visualized for more advanced insights. Developers execute audits using the Lighthouse CLI and then submit them to the hosted service. You can visit the One Platform hosted Lighthouse server here. diff --git a/packages/lighthouse-service/mkdocs.yml b/packages/lighthouse-service/mkdocs.yml new file mode 100644 index 000000000..7a69c9a9b --- /dev/null +++ b/packages/lighthouse-service/mkdocs.yml @@ -0,0 +1,11 @@ +site_name: 'Lighthouse Service' + +nav: + - Getting Started: index.md + - API Reference: api-reference.md + - Guide: onboarding-guide.md + - Overview: overview.md + - FAQs: faqs.md + +plugins: + - techdocs-core diff --git a/packages/opc-base/catalog-info.yml b/packages/opc-base/catalog-info.yml index 3c763ff16..4e660b8a2 100644 --- a/packages/opc-base/catalog-info.yml +++ b/packages/opc-base/catalog-info.yml @@ -8,6 +8,7 @@ metadata: description: An npm package that provides authentication, toast notification, injecting logic into opc components like opc-nav drawer components etc. annotations: github.com/project-slug: '1-Platform/one-platform' + backstage.io/techdocs-ref: dir:. tags: - library - developers diff --git a/packages/opc-base/docs/index.md b/packages/opc-base/docs/index.md new file mode 100644 index 000000000..4eae5ad43 --- /dev/null +++ b/packages/opc-base/docs/index.md @@ -0,0 +1,76 @@ +# Opc-Base + +Opc-Base is an npm package that provides authentication, toast notifications, and logic injection into opc components like `opc-nav`, drawer components, etc.
+ +## Getting Started + +Copy the `config.example.js` file in the `dev` folder, rename it to `config.js`, and provide the configuration values needed to load the variables. + +To test the ES module build, copy the `config.es.example.js` file in the `dev` folder, rename it to `config.es.js`, and provide the configuration values needed to load the variables. + +### Installation + +```sh + npm install +``` + +### Development Setup + +To start the live build with Rollup: + +```sh + npm run build:watch +``` + +To start the development server: + +```sh + npm run dev +``` + +## Docs + +1. [opc-base](https://github.com/1-Platform/one-platform/tree/master/packages/opc-base/docs/opc-base.md) +2. [opc-provider](https://github.com/1-Platform/one-platform/tree/master/packages/opc-base/docs/opc-provider.md) + +## Build + +This project uses TypeScript with Rollup for bundling. + +To build the project: + +```bash +npm run build +``` + +To watch for changes and rebuild the files: + +```bash +npm run build:watch +``` + +To preview the changes you made, the project uses [@web-dev-server](https://modern-web.dev/docs/dev-server/overview/): + +```bash +npm run dev +``` + +## Testing + +Testing is done using [@web/test-runner](https://modern-web.dev/docs/test-runner/overview/) and [@open-wc/testing](https://open-wc.org/docs/testing/testing-package/). Remember to build the package first, as some tests run against the build output due to dependency import issues.
+ +To run test: + +```bash +npm run test +``` + +For local testing during development + +```bash +npm run test:watch +``` + +## Contributors + +👤 **Akhil Mohan** ([@akhilmhdh](https://github.com/akhilmhdh)) diff --git a/packages/opc-base/mkdocs.yml b/packages/opc-base/mkdocs.yml new file mode 100644 index 000000000..8108ab530 --- /dev/null +++ b/packages/opc-base/mkdocs.yml @@ -0,0 +1,9 @@ +site_name: 'Opc Base' + +nav: + - Getting Started: index.md + - Opc Base: opc-base.yml + - Opc Provider: opc-provider.yml + +plugins: + - techdocs-core diff --git a/packages/reverse-proxy-service/catalog-info.yml b/packages/reverse-proxy-service/catalog-info.yml index fcb94df43..6deddeb88 100644 --- a/packages/reverse-proxy-service/catalog-info.yml +++ b/packages/reverse-proxy-service/catalog-info.yml @@ -8,6 +8,7 @@ metadata: description: Reverse-proxy/authentication layer for some internal services and APIs. annotations: github.com/project-slug: '1-Platform/one-platform' + backstage.io/techdocs-ref: dir:. servicenow.com/appcode: ONEP-001 tags: - reverse-proxy diff --git a/packages/reverse-proxy-service/docs/index.md b/packages/reverse-proxy-service/docs/index.md new file mode 100644 index 000000000..52f45ba86 --- /dev/null +++ b/packages/reverse-proxy-service/docs/index.md @@ -0,0 +1,13 @@ +# Reverse Proxy + +A simple express server that acts as a reverse-proxy/authentication layer for some internal services and APIs. + +Currently, the reverse-proxy contains middleware rules for: + +- CouchDB: An open-source document-oriented NoSQL database. +- Keycloak Auth: An auth middleware to apply Keycloak SSO Auth to some restricted URLs +- A no-cors proxy middleware: Used for API Catalog + +## License + +This sub-package, like it's parent monorepository, is licensed under [MIT License](../../LICENSE). 
diff --git a/packages/reverse-proxy-service/mkdocs.yml b/packages/reverse-proxy-service/mkdocs.yml new file mode 100644 index 000000000..fb703262e --- /dev/null +++ b/packages/reverse-proxy-service/mkdocs.yml @@ -0,0 +1,7 @@ +site_name: 'Reverse Proxy Service' + +nav: + - Getting Started: index.md + +plugins: + - techdocs-core diff --git a/packages/search-service/catalog-info.yml b/packages/search-service/catalog-info.yml index 7cf8abb0b..c450771fa 100644 --- a/packages/search-service/catalog-info.yml +++ b/packages/search-service/catalog-info.yml @@ -8,6 +8,7 @@ metadata: description: Microservice based GraphQL API Backend ecosystem for Search. annotations: github.com/project-slug: '1-Platform/one-platform' + backstage.io/techdocs-ref: dir:. servicenow.com/appcode: ONEP-001 tags: - microservice diff --git a/packages/search-service/docs/api-reference.md b/packages/search-service/docs/api-reference.md new file mode 100644 index 000000000..d5b84aa3e --- /dev/null +++ b/packages/search-service/docs/api-reference.md @@ -0,0 +1,34 @@ +--- +id: api-ref +title: API Reference +slug: /search/api-ref +sidebar_label: API Reference +--- + + +# API Reference + +You can test drive the available APIs on the [QA testing playground](https://qa.one.redhat.com/api/graphql). + +Search Microservice provides a set of GraphQL Query and Mutation APIs to allow developers to manage the data in Solr. + +## Queries + +| Query | Description | +| -------------------- | -------------------- | +| listSearchMap | Returns a list of all search map configs | +| getSearchMap(id: String!)
| Returns a search map with the matching id | +| getSearchMapsByApp(appId: String) | Finds a search map config with the given appId | +| triggerSearchMap | Triggers a search indexing job with the map config | +| search(query: String, start:Int, rows: Int) | Search endpoint for the Solr interface | + +## Mutations + +| Mutation | Description | +| -------------------- | -------------------- | +| createSearchMap(appId: String!, searchMap: CreateSearchMapInput!) | Creates a new Search Map configuration | +| updateSearchMap(appId: String!, searchMap: UpdateSearchMapInput!) | Modifies a Search Map configuration | +| deleteSearchMap(id: String!) | Deletes a Search Map configuration with the matching id | +| manageIndex(input: SearchInput, mode: String!) | Single endpoint to manage search index creation, update & deletion | +| createUpdateIndex(input: SearchInput) | Creates or updates a search index | +| deleteIndex(id: ID!) | Deletes the search index with the matching id | diff --git a/packages/search-service/docs/faqs.md b/packages/search-service/docs/faqs.md new file mode 100644 index 000000000..5324d590f --- /dev/null +++ b/packages/search-service/docs/faqs.md @@ -0,0 +1,12 @@ +--- +id: faqs +title: Search Service FAQs +slug: /search/faqs +sidebar_label: FAQs +--- + +# FAQs + +> Can this index static pages? + +Currently it doesn't support indexing static pages; we are working on building this feature.
diff --git a/packages/search-service/docs/index.md b/packages/search-service/docs/index.md new file mode 100644 index 000000000..832be5be0 --- /dev/null +++ b/packages/search-service/docs/index.md @@ -0,0 +1,25 @@ +Search Microservice +================================================= + +The Search microservice is one of the major pillars of One Platform, returning results for a given search term. This microservice provides the functionality of searching, indexing, and deletion of data from Apache Solr. + +Switch to the working directory +------------ + + `cd search-service`. + +Install Dependencies +------------ + + 1. Execute `npm install` for installing `node` dependencies. + +Start Microservice +------------ + + 1. Run `npm run build:dev` to generate a build for the dev env and `npm run build` for a production build. + 2. Run `npm start` to run your microservice in the dev env. + +Testing +------------ + + 1. Run `npm test` to run default tests. diff --git a/packages/search-service/docs/onboarding-guide.md b/packages/search-service/docs/onboarding-guide.md new file mode 100644 index 000000000..e26758bcb --- /dev/null +++ b/packages/search-service/docs/onboarding-guide.md @@ -0,0 +1,12 @@ +--- +id: guides +title: Guides +slug: /search/guides +sidebar_label: Guides +--- + +## How Search Indexing and Updating Works + +With the support of Hydra APIs, we have created a GraphQL API ecosystem to manage the data in Solr. + +To index or update information, push your data with the createUpdateIndex API. To delete data, use the deleteIndex API. For more information, please check the API Reference.
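The indexing flow above can be sketched as a pair of GraphQL operations. This is an illustrative sketch only: the `SearchInput` fields and the selection sets shown here are assumptions, so introspect the schema on the QA playground for the real shapes.

```graphql
# Illustrative only — the input fields and returned selection sets
# are assumed, not taken from the actual schema.
mutation IndexDocument {
  createUpdateIndex(input: { id: "docs-home", title: "Getting Started", content: "Search service setup guide" }) {
    id
  }
}

mutation RemoveDocument {
  deleteIndex(id: "docs-home") {
    id
  }
}
```

Running `IndexDocument` again with the same `id` would update the existing index entry rather than create a duplicate, which is why a single `createUpdateIndex` endpoint covers both cases.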
diff --git a/packages/search-service/docs/overview.md b/packages/search-service/docs/overview.md new file mode 100644 index 000000000..2bfca8b0f --- /dev/null +++ b/packages/search-service/docs/overview.md @@ -0,0 +1,21 @@ +--- +id: overview +title: What is Search Service +slug: /search +sidebar_label: Overview +--- + +# Overview + +Search Service powers search across One Platform, making its data searchable. Search is integrated with Solr via Hydra APIs, and One Platform apps push their data to Solr through the platform's GraphQL APIs. + +## Features + +1. Auto Crawler with a Data Mapper for search +2. API integrations with Hydra and Solr +3. GraphQL APIs for the search integrations +4. Data Management Capabilities + +## Auto Crawling & Data Mapper + +Search Service can parse a custom endpoint and push the data to Solr with the required formatting applied. With the help of the developer console, you can configure the search crawling settings. diff --git a/packages/search-service/mkdocs.yml b/packages/search-service/mkdocs.yml new file mode 100644 index 000000000..eedb0a00d --- /dev/null +++ b/packages/search-service/mkdocs.yml @@ -0,0 +1,11 @@ +site_name: 'Search Service' + +nav: + - Getting Started: index.md + - API Reference: api-reference.md + - Guide: onboarding-guide.md + - Overview: overview.md + - FAQs: faqs.md + +plugins: + - techdocs-core diff --git a/packages/user-group-service/catalog-info.yml b/packages/user-group-service/catalog-info.yml index 8a3362174..d7c6a91f2 100644 --- a/packages/user-group-service/catalog-info.yml +++ b/packages/user-group-service/catalog-info.yml @@ -8,6 +8,7 @@ metadata: description: Microservice based GraphQL API Backend ecosystem for User Group information integrated with the Rover. annotations: github.com/project-slug: '1-Platform/one-platform' + backstage.io/techdocs-ref: dir:.
servicenow.com/appcode: ONEP-001 tags: - microservice diff --git a/packages/user-group-service/docs/api-reference.md b/packages/user-group-service/docs/api-reference.md new file mode 100644 index 000000000..0c08e429b --- /dev/null +++ b/packages/user-group-service/docs/api-reference.md @@ -0,0 +1,30 @@ +--- +id: api-ref +title: API Reference +slug: /user-group/api-ref +sidebar_label: API Reference +--- + +# API Reference + +You can test drive the available APIs on the [QA testing playground](https://qa.one.redhat.com/api/graphql). + +Some of the queries and mutations provided by the User Groups Service are: + +## Queries + +| Query | Description | +| -------------------- | -------------------- | +| getUsersBy(uid: String, rhatUUID: String) | Returns user details for the given `uid` or `rhatUUID` | +| listUsers | Returns all the users from the local cache | +| searchRoverUsers( ldapfield: ldapFieldType, value: String, cacheUser: Boolean ) | Searches users based on criteria from Rover and optionally caches them into the local cache | +| group(cn: String!) | Returns group details and members of an LDAP / Rover group | + +## Mutations + +| Mutation | Description | +| -------------------- | -------------------- | +| addUser(input: UserInput!) | Creates a new User | +| addUserFromRover(uid: String!) | Fetches user information from LDAP / Rover and adds it to the cache DB | +| updateUser(input: UserInput) | Updates user information | +| deleteUser(\_id: String!)
| Deletes the user matching the provided `_id` | diff --git a/packages/user-group-service/docs/faqs.md b/packages/user-group-service/docs/faqs.md new file mode 100644 index 000000000..cabf67fde --- /dev/null +++ b/packages/user-group-service/docs/faqs.md @@ -0,0 +1,12 @@ +--- +id: faqs +title: User Group FAQs +slug: /user-group/faqs +sidebar_label: FAQs +--- + +# FAQ + +> Does the User Group Service require an LDAP / Rover? + +No, the User Group Service can work without an LDAP / Rover instance, by using the Cache DB as the primary data source. diff --git a/packages/user-group-service/docs/index.md b/packages/user-group-service/docs/index.md new file mode 100644 index 000000000..b1cc028ea --- /dev/null +++ b/packages/user-group-service/docs/index.md @@ -0,0 +1,26 @@ +# User Group Microservice + +One Platform's server-side User Group GraphQL microservice. This microservice enables us to talk to the user database and third-party user data sources like LDAP (Lightweight Directory Access Protocol). It also provides the endpoint that serves the API requests. + +## Switch to the working directory + +`cd user-group-service` + +Copy Certificates + +--- + +1. Copy the SSL paths to the `.env` file of the `user` microservice. + +Start Microservice: + +--- + +1. Run `npm run build:dev` to generate a build for the dev env and `npm run build` for a production build. +2. Run `npm start` to run your microservice in the dev env. + +Testing: + +--- + +1. Run `npm test` to run default tests.
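As a quick illustration of the queries listed in the API Reference, a `getUsersBy` lookup might look like the following. The selected fields here are assumptions — introspect the schema on the QA playground for the actual ones.

```graphql
# Hypothetical selection set — verify field names against the real schema.
query GetUser {
  getUsersBy(uid: "jdoe") {
    uid
    name
  }
}
```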
diff --git a/packages/user-group-service/docs/onboarding-guide.md b/packages/user-group-service/docs/onboarding-guide.md new file mode 100644 index 000000000..1698eeb1c --- /dev/null +++ b/packages/user-group-service/docs/onboarding-guide.md @@ -0,0 +1,6 @@ +--- +id: guides +title: Guides +slug: /user-group/guides +sidebar_label: Guides +--- diff --git a/packages/user-group-service/docs/overview.md b/packages/user-group-service/docs/overview.md new file mode 100644 index 000000000..67318b377 --- /dev/null +++ b/packages/user-group-service/docs/overview.md @@ -0,0 +1,22 @@ +--- +id: overview +title: What is User Group Service +slug: /user-group +sidebar_label: Overview +--- + +# Overview + +The User Group service acts as the primary pillar for obtaining enterprise user information for One Platform. This service uses organizational data sources like Rover/LDAP. + +## Features + +1. View user details such as user information, LDAP groups, etc. +2. Create and manage User Groups (independent of LDAP groups) +3. Create and manage Service Accounts / API Keys + +## Data sources + +User Groups Service uses LDAP / Rover as the primary data source for syncing user data. Instead of duplicating the entire user directory from LDAP / Rover, however, it stores the data on a per-request basis: whenever someone requests a user that does not exist in the service's local cache, it fetches the data from LDAP / Rover. + +At the same time, to keep the cache up-to-date, it periodically checks for any updates to the user data and syncs the cache with LDAP / Rover.
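The request-based caching behaviour described above can be sketched in a few lines of JavaScript. This is a simplified illustration, not the service's actual implementation: the in-memory `Map` stands in for the cache DB, and `fetchFromRover` for the real LDAP / Rover client.

```javascript
// Simplified read-through cache sketch (not the actual service code).
const cache = new Map(); // stands in for the cache DB

// Placeholder for the real LDAP / Rover lookup.
async function fetchFromRover(uid) {
  return { uid, name: `User ${uid}` };
}

async function getUser(uid) {
  // Serve from the local cache when the user was requested before...
  if (cache.has(uid)) return cache.get(uid);
  // ...otherwise fall back to LDAP / Rover and store the result on demand.
  const user = await fetchFromRover(uid);
  cache.set(uid, user);
  return user;
}
```

A periodic job that re-runs the Rover lookup for every cached `uid` would play the role of the sync described in the last paragraph.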
diff --git a/packages/user-group-service/mkdocs.yaml b/packages/user-group-service/mkdocs.yaml new file mode 100644 index 000000000..0def8276a --- /dev/null +++ b/packages/user-group-service/mkdocs.yaml @@ -0,0 +1,11 @@ +site_name: 'User Group Service' + +nav: + - Getting Started: index.md + - API Reference: api-reference.md + - Guide: onboarding-guide.md + - Overview: overview.md + - FAQs: faqs.md + +plugins: + - techdocs-core