
Commit

[RELENG-7422] 📝 Add documentation
gaspardmoindrot committed Jun 21, 2023
1 parent 6b51af2 commit fc92ced
Showing 2 changed files with 21 additions and 17 deletions.
26 changes: 15 additions & 11 deletions docs/metrics-analysis-prometheus/collected-reported-metrics.md
@@ -88,19 +88,23 @@ We aim to obtain a result that is close to reality, within a range of
approximately +/- 5%, for data visualization purposes.
Key points to consider for retrieving cost information:

- RAM and CPU Costs : provided values for RAM and CPU expenses, can be found
in the Google Cloud documentation.
- Storage Costs : provided values for storage expenses, can be found in the
Google Cloud documentation.
- Bandwidth Cost: Directly determining the cost of bandwidth is not feasible.

To arrive at an approximate cost, we conducted an analysis of previous invoices
and calculated the additional expenses incurred due to bandwidth, which averaged
around 30% per month. With this information, we were able to approximate the
overall cost using the following formula:
- RAM and CPU Costs: the cost per minute for RAM and CPU can
be found in the documentation of the respective cloud provider.
- Storage Costs: the cost per minute for storage can
be found in the documentation of the respective cloud provider.
- Bandwidth Cost: Directly determining the cost per minute of bandwidth is
not feasible.

Calculating the bandwidth cost per minute is left to the discretion of the
user and will vary depending on the workload. As an example, we arrived at
an extra 30% by comparing the values in the documentation of different
cloud providers (for CPU, RAM, and storage) with the actual values on our
invoices. Using this information, we were able to estimate the overall
cost with the following formula:
(all costs are per minute)

```bash
cost = (cost_per_flavor + cost_per_storage) * 130 / 100
# cost_of_bandwidth is a percentage of the base cost, e.g. 130 to add an extra 30%
cost = (cost_per_flavor + cost_per_storage) * cost_of_bandwidth / 100
```
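
Below is a minimal sketch of how this formula could be evaluated in a shell,
assuming `cost_of_bandwidth` is expressed as a percentage (e.g. 130 for an
extra 30%); the price values are purely illustrative and not taken from any
real price list.

```bash
# Hypothetical per-minute prices, purely illustrative
cost_per_flavor="0.0005"
cost_per_storage="0.0001"
cost_of_bandwidth="130"   # percentage: 130 adds an extra 30% on top of the base cost

# bc handles the floating-point arithmetic
cost=$(echo "($cost_per_flavor + $cost_per_storage) * $cost_of_bandwidth / 100" | bc -l)
echo "$cost"   # prints roughly .00078
```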

!!! note
12 changes: 6 additions & 6 deletions docs/metrics-analysis-prometheus/prometheus.md
@@ -2,13 +2,13 @@

## Introduction

In order to collect and analyze GitHub Actions metrics, users are expected
to have an existing Prometheus installation and configure it to pull metrics.

Prometheus is a powerful open-source monitoring and alerting system that allows
users to collect, store, and analyze time-series data. In this guide, we will
explore how to effectively utilize Prometheus to analyze GitHub Actions.

In order to collect and analyze GitHub Actions metrics, users are expected
to have an existing Prometheus installation and configure it to pull metrics.

## Understanding Prometheus Queries

The idea here is not to recreate the entire Prometheus documentation; we will
@@ -39,14 +39,14 @@ within a specified time range.
cumulative sum of the github_actions_job_cost_count_total metric,
representing the total job cost count.
3. The `[5m]` part specifies the time range for the query.
4. The `by (repository)` clause groups the data by the repository field.
4. The `by (repository)` clause groups the data by the repository label.
This enables the query to calculate the cost sum for each repository individually.
5. The expression `> 0` filters the query results to only include
repositories with a value greater than zero.
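
As a hedged sketch of how a query of this shape could be run against the
Prometheus HTTP API with `curl`: the exact expression (here using `sum` and
`increase`), as well as the Prometheus URL, are assumptions and may differ
from the query discussed above.

```bash
# Assumed local Prometheus instance; adjust the URL to your setup
PROMETHEUS_URL="http://localhost:9090"

# Illustrative query: per-repository job cost over the last 5 minutes,
# keeping only repositories with a value greater than zero
QUERY='sum(increase(github_actions_job_cost_count_total[5m])) by (repository) > 0'

curl -sG "${PROMETHEUS_URL}/api/v1/query" --data-urlencode "query=${QUERY}"
```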

!!! info
You can also use Grafana, it enhances the visualization of Prometheus data and
provides powerful querying capabilities. Within Grafana, you can apply filters,
Using Grafana enhances the visualization of Prometheus data and
provides powerful querying capabilities. Within Grafana, apply filters,
combine queries, and utilize variables for dynamic filtering. It's important
to understand `__interval` (time interval between data points) and `__range`
(selected time range) when working with Prometheus data in Grafana. This