Typo fixes #7

Open · wants to merge 3 commits into base: main
8 changes: 4 additions & 4 deletions content/quickstart/pipelines.md
@@ -46,10 +46,10 @@ kubectl get pods -n mdai -l app.kubernetes.io/part-of=mdai-log-generator
Verify that logs are being generated by using the `kubectl logs` command with any one of the pods.

```
kubectl logs -n mdai-logger-xnoisy-{replace with pod hash}
kubectl logs -n mdai deployment/mdai-logger-xnoisy
```

For examlpe, the logs from the pod named **mdai-logger-xnoisy-df6f984b8-ln8vz** should contain a variety of log levels.
For example, the logs from the pod named **mdai-logger-xnoisy-df6f984b8-ln8vz** should contain a variety of log levels.
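The generated lines follow a `timestamp - service - LEVEL - message` layout. As an illustration (the sample lines and the `awk` pipeline below are a sketch, not part of the docs), you could tally the levels in captured output like so:

```shell
# Three sample lines in the generator's apparent "ts - service - LEVEL - msg" layout (assumed)
logs='2025-02-06T04:43:50+00:00 - service4321 - INFO - The algorithm successfully executed.
2025-02-06T04:43:51+00:00 - service4321 - ERROR - Neural pathway overload detected.
2025-02-06T04:43:52+00:00 - service4321 - INFO - Optimized data streams produced.'

# Tally log levels, e.g. after piping from: kubectl logs -n mdai deployment/mdai-logger-xnoisy
printf '%s\n' "$logs" | awk -F' - ' '{count[$3]++} END {for (l in count) print l, count[l]}' | sort
# Prints: ERROR 1  then  INFO 2
```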

```
2025-02-06T04:43:50+00:00 - service4321 - INFO - The algorithm successfully executed, triggering neural pathways and producing a burst of optimized data streams.
```
@@ -119,7 +119,7 @@ In this example, we'll use Fluentd to capture the synthetic log streams you crea
3. Look at the Fluentd logs, which should indicate that various pod log files are being accessed.

```
kubectl logs fluent-fluentd-<your_pod_id_here>
kubectl logs svc/fluent-fluentd
```
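For context, Fluentd discovers those pod log files through a tail source. A hypothetical minimal version of such a source is sketched below; the paths, tag, and parser are assumptions for illustration, not the chart's actual configuration.

```
# Hypothetical Fluentd tail source; paths and tag are assumed, not the chart's real config
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type none
  </parse>
</source>
```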

You should see log lines similar to the following.
@@ -142,7 +142,7 @@ In this example, we'll use Fluentd to capture the synthetic log streams you crea
You just finished connecting your Fluentd instance to your OTel Collector. You should see a healthy stream of data flowing through the collector.

```
kubectl logs gateway-collector-<your_pod_id_here> --tail 10
kubectl logs svc/gateway-collector --tail 10 -n mdai
```

You should see log lines similar to the following.
19 changes: 17 additions & 2 deletions content/usage/grafana.md
@@ -29,9 +29,22 @@ title = 'Grafana Dashboards'
With a cluster running and data flowing through the cluster:

- Port Forward MDAI Grafana Instance:
- `kubectl port-forward svc/mdai-grafana 3000:3000 -n mdai`
```
kubectl port-forward deployment/mdai-grafana 3000:3000 -n mdai
```
- Port forward the MDAI event-handler-webservice:
- `kubectl port-forward svc/event-handler-webservice 8081:8081 -n mdai`
```
kubectl port-forward svc/event-handler-webservice 8081:8081 -n mdai
```
#### Login
Username:
```
admin
```
Password:
```
mdai
```
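With the port-forward active, the same credentials can be used from the command line. The sketch below only builds the HTTP Basic auth header locally; the `curl` call is left commented out since it needs a live port-forward, and `/api/health` is Grafana's standard unauthenticated-friendly health endpoint.

```shell
# Encode admin:mdai (the credentials above) as an HTTP Basic auth token
auth=$(printf 'admin:mdai' | base64)
echo "Authorization: Basic $auth"
# Prints: Authorization: Basic YWRtaW46bWRhaQ==

# With the port-forward running, you could then check Grafana's health endpoint:
#   curl -H "Authorization: Basic $auth" http://localhost:3000/api/health
```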

## Dashboards
### MDAI Audit Stream
@@ -54,6 +67,8 @@ The **MDAI Data Management** dashboard shows the received and exported metrics f
- Uses Prometheus as the primary datasource
- Provides real-time insights into ingress and egress filtered through MDAI
- Customizable metric options, data type, and group by label
> Note: a Group by Label must be selected. This corresponds to the **labelResourceAttributes** from your [MDAI Hub Custom Resource](./configs/mdai_custom_resource_config.md)
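For orientation, a fragment of such a custom resource is sketched below. Every field name here other than **labelResourceAttributes**, and all of the values, are hypothetical; consult the linked config reference for the real schema.

```
# Hypothetical sketch only: fields/values other than labelResourceAttributes are assumed
spec:
  observers:
    - name: example-observer
      labelResourceAttributes:
        - service.name   # a label you could then "group by" in the dashboard
```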

#### Metrics Tracked:
- MDAI I/O by {groupByLabel}
- Shows how many group-by labels are monitored over the currently selected time interval