From 36c38820b01740e3b87a870dd925e898c29eff2d Mon Sep 17 00:00:00 2001
From: Shalane Proctor
Date: Wed, 26 Mar 2025 17:18:13 -0500
Subject: [PATCH 1/3] chore: typo fixes/updates

---
 content/quickstart/pipelines.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/content/quickstart/pipelines.md b/content/quickstart/pipelines.md
index dbb96f8..8545ce6 100644
--- a/content/quickstart/pipelines.md
+++ b/content/quickstart/pipelines.md
@@ -46,10 +46,10 @@ kubectl get pods -n mdai -l app.kubernetes.io/part-of=mdai-log-generator
 Verify that logs are being generated by using the `kubectl logs` command with any one of the pods.
 
 ```
-kubectl logs -n mdai-logger-xnoisy-{replace with pod hash}
+kubectl logs -n mdai deployment/mdai-logger-xnoisy
 ```
 
-For examlpe, the logs from the pod named **mdai-logger-xnoisy-df6f984b8-ln8vz** should contain a variety of log levels.
+For example, the logs from the pod named **mdai-logger-xnoisy-df6f984b8-ln8vz** should contain a variety of log levels.
 
 ```
 2025-02-06T04:43:50+00:00 - service4321 - INFO - The algorithm successfully executed, triggering neural pathways and producing a burst of optimized data streams.
@@ -119,7 +119,7 @@ In this example, we'll use Fluentd to capture the synthetic log streams you crea
 3. Look at the Fluentd logs, which should indicate that various pod log files are being accessed.
 
    ```
-   kubectl logs fluent-fluentd-
+   kubectl logs svc/fluent-fluentd
   ```
 
   You should see log lines similar to the following.

From d1dfd1ed482fa1e333f5c48d106b561c549caa55 Mon Sep 17 00:00:00 2001
From: Shalane Proctor
Date: Wed, 26 Mar 2025 17:18:57 -0500
Subject: [PATCH 2/3] updated grafana pf

---
 content/usage/grafana.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/usage/grafana.md b/content/usage/grafana.md
index f2f81e5..342a07f 100644
--- a/content/usage/grafana.md
+++ b/content/usage/grafana.md
@@ -29,7 +29,7 @@ title = 'Grafana Dashboards'
 With a cluster running and data flowing through the cluster:
 
 - Port Forward MDAI Grafana Instance:
-  - `kubectl port-forward svc/mdai-grafana 3000:3000 -n mdai`
+  - `kubectl port-forward deployment/mdai-grafana 3000:3000 -n mdai`
 - Port forward MDAI event-handler-webservice
   - `kubectl port-forward svc/event-handler-webservice 8081:8081 -n mdai`
 

From e8515e339471a3c83ae834c3ea62c59388b7bcd2 Mon Sep 17 00:00:00 2001
From: Shalane Proctor
Date: Wed, 26 Mar 2025 20:29:08 -0500
Subject: [PATCH 3/3] Added grafana login and note for groupByLabel. Updated PF for grafana and EH. Updated gateway collector log check

---
 content/quickstart/pipelines.md |  2 +-
 content/usage/grafana.md        | 19 +++++++++++++++++--
 2 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/content/quickstart/pipelines.md b/content/quickstart/pipelines.md
index 8545ce6..f732268 100644
--- a/content/quickstart/pipelines.md
+++ b/content/quickstart/pipelines.md
@@ -142,7 +142,7 @@ In this example, we'll use Fluentd to capture the synthetic log streams you crea
 You just finished connecting your fluentD instance to your Otel collector. You should see a healthy stream of data flowing through the collector.
 
 ```
-kubectl logs gateway-collector- --tail 10
+kubectl logs svc/gateway-collector --tail 10 -n mdai
 ```
 
 You should see log lines similar to the following. 
diff --git a/content/usage/grafana.md b/content/usage/grafana.md
index 342a07f..3bc39c1 100644
--- a/content/usage/grafana.md
+++ b/content/usage/grafana.md
@@ -29,9 +29,22 @@ title = 'Grafana Dashboards'
 With a cluster running and data flowing through the cluster:
 
 - Port Forward MDAI Grafana Instance:
-  - `kubectl port-forward deployment/mdai-grafana 3000:3000 -n mdai`
+```
+  kubectl port-forward deployment/mdai-grafana 3000:3000 -n mdai
+```
 - Port forward MDAI event-handler-webservice
-  - `kubectl port-forward svc/event-handler-webservice 8081:8081 -n mdai`
+```
+kubectl port-forward svc/event-handler-webservice 8081:8081 -n mdai
+```
+#### Login
+Username:
+```
+admin
+```
+Password:
+```
+mdai
+```
 
 ## Dashboards
 ### MDAI Audit Stream
@@ -54,6 +67,8 @@ The **MDAI Data Management** dashboard shows the received and exported metrics f
 - Uses Prometheus as the primary datasource
 - Provides real-time insights into ingress and egress filtered through MDAI
 - Customizable metric options, data type, and group by label
+>Note: Group by Label will need to be selected. This will be the **labelResourceAttributes** from your [MDAI Hub Custom Resource](./configs/mdai_custom_resource_config.md)
+
 #### Metrics Tracked:
 - MDAI I/O by {groupByLabel}
 - Shows how many group by label monitored based on the current selected time interval
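
As a quick sanity check of the Grafana access flow documented in PATCH 3/3, a minimal sketch of the commands might look like the following. It assumes the `mdai` namespace, the `mdai-grafana` Deployment, and the default `admin`/`mdai` credentials shown in the patch above, and relies only on Grafana's standard `/api/health` endpoint.

    # Forward the Grafana UI to localhost:3000 in the background
    # (assumes the mdai namespace and the mdai-grafana Deployment from the docs above).
    kubectl port-forward deployment/mdai-grafana 3000:3000 -n mdai &

    # Confirm Grafana is reachable before logging in with the documented
    # admin / mdai credentials; /api/health is Grafana's standard health endpoint.
    curl -s http://localhost:3000/api/health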