From 510f1ddd11a9515210eee807302b709d20f19e12 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Elias=20H=C3=A4reskog?=
Date: Wed, 30 Oct 2024 15:53:41 +0100
Subject: [PATCH 1/2] docs: check amount of logs in reviews

---
 docs/ciso-guide/log-review.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/ciso-guide/log-review.md b/docs/ciso-guide/log-review.md
index 7b4daf4314b..e78c81dcb23 100644
--- a/docs/ciso-guide/log-review.md
+++ b/docs/ciso-guide/log-review.md
@@ -71,6 +71,7 @@ Aim for a review which is both **wide** and **deep**. By wide we mean that you s
 1. Open up a browser and open the Compliant Kubernetes [logs](../user-guide/logs.md) of the cluster you are reviewing. This functionality is currently offered by OpenSearch.
 1. Search for the following keywords on all indices -- i.e., search over each index pattern -- over the last review period: `error`, `failed`, `failure`, `deny`, `denied`, `blocked`, `invalid`, `expired`, `unable`, `unauthorized`, `bad`, `401`, `403`, `500`, `unknown`. Sample a few keywords you recently encountered during your work, e.g., `already installed` or `not found`; be creative and unpredictable.
 1. Vary the time point, the time interval, filters, etc.
+1. Include the amount of logs for each log category in your review. Compare it to any previous log reviews of the environment. If there are an order of magnitude more logs than in the previous review it could be worth investigating why that is.
 1. Go _wide_: For each query (index pattern, keyword, timepoint, time interval and filter combination), look at the timeline and see if there is an unexpected increase or decrease in the count of log lines. If you find any, focus your attention on those.
 1. Go _deep_: For each query, sample at least 10 log entries, read them and make sure you understand what they mean. Think about the following:
     - What are potential causes?

From bd13541739c5d5162863e2195fc018a419678341 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Elias=20H=C3=A4reskog?=
Date: Thu, 31 Oct 2024 08:48:10 +0100
Subject: [PATCH 2/2] PR fixups

---
 docs/ciso-guide/log-review.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/ciso-guide/log-review.md b/docs/ciso-guide/log-review.md
index e78c81dcb23..2fb102517c8 100644
--- a/docs/ciso-guide/log-review.md
+++ b/docs/ciso-guide/log-review.md
@@ -71,7 +71,7 @@ Aim for a review which is both **wide** and **deep**. By wide we mean that you s
 1. Open up a browser and open the Compliant Kubernetes [logs](../user-guide/logs.md) of the cluster you are reviewing. This functionality is currently offered by OpenSearch.
 1. Search for the following keywords on all indices -- i.e., search over each index pattern -- over the last review period: `error`, `failed`, `failure`, `deny`, `denied`, `blocked`, `invalid`, `expired`, `unable`, `unauthorized`, `bad`, `401`, `403`, `500`, `unknown`. Sample a few keywords you recently encountered during your work, e.g., `already installed` or `not found`; be creative and unpredictable.
 1. Vary the time point, the time interval, filters, etc.
-1. Include the amount of logs for each log category in your review. Compare it to any previous log reviews of the environment. If there are an order of magnitude more logs than in the previous review it could be worth investigating why that is.
+1. Include the total amount of logs in each log category in your review (set the time interval bigger than retention). Is it the same, significantly less or significantly more logs compared to the last check? If there is a major difference, it could be worth investigating further to figure out why that is.
 1. Go _wide_: For each query (index pattern, keyword, timepoint, time interval and filter combination), look at the timeline and see if there is an unexpected increase or decrease in the count of log lines. If you find any, focus your attention on those.
 1. Go _deep_: For each query, sample at least 10 log entries, read them and make sure you understand what they mean. Think about the following:
     - What are potential causes?
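
For reviewers who prefer to script the review step added by these patches, the following is a minimal sketch (not part of the patches themselves) of how per-category log totals could be collected via the OpenSearch `_count` API. The endpoint URL, credentials, index patterns, and review-period bounds below are placeholder assumptions; adjust them to the environment under review.

```python
# Sketch: fetch total log counts per index pattern for the current and
# previous review periods, so they can be compared in the log review.
# OPENSEARCH_URL, AUTH and INDEX_PATTERNS are assumptions, not real values.
import requests

OPENSEARCH_URL = "https://opensearch.example.com"          # assumed endpoint
AUTH = ("log-reviewer", "changeme")                        # assumed credentials
INDEX_PATTERNS = ["kubernetes-*", "kubeaudit-*", "authlog-*", "other-*"]  # assumed patterns


def count_logs(index_pattern: str, gte: str, lte: str) -> int:
    """Return the number of log entries in index_pattern between gte and lte."""
    body = {"query": {"range": {"@timestamp": {"gte": gte, "lte": lte}}}}
    resp = requests.post(
        f"{OPENSEARCH_URL}/{index_pattern}/_count",
        json=body,
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["count"]


if __name__ == "__main__":
    # Assumes a 30-day review cadence; compare this period with the previous one.
    current = {p: count_logs(p, "now-30d/d", "now") for p in INDEX_PATTERNS}
    previous = {p: count_logs(p, "now-60d/d", "now-30d/d") for p in INDEX_PATTERNS}
    for pattern in INDEX_PATTERNS:
        print(f"{pattern}: {current[pattern]} this period vs {previous[pattern]} last period")
```

A large jump or drop in any category between the two periods is the kind of difference the added step suggests investigating further.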