diff --git a/Dockerfile.tests b/Dockerfile.tests
index 8d2ad273f..f0f11fb27 100644
--- a/Dockerfile.tests
+++ b/Dockerfile.tests
@@ -13,7 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 #
-FROM ubuntu:latest
+FROM ubuntu:22.04
 RUN mkdir -p /work/tests
 RUN mkdir -p /work/test-results/functional
diff --git a/docs/cim_compliance.md b/docs/cim_compliance.md
index a7db94a32..4deecb402 100644
--- a/docs/cim_compliance.md
+++ b/docs/cim_compliance.md
@@ -29,7 +29,7 @@ There are two ways to generate the CIM Compliance report:
 - Append the following to [any one of the commands](how_to_use.md#test-execution) used for executing the test cases:
 ```console
- --cim-report
 ```
 **2. Generating the report using the test results stored in the junit-xml file**
@@ -37,7 +37,7 @@ There are two ways to generate the CIM Compliance report:
 - Execute the following command:
 ```console
- cim-report
 ```
 ## Report Generation Troubleshooting
diff --git a/docs/cim_tests.md b/docs/cim_tests.md
index f0b23f63b..1aae162bd 100644
--- a/docs/cim_tests.md
+++ b/docs/cim_tests.md
@@ -41,7 +41,7 @@ To generate test cases only for CIM compatibility, append the following marker t
 ```
- #### Testcase Assertions:
+#### Testcase Assertions:
 - There should be at least 1 event mapped with the dataset.
 - Each required field should be extracted in all the events mapped with the datasets.
@@ -100,13 +100,13 @@ To generate test cases only for CIM compatibility, append the following marker t
 - Plugin gets a list of fields whose extractions are defined in props using addon_parser.
 - By comparing we obtain a list of fields whose extractions are not allowed but defined.
-**5. Testcase to check that eventtype is not be mapped with multiple datamodels.**
+**5. Testcase to check that eventtype is not mapped with multiple datamodels.**
 **Workflow:**
 - Parsing tags.conf it already has a list of eventtype mapped with the datasets.
- - Using SPL we check that each eventtype is not be mapped with multiple datamodels.
+ - Using SPL we check that each eventtype is not mapped with multiple datamodels.
 ## Testcase Troubleshooting
@@ -122,14 +122,14 @@ If all the above conditions are satisfied, further analysis of the test is requi
 For every CIM validation test case there is a defined structure for the stack trace.
 ```text
- AssertionError: <>
 Source | Sourcetype | Field | Event Count | Field Count | Invalid Field Count | Invalid Values
 -------- | --------------- | ------| ----------- | ----------- | ------------------- | --------------
 str | str | str | int | int | int | str
- Search =
- Properties for the field :: type= Required/Conditional condition= Condition for field validity= EVAL conditions
diff --git a/docs/field_tests.md b/docs/field_tests.md
index 6de550d58..4cc339d6a 100644
--- a/docs/field_tests.md
+++ b/docs/field_tests.md
@@ -33,7 +33,7 @@ To generate test cases only for knowledge objects, append the following marker t
 ```
 Testcase verifies that there are events mapped with source/sourcetype.
- Here ]
+ test_props_fields[::field::]
 ```
 Testcase verifies that the field should be extracted in the source/sourcetype.
- Here 0.
 - This verifies that all the fields are extracted in the same event.
 **5. Events should be present in each eventtype**
@@ -104,7 +104,7 @@ To generate test cases only for knowledge objects, append the following marker t
 **Workflow:**
- - For each eventtype mentioned in eventtypes.conf plugin generates an SPL search query and asserts event_count 0 for the eventtype.
+ - For each eventtype mentioned in eventtypes.conf plugin generates an SPL search query and asserts event_count > 0 for the eventtype.
 **6. Tags defined in tags.conf should be applied to the events.**
@@ -113,13 +113,13 @@ To generate test cases only for knowledge objects, append the following marker t
 ```
 Test case verifies that the there are events mapped with the tag.
- Here 0.
 **7.
Search query should be present in each savedsearches.**
@@ -133,7 +133,7 @@ To generate test cases only for knowledge objects, append the following marker t
 **Workflow:**
 - In savedsearches.conf for each stanza, the plugin generates a test case.
- - For each stanza mentioned in savedsearches.conf plugin generates an SPL search query and asserts event_count 0 for the savedsearch.
+ - For each stanza mentioned in savedsearches.conf plugin generates an SPL search query and asserts event_count > 0 for the savedsearch.
 ## Testcase Troubleshooting
@@ -150,8 +150,8 @@ If all the above conditions are satisfied, further analysis of the test is requi
 For every test case failure, there is a defined structure for the stack trace.
 ```text
- AssertionError: <>
+ Search =
 ```
 Get the search query from the stack trace and execute it on the Splunk instance and verify which specific type of events are causing failure.
diff --git a/docs/generate_conf.md b/docs/generate_conf.md
index 78da4777b..1af9614ee 100644
--- a/docs/generate_conf.md
+++ b/docs/generate_conf.md
@@ -45,7 +45,7 @@
 - Execute the following command:
 ```console
- generate-indextime-conf []
 ```
 For example:
diff --git a/docs/how_to_use.md b/docs/how_to_use.md
index 245569e2d..5c0db16ec 100644
--- a/docs/how_to_use.md
+++ b/docs/how_to_use.md
@@ -20,7 +20,7 @@ There are three ways to execute the tests:
 Run pytest with the add-on, in an external Splunk deployment
 ```bash
- pytest --splunk-type=external --splunk-app= --splunk-data-generator= --splunk-host= --splunk-port= --splunk-user= --splunk-password= --splunk-hec-token=
 ```
 **2.
Running tests with docker splunk**
@@ -101,6 +101,7 @@ services:
       SPLUNK_APP_ID: ${SPLUNK_APP_ID}
       SPLUNK_APP_PACKAGE: ${SPLUNK_APP_PACKAGE}
       SPLUNK_VERSION: ${SPLUNK_VERSION}
+    platform: linux/amd64
     ports:
       - "8000"
       - "8088"
@@ -120,6 +121,7 @@ services:
       SPLUNK_APP_ID: ${SPLUNK_APP_ID}
       SPLUNK_APP_PACKAGE: ${SPLUNK_APP_PACKAGE}
       SPLUNK_VERSION: ${SPLUNK_VERSION}
+    platform: linux/amd64
     hostname: uf
     ports:
       - "9997"
@@ -132,13 +134,10 @@ services:
     volumes:
       - ${CURRENT_DIR}/uf_files:${CURRENT_DIR}/uf_files
-volumes:
-  splunk-sc4s-var:
-    external: false
 ```
-
+
Create conftest.py file
 ```
@@ -184,7 +183,7 @@ def docker_services_project_name(pytestconfig):
 Run pytest with the add-on, using the following command:
 ```bash
- pytest --splunk-type=docker --splunk-data-generator=
 ```
 The tool assumes the Splunk Add-on is located in a folder "package" in the project root.
@@ -209,15 +208,15 @@ The tool assumes the Splunk Add-on is located in a folder "package" in the proje
 ```bash
 pytest --splunk-type=external # Whether you want to run the addon with docker or an external Splunk instance
- --splunk-app= # Path to Splunk app package. The package should have the configuration files in the default folder.
+ --splunk-host= # Receiver Splunk instance where events are searchable.
+ --splunk-port= # default 8089
+ --splunk-user= # default admin
+ --splunk-password= # default Chang3d!
+ --splunk-forwarder-host= # Splunk instance where forwarding to receiver instance is configured.
+ --splunk-hec-port= # HEC port of the forwarder instance.
+ --splunk-hec-token= # HEC token configured in forwarder instance.
+ --splunk-data-generator= # Path to pytest-splunk-addon-data.conf
 ```
 > **_NOTE:_**
@@ -243,10 +242,10 @@ There are 3 types of tests included in pytest-splunk-addon are:
 3. To generate test cases only for index time properties, append the following marker to pytest command:
 ```console
- -m splunk_indextime --splunk-data-generator=
 ```
 - For detailed information on index time test execution, please refer {ref}`here Splunk index of which the events will be searched while testing. Default value: "*, _internal".
 ```
@@ -270,11 +269,11 @@ The following optional arguments are available to modify the default settings in
 2. To increase/decrease time interval and retries for flaky tests, user can provide following additional arguments:
 ```console
- --search-retry= Number of retries to make if there are no events found while searching in the Splunk instance. Default value: 0.
- --search-interval= Time interval to wait before retrying the search query.Default value: 0.
 ```
@@ -297,7 +296,7 @@ The following optional arguments are available to modify the default settings in
 - **Addon related errors:** To suppress these user can create a file with the list of strings and provide the file in the **--ignore-addon-errors** param while test execution.
 ```console
- --ignore-addon-errors=
 ```
 - Sample strings in the file.
@@ -328,7 +327,7 @@ The following optional arguments are available to modify the default settings in
 - Default value for this parameter is *store_new*
 ```console
- --event-file-path=
 ```
 - Path to tokenized events file
@@ -380,7 +379,7 @@ The following optional arguments are available to modify the default settings in
 **3. Setup test environment before executing the test cases**
 - If any setup is required in the Splunk/test environment before executing the test cases, implement a fixture in {ref}`conftest.py
 ```
 > **_NOTE:_** --splunk-data-generator should contain the path to *pytest-splunk-addon-data.conf*,
@@ -55,7 +55,7 @@ To generate test cases only for index time properties, append the following mark
 - This test case will not be generated if there are no key fields specified for the event.
 - Key field can be assign to token using field property. `i.e token.n.field = `
- Testcase assertions:
+#### Testcase Assertions:
 - There should be at least 1 event with the sourcetype and host.
 - The values of the key fields obtained from the event
@@ -72,7 +72,7 @@ To generate test cases only for index time properties, append the following mark
 - Execute the SPL query in a Splunk instance.
- - Assert the test case results as mentioned in {ref}`testcase assertions>
+ Search =
 ```
 Get the search query from the stack trace and execute it on the Splunk instance and verify which specific type of events are causing failure.
@@ -229,9 +229,9 @@ Get the search query from the stack trace and execute it on the Splunk instance
 - No test would generate to test Key Fields for that particular stanza and thus won't be correctly tested.
-8. When do I assign token.\.field = ` to the token for that field value.
 - Example: : For this sample, there is report written in props that extracts `127.0.0.1` as `src`,
diff --git a/docs/sample_generator.md b/docs/sample_generator.md
index 985383200..ba3d48ce0 100644
--- a/docs/sample_generator.md
+++ b/docs/sample_generator.md
@@ -56,7 +56,7 @@ host_prefix = {{host_prefix}}
 - If the value is event, the host field should be provided for a token using "token..field = host".
 **input_type = modinput | scripted_input | syslog_tcp | file_monitor | windows_input | uf_file_monitor | default**
--
+
 - The input_type used in addon to ingest data of a sourcetype used in stanza.
 - The way with which the sample data is ingested in Splunk depends on Splunk. The most similar ingesting approach is used for each input_type to get accurate index-time testing.
 - In input_type=uf_file_monitor, universal forwarder will use file monitor to read event and then it will send data to indexer.
@@ -143,7 +143,7 @@ The following replacementType -> replacement values are supported
 - "n" is a number starting at 0, and increasing by 1.
 - For static, the token will be replaced with the value specified in the replacement setting.
-- For timestamp, the token will be replaced with the strptime specified in the replacement setting. Strptime directive:
+- For timestamp, the token will be replaced with the strptime specified in the replacement setting.
Strptime directive: [https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior](https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior)
 - For random, the token will be replaced with a randomly picked type-aware value
 - For all, For each possible replacement value, a new event will be generated and the token will be replaced with it. The configuration can be used where a token replacement contains multiple templates/values and all of the values are important and should be ingested at least once. The number of events will be multiplied by the number of values in the replacement. For example, if sample contains 3 lines & a token replacement has list of 2 values, then 6 events will be generated. For a replacement if replacementType='all' is not supported, then be default plugin will consider replacementType="random".
 - For file, the token will be replaced with a random value retrieved from a file specified in the replacement setting.
@@ -174,8 +174,8 @@ The following replacementType -> replacement values are supported
 - For , the token will be replaced with a random line in the replacement file.
- - Replacement file name should be a fully qualified path (i.e. \$SPLUNK_HOME/etc/apps/windows/samples/users.list).
- - Windows separators should contain double forward slashes "\\" (i.e. \$SPLUNK_HOME\\etc\\apps\\windows\\samples\\users.list).
+ - Replacement file name should be a fully qualified path (i.e. $SPLUNK_HOME/etc/apps/windows/samples/users.list).
+ - Windows separators should contain double forward slashes "\\" (i.e. $SPLUNK_HOME\\etc\\apps\\windows\\samples\\users.list).
 - Unix separators will work on Windows and vice-versa.
 - Column numbers in mvfile references are indexed at 1, meaning the first column is column 1, not 0.