docs: fixing documentation (#836)
- Updated the md files for proper reference.
- Added missing brackets

Fixes #834
harshilgajera-crest authored May 14, 2024
1 parent b789bb4 commit 82dd470
Showing 8 changed files with 57 additions and 58 deletions.
2 changes: 1 addition & 1 deletion Dockerfile.tests
@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
-FROM ubuntu:latest
+FROM ubuntu:22.04

RUN mkdir -p /work/tests
RUN mkdir -p /work/test-results/functional
4 changes: 2 additions & 2 deletions docs/cim_compliance.md
@@ -29,15 +29,15 @@ There are two ways to generate the CIM Compliance report:
- Append the following to [any one of the commands](how_to_use.md#test-execution) used for executing the test cases:

```console
---cim-report <file_name.md
+--cim-report <file_name.md>
```

**2. Generating the report using the test results stored in the junit-xml file**

- Execute the following command:

```console
-cim-report <junit_report.xml<report.md
+cim-report <junit_report.xml> <report.md>
```

## Report Generation Troubleshooting
12 changes: 6 additions & 6 deletions docs/cim_tests.md
@@ -41,7 +41,7 @@ To generate test cases only for CIM compatibility, append the following marker t
```


-#### Testcase Assertions:
+#### Testcase Assertions:

- There should be at least 1 event mapped with the dataset.
- Each required field should be extracted in all the events mapped with the datasets.
@@ -100,13 +100,13 @@ To generate test cases only for CIM compatibility, append the following marker t
- Plugin gets a list of fields whose extractions are defined in props using addon_parser.
- By comparing we obtain a list of fields whose extractions are not allowed but defined.
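
The comparison step above can be sketched in Python; the function and variable names below are illustrative assumptions, not the plugin's actual API:

```python
# Hypothetical sketch of the comparison step described above.
# Names are illustrative; the plugin's internals may differ.
def fields_defined_but_not_allowed(extracted_fields, allowed_fields):
    """Return fields whose extractions are defined in props
    but are not in the allowed list for the mapped dataset."""
    return sorted(set(extracted_fields) - set(allowed_fields))

violations = fields_defined_but_not_allowed(
    extracted_fields=["src", "dest", "bytes", "vendor_action"],
    allowed_fields=["src", "dest", "bytes"],
)
print(violations)  # ['vendor_action']
```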

-**5. Testcase to check that eventtype is not be mapped with multiple datamodels.**
+**5. Testcase to check that eventtype is not mapped with multiple datamodels.**


**Workflow:**

- Parsing tags.conf it already has a list of eventtype mapped with the datasets.
-- Using SPL we check that each eventtype is not be mapped with multiple datamodels.
+- Using SPL we check that each eventtype is not mapped with multiple datamodels.
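
The multiple-datamodel check can be illustrated with a small sketch (hypothetical names; the plugin derives the real mapping from tags.conf and SPL results):

```python
# Hypothetical sketch: flag eventtypes mapped to more than one datamodel.
# Names are illustrative; the plugin's internals may differ.
def multi_mapped_eventtypes(eventtype_to_datamodels):
    """Return eventtypes that are mapped to two or more datamodels."""
    return sorted(
        eventtype
        for eventtype, datamodels in eventtype_to_datamodels.items()
        if len(set(datamodels)) > 1
    )

mapping = {
    "fw_traffic": ["Network_Traffic"],
    "fw_auth": ["Authentication", "Change"],  # mapped twice -> flagged
}
print(multi_mapped_eventtypes(mapping))  # ['fw_auth']
```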

## Testcase Troubleshooting

@@ -122,14 +122,14 @@ If all the above conditions are satisfied, further analysis of the test is requi
For every CIM validation test case there is a defined structure for the stack trace.

```text
-AssertionError: <<error_message
+AssertionError: <<error_message>>
Source | Sourcetype | Field | Event Count | Field Count | Invalid Field Count | Invalid Values
-------- | --------------- | ------| ----------- | ----------- | ------------------- | --------------
str | str | str | int | int | int | str
-Search = <Query
+Search = <Query>
-Properties for the field :: <field_name
+Properties for the field :: <field_name>
type= Required/Conditional
condition= Condition for field
validity= EVAL conditions
26 changes: 13 additions & 13 deletions docs/field_tests.md
@@ -33,7 +33,7 @@ To generate test cases only for knowledge objects, append the following marker t
```

Testcase verifies that there are events mapped with source/sourcetype.
-Here <stanza is the source/sourcetype that is defined in the stanza.
+Here &lt;stanza&gt; is the source/sourcetype that is defined in the stanza.

**Workflow:**

@@ -43,12 +43,12 @@ To generate test cases only for knowledge objects, append the following marker t
**2. Fields mentioned under source/sourcetype should be extracted**

```python
-test_props_fields[<stanza::field::<fieldname>]
+test_props_fields[<stanza>::field::<fieldname>]
```

Testcase verifies that the field should be extracted in the source/sourcetype.
-Here <stanza is the source/sourcetype that is defined in the stanza and
-<fieldname is the name of a field which is extracted under source/sourcetype.
+Here &lt;stanza&gt; is the source/sourcetype that is defined in the stanza and
+&lt;fieldname&gt; is the name of a field which is extracted under source/sourcetype.

**Workflow:**

@@ -62,8 +62,8 @@ To generate test cases only for knowledge objects, append the following marker t
```

Testcase verifies that the field should not have "-" (dash) or "" (empty) as a value.
-Here <stanza is the source/sourcetype that is defined in the stanza and
-<fieldname is name of field which is extracted under source/sourcetype.
+Here &lt;stanza&gt; is the source/sourcetype that is defined in the stanza and
+&lt;fieldname&gt; is name of field which is extracted under source/sourcetype.

**Workflow:**

@@ -90,7 +90,7 @@ To generate test cases only for knowledge objects, append the following marker t

- While parsing the conf file when the plugin finds one of EXTRACT, REPORT, LOOKUP
the plugin gets the list of fields extracted and generates a test case.
-- For all the fields in the test case it generates a single SPL search query including the stanza and asserts event_count 0.
+- For all the fields in the test case it generates a single SPL search query including the stanza and asserts event_count > 0.
- This verifies that all the fields are extracted in the same event.
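
The single-query construction described above can be sketched as follows (the query shape and function name are assumptions for illustration, not the plugin's exact SPL):

```python
# Hypothetical sketch of building one SPL search for all fields of a stanza.
# The real plugin's query construction may differ.
def build_field_query(stanza, fields, index="*"):
    """Build an SPL query that requires every field to be present in the event."""
    field_filters = " ".join(f"{field}=*" for field in fields)
    return f"search index={index} {stanza} {field_filters} | stats count as event_count"

query = build_field_query("sourcetype=vendor:product", ["src", "dest"])
print(query)
# search index=* sourcetype=vendor:product src=* dest=* | stats count as event_count
```

The test would then assert that `event_count` is greater than zero.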

**5. Events should be present in each eventtype**
@@ -104,7 +104,7 @@ To generate test cases only for knowledge objects, append the following marker t

**Workflow:**

-- For each eventtype mentioned in eventtypes.conf plugin generates an SPL search query and asserts event_count 0 for the eventtype.
+- For each eventtype mentioned in eventtypes.conf plugin generates an SPL search query and asserts event_count > 0 for the eventtype.

**6. Tags defined in tags.conf should be applied to the events.**

@@ -113,13 +113,13 @@ To generate test cases only for knowledge objects, append the following marker t
```

Test case verifies that there are events mapped with the tag.
-Here <tag_stanza is a stanza mentioned in tags.conf and <tag is an individual tag
+Here &lt;tag_stanza&gt; is a stanza mentioned in tags.conf and &lt;tag&gt; is an individual tag
applied to that stanza.

**Workflow:**

- In tags.conf for each tag defined in the stanza, the plugin generates a test case.
-- For each tag, the plugin generates a search query including the stanza and the tag and asserts event_count 0.
+- For each tag, the plugin generates a search query including the stanza and the tag and asserts event_count > 0.
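
A minimal sketch of the per-tag query described above (the query shape and names are assumptions, not the plugin's exact output):

```python
# Hypothetical sketch: one search query per tag defined in tags.conf.
def build_tag_query(tag_stanza, tag, index="*"):
    """Build an SPL query asserting events exist for a stanza/tag pair."""
    return f'search index={index} {tag_stanza} tag="{tag}" | stats count as event_count'

print(build_tag_query("eventtype=fw_auth", "authentication"))
# search index=* eventtype=fw_auth tag="authentication" | stats count as event_count
```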

**7. Search query should be present in each savedsearches.**

@@ -133,7 +133,7 @@ To generate test cases only for knowledge objects, append the following marker t
**Workflow:**

- In savedsearches.conf for each stanza, the plugin generates a test case.
-- For each stanza mentioned in savedsearches.conf plugin generates an SPL search query and asserts event_count 0 for the savedsearch.
+- For each stanza mentioned in savedsearches.conf plugin generates an SPL search query and asserts event_count > 0 for the savedsearch.

## Testcase Troubleshooting

@@ -150,8 +150,8 @@ If all the above conditions are satisfied, further analysis of the test is requi
For every test case failure, there is a defined structure for the stack trace.

```text
-AssertionError: <<error_message
-Search = <Query
+AssertionError: <<error_message>>
+Search = <Query>
```

Get the search query from the stack trace and execute it on the Splunk instance and verify which specific type of events are causing failure.
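
Pulling the query out of such a stack trace can be automated with a small helper; a sketch under the assumption that the trace follows the `Search = <Query>` structure shown above:

```python
import re

# Hypothetical helper: extract the SPL query from a test-failure stack trace
# so it can be re-run on the Splunk instance for troubleshooting.
def extract_search_query(stack_trace):
    """Return the text after 'Search = ' on its own line, or None."""
    match = re.search(r"^Search = (.+)$", stack_trace, flags=re.MULTILINE)
    return match.group(1).strip() if match else None

trace = (
    "AssertionError: <<error_message>>\n"
    "Search = search index=* sourcetype=demo | stats count"
)
print(extract_search_query(trace))  # search index=* sourcetype=demo | stats count
```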
2 changes: 1 addition & 1 deletion docs/generate_conf.md
@@ -45,7 +45,7 @@
- Execute the following command:

```console
-generate-indextime-conf <path-to-addon [<path-to-the-new-conf-file]
+generate-indextime-conf <path-to-addon> [<path-to-the-new-conf-file>]
```
For example:
45 changes: 22 additions & 23 deletions docs/how_to_use.md
@@ -20,7 +20,7 @@ There are three ways to execute the tests:
Run pytest with the add-on, in an external Splunk deployment

```bash
-pytest --splunk-type=external --splunk-app=<path-to-addon-package --splunk-data-generator=<path to pytest-splunk-addon-data.conf file --splunk-host=<hostname --splunk-port=<splunk-management-port --splunk-user=<username --splunk-password=<password --splunk-hec-token=<splunk_hec_token
+pytest --splunk-type=external --splunk-app=<path-to-addon-package> --splunk-data-generator=<path to pytest-splunk-addon-data.conf file> --splunk-host=<hostname> --splunk-port=<splunk-management-port> --splunk-user=<username> --splunk-password=<password> --splunk-hec-token=<splunk_hec_token>
```

**2. Running tests with docker splunk**
@@ -101,6 +101,7 @@ services:
SPLUNK_APP_ID: ${SPLUNK_APP_ID}
SPLUNK_APP_PACKAGE: ${SPLUNK_APP_PACKAGE}
SPLUNK_VERSION: ${SPLUNK_VERSION}
+platform: linux/amd64
ports:
- "8000"
- "8088"
@@ -120,6 +121,7 @@ services:
SPLUNK_APP_ID: ${SPLUNK_APP_ID}
SPLUNK_APP_PACKAGE: ${SPLUNK_APP_PACKAGE}
SPLUNK_VERSION: ${SPLUNK_VERSION}
+platform: linux/amd64
hostname: uf
ports:
- "9997"
@@ -132,13 +134,10 @@ services:
volumes:
- ${CURRENT_DIR}/uf_files:${CURRENT_DIR}/uf_files
-volumes:
-splunk-sc4s-var:
-external: false
```
</details>

-<details>
+<details id="conftest">
<summary>Create conftest.py file</summary>

```
@@ -184,7 +183,7 @@ def docker_services_project_name(pytestconfig):
Run pytest with the add-on, using the following command:

```bash
-pytest --splunk-type=docker --splunk-data-generator=<path to pytest-splunk-addon-data.conf file
+pytest --splunk-type=docker --splunk-data-generator=<path to pytest-splunk-addon-data.conf file>
```

The tool assumes the Splunk Add-on is located in a folder "package" in the project root.
@@ -209,15 +208,15 @@ The tool assumes the Splunk Add-on is located in a folder "package" in the proje

```bash
pytest --splunk-type=external # Whether you want to run the addon with docker or an external Splunk instance
---splunk-app=<path-to-addon-package # Path to Splunk app package. The package should have the configuration files in the default folder.
---splunk-host=<hostname # Receiver Splunk instance where events are searchable.
---splunk-port=<splunk_management_port # default 8089
---splunk-user=<username # default admin
---splunk-password=<password # default Chang3d!
---splunk-forwarder-host=<splunk_forwarder_host # Splunk instance where forwarding to receiver instance is configured.
---splunk-hec-port=<splunk_forwarder_hec_port # HEC port of the forwarder instance.
---splunk-hec-token=<splunk_forwarder_hec_token # HEC token configured in forwarder instance.
---splunk-data-generator=<pytest_splunk_addon_conf_path # Path to pytest-splunk-addon-data.conf
+--splunk-app=<path-to-addon-package> # Path to Splunk app package. The package should have the configuration files in the default folder.
+--splunk-host=<hostname> # Receiver Splunk instance where events are searchable.
+--splunk-port=<splunk_management_port> # default 8089
+--splunk-user=<username> # default admin
+--splunk-password=<password> # default Chang3d!
+--splunk-forwarder-host=<splunk_forwarder_host> # Splunk instance where forwarding to receiver instance is configured.
+--splunk-hec-port=<splunk_forwarder_hec_port> # HEC port of the forwarder instance.
+--splunk-hec-token=<splunk_forwarder_hec_token> # HEC token configured in forwarder instance.
+--splunk-data-generator=<pytest_splunk_addon_conf_path> # Path to pytest-splunk-addon-data.conf
```

> **_NOTE:_**
@@ -243,10 +242,10 @@ There are 3 types of tests included in pytest-splunk-addon are:
3. To generate test cases only for index time properties, append the following marker to pytest command:

```console
--m splunk_indextime --splunk-data-generator=<Path to the conf file
+-m splunk_indextime --splunk-data-generator=<Path to the conf file>
```
-For detailed information on index time test execution, please refer {ref}`here <index_time_tests`.
+For detailed information on index time test execution, please refer [here](./index_time_tests.md).

- To execute all the searchtime tests together, i.e both Knowledge objects and CIM compatibility tests,
append the following marker to the pytest command:
@@ -262,19 +261,19 @@ The following optional arguments are available to modify the default settings in
1. To search for events in a specific index, user can provide following additional arguments:

```console
---search-index=<index
+--search-index=<index>
Splunk index of which the events will be searched while testing. Default value: "*, _internal".
```

2. To increase/decrease time interval and retries for flaky tests, user can provide following additional arguments:

```console
---search-retry=<retry
+--search-retry=<retry>
Number of retries to make if there are no events found while searching in the Splunk instance. Default value: 0.
---search-interval=<interval
+--search-interval=<interval>
Time interval to wait before retrying the search query.Default value: 0.
```
@@ -297,7 +296,7 @@ The following optional arguments are available to modify the default settings in
- **Addon related errors:** To suppress these user can create a file with the list of strings and provide the file in the **--ignore-addon-errors** param while test execution.
```console
---ignore-addon-errors=<path_to_file
+--ignore-addon-errors=<path_to_file>
```
- Sample strings in the file.
@@ -328,7 +327,7 @@ The following optional arguments are available to modify the default settings in
- Default value for this parameter is *store_new*

```console
---event-file-path=<path_to_file
+--event-file-path=<path_to_file>
```

- Path to tokenized events file
@@ -380,7 +379,7 @@ The following optional arguments are available to modify the default settings in

**3. Setup test environment before executing the test cases**

-If any setup is required in the Splunk/test environment before executing the test cases, implement a fixture in {ref}`conftest.py <conftest_file`.
+If any setup is required in the Splunk/test environment before executing the test cases, implement a fixture in [conftest.py](#conftest).

```python
@pytest.fixture(scope="session")
16 changes: 8 additions & 8 deletions docs/index_time_tests.md
@@ -14,15 +14,15 @@
### Prerequisites

- `pytest-splunk-addon-data.conf` file which contains all the required data
-executing the tests. The conf file should follow the specifications as mentioned {ref}`here <conf_spec`.
+executing the tests. The conf file should follow the specifications as mentioned [here](./sample_generator.md#pytest-splunk-addon-dataconfspec).

______________________________________________________________________


To generate test cases only for index time properties, append the following marker to pytest command:

```console
--m splunk_indextime --splunk-data-generator=<Path to the conf file
+-m splunk_indextime --splunk-data-generator=<Path to the conf file>
```

> **_NOTE:_** --splunk-data-generator should contain the path to *pytest-splunk-addon-data.conf*,
@@ -55,7 +55,7 @@ To generate test cases only for index time properties, append the following mark
- This test case will not be generated if there are no key fields specified for the event.
- Key field can be assign to token using field property. `i.e token.n.field = <KEY_FIELD>`

-Testcase assertions:
+#### Testcase Assertions:

- There should be at least 1 event with the sourcetype and host.
- The values of the key fields obtained from the event
@@ -72,7 +72,7 @@ To generate test cases only for index time properties, append the following mark

- Execute the SPL query in a Splunk instance.

-- Assert the test case results as mentioned in {ref}`testcase assertions<test_assertions_key_field`.
+- Assert the test case results as mentioned in [testcase assertions](#testcase-assertions).

**2. Test case for _time property:**

@@ -141,8 +141,8 @@ If all the above conditions are satisfied, further analysis of the test is requi
For every test case failure, there is a defined structure for the stack trace.

```text
-AssertionError: <<error_message
-Search = <Query
+AssertionError: <<error_message>>
+Search = <Query>
```

Get the search query from the stack trace and execute it on the Splunk instance and verify which specific type of events are causing failure.
@@ -229,9 +229,9 @@ Get the search query from the stack trace and execute it on the Splunk instance
- No test would generate to test Key Fields for that particular stanza and thus won't be correctly tested.
-8. When do I assign token.\<n.field = \<field_name to test the Key Fields for an event?
+8. When do I assign token.&lt;n&gt;.field = &lt;field_name&gt; to test the Key Fields for an event?
-- When there props configurations written in props to extract any of the field present in Key Fields list, you should add `token.<n.field = <field_name` to the token for that field value.
+- When there props configurations written in props to extract any of the field present in Key Fields list, you should add `token.<n>.field = <field_name>` to the token for that field value.
- Example:
: For this sample, there is report written in props that extracts `127.0.0.1` as `src`,
