Skip to content

Commit

Permalink
Fixes applied from pre-commit hooks
Browse files Browse the repository at this point in the history
  • Loading branch information
anibalinn committed Jul 29, 2024
1 parent c4c7009 commit e76dea1
Show file tree
Hide file tree
Showing 17 changed files with 80 additions and 146 deletions.
2 changes: 1 addition & 1 deletion .gitattributes
Original file line number Diff line number Diff line change
@@ -1,3 +1,3 @@
*.js linguist-detectable=false
*.css linguist-detectable=false
*.jinja2 linguist-detectable=false
*.jinja2 linguist-detectable=false
2 changes: 1 addition & 1 deletion .github/workflows/python-package.yml
Original file line number Diff line number Diff line change
Expand Up @@ -25,4 +25,4 @@ jobs:
pip install behavex
- name: Verify behavex command
run: behavex ./tests/features
run: behavex ./tests/features
37 changes: 3 additions & 34 deletions .pre-commit-config.yaml
Original file line number Diff line number Diff line change
Expand Up @@ -8,56 +8,25 @@ repos:
- id: trailing-whitespace
- id: end-of-file-fixer
- id: check-yaml
- id: check-ast
- id: check-added-large-files
- id: check-json
- id: double-quote-string-fixer
- id: fix-encoding-pragma
- id: file-contents-sorter
- id: check-case-conflict
- id: check-symlinks
- id: check-merge-conflict
- id: debug-statements
- id: detect-private-key
- id: requirements-txt-fixer
- id: no-commit-to-branch
args: [--branch, develop, --branch, master]

- repo: https://github.com/PyCQA/bandit
rev: 1.6.2
hooks:
- id: bandit
args: [--skip, "B101,B307,B322"] # ignore assert_used
exclude: tests
args: [--skip, "B322"] # ignore assert_used

- repo: https://github.com/pre-commit/mirrors-isort
rev: v5.6.4
hooks:
- id: isort
args: ["--profile", "black"]

- repo: https://gitlab.com/pycqa/flake8
rev: '3.7.9'
hooks:
- id: flake8
additional_dependencies: [
'flake8-bugbear==19.8.0',
'flake8-coding==1.3.2',
'flake8-comprehensions==3.0.1',
'flake8-debugger==3.2.1',
'flake8-deprecated==1.3',
#'flake8-docstrings==1.5.0',
'flake8-pep3101==1.2.1',
'flake8-polyfill==1.0.2',
#'flake8-print==3.1.4',
#'flake8-quotes==2.1.1',
'flake8-string-format==0.2.3',
]

- repo: https://github.com/ambv/black
rev: stable
hooks:
- id: black
args: [--skip-string-normalization]
language_version: python3.8

# - repo: https://github.com/PyCQA/pylint
# rev: pylint-2.6.0
Expand Down
1 change: 0 additions & 1 deletion CHANGES.rst
Original file line number Diff line number Diff line change
Expand Up @@ -182,4 +182,3 @@ ENHANCEMENTS:
DOCUMENTATION:

* Adding HTML report screenshots to documentation

29 changes: 14 additions & 15 deletions README.md
Original file line number Diff line number Diff line change
Expand Up @@ -22,7 +22,7 @@ BehaveX can be used to build testing pipelines from scratch using the same [Beha
* This is enhanced implementation of Behave's dry run feature, allowing you to see the full list of scenarios in the HTML report without actually executing the tests
* Re-execute failing test scenarios
* By just adding the @AUTORETRY tag to test scenarios, so when the first execution fails the scenario is immediately re-executed
* Additionally, you can provide the wrapper with a list of previously failing scenarios, which will also be re-executed automatically
* Additionally, you can provide the wrapper with a list of previously failing scenarios, which will also be re-executed automatically

![test execution report](https://github.com/hrcorval/behavex/blob/master/img/html_test_report.png?raw=true)

Expand All @@ -44,43 +44,43 @@ The execution is performed in the same way as you do when executing Behave from
Examples:

>Run scenarios tagged as **TAG_1** but not **TAG_2**:
>
>
> <pre>behavex -t @TAG_1 -t ~@TAG_2</pre>
>Run scenarios tagged as **TAG_1** or **TAG_2**:
>
>
><pre>behavex -t @TAG_1,@TAG_2</pre>
>Run scenarios tagged as **TAG_1**, using **4** parallel processes:
>
>
><pre>behavex -t @TAG_1 --parallel-processes 4 --parallel-scheme scenario</pre>
>Run scenarios located at "**features/features_folder_1**" and "**features/features_folder_2**" folders, using **2** parallel processes
>
>
><pre>behavex features/features_folder_1 features/features_folder_2 --parallel-processes 2</pre>
>Run scenarios from "**features_folder_1/sample_feature.feature**" feature file, using **2** parallel processes
>
>
><pre>behavex features_folder_1/sample_feature.feature --parallel-processes 2</pre>
>Run scenarios tagged as **TAG_1** from "**features_folder_1/sample_feature.feature**" feature file, using **2** parallel processes
>
>
><pre>behavex features_folder_1/sample_feature.feature -t @TAG_1 --parallel-processes 2</pre>
>Run scenarios located at "**features/feature_1**" and "**features/feature_2**" folders, using **2** parallel processes
>
>
><pre>behavex features/feature_1 features/feature_2 --parallel-processes 2</pre>
>Run scenarios tagged as **TAG_1**, using **5** parallel processes executing a feature on each process:
>
>
><pre>behavex -t @TAG_1 --parallel-processes 5 --parallel-scheme feature</pre>
>Perform a dry run of the scenarios tagged as **TAG_1**, and generate the HTML report:
>
>
><pre>behavex -t @TAG_1 --dry-run</pre>
>Run scenarios tagged as **TAG_1**, generating the execution evidence into the "**exec_evidence**" folder (instead of the default "**output**" folder):
>
>
><pre>behavex -t @TAG_1 -o execution_evidence</pre>

Expand Down Expand Up @@ -220,12 +220,12 @@ Tests can be muted by adding the @MUTE tag to each test scenario. This will caus

This tag can be used for flaky scenarios or when the testing infrastructure is not stable at all.

The @AUTORETRY tag can be applied to any scenario or feature, and it is used to automatically re-execute the test scenario when it fails.
The @AUTORETRY tag can be applied to any scenario or feature, and it is used to automatically re-execute the test scenario when it fails.

### Rerun all failed scenarios

Whenever you perform an automated test execution and there are failing scenarios, the **failing_scenarios.txt** file will be created into the execution output folder.
This file allows you to run all failing scenarios again.
This file allows you to run all failing scenarios again.

This can be done by executing the following command:

Expand All @@ -237,7 +237,7 @@ or
To avoid the re-execution to overwrite the previous test report, we suggest to provide a different output folder, using the **-o** or **--output-folder** argument.

It is important to mention that this argument doesn't work yet with parallel test executions
It is important to mention that this argument doesn't work yet with parallel test executions

## Show Your Support

Expand All @@ -246,4 +246,3 @@ It is important to mention that this argument doesn't work yet with parallel tes
By starring this repository, you help us gain visibility among other developers and contributors. It also serves as motivation for us to continue improving and maintaining this project.

Thank you in advance for your support! We truly appreciate it.

10 changes: 3 additions & 7 deletions behavex/environment.py
Original file line number Diff line number Diff line change
Expand Up @@ -17,13 +17,9 @@
from behavex.global_vars import global_vars
from behavex.outputs import report_json, report_xml
from behavex.outputs.report_utils import create_log_path
from behavex.utils import (
LOGGING_CFG,
create_custom_log_when_called,
get_autoretry_attempts,
get_logging_level,
get_scenario_tags,
)
from behavex.utils import (LOGGING_CFG, create_custom_log_when_called,
get_autoretry_attempts, get_logging_level,
get_scenario_tags)

Context.__getattribute__ = create_custom_log_when_called

Expand Down
16 changes: 5 additions & 11 deletions behavex/outputs/jinja_mgr.py
Original file line number Diff line number Diff line change
Expand Up @@ -20,17 +20,11 @@
from behavex.conf_mgr import get_env
from behavex.execution_singleton import ExecutionSingleton
from behavex.outputs.output_strings import TEXTS
from behavex.outputs.report_utils import (
calculate_status,
count_by_status,
gather_errors,
get_error_message,
match_for_execution,
normalize_filename,
get_string_hash,
pretty_print_time,
resolving_type,
)
from behavex.outputs.report_utils import (calculate_status, count_by_status,
gather_errors, get_error_message,
get_string_hash, match_for_execution,
normalize_filename,
pretty_print_time, resolving_type)


class TemplateHandler(metaclass=ExecutionSingleton):
Expand Down
8 changes: 3 additions & 5 deletions behavex/outputs/report_html.py
Original file line number Diff line number Diff line change
Expand Up @@ -15,11 +15,9 @@
from behavex.conf_mgr import get_env
from behavex.global_vars import global_vars
from behavex.outputs.jinja_mgr import TemplateHandler
from behavex.outputs.report_utils import (
gather_steps_with_definition,
get_save_function,
try_operate_descriptor,
)
from behavex.outputs.report_utils import (gather_steps_with_definition,
get_save_function,
try_operate_descriptor)


def generate_report(output, joined=None, report=None):
Expand Down
7 changes: 4 additions & 3 deletions behavex/outputs/report_json.py
Original file line number Diff line number Diff line change
Expand Up @@ -19,13 +19,14 @@
import traceback
from tempfile import gettempdir

from behave.step_registry import registry
from behave.model import ScenarioOutline
from behave.step_registry import registry

from behavex.conf_mgr import get_env
from behavex.global_vars import global_vars
from behavex.outputs.report_utils import get_error_message, match_for_execution, text
from behavex.utils import try_operate_descriptor, get_scenario_tags
from behavex.outputs.report_utils import (get_error_message,
match_for_execution, text)
from behavex.utils import get_scenario_tags, try_operate_descriptor


def add_step_info(step, parent_node):
Expand Down
2 changes: 1 addition & 1 deletion behavex/outputs/report_utils.py
Original file line number Diff line number Diff line change
Expand Up @@ -312,7 +312,7 @@ def replace(arroba, x, y):
for tag in tag_re.findall(tags_filter):
if tag not in ('not', 'and', 'or', 'True', 'False'):
tags_filter = tags_filter.replace(tag + ' ', 'False ')
return tags_filter == '' or eval(tags_filter)
return tags_filter == '' or eval(tags_filter) # nosec


def copy_bootstrap_html_generator(output):
Expand Down
9 changes: 3 additions & 6 deletions behavex/outputs/report_xml.py
Original file line number Diff line number Diff line change
Expand Up @@ -15,12 +15,9 @@
from behavex.conf_mgr import get_env
from behavex.global_vars import global_vars
from behavex.outputs.jinja_mgr import TemplateHandler
from behavex.outputs.report_utils import (
get_save_function,
match_for_execution,
text,
try_operate_descriptor,
)
from behavex.outputs.report_utils import (get_save_function,
match_for_execution, text,
try_operate_descriptor)
from behavex.utils import get_scenario_tags


Expand Down
51 changes: 17 additions & 34 deletions behavex/runner.py
Original file line number Diff line number Diff line change
Expand Up @@ -8,6 +8,7 @@
from __future__ import absolute_import, print_function

import codecs
import copy
import json
import logging.config
import multiprocessing
Expand All @@ -18,14 +19,13 @@
import signal
import sys
import time
import copy
import traceback
from tqdm import tqdm
from operator import itemgetter
from tempfile import gettempdir

from behave import __main__ as behave_script
from behave.model import ScenarioOutline, Scenario, Feature
from behave.model import Feature, Scenario, ScenarioOutline
from tqdm import tqdm

# noinspection PyUnresolvedReferences
import behavex.outputs.report_json
Expand All @@ -36,38 +36,21 @@
from behavex.execution_singleton import ExecutionSingleton
from behavex.global_vars import global_vars
from behavex.outputs import report_xml
from behavex.outputs.report_utils import (
get_overall_status,
match_for_execution,
pretty_print_time,
text,
try_operate_descriptor,
)
from behavex.utils import (
IncludeNameMatch,
IncludePathsMatch,
MatchInclude,
cleanup_folders,
configure_logging,
copy_bootstrap_html_generator,
create_partial_function_append,
explore_features,
generate_reports,
get_json_results,
get_logging_level,
join_feature_reports,
join_scenario_reports,
len_scenarios,
print_env_variables,
print_parallel,
set_behave_tags,
set_env_variable,
set_environ_config,
set_system_paths,
get_scenario_tags
)
from behavex.outputs.report_json import generate_execution_info

from behavex.outputs.report_utils import (get_overall_status,
match_for_execution,
pretty_print_time, text,
try_operate_descriptor)
from behavex.utils import (IncludeNameMatch, IncludePathsMatch, MatchInclude,
cleanup_folders, configure_logging,
copy_bootstrap_html_generator,
create_partial_function_append, explore_features,
generate_reports, get_json_results,
get_logging_level, get_scenario_tags,
join_feature_reports, join_scenario_reports,
len_scenarios, print_env_variables, print_parallel,
set_behave_tags, set_env_variable,
set_environ_config, set_system_paths)

EXIT_OK = 0
EXIT_ERROR = 1
Expand Down
11 changes: 4 additions & 7 deletions behavex/utils.py
Original file line number Diff line number Diff line change
Expand Up @@ -28,13 +28,10 @@
from behavex.global_vars import global_vars
from behavex.outputs import report_html
from behavex.outputs.output_strings import TEXTS
from behavex.outputs.report_utils import (
get_save_function,
match_for_execution,
normalize_filename,
get_string_hash,
try_operate_descriptor,
)
from behavex.outputs.report_utils import (get_save_function, get_string_hash,
match_for_execution,
normalize_filename,
try_operate_descriptor)

LOGGING_CFG = ConfigObj(os.path.join(global_vars.execution_path, 'conf_logging.cfg'))
LOGGING_LEVELS = {
Expand Down
Loading

0 comments on commit e76dea1

Please sign in to comment.