This document outlines the testing standards and practices for the Helm Values Manager project.
The test suite is organized as follows:

```
tests/
├── unit/                 # Unit tests
│   ├── models/           # Tests for core models
│   │   └── test_value.py
│   └── backends/         # Tests for backend implementations
├── integration/          # Integration tests
│   └── backends/         # Backend integration tests
└── conftest.py           # Shared test fixtures
```
Unit tests focus on testing individual components in isolation. They should:
- Test a single unit of functionality
- Mock external dependencies
- Be fast and independent
- Cover both success and error cases
Example from `test_value.py`:
```python
def test_value_init_local():
    """Test Value initialization with local storage."""
    value = Value(path="app.replicas", environment="dev")
    assert value.path == "app.replicas"
    assert value.environment == "dev"
    assert value.storage_type == "local"
```
Integration tests verify component interactions, especially with external services:
- Test actual backend implementations
- Verify configuration loading
- Test end-to-end workflows
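As a sketch of how related tests can be grouped in a class with descriptive names, the example below uses an `InMemoryBackend` that is a hypothetical stand-in written for illustration, not one of the project's real backends:

```python
import pytest


class InMemoryBackend:
    """Hypothetical stand-in for a real backend implementation."""

    def __init__(self):
        self._store = {}

    def set_value(self, key, value):
        self._store[key] = value

    def get_value(self, key):
        # Raises KeyError for unknown keys, like a dict.
        return self._store[key]


class TestBackendRoundTrip:
    """Group related backend tests in one class."""

    @pytest.fixture
    def backend(self):
        return InMemoryBackend()

    def test_set_then_get_returns_stored_value(self, backend):
        backend.set_value("app.replicas", "3")
        assert backend.get_value("app.replicas") == "3"

    def test_get_missing_key_raises(self, backend):
        with pytest.raises(KeyError):
            backend.get_value("missing")
```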
Test organization:
- Group related tests in classes
- Use descriptive test names
- Follow the file structure of the implementation
- Keep test files focused and manageable
Coverage requirements:
- Aim for 100% code coverage
- Test all code paths
- Include edge cases
- Test error conditions
Best practices for writing tests:
- Follow the Arrange-Act-Assert pattern
- Keep tests independent
- Use appropriate assertions
- Write clear test descriptions
Example:
```python
def test_set_invalid_type():
    """Test setting a non-string value."""
    value = Value(path="app.replicas", environment="dev")
    with pytest.raises(ValueError, match="Value must be a string"):
        value.set(3)
```
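The Arrange-Act-Assert pattern listed above can be sketched as follows; the `Value` class here is a minimal stand-in written for illustration, not the project's actual implementation:

```python
class Value:
    """Minimal stand-in mirroring the project's Value semantics (assumption)."""

    def __init__(self, path, environment):
        self.path = path
        self.environment = environment
        self._data = None

    def set(self, value):
        if not isinstance(value, str):
            raise ValueError("Value must be a string")
        self._data = value

    def get(self):
        return self._data


def test_set_stores_string_value():
    """Each phase of Arrange-Act-Assert sits in its own block."""
    # Arrange: create the object under test
    value = Value(path="app.replicas", environment="dev")
    # Act: perform the single operation being tested
    value.set("3")
    # Assert: verify the observable outcome
    assert value.get() == "3"
```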
Fixtures and mocking:
- Use fixtures for common setup
- Mock external dependencies
- Keep mocks simple and focused
- Use appropriate scoping
Example:
```python
@pytest.fixture
def mock_backend():
    """Create a mock backend for testing."""
    backend = Mock(spec=ValueBackend)
    backend.get_value.return_value = "mock_value"
    return backend
```
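A test might then consume such a mock through injection. In this hedged sketch, `resolve` and the inline `ValueBackend` interface are hypothetical stand-ins for the project's real code:

```python
from unittest.mock import Mock


class ValueBackend:
    """Stand-in interface; the real one lives in the backends package."""

    def get_value(self, key):
        raise NotImplementedError


def resolve(backend, key):
    """Hypothetical helper that reads a value through a backend."""
    return backend.get_value(key)


def test_resolve_uses_backend():
    # Mock(spec=...) rejects calls to attributes the interface lacks,
    # which keeps the mock honest about the real API surface.
    backend = Mock(spec=ValueBackend)
    backend.get_value.return_value = "mock_value"

    assert resolve(backend, "app.replicas") == "mock_value"
    backend.get_value.assert_called_once_with("app.replicas")
```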
```bash
# Run all tests
python -m pytest

# Run specific test file
python -m pytest tests/unit/models/test_value.py

# Run with coverage
python -m pytest --cov=helm_values_manager

# Run tests in all environments
tox

# Run for specific Python version
tox -e py39
```
Each test should have:
- Clear docstring explaining purpose
- Well-structured setup and teardown
- Clear assertions with messages
- Proper error case handling
Example:
```python
def test_from_dict_missing_path():
    """Test deserializing with missing path field."""
    data = {
        "environment": "dev",
        "storage_type": "local",
        "value": "3",
    }
    with pytest.raises(ValueError, match="Missing required field: path"):
        Value.from_dict(data)
```
- **Test Independence**
  - Each test should run in isolation
  - Clean up after tests
  - Don't rely on test execution order
- **Test Readability**
  - Use clear, descriptive names
  - Document test purpose
  - Keep tests simple and focused
- **Test Maintenance**
  - Update tests when the implementation changes
  - Remove obsolete tests
  - Keep test code as clean as production code
- **Performance**
  - Keep unit tests fast
  - Use appropriate fixtures
  - Mock expensive operations
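Mocking an expensive operation, as the last point suggests, might look like this sketch, where `fetch_remote_value` is an illustrative slow function, not part of the project:

```python
import time
from unittest.mock import patch


def fetch_remote_value():
    """Illustrative slow operation: pretend this call hits the network."""
    time.sleep(5)
    return "remote"


def test_fetch_remote_value_without_waiting():
    # Patch time.sleep so the test completes instantly instead of sleeping.
    with patch("time.sleep") as mock_sleep:
        assert fetch_remote_value() == "remote"
        mock_sleep.assert_called_once_with(5)
```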
Our CI pipeline:
- Runs all tests
- Checks code coverage
- Enforces style guidelines
- Runs security checks