Commit 6a3e1d7 (deploy: 3278ee3)
rkdarst committed Aug 23, 2023
Showing 331 changed files with 90,874 additions and 0 deletions.
4 changes: 4 additions & 0 deletions .buildinfo
@@ -0,0 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: 2ec13c8fd951d6bd351ef681ff1b1893
tags: d77d1c0d9ca2f4c8421862c7c5a0d620
Empty file added .nojekyll
Empty file.
Binary file added _images/gh_action_commit.png
Binary file added _images/gl_action_commit.png
Binary file added _images/gl_green_check_mark.png
Binary file added _images/green_check_mark.png
Binary file added _images/python_application.png
Binary file added _images/suit.jpg
Binary file added _images/testing_group_work.png
Binary file added _images/unit-testing.jpg
107 changes: 107 additions & 0 deletions _sources/code/fortran/build_pFUnit.md.txt
@@ -0,0 +1,107 @@
---
orphan: true
---

# Example installation of pFUnit and pFUnit_demos

## Set up the environment

Installing pFUnit requires Git, a Fortran compiler, and CMake.

### On a cluster

On an HPC cluster you might need to add a few environment modules. For
example, on [Tetralith](https://www.nsc.liu.se/systems/tetralith/) you
would do:

```bash
module add git/2.19.3-nsc1-gcc-system
module add CMake/3.16.4-nsc1
export FC=/software/sse/manual/mpprun/4.1.3/nsc-wrappers/ifort
```

### On your own computer

If you don't have [CMake](https://cmake.org/) or a Fortran compiler
installed yet, one option is to install them into a conda environment
by first saving the following into a file `environment.yml`:

```yaml
name: compilers
channels:
  - conda-forge
dependencies:
  - cmake
  - compilers
```

followed by installing the packages into a new environment:
```bash
conda env create -f environment.yml
```
and finally activating the environment:
```bash
conda activate compilers
```

For good measure, set the `FC` variable to point to your
Fortran compiler (adjust path as needed):
```bash
export FC=$HOME/miniconda3/envs/compilers/bin/gfortran
```

## Clone the pFUnit repository

```bash
git clone --recursive [email protected]:Goddard-Fortran-Ecosystem/pFUnit.git
cd pFUnit
```

## Configure with CMake

```bash
mkdir build
cd build
cmake ..
```

## Build and install

The following will install pFUnit into a directory `installed` under
the `build` directory.

```bash
make tests
make install
```

## Compiling with pFUnit

When compiling with pFUnit, set:
```bash
export PFUNIT_DIR=$HOME/path/to/pFUnit/build/installed
```

and run CMake with `-DCMAKE_PREFIX_PATH=$PFUNIT_DIR`.
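As a rough sketch, the consuming project's `CMakeLists.txt` might locate pFUnit like this (the project name and test source file are hypothetical; `add_pfunit_ctest` is the helper provided by the pFUnit package):

```cmake
cmake_minimum_required(VERSION 3.12)
project(my_project LANGUAGES Fortran)

enable_testing()

# Found via -DCMAKE_PREFIX_PATH=$PFUNIT_DIR
find_package(PFUNIT REQUIRED)

# test_example.pf is a hypothetical pFUnit test source
add_pfunit_ctest(my_tests
  TEST_SOURCES test_example.pf
  )
```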


## Clone the pFUnit_demos repository

This is for testing and learning purposes.

```bash
git clone [email protected]:Goddard-Fortran-Ecosystem/pFUnit_demos.git
```

### Try out the Trivial, Basic, and MPI examples

```bash
cd pFUnit_demos
export PFUNIT_DIR=~/pFUnit/build/installed/PFUNIT-4.2
cd Trivial
./build_with_cmake_and_run.x
cd ../Basic
./build_with_cmake_and_run.x
cd ../MPI
./build_with_cmake_and_run.x
```
190 changes: 190 additions & 0 deletions _sources/concepts.md.txt
@@ -0,0 +1,190 @@
# Concepts

```{questions}
- What are unit tests, regression tests, and integration tests?
- What is test coverage?
- How should we approach testing?
```

```{figure} img/unit-testing.jpg
:alt: Tests are no guarantee
:width: 400px

Tests are no guarantee. Figure source: <https://twitter.com/dave1010/status/613601365529657344>
```


## How to test?

Imperfect tests **run frequently** are better than perfect tests that are
never written:
- Test **frequently** (each commit/push)
- Test **automatically** (e.g. using
[Azure pipelines](https://azure.microsoft.com/en-us/services/devops/pipelines/) or
[GitHub Actions](https://github.com/marketplace?type=actions) or [GitLab CI](https://docs.gitlab.com/ee/ci/) or similar services)
- Test with [numerical tolerance](http://www.smbc-comics.com/comic/2013-06-05)
(see also ["What Every Programmer Should Know About Floating-Point Arithmetic"](https://floating-point-gui.de/))
- Think about **code coverage** ([Coveralls](https://coveralls.io) or [Codecov](https://codecov.io) or similar services)
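The numerical-tolerance advice can be made concrete with a small sketch (the conversion function here is just an illustration):

```python
import math

def fahrenheit_to_celsius(temp_f):
    return (temp_f - 32.0) * 5.0 / 9.0

# Exact float comparison is fragile:
print(0.1 + 0.2 == 0.3)  # False, due to floating-point rounding

# Compare within a tolerance instead:
assert math.isclose(0.1 + 0.2, 0.3, rel_tol=1e-9)
assert abs(fahrenheit_to_celsius(100.0) - 37.777778) < 1.0e-5
```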

---

## Defensive programming

- Assume that mistakes will happen and introduce guards against them.
- Use **assertions** for things you believe will/should never happen.
- Use **exceptions** for anomalous or exceptional conditions requiring
special processing.

```python
def kelvin_to_celsius(temp_k):
"""
Converts temperature in Kelvin
to Celsius.
"""
assert temp_k >= 0.0, "ERROR: negative T_K"
temp_c = temp_k - 273.15
return temp_c
```
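Assertions guard against conditions that should never happen; for anomalous input that a caller may want to recover from, an exception is usually the better tool. A sketch of the same conversion using an exception (the function name is ours, to avoid clashing with the version above):

```python
def kelvin_to_celsius_checked(temp_k):
    """
    Converts temperature in Kelvin to Celsius,
    raising an exception on invalid input.
    """
    if temp_k < 0.0:
        raise ValueError(f"negative temperature in Kelvin: {temp_k}")
    return temp_k - 273.15
```

Unlike assertions, exceptions are not stripped when Python runs with `-O`, and callers can catch and handle them explicitly.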

---

## Unit tests

- **Unit tests** are small automated tests, typically written as functions
- Each one tests one unit: a module or even a single function
- They serve as good documentation of the capability and dependencies of a module
- Unit tests are not about testing units of measure or unit conversion; they are
  about testing small components (units) of a code
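A minimal sketch of a unit test following `pytest` conventions (the function under test is invented for illustration):

```python
def celsius_to_fahrenheit(temp_c):
    return temp_c * 9.0 / 5.0 + 32.0

# one unit test, exercising one function
def test_celsius_to_fahrenheit():
    assert abs(celsius_to_fahrenheit(0.0) - 32.0) < 1.0e-6
    assert abs(celsius_to_fahrenheit(100.0) - 212.0) < 1.0e-6
```

Running `pytest` in the project directory collects and runs every `test_*` function automatically.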

---

## Integration tests

- **Integration tests** verify whether multiple modules work well together
- As in car assembly, we have to test each component independently, and also
  whether the components work together when combined
- Unit tests can be used to test independent components (_e.g._ battery,
  controller, motor) and integration tests to check whether the car works as a whole
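The car analogy above can be sketched in code (all component names and values are made up for illustration):

```python
def battery_voltage():
    return 12.0

def motor_speed(voltage):
    return 100.0 * voltage

def car_speed():
    # combines the two components
    return motor_speed(battery_voltage())

# unit tests: one component each
def test_battery_voltage():
    assert battery_voltage() == 12.0

def test_motor_speed():
    assert motor_speed(12.0) == 1200.0

# integration test: the components combined
def test_car_speed():
    assert car_speed() == 1200.0
```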

---

## Regression tests

- Like integration tests, **regression tests** often operate on the
  whole code base
- Rather than assuming that the test author knows what the correct
result should be, regression tests look to the past for the expected behavior
- Often spans multiple code versions: when developing a new version, input
and output files of a previous version are used to test that the same
behaviour is observed
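A sketch of the pattern, with a stand-in computation and an inlined reference (in practice the reference would be read from an output file produced by a previous, trusted version):

```python
def simulate():
    # stand-in for the real, expensive computation
    return [1.0, 2.0, 3.0]

def test_regression():
    # expected output recorded from an earlier run,
    # not derived from theory
    reference = [1.0, 2.0, 3.0]
    result = simulate()
    assert len(result) == len(reference)
    assert all(abs(r - e) < 1.0e-12 for r, e in zip(result, reference))
```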

---

## Test-driven development

- In **test-driven development**, one writes tests before writing code
- Very often we know the result that a function is supposed to produce
- Development cycle (red, green, refactor):
- Write the test
- Write an empty function template
- **Verify that the test fails**
- Program until the test passes
- Perhaps improve until you are happy (refactor)
- Move on
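One red-green cycle for a hypothetical `word_count` function might look like this:

```python
# 1. write the test first
def test_word_count():
    assert word_count("to be or not to be") == 6

# 2. with only an empty template
#        def word_count(text):
#            pass
#    running the test fails, as it should (red)

# 3. program until the test passes (green)
def word_count(text):
    return len(text.split())
```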

---

## Continuous integration

- **Continuous integration** means automatically testing
  every single commit/push (you test whether code integrates **before** you integrate it)

---

## Code coverage

- If I break the code and all tests pass, who is to blame?
- **Code coverage** measures and documents which lines of code have been traversed during a test run
- It is possible to have line-by-line coverage
- [Real-life example](https://coveralls.io/github/bast/runtest)

---

## Total time to test matters

- If the test set takes 7 hours to run, it is likely that nobody will run it
- Identify a fast, essential test set that has sufficient coverage and is
  short enough to be run before each commit or push
- Test code can be marked (grouped). Here a `pytest` test is marked:

```python
import pytest

# illustrative definition of the function under test
def fahrenheit_to_celsius(temp_f):
    return (temp_f - 32.0) * 5.0 / 9.0

@pytest.mark.conversion
def test_fahrenheit_to_celsius():
    temp_c = fahrenheit_to_celsius(temp_f=100.0)
    expected_result = 37.777777
    assert abs(temp_c - expected_result) < 1.0e-6
```

```sh
$ pytest -v -m conversion
```

---

## Tests don't guarantee correctness

Not only do tests not guarantee the absence of bugs, they can
also contain their own bugs.
Here's an example of how we could get the testing of the
`kelvin_to_celsius` function wrong:

```python
def kelvin_to_celsius(temp_k):
"""
Converts temperature in Kelvin
to Celsius.
"""
assert temp_k >= 0.0, "ERROR: negative T_K"
temp_c = temp_k + 273.15 # BUG!
return temp_c

# buggy test
def test_kelvin_to_celsius():
temp_c = kelvin_to_celsius(temp_k=0.0)
expected_result = 273.15
assert abs(temp_c - expected_result) < 1.0e-6
```

All tests are happy! The expected value in the test encodes the same sign
error as the code, so the bug goes unnoticed.
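A sturdier test probes more than one input, so that the expected values are unlikely to all share the code's mistake (sketch, with the corrected function included to keep it self-contained):

```python
def kelvin_to_celsius(temp_k):
    assert temp_k >= 0.0, "ERROR: negative T_K"
    return temp_k - 273.15  # correct sign

def test_kelvin_to_celsius():
    # 0 K is -273.15 C; the buggy version above would return +273.15 here
    assert abs(kelvin_to_celsius(temp_k=0.0) - (-273.15)) < 1.0e-6
    # water boils at 373.15 K = 100 C
    assert abs(kelvin_to_celsius(temp_k=373.15) - 100.0) < 1.0e-6
```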

---

## Testing frameworks

A large number of testing frameworks, tools and libraries are available for
different programming languages. Some of the most popular ones are listed
in the [Quick Reference](./quick-reference).

---

## Good practices

- Test before committing (use the Git staging area)
- Fix broken tests immediately (dirty dishes effect)
- Do not deactivate tests "temporarily"
- Think about coverage (physics and lines of code)
- Go public with your testing dashboard and issues/tickets
- Test controlled errors: if something is expected to fail, test for that
- Create benchmark calculations to cover various performance-critical modules and monitor timing
- Make testing easy to run (`make test`)
- Make testing easy to analyze
- Do not flood screen with pages of output in case everything runs OK
- Test with numerical tolerance (extremely annoying to compare digits by eye)


```{keypoints}
- Assertions, exceptions, unit tests, integration tests, and regression
  tests are used to test code at different levels
- Test-driven development is one way to develop code which is tested
  from the start
- Continuous integration is when every commit/merge is tested
  automatically
```
11 changes: 11 additions & 0 deletions _sources/conclusions.md.txt
@@ -0,0 +1,11 @@
# Conclusions and recommendations

- Explore and use the good tools that exist out there
- An incomplete list of testing frameworks can be found in the [Quick Reference](quick-reference)
- Automate tests: faster feedback and reduce the number of surprises
- Strike a healthy balance between unit tests and integration tests
- When adding new functionality, also add tests
- When you discover and fix a bug, also commit a test against this bug
- Use code coverage analysis to identify untested or unused code
- If you make your code easier to test, it becomes more modular
