Merge branch 'main' into sigma_clip
hbushouse authored Jun 21, 2023
2 parents 13819d2 + cc1edfd commit 6687594
Showing 14 changed files with 415 additions and 75 deletions.
4 changes: 1 addition & 3 deletions .github/workflows/ci.yml
@@ -31,9 +31,7 @@ jobs:
with:
envs: |
- linux: test-oldestdeps-cov-xdist
python-version: 3.8
- linux: test-xdist
python-version: '3.8'
python-version: 3.9
- linux: test-xdist
python-version: '3.9'
- linux: test-xdist
3 changes: 1 addition & 2 deletions .github/workflows/ci_cron.yml
@@ -11,9 +11,8 @@ jobs:
uses: OpenAstronomy/github-actions-workflows/.github/workflows/tox.yml@v1
with:
envs: |
- macos: test-xdist
python-version: 3.8
- macos: test-xdist
python-version: 3.9
- macos: test-xdist
python-version: 3.10
- linux: test-devdeps-xdist
10 changes: 9 additions & 1 deletion .readthedocs.yaml
@@ -16,9 +16,17 @@ formats:
- htmlzip
- pdf

build:
os: ubuntu-22.04
tools:
python: mambaforge-4.10

conda:
environment: docs/rtd_environment.yaml

# Optionally set the version of Python and requirements required to build your docs
python:
version: 3.8
system_packages: false
install:
- method: pip
path: .
63 changes: 63 additions & 0 deletions CHANGES.rst
@@ -26,6 +26,69 @@ Changes to API
Other
-----

-

1.3.8 (2023-05-31)
==================

Bug Fixes
---------

dark_current
~~~~~~~~~~~~

- Fixed handling of MIRI segmented data files so that the correct dark
integrations get subtracted from the correct science integrations. [#165]

ramp_fitting
~~~~~~~~~~~~

- Correct the "averaging" of the final image slope by properly excluding
variances as a part of the denominator from integrations with invalid slopes.
[#167]
- Removed the usage of ``numpy.where`` where possible for performance
  reasons. [#169]
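
The two fixes above can be sketched with a minimal, self-contained example
(array names and values are illustrative, not stcal's actual variables): the
final per-pixel slope is an inverse-variance weighted average in which invalid
integrations contribute to neither the numerator nor the denominator, and a
boolean mask replaces ``numpy.where``.

    import numpy as np

    # Per-integration slopes and variances for one pixel (example data);
    # NaN marks an integration whose slope could not be computed.
    slopes = np.array([1.02, np.nan, 0.98, 1.05])
    variances = np.array([0.04, 0.09, 0.05, 0.06])

    # Boolean mask instead of numpy.where: invalid integrations are excluded
    # from both the numerator and the denominator of the weighted average.
    valid = np.isfinite(slopes)
    weights = 1.0 / variances[valid]
    final_slope = np.sum(weights * slopes[valid]) / np.sum(weights)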

1.3.7 (2023-04-26)
==================

Bug Fixes
---------

ramp_fitting
~~~~~~~~~~~~

- Correctly compute the number of groups in a segment to properly compute the
optimal weights for the OLS ramp fitting algorithm. Originally, this
computation had the potential to include groups not in the segment being
computed. [#163]
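
A minimal sketch of the idea behind this fix (function and variable names are
assumptions for illustration, not stcal's implementation): everything fed to
the per-segment fit, including the group count used for the weights, must be
sliced to the segment.

    import numpy as np

    def segment_slope(ramp, group_time, seg_start, seg_end):
        # Use only the groups inside [seg_start, seg_end); the group count
        # for this segment is seg_end - seg_start, not the ramp length.
        times = np.arange(seg_start, seg_end) * group_time
        values = ramp[seg_start:seg_end]
        ngroups_seg = seg_end - seg_start
        slope = np.polyfit(times, values, 1)[0]
        return slope, ngroups_seg

    # Example: a 10-group ramp in which only groups 3..6 belong to the segment.
    ramp = np.arange(10, dtype=float) * 2.0
    print(segment_slope(ramp, group_time=10.0, seg_start=3, seg_end=7))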

Changes to API
--------------

- Drop support for Python 3.8. [#162]

1.3.6 (2023-04-19)
==================

Bug Fixes
---------

ramp_fitting
~~~~~~~~~~~~

- The ``meta`` tag was missing when checking for ``drop_frame1``. It has been
added to the check. [#161]


Changes to API
--------------

-

Other
-----

- Remove use of deprecated ``pytest-openfiles`` ``pytest`` plugin. This has been replaced by
catching ``ResourceWarning``s. [#159]
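
One common replacement for the plugin is to let ``pytest`` turn
``ResourceWarning`` into a test failure via its warning filters; the command
below is a sketch of that approach, not necessarily the exact configuration
used by this repository.

    pytest -W error::ResourceWarning
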
135 changes: 134 additions & 1 deletion README.md
@@ -1,4 +1,4 @@
# stcal
# STCAL

[![Documentation Status](https://readthedocs.org/projects/stcal/badge/?version=latest)](http://stcal.readthedocs.io/en/latest/?badge=latest)

@@ -7,3 +7,136 @@
[![codecov](https://codecov.io/gh/spacetelescope/stcal/branch/main/graph/badge.svg?token=C1LO00W9CZ)](https://codecov.io/gh/spacetelescope/stcal)

STScI Calibration algorithms and tools.

![STScI Logo](docs/_static/stsci_logo.png)

**STCAL requires Python 3.9 or above and a C compiler for dependencies.**

**Linux and MacOS platforms are tested and supported. Windows is not currently supported.**

**If installing on MacOS Mojave 10.14, you must install
into an environment with python 3.9. Installation will fail on python 3.10 due
to lack of a stable build for dependency ``opencv-python``.**


`STCAL` is intended to be used as a support package for calibration pipeline
software, such as the `JWST` and `Roman` calibration pipelines. It is kept as a
separate package so that its algorithms can be reused by multiple calibration
pipelines. Although it is primarily a support package, it can also be installed
and used as a standalone package; standalone usage can be unwieldy, however,
and it is usually easier to use `STCAL` through the calibration software. The
main use case for a standalone installation is development work, such as bug
fixes and feature additions. When installing calibration pipelines that depend
on `STCAL`, this package is installed automatically as a dependency.

## Installation

The easiest way to install the latest `stcal` release into a fresh virtualenv or conda environment is

pip install stcal

### Detailed Installation

The `stcal` package can be installed into a virtualenv or conda environment via `pip`.
We recommend that for each installation you start by creating a fresh
environment that only has Python installed and then install the `stcal` package and
its dependencies into that bare environment.
If using conda environments, first make sure you have a recent version of Anaconda
or Miniconda installed.
If desired, you can create multiple environments to allow for switching between different
versions of the `stcal` package (e.g. a released version versus the current development version).

In all cases, the installation is generally a 3-step process:
* Create a conda environment
* Activate that environment
* Install the desired version of the `stcal` package into that environment

Details are given below on how to do this for different types of installations,
including tagged releases and development versions.
Remember that all conda operations must be done from within a bash/zsh shell.

### Installing latest releases

You can install the latest released version via `pip`. From a bash/zsh shell:

conda create -n <env_name> python
conda activate <env_name>
pip install stcal

You can also install a specific version, for example `stcal 1.3.2`:

conda create -n <env_name> python
conda activate <env_name>
pip install stcal==1.3.2

### Installing the development version from Github

You can install the latest development version (not as well tested) from the
GitHub main branch:

conda create -n <env_name> python
conda activate <env_name>
pip install git+https://github.com/spacetelescope/stcal

### Installing for Developers

If you want to be able to work on and test the source code with the `stcal` package,
the high-level procedure to do this is to first create a conda environment using
the same procedures outlined above, but then install your personal copy of the
code on top of the original code in that environment. Again, this should be done
in a separate conda environment from any existing environments that you may have
already installed with released versions of the `stcal` package.

As usual, the first two steps are to create and activate an environment:

conda create -n <env_name> python
conda activate <env_name>

To install your own copy of the code into that environment, you first need to
fork and clone the `stcal` repo:

cd <where you want to put the repo>
git clone https://github.com/spacetelescope/stcal
cd stcal

*Note: `python setup.py install` and `python setup.py develop` commands do not work.*

Install from your local checked-out copy as an "editable" install:

pip install -e .

If you want to run the unit or regression tests and/or build the docs, you can make
sure those dependencies are installed too:

pip install -e ".[test]"
pip install -e ".[docs]"
pip install -e ".[test,docs]"

Need other useful packages in your development environment?

pip install ipython jupyter matplotlib pylint ipdb


## Contributions and Feedback

We welcome contributions and feedback on the project. Please follow the
[contributing guidelines](CONTRIBUTING.md) to submit an issue or a pull request.

We strive to provide a welcoming community to all of our users by abiding with
the [Code of Conduct](CODE_OF_CONDUCT.md).

If you have questions or concerns regarding the software, please open an issue
at https://github.com/spacetelescope/stcal/issues.

## Unit Tests

Unit tests can be run via `pytest`. Within the top level of your local `stcal` repo checkout:

pip install -e ".[test]"
pytest

Need to parallelize your test runs over all available cores?

pip install pytest-xdist
pytest -n auto
8 changes: 8 additions & 0 deletions docs/rtd_environment.yaml
@@ -0,0 +1,8 @@
name: rtd311
channels:
- conda-forge
- defaults
dependencies:
- python=3.11
- pip
- graphviz
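
The new environment file can also be used to reproduce the Read the Docs build
locally; the commands below are illustrative (the ``sphinx-build`` source and
output paths are assumptions, not part of this change).

    conda env create -f docs/rtd_environment.yaml
    conda activate rtd311
    pip install -e ".[docs]"
    sphinx-build -b html docs docs/_build/html
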
4 changes: 2 additions & 2 deletions pyproject.toml
@@ -2,7 +2,7 @@
name = 'stcal'
description = 'STScI tools and algorithms used in calibration pipelines'
readme = 'README.md'
requires-python = '>=3.8'
requires-python = '>=3.9'
license = { file = 'LICENSE' }
authors = [{ name = 'STScI', email = '[email protected]' }]
classifiers = [
@@ -15,7 +15,7 @@ dependencies = [
'astropy >=5.0.4',
'scipy >=1.6.0',
'numpy >=1.20',
'opencv-python >=4.6.0.66',
'opencv-python-headless >=4.6.0.66',
]
dynamic = ['version']
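
The switch from ``opencv-python`` to ``opencv-python-headless`` keeps the same
``cv2`` module and algorithm API while dropping the GUI components, which are
not needed in pipeline environments; the snippet below is only a trivial sanity
check, not part of stcal.

    import cv2

    # The headless wheel imports identically; only display functions such as
    # cv2.imshow() are unavailable.
    print(cv2.__version__)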

5 changes: 5 additions & 0 deletions src/stcal/dark_current/dark_class.py
@@ -98,6 +98,10 @@ def __init__(self, science_model=None):

self.exp_nframes = science_model.meta.exposure.nframes
self.exp_groupgap = science_model.meta.exposure.groupgap
try: # JWST only
self.exp_intstart = science_model.meta.exposure.integration_start
except AttributeError:
self.exp_intstart = None

self.cal_step = None
else:
@@ -108,5 +112,6 @@ def __init__(self, science_model=None):

self.exp_nframes = None
self.exp_groupgap = None
self.exp_intstart = None

self.cal_step = None
33 changes: 21 additions & 12 deletions src/stcal/dark_current/dark_sub.py
@@ -143,10 +143,10 @@ def do_correction_data(science_data, dark_data, dark_output=None):

# Create a frame-averaged version of the dark data to match
# the nframes and groupgap settings of the science data.
# If the data are from MIRI, the darks are integration-dependent and
# we average them with a separate routine.
# If the data are from JWST/MIRI, the darks are integration-dependent
# and we average them with a separate routine.

if len(dark_data.data.shape) == 4:
if len(dark_data.data.shape) == 4: # only MIRI uses 4-D darks
averaged_dark = average_dark_frames_4d(
dark_data, sci_nints, sci_ngroups, sci_nframes, sci_groupgap
)
Expand All @@ -173,7 +173,7 @@ def average_dark_frames_3d(dark_data, ngroups, nframes, groupgap):
"""
Averages the individual frames of data in a dark reference
file to match the group structure of a science data set.
This routine is not used for MIRI (see average_MIRIdark_frames)
This routine is not used for JWST/MIRI (see average_dark_frames_4d).
Parameters
----------
@@ -240,8 +240,9 @@ def average_dark_frames_4d(dark_data, nints, ngroups, nframes, groupgap):
"""
Averages the individual frames of data in a dark reference
file to match the group structure of a science data set.
MIRI needs a separate routine because the darks are integration dependent.
We need an average dark for each dark integration.
JWST/MIRI needs a separate routine because the darks are
integration-dependent and hence 4D in shape, instead of 3D.
An average dark is created for each integration.
Parameters
----------
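
A minimal sketch of the per-integration frame averaging described above
(function and variable names are assumed for illustration; the real routine
also handles additional arrays and metadata):

    import numpy as np

    def average_one_integration(dark_frames, ngroups, nframes, groupgap):
        # dark_frames has shape (total_frames, ny, nx) for one integration.
        # Each output group is the mean of `nframes` consecutive dark frames,
        # and `groupgap` frames are skipped between groups.
        groups = []
        start = 0
        for _ in range(ngroups):
            groups.append(dark_frames[start:start + nframes].mean(axis=0))
            start += nframes + groupgap
        return np.stack(groups)
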
@@ -339,6 +340,11 @@ def subtract_dark(science_data, dark_data):
dark-subtracted science data
"""

# The integration start number is only needed for JWST/MIRI data.
# It defaults to 1 if the keyword is not in the science data.
int_start = 1 if science_data.exp_intstart is None else science_data.exp_intstart

# Determine the number of integrations contained in the dark reference file
if len(dark_data.data.shape) == 4:
dark_nints = dark_data.data.shape[0]
else:
@@ -361,22 +367,25 @@
# All other instruments have a single 2D dark DQ array
darkdq = dark_data.groupdq

# Combine the dark and science DQ data
# Propagate the dark DQ data into the science DQ data
output.pixeldq = np.bitwise_or(science_data.pixeldq, darkdq)

# Loop over all integrations in input science data
for i in range(science_data.data.shape[0]):

if len(dark_data.data.shape) == 4:
# use integration-specific dark data
if i < dark_nints:
if len(dark_data.data.shape) == 4: # MIRI data
# Apply the first dark_nints-1 integrations from the dark ref file
# to the first few science integrations. There's an additional
# check of the starting integration number in case the science
# data are segmented.
if i < dark_nints and int_start == 1:
dark_sci = dark_data.data[i]
else:
# for science integrations beyond the number of
# For science integrations beyond the number of
# dark integrations, use the last dark integration
dark_sci = dark_data.data[-1]
else:
# use single-integration dark data
# Use single-integration dark data
dark_sci = dark_data.data

# Loop over all groups in this integration
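
The integration-matching rule added above can be summarized in a short sketch
(illustrative only; this helper is not part of stcal):

    def pick_dark_integration(sci_index, dark_nints, int_start):
        # sci_index is the 0-based integration index within this science file.
        # Dark integrations are matched one-to-one only when the science file
        # starts at integration 1; otherwise, and for science integrations
        # beyond the available dark integrations, the last dark is reused.
        if sci_index < dark_nints and int_start == 1:
            return sci_index
        return dark_nints - 1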