Commit
Merge pull request #213 from yarikoptic/enh-codespell
arokem authored Dec 15, 2023
2 parents 1dc1d22 + d19e190 commit 679f309
Showing 18 changed files with 64 additions and 36 deletions.
22 changes: 22 additions & 0 deletions .github/workflows/codespell.yml
@@ -0,0 +1,22 @@
---
name: Codespell

on:
push:
branches: [master]
pull_request:
branches: [master]

permissions:
contents: read

jobs:
codespell:
name: Check for spelling errors
runs-on: ubuntu-latest

steps:
- name: Checkout
uses: actions/checkout@v3
- name: Codespell
uses: codespell-project/actions-codespell@v2
2 changes: 1 addition & 1 deletion doc/discussion/interval_object.rst
@@ -49,7 +49,7 @@ consistent, in the same manner that is already implemented in
represent a time offset, relative to the attributes :attr:`t_start` and
:attr:`t_stop`. That is, it can tell us where relative to these two
time-points some interesting even, which this interval surrounds, or this
-interval is close to, occurs. This can be used in order to interpert how
+interval is close to, occurs. This can be used in order to interpret how
time-series access is done using the :class:`TimeInterval` object. See
:ref:`time_series_access`. This attribute can be implemented as an optional
input on initialization, such that it defaults to be equal to
2 changes: 1 addition & 1 deletion doc/discussion/multitaper_jackknife.rst
@@ -16,7 +16,7 @@ General JN definitions
| **pseudovalues**
| :math:`\hat{\theta}_i = n\hat{\theta} - (n-1)\hat{\theta}_{-i}`
-Now the jackknifed esimator is computed as
+Now the jackknifed estimator is computed as

:math:`\tilde{\theta} = \dfrac{1}{n}\sum_i \hat{\theta}_i = n\hat{\theta} - \dfrac{n-1}{n}\sum_i \hat{\theta}_{-i}`
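The pseudovalue identity in this file can be checked numerically. A minimal sketch, assuming a generic estimator and NumPy (the function name is illustrative, not nitime's API):

```python
import numpy as np

def jackknife_estimate(x, theta=np.mean):
    """Jackknifed estimator via pseudovalues, as in the formulas above."""
    n = len(x)
    theta_hat = theta(x)
    # leave-one-out estimates: theta_{-i}
    loo = np.array([theta(np.delete(x, i)) for i in range(n)])
    # pseudovalues: theta_i = n*theta_hat - (n - 1)*theta_{-i}
    pseudo = n * theta_hat - (n - 1) * loo
    # jackknifed estimator: the mean of the pseudovalues
    return pseudo.mean()
```

For a linear statistic such as the sample mean the jackknife reproduces the plug-in estimate exactly; for nonlinear statistics (e.g. the biased variance) the pseudovalue average supplies the familiar bias correction.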

4 changes: 2 additions & 2 deletions doc/discussion/time_series_access.rst
@@ -47,7 +47,7 @@ smaller.
~~~~~~~~~~~~~~~~~~~~

:func:`ut.index_at` returns the indices of the values in the array that are
-the largest time values, smaller thatn the input values t. That is, it returns i
+the largest time values, smaller than the input values t. That is, it returns i
for which $t_i$ is the maximal one, which still fulfills: $t_i<t$.

Questions
@@ -56,7 +56,7 @@ The following questions apply to all three cases:

* what happens when the t is smaller than the smallest entry in the array
return None?
-* what happens when t is larget than the last entry in the time array? return
+* what happens when t is larger than the last entry in the time array? return
None?
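The lookup described here can be sketched with ``np.searchsorted`` — a hypothetical sketch of the behavior, not nitime's implementation, with the first question answered by returning None:

```python
import numpy as np

def index_at_sketch(time, t):
    """Index i of the largest time value t_i that satisfies t_i < t.

    Returns None when t is smaller than the smallest entry; when t is
    larger than the last entry, the last index is returned.
    """
    # side='left' makes the comparison strict (t_i < t, not <=)
    i = np.searchsorted(time, t, side='left') - 1
    return None if i < 0 else int(i)
```

Returning the last index for out-of-range t on the right is one possible answer to the second question above; returning None there instead would be a one-line change.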

:func:`at`
2 changes: 1 addition & 1 deletion doc/examples/filtering_fmri.py
@@ -242,7 +242,7 @@
We can do that by initializng a SpectralAnalyzer for each one of the filtered
time-series resulting from the above operation and plotting their spectra. For
ease of compariso, we only plot the spectra using the multi-taper spectral
-estimation. At the level of granularity provided by this method, the diferences
+estimation. At the level of granularity provided by this method, the differences
between the methods are emphasized:
"""
4 changes: 2 additions & 2 deletions doc/examples/multi_taper_spectral_estimation.py
@@ -117,7 +117,7 @@
.. image:: fig/multi_taper_spectral_estimation_02.png
As before, the left figure displays the windowing function in the temporal
-domain and the figure on the left displays the attentuation of spectral leakage
+domain and the figure on the left displays the attenuation of spectral leakage
in the other frequency bands in the spectrum. Notice that though different
windowing functions have different spectral attenuation profiles, trading off
attenuation of leakage from frequency bands near the frequency of interest
@@ -302,7 +302,7 @@ def dB(x, out=None):
.. image:: fig/multi_taper_spectral_estimation_06.png
-As metioned above, in addition to estimating the spectrum itself, an estimate
+As mentioned above, in addition to estimating the spectrum itself, an estimate
of the confidence interval of the spectrum can be generated using a
jack-knifing procedure [Thomson2007]_.
2 changes: 1 addition & 1 deletion doc/users/overview.rst
@@ -67,7 +67,7 @@ object does not trigger any intensive computations. Instead the computation of
the attributes of analyzer objects is delayed until the moment the user calls
these attributes. In addition, once a computation is triggered it is stored as
an attribute of the object, which assures that accessing the results of an
-analysis will trigger the computation only on the first time the analysis resut
+analysis will trigger the computation only on the first time the analysis result
is required. Thereafter, the result of the analysis is stored for further use
of this result.
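The delayed, compute-once behavior described here is the familiar cached-attribute pattern. A minimal sketch of the idea — class and attribute names are illustrative, not nitime's actual implementation:

```python
class LazyAnalyzer:
    """Sketch of an analyzer attribute computed on first access only."""

    def __init__(self, data):
        self.data = data
        self._spectrum = None  # nothing computed at initialization

    @property
    def spectrum(self):
        if self._spectrum is None:
            # stand-in for an expensive analysis; runs only once,
            # then the cached result is returned on later accesses
            self._spectrum = sum(x * x for x in self.data)
        return self._spectrum
```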

2 changes: 1 addition & 1 deletion nitime/__init__.py
@@ -13,7 +13,7 @@
- ``analysis``: Contains *Analyzer* objects, which implement particular
analysis methods on the time-series objects
-- ``viz``: Vizualization
+- ``viz``: Visualization
All of the sub-modules will be imported as part of ``__init__``, so that users
have all of these things at their fingertips.
2 changes: 1 addition & 1 deletion nitime/algorithms/autoregressive.py
@@ -289,7 +289,7 @@ def AR_psd(ak, sigma_v, n_freqs=1024, sides='onesided'):
Returns
-------
(w, ar_psd)
-w : Array of normalized frequences from [-.5, .5) or [0,.5]
+w : Array of normalized frequencies from [-.5, .5) or [0,.5]
ar_psd : A PSD estimate computed by sigma_v / |1-a(f)|**2 , where
a(f) = DTFT(ak)
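The docstring's formula can be evaluated directly. A sketch of the one-sided case, assuming NumPy — illustrative only, not nitime's ``AR_psd``:

```python
import numpy as np

def ar_psd_sketch(ak, sigma_v, n_freqs=1024):
    """One-sided AR PSD: sigma_v / |1 - a(f)|**2, a(f) = DTFT(ak)."""
    f = np.linspace(0, 0.5, n_freqs)      # normalized frequencies [0, .5]
    k = np.arange(1, len(ak) + 1)         # AR lags 1..p
    # a(f) = sum_k ak[k-1] * exp(-2j*pi*f*k)
    a_f = np.exp(-2j * np.pi * np.outer(f, k)) @ ak
    return f, sigma_v / np.abs(1 - a_f) ** 2
```

With all coefficients zero the process is white noise and the sketch returns a flat spectrum at ``sigma_v``, which is a quick sanity check.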
6 changes: 3 additions & 3 deletions nitime/algorithms/event_related.py
@@ -84,7 +84,7 @@ def freq_domain_xcorr(tseries, events, t_before, t_after, Fs=1):
-------
xcorr: float array
The correlation function between the tseries and the events. Can be
-interperted as a linear filter from events to responses (the
+interpreted as a linear filter from events to responses (the
time-series) of an LTI.
"""
@@ -125,9 +125,9 @@ def freq_domain_xcorr_zscored(tseries, events, t_before, t_after, Fs=1):
-------
xcorr: float array
The correlation function between the tseries and the events. Can be
-interperted as a linear filter from events to responses (the
+interpreted as a linear filter from events to responses (the
time-series) of an LTI. Because it is normalized to its own mean and
-variance, it can be interperted as measuring statistical significance
+variance, it can be interpreted as measuring statistical significance
relative to all time-shifted versions of the events.
"""
2 changes: 1 addition & 1 deletion nitime/algorithms/tests/test_autoregressive.py
@@ -61,7 +61,7 @@ def test_AR_LD():
"""
Test the Levinson Durbin estimate of the AR coefficients against the
-expercted PSD
+expected PSD
"""
arsig, _, _ = utils.ar_generator(N=512)
4 changes: 2 additions & 2 deletions nitime/algorithms/tests/test_spectral.py
@@ -94,7 +94,7 @@ def test_get_spectra_complex():

def test_get_spectra_unknown_method():
"""
-Test that providing an unknown method to get_spectra rasies a ValueError
+Test that providing an unknown method to get_spectra raises a ValueError
"""
tseries = np.array([[1, 2, 3], [4, 5, 6]])
@@ -179,7 +179,7 @@ def test_dpss_properties():
N = 2000
NW = 200
d, lam = tsa.dpss_windows(N, NW, 2*NW-2)
-# 2NW-2 lamdas should be all > 0.9
+# 2NW-2 lambdas should be all > 0.9
npt.assert_(
(lam > 0.9).all(), 'Eigenvectors show poor spectral concentration'
)
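The same concentration property can be checked with SciPy's DPSS implementation — assuming ``scipy.signal.windows.dpss``, which returns the concentration ratios (eigenvalues) alongside the tapers:

```python
import numpy as np
from scipy.signal.windows import dpss

# Compute 2*NW - 2 Slepian tapers and their spectral concentration
# ratios; as in the test above, the first 2*NW - 2 eigenvalues are
# expected to lie close to 1.
N, NW = 512, 4
tapers, ratios = dpss(N, NW, Kmax=2 * NW - 2, return_ratios=True)
assert (ratios > 0.9).all()
```

Smaller N and NW than in the nitime test are used here purely to keep the check fast; the concentration property is the same.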
6 changes: 3 additions & 3 deletions nitime/analysis/coherence.py
@@ -28,7 +28,7 @@ def __init__(self, input=None, method=None, unwrap_phases=False):
method : dict, optional,
This is the method used for spectral analysis of the signal for the
-coherence caclulation. See :func:`algorithms.get_spectra`
+coherence calculation. See :func:`algorithms.get_spectra`
documentation for details.
unwrap_phases : bool, optional
@@ -167,7 +167,7 @@ def phase(self):
""" The frequency-dependent phase relationship between all the pairwise
combinations of time-series in the data"""

-#XXX calcluate this from the standard output, instead of recalculating:
+#XXX calculate this from the standard output, instead of recalculating:

tseries_length = self.input.data.shape[0]
spectrum_length = self.spectrum.shape[-1]
@@ -693,7 +693,7 @@ def coherency(self):
cache['FFT_conj_slices'][-1] = \
seed_cache['FFT_conj_slices'][0]

-#This performs the caclulation for this seed:
+#This performs the calculation for this seed:
Cxy[seed_idx] = tsa.cache_to_coherency(cache, ij)

#In the case where there is only one channel in the seed time-series:
2 changes: 1 addition & 1 deletion nitime/analysis/event_related.py
@@ -228,7 +228,7 @@ def xcorr_eta(self):
def et_data(self):
"""The event-triggered data (all occurrences).
-This gets the time-series corresponding to the inidividual event
+This gets the time-series corresponding to the individual event
occurrences. Returns a list of lists of time-series. The first dimension
is the different channels in the original time-series data and the
second dimension is each type of event in the event time series
8 changes: 4 additions & 4 deletions nitime/tests/test_timeseries.py
@@ -207,7 +207,7 @@ def test_TimeArray_convert_unit():

def test_TimeArray_div():

-#divide singelton by singleton:
+#divide singleton by singleton:
a = 2.0
b = 6.0
time1 = ts.TimeArray(a, time_unit='s')
@@ -217,7 +217,7 @@ def test_TimeArray_div():
div2 = time1 / time2
npt.assert_equal(div1, div2)

-#Divide a TimeArray by a singelton:
+#Divide a TimeArray by a singleton:
a = np.array([1, 2, 3])
b = 6.0
time1 = ts.TimeArray(a, time_unit='s')
@@ -500,7 +500,7 @@ def test_TimeSeries():
tseries3 = ts.TimeSeries(data=[1, 2, 3, 4], sampling_rate=1000,
time_unit='ms')
#you can specify the sampling_rate or the sampling_interval, to the same
-#effect, where specificying the sampling_interval is in the units of that
+#effect, where specifying the sampling_interval is in the units of that
#time-series:
tseries4 = ts.TimeSeries(data=[1, 2, 3, 4], sampling_interval=1,
time_unit='ms')
@@ -534,7 +534,7 @@ def test_TimeSeries():
with pytest.raises(ValueError) as e_info:
ts.TimeSeries(dict(data=data, time=t))

-# test basic arithmetics with TimeSeries
+# test basic arithmetic with TimeSeries
tseries1 = ts.TimeSeries([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], sampling_rate=1)
tseries2 = tseries1 + 1
npt.assert_equal(tseries1.data + 1, tseries2.data)
12 changes: 6 additions & 6 deletions nitime/utils.py
@@ -188,11 +188,11 @@ def circularize(x, bottom=0, top=2 * np.pi, deg=False):
bottom : float, optional (defaults to 0).
If you want to set the bottom of the interval into which you
-modulu to something else than 0.
+modulo to something else than 0.
top : float, optional (defaults to 2*pi).
If you want to set the top of the interval into which you
-modulu to something else than 2*pi
+modulo to something else than 2*pi
Returns
-------
@@ -307,7 +307,7 @@ def jackknifed_sdf_variance(yk, eigvals, sides='onesided', adaptive=True):
sides : str, optional
Compute the jackknife pseudovalues over as one-sided or
two-sided spectra
-adpative : bool, optional
+adaptive : bool, optional
Compute the adaptive weighting for each jackknife pseudovalue
Returns
@@ -1538,7 +1538,7 @@ def tril_indices(n, k=0):
Examples
--------
-Commpute two different sets of indices to access 4x4 arrays, one for the
+Compute two different sets of indices to access 4x4 arrays, one for the
lower triangular part starting at the main diagonal, and one starting two
diagonals further right:
@@ -1613,7 +1613,7 @@ def triu_indices(n, k=0):
Examples
--------
-Commpute two different sets of indices to access 4x4 arrays, one for the
+Compute two different sets of indices to access 4x4 arrays, one for the
upper triangular part starting at the main diagonal, and one starting two
diagonals further right:
@@ -2094,7 +2094,7 @@ def fir_design_matrix(events, len_hrf):
corresponding to the bin represented by each slot in the array. In
time-bins in which no event occurred, a 0 should be entered. If negative
event values are entered, they will be used as "negative" events, as in
-events that should be contrasted with the postitive events (typically -1
+events that should be contrasted with the positive events (typically -1
and 1 can be used for a simple contrast of two conditions)
len_hrf : int
12 changes: 6 additions & 6 deletions nitime/viz.py
@@ -63,15 +63,15 @@ def plot_tseries(time_series, fig=None, axis=0,
subplot: an axis number (if there are several in the figure to be opened),
defaults to 0.
-xticks: optional, list, specificies what values to put xticks on. Defaults
+xticks: optional, list, specifies what values to put xticks on. Defaults
to the matlplotlib-generated.
-yticks: optional, list, specificies what values to put xticks on. Defaults
+yticks: optional, list, specifies what values to put xticks on. Defaults
to the matlplotlib-generated.
-xlabel: optional, list, specificies what labels to put on xticks
+xlabel: optional, list, specifies what labels to put on xticks
-ylabel: optional, list, specificies what labels to put on yticks
+ylabel: optional, list, specifies what labels to put on yticks
yerror: optional, UniformTimeSeries with the same sampling_rate and number
of samples and channels as time_series, the error will be displayed as a
@@ -276,7 +276,7 @@ def channel_formatter(x, pos=None):
# diagonal values:
idx_null = tril_indices(m.shape[0])
m[idx_null] = np.nan
-# tranpose the upper triangle to lower
+# transpose the upper triangle to lower
m = m.T

# Extract the minimum and maximum values for scaling of the
@@ -709,7 +709,7 @@ def draw_graph(G,
# Set default edge colormap
if edge_cmap is None:
# Make an object with the colormap API, that maps all input values to
-# the default color (with proper alhpa)
+# the default color (with proper alpha)
edge_cmap = (lambda val, alpha:
colors.colorConverter.to_rgba(default_edge_color, alpha))

6 changes: 6 additions & 0 deletions pyproject.toml
@@ -62,3 +62,9 @@ skip = "pp*"
# 64-bit builds only; 32-bit builds seem pretty niche these days, so
# don't bother unless someone asks
archs = ["auto64"]

[tool.codespell]
skip = '.git,*.pdf,*.svg,go.sum,*.css'
check-hidden = true
ignore-regex = '\b(Ines Samengo)\b'
ignore-words-list = 'nd,ans'
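With this ``[tool.codespell]`` table in place, the CI check can be reproduced locally. A sketch — codespell picks the configuration up from ``pyproject.toml`` automatically when TOML support is available, so the explicit flags below simply mirror the table:

```shell
# From the repository root; flags mirror [tool.codespell] above.
codespell \
  --skip '.git,*.pdf,*.svg,go.sum,*.css' \
  --check-hidden \
  --ignore-regex '\b(Ines Samengo)\b' \
  --ignore-words-list 'nd,ans'
```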
