add flake8 pre-commit #1689

Merged (87 commits, Mar 18, 2021)
Commits
40dc2c3
add flake8 pre-commit
Zethson Feb 24, 2021
55737d9
fix pre-commit
Zethson Feb 24, 2021
f653e5a
add E402 to flake8 ignore
Zethson Feb 24, 2021
daf03c9
revert neighbors
Zethson Feb 24, 2021
9a53065
Merge branch 'master' into feature/flake8
Zethson Feb 24, 2021
2b79a88
fix flake8
Zethson Feb 24, 2021
617168f
address review
Zethson Feb 25, 2021
ae43e3d
fix comment character in .flake8
Zethson Feb 25, 2021
7db4e60
fix test
Zethson Feb 25, 2021
48f0648
black
Zethson Feb 25, 2021
e742c66
review round 2
Zethson Feb 25, 2021
a5b1290
review round 3
Zethson Feb 25, 2021
718a06c
readded double comments
Zethson Feb 25, 2021
2a0a19d
Ignoring E262 & reverted comment
Zethson Feb 25, 2021
ebb2b01
using self for obs_tidy
Zethson Feb 25, 2021
d2bb2a9
Restore setup.py
flying-sheep Mar 1, 2021
ecc47a2
rm call of black test (#1690)
Koncopd Feb 24, 2021
f338863
Fix print_versions for python<3.8 (#1691)
ivirshup Feb 25, 2021
ce68cd1
add codecov so we can have a badge to point to (#1693)
ivirshup Feb 25, 2021
b5cc4b6
Attempt server-side search (#1672)
ivirshup Feb 25, 2021
8b0d8f0
Fix paga_path (#1047)
flying-sheep Mar 1, 2021
24d1b2e
Switch to flit
flying-sheep Dec 3, 2020
364f320
add setup.py while leaving it ignored
flying-sheep Jan 15, 2021
8f4f87e
Update install instructions
flying-sheep Jan 14, 2021
d4f7d4c
Circumvent new pip check (see pypa/pip#9628)
flying-sheep Feb 11, 2021
3db4814
Go back to regular pip (#1702)
flying-sheep Mar 2, 2021
6a97d73
codecov comment (#1704)
ivirshup Mar 2, 2021
47af631
Use joblib for parallelism in regress_out (#1695)
ivirshup Mar 3, 2021
6d36c6b
Add sparsification step before sparse-dependent Scrublet calls (#1707)
pinin4fjords Mar 3, 2021
c7bd6dc
Fix version on Travis (#1713)
flying-sheep Mar 3, 2021
4eb64c2
`sc.metrics` module (add confusion matrix & Geary's C methods) (#915)
ivirshup Mar 4, 2021
c11c486
Fix clipped images in docs (#1717)
ivirshup Mar 4, 2021
f637c08
Cleanup normalize_total (#1667)
ivirshup Mar 5, 2021
1e814cb
deprecate scvi (#1703)
mjayasur Mar 9, 2021
056d183
updated ecosystem.rst to add triku (#1722)
alexmascension Mar 9, 2021
ade2975
Minor addition to contributing docs (#1726)
ivirshup Mar 10, 2021
5f7f01f
Preserve category order when groupby is a list (#1735)
gokceneraslan Mar 11, 2021
b90e730
Asymmetrical diverging colormaps and vcenter (#1551)
gokceneraslan Mar 14, 2021
8fe2897
add flake8 pre-commit
Zethson Feb 24, 2021
5a144a3
add E402 to flake8 ignore
Zethson Feb 24, 2021
55aee90
revert neighbors
Zethson Feb 24, 2021
fc9d2b6
address review
Zethson Feb 25, 2021
893a034
black
Zethson Feb 25, 2021
53948bd
using self for obs_tidy
Zethson Feb 25, 2021
95958ff
rebased
Zethson Mar 15, 2021
99e1218
rebasing
Zethson Mar 15, 2021
e030ab1
rebasing
Zethson Mar 15, 2021
38e5624
rebasing
Zethson Mar 15, 2021
9bd1f0f
Merge branch 'master' into feature/flake8
Zethson Mar 15, 2021
7529cd3
add flake8 to dev docs
Zethson Mar 15, 2021
c7b9ee4
add autopep8 to pre-commits
Zethson Mar 15, 2021
ad38870
add flake8 ignore docs
Zethson Mar 15, 2021
c968244
add exception todos
Zethson Mar 15, 2021
83e31cf
add ignore directories
Zethson Mar 15, 2021
f8b6b70
reinstated lambdas
Zethson Mar 15, 2021
9e6722a
fix tests
Zethson Mar 15, 2021
207f650
fix tests
Zethson Mar 15, 2021
7fa610e
fix tests
Zethson Mar 15, 2021
976d825
fix tests
Zethson Mar 15, 2021
e3d916c
fix tests
Zethson Mar 15, 2021
5ca8527
Add E741 to allowed flake8 violations.
Zethson Mar 16, 2021
c8b7273
Add F811 flake8 ignore for tests
Zethson Mar 16, 2021
9abc967
Fix mask comparison
Zethson Mar 16, 2021
3a83228
Fix mask comparison
Zethson Mar 16, 2021
e2a4ce7
fix flake8 config file
Zethson Mar 16, 2021
0c69d81
readded autopep8
Zethson Mar 16, 2021
d89105f
import Literal
Zethson Mar 16, 2021
5cdfa9d
revert literal import
Zethson Mar 16, 2021
da412fc
fix scatterplot pca import
Zethson Mar 16, 2021
220ac15
false comparison & unused vars
Zethson Mar 16, 2021
f373a70
Add cleaner level determination
Zethson Mar 16, 2021
5adcfae
Fix comment formatting
Zethson Mar 16, 2021
ce2fb44
Add smoother dev documentation
Zethson Mar 16, 2021
8d7e6e4
fix flake8
Zethson Mar 16, 2021
64f6d7a
Readd long comment
Zethson Mar 16, 2021
32dcf96
Assuming X as array like
Zethson Mar 16, 2021
07cab3d
fix flake8
Zethson Mar 16, 2021
699aaac
fix flake8 config
Zethson Mar 16, 2021
79619ce
reverted rank_genes
Zethson Mar 16, 2021
99a8f2e
fix disp_mean_bin formatting
Zethson Mar 16, 2021
abe0846
fix formatting
Zethson Mar 16, 2021
16a0394
add final todos
Zethson Mar 16, 2021
46f4ca7
boolean checks with is
Zethson Mar 17, 2021
ad418d8
_dpt formatting
Zethson Mar 17, 2021
10e5d76
literal fixes
Zethson Mar 17, 2021
9b1da8c
links to leafs
Zethson Mar 17, 2021
c372f0b
revert paga variable naming
ivirshup Mar 18, 2021
4 changes: 4 additions & 0 deletions .flake8
@@ -0,0 +1,4 @@
[flake8]
exclude = docs, scanpy/tests
max-line-length = 120
ignore = F401, W503, E501, E203, E231, W504, E402, E126, E712, E741
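The ignore list above silences several pycodestyle/pyflakes codes instead of rewriting the offending code. As a hedged illustration (not scanpy code, and not part of this diff), the sketch below shows two of them: E712 is reported for equality comparisons against `True`/`False`, and E741 for ambiguous single-letter names such as `l`.

```python
# Hypothetical helper illustrating codes from the ignore list above.

def count_true(mask):
    # flake8 would report `v == True` as E712; the identity check is the
    # clean form (plain truthiness, `if v`, also works for real booleans).
    return sum(1 for v in mask if v is True)

l = [True, False, True]  # E741: `l` is an ambiguous variable name
print(count_true(l))  # 2
```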
5 changes: 5 additions & 0 deletions .pre-commit-config.yaml
@@ -3,3 +3,8 @@ repos:
rev: 20.8b1
hooks:
- id: black
- repo: https://gitlab.com/pycqa/flake8
rev: 3.8.4
hooks:
- id: flake8
exclude: scanpy/tests/
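pre-commit treats the `exclude` value as a Python regular expression matched (via `re.search`) against each staged file's path, so `scanpy/tests/` keeps the flake8 hook away from the test suite, matching the `exclude` line in `.flake8` above. A small sketch with a hypothetical list of staged paths:

```python
import re

# pre-commit matches `exclude` as a regex against each staged path.
exclude = re.compile(r"scanpy/tests/")

staged = ["scanpy/_utils.py", "scanpy/tests/test_pca.py", "docs/conf.py"]
checked = [path for path in staged if not exclude.search(path)]
print(checked)  # the test file is skipped by the hook
```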
4 changes: 1 addition & 3 deletions docs/extensions/function_images.py
@@ -6,9 +6,7 @@
from sphinx.ext.autodoc import Options


def insert_function_images(
app: Sphinx, what: str, name: str, obj: Any, options: Options, lines: List[str]
):
def insert_function_images(app: Sphinx, what: str, name: str, obj: Any, options: Options, lines: List[str]):
path = app.config.api_dir / f'{name}.png'
if what != 'function' or not path.is_file():
return
4 changes: 1 addition & 3 deletions docs/extensions/github_links.py
@@ -32,9 +32,7 @@ def __call__(


def register_links(app: Sphinx, config: Config):
gh_url = 'https://github.com/{github_user}/{github_repo}'.format_map(
config.html_context
)
gh_url = 'https://github.com/{github_user}/{github_repo}'.format_map(config.html_context)
app.add_role('pr', AutoLink('pr', f'{gh_url}/pull/{{}}', 'PR {}'))
app.add_role('issue', AutoLink('issue', f'{gh_url}/issues/{{}}', 'issue {}'))
app.add_role('noteversion', AutoLink('noteversion', f'{gh_url}/releases/tag/{{}}'))
4 changes: 2 additions & 2 deletions pyproject.toml
@@ -26,8 +26,8 @@ source = ['scanpy']
omit = ['*/tests/*']

[tool.black]
line-length = 88
target-version = ['py36']
line-length = 120
target-version = ['py38']
skip-string-normalization = true
exclude = '''
/build/.*
25 changes: 6 additions & 19 deletions scanpy/_settings.py
@@ -53,9 +53,7 @@ def _type_check(var: Any, varname: str, types: Union[type, Tuple[type, ...]]):
possible_types_str = types.__name__
else:
type_names = [t.__name__ for t in types]
possible_types_str = "{} or {}".format(
", ".join(type_names[:-1]), type_names[-1]
)
possible_types_str = "{} or {}".format(", ".join(type_names[:-1]), type_names[-1])
raise TypeError(f"{varname} must be of type {possible_types_str}")


@@ -141,9 +139,7 @@ def verbosity(self) -> Verbosity:

@verbosity.setter
def verbosity(self, verbosity: Union[Verbosity, int, str]):
verbosity_str_options = [
v for v in _VERBOSITY_TO_LOGLEVEL if isinstance(v, str)
]
verbosity_str_options = [v for v in _VERBOSITY_TO_LOGLEVEL if isinstance(v, str)]
if isinstance(verbosity, Verbosity):
self._verbosity = verbosity
elif isinstance(verbosity, int):
@@ -152,8 +148,7 @@ def verbosity(self, verbosity: Union[Verbosity, int, str]):
verbosity = verbosity.lower()
if verbosity not in verbosity_str_options:
raise ValueError(
f"Cannot set verbosity to {verbosity}. "
f"Accepted string values are: {verbosity_str_options}"
f"Cannot set verbosity to {verbosity}. " f"Accepted string values are: {verbosity_str_options}"
)
else:
self._verbosity = Verbosity(verbosity_str_options.index(verbosity))
@@ -185,10 +180,7 @@ def file_format_data(self, file_format: str):
_type_check(file_format, "file_format_data", str)
file_format_options = {"txt", "csv", "h5ad"}
if file_format not in file_format_options:
raise ValueError(
f"Cannot set file_format_data to {file_format}. "
f"Must be one of {file_format_options}"
)
raise ValueError(f"Cannot set file_format_data to {file_format}. " f"Must be one of {file_format_options}")
self._file_format_data = file_format

@property
@@ -293,10 +285,7 @@ def cache_compression(self) -> Optional[str]:
@cache_compression.setter
def cache_compression(self, cache_compression: Optional[str]):
if cache_compression not in {'lzf', 'gzip', None}:
raise ValueError(
f"`cache_compression` ({cache_compression}) "
"must be in {'lzf', 'gzip', None}"
)
raise ValueError(f"`cache_compression` ({cache_compression}) " "must be in {'lzf', 'gzip', None}")
self._cache_compression = cache_compression

@property
@@ -475,9 +464,7 @@ def _is_run_from_ipython():

def __str__(self) -> str:
return '\n'.join(
f'{k} = {v!r}'
for k, v in inspect.getmembers(self)
if not k.startswith("_") and not k == 'getdoc'
f'{k} = {v!r}' for k, v in inspect.getmembers(self) if not k.startswith("_") and not k == 'getdoc'
)


82 changes: 21 additions & 61 deletions scanpy/_utils.py
@@ -54,9 +54,7 @@ def check_versions():

# make this a warning, not an error
# it might be useful for people to still be able to run it
logg.warning(
f'Scanpy {__version__} needs umap ' f'version >=0.3.0, not {umap_version}.'
)
logg.warning(f'Scanpy {__version__} needs umap ' f'version >=0.3.0, not {umap_version}.')


def getdoc(c_or_f: Union[Callable, type]) -> Optional[str]:
@@ -77,8 +75,7 @@ def type_doc(name: str):
return cls

return '\n'.join(
f'{line} : {type_doc(line)}' if line.strip() in sig.parameters else line
for line in doc.split('\n')
f'{line} : {type_doc(line)}' if line.strip() in sig.parameters else line for line in doc.split('\n')
)


@@ -122,9 +119,7 @@ def _one_of_ours(obj, root: str):
return (
hasattr(obj, "__name__")
and not obj.__name__.split(".")[-1].startswith("_")
and getattr(
obj, '__module__', getattr(obj, '__qualname__', obj.__name__)
).startswith(root)
and getattr(obj, '__module__', getattr(obj, '__qualname__', obj.__name__)).startswith(root)
)


Expand Down Expand Up @@ -171,9 +166,7 @@ def _check_array_function_arguments(**kwargs):
# TODO: Figure out a better solution for documenting dispatched functions
invalid_args = [k for k, v in kwargs.items() if v is not None]
if len(invalid_args) > 0:
raise TypeError(
f"Arguments {invalid_args} are only valid if an AnnData object is passed."
)
raise TypeError(f"Arguments {invalid_args} are only valid if an AnnData object is passed.")


def _check_use_raw(adata: AnnData, use_raw: Union[None, bool]) -> bool:
@@ -209,12 +202,11 @@ def get_igraph_from_adjacency(adjacency, directed=None):
g.add_edges(list(zip(sources, targets)))
try:
g.es['weight'] = weights
except:
except KeyError:
pass
if g.vcount() != adjacency.shape[0]:
logg.warning(
f'The constructed graph has only {g.vcount()} nodes. '
'Your adjacency matrix contained redundant nodes.'
f'The constructed graph has only {g.vcount()} nodes. ' 'Your adjacency matrix contained redundant nodes.'
)
return g

@@ -281,17 +273,12 @@ def compute_association_matrix_of_groups(
reference labels, entries are proportional to degree of association.
"""
if normalization not in {'prediction', 'reference'}:
raise ValueError(
'`normalization` needs to be either "prediction" or "reference".'
)
raise ValueError('`normalization` needs to be either "prediction" or "reference".')
sanitize_anndata(adata)
cats = adata.obs[reference].cat.categories
for cat in cats:
if cat in settings.categories_to_ignore:
logg.info(
f'Ignoring category {cat!r} '
'as it’s in `settings.categories_to_ignore`.'
)
logg.info(f'Ignoring category {cat!r} ' 'as it’s in `settings.categories_to_ignore`.')
asso_names = []
asso_matrix = []
for ipred_group, pred_group in enumerate(adata.obs[prediction].cat.categories):
@@ -310,34 +297,25 @@ if normalization == 'prediction':
if normalization == 'prediction':
# compute which fraction of the predicted group is contained in
# the ref group
ratio_contained = (
np.sum(mask_pred_int) - np.sum(mask_ref_or_pred - mask_ref)
) / np.sum(mask_pred_int)
ratio_contained = (np.sum(mask_pred_int) - np.sum(mask_ref_or_pred - mask_ref)) / np.sum(mask_pred_int)
else:
# compute which fraction of the reference group is contained in
# the predicted group
ratio_contained = (
np.sum(mask_ref) - np.sum(mask_ref_or_pred - mask_pred_int)
) / np.sum(mask_ref)
ratio_contained = (np.sum(mask_ref) - np.sum(mask_ref_or_pred - mask_pred_int)) / np.sum(mask_ref)
asso_matrix[-1] += [ratio_contained]
name_list_pred = [
cats[i] if cats[i] not in settings.categories_to_ignore else ''
for i in np.argsort(asso_matrix[-1])[::-1]
if asso_matrix[-1][i] > threshold
]
asso_names += ['\n'.join(name_list_pred[:max_n_names])]
Result = namedtuple(
'compute_association_matrix_of_groups', ['asso_names', 'asso_matrix']
)
Result = namedtuple('compute_association_matrix_of_groups', ['asso_names', 'asso_matrix'])
return Result(asso_names=asso_names, asso_matrix=np.array(asso_matrix))


def get_associated_colors_of_groups(reference_colors, asso_matrix):
return [
{
reference_colors[i_ref]: asso_matrix[i_pred, i_ref]
for i_ref in range(asso_matrix.shape[1])
}
{reference_colors[i_ref]: asso_matrix[i_pred, i_ref] for i_ref in range(asso_matrix.shape[1])}
for i_pred in range(asso_matrix.shape[0])
]

@@ -366,16 +344,9 @@ def identify_groups(ref_labels, pred_labels, return_overlaps=False):
associated_predictions = {}
associated_overlaps = {}
for ref_label in ref_unique:
sub_pred_unique, sub_pred_counts = np.unique(
pred_labels[ref_label == ref_labels], return_counts=True
)
relative_overlaps_pred = [
sub_pred_counts[i] / pred_dict[n] for i, n in enumerate(sub_pred_unique)
]
relative_overlaps_ref = [
sub_pred_counts[i] / ref_dict[ref_label]
for i, n in enumerate(sub_pred_unique)
]
sub_pred_unique, sub_pred_counts = np.unique(pred_labels[ref_label == ref_labels], return_counts=True)
relative_overlaps_pred = [sub_pred_counts[i] / pred_dict[n] for i, n in enumerate(sub_pred_unique)]
relative_overlaps_ref = [sub_pred_counts[i] / ref_dict[ref_label] for i, n in enumerate(sub_pred_unique)]
relative_overlaps = np.c_[relative_overlaps_pred, relative_overlaps_ref]
relative_overlaps_min = np.min(relative_overlaps, axis=1)
pred_best_index = np.argsort(relative_overlaps_min)[::-1]
@@ -499,9 +470,7 @@ def select_groups(adata, groups_order_subset='all', key='groups'):
if key + '_masks' in adata.uns:
groups_masks = adata.uns[key + '_masks']
else:
groups_masks = np.zeros(
(len(adata.obs[key].cat.categories), adata.obs[key].values.size), dtype=bool
)
groups_masks = np.zeros((len(adata.obs[key].cat.categories), adata.obs[key].values.size), dtype=bool)
for iname, name in enumerate(adata.obs[key].cat.categories):
# if the name is not found, fallback to index retrieval
if adata.obs[key].cat.categories[iname] in adata.obs[key].values:
@@ -513,9 +482,7 @@ if groups_order_subset != 'all':
if groups_order_subset != 'all':
groups_ids = []
for name in groups_order_subset:
groups_ids.append(
np.where(adata.obs[key].cat.categories.values == name)[0][0]
)
groups_ids.append(np.where(adata.obs[key].cat.categories.values == name)[0][0])
if len(groups_ids) == 0:
# fallback to index retrieval
groups_ids = np.where(
@@ -551,7 +518,7 @@ def warn_with_traceback(message, category, filename, lineno, file=None, line=None):
import traceback

traceback.print_stack()
log = file if hasattr(file, 'write') else sys.stderr
log = file if hasattr(file, 'write') else sys.stderr # noqa: F841
settings.write(warnings.formatwarning(message, category, filename, lineno, line))


@@ -597,9 +564,7 @@ def subsample(
return Xsampled, rows


def subsample_n(
X: np.ndarray, n: int = 0, seed: int = 0
) -> Tuple[np.ndarray, np.ndarray]:
def subsample_n(X: np.ndarray, n: int = 0, seed: int = 0) -> Tuple[np.ndarray, np.ndarray]:
"""Subsample n samples from rows of array.

Parameters
@@ -748,17 +713,12 @@ def __contains__(self, key):
def _choose_graph(adata, obsp, neighbors_key):
"""Choose connectivities from neighbbors or another obsp column"""
if obsp is not None and neighbors_key is not None:
raise ValueError(
'You can\'t specify both obsp, neighbors_key. ' 'Please select only one.'
)
raise ValueError('You can\'t specify both obsp, neighbors_key. ' 'Please select only one.')

if obsp is not None:
return adata.obsp[obsp]
else:
neighbors = NeighborsView(adata, neighbors_key)
if 'connectivities' not in neighbors:
raise ValueError(
'You need to run `pp.neighbors` first '
'to compute a neighborhood graph.'
)
raise ValueError('You need to run `pp.neighbors` first ' 'to compute a neighborhood graph.')
return neighbors['connectivities']
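Besides the reformatting, one substantive fix in this file narrows a bare `except:` around the edge-weight assignment in `get_igraph_from_adjacency` to `except KeyError:` (what flake8 reports as E722). A minimal dict-backed sketch of why the narrow clause is preferable (hypothetical example, not igraph):

```python
def set_weight(attrs, weights):
    # A bare `except:` would also swallow KeyboardInterrupt and SystemExit;
    # catching only KeyError tolerates a missing key while leaving every
    # other failure visible.
    try:
        attrs["weight"] = weights["weight"]
    except KeyError:
        pass
    return attrs

print(set_weight({}, {}))                 # missing key is tolerated
print(set_weight({}, {"weight": 2.0}))    # weight is copied over
```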
24 changes: 6 additions & 18 deletions scanpy/cli.py
@@ -25,9 +25,7 @@ class _DelegatingSubparsersAction(_SubParsersAction):
def __init__(self, *args, _command: str, _runargs: Dict[str, Any], **kwargs):
super().__init__(*args, **kwargs)
self.command = _command
self._name_parser_map = self.choices = _CommandDelegator(
_command, self, **_runargs
)
self._name_parser_map = self.choices = _CommandDelegator(_command, self, **_runargs)


class _CommandDelegator(cabc.MutableMapping):
@@ -49,9 +47,7 @@ def __getitem__(self, k: str) -> ArgumentParser:
if which(f'{self.command}-{k}'):
return _DelegatingParser(self, k)
# Only here is the command list retrieved
raise ArgumentError(
self.action, f'No command “{k}”. Choose from {set(self)}'
)
raise ArgumentError(self.action, f'No command “{k}”. Choose from {set(self)}')

def __setitem__(self, k: str, v: ArgumentParser) -> None:
self.parser_map[k] = v
@@ -74,8 +70,7 @@ def __hash__(self) -> int:
def __eq__(self, other: Mapping[str, ArgumentParser]):
if isinstance(other, _CommandDelegator):
return all(
getattr(self, attr) == getattr(other, attr)
for attr in ['command', 'action', 'parser_map', 'runargs']
getattr(self, attr) == getattr(other, attr) for attr in ['command', 'action', 'parser_map', 'runargs']
)
return self.parser_map == other

@@ -103,9 +98,7 @@ def parse_known_args(
args: Optional[Sequence[str]] = None,
namespace: Optional[Namespace] = None,
) -> Tuple[Namespace, List[str]]:
assert (
args is not None and namespace is None
), 'Only use DelegatingParser as subparser'
assert args is not None and namespace is None, 'Only use DelegatingParser as subparser'
return Namespace(func=partial(run, [self.prog, *args], **self.cd.runargs)), []


@@ -115,20 +108,15 @@ def _cmd_settings() -> None:
print(settings)


def main(
argv: Optional[Sequence[str]] = None, *, check: bool = True, **runargs
) -> Optional[CompletedProcess]:
def main(argv: Optional[Sequence[str]] = None, *, check: bool = True, **runargs) -> Optional[CompletedProcess]:
"""\
Run a builtin scanpy command or a scanpy-* subcommand.

Uses :func:`subcommand.run` for the latter:
`~run(['scanpy', *argv], **runargs)`
"""
parser = ArgumentParser(
description=(
"There are a few packages providing commands. "
"Try e.g. `pip install scanpy-scripts`!"
)
description=("There are a few packages providing commands. " "Try e.g. `pip install scanpy-scripts`!")
)
parser.set_defaults(func=parser.print_help)
