Trim trailing whitespace (zarr-developers#2563)
dstansby authored Dec 17, 2024
1 parent a615ee9 commit a7714c7
Showing 9 changed files with 37 additions and 36 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/gpu_test.yml
@@ -55,7 +55,7 @@ jobs:
cache: 'pip'
- name: Install Hatch and CuPy
run: |
python -m pip install --upgrade pip
pip install hatch
- name: Set Up Hatch Env
run: |
2 changes: 1 addition & 1 deletion .github/workflows/releases.yml
@@ -23,7 +23,7 @@ jobs:

- name: Install PyBuild
run: |
python -m pip install --upgrade pip
pip install hatch
- name: Build wheel and sdist
run: hatch build
4 changes: 2 additions & 2 deletions .github/workflows/test.yml
@@ -52,7 +52,7 @@ jobs:
cache: 'pip'
- name: Install Hatch
run: |
python -m pip install --upgrade pip
pip install hatch
- name: Set Up Hatch Env
run: |
@@ -84,7 +84,7 @@ jobs:
cache: 'pip'
- name: Install Hatch
run: |
python -m pip install --upgrade pip
pip install hatch
- name: Set Up Hatch Env
run: |
1 change: 1 addition & 0 deletions .pre-commit-config.yaml
@@ -20,6 +20,7 @@ repos:
rev: v5.0.0
hooks:
- id: check-yaml
- id: trailing-whitespace
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v1.13.0
hooks:
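The added ``trailing-whitespace`` hook comes from the standard pre-commit-hooks collection (the same repository that provides ``check-yaml``). Once hooks are installed, running ``pre-commit run trailing-whitespace --all-files`` strips trailing whitespace across the repository, which is presumably how the bulk of this commit was generated.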
2 changes: 1 addition & 1 deletion README-v3.md
@@ -38,7 +38,7 @@ hatch env create test
## Run the Tests

```
hatch run test:run
```

or
40 changes: 20 additions & 20 deletions bench/compress_normal.txt
@@ -19,7 +19,7 @@ Line # Hits Time Per Hit % Time Line Contents
==============================================================
137 def compress(source, char* cname, int clevel, int shuffle):
138 """Compress data in a numpy array.
139
140 Parameters
141 ----------
142 source : array-like
@@ -30,33 +30,33 @@ Line # Hits Time Per Hit % Time Line Contents
147 Compression level.
148 shuffle : int
149 Shuffle filter.
150
151 Returns
152 -------
153 dest : bytes-like
154 Compressed data.
155
156 """
157
158 cdef:
159 char *source_ptr
160 char *dest_ptr
161 Py_buffer source_buffer
162 size_t nbytes, cbytes, itemsize
163 200 506 2.5 0.2 array.array char_array_template = array.array('b', [])
164 array.array dest
165
166 # setup source buffer
167 200 458 2.3 0.2 PyObject_GetBuffer(source, &source_buffer, PyBUF_ANY_CONTIGUOUS)
168 200 119 0.6 0.0 source_ptr = <char *> source_buffer.buf
169
170 # setup destination
171 200 239 1.2 0.1 nbytes = source_buffer.len
172 200 103 0.5 0.0 itemsize = source_buffer.itemsize
173 200 2286 11.4 0.8 dest = array.clone(char_array_template, nbytes + BLOSC_MAX_OVERHEAD,
174 zero=False)
175 200 129 0.6 0.0 dest_ptr = <char *> dest.data.as_voidptr
176
177 # perform compression
178 200 1734 8.7 0.6 if _get_use_threads():
179 # allow blosc to use threads internally
@@ -67,24 +67,24 @@ Line # Hits Time Per Hit % Time Line Contents
184 cbytes = blosc_compress(clevel, shuffle, itemsize, nbytes,
185 source_ptr, dest_ptr,
186 nbytes + BLOSC_MAX_OVERHEAD)
187
188 else:
189 with nogil:
190 cbytes = blosc_compress_ctx(clevel, shuffle, itemsize, nbytes,
191 source_ptr, dest_ptr,
192 nbytes + BLOSC_MAX_OVERHEAD, cname,
193 0, 1)
194
195 # release source buffer
196 200 616 3.1 0.2 PyBuffer_Release(&source_buffer)
197
198 # check compression was successful
199 200 120 0.6 0.0 if cbytes <= 0:
200 raise RuntimeError('error during blosc compression: %d' % cbytes)
201
202 # resize after compression
203 200 1896 9.5 0.6 array.resize(dest, cbytes)
204
205 200 186 0.9 0.1 return dest

*******************************************************************************
@@ -100,19 +100,19 @@ Line # Hits Time Per Hit % Time Line Contents
==============================================================
75 def decompress(source, dest):
76 """Decompress data.
77
78 Parameters
79 ----------
80 source : bytes-like
81 Compressed data, including blosc header.
82 dest : array-like
83 Object to decompress into.
84
85 Notes
86 -----
87 Assumes that the size of the destination buffer is correct for the size of
88 the uncompressed data.
89
90 """
91 cdef:
92 int ret
@@ -122,7 +122,7 @@ Line # Hits Time Per Hit % Time Line Contents
96 array.array source_array
97 Py_buffer dest_buffer
98 size_t nbytes
99
100 # setup source buffer
101 200 573 2.9 0.2 if PY2 and isinstance(source, array.array):
102 # workaround fact that array.array does not support new-style buffer
@@ -134,13 +134,13 @@ Line # Hits Time Per Hit % Time Line Contents
108 200 112 0.6 0.0 release_source_buffer = True
109 200 144 0.7 0.1 PyObject_GetBuffer(source, &source_buffer, PyBUF_ANY_CONTIGUOUS)
110 200 98 0.5 0.0 source_ptr = <char *> source_buffer.buf
111
112 # setup destination buffer
113 200 552 2.8 0.2 PyObject_GetBuffer(dest, &dest_buffer,
114 PyBUF_ANY_CONTIGUOUS | PyBUF_WRITEABLE)
115 200 100 0.5 0.0 dest_ptr = <char *> dest_buffer.buf
116 200 84 0.4 0.0 nbytes = dest_buffer.len
117
118 # perform decompression
119 200 1856 9.3 0.8 if _get_use_threads():
120 # allow blosc to use threads internally
@@ -149,12 +149,12 @@ Line # Hits Time Per Hit % Time Line Contents
123 else:
124 with nogil:
125 ret = blosc_decompress_ctx(source_ptr, dest_ptr, nbytes, 1)
126
127 # release buffers
128 200 754 3.8 0.3 if release_source_buffer:
129 200 326 1.6 0.1 PyBuffer_Release(&source_buffer)
130 200 165 0.8 0.1 PyBuffer_Release(&dest_buffer)
131
132 # handle errors
133 200 128 0.6 0.1 if ret <= 0:
134 raise RuntimeError('error during blosc decompression: %d' % ret)
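
The two listings profile the Cython ``compress`` and ``decompress`` wrappers around c-blosc. As a hedged sketch of the same round trip at the Python level, using ``numcodecs.Blosc`` (which exposes the same codec; the parameters here are illustrative, not taken from the benchmark):

```python
import numpy as np
from numcodecs import Blosc

# Same codec family as the profiled Cython routines above.
codec = Blosc(cname="lz4", clevel=5, shuffle=Blosc.SHUFFLE)

data = np.random.normal(size=200_000)  # normally distributed, as in this benchmark
compressed = codec.encode(data)        # analogous to compress() profiled above
restored = np.frombuffer(codec.decode(compressed), dtype=data.dtype)

assert np.array_equal(data, restored)
```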
10 changes: 5 additions & 5 deletions docs/guide/storage.rst
@@ -4,7 +4,7 @@ Storage
Zarr-Python supports multiple storage backends, including local file systems,
Zip files, remote stores via ``fsspec`` (S3, HTTP, etc.), and in-memory stores. In
Zarr-Python 3, stores must implement the abstract store API from
:class:`zarr.abc.store.Store`.

.. note::
Unlike Zarr-Python 2 where the store interface was built around a generic ``MutableMapping``
@@ -50,8 +50,8 @@ filesystem.
Zip Store
~~~~~~~~~

The :class:`zarr.storage.ZipStore` stores the contents of a Zarr hierarchy in a single
Zip file. The `Zip Store specification`_ is currently in draft form.

.. code-block:: python
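
An illustrative sketch (the archive path is hypothetical, and the Zarr-Python 3 ``ZipStore``/``zarr.create`` API is assumed):

```python
import numpy as np
import zarr
from zarr.storage import ZipStore

# Write a small array into a single Zip archive (hypothetical path).
store = ZipStore("example.zip", mode="w")
z = zarr.create(shape=(100,), chunks=(10,), dtype="i4", store=store)
z[:] = np.arange(100)
store.close()  # finalize the archive
```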
@@ -65,7 +65,7 @@ Remote Store

The :class:`zarr.storage.RemoteStore` stores the contents of a Zarr hierarchy following the same
logical layout as the ``LocalStore``, except the store is assumed to be on a remote storage system
such as cloud object storage (e.g. AWS S3, Google Cloud Storage, Azure Blob Store). The
:class:`zarr.storage.RemoteStore` is backed by `Fsspec`_ and can support any Fsspec backend
that implements the `AbstractFileSystem` API.

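A hedged sketch, assuming ``s3fs`` is installed and the ``RemoteStore.from_url`` constructor from the 3.0 betas (bucket name hypothetical):

```python
import zarr
from zarr.storage import RemoteStore

# Hypothetical public bucket; storage_options are forwarded to fsspec/s3fs.
store = RemoteStore.from_url("s3://example-bucket/data.zarr",
                             storage_options={"anon": True})
z = zarr.open_array(store=store, mode="r")
```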
@@ -80,7 +80,7 @@ Memory Store
~~~~~~~~~~~~

The :class:`zarr.storage.MemoryStore` is an in-memory store that allows for serialization of
Zarr data (metadata and chunks) to a dictionary.

.. code-block:: python
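A minimal sketch, assuming the 3.0 ``MemoryStore`` and the ``zarr.create`` convenience function:

```python
import zarr
from zarr.storage import MemoryStore

store = MemoryStore()  # chunks and metadata live in a plain dict
z = zarr.create(shape=(10,), chunks=(5,), dtype="f8", store=store)
z[:] = 1.0
```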
8 changes: 4 additions & 4 deletions docs/roadmap.rst
@@ -16,7 +16,7 @@ Roadmap
- Martin Durrant / @martindurant

.. note::

This document was written in the early stages of the 3.0 refactor. Some
aspects of the design have changed since this was originally written.
Questions and discussion about the contents of this document should be directed to
@@ -227,7 +227,7 @@ expose the required methods as async methods.
async def get_partial_values(self, key_ranges: List[Tuple[str, int, int]]) -> bytes:
...
async def set(self, key: str, value: Union[bytes, bytearray, memoryview]) -> None:
... # required for writable stores
@@ -246,10 +246,10 @@ expose the required methods as async methods.
# additional (optional methods)
async def getsize(self, prefix: str) -> int:
...
async def rename(self, src: str, dest: str) -> None:
...
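
As an illustrative toy, not the actual base class, an in-memory store exposing these methods might look like the following (the ``List[bytes]`` return for ``get_partial_values`` is an assumption, one buffer per requested range):

```python
from typing import Dict, List, Optional, Tuple, Union

class ToyStore:
    """Illustrative in-memory async store; names follow the sketch above."""

    def __init__(self) -> None:
        self._data: Dict[str, bytes] = {}

    async def get(self, key: str) -> Optional[bytes]:
        return self._data.get(key)

    async def set(self, key: str, value: Union[bytes, bytearray, memoryview]) -> None:
        self._data[key] = bytes(value)

    async def get_partial_values(self, key_ranges: List[Tuple[str, int, int]]) -> List[bytes]:
        # Each requested range is (key, start, length).
        return [self._data[key][start:start + length]
                for key, start, length in key_ranges]
```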
Recognizing that there are many Zarr applications today that rely on the
``MutableMapping`` interface supported by Zarr-Python 2, a wrapper store
4 changes: 2 additions & 2 deletions docs/tutorial.rst
@@ -1015,12 +1015,12 @@ class from ``fsspec``. The following example demonstrates how to access
a ZIP-archived Zarr group on s3 using `s3fs <https://s3fs.readthedocs.io/en/latest/>`_ and ``ZipFileSystem``:

>>> s3_path = "s3://path/to/my.zarr.zip"
>>>
>>> s3 = s3fs.S3FileSystem()
>>> f = s3.open(s3_path)
>>> fs = ZipFileSystem(f, mode="r")
>>> store = FSMap("", fs, check=False)
>>>
>>> # caching may improve performance when repeatedly reading the same data
>>> cache = zarr.storage.LRUStoreCache(store, max_size=2**28)
>>> z = zarr.group(store=cache)
