I have some large files that I need to extract. Given their size, and the fact that pylzma reads the entire file into memory, could a streaming API be provided for py7zlib.ArchiveFile objects?
This could be a read_streamed(chunk_size=...) -> Iterator[bytes] method, or read could accept a size argument - read(amount=None) -> bytes - allowing users to roll their own read_streamed:
def read_streamed(archive, chunk_size):
    # Read fixed-size chunks until read() returns b"" at EOF.
    while True:
        data = archive.read(chunk_size)
        if not data:
            break
        yield data
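The generator above only depends on the proposed read(amount) signature, so the pattern can be demonstrated today against any file-like object. A minimal, self-contained sketch, using io.BytesIO as a stand-in for an ArchiveFile (the stand-in and sizes are illustrative, not part of py7zlib):

```python
import io


def read_streamed(fileobj, chunk_size):
    # Yield successive chunks until read() returns b"" at EOF,
    # so memory use stays bounded by chunk_size.
    while True:
        data = fileobj.read(chunk_size)
        if not data:
            break
        yield data


# Stand-in for an ArchiveFile: any object exposing read(n) works.
source = io.BytesIO(b"x" * 10_000)
chunks = list(read_streamed(source, 4096))
# Three chunks: 4096 + 4096 + 1808 bytes, 10_000 bytes total.
```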
This would allow users to read large archived files without exhausting the machine's memory.
Also, in the context of GlobaLeaks we would find this pretty useful.
Our need is to be able to perform zip+encryption for files bigger than the available server RAM.
For this reason we currently implement streaming zip with a modified version of zipfile inspired by SpiderOak, but we are looking for a way to bind AES encryption to it.
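For the zip-in-streaming half of that, note that the stdlib zipfile can already write a member incrementally (ZipFile.open in "w" mode, Python 3.6+), so a modified zipfile may no longer be needed; AES would still require a third-party layer on top. A minimal sketch of bounded-memory zip writing; zip_stream and the names here are illustrative, not GlobaLeaks code:

```python
import io
import shutil
import zipfile


def zip_stream(src, zf, arcname, chunk_size=64 * 1024):
    # Copy src into the archive in fixed-size chunks so memory
    # use stays bounded regardless of the file's size.
    with zf.open(arcname, mode="w") as dest:
        shutil.copyfileobj(src, dest, chunk_size)


buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    # In practice src would be an open file handle to a file
    # larger than RAM; a BytesIO keeps the sketch self-contained.
    zip_stream(io.BytesIO(b"payload " * 1000), zf, "big.bin")
```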