CRUD operations #14
Hello, Carlo - It's certainly possible to implement this using parts of the HDF5 API, which provides a way of making resizable datasets, but it is not currently implemented. Implementation would involve:
at Dataset creation:
- specifying a chunked layout (chunk sizes), which HDF5 requires for any resizable dataset
- specifying a maxshape (the maximum extent of each dimension, which can be unlimited)
when writing a point:
- resizing the dataset to make room for the new point
- writing the new values into the extended region through a slice (hyperslab) selection
None of these features is currently implemented in h5wasm (slice is enabled for reading, but not for writing, and the create_dataset function has no options for passing chunk sizes or maxshape). Out of curiosity, are you hoping to do this in the browser (where you would be accumulating points in memory no matter how you do it) or in Node.js, where you could write directly to disk?
You can create resizable datasets (you must specify chunks and maxshape in create_dataset), resize them, and overwrite sections of data (so after resizing, you can write to the newly extended region) in v0.4.11, released just now.
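Putting the two comments above together, here is a minimal sketch (not from the thread itself) of the append pattern in Node.js: a chunked dataset created with an unlimited maxshape, grown with resize, and written into with write_slice. The file name, dataset name, dtype, and chunk size are illustrative choices, and the object-style create_dataset arguments and the h5wasm/node entry point follow recent h5wasm releases, so exact signatures may differ in older versions:

```js
// Sketch: appending {timestamp, value} rows to a resizable n x 2 dataset.
import h5wasm from "h5wasm/node";

await h5wasm.ready;

const f = new h5wasm.File("stream.h5", "w");

// An empty 0 x 2 dataset that can grow along its first axis:
// maxshape [null, 2] marks that dimension as unlimited, and HDF5
// requires a chunked layout for any resizable dataset.
const ds = f.create_dataset({
  name: "sensor_001", // hypothetical: one dataset per _id
  data: new Float64Array(0),
  shape: [0, 2],
  dtype: "<f8",
  maxshape: [null, 2],
  chunks: [1024, 2],
});

// Append one point: grow the dataset by one row, then write the new
// values into the extended region.
function appendPoint(timestamp, value) {
  const n = ds.shape[0];
  ds.resize([n + 1, 2]);
  ds.write_slice([[n, n + 1], [0, 2]], new Float64Array([timestamp, value]));
}

appendPoint(1700000000, 42.5);
f.close();
```

Reading the accumulated rows back is the mirror image: ds.slice([[0, n], [0, 2]]) returns the written region.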
Hi all,
I'm trying to do CRUD operations in order to manage and populate a dataset: writing to it and reading from it are straightforward, but I cannot understand how to update or delete attributes, groups, or datasets.
I'm working with a stream of data in a NestJS server, where the elements are:
{
  _id,
  timestamp,
  value
}
I need to create a dataset for every _id, containing timestamp and value as the first and second columns of an n × 2 table.
I'm working with a large volume of data, so I cannot collect everything server-side before creating the dataset; I need to create the dataset and update it at every step.
As my last point, I don't know whether it is possible to create a dataset with a dynamic length; if it isn't, I can retrieve the final length of the dataset in advance.
Kind Regards, Carlo