This issue is for tracking projects outside this repo that need to land before we can ship 0.8.0. For things within this repo, check the 0.8.0 milestone.

Feel free to chime in with stuff that should make the notes. I'll update the draft below as we go.

Draft Release Notes:

Version 0.8.0 is our best-effort to close out the first set of public features.

Automatic Updates (RFC0024)
Qri can now keep your data up to date for you. 0.8.0 overhauls `qri update` into a service that schedules & runs updates in the background on your computer. Qri runs datasets and maintains a log of changes.
schedule shell scripts
Scheduling datasets that have Starlark transforms is the ideal workflow in terms of portability, but a new set of use cases opens up by adding the capacity to schedule & execute shell scripts within the same cron environment.
Starlark changes
We've made two major changes, and one small API-breaking change. Bad news first:
`ds.set_body` has different optional arguments
`ds.set_body(csv_string, raw=True, data_format="csv")` is now `ds.set_body(csv_string, parse_as="csv")`. We think this makes more sense, and that the previous API was confusing enough that we needed to deprecate it completely. Any prior transform scripts that used the `raw` or `data_format` arguments will need to be updated.
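Concretely, the migration looks like this (the inline CSV here is a made-up placeholder):

```starlark
def transform(ds, ctx):
  csv_string = "id,name\n1,apple\n2,banana"  # hypothetical data
  # before 0.8.0 (removed):
  # ds.set_body(csv_string, raw=True, data_format="csv")
  # 0.8.0 onward, a single parse_as argument replaces raw & data_format:
  ds.set_body(csv_string, parse_as="csv")
```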
new beautiful soup-like HTML package
Our `html` package is difficult to use, and we plan to deprecate it in a future release. In its place we've introduced `bsoup`, a new package that implements parts of the Beautiful Soup 4 API. It's much easier to use, and will be familiar to anyone coming from the world of Python.
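A rough sketch of what that looks like in a transform (the load path, `parseHtml`, and the exact set of implemented methods are assumptions here; Beautiful Soup 4 names like `find_all` and `get_text` are the model — check the package docs for the final surface):

```starlark
load("bsoup.star", "parseHtml")

def transform(ds, ctx):
  html = "<ul><li class='n'>one</li><li class='n'>two</li></ul>"  # placeholder markup
  soup = parseHtml(html)
  # Beautiful Soup-style traversal: find elements, pull out their text
  names = [li.get_text() for li in soup.find_all("li")]
  ds.set_body([names])
```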
the "ds" passed to a transform is now the previous dataset version
The `ds` that's passed to a transform is now the existing dataset, awaiting transformation. For technical reasons, `ds` used to be a blank dataset. In this version we've addressed those issues, which makes it possible to examine the current state of a dataset without any extra `load_dataset` work. This makes things like append-only datasets a one-liner:
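For example, something like the following (the new row is a hypothetical placeholder, and `get_body` is assumed here to return the previous version's body):

```starlark
def transform(ds, ctx):
  new_row = {"timestamp": "2019-05-13", "value": 42}  # placeholder data
  # ds already holds the previous version, so appending is one line:
  ds.set_body(ds.get_body() + [new_row])
```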
CLI uses '$PAGER' on POSIX systems
Lots of Qri output is, well, long, so we now check for the presence of the `$PAGER` environment variable and use it to show "scrolling" data where appropriate. While we're at it we've cleaned up output to make things a little more readable. Windows should be unaffected by this change. If you ever want to avoid pagination, I find the easiest way to do so is by piping to `cat`. For example:
$ qri ls | cat
Happy paging!
Switch to go modules
Our project has now switched entirely to using Go modules. In the process we've deprecated `gx`, the distributed package manager we formerly used to fetch Qri dependencies. This should dramatically simplify the process of building Qri from source by bringing dependency management into alignment with idiomatic Go practices.
Dataset Strict mode
`dataset.structure` has a new boolean field: `strict`. If `strict` is `true`, a dataset must pass validation against the specified schema in order to save. When a dataset is in strict mode, Qri can assume that all data in the body is valid. Being able to make this assumption will allow us to provide additional functionality and performance speedups in the future. If your dataset has no errors, be sure to set `strict` to `true`.
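As a sketch, the flag sits alongside the schema in the structure component of a dataset file (the schema here is a placeholder):

```json
{
  "structure": {
    "format": "json",
    "strict": true,
    "schema": {
      "type": "array",
      "items": { "type": "object", "required": ["id"] }
    }
  }
}
```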
b5 added the chore label on May 13, 2019