Commit
WIP DOC starting to tweak the walkthroughs
adswa committed Jul 15, 2020
1 parent c674917 commit 584ff60
Showing 2 changed files with 35 additions and 35 deletions.
14 changes: 14 additions & 17 deletions docs/source/tutorial/exportdatacode.rst

@@ -1,29 +1,26 @@
 .. include:: ../links.inc
 
-Use case 2: Using the OSF as a data store for a GitHub-based project
+Use case 3: Using the OSF as a data store for a GitHub-based project
 ====================================================================
 
-Imagine you are a PhD student and want to collaborate on a fun little side
-project with a student at another institute. It is quite obvious for the two of
-you that your code will be hosted on GitHub_. And you also know enough about
-DataLad_, that using it for the whole project will be really beneficial.
+.. admonition:: Problem statement
 
-But what about the data you are collecting?
-The Dropbox is already full (`DataLad third party providers <http://handbook.datalad.org/en/latest/basics/101-138-sharethirdparty.html>`_).
-And Amazon services don't seem to be your best alternative.
-Suddenly you remember, that you got an OSF_ account recently, and that there is this nice `Datalad extension <https://github.com/datalad/datalad-osf/>`_ to set up a `Special Remote`_ on OSF_.
+   Imagine you are a PhD student and want to collaborate on a fun little side
+   project with a student at another institute. It is quite obvious for the two of
+   you that your code will be hosted on GitHub_. And you also know enough about
+   DataLad_, that using it for the whole project will be really beneficial.
 
-Walk through
-------------
+   But what about the data you are collecting?
+   The Dropbox is already full (`DataLad third party providers <http://handbook.datalad.org/en/latest/basics/101-138-sharethirdparty.html>`_).
+   And Amazon services don't seem to be your best alternative.
+   Suddenly you remember, that you got an OSF_ account recently, and that there is this nice `Datalad extension <https://github.com/datalad/datalad-osf/>`_ to set up a `Special Remote`_ on OSF_.
 
-Installation
+Walk through
 ^^^^^^^^^^^^
-For installation checkout the :ref:`installation page <install>`.
-
 
 
 Creating an Example Dataset
-^^^^^^^^^^^^^^^^^^^^^^^^^^^
+"""""""""""""""""""""""""""
 
 As a very first step you want to set up a DataLad dataset. For this you should
 run. In all examples a `$` in front indicates a new line in the Bash-Shell,
@@ -56,7 +53,7 @@ And we also want to add a text file, which will be saved on GitHub_ - in your ca
 We now have a dataset with one file that can be worked on using GitHub and one that should be tracked using `git-annex`.
 
 Setting up the OSF Remote
-^^^^^^^^^^^^^^^^^^^^^^^^^
+"""""""""""""""""""""""""
 
 To use OSF as a storage, you need to provide either your OSF credentials or an OSF access token.
 You can create such a token in your account settings (`Personal access token` and then `Create token`), make sure to create a `full_write` token to be able to create OSF projects and upload data to OSF.
@@ -72,7 +69,7 @@ We are now going to use datalad to create a sibling dataset on OSF with name `os
    $ datalad create-sibling-osf -s osf --title OSF_PROJECT_NAME
 
 Setting up GitHub Remote
-^^^^^^^^^^^^^^^^^^^^^^^^
+""""""""""""""""""""""""
 
 We can set-up a GitHub Remote with name `github` and include a publish dependency with OSF - that way, when we publish our dataset to GitHub, the data files get automatically uploaded to OSF.

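Taken together, the walk-through this diff reorganizes boils down to a short command sequence. The following is a hedged sketch, not the file's literal content: the dataset name `collab_osf`, the GitHub repository name, and `OSF_PROJECT_NAME` are placeholders, and the exact `create-sibling-github` invocation assumes a DataLad version whose sibling commands accept `--publish-depends`:

```shell
# Create and populate a dataset, then wire up OSF as annex storage
# and GitHub as the public Git remote with a publish dependency.
$ datalad create collab_osf
$ cd collab_osf
$ datalad save --message "add code and data"
$ datalad create-sibling-osf -s osf --title OSF_PROJECT_NAME
$ datalad create-sibling-github collab_osf -s github --publish-depends osf
$ datalad push --to github
```

With the publish dependency in place, pushing to `github` first uploads the annexed file content to the `osf` special remote, which is the behavior the walk-through describes.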
39 changes: 21 additions & 18 deletions docs/source/tutorial/exporthumandata.rst
@@ -1,24 +1,21 @@
 .. include:: ../links.inc
 .. _export:
 
-Export a human-readable dataset to OSF
-**************************************
+Use case 2: Export a human-readable dataset to OSF
+==================================================
 
-Imagine you have been creating a reproducible workflow using DataLad_ from the
-get go. Everything is finished now, code, data, and paper are ready. Last thing
-to do: Publish your data.
+.. admonition:: Problem statement
 
-Using datalad-osf makes this really convenient.
+   Imagine you have been creating a reproducible workflow using DataLad_ from the
+   get go. Everything is finished now, code, data, and paper are ready. Last thing
+   to do: Publish your data.
 
-Walk through
-------------
-
-Installation
+Walk through
 ^^^^^^^^^^^^
-For installation instructions, please checkout the `installation page <install>`.
-
 
 Creating an Example Dataset
-^^^^^^^^^^^^^^^^^^^^^^^^^^^
+"""""""""""""""""""""""""""
 We will create a small example DataLad dataset to show the functionality.
 
 .. code-block:: bash
@@ -40,16 +37,19 @@ like in the `Datalad Handbook`_):
       -O books/bash_guide.pdf
 
 Setting up the OSF Remote
-^^^^^^^^^^^^^^^^^^^^^^^^^
-
-To use OSF as a storage, you first need to provide either your OSF credentials (username and password) or an OSF access token.
+"""""""""""""""""""""""""
 
-If you choose to use your credentials, proceed as follows:
+To use OSF as a storage, you first need to provide either your OSF credentials (username and password) or an OSF access token, either as environment variables, or using ``datalad osf-credentials``:
 
 .. code-block:: bash
 
-   export OSF_USERNAME=YOUR_USERNAME_FOR_OSF.IO
-   export OSF_PASSWORD=YOUR_PASSWORD_FOR_OSF.IO
+   $ datalad osf-credentials
+   You need to authenticate with 'https://osf.io' credentials. https://osf.io/settings/tokens provides information on how to gain access
+   token: <your token here>
+   You need to authenticate with 'https://osf.io' credentials. https://osf.io/settings/tokens provides information on how to gain access
+   token (repeat): <your token here>
+   osf_credentials(ok): [authenticated as <user> <e-mail>]
 
 In this example, we are going to use an OSF access token instead.
 You can create such a token in your account settings (`Personal access token` and then `Create token`).
@@ -76,3 +76,6 @@ After that we can export the current state (the `HEAD`) of our dataset in human
 .. code-block:: bash
 
    git annex export HEAD --to NAME_OF_REMOTE
+
+https://git-annex.branchable.com/git-annex-export/
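For reference, the export path documented in this file can be sketched end to end. This sticks to the commands the diff itself shows; `NAME_OF_REMOTE` stays a placeholder, and the two credential options are alternatives, not steps to run together:

```shell
# Option A: supply OSF credentials as environment variables
# (the form shown in the removed lines above).
$ export OSF_USERNAME=YOUR_USERNAME_FOR_OSF.IO
$ export OSF_PASSWORD=YOUR_PASSWORD_FOR_OSF.IO

# Option B: store a personal access token interactively.
$ datalad osf-credentials

# Export the current state (HEAD) of the dataset in human-readable form.
$ git annex export HEAD --to NAME_OF_REMOTE
```

Unlike a plain annex push, `git annex export` publishes the worktree layout, so the files appear on OSF under their human-readable names (see the git-annex export page linked above).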
