
[REVIEW]: STITCHES: a Python package to amalgamate existing Earth system model output into new scenario realizations #5525

Closed
editorialbot opened this issue Jun 7, 2023 · 118 comments
Labels
accepted · Jupyter Notebook · published (Papers published in JOSS) · Python · recommend-accept (Papers recommended for acceptance in JOSS) · review · TeX · Track: 6 (ESE) Earth Sciences and Ecology

Comments

@editorialbot
Collaborator

editorialbot commented Jun 7, 2023

Submitting author: @abigailsnyder (Abigail Snyder)
Repository: https://github.com/JGCRI/stitches
Branch with paper.md (empty if default branch): main
Version: v0.13
Editor: @observingClouds
Reviewers: @znicholls, @Zeitsperre
Archive: 10.5281/zenodo.11094934

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/ad81e6a435c13ae644a7ca8cb0ffbc35"><img src="https://joss.theoj.org/papers/ad81e6a435c13ae644a7ca8cb0ffbc35/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/ad81e6a435c13ae644a7ca8cb0ffbc35/status.svg)](https://joss.theoj.org/papers/ad81e6a435c13ae644a7ca8cb0ffbc35)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@znicholls & @Zeitsperre, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @observingClouds know.

Please start your review when you are able, and be sure to complete it within the next six weeks at the very latest.

Checklists

📝 Checklist for @znicholls

📝 Checklist for @Zeitsperre

@editorialbot editorialbot added Jupyter Notebook Python review TeX Track: 6 (ESE) Earth Sciences and Ecology waitlisted Submissions in the JOSS backlog due to reduced service mode. labels Jun 7, 2023
@editorialbot
Collaborator Author

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot
Collaborator Author

Software report:

github.com/AlDanial/cloc v 1.88  T=0.04 s (1004.0 files/s, 138666.7 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Python                          23            675            957           1664
Jupyter Notebook                 2              0            786            406
reStructuredText                 5            290            308            260
Markdown                         7             62              0            223
TeX                              1              4              0             52
YAML                             2             10              4             45
DOS Batch                        1              8              1             26
make                             1              4              7              9
-------------------------------------------------------------------------------
SUM:                            42           1053           2063           2685
-------------------------------------------------------------------------------


gitinspector failed to run statistical information for the repository

@editorialbot
Collaborator Author

Wordcount for paper.md is 857

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.5194/esd-2022-14 is OK
- 10.1017/9781009157940.001 is OK
- 10.5194/gmd-9-1937-2016 is OK
- 10.5194/gmd-9-3461-2016 is OK
- 10.1038/nclimate3310 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@observingClouds observingClouds removed the waitlisted Submissions in the JOSS backlog due to reduced service mode. label Jun 7, 2023
@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@observingClouds

Welcome to the review process 🎉
@znicholls, @Zeitsperre please start your review by typing @editorialbot generate my checklist in a comment below.

@znicholls

znicholls commented Jun 8, 2023

Review checklist for @znicholls

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/JGCRI/stitches?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@abigailsnyder) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data from research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
    • I agree that summary and statement of need are backwards
    • The summary could be made clearer for non-specialist audiences, but it is there. (As a suggestion, the summary could say something more like, "There is a need to inspect the interaction between climate change and impacts. At the moment this isn't possible with our most expensive tools. This package provides a way to build that link and examine the interaction without the computational cost. In a scientific paper, it has been shown that this method does not come with unworkably large errors [cite Tebaldi paper])"
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
    • Also there and states needs very clearly. References to other work could be better explained and fleshed out I think, particularly discussion of why other tools (e.g. MESMER, METEOR I would guess?) don't achieve what STITCHES does. A clearer reference to the full scientific explanation in this section would also be helpful I think as that would be what scientific readers need to fully understand how the tool works.
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
    • I don't know of any other package which attempts to do such stitching. This package does quite a lot of ESGF data handling so it could be compared to other packages which do such handling and processing such as ESMValTool, but that could also be out of scope.
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@znicholls

Just an FYI, I probably won't get to this until the end of the month unfortunately

@Zeitsperre

Hi there! I'll be taking a look at this very soon, hopefully next week. Thanks again for reaching out to me, @observingClouds.

@Zeitsperre

Zeitsperre commented Jun 9, 2023

Review checklist for @Zeitsperre

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/JGCRI/stitches?
  • License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@abigailsnyder) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
    • Data is installed via a helper function available at the top-level of the library. Points to a Zenodo DOI repository.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
    • No results in paper.
  • Human and animal research: If the paper contains original data from research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
    • Package sources are only available via GitHub (PyPI?)
  • Functionality: Have the functional claims of the software been confirmed?
    • The quick-start examples can be reproduced locally on Linux. Code logic assumes POSIX environment (no Windows support).
    • Generating new data does not seem to be feasible (stitches.make_tas_archive) as fetch calls to pangeo are not "lazy" (estimated ~8h to collect data values on a reasonably fast connection; memory requirements are probably significant).
    • Stitches largely operates with direct calls to values (loading operations) and uses pandas DataFrames for internal logic before converting back to xarray/NetCDF. Xarray is used for fetching data, and nearly all operations are xarray-compatible. Why not leverage xarray/intake data stores to reduce computational/memory/bandwidth load?
    • Data values are loaded early and/or copied very often within the logic (slowdowns, thread safety concerns).
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally, these should be handled with an automated package management solution.
    • Dependencies for installation of the package are present in requirements.txt. Requirements pin pandas<1.5 (should be updated) and use pkg_resources (deprecated).
    • Documentation build requirements (recipe with sphinx + sphinx extensions) are not present in setup.py or repository.
    • Testing requirements make use of standard libraries (no additional dependencies needed).
    • Package metadata in setup() doesn't adhere to PEP standards; there is no use of package metadata classifiers (https://pypi.org/classifiers/).
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
    • An example notebook is present within the repository and built documentation, showcasing general purpose usage: construction of scenarios, as well as validation that the scenarios do not diverge significantly from source data.
    • Documentation is unclear on the data schema that is expected for the primary analysis operations (Necessary fields? Data formats? Expected outputs?)
    • It isn't clear if the library can work with other existing intake-esm data stores or is hardcoded for the data facets seen in https://storage.googleapis.com/cmip6/pangeo-cmip6.json.
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
    • The API is showcased in the documentation. Functions use ReST-format docstrings but are not statically typed.
    • Docstrings do not follow a consistent format, though they mostly adhere to PEP 257 conventions (https://peps.python.org/pep-0257/).
    • The package API imports all functions at the top level (it would significantly benefit from modular divisions between testing/setup functions, analysis functions, and visualization functions).
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
    • There are GitHub Workflow CI tests configured for the repository. Tests are found within the package (stitches.tests) but should be moved to top-level / excluded from wheel.
    • The testing configuration relies on an outdated installation method ($ python setup.py install).
    • The testing setup relies on Python calls to the library (python -c 'import stitches; stitches.install_package_data()'); this should be made part of the testing setup stage or exposed via a CLI.
    • Only Linux × Python 3.9 is tested (the suite would benefit from a testing matrix). GitHub Workflows use deprecated Actions (actions/checkout@v1). Running the tests locally shows that they pass (unittest or pytest) but emits many DeprecationWarnings.
    • Code coverage reporting via Codecov seems to be misconfigured and not up-to-date (local testing shows 52%; Badge shows 8%).
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support
    • Repository contains a CONTRIBUTING.md guide specific for the project.
    • Authors ask that contributors clarify whether contributions are copyright-restricted (mentions of a project called "Hector"?) - this could be made easier with Pull Request Templates.
    • No Issue Templates or Pull Request Templates.
    • Testing/tooling setup and metrics are not mentioned in contributor guidelines.
    • No guidance is given on how to report problems with the software.
    • Repository and ReadMe identify Code of Conduct (Contributor Covenant v2.0). No contact method is listed for reporting abuse.
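For reference, the testing-matrix point above could be sketched as a GitHub Actions workflow along these lines (job names, versions, and steps are illustrative placeholders, not the repository's actual configuration):

```yaml
# Illustrative CI matrix; all names and pinned versions are placeholders.
name: tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        python-version: ["3.9", "3.10", "3.11"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install -e .
      - run: pytest
```

Even if Windows support is out of scope, a matrix over Python versions alone would catch deprecations earlier.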

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
    • Summary and Statement of Need appear to be mislabelled.
    • Summary (Statement of Need) is relatively clear and highlights well how stitches is unique in its approaches.
    • Summary (Statement of Need) could be shortened and made more accessible to general audiences.
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
    • Statement of Need (Summary) covers the value of stitches within the field of impact analysis.
    • The value of output from stitches to help in impact modelling is made clear, but the intended audience is only explicitly stated in the final paragraph. Should be stated earlier.
    • Stitches builds on existing science frameworks with a unique approach, but could be clearer in stating the relative strengths/weaknesses of its approach versus others (speed? versatility? portability? statistical/physical consistency?)
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
    • I'm not familiar with python packages that aid in generating new variable curves from ESM runs via stitched model data, but it would be good to briefly talk about others that may exist in the scientific Python community (I will search around as well).
    • It would be interesting to mention the unique value of stitches in scenario development as compared to multimodel ensemble statistics.
    • Other packages are not explored. Mention is made to PANGEO and PANGEO-backed software (xarray), but no comparisons are made to other scenario-building packages.
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
    • The paper could benefit from some editing; some repetitive statements and redundant phrases could be removed.
    • A few statements require citations; "While many existing ESM emulation methods rely on ‘bottom up’ methods", "Research from the climate science community has indicated that many ESM output variables are tightly dependent upon the GSAT trajectory and thus scenario independent", "emulators trained with bottom- up methods often can only handle a small number of variables jointly (e.g. temperature and precipitation)", etc.
    • The description of the library could benefit from an explanation of the data structure or a brief summary of the capabilities or core functionality.
    • An example of the expected outputs from the tool (e.g. Quickstarter examples) would help showcase the capabilities of stitches.
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@Zeitsperre

@observingClouds @abigailsnyder

My review here is complete. I think that STITCHES does something quite unique that I haven't seen before, and can genuinely see the value of it in performing global-scale impact modelling analyses. Thanks again for asking me to review this software.

I think there are quite a few improvements that can be made (many of them relatively low-effort) that I would gladly open as issues (or submit as fixes in Pull Requests) if they are welcome (and if Pull Requests from reviewers are allowed). Please let me know.

@observingClouds

Thanks @Zeitsperre for the update! Please open issues over in the STITCHES repository. If it helps the discussion of an issue, feel free to open a PR as well.

@znicholls

@observingClouds re conflict of interest: I have published with both Kalyn and Claudia previously. Sorry I should have read that more closely earlier. I would request that the COI is simply noted here (and ideally waived) given that the community is relatively small and I think I can provide an impartial review nonetheless.

@znicholls

I have also completed my review. In the process, I have opened a number of issues (all cross-linked here) to follow up on areas where I haven't yet ticked the box above.

In general, I think I could use the package without too much trouble, but I would really struggle to extend it beyond its main use case. The major reason is that much of the internal data is built around pandas data frames, but it wasn't clear to me where I should look to understand what form these data frames should take or what the data in them should be. This may require a separate section on the various 'models' used in the code (e.g. https://pyam-iamc.readthedocs.io/en/stable/data.html). Such sections can be a bit painful to write and maintain, but they are generally very helpful in helping new users and maintainers understand how the system works. In particular, it wasn't clear to me what the recipe should look like, nor the archive data (I could sort of guess from the examples, but I am not sure I would guess correctly, so explicit docs would be very helpful). The other option would be to use something like pandera to add more structure to the data frames and communicate more explicitly what each column should contain and mean. Each option comes with pros and cons, but I think I would find it very hard to build a mental model of the package as it is currently written and documented.
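To illustrate the kind of explicit data-frame structure being suggested, here is a minimal hand-rolled check in plain pandas (a library like pandera would express this more declaratively). The column names and dtypes below are illustrative guesses, not the actual STITCHES data model:

```python
# Minimal schema check for a recipe-like DataFrame.
# Column names/dtypes are illustrative, NOT the actual STITCHES data model.
import pandas as pd

RECIPE_COLUMNS = {"variable": str, "year": int, "value": float}

def validate_recipe(df: pd.DataFrame) -> pd.DataFrame:
    """Raise if required columns are missing; coerce dtypes otherwise."""
    missing = set(RECIPE_COLUMNS) - set(df.columns)
    if missing:
        raise ValueError(f"recipe is missing columns: {sorted(missing)}")
    return df.astype(RECIPE_COLUMNS)

recipe = pd.DataFrame({"variable": ["tas"], "year": [2030], "value": [1.5]})
validated = validate_recipe(recipe)
```

Documenting (or enforcing) such a schema in one place would tell users immediately whether their data frames are usable, without reverse-engineering the examples.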

Arising from the above, there were a few areas where the functionality of the package wasn't super clear to me. I think adding a section like the 'model' section above and/or refactoring would make it much clearer what is going on. The issues were:

  • Why are 'ensemble', 'experiment' and 'model' needed as part of the target_data? Could this not work just based on grouping by everything except the key columns of variable, year and value? In addition, shouldn't there be some units in target_data, and perhaps also a reference period? Using a different reference period from the one used to create the archive in the first place could cause havoc, no?

  • Is the pangeo table hard-coded? This is probably fine for CMIP6, but seems problematic as we rapidly move beyond CMIP6 (perhaps this can be left for future work, but it seemed a bit odd to me to not give users a way to use a different archive if they want)

  • Is the historical/scenario split also intentionally hard-coded to know about the SSPs? This also seems like it could be problematic if anyone wanted to use STITCHES in a different context or after CMIP6 (or before, e.g. CMIP5).

  • Does the stitching only pull data from one model or can you end up with stitching that joins together windows/samples from multiple different models (e.g. CanESM5 and NASA-GISS output)? Reading the diagram, I think the answer is you can have more than one model but looking at the code, it wasn't super clear to me (e.g. this line csv_to_load = [file for file in all_files if (model in file)][0] in gmat_stitching confused me, maybe the stitching_id handles this?)

In general, the repo would also benefit from the use of some other tools, e.g. code linters and auto-formatters. They would make the code much more readable and introduce very little cost (at least in my opinion).
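A low-cost way to adopt such tools (the tool choices and pinned versions here are suggestions only, not what the repo currently uses) is a pre-commit configuration along these lines:

```yaml
# Illustrative .pre-commit-config.yaml; revs are placeholders to pin.
repos:
  - repo: https://github.com/psf/black
    rev: 24.4.2
    hooks:
      - id: black
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.4.4
    hooks:
      - id: ruff
```

Running `pre-commit install` once would then apply the checks automatically on each commit.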

@Zeitsperre

(Hi all, I've been meaning to open some issues/PRs in the repo (and will when I find a minute), but I wanted to piggyback on Zeb's great comments here)

In general, I think I could use the package without too much trouble, but I would really struggle to extend it beyond its main use case. The major reason is that much of the internal data is built around pandas data frames, but it wasn't clear to me where I should look to understand what form these data frames should take or what the data in them should be. This may require a separate section on the various 'models' used in the code (e.g. https://pyam-iamc.readthedocs.io/en/stable/data.html). Such sections can be a bit painful to write and maintain, but they are generally very helpful in helping new users and maintainers understand how the system works.

I am very much in agreement with this. Having worked with CORDEX/CMIP5/CMIP6 data for many years, I also felt it wasn't entirely clear how I could format my local NetCDF collections for use in STITCHES. A data model would provide a first step towards implementing methods for generalizing use-cases.

In particular, it wasn't clear to me what the recipe should look like, nor the archive data (I could sort of guess from the examples, but I am not sure I would guess correctly, so explicit docs would be very helpful). The other option would be to use something like pandera to add more structure to the data frames and communicate more explicitly what each column should contain and mean. Each option comes with pros and cons, but I think I would find it very hard to build a mental model of the package as it is currently written and documented.

I mentioned in my review that STITCHES uses xarray and intake to fetch data, and I found it odd that effort was made to convert their Dataset format to CSVs and pandas DataFrames. The xarray format is quite robust and extensible (https://docs.xarray.dev/en/stable/user-guide/data-structures.html) and preserves the metadata fetched from Pangeo. Together with intake and dask, it allows subsetted data requests to be structured in advance of the actual GET requests, significantly reducing download time and memory requirements. I'm not familiar with pandera, but it seems to have integrated dask support as well. Adherence to a standardized data structure would be essential to making the project much more portable.
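As a sketch of the deferred-fetching idea (all names here are illustrative, not the STITCHES API): the facet query can be assembled as pure logic up front, with the actual dask-backed fetch deferred to intake-esm/xarray so that only the needed subset is ever transferred:

```python
# Build a CMIP6-style facet query up front; fetch lazily later.
# `build_cmip6_query` and its defaults are illustrative, not STITCHES API.

def build_cmip6_query(model, experiment, variable, member="r1i1p1f1"):
    """Return search facets matching the Pangeo CMIP6 catalog schema."""
    return {
        "source_id": model,
        "experiment_id": experiment,
        "variable_id": variable,
        "member_id": member,
        "table_id": "Amon",
    }

query = build_cmip6_query("CanESM5", "ssp245", "tas")

# With a real catalog, the lazy fetch would look roughly like:
#   import intake
#   cat = intake.open_esm_datastore(
#       "https://storage.googleapis.com/cmip6/pangeo-cmip6.json")
#   dsets = cat.search(**query).to_dataset_dict()    # dask-backed, lazy
#   tas = next(iter(dsets.values()))["tas"]
#   subset = tas.sel(time=slice("2015", "2030"))     # still lazy
#   subset.load()  # bytes are only transferred here
```

Keeping the query construction separate from the I/O would also make it straightforward to point the same logic at a different (non-hardcoded) catalog.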

In general, the repo would also benefit from the use of some other tools, e.g. code linters and auto-formatters. They would make the code much more readable and introduce very little cost (at least in my opinion).

In my work, I do a fair amount of package maintenance and coding standards enforcement, and would be willing to give a hand in this way. If my schedule allows for it, I'd be more than happy to open a PR.

@observingClouds

Thank you @Zeitsperre and @znicholls for your reviews and for starting the discussion on some action items. Given the agreement between your reviews, I suggest that @abigailsnyder and colleagues go ahead and address these common issues/suggestions.

@observingClouds

@abigailsnyder please also have a look at some editorial changes at JGCRI/stitches#100.

@observingClouds

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@observingClouds

@editorialbot recommend-accept

@editorialbot
Collaborator Author

Attempting dry run of processing paper acceptance...

@editorialbot
Collaborator Author

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.5194/esd-2022-14 is OK
- 10.1017/9781009157940.001 is OK
- 10.5194/gmd-9-1937-2016 is OK
- 10.5194/gmd-9-3461-2016 is OK
- 10.1038/nclimate3310 is OK
- 10.5194/esd-11-139-2020 is OK
- 10.5194/esd-13-851-2022 is OK
- 10.1029/2022GL099012 is OK
- 10.5194/gmd-8-939-2015 is OK
- 10.5194/acp-11-1417-2011 is OK
- 10.5194/gmd-11-2273-2018 is OK
- 10.1029/2022EF002803 is OK
- 10.1002/wcc.457 is OK
- 10.1017/9781009157896.002 is OK
- 10.1017/9781009157896.001 is OK
- 10.59327/IPCC/AR6-9789291691647.001 is OK
- 10.1038/s41586-020-2649-2 is OK
- 10.5281/zenodo.10951361 is OK
- 10.5281/zenodo.3509134 is OK
- 10.5334/jors.148 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@editorialbot
Collaborator Author

👋 @openjournals/ese-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#5316, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

@editorialbot editorialbot added the recommend-accept Papers recommended for acceptance in JOSS. label May 7, 2024
@observingClouds

@abigailsnyder thank you for integrating the last changes. I have now recommended the manuscript for acceptance. The editor-in-chief will take over in the next few days, make a few more checks, and then your manuscript should soon be published. In the meantime, you can have a look at the final proof above and let us know whether it all looks satisfactory.

@Zeitsperre

Ack! I see: "team, T. pandas development." again

@kthyng

kthyng commented May 8, 2024

@Zeitsperre Who needs to do something to fix that?

@kthyng

kthyng commented May 8, 2024

Oh I see, @Zeitsperre you are a reviewer and @abigailsnyder you are the author. @abigailsnyder can you address this comment in the references? I assume it came up previously too.

@kthyng

kthyng commented May 8, 2024

Hi! I'll take over now as Track Associate Editor in Chief to do some final submission editing checks. After these checks are complete, I will publish your submission!

  • Are checklists all checked off?
  • Check that version was updated and make sure the version from JOSS matches github and Zenodo.
  • Check that software archive exists, has been input to JOSS, and title and author list match JOSS paper (or purposefully do not).
  • Check paper.

@kthyng

kthyng commented May 8, 2024

Paper:

  • 1st paragraph: CMIP6-ScenarioMIP looks strange; is that the normal way to reference this? The rest of the paper seems to use a "/" instead of a "-".
  • paragraph 3 pg 2:
  • ( see (V Masson): the reference should be inline in the parentheses, otherwise remove the outer parenthesis and keep the reference parenthetical.
  • Also comment above ⬆️

@abigailsnyder

@kthyng Thank you! I finally have more time this morning to proofread and address these.

@abigailsnyder

@kthyng I have addressed the raised issues (and one additional: switching from quotation marks to backticks on line 75 of the PDF). I have done a proofread on my end and will complete another on the proof editorialbot produces here. Thank you!

@kthyng
Copy link

kthyng commented May 9, 2024

@editorialbot generate pdf

@editorialbot
Collaborator Author

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@kthyng

kthyng commented May 9, 2024

ok ready to go!

@kthyng

kthyng commented May 9, 2024

@editorialbot accept

@editorialbot
Collaborator Author

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot
Collaborator Author

Ensure proper citation by uploading a plain text CITATION.cff file to the default branch of your repository.

If using GitHub, a Cite this repository menu will appear in the About section, containing both APA and BibTeX formats. When exported to Zotero using a browser plugin, Zotero will automatically create an entry using the information contained in the .cff file.

You can copy the contents for your CITATION.cff file here:

CITATION.cff

cff-version: "1.2.0"
authors:
- family-names: Snyder
  given-names: Abigail C.
  orcid: "https://orcid.org/0000-0002-9034-9948"
- family-names: Dorheim
  given-names: Kalyn R.
  orcid: "https://orcid.org/0000-0001-8093-8397"
- family-names: Tebaldi
  given-names: Claudia
  orcid: "https://orcid.org/0000-0001-9233-8903"
- family-names: Vernon
  given-names: Chris R.
  orcid: "https://orcid.org/0000-0002-3406-6214"
doi: 10.5281/zenodo.11094934
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: Snyder
    given-names: Abigail C.
    orcid: "https://orcid.org/0000-0002-9034-9948"
  - family-names: Dorheim
    given-names: Kalyn R.
    orcid: "https://orcid.org/0000-0001-8093-8397"
  - family-names: Tebaldi
    given-names: Claudia
    orcid: "https://orcid.org/0000-0001-9233-8903"
  - family-names: Vernon
    given-names: Chris R.
    orcid: "https://orcid.org/0000-0002-3406-6214"
  date-published: 2024-05-09
  doi: 10.21105/joss.05525
  issn: 2475-9066
  issue: 97
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 5525
  title: "STITCHES: a Python package to amalgamate existing Earth system
    model output into new scenario realizations"
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.05525"
  volume: 9
title: "STITCHES: a Python package to amalgamate existing Earth system
  model output into new scenario realizations"

If the repository is not hosted on GitHub, a .cff file can still be uploaded to set your preferred citation. Users will be able to manually copy and paste the citation.

Find more information on .cff files here and here.
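Before committing the file, a quick structural sanity check can catch an obviously malformed CITATION.cff. The sketch below is a minimal, stdlib-only check that only confirms the top-level keys required by the CFF 1.2.0 schema appear in the text; it is not full schema validation, for which a dedicated tool such as cffconvert (with its `--validate` flag) is the better choice.

```python
# Minimal sanity check for a CITATION.cff file before pushing it.
# Only verifies that the required top-level CFF 1.2.0 keys appear at
# the start of a line; it does NOT validate the full schema.

CFF_TEXT = """\
cff-version: "1.2.0"
title: "STITCHES: a Python package to amalgamate existing Earth system
  model output into new scenario realizations"
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
doi: 10.5281/zenodo.11094934
authors:
- family-names: Snyder
  given-names: Abigail C.
"""

required = ("cff-version:", "title:", "message:", "authors:")
lines = CFF_TEXT.splitlines()

# A key counts as present if some line starts with it (top-level YAML keys
# are not indented, so this simple prefix test is sufficient here).
missing = [key for key in required
           if not any(line.startswith(key) for line in lines)]

print("missing required keys:", missing or "none")
```

In practice you would read `CFF_TEXT` from the repository's CITATION.cff instead of embedding it; the prefix-based test is deliberately crude but dependency-free.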

@editorialbot

🐘🐘🐘 👉 Toot for this paper 👈 🐘🐘🐘

@editorialbot

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.05525 joss-papers#5339
  2. Wait five minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.05525
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

@editorialbot editorialbot added accepted published Papers published in JOSS labels May 9, 2024
@kthyng

kthyng commented May 9, 2024

Congratulations on your new publication @abigailsnyder! Many thanks to @observingClouds and to reviewers @znicholls and @Zeitsperre for your time, hard work, and expertise!! JOSS wouldn't be able to function nor succeed without your efforts.

@kthyng kthyng closed this as completed May 9, 2024
@editorialbot

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.05525/status.svg)](https://doi.org/10.21105/joss.05525)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.05525">
  <img src="https://joss.theoj.org/papers/10.21105/joss.05525/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.05525/status.svg
   :target: https://doi.org/10.21105/joss.05525

This is how it will look in your documentation:

(rendered DOI badge image)

We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us please consider doing either one (or both) of the following:
