diff --git a/.github/actions/acceptance-test/action.yml b/.github/actions/acceptance-test/action.yml index 150c7bf7..3c772d35 100644 --- a/.github/actions/acceptance-test/action.yml +++ b/.github/actions/acceptance-test/action.yml @@ -15,10 +15,6 @@ runs: echo "Set timeouts: ${{ inputs.timeouts }}" shell: bash - - name: Append hosts file to enable "pass.local" on localhost - shell: bash - run: echo "127.0.0.1 pass.local" | sudo tee -a /etc/hosts - - name: Checkout pass-docker uses: actions/checkout@v3 with: diff --git a/docs/dev/authentication-authorization.md b/docs/dev/authentication-authorization.md index 291774bf..eae681c1 100644 --- a/docs/dev/authentication-authorization.md +++ b/docs/dev/authentication-authorization.md @@ -4,44 +4,26 @@ ### User Interface Authentication -Authentication for the user interface occurs through the use of an authentication service provider (SP), [pass-auth](https://github.com/eclipse-pass/pass-auth), written as a node application employing the express framework and acting as a reverse proxy in front of pass-core and other api services. +Authentication for the user interface occurs through the use of an authentication service provider (SP), [pass-core](https://github.com/eclipse-pass/pass-core). -`pass-auth` is configured to initiate a SAML exchange with a known identity provider (IDP) that supports [Shibboleth](https://shibboleth.atlassian.net/wiki/spaces/CONCEPT/overview). Although `pass-auth` itself is not a Shibboleth service provider specifically, it is a generalized SAML service provider that can handle specific Shibboleth interactions with an IDP. In response to a valid `authn` assertion against an IDP, `pass-auth` expects to receive and validate a Shibboleth SAML assertion against its assertion consumer service (ACS) URL. This assertion is expected to contain the following Shibboleth attributes: +`pass-core` is configured to initiate a SAML exchange with a known identity provider (IDP) that supports [Shibboleth](https://shibboleth.atlassian.net/wiki/spaces/CONCEPT/overview). Although `pass-core` itself is not a Shibboleth service provider specifically, it is a generalized SAML service provider that can handle specific Shibboleth interactions with an IDP. In response to a valid `authn` assertion against an IDP, `pass-core` expects to receive and validate a Shibboleth SAML assertion against its assertion consumer service (ACS) URL. This assertion is expected to contain the following Shibboleth attributes: ``` -'urn:oid:2.16.840.1.113730.3.1.241': 'displayName' -'urn:oid:1.3.6.1.4.1.5923.1.1.1.9': 'scopedAffiliation' -'urn:oid:0.9.2342.19200300.100.1.3': 'email' -'urn:oid:2.16.840.1.113730.3.1.3': 'employeeNumber' -'urn:oid:1.3.6.1.4.1.5923.1.1.1.1': 'employeeIdType' -'urn:oid:1.3.6.1.4.1.5923.1.1.1.6': 'eppn' -'urn:oid:2.5.4.42': 'givenName' -'urn:oid:2.5.4.4': 'surname' -'urn:oid:1.3.6.1.4.1.5923.1.1.1.13': 'uniqueId' -'urn:oid:0.9.2342.19200300.100.1.1': 'uniqueIdType' +'urn:oid:2.16.840.1.113730.3.1.241': 'Display name' +'urn:oid:1.3.6.1.4.1.5923.1.1.1.9': 'Scoped affiliation' +'urn:oid:0.9.2342.19200300.100.1.3': 'Email' +'urn:oid:2.16.840.1.113730.3.1.3': 'Employee id' +'urn:oid:1.3.6.1.4.1.5923.1.1.1.6': 'eduPersonPrincipalName' +'urn:oid:2.5.4.42': 'Given name' +'urn:oid:2.5.4.4': 'Surname' +'urn:oid:1.3.6.1.4.1.5923.1.1.1.13': 'Unique id' ``` -These Shibboleth attributes are used to locate a user in `pass-core` and set up a user object on the session. 
Initially, `pass-auth` will use these attributes to build shibboleth headers that allow the user service in `pass-core` to locate a user. Additionally, on all subsequent requests to `pass-core`, `pass-auth` will add the following headers using the Shibboleth attributes stored on a session: - -``` -'Displayname' -'Mail' -'Eppn' -'Givenname' -'Sn' -'Affiliation' -'Employeenumber' -'unique-id' -'employeeid' -``` - -These headers are required by `pass-core` to authenticate and authorize requests to its API. - -`pass-auth`, establishes a server side session and delivers a http-only cookie to the browser client which [`pass-ui`](https://github.com/eclipse-pass/pass-ui/) will use to establish a client side session in the user interface. This http-only cookie is delivered back to `pass-auth` by the user interface with every request which `pass-auth` will validate before it forwards on a request with the required Shibboleth headers to `pass-core`. +These Shibboleth attributes are used to locate a user in `pass-core` and set up a user object on the session. `pass-core` establishes a server-side session and delivers an http-only cookie to the browser client, which [`pass-ui`](https://github.com/eclipse-pass/pass-ui/) will use to establish a client-side session in the user interface. This http-only cookie is delivered back to `pass-core` by the user interface with every request. This series of interactions is depicted as follows: -![pass auth interactions diagram](https://user-images.githubusercontent.com/6305935/234353077-519a6987-96bc-44df-80e1-ee06920bcb40.png) +![authentication interactions diagram](pass_authn.png) ### REST API Authentication @@ -49,34 +31,34 @@ Every request to the [REST API](https://github.com/eclipse-pass/pass-core) must Requests to the API come from two types of clients, backend services and users. Requests from users must have already been authenticated with Shibboleth and have the headers specified above. If a request contains Shibboleth headers, it is considered trusted, authentication succeeds, it is associated with a user, and given the SUBMITTER role. If the user does not exist, it is created. If the user does exist, it is updated to reflect the information in the headers. If the request does not contain the Shibboleth headers, it undergoes HTTP basic authentication. There is one HTTP basic user defined with the BACKEND role for the backend services. -Mapping from Shibboleth headers to PASS users: - * displayName: Displayname header - * email: Mail header - * firstName: Givenname header - * lastName: Sn header - * username: Eppn header - * affiliations: DOMAIN, all values after splitting affiliation header on `;` +Mapping from Shibboleth attributes to PASS users: + * displayName: Display name + * email: Email + * firstName: Given name + * lastName: Surname + * username: Eppn + * affiliations: DOMAIN, all values after splitting affiliation on `;` * locatorIds: UNIQUE ID, INSTITUTIONAL_ID, EMPLOYEE_ID * role: SUBMITTER -The DOMAIN is the value of the Eppn header after `@`. -The UNIQUE_ID is `DOMAIN:unique-id:` joined to the value of the unique id header before `@`. -The INSTITUTIONAL_ID is `DOMAIN:eppn` joined to the value of the Eppn header before the `@`. -The EMOLOYEE_ID is `DOMAIN:employeeid` joined to the value of the Employeenumber header. +The DOMAIN is the value of the Eppn attribute after `@`. +The UNIQUE_ID is `DOMAIN:unique-id:` joined to the value of the unique id attribute before `@`.
+The INSTITUTIONAL_ID is `DOMAIN:eppn` joined to the value of the Eppn attribute before the `@`. +The EMPLOYEE_ID is `DOMAIN:employeeid` joined to the value of the Employee id attribute. The locatorIds are used to find an existing user in the system. If any of the locatorIds match an existing user, the user is considered to match. ### Example mapping -Request headers: +Shibboleth attributes: * Eppn: sallysubmitter@johnshopkins.edu - * Displayname: Sally M. Submitter + * Display name: Sally M. Submitter * Mail: sally232@jhu.edu - * Givenname: Sally - * Sn: Submitter + * Given name: Sally + * Surname: Submitter * Affiliation: FACULTY@johnshopkins.edu - * Employeenumber: 02342342 - * unique-id: sms2323@johnshopkins.edu + * Employee id: 02342342 + * Unique id: sms2323@johnshopkins.edu Resulting User: * affiliation: FACULTY@johnshopkins.edu, johnshopkins.edu @@ -98,10 +80,10 @@ Object permissions: | Type | Create | Read | Update | Delete | | ------- | ------- | ---- | ------- | ------- | -| Submission | BACKEND or SUBMITTER | any | BACKEND or owns submission | BACKEND | -| SubmissionEvent | BACKEND or owns submission | any | BACKEND or owns submission | BACKEND | -| File | BACKEND or owns submission | any | BACKEND or owns submission | BACKEND | -| Publication | BACKEND or SUBMITTER | any | BACKEND or SUBMITTER | BACKEND | +| Submission | BACKEND or SUBMITTER | any | BACKEND or owns submission | BACKEND or owns submission | +| SubmissionEvent | BACKEND or owns submission | any | BACKEND | BACKEND | +| File | BACKEND or owns submission | any | BACKEND or owns submission | BACKEND or owns submission | +| Publication | BACKEND or owns submission | any | BACKEND or owns submission | BACKEND or owns submission | | * | BACKEND | any | BACKEND | BACKEND | The permissions are all role based with the exception of "owns submission". By "owns submission" what is meant is that the user is the submitter or a preparer on a submission associated with the object. A submitter is the target of the submitter relationship on a Submission. A preparer is the target of the preparers relationship on a Submission. SubmissionEvent and File are associated with a submission through a submission relationship. The intent is to make sure submitters can only modify submissions which they have created or are explicitly allowed to help on. diff --git a/docs/dev/local_demo.md b/docs/dev/local_demo.md index 84ba7507..9842e2b0 100644 --- a/docs/dev/local_demo.md +++ b/docs/dev/local_demo.md @@ -1,18 +1,5 @@ # Setting Up a Local Demo System -## Configure pass.local - -You will need edit your local hosts file with - -```bash -127.0.0.1 pass.local -``` - -Instructions to edit the `/etc/hosts` file are avaiable for - -* [Windows hosts file](https://www.freecodecamp.org/news/how-to-find-and-edit-a-windows-hosts-file/) -* [Mac/Linux hosts file](https://setapp.com/how-to/edit-mac-hosts-file) - ## Install Docker The demo application runs on [Docker](https://www.docker.com) and [Docker Compose](https://docs.docker.com/compose/). @@ -41,67 +28,21 @@ cd pass-docker From here you can `git fetch` the latest code and `git checkout ` to switch between code branches. -There is a helper script [demo.sh](https://github.com/eclipse-pass/pass-docker/blob/main/demo.sh) -that wraps up the `docker-compose` command with the appropriate configuration files, and -you can run any [docker compose cli command](https://docs.docker.com/compose/reference/). - - -## Pull Latest Docker Images - -This will pull all the latest pass docker images.
- -```bash -./demo.sh pull -``` +Look at the [pass-docker](https://github.com/eclipse-pass/pass-docker/) documentation for how to use the +`docker compose` command to start PASS. You can run any [docker compose cli command](https://docs.docker.com/compose/reference/). ## Start Pass -This will start pass in the background - -```bash -./demo.sh up -d -``` - -At this point you will need to watch the logs to wait until the -`pass-core` shows that it has started. This is something we -are actively working to address. +Pull Docker images and start PASS in the background: ```bash -./demo.sh logs -f -``` - -It might take a while, but once the logs _stop_ with the message below -then it is ready to load the base data. - -``` -[main] [Pass, ] INFO org.eclipse.pass.main.Main.logStarted - Started Main in 69.863 seconds (JVM running for 79.59) -``` - -That base data can now be loaded using the following command. - -```bash -./demo.sh up loader -``` - -If run successfully it should exit with a message like - -```bash -loader | ### ./data/submissions.json -loader | Reading file ./data/submissions.json -loader | Request: [POST] (http://pass-core:8080/data/submission) -loader | Request: [POST] (http://pass-core:8080/data/submission) -loader | Request: [POST] (http://pass-core:8080/data/submission) -loader | Request: [POST] (http://pass-core:8080/data/submission) -loader | > Response (http://pass-core:8080/data/submission) -loader | > Response (http://pass-core:8080/data/submission) -loader | > Response (http://pass-core:8080/data/submission) -loader | > Response (http://pass-core:8080/data/submission) -loader exited with code 0 +docker compose -f docker-compose.yml -f eclipse-pass.local.yml up -d --no-build --quiet-pull --pull always ``` +You will see various containers start. Once the `loader` container has started, PASS should be available. ## Open browser -In your browser, navigate to [pass.local](https://pass.local). +In your browser, navigate to [http://localhost:8080](http://localhost:8080). ![Welcome to PASS](../assets/passapp/welcome_screen.png) @@ -113,27 +54,17 @@ And then you are authenticated and can view the PASS dashboard. ![PASS dashboard](../assets/passapp/dashboard.png) - ## Shutting down the demo -The running demo can be stopped with the following command +The running demo can be stopped with the following command: ```bash -./demo.sh down +docker compose -f docker-compose.yml -f eclipse-pass.local.yml down -v ``` -## Troubleshooting - -### `WARN[0000]` can be ignored +This will also delete volumes. -You might see warnings about unset variables. That is OK, but also feel free -to push a PR to address the code to no longer display these warnings. - -```bash -WARN[0000] The "METADATA_SCHEMA_URI" variable is not set. Defaulting to a blank string. -WARN[0000] The "EMBER_GIT_BRANCH" variable is not set. Defaulting to a blank string. -WARN[0000] The "EMBER_GIT_REPO" variable is not set. Defaulting to a blank string. -``` +## Troubleshooting ### Cannot connect to the Docker daemon @@ -154,16 +85,4 @@ If you see an error like failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount2714819657/Dockerfile: no such file or directory ``` -It's likely pulling all the images did not complete successfully. Re-run - -```bash -./demo.sh pull -``` - - -## References - -* [Pass-docker prerequisites](https://github.com/eclipse-pass/pass-docker#prerequisites) - - - +It's likely pulling all the images did not complete successfully.
Try restarting. diff --git a/docs/dev/pass_authn.png b/docs/dev/pass_authn.png new file mode 100644 index 00000000..0edef0b9 Binary files /dev/null and b/docs/dev/pass_authn.png differ diff --git a/docs/dev/pass_authn.svg b/docs/dev/pass_authn.svg new file mode 100644 index 00000000..c956bf8f --- /dev/null +++ b/docs/dev/pass_authn.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/docs/dev/running-pass-ui-on-your-host-machine.md b/docs/dev/running-pass-ui-on-your-host-machine.md index fe6e9ffe..1e15e2bf 100644 --- a/docs/dev/running-pass-ui-on-your-host-machine.md +++ b/docs/dev/running-pass-ui-on-your-host-machine.md @@ -1,26 +1,26 @@ In order to run pass-ui outside of the docker environment: -1) run docker compose commands with the local override files: +1) Configure the docker environment + +You will need to configure `pass-core` to load the UI from `localhost:4200`. + +This can be done by setting the environment variable `PASS_CORE_APP_LOCATION` to `http://host.docker.internal:4200/` in `.env`. + +Then simply use docker compose as usual. + ``` docker compose -f docker-compose.yml -f eclipse-pass.local.yml ``` -The local override yml and env contain entries relevant to allowing the pass-auth container to access the host machine's network and directing requests to the host machine. In `eclipse-pass.local.yml` the relevant entry is the `extra_hosts`: +You may need to investigate other ways of accessing the host machine network; see [the Docker networking documentation](https://docs.docker.com/desktop/networking/#i-want-to-connect-from-a-container-to-a-service-on-the-host). -``` - auth: - env_file: - - .eclipse-pass.local_env - extra_hosts: - - "host.docker.internal:host-gateway" -``` +You may also consider stopping the pass-ui container. -In `.eclipse-pass.local_env` the relevant env var is `PASS_UI_URL=http://host.docker.internal:4200/`. Note, this will work on a MacOS environment, but may not work on all environments. +2) Run pass-ui on your host machine -You may need to investigate other ways of accessing the host machine network, [see](https://docs.docker.com/desktop/networking/#i-want-to-connect-from-a-container-to-a-service-on-the-host). Start ember on port 4200. -2) run pass-ui on your host machine with a proxy flag: ``` -ember s --proxy=http://pass.local +ember s ``` + diff --git a/docs/dev/ui-api-usage.md b/docs/dev/ui-api-usage.md deleted file mode 100644 index 4ac4cce7..00000000 --- a/docs/dev/ui-api-usage.md +++ /dev/null @@ -1,417 +0,0 @@ -A UI centric view on PASS APIs as of PASS v0.1.0. These are a list of services consumed by the UI along with brief descriptions of each. These descriptions do not include full configurations of the services, but includes only those configurations needed for the UI to work, typically just the URL endpoint for the services. This documentation is meant to describe the APIs as they were for our initial versions of PASS to inform ongoing development and not necessarily meant to be carried forward.
- -- [Metadata Schema Service](#metadata-schema-service) -- [DOI Service](#doi-service) -- [User Service](#user-service) -- [Download Service](#download-service) - - [Lookup](#lookup) - - [Download](#download) -- [Policy Service](#policy-service) - - [Policies](#policies) - - [Repositories](#repositories) -- [PASS Data Entities](#pass-data-entities) - - [Create](#create) - - [Read](#read) - - [Single](#single) - - [Multiple](#multiple) - - [Update](#update) - - [Delete](#delete) -- [Misc operations](#misc-operations) - - [Setup Fedora](#setup-fedora) - - [Clear](#clear) - -# Metadata Schema Service -https://github.com/eclipse-pass/pass-metadata-schemas - -Retrieve an ordered list of relevant [JSON schemas](https://json-schema.org/) describing metadata given a list of PASS Repositories. - -| | | -| --- | --- | -| URL | `/schemaservice` | -| config | `SCHEMA_SERVICE_URL` | -| Method | POST | -| Headers | "Content-Type: application/json; charset=utf-8" OR
"Content-Type: text/plain" | -| Parameters | `?merge=true` (optional) | -| body | Array of PASS Repository object IDs (for type json) OR
List of PASS Repository object IDs, separated by newline characters (plain text) | -| Response | List of JSON schemas | - -Sample request: -``` -POST - Content-Type: application/json; charset=utf-8 - -Body: -[ - "https://pass.jhu.edu/fcrepo/rest/repositories/foo", - "https://pass.jhu.edu/fcrepo/rest/repositories/bar" -] -``` - -Sample response: -``` JSON -[ - { - "title": "Common schema", - "type": "object", - "definitions": …(all of the common fields) - }, - { - "title": "Foo schema", - "type": "object", - "definitions": …. - }, - { - "title": "NIHMS schema", - "type": "object", - "$schema": "http://json-schema.org/draft-07/schema#", - "definitions": { - "form": { - "title": "This is the title Alpaca displays in the UI", - "type": "object", - "required": ["journal-NLMTA-ID"], - "properties": { - "journal-NLMTA-ID": {"$ref": "http://localhost:8080/schemas/global.json#/properties/journal-NLMTA-ID"} - }, - "options": {"$ref": "http://localhost:8080/schemas/global.json#/options"} - }, - "prerequisites": { - "title": "prerequisites", - "type": "object", - "required": ["authors"], - "properties": { - "authors": {"$ref": "http://localhost:8080/schemas/global.json#/properties/authors"} - } - } - }, - "allOf": [ - {"$ref": "http://localhost:8080/schemas/global.json#"}, - {"$ref": "#/definitions/prerequisites"}, - {"$ref": "#/definitions/form"} - ] - } -] -``` - -The intent of these schemas is to drive a dynamic form system for metadata entry in the UI. We currently use [AlpacaJS](http://www.alpacajs.org/) to parse the schemas and generate the forms client side - -If the `merge` parameter is set with any value, the service will merge all relevant schemas into a single schema and Response that merged schema to the requesting client. - -* [JSON Schema](https://json-schema.org/) -* [AlpacaJS](http://www.alpacajs.org/) - -**Errors** - -| Code | Description | ---- | --- -`409` | Service was unable to merge the schemas together. This will only occur if the `merge` parameter is set. If this error occurs, the client should issue a new request for the unmerged schemas - ---- - -# DOI Service - -https://github.com/eclipse-pass/pass-doi-service - -Interacts with Crossref to get data about a given DOI. [See example DOI data](https://gist.github.com/jabrah/c268c6b027bd2646595e266f872c883c) -Get an ID to a PASS Journal object represented by the DOI and the raw Crossref data for the given DOI - -The UI does some data transformation to trim the Crossref data and light processing to fit it into our PASS Publication model before persisting the Publication. - -| | | -| --- | --- | -| URL | `/doiservice/journal` | -| Config | `DOI_SERVICE_URL` | -| Method | `GET` | -| Parameters | `?doi` (string) a DOI | -| Headers | `Accept: "application/json; charset=utf-8" | -| Body | | -| Response |
{
"crossref": {
"message": { ... }, // Raw data from Crossref[^xref]
},
"journal-id": ""
}
| - -In the UI, we ultimately process the Crossref data into a Publication model object and add anything else we can into the Submission's metadata blob to fit the submission's known metadata schema. Should these transformations be done server side, or is there a reason that it needs to be done client side (client review & approval?) - see `doi#doiToMetadata()` - -[^xref]: [Crossref data format](https://github.com/CrossRef/rest-api-doc/blob/master/api_format.md ) - ---- - -# User Service - -Get the currently logged in User. - -| | | -| --- | --- | -| URL | `/pass-user-service/whoami` | -| Config | `USER_SERVICE_URL` | -| Method | `GET` | -| Parameters | `userToken`: auth token | -| Headers | `Accept: "application/json; charset=utf-8"` | -| Body | | -| Response | A PASS User object | - -# Download Service - -https://github.com/eclipse-pass/pass-download-service - -Allows client lookups and downloads of previously uploaded files associated with Submissions. - -## Lookup - -Get a list of open access copies for the given DOI - -| | | -| --- | --- | -| URL | `/downloadservice/lookup` | -| Config | `MANUSCRIPT_SERVICE_LOOKUP_URL` | -| Method | `GET` | -| Parameters | `userToken`: auth token | -| Headers | `Accept: "application/json; charset=utf-8"` | -| Body | | -| Response | Array:
[
{
"url": "",
"name": "",
"type": "",
"source": "",
"repositoryLabel": ""
},
...
]
| - -* `url`: URL where the manuscript can be retrieved -* `name` file name -* `type` file MIME type -* `source`: Source of the file (e.g. "Unpaywall") -* `repositoryLabel`: Human readable label of the repository where the manuscript is stored - -## Download - -| | | -| --- | --- | -| URL | `/downloadservice/download` | -| Config | `MANUSCRIPT_SERVICE_DOWNLOAD_URL` | -| Method | `GET` | -| Parameters |

`doi` (string) publication DOI

`url` download URL of the file

| -| Headers | | -| Body | | -| Response | The file, downloaded via the backend download service | - -Response - -Not sure whether or not this is used by the UI - ---- - -# Policy Service - -https://github.com/eclipse-pass/pass-policy-service - -[Included docs](https://github.com/eclipse-pass/pass-policy-service/blob/main/web/README.md) - -Config: `POLICY_SERVICE_URL` base URL for the policy service - -## Policies - -Get a list of policies that apply to a submission, given its PASS ID (URI). Typically only used with in-progress submissions. - -| | | -| --- | --- | -| URL | `/policy-service/policies` | -| Config | `POLICY_SERVICE_POLICY_ENDPOINT` | -| Method | `GET` or `POST` | -| Parameters | `submission` a submission ID | -| Headers | `Content-Type: application/x-www-form-urlencoded` | -| Body | `submission=submission_id` if a POST | -| Response | An array of PASS Policy object IDs | - -Sample request: -``` -GET /policy-service/policies?submission= -``` - -Sample response: -``` JSON -[ - { - "id": "http://pass.local:8080/fcrepo/rest/policies/2d/...", - "type": "funder" - }, - { - "id": "http://pass.local:8080/fcrepo/rest/policies/63/...", - "type": "institution" - } -] -``` - -The `type` property identifies the source of the policy and will only take the values of `funder` or `institution`. - -## Repositories - -| | | -| --- | --- | -| URL | `policy-service/repositories` | -| Config | `POLICY_SERVICE_REPOSITORY_ENDPOINT` | -| Method | `GET` or `POST` | -| Parameters | `submission` submission ID | -| Headers | `Content-Type: application/x-www-form-urlencoded` | -| Body | `submission=submission_id` if a POST | -| Response | JSON object containing required, optional, and choice repositories | - -Sample request: -``` -GET /policy-service/repositories -``` - -Sample response: -``` JSON -{ - "required": [ - { - "url": "http://pass.local/fcrepo/rest/repositories/1", - "selected": true - } - ], - "one-of": [ - [ - { - "url": "http://pass.local/fcrepo/rest/repositories/2", - "selected": true - }, - { - "url": "http://pass.local/fcrepo/rest/repositories/3", - "selected": false - } - ], - [ - { - "url": "http://pass.local/fcrepo/rest/repositories/4", - "selected": true - }, - { - "url": "http://pass.local/fcrepo/rest/repositories/5", - "selected": false - } - ] - ], - "optional": [ - { - "url": "http://pass.local/fcrepo/rest/repositories/6", - "selected": true - } - ] -} -``` -* `selected` status denotes default choices, if the user is presented with options -* `required` submissions MUST be depositied into these repositories -* `one-of` array of arrays presenting choice-sets. The submission must be deposited into at least one from each choice-set. In this example, the submission must be deposited into repository (2 OR 3) AND (4 OR 5). -* `optional` the submission MAY be submitted to these repositories - ---- - -# PASS Data Entities - -All CRUD requests for PASS data entities route through the [`pass-ember-adapter`](https://github.com/eclipse-pass/pass-ember-adapter). - -Headers: -* `Accept: application/ld+json; profile="http://www.w3.org/ns/json-ld#compacted"` -* `Prefer: return=representation; omit="http://fedora.info/definitions/v4/repository#ServerManaged"` -* `Authorization=<...>` - -All operations use these headers, unless otherwise specified. 
- -| Operation | Adapter Function | URL | Headers | Parameters -| --- | --- | -| - -## Create - -| | | -| --- | --- | -| Adapter function | `#createRecord` | -| URL | `/` | -| Config | | -| Method | `POST` | -| Parameters | | -| Headers | +`Content-Type: application/ld+json; charset=utf-8` | -| Body | JSON-LD serialized data | -| Response | Fedora responds with the newly created entity ID in the `response.Location` header | - -## Read - -### Single - -| | | -| --- | --- | -| Adapter function | `#findRecord` | -| URL | Entity ID | -| Config | | -| Method | `GET` | -| Parameters | | -| Headers | | -| Body | | -| Response | The entity, serialized as an Ember model object | - -### Multiple - -| | | -| --- | --- | -| Adapter function | `#query` | -| URL | `/pass/_search` | -| Config | `FEDORA_ADAPTER_ES` | -| Method | `POST` | -| Parameters | | -| Headers | `Content-Type: application/json; charset=utf-8` | -| Body | An Elasticsearch query in JSON format | -| Response | List of matching entities | - -| | | -| --- | --- | -| Adapter function | `#findAll` | -| URL | `/pass/_search` | -| Config | `FEDORA_ADAPTER_ES` | -| Method | `POST` | -| Parameters | | -| Headers | `Content-Type: application/json; charset=utf-8` | -| Body | An Elasticsearch query that will match all entities of a given model | -| Response | List of entities | - -## Update - -| | | -| --- | --- | -| Adapter function | `#updateRecord` | -| URL | Entity ID | -| Config | | -| Method | `PATCH` | -| Parameters | | -| Headers | `Content-Type: application/merge-patch+json; charset=utf-8" | -| Body | JSON-LD serialized entity | -| Response | | - -## Delete - -Will delete the entity then delete it's tombstone. - -| | | -| --- | --- | -| Adapter function | `#deleteRecord` | -| URL | Entity ID AND (entity_ID/fcr:tombstone) | -| Config | | -| Method | `DELETE` | -| Parameters | | -| Headers | | -| Body | | -| Response | | - -# Misc operations - -## Setup Fedora - -`#setupFedora` - -Remnant of the original demo code, now only called in tests. Calls the Delete then Create functions for all known model types in order to create the fresh containers in Fedora. - -## Clear - -Only used in testing for this adapter. - -| | | -| --- | --- | -| Adapter function | `#clearElasticsearch` | -| URL | `/_doc/_delete_by_query?conflicts=proceed&refresh` | -| Config | | -| Method | `POST` | -| Parameters | Static/baked into URL | -| Headers | Content-Type: application/json` | -| Body | `{ query: { match_all: {} } }` | -| Response | | - - diff --git a/docs/infra/docker-composer-to-k8s-manifest.md b/docs/infra/docker-composer-to-k8s-manifest.md deleted file mode 100644 index 1cf7ec66..00000000 --- a/docs/infra/docker-composer-to-k8s-manifest.md +++ /dev/null @@ -1,127 +0,0 @@ -# Migrating from Docker Compose to k8s Manifest. - -The [docker-pass](https://github.com/eclipse-pass/pass-docker) is orchestrated using -[Docker Compose](https://docs.docker.com/compose/). The team is now moving towards -[kubernetes](https://kubernetes.io) and this article documents our attempt to -convert our `docker-compose.yml` into [Kubernetes configuration files](https://github.com/eclipse-pass/pass-docker/tree/main/k8s/). - -The migration was started with an automated tool named [Kompose](https://kompose.io) -and then has continued with a manual effort. - -## Migration Scripts - -The following instructions are repo specific for migrating the -docker-compose.yml into k8s-maniftest.yml. 
Should we decide to -drop support for docker-compose then consider dropping this section -but keeping the generic documentation below. - -### Pass-Docker - -From - -```bash -USER_SERVICE_URL=pass.local \ - docker-compose config | \ - tail -n +2 > docker-compose-k8s.yaml && \ - kompose convert -f docker-compose-k8s.yaml -``` - -## Instructions - -This will outline various techniques to help convert a docker-compose.yml -file into a k8s-manifest.yml. - -We are using [Kompose](https://kompose.io) -to help with the migration. - -```bash -brew install kompose -``` - -From there, locate your `docker-compose.yml` file, -for example, [docker-pass](https://github.com/eclipse-pass/pass-docker/blob/main/docker-compose.yml) -and then run - -```bash -docker-compose config > docker-compose-k8s.yaml # if you have a .env file -kompose convert -f docker-compose-k8s.yaml -``` - -Kompose does not handle .env files so we will want to resolve that first. - -## Troubleshooting - -### array items[0,1] must be unique - -You might see an error like - -``` -FATA services.fcrepo.ports array items[0,1] must be unique -``` - -Then most likely you need to _realize_ your docker-compose file -to ensure that your `.env` variables are properly considered -[as kompose does not do that for you](https://github.com/kubernetes/kompose/issues/1164) - - -### 'Ports': No port specified: : - -Another error you might see - -``` -* error decoding 'Ports': No port specified: : -``` - -See above, as [kompose does not handle a .env file](https://github.com/kubernetes/kompose/issues/1164) - -### WARN[0000] The "XXXX" variable is not set. - -If you see an error about missing environment variables like - -```bash -WARN[0000] The "USER_SERVICE_URL" variable is not set. Defaulting to a blank string. -``` - -Then when consider adding them to your config converter, for example. - -```bash -USER_SERVICE_URL=pass.local \ - docker-compose config > docker-compose-k8s.yaml -``` - -### cannot unmarshal !!str `pass-do...` into config.RawService - -The `docker-compose config` might add an invalid `name: xxx` attributes, -so we will just throw it out - -```bash -docker-compose config | \ - tail -n +2 > docker-compose-k8s.yaml -``` - -## Post-Conversion - -These are issues found after using Kompose to convert the docker-compose configuration into -Kubernetes manifests. - -### No external IP - -The service created for the httpd proxy container is generated to expose ports 80 and 443 within -the cluster instead of making those ports accessible publicly. - -To fix this, I modified `proxy-service.yaml` to set the service `type` to `LoadBalancer`. - -### Persistent Volume Claim remains stuck in Pending state - -Run `kubectl describe pvc` to see the associated errors. If you see an OutOfRange warning then -it should have a description that tells you the minimum supported volume size ('1Gi' for me). - -I updated all persistent volume claims to use 1Gi for storage. - -### Persistent Volume not found - -Some deployments require that a persistent volume be in place prior to their creation (e.g., assets-deployment). 
- -## Reference - -* [Translate a Docker Compose File to Kubernetes Resources](https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/) \ No newline at end of file diff --git a/docs/infra/ec2.md b/docs/infra/ec2.md index e6f5d88c..d51e7e05 100644 --- a/docs/infra/ec2.md +++ b/docs/infra/ec2.md @@ -4,46 +4,16 @@ This will document how to run PASS in EC2 by reusing the existing pass-docker se ## Background -The existing pass-docker setup is intended for local deployment. It uses a hard-coded hostname of `pass.local`. It includes a customized Service Provider and Identity Provider for authentication as well as DSpace for testing deposits. All of the other images are production images just configured for `pass.local`. -See the [pass-docker project](https://github.com/eclipse-pass/pass-docker) for more information. +The existing pass-docker setup is intended for local deployment. It uses a hard-coded hostname of `localhost`. It includes a customized Identity Provider for authentication as well as DSpace for testing deposits. All of the other images are production images just configured for `localhost`. See the [pass-docker project](https://github.com/eclipse-pass/pass-docker) for more information. ## Configuration -First the server must have a hostname and certificate. Then pass-docker needs to be modified to use this hostname instead of `pass.local`. Docker images like `httpd-proxy`, `sp`, and `idp` have hard-codeds configurations for `pass.local` included in their pass-docker build setup. To avoid building new images, configuration files within those images that need to be changed, can be replaced by bind mounted ones modified with the new hostname. The rest of the images can be configured just with environment variables in `.env`. +First the server must have a hostname and certificate. Then the pass-core image, which is a Spring Boot application, needs to be modified to use this hostname instead of `localhost`. +See [the Spring Boot SSL documentation](https://docs.spring.io/spring-boot/reference/features/ssl.html) for instructions. -The simplest way to accomplish the hostname change is to do a global string replace of all files in pass-docker of `pass.local` to the new hostname. Then the files that need to be bind mounted are the files that have changed (find them with `git status`) that are part of `proxy`, `idp`, or `sp`. To find the location of these configuration files in the image, examine the relevant Dockerfile. In addition the certificate and key for the host will need to be bind mounted into the proxy. Generally the bind mounts should look like the below.
- -Bind mounts for proxy: -``` -volumes: - - ./newhost.crt:/etc/httpd/ssl/domain.crt - - ./newhost.key:/etc/httpd/ssl/domain.key - - ./httpd-proxy/etc-httpd/conf.d/httpd.conf:/etc/httpd/conf.d/httpd.conf -``` - -Bind mounts for idp: -``` - volumes: - - ./idp/common/shibboleth-idp/conf/cas-protocol.xml:/opt/shibboleth-idp/conf/cas-protocol.xml - - ./idp/common/shibboleth-idp/metadata/sp-metadata.xml:/opt/shibboleth-idp/metadata/sp-metadata.xml - - ./idp/jhu/shibboleth-idp/conf/idp.properties:/opt/shibboleth-idp/conf/idp.properties - - ./idp/jhu/shibboleth-idp/metadata/idp-metadata.xml:/opt/shibboleth-idp/metadata/idp-metadata.xml -``` - -Bind mounts for sp: -``` -volumes: - - ./sp/2.6.1/etc-httpd/conf.d/sp.conf:/etc/httpd/conf.d/sp.conf - - ./sp/2.6.1/etc-shibboleth/idp-metadata.xml:/etc/shibboleth/idp-metadata.xml - - ./sp/2.6.1/etc-shibboleth/shibboleth2.xml:/etc/shibboleth/shibboleth2.xml -``` - -## Additional complications - -For the moment, instead of starting from pass-docker main, you may wish to start from the [200-test-deployment-of-pass-in-ec2-to-validate-startup-procedures-outside-of-jhu branch](https://github.com/eclipse-pass/pass-docker/tree/200-test-deployment-of-pass-in-ec2-to-validate-startup-procedures-outside-of-jhu). That branch contains a few fixes and updates and has the stack configured for sandbox.library.jhu.edu. A global string replace with a new hostname and a new certificate should be all that is needed. +Variables that mention `localhost` in the `.env` file need to be changed. ## Running -Just do a `docker-compose up` like the standard pass-docker setup. Then you will be able to access all the url as described in the [pass-docker project](https://github.com/eclipse-pass/pass-docker). Note that dspace is on 8181 and fcrepo is on 8080 if you want to access them. - +Follow the [pass-docker project](https://github.com/eclipse-pass/pass-docker) instructions to run.
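+As a minimal sketch of the kind of hostname/certificate change described above (assuming your compose setup passes environment variables through to the pass-core container and that pass-core honors the standard Spring Boot `server.ssl.*` properties; the variable names and paths below are illustrative, not taken from pass-docker), the `.env` entries might look like:
+
+```
+# Hypothetical .env entries for serving pass-core over HTTPS on a real hostname.
+# Spring Boot relaxed binding maps SERVER_SSL_KEY_STORE to server.ssl.key-store, etc.
+SERVER_SSL_KEY_STORE=/config/newhost.p12
+SERVER_SSL_KEY_STORE_PASSWORD=changeit
+SERVER_SSL_KEY_STORE_TYPE=PKCS12
+```
+
+The keystore would also need to be mounted into the container; adjust names and paths to match how your compose files actually wire configuration into pass-core.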