[Issue #2029] Merge Nava fork to HHS #2173
Commits on Sep 18, 2024
Commit cc16fee
Commit 112a766
Commit 7e4e622
[Issue HHS#2082]: finish e2e tests (#38)
Fixes HHS#2082 - add some of the relevant tests from the bug bash
Commit fd25c65
[Issue #]: sortby posted date desc default (#4)
Fixes # - Update sortby labels and ordering
Commit 72b8d72
Upgrade dependencies for API (May 21, 2024) (#48)
Needed to upgrade dependencies for the API for a grype issue: https://github.com/navapbc/simpler-grants-gov/actions/runs/9180615894/job/25245519194?pr=47 As usual, just ran `poetry update`
Commit e29a1fa
[Issue HHS#2089] Setup opensearch locally (#39)
Fixes HHS#2089

- Set up a search index to run locally via Docker
- Updated the makefile to automatically initialize the index, and added a script that waits for the index to start up before proceeding
- Set up a very basic client for connecting to the search index (will be expanded in subsequent PRs)
- Basic tests / test utils to verify it is working (also will be expanded)

This is the first step in getting the search index working locally. This actually gets it running, and the client works; we just aren't doing anything meaningful with it yet besides tests. It doesn't yet create an index that we can use, except in the test. However, if you want to try out a search index, you can go to http://localhost:5601/app/dev_tools#/console (after running `make init`) to run some queries against the (one-node) cluster. https://opensearch.org/docs/latest/getting-started/communicate/#sending-requests-in-dev-tools provides some examples of how to create and use indexes that you can follow.
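For illustration, a minimal sketch of connecting to such a local cluster with the `opensearch-py` package (host, port, and flags here are the OpenSearch defaults, not necessarily this repo's actual client configuration):

```python
from opensearchpy import OpenSearch

# Connect to the single-node cluster started locally via Docker.
# Host/port are the OpenSearch defaults; adjust to match local config.
client = OpenSearch(
    hosts=[{"host": "localhost", "port": 9200}],
    use_ssl=False,
    verify_certs=False,
)

# Simple connectivity checks: cluster health and node info.
print(client.cluster.health())
print(client.info())
```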
Commit 2c69c14
[Issue HHS#2093] Setup the opportunity v1 endpoint which will be backed by the index (#44)

Fixes HHS#2093

Made a new set of v1 endpoints that are basically copy-pastes of the v0.1 opportunity endpoints. Some changes I want to make to the schemas wouldn't make sense without the search index (e.g. adding the filter counts to the response). As we have no idea what the actual launch of the v0.1 endpoint is going to look like, I don't want to mess with any of that code or try a weird hacky approach that needs to account for both the DB implementation and the search index one. Also, I think we've heard that with the launch of the search index we'll be "officially" launched, so we might as well call it v1 at the same time.

Other than adjusting the names of a few schemas in v0.1, I left that implementation alone and just copied the boilerplate that I'll fill out in subsequent tickets. The endpoint appears locally:

![Screenshot 2024-05-20 at 12 18 32 PM](https://github.com/navapbc/simpler-grants-gov/assets/46358556/86231ec1-417a-41c6-ad88-3d06bb6214e5)

---------

Co-authored-by: nava-platform-bot <[email protected]>
Commit 3626e2a
[Issue HHS#2092] Populate the search index from the opportunity tables (#47)

Fixes HHS#2092

Setup a script to populate the search index by loading opportunities from the DB, jsonify'ing them, loading them into a new index, and then aliasing that index. Several utilities were created to simplify working with the OpenSearch client (a wrapper for setting up configuration / patterns).

Iterating over the opportunities and doing something with them is a common pattern in several of our scripts, so nothing is really different there. The meaningful implementation is how we handle creating and aliasing the index. In OpenSearch you can give any index an alias (including putting multiple indexes behind the same alias). The approach is pretty simple:

* Create an index
* Load opportunities into the index
* Atomically swap the index backing the `opportunity-index-alias`
* Delete the old indexes if they exist

This approach means that our search endpoint just needs to query the alias, and we can keep making new indexes and swapping them out behind the scenes. Because we could remake the index every few minutes, if we ever need to re-configure things like the number of shards, or any other index-creation configuration, we just update that in this script and wait for it to run again.

I ran this locally after loading `83250` records, and it took about 61s. You can run this locally yourself by doing:

```sh
make init
make db-seed-local
poetry run flask load-search-data load-opportunity-data
```

If you'd like to see the data, you can test it out on http://localhost:5601/app/dev_tools#/console - here is an example query that filters by the word `research` across a few fields and filters to just forecasted/posted:

```json
GET opportunity-index-alias/_search
{
  "size": 25,
  "from": 0,
  "query": {
    "bool": {
      "must": [
        {
          "simple_query_string": {
            "query": "research",
            "default_operator": "AND",
            "fields": [
              "agency.keyword^16",
              "opportunity_title^2",
              "opportunity_number^12",
              "summary.summary_description",
              "opportunity_assistance_listings.assistance_listing_number^10",
              "opportunity_assistance_listings.program_title^4"
            ]
          }
        }
      ],
      "filter": [
        { "terms": { "opportunity_status": ["forecasted", "posted"] } }
      ]
    }
  }
}
```
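A minimal sketch of that create/load/swap pattern with the `opensearch-py` client (the function name and timestamped index naming are illustrative, not the repo's actual utilities):

```python
import time
from opensearchpy import OpenSearch, helpers

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])
ALIAS = "opportunity-index-alias"

def swap_in_new_index(docs: list[dict]) -> None:
    # 1. Create a brand-new index; a timestamp suffix keeps names unique.
    new_index = f"opportunity-index-{int(time.time())}"
    client.indices.create(index=new_index)

    # 2. Bulk-load the jsonify'd opportunity records into it.
    helpers.bulk(client, ({"_index": new_index, "_source": d} for d in docs))
    client.indices.refresh(index=new_index)

    # 3. Atomically point the alias at the new index and drop the old ones.
    old_indexes = []
    if client.indices.exists_alias(name=ALIAS):
        old_indexes = list(client.indices.get_alias(name=ALIAS).keys())
    actions = [{"add": {"index": new_index, "alias": ALIAS}}]
    actions += [{"remove_index": {"index": i}} for i in old_indexes]
    client.indices.update_aliases(body={"actions": actions})
```

Because the alias update is a single atomic action, queries against `opportunity-index-alias` never see a half-built index.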
Commit 251524d
Fixes #6

1. Move all pages to the [`[locale]`](https://next-intl-docs.vercel.app/docs/getting-started/app-router/with-i18n-routing#getting-started) folder
2. Add a [`generateMetadata()`](https://nextjs.org/docs/app/api-reference/functions/generate-metadata#generatemetadata-function) function and [next-intl `getTranslations()`](https://next-intl-docs.vercel.app/docs/environments/metadata-route-handlers#metadata-api) implementation
   * @rylew1 commented we could remove this from each page. To do that we could use [prop arguments](https://nextjs.org/docs/app/api-reference/functions/generate-metadata#with-segment-props) and update the metadata based on the param. There is also more we can do with the metadata to properly add [app links and twitter cards](https://nextjs.org/docs/app/api-reference/functions/generate-metadata#applinks). TODO: create ticket
4. Replace i18n's `useTranslation` with next-intl's `useTranslations`
5. Remove hard-coded strings that were present b/c we were still b/w i18next and next-intl

* [Move process page to app](32ba4ee)
* [Move research page to app](5b5ad1a)
* [Move health page to app](a3e6255)
* [Move feature flag page to app](395baed)
* [Move search page to app router](1e261e3)
* [Move newsletter pages to app router](b509ef8)
* [Move home page to app router](de1be98)
* [Move home page to app router](74077ae)
* [Move 404 page to app router](ccbc956)

1. [Delete hello api](5bad6ea)
   * This was left from the project creation
2. [Add USWDS icon component](0120c7b)
   * As noted in a slack discussion, when trying to access [one of the icons](https://github.com/trussworks/react-uswds/blob/main/src/components/Icon/Icons.ts) using `<Icon.Search/>`, next errors: `You cannot dot into a client module from a server component. You can only pass the imported name through`. I'm not sure why it thinks the Icon component is a client module. [Dan A. suggests](vercel/next.js#51593 (comment)) trussworks should re-export as named exports. I tried importing the SVGs directly from the trussworks library, but [svgr requires a custom webpack config](https://react-svgr.com/docs/next/), which is a road I didn't want to go down, and [react svg](https://www.npmjs.com/package/react-svg) threw an error in the app router 😥.
   * I implemented @sawyerh's [suggestion](0120c7b#diff-dadb35bd2f3f61f2c179f033cd0a2874fc343974236f2fb8613664703c751429), which did not work initially b/c next reported the USWDS icon was corrupt; that was fixed by adding a `viewBox` to the svg element 😮💨.

* [Remove unused WtGIContent](75490f7)
* [Move layout and update for app router](af112fd)
* [Update global components for the app router](40119e6)
* [Move i18n strings for app router](eb3c07c)
* [Adds next-intl config and removes i18n](c546571)
* [Update tests for app router](3b9b193)
* [Removes i18next and next-i18n packages](9d2e08a)
* [Update storybook settings for app router](39f115d)
Commit 753b67e
Commit cc86313
[Issue #50]: change SortBy to USWDS component (#52)
Fixes #50 - switch to using USWDS `Select` component
Commit 3d0dec8
[Issue 56]: Date rounding bug (#57)
Fixes #56
- Update the date creation to use the parsed year/month/day so it creates the `Date` object using local time - this may prevent any type of rounding
- Add some tests to ensure the right dates show up on search
Commit 4dc6f77
[Issue HHS#2072] Locally, preserve the auth token in the OpenAPI across refreshes (#67)

Fixes HHS#2072

Set the `persistAuthorization` OpenAPI config to True locally.

For local development, we frequently need to go to http://localhost:8080/docs, enter the auth token, and then repeat this process every time we reopen or refresh the page. Having to copy-paste or retype the auth token is tedious. This flag makes it so the token gets preserved in your browser's local storage. We are only enabling this for the local endpoint at the moment, as there are possibly security implications we would need to consider non-locally (e.g. what if someone is using a public computer).
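A rough sketch of what this toggle can look like, assuming APIFlask's `SWAGGER_UI_CONFIG` passthrough to Swagger UI (the `ENVIRONMENT` variable check is illustrative, not the repo's actual config mechanism):

```python
import os

from apiflask import APIFlask

app = APIFlask(__name__)

# Swagger UI's persistAuthorization option keeps the entered auth token in
# the browser's local storage across refreshes. Gate it so it only applies
# to local development.
if os.environ.get("ENVIRONMENT", "local") == "local":
    app.config["SWAGGER_UI_CONFIG"] = {"persistAuthorization": True}
```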
Commit 3836c4c
Commit 2be7f7a
[Issue HHS#2071] update to Next 14.2.3 (#65)
Fixes HHS#2071 - update to next 14.2.3 (https://nextjs.org/blog/next-14-2)
Commit f9f392e
[Issue HHS#2066] Remove the BASE_RESPONSE_SCHEMA (#70)
Fixes HHS#2066

Remove the `BASE_RESPONSE_SCHEMA` configuration. Adjust the healthcheck endpoint to be consistent with the other endpoints in how it defines its schema / responses.

APIFlask allows you to set a `BASE_RESPONSE_SCHEMA` - the idea is that it's the shared schema that every API endpoint returns, with endpoints differing only in the `data` object within. This sounds great on paper as it prevents you from needing to define most of the response schema for many endpoints, but in practice it's clunky to use. If we want to modify anything in the response schema outside of the `data` object, it affects every single endpoint. This means that when we add something like pagination info to our search endpoint, a pagination object appears on the healthcheck response. As we intend to make these docs something the public will use, this is going to be confusing.

There is also a "bug" (unsure if it is one, as it was an intended change a few months ago in APIFlask/apispec) where the error response objects end up nested within themselves in the examples. For example, currently the error response for the healthcheck endpoint (which can literally only return a 5xx error) has an example response of:

```json
{
  "data": {
    "data": "string",
    "errors": [{ "field": "string", "message": "string", "type": "string" }],
    "message": "string",
    "status_code": 0
  },
  "message": "string",
  "pagination_info": {
    "order_by": "id",
    "page_offset": 1,
    "page_size": 25,
    "sort_direction": "ascending",
    "total_pages": 2,
    "total_records": 42
  },
  "status_code": 0,
  "warnings": [{ "field": "string", "message": "string", "type": "string" }]
}
```

When in reality, the error response it actually returns looks like:

```json
{
  "data": {},
  "errors": [],
  "message": "Service Unavailable",
  "status_code": 503
}
```

This set of changes works around all of these confusing issues and just requires us to define specific response schemas for each endpoint with some small set of details. I've kept some base schema classes to derive from that we can use in most cases.

Before & after examples in OpenAPI:

<table>
<tr><th>Endpoint</th><th>Before</th><th>After</th><th>Actual response (before change)</th></tr>
<tr><td>GET /health (200)</td>
<td>

```json
{"data": {"message": "string"}, "message": "string", "pagination_info": {"order_by": "id", "page_offset": 1, "page_size": 25, "sort_direction": "ascending", "total_pages": 2, "total_records": 42}, "status_code": 0, "warnings": [{"field": "string", "message": "string", "type": "string"}]}
```

</td>
<td>

```json
{"data": null, "message": "Success", "status_code": 200}
```

</td>
<td>

```json
{"data": {}, "message": "Service healthy", "pagination_info": null, "status_code": 200, "warnings": []}
```

</td>
</tr>
<tr><td>GET /health (503)</td>
<td>

```json
{"data": {"data": "string", "errors": [{"field": "string", "message": "string", "type": "string"}], "message": "string", "status_code": 0}, "message": "string", "pagination_info": {"order_by": "id", "page_offset": 1, "page_size": 25, "sort_direction": "ascending", "total_pages": 2, "total_records": 42}, "status_code": 0, "warnings": [{"field": "string", "message": "string", "type": "string"}]}
```

</td>
<td>

```json
{"data": {}, "errors": [], "message": "Error", "status_code": 0}
```

</td>
<td>

```json
{"data": {}, "message": "Service unavailable", "pagination_info": null, "status_code": 200, "warnings": []}
```

</td>
</tr>
<tr><td>POST /v0.1/opportunities/search (200)</td>
<td>

```json
{"data": [ {.. excluding for brevity } ], "message": "string", "pagination_info": {"order_by": "id", "page_offset": 1, "page_size": 25, "sort_direction": "ascending", "total_pages": 2, "total_records": 42}, "status_code": 0, "warnings": [{"field": "string", "message": "string", "type": "string"}]}
```

</td>
<td>

```json
{"data": [ {.. excluding for brevity } ], "message": "Success", "pagination_info": {"order_by": "id", "page_offset": 1, "page_size": 25, "sort_direction": "ascending", "total_pages": 2, "total_records": 42}, "status_code": 200}
```

</td>
<td>

```json
{"data": [ {}, {}, {} ], "message": "Success", "pagination_info": {"order_by": "opportunity_id", "page_offset": 1, "page_size": 3, "sort_direction": "ascending", "total_pages": 1010, "total_records": 3030}, "status_code": 200, "warnings": []}
```

</td>
</tr>
<tr><td>POST /v0.1/opportunities/search (401 or 422)</td>
<td>

```json
{"data": {"data": "string", "errors": [{"field": "string", "message": "string", "type": "string"}], "message": "string", "status_code": 0}, "message": "string", "pagination_info": {"order_by": "id", "page_offset": 1, "page_size": 25, "sort_direction": "ascending", "total_pages": 2, "total_records": 42}, "status_code": 0, "warnings": [{"field": "string", "message": "string", "type": "string"}]}
```

</td>
<td>

```json
{"data": {}, "errors": [], "message": "Error", "status_code": 0}
```

</td>
<td>

```json
{"data": {}, "errors": [], "message": "The server could not verify that you are authorized to access the URL requested", "status_code": 401}
```

</td>
</tr>
</table>

---------

Co-authored-by: nava-platform-bot <[email protected]>
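A small sketch of the "base schema classes to derive from" approach described above (class and field names here are illustrative, not the repo's actual schemas):

```python
from apiflask import Schema
from apiflask.fields import Field, Integer, String

class BaseResponseSchema(Schema):
    # The small shared envelope that each endpoint now declares explicitly,
    # instead of having APIFlask wrap every response automatically.
    message = String(metadata={"description": "The message to return"})
    status_code = Integer(metadata={"description": "The HTTP status code"})

class HealthcheckResponseSchema(BaseResponseSchema):
    # The healthcheck has no meaningful payload, so data is a bare
    # nullable field rather than a nested schema.
    data = Field(allow_none=True)
```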
Commit 3b97574
[Task]: Finish adding Postgres Integration to Analytics Library (#72)
Fixes #45
* update `config.py` database url
* add function in `cli.py`
* updated packages in `poetry.lock`

Row created manually in the database alongside a row created via `test_connection`:
![Screen Shot 2024-06-11 at 1 49 53 PM](https://github.com/navapbc/simpler-grants-gov/assets/37313082/b83afad8-5fe1-404f-adf3-c94945740bbe)
Commit d1ee9f1
[Issue HHS#2077]: Setup pa11y-ci (#41)
Fixes HHS#2077
- Add `pa11y-ci` to run PR checks
- Tests for each of the pages we have so far
Commit c39f3f7
[Issue HHS#2063] Adjust docker commands based on recent updates (#89)
Fixes HHS#2063

Remove the version field from the docker-compose files. Adjust the docker commands to use `docker compose` instead of `docker-compose`.

A recent version of docker removed the need for specifying a version in the compose files - the field is now obsolete and just gives a warning whenever you run a command: https://docs.docker.com/compose/compose-file/04-version-and-name/

The change in the command we run is actually from 2021 and makes sure we use docker compose v2. Depending on how you've set up your local instance of docker, `docker-compose` may have been aliased to `docker compose` anyway (I actually think certain things break if it wasn't). This just makes certain we use the proper format.

I went through running several docker commands and noticed no difference; as this change shouldn't meaningfully change anything, that is to be expected.
Commit 977d103
[Issue HHS#2070]: Dynamic sitemap for pa11y-ci (#83)
Fixes HHS#2070
- Generate dynamic sitemap with `/app/sitemap.ts` (next convention)
- Split pa11y config into `pa11y-desktop` and `pa11y-mobile`
Commit 5c54962
[Issue HHS#2068]: Opportunity listing page (first pass) (#97)
Fixes HHS#2068
- Add new id-based opportunity page
- Add new `OpportunityListingAPI` class extended from `BaseAPI`
- Make `searchInputs`/`QueryParamData` in `BaseAPI` and `errors.ts` optional params (only used for search page)
- Update sitemap to replace [id] in url with 1
- Add test coverage
Commit e8f3df7
[Issue HHS#2147]: Pa11y API setup (#99)
Fixes #74

> Added API setup to pa11y
> Updated the pa11y runtime to enable the feature flag for search
> The FF requirement was discovered with Ryan while exploring the feature. Enabling it lets the search page be tested properly.

Two new errors found now that API results are loading:

> • svg elements with an img role have an alternative text (https://dequeuniversity.com/rules/axe/4.2/svg-img-alt?application=axeAPI) (html) <html lang="en"><head><meta charset="utf-8"><me...</html>
> • svg elements with an img role have an alternative text (https://dequeuniversity.com/rules/axe/4.2/svg-img-alt?application=axeAPI) (html) <html lang="en"><head><meta charset="utf-8"><me...</html>

![desktop-main-view](https://github.com/navapbc/simpler-grants-gov/assets/29316916/f13859bd-87a7-466d-a20b-e86a9dbe71e5)
Commit f2e7595
[Issue HHS#2091] Setup utils for creating requests and parsing responses from search (#54)

Fixes HHS#2091

Created utilities for creating requests to opensearch, and for parsing the responses into more manageable objects. Added some logic for configuring how we create indexes to use a different tokenizer + stemmer, as the defaults aren't great.

The search queries we need to create aren't that complex, but they're pretty large and very nested objects. To help with this, I've built a few generic utilities that create the requests by using a builder to pass in each of the components of the search request. This way, when the API gets built out next, the search logic really is just taking our requests and passing the details to the factories, which is pretty trivial.

Responses are a bit less complex, they're just very nested, and adding a simple wrapper around them helps any usage of the search client do a bit less indexing into dictionaries (e.g. to access the response objects I was doing `values = [record["_source"] for record in response["hits"]["hits"]]`, which is fun). That logic just safely handles parsing the responses in a very generic manner.

Note that in both cases there are many cases we don't handle yet (a ton of other request params, for example); we can add those as needed. Just focused on the ones we need right now.

---

One unrelated change made it in here: adjusting how the analysis is done on an index. In short, the default tokenization/stemming of words was clunky for our use case. For example, the default stemmer treats `king` and `kings` as separate words. I adjusted the stemmer to use the [snowball stemmer](https://snowballstem.org/), which seems to work a bit better, although we should definitely investigate this further. I also changed the tokenization to split on whitespace, as before it would separate on dashes, and a lot of terms in our system have dashes (opportunity number and agency pretty prominently).

Since this logic is potentially going to be shared across many components (if we ever build out more search indexes), I tried to document it pretty thoroughly with links to a lot of documentation.
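A sketch of the builder idea, assuming the class and method names below (they are illustrative, not the repo's actual utilities); it produces the same request shape as the dev-tools example in the indexing commit above:

```python
class SearchQueryBuilder:
    """Builds the nested OpenSearch request body piece by piece."""

    def __init__(self) -> None:
        self.must: list[dict] = []
        self.filters: list[dict] = []
        self.page_size = 25
        self.page_offset = 0

    def simple_query(self, query: str, fields: list[str]) -> "SearchQueryBuilder":
        # Full-text portion of the request, with per-field boosts baked
        # into the field names (e.g. "opportunity_number^12").
        self.must.append(
            {"simple_query_string": {"query": query, "default_operator": "AND", "fields": fields}}
        )
        return self

    def filter_terms(self, field: str, values: list[str]) -> "SearchQueryBuilder":
        # Exact-match filter, e.g. on opportunity_status.
        self.filters.append({"terms": {field: values}})
        return self

    def build(self) -> dict:
        return {
            "size": self.page_size,
            "from": self.page_offset,
            "query": {"bool": {"must": self.must, "filter": self.filters}},
        }

# Usage: mirrors the example query from the earlier indexing commit.
request = (
    SearchQueryBuilder()
    .simple_query("research", ["opportunity_title^2", "opportunity_number^12"])
    .filter_terms("opportunity_status", ["forecasted", "posted"])
    .build()
)
```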
Commit 50d74f6
[Issue HHS#2079] Add GET /opportunity/:opportunityId/versions (#82)
Fixes HHS#2079

Adds an endpoint to fetch opportunity versions
* Only includes some of the filters that we'll need to include

Adds a lot of utilities for setting up opportunities for local development and testing with versions.

https://docs.google.com/document/d/18oWmjQJKunMKy6cfnfUnyGEX33uu5UDPnRktD_wRFlE/edit#heading=h.4xmkylqq7mnx provides a lot of context for how opportunity versioning works in the existing system - which is to say, it's very, very complex. I'm sure we'll alter that behavior as we go forward; for now I kept the endpoint simple in terms of its filters, just removing obvious cases (e.g. the summary record is marked as deleted). I'm also not certain what we want to do with naming. I really don't like my current approach of "forecast" and "non-forecast", but we can address that later as well.

--

Beyond understanding what versioning logic we needed to support, the most complex component by far was setting up the data in the first place in an easy manner. I originally tried some ideas using the factory classes themselves, but that wasn't possible due to the order of operations required (in short, to create prior history records, I first need the current record, but that doesn't exist until after everything else in a factory runs). So I went with a builder process that wraps the factories and sets up some reasonable scenarios for you. It's clunky, but seems to work well enough.

---------

Co-authored-by: nava-platform-bot <[email protected]>
Commit de13658
Update packages in poetry.lock

Working around a vulnerability https://github.com/navapbc/simpler-grants-gov/actions/runs/9664332757/job/26659684726, which is due to a fairly old version of a package in our lock file. Skimmed through the package updates; they all seem to be very minor version bumps, as most other packages were fairly up-to-date.

---------

Co-authored-by: nava-platform-bot <[email protected]>
Commit 8127b83
Add make sprint-data-import and issue-data-import to import github sprint and issue data to database (#84)

Fixes #46
* added `sprint-db-data-import` to Makefile
* added `export_json_to_database`

> One strategy would be to keep the make sprint-data-export and issue-data-export and create make sprint-db-data-import and issue-data-db-import so that the data is exported to JSON and then imported into the database.
>
> A single make command could then be created to run the export and then import the files.

Sample data in database:
<img width="1133" alt="Screen Shot 2024-06-26 at 3 38 47 PM" src="https://github.com/navapbc/simpler-grants-gov/assets/37313082/34c962d6-a78e-4963-be15-ef0f7de3bccf">
Commit 4978eb6
[Issue HHS#2084] Connect the API to use the search index (#63)
Fixes HHS#2084

Make the v1 search opportunity endpoint connect to the search index and return results. Adjust the structure of the response to be more flexible going forward.

The actual building of the search request / parsing of the response is pretty simple. Other than having to map some field names, that logic is mostly contained in the builder I made in the prior PR. However, a lot of configuration and other API components had to be modified as part of this, including:
* Adjusting the API response schema (to better support facet counts)
* Piping through the search client + index alias name configuration
* A monumental amount of test cases to verify everything is connected / behaves the way we expect - note that I did not test relevancy, as that'll break anytime we adjust something

Note that the change in API schema means the API does not work with the frontend, but there are a few hacky changes you can make to connect them. In [BaseApi.ts](https://github.com/navapbc/simpler-grants-gov/blob/main/frontend/src/app/api/BaseApi.ts#L47) change the version to `v1`. In [SearchOpportunityAPI.ts](https://github.com/navapbc/simpler-grants-gov/blob/main/frontend/src/app/api/SearchOpportunityAPI.ts#L56) add `response.data = response.data.opportunities;` to the end of the `searchOpportunities` method. With that, the local frontend will work.

To actually get everything running locally, you can run:

```sh
make db-recreate
make init
make db-seed-local args="--iterations 10"
poetry run flask load-search-data load-opportunity-data
make run-logs
npm run dev
```

Then go to http://localhost:3000/search

---------

Co-authored-by: nava-platform-bot <[email protected]>
Commit 3319388
[Task]: Document the new steps for the analytics local database import (#113)

Fixes #100
* Added documentation about the local database import

> The current analytics documentation is focused on the slack integration. This task is to add the work from #84 to the documentation and include the local Metabase steps.
Commit c866c88
Add Logging to Sprint import (#130)
Fixes #110
* added an info-level logging statement to the `db_import` command

_Terminal_
<img width="1446" alt="Screen Shot 2024-07-01 at 4 35 44 PM" src="https://github.com/navapbc/simpler-grants-gov/assets/37313082/4f1ddf6c-5ae5-45d6-8bc9-229ff47e6b4b">

_Database_
<img width="1148" alt="Screen Shot 2024-07-01 at 4 35 56 PM" src="https://github.com/navapbc/simpler-grants-gov/assets/37313082/99ac5d11-6073-42b2-bffe-32dff78e75f4">
Commit 4a12e8d
Commit c3dcdec
Adjust transformation deletes to handle cascading deletes (#103)
Note this is a duplicate of HHS#2000 - just want to pull it into this repo first.

Updates the transformation code to handle the case where a parent record (i.e. opportunity or opportunity_summary) is deleted AND the child records (everything else) are marked to be deleted as well. Also added a new way to set metrics that handles adding more specific prefixed ones (e.g. `total_records_processed` and `opportunity.total_records_processed`) - will expand more on this later.

Imagine a scenario where an opportunity with a summary (synopsis) and a few applicant types gets deleted. The update process for loading from Oracle will mark all of our staging table records for those as `is_deleted=True`. When we go to process, we'll first process the opportunity and delete it uneventfully; however, we have cascade-deletes set up. This means that all of the children (the opportunity summary and assistance listing tables, among many others) also need to be deleted, and SQLAlchemy handles this for us. However, this means that when we then start processing the synopsis record that was marked as deleted, we would error and say "I can't delete something that doesn't exist". To work around this, we're okay with these orphan deletes, and we just assume we already took care of it.

To further test this, I loaded a subset of the prod data locally (~2500 opportunities, 35k records total). I then marked all of the data as `is_deleted=True, transformed_at=null` and ran it again. It went through the opportunities, deleting them. When it got to the other tables, it didn't have to do very much, as they all hit the new case. The metrics produced look like:

```
total_records_processed=37002
total_records_deleted=2453
total_delete_orphans_skipped=34549
total_error_count=0
opportunity.total_records_processed=2453
opportunity.total_records_deleted=2453
assistance_listing.total_records_processed=3814
assistance_listing.total_delete_orphans_skipped=3814
opportunity_summary.total_records_processed=3827
opportunity_summary.total_delete_orphans_skipped=3827
applicant_type.total_records_processed=17547
applicant_type.total_delete_orphans_skipped=17547
funding_category.total_records_processed=4947
funding_category.total_delete_orphans_skipped=4947
funding_instrument.total_records_processed=4414
funding_instrument.total_delete_orphans_skipped=4414
```

And as a sanity check, running again processes nothing.

---------

Co-authored-by: nava-platform-bot <[email protected]>
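A condensed sketch of the orphan-delete guard described above (the function signature and metric keys are illustrative; the metric names mirror the output shown):

```python
from datetime import datetime, timezone

def handle_delete(db_session, staging_record, target_record, metrics) -> None:
    if target_record is None:
        # Orphan delete: the parent's cascade-delete already removed this
        # child, so count it and move on instead of raising an error.
        metrics["total_delete_orphans_skipped"] += 1
    else:
        db_session.delete(target_record)
        metrics["total_records_deleted"] += 1

    # Either way, the staging row has been handled.
    staging_record.transformed_at = datetime.now(timezone.utc)
```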
Commit 8c16376
[Task] Update analytics db to use local.env and Remove Dynaconf from Analytics (#136)

Fixes #107
Fixes #115
* removed the `*.toml` files related to dynaconf
* removed references to Dynaconf (e.g. in docstrings, gitignore)
* use Pydantic for loading

> After getting feedback for #107, the consensus was to reevaluate the way the database loader works for more uniformity. Changes will need to be made primarily in db.py and cli.py
>
> With PR #84, the env settings for the db are stored in settings.toml. The config settings should be updated to use the existing local.env file
Commit df5d7d6
Update API packages

Mostly to address a vulnerability in the certifi library: https://github.com/navapbc/simpler-grants-gov/actions/runs/9844240120/job/27177446181?pr=127
Commit a816207
[Issue HHS#2046] Setup s3 localstack (#161)
Fixes HHS#2046

Setup S3 localstack for having a local version of S3 to use (for future work). Script / utils for interacting with S3.

Localstack is a tool that creates a mock version of AWS locally. While the ability to mock out certain features varies, S3, being just a file storage system, is pretty simple and fully featured even when mocked. Note that localstack has a paid version as well that adds more features, but all of S3's features are [supported in the free community tier](https://docs.localstack.cloud/references/coverage/coverage_s3/). We've used localstack for s3 and a few other AWS services on other projects.

The script creates the S3 bucket in localstack. You can actually interact with the localstack instance of s3 with the AWS cli like so:

```sh
aws --endpoint-url http://localhost:4566 s3 ls
> 2024-07-12 13:10:24 local-opportunities
```

I created a tmp file in it successfully:

```sh
aws --endpoint-url http://localhost:4566 s3 cp tmp.txt s3://local-opportunities/path/to/tmp.txt
```

I can see the tmp file:

```sh
aws --endpoint-url http://localhost:4566 s3 ls s3://local-opportunities/path/to/
> 2024-07-12 13:23:22         15 tmp.txt
```

And I can download it:

```sh
aws --endpoint-url http://localhost:4566 s3 cp s3://local-opportunities/path/to/tmp.txt local_tmp.txt
```
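The same interactions from Python, as a minimal sketch with boto3 pointed at localstack (bucket and key names match the CLI examples above; localstack accepts dummy credentials):

```python
import boto3

# Point boto3 at localstack instead of real AWS.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",
    aws_access_key_id="test",  # any dummy value works with localstack
    aws_secret_access_key="test",
    region_name="us-east-1",
)

# Upload, list, and download - same operations as the CLI examples.
s3.upload_file("tmp.txt", "local-opportunities", "path/to/tmp.txt")

for obj in s3.list_objects_v2(Bucket="local-opportunities", Prefix="path/to/")["Contents"]:
    print(obj["Key"], obj["Size"])

s3.download_file("local-opportunities", "path/to/tmp.txt", "local_tmp.txt")
```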
Commit 118481d
[Issue HHS#2064] Download the search response as a CSV file (#87)
Fixes HHS#2064 Modified the search endpoint to be able to return its response as a CSV file My understanding is that the CSV download in the current experience is a frequently used feature - so adding it is worthwhile. An important detail is that all it takes to switch from getting the response as a normal JSON response body is to change the new "format" field in the request. So if the frontend added a new download button, they would just make an identical request adding this format field (they'd likely want to also adjust the page size to return more than 25 items). The actual logic is pretty simple, instead of return the normal JSON body, we instead construct a CSV file object and return that. There is some level of formatting/parsing that we need to do with this, but its pretty minor. Note that it is explicit with which fields it returns that way the CSV won't keep changing on users if we make adjustments to the schemas elsewhere. As for returning the file, it just relies on Flask itself. I'm not as familiar with file operations in an endpoint like this, so if there are scaling concerns (ie. very large output files), let me know. I know there are a few tools in Flask for streaming file responses and other complexities. If we wanted to add support for more file types like a JSON file or XML, we'd just need to add converters for those and the file logic should all work the same. I originally implemented this as JSON but realized it was just the exact response body shoved in a file - if a user wants that they might just create the file themselves from the API response. You can see what the file looks like that this produced either by running the API yourself, or looking at this one I generated. Note that for the list fields, I used `;` to separate the values within a single cell. [opportunity_search_results_20240617-152953.csv](https://github.com/user-attachments/files/15873437/opportunity_search_results_20240617-152953.csv) --------- Co-authored-by: nava-platform-bot <[email protected]>
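A minimal sketch of the approach under those assumptions: an explicit field list, `;`-joined list values, and a plain Flask `Response` (field names and the helper are illustrative, not the repo's actual code):

```python
import csv
import io

from flask import Response

# Explicit, stable column ordering so schema changes elsewhere
# don't silently reshape the CSV.
CSV_FIELDS = ["opportunity_id", "opportunity_number", "opportunity_title"]

def to_csv_response(records: list[dict]) -> Response:
    output = io.StringIO()
    writer = csv.DictWriter(output, fieldnames=CSV_FIELDS, extrasaction="ignore")
    writer.writeheader()
    for record in records:
        # List-valued fields get joined with ";" within a single cell.
        row = {k: ";".join(v) if isinstance(v, list) else v for k, v in record.items()}
        writer.writerow(row)

    return Response(
        output.getvalue(),
        mimetype="text/csv",
        headers={"Content-Disposition": "attachment; filename=opportunity_search_results.csv"},
    )
```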
Commit 159feac
[Issue HHS#2054] Refactor/restructure the transformation code (#112)
Fixes HHS#2054

Restructured the transformation code:
- Split each chunk of the transformation logic into separate "Subtask" classes
- Made a constants file
- A few duplicated pieces of implementation were pulled into functions that the subtasks derive from
- Some additional logging

Created a Subtask class for breaking up a task into multiple steps for organizational reasons. Added configuration to the transformation step for enabling/disabling different parts of the process (not used yet - but it lets us build things out without worrying about breaking non-local environments).

This looks far larger than it actually is; most of the actual changes are very small. I made all of the changes without adjusting the tests (outside of a few small bits of cleanup) and then refactored the tests as well. This does not aim to change the meaningful behavior of the transformation logic, but instead to make it a lot easier to parse. Now, adding new transformations is conceptually simpler, as it means adding another subtask rather than adding to the massive mess of functions it was before.

There are a few small logging / metric-related changes from the Subtask, just so we can have very granular metrics of how long each part of the task takes. I ran locally with a full snapshot of the production data and didn't see anything of note different from prior runs. Still takes ~10 minutes.

---------

Co-authored-by: nava-platform-bot <[email protected]>
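A sketch of what such a Subtask base class can look like (names are illustrative; the metric key format matches the `*_subtask_duration_sec` metrics seen in later commits):

```python
import time

class SubTask:
    """One step of a larger task, tracked with its own duration metric."""

    def __init__(self, task) -> None:
        self.task = task  # the parent task, which holds shared state and metrics

    def run(self) -> None:
        start = time.monotonic()
        self.run_subtask()
        duration = round(time.monotonic() - start, 2)
        # Produces metrics like "TransformAgency_subtask_duration_sec=2.14".
        self.task.metrics[f"{type(self).__name__}_subtask_duration_sec"] = duration

    def run_subtask(self) -> None:
        # Each concrete subtask implements one step of the transformation.
        raise NotImplementedError
```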
Commit b9c60e8
[Task HHS#2056] Add post_date and close_date filters to search endpoint schema (navapbc/simpler-grants-gov#168)

Fixes HHS#2056
- Added `.with_start_date` to the search_schema builder to allow building a date field with a key of "start_date"
- Added `.with_end_date` to the search_schema builder to allow building a date field with a key of "end_date"
- Added post_date and close_date properties to the OpportunitySearchFilterV1Schema class, which use the above to build schema filters for post_date and close_date that can take start_date and/or end_date fields
- Added two unit tests in test_opportunity_route_search that test the data validation of these new filters: one for 200 response cases and one for 422 (invalid) response cases

Note: As noted in the AC of issue #163, this PR does NOT include implementation of the filters. Currently, these filters do nothing, as they haven't been tied to any sort of query. This PR is just to lay the groundwork.

---------

Co-authored-by: nava-platform-bot <[email protected]>
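A rough sketch of the resulting request shape in plain marshmallow (the builder itself isn't shown; class names are illustrative, and the at-least-one-bound rule arrives in a later commit):

```python
from marshmallow import Schema, fields

class DateRangeSchema(Schema):
    # Either bound may be omitted; requiring at least one of the two
    # was added later (see the HHS#2039 commit further down).
    start_date = fields.Date(allow_none=True)
    end_date = fields.Date(allow_none=True)

class OpportunitySearchFilterSchema(Schema):
    post_date = fields.Nested(DateRangeSchema)
    close_date = fields.Nested(DateRangeSchema)
```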
Commit 373b03b
Add suspense boundary around search page results (#101)
Fixes #59

This makes the search page static and adds a suspense boundary for the data being fetched from the server. The data comes from the API and is called from 3 components:
* [`<SearchPaginationFetch />`](https://github.com/navapbc/simpler-grants-gov/pull/101/files#diff-9dbdda5096b97ad049cccea24c5a046581d26c151a6f94fcc32c05cb33ee9dee)
* [`<SearchResultsHeaderFetch />`](https://github.com/navapbc/simpler-grants-gov/pull/101/files#diff-14a084f66c050414cc2bbd0875256511630438971022073301bbfe91c4aa8cd1)
* [`<SearchResultsListFetch />`](https://github.com/navapbc/simpler-grants-gov/pull/101/files#diff-aabe6a7d19434a9b26199430bbcde5d31a0790aebc4cd844b922ac2fa1348dce)

This also simplifies the state model by pushing state changes directly to the browser query params and rerendering the changed items. This makes things a lot simpler: a lot of state management code is removed, and the results list is no longer wrapped in a form passing form refs between components. This is the approach recommended by next: https://nextjs.org/learn/dashboard-app/adding-search-and-pagination

There are several items that needed to be shared among the client components: the query, total results count, and total pages. These are wrapped in a `<QueryProvider />` that updates the state of these items. This was added so that if someone enters a query in the text box and then clicks a filter, their query is not lost; so that the "N Opportunities" text doesn't need to be rerendered when paging or sorting; and so that the pager stays the same length when paging or sorting.

The data is fetched a couple of times in a duplicative fashion; however, this follows [NextJS best practice](https://nextjs.org/docs/app/building-your-application/rendering/composition-patterns#sharing-data-between-components) since the requests are cached.

The pager has been updated to reload only when there is a change in the page length. Because of an issue with the way the pager renders, it is unavailable while data is being updated:
<img width="1229" alt="image" src="https://github.com/navapbc/simpler-grants-gov/assets/512243/a097b0e2-f646-43b5-bc5a-664db02780a2">

This is because the Truss React component [switches between a link and a button as it renders](https://github.com/trussworks/react-uswds/blob/main/src/components/Pagination/Pagination.tsx#L42) and there isn't an option to supply query arguments, so if a user were to click it they would lose the query params.

Overall this puts us on nice footing for the upcoming work, using NextJS best practice.
Commit 4b68cd2
[Issue HHS#2042] Fix search page string translation (#169)
Fixes HHS#2042

Updated most search strings on the Search page to use the correct next-intl translation components. Added the strings to the Messages file. Updated the unit tests as well, because they were not inheriting context due to an improper import path from a non-global context source.

<img width="1583" alt="Screenshot 2024-08-07 at 1 36 18 PM" src="https://github.com/user-attachments/assets/1b613d26-cbd0-4d0e-b831-0380c6f72af6">
Commit 989dbbc
[Issue: #166] Create ecs task to export opportunity data as csv and json (#176)

Fixes #166
- Adds export_opportunity_data task
- Changes the opportunity_to_csv function to opportunities_to_csv, making it more flexible by taking the output as a parameter
- Adds a unit test for the export_opportunity_data task: it runs the task, uploading a csv and json file to mock_s3_bucket, then reads the files back and verifies their contents

---------

Co-authored-by: Michael Chouinard <[email protected]>
Commit ab2cb7f
[Issue HHS#2041] Remove inline styling from pagination wrapper component (#173)

Fixes HHS#2041

> Removed inline styling on the component
> Ended up having to create a custom loading style for pagination, due to no support from USWDS for this. Honestly though, IDK why we don't just hide pagination during loading...

![2024-08-08 13 59 02](https://github.com/user-attachments/assets/04994a3f-a48e-4511-839d-aa686a7f1414)
Commit 03f7c76
[Issue HHS#2040] Hide pagination component if no results are found (#175)

Fixes HHS#2040

> Updated the pagination component to hide if there are no results. Previously it would show 7 pages that you could navigate between, all with no results.

Updated all unit tests for the pagination component. They were broken and commented out previously.

![2024-08-08 15 45 10](https://github.com/user-attachments/assets/27e8dc6a-dddd-4c52-8086-4cd4579d73fe)
Commit 766a7e6
[Issue HHS#2037] Add no console log to eslint rules (#182)

Fixes HHS#2037

> Added a new rule to eslint to prevent leaving behind console log statements
> A normal, expected rule in eslint configurations
Commit 7a0ce80
[Issue HHS#2035] Pin Python version to 3.12 + dependency updates (#185)
Fixes HHS#2035

Pin the Python version and document details on how to upgrade. Upgraded packages as well while I was tinkering with this.

This mirrors changes from the template repo: navapbc/template-application-flask#235

Python releases new minor versions (3.12, 3.13, etc.) every year in October. It usually takes a few weeks for all of our dependencies and tooling to be upgraded to the latest version, causing our builds to break. There isn't much we can do except wait a few weeks and then do the upgrade (assuming no breaking changes in features we use). However, we had some of our dependencies pinned only to the major version (Python 3), so builds have broken the past few years until they started working again once the dependencies got fixed. This is just getting ahead of that and making sure the upgrade to Python 3.13 doesn't cause any problems.

This change is largely documentation, as the Python version used in the dockerfile + pyproject.toml would already have resolved to Python 3.12; this just makes it so it won't auto-upgrade to 3.13 when that releases in October.

The package updates are all very minor - we updated them not too long ago. Mostly just cleaning up a few things like the types-requests issue that is no longer present.
Commit e481a83
[Issue HHS#2033] More filters in search schema (#189)
Fixes HHS#2033

Added a builder for the search schema for integer and boolean fields. Updated the string builder to fix a small bug + allow you to specify a pattern for the string.

This is just adding additional filters that we'll be able to search by (once the rest of the implementation is built out). These are all fields that fall into the "yeah, I think someone might want to narrow down by that" group, and they definitely don't encompass every filter we might want to add. Mostly just wanted to get integer and boolean fields implemented so the search logic could all get built out with actual use cases.

Added examples to the OpenAPI docs to verify these work, at least at the request-parsing phase.

---------

Co-authored-by: nava-platform-bot <[email protected]>
Commit 4fb6d98
Updated API packages

Mostly to upgrade flask-cors, which had a vulnerability in releases before 5.0
Commit 431fec2
[Issue HHS#2032] Cleanup makefile commands around OpenSearch (#192)
Fixes HHS#2032

Adjusted a few makefile commands / added some additional docs. Renamed `db-recreate` to `volume-recreate`, as the command also recreates the OpenSearch volume. Added a command for populating your OpenSearch index locally for opportunities, rather than needing to build the full command yourself.

As we've added more commands to the Makefile, we haven't kept on top of how they all interact. This should add some level of clarity to the commands. The new `make populate-search-opportunities` command is a much easier way of writing out the full command `docker compose run --rm grants-api poetry run flask load-search-data load-opportunity-data`.

Note that we have a `make help` command which prints out any command with a `## help text` on the target definition in the makefile:
<img width="1353" alt="Screenshot 2024-09-06 at 4 38 02 PM" src="https://github.com/user-attachments/assets/d21f25c4-031f-4028-bf6b-d73908215b01">
Commit fef56ff
Update search query builder to support int and date range queries (#195)
Fixes #164

Adds support to our search query builder layer for building queries that filter on ints and dates between specific ranges. Ints and dates were added in the same ticket, as the queries are essentially the same, just ints vs dates. While the builder throws an exception if both the start and end values are None, I think the API request schema should also do that so the error is more relevant/accurate to the API; I can add that later (there are likely a lot more niche edge cases to handle for requests).

Nothing too exciting with these queries; they work as expected and are just simple ranges. The test dataset is roughly accurate (turns out books didn't always have exact release dates until the last ~20 years).

I also tested these queries manually with the opportunity endpoint's fields; the following two queries work (can be tested locally at http://localhost:5601/app/dev_tools#/console):

```
GET opportunity-index-alias/_search
{
  "size": 5,
  "query": {
    "bool": {
      "filter": [
        { "range": { "summary.post_date": { "gte": "2020-01-01", "lte": "2025-01-01" } } }
      ]
    }
  }
}

GET opportunity-index-alias/_search
{
  "size": 12,
  "query": {
    "bool": {
      "filter": [
        { "range": { "summary.award_floor": { "gte": 1234, "lte": 100000 } } }
      ]
    }
  }
}
```
Commit 499e766
Commit 4b7501b
[Issue HHS#2034] Onboarding Documentation Improvements (#193)
Fixes HHS#2034

> Cleaning up docs around getting onboarded to the project/repo
> I walked through the READMEs and linked documentation, trying to get things up and running on my new laptop without having to go to any outside sources of information (including you all), adding anything that was missing to the docs.
Commit 25513a5
[Issue HHS#2050] Setup agency tables for transformation (#127)
Fixes HHS#2050

Added agency tables. Added tgroups tables to the staging & foreign tables. Added factories / data setup for these tables.

https://docs.google.com/document/d/1EPZJyqTQruq-BkQoojtrkLqgVl8GrpfsZAIY9T1dEg8/edit provides a lot of rough details on how this data works in the existing system.

The next ticket will be to transform the data stored in tgroups into the new agency table(s). The agency table I created contains most of the fields from the legacy system (in a normalized structure), including a few that I don't think we quite need - but in case they serve a purpose I don't understand, I'm preferring to keep them.

---------

Co-authored-by: nava-platform-bot <[email protected]>
Commit e4b62ff
[Issue HHS#2038] Incrementally load search data (#180)
Fixes HHS#2038

Updated the load search data task to partially support incrementally loading + deleting records in the search index, rather than fully remaking it each time. Various changes to the search utilities to support this work.

Technically this doesn't fully support a true incremental load, as it updates every record rather than just the ones with changes. I think the logic necessary to detect changes deserves its own ticket, and it may evolve when we later support indexing files into OpenSearch, so it makes sense to hold off on that for now.
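A compact sketch of that incremental pattern (upsert everything, then delete what's gone) with `opensearch-py`; the function name and id handling are illustrative:

```python
from opensearchpy import OpenSearch, helpers

def incremental_load(client: OpenSearch, index: str, current_docs: dict[str, dict]) -> None:
    # Upsert every current record - no change detection yet, per the note above.
    helpers.bulk(
        client,
        ({"_index": index, "_id": doc_id, "_source": doc} for doc_id, doc in current_docs.items()),
    )

    # Delete records that exist in the index but are no longer in the DB.
    existing_ids = {
        hit["_id"]
        for hit in helpers.scan(client, index=index, query={"query": {"match_all": {}}}, _source=False)
    }
    stale_ids = existing_ids - set(current_docs)
    helpers.bulk(client, ({"_op_type": "delete", "_index": index, "_id": i} for i in stale_ids))
```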
Commit 7dcff04
[Issue HHS#2039] Finish connecting new search parameters to backend queries (#197)

Fixes HHS#2039

Adjusted the logic that connects API requests to the builder in the search layer so it now connects all of the new fields. A few minor validation adjustments to the API to prevent a few common mistakes a user could make.

The search tests are getting pretty long; I think a good follow-up would be to split the test file into validation and response testing.

I adjusted some validation/setup of the API schemas because I don't see a scenario where min/max or start/end dates wouldn't be needed together. This also let me add a quick validation rule that a user must provide at least one of the values.

I adjusted the way the search_opportunities file was structured, as we only supported filtering by strings before and it used the names of the variables to determine the type. I made it a bit more explicit; before, random variables could be passed through to the search layer, which seems potentially problematic if not filtered out somewhere.
Commit e6232b7
[Issue HHS#2051] Transform agency data (#157)
Fixes HHS#2051

Add transformations for agency data.

Agency data is structured oddly in the existing system: instead of being in ordinary tables, it's in a `tgroups` table that stores values as key-value pairs. We want to normalize that into something more workable, so the transformation needs to work a bit differently than the transformations of other tables. For simplicity, I load all of the data for every agency (and later filter to just what changed), as this removes a lot of weird edge cases we would otherwise have needed to consider. Only modified rows actually get used, but we know we have the full set of data now.

I have a snapshot of the prod tgroups table; I loaded it into my DB locally and ran the transform script. In total it takes ~2 seconds to run, and it didn't hit any issues. A set of the relevant metrics:

```
total_records_processed=1152
total_records_deleted=0
total_records_inserted=1152
total_records_updated=0
total_error_count=0
agency.total_records_processed=1152
agency.total_records_inserted=1152
TransformAgency_subtask_duration_sec=2.14
task_duration_sec=2.14
```

As a sanity test, I also loaded the tgroups data from dev and ran it through. While it generally worked, 12 agencies failed because they were missing the ldapGp and AgencyContactCity fields. I'm not certain we want to do anything about that, as they all seemed to be test agencies based on the names.

---------

Co-authored-by: nava-platform-bot <[email protected]>
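A toy sketch of the key-value pivot at the heart of this kind of transformation (column and key names are illustrative; real tgroups keys like ldapGp and AgencyContactCity appear in the description above):

```python
from collections import defaultdict

def pivot_tgroups(rows: list[tuple[str, str, str]]) -> dict[str, dict[str, str]]:
    """Collapse (agency_code, key, value) rows into one dict per agency."""
    agencies: dict[str, dict[str, str]] = defaultdict(dict)
    for agency_code, key, value in rows:
        agencies[agency_code][key] = value
    return agencies

# Each resulting dict can then be mapped onto the normalized agency table,
# erroring (as seen with the dev data) when required keys are missing.
rows = [
    ("ABC", "AgencyName", "Agency of Examples"),
    ("ABC", "AgencyContactCity", "Washington"),
]
print(pivot_tgroups(rows))
```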
Commit c67fa68
Commit f6d4bdc
[Issue HHS#2036] Opportunity Page Design Implementation (#196)
Fixes HHS#2036

---------

Co-authored-by: Aaron Couch <[email protected]>
Co-authored-by: Aaron Couch <[email protected]>
Commit f51418d
Commit 3ab8191
Commit 92f266f