Archive social care extracts #927
Conversation
@check-spelling-bot Report

🔴 Please review
See the 📂 files view, the 📜 action log, or 📝 job summary for details.

Unrecognized words (1): hri

To accept these unrecognized words as correct, you could run the following commands in a clone of the [email protected]:Public-Health-Scotland/source-linkage-files.git repository:

curl -s -S -L 'https://raw.githubusercontent.com/check-spelling/check-spelling/main/apply.pl' |
perl - 'https://github.com/Public-Health-Scotland/source-linkage-files/actions/runs/8357011382/attempts/1'

OR, to have the bot accept them for you, reply quoting the following line: …

Available 📚 dictionaries could cover words (expected and unrecognized) not in the 📘 dictionary. This includes both expected items (235) from .github/actions/spelling/expect.txt and unrecognized words (1). Consider adding them (in the workflow's with:):

extra_dictionaries:
  cspell:swift/src/swift.txt
  cspell:k8s/dict/k8s.txt
  cspell:csharp/csharp.txt
  cspell:java/src/java-terms.txt
  cspell:typescript/dict/typescript.txt

To stop checking additional dictionaries, add (in the workflow's with:):

check_extra_dictionaries: ''

Errors (2)
See the 📂 files view, the 📜 action log, or 📝 job summary for details.
See ❌ Event descriptions for more information. If the flagged items are 🤯 false positives… If items relate to a …
* Remove redundant code
* Update documentation
* Style code
* Reorder when we match on client variables. This was causing NSUs to show a social care id. This now resolves this.
* Update documentation
* Style code
* Revert "Update logic to use end of Quarter". This reverts commit 004e831.
* Style code
* Update documentation
* add check comment (TO DO for this PR)
* Remove `check_quarter_format` function
* Remove `check_quarter_format`
* Add chi parameter to `create_demog_test_flags`
* Style code
* Use CHI parameter for ep/indiv tests
* Use CHI parameter for extract tests (chi)
* Change test sheet names to lowercase
* Change date to lowercase
* Update documentation
* Update documentation
* Update documentation
* Style code
* Fix pick variables. This was not taking the correct variables, leading to NSUs being assigned psychiatry
* SC Demographics and SDS (#900)
  * Style code
  * read in sc demographics different variables - removed extract date as not accurate, using chi over upi after discussion with social care data management. Added in date of death just for fun.
  * social care demographics first draft. Removed a lot of the submitted variables and instead using chi variables from chi seeding. Other changes:
    - Fill in missing values
    - create flag for latest social care id (one from database is not accurate); this makes sure that each chi only has ONE sc id as the latest to stop it creating duplicates
    - change postcode to choose chi over submitted
  * Style code
  * had a github error? Not sure what happened but commiting first draft of sc demographics
  * Style code
  * first draft sds. No major changes - only how demographics is matched on and how latest social care id is selected
  * Update documentation
  * demographics - add sending location to group by
  * Style code
  * Update documentation
  * Added ungroup()
  * Remove comments
  * Remove comments
  * Style code
  ---------
  Co-authored-by: SwiftySalmon <[email protected]>
  Co-authored-by: marjom02 <[email protected]>
  Co-authored-by: Jennit07 <[email protected]>
  Co-authored-by: Jennit07 <[email protected]>
  Co-authored-by: Zihao Li <[email protected]>
* Sc all at speedup (#904)
  * speed up process_sc_all_alarms_telecare function with data.table package
  * Update documentation
  ---------
  Co-authored-by: lizihao-anu <[email protected]>
  Co-authored-by: Megan McNicol <[email protected]>
  Co-authored-by: Jennit07 <[email protected]>
* Add case_when statement for `high_cc` cohort
* Bug - `high_cc` in demographic cohort showing `NAs` instead of `TRUE/FALSE` (#911): add case_when statement for `high_cc` cohort (see the sketch after this list)
* added a casewhen to update property type description for homelessness
* Update documentation
* Style code
* Bug - deal with missing variables (#914)
  * Add missing sc variables for no sc data
  * Fix code for including `_inc_dna` variables
  * Remove commented line
* Bug - Fix get pop path failing and preventing the indiv file from running. (#913): fix bug - pop file paths breaking indiv file
* correct file hscp file path
* Update process_sc_all_home_care.R. A small issue was identified when running targets. Linked with changes to the function `fix_sc_end_dates()`
* Update process_sc_all_alarms_telecare.R
* remove duplicate columns
* Fix targets (#892)
  * fix sc_client_lookup sc_send_lca
  * fix an issue of get_pop_path
  * Style code
  * fix the rest of get_pop_path from get_datazone_pop_path
  * Update documentation
  * fix sc_send_lca
  * add missing year column
  * explicitly specify the argument year to avoid corruption of targets
  * Update documentation
  * new data pipeline with targets: remove create_individual_files from targets and append it to run_targets script
  * minor changes
  * Style code
  * undo sc_send_lca bit
  * Update targets scripts
  * Remove top level targets scripts
  ---------
  Co-authored-by: lizihao-anu <[email protected]>
  Co-authored-by: Megan McNicol <[email protected]>
  Co-authored-by: Jennit07 <[email protected]>
  Co-authored-by: Jennifer Thom <[email protected]>
* remove cases that start date is later than end date
* Update Refs for March24 SLF update
* 758 investigate extracts to identify areas of code which can be cut down for processing times (#899)
  * re-writing process_sc_all sds and alarm_telecare with data.table to improve the speed
  * Update documentation
  * Style code
  * changes in line with new process_sc_all_sds dplyr version
  * Style code
  * remove duplicate columns
  * remove duplicated columns
  ---------
  Co-authored-by: lizihao-anu <[email protected]>
  Co-authored-by: Megan McNicol <[email protected]>
* Update homelessness completeness path
* Update check_year_valid function
* 920 issues with file permissions need constant monitoring (#921)
  * set a correct file permission
  * update descriptions in process_tests function
  * Update documentation
  ---------
  Co-authored-by: lizihao-anu <[email protected]>
* change joining with sc_demog_lookup to right_join and move person_id down
* Archive social care extracts (#927)
  * Set up `get_sandpit_extract_path`
  * Update documentation
  * Update sc `all` data paths
  * Write sandpit extract if file does not exist
  * Style code
  ---------
  Co-authored-by: Jennit07 <[email protected]>
* Update excel sg completeness tabs

---------
Co-authored-by: Jennit07 <[email protected]>
Co-authored-by: Megan McNicol <[email protected]>
Co-authored-by: SwiftySalmon <[email protected]>
Co-authored-by: marjom02 <[email protected]>
Co-authored-by: Zihao Li <[email protected]>
Co-authored-by: lizihao-anu <[email protected]>
Co-authored-by: rchlv <[email protected]>
Co-authored-by: Zihao Li <[email protected]>
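One of the fixes listed above, the `high_cc` cohort showing `NA` instead of `TRUE`/`FALSE`, is a common `case_when()` pitfall: when no branch matches (for example, when the underlying cost is missing), `case_when()` returns `NA`. The sketch below is illustrative only; the actual cohort definition is not shown in this PR, so the column names and threshold are assumptions.

```r
library(dplyr)

# Hypothetical data: column names and values are made up for illustration.
demog_cohort <- tibble(
  chi = c("0101011234", "0202029876", "0303035555"),
  home_care_cost = c(12000, NA, 300)
)

demog_cohort <- demog_cohort %>%
  mutate(
    high_cc = case_when(
      is.na(home_care_cost) ~ FALSE,   # explicit branch stops NA leaking through
      home_care_cost >= 10000 ~ TRUE,  # assumed threshold, purely illustrative
      .default = FALSE
    )
  )
```

Handling the missing-value case explicitly (or supplying `.default`) guarantees every row gets a logical value rather than `NA`.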
closes #923
I have moved the current processed social care files into folders in `/conf/hscdiip/SLF_Extracts/Social_care` to keep things organised. I've also created a new folder called `Sandpit_Extracts`.

The idea behind this is to save a copy of the raw social care extract for each dataset before we do any processing. This will allow us to compare each quarter's extract with the previous quarter's, which will help highlight any potential issues so we can feed the changes back to the social care team and data management colleagues.

I have set this up so that each extract is written to disk only once, preventing us from overwriting the file every time we run the social care extracts. This means we keep an accurate copy of the data from the snapshot at that point in time. By contrast, the processed datasets are overwritten between updates whenever we test code, so they always reflect the most up-to-date snapshot; the sandpit extract saves one copy of the data that accurately reflects what was used in the quarterly update.
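To make the write-once behaviour concrete, here is a minimal R sketch. `get_sandpit_extract_path()` is set up in this PR, but the arguments shown for it, the wrapper name `archive_sandpit_extract()`, and the use of `readr::write_rds()` are assumptions for illustration rather than the actual implementation.

```r
library(fs)
library(readr)

# Save a copy of the raw extract to the Sandpit_Extracts folder, but only if
# no archived copy exists yet, so re-running the processing never overwrites
# the snapshot taken for the quarterly update.
archive_sandpit_extract <- function(data, type, year) {
  # Hypothetical arguments; the real path helper may take different inputs.
  sandpit_path <- get_sandpit_extract_path(type = type, year = year)

  if (!fs::file_exists(sandpit_path)) {
    readr::write_rds(data, sandpit_path, compress = "gz")
  }

  invisible(data)
}
```

Because the write is guarded by the existence check, testing and re-running the social care extract code between updates leaves the archived snapshot untouched.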