# Results
## Collaborative GIS Environment
A summary of the background information layers loaded to the `background_layers.gpkg` geopackage of the `restoration_wedzin_kwa` project at the time of writing is presented in Table \@ref(tab:tab-rfp-tracking).
```{r rfp-tracking-copy, eval = gis_update}
# copy the tracking table from the QGIS project into this repo - note that the
# QGIS project lives outside of the repo!!
q_path_stub <- "~/Projects/gis/restoration_wedzin_kwa/"
rfp_tracking_raw <- sf::st_read(paste0(q_path_stub, "background_layers.gpkg"),
                                layer = "rfp_tracking",
                                quiet = TRUE)
# grab the metadata
md <- rfp::rfp_meta_bcd_xref()
# keep the most recent record for each layer then join on the metadata to add
# a url and description
rfp_tracking_prep <- dplyr::left_join(
  rfp_tracking_raw %>%
    dplyr::arrange(desc(timestamp)) %>%
    dplyr::distinct(content, .keep_all = FALSE),
  md %>%
    dplyr::select(content = object_name, url = url_browser, description),
  by = "content"
) %>%
  dplyr::arrange(content)
rfp_tracking_prep %>%
readr::write_csv("data/rfp_tracking_prep.csv")
```
```{r tab-rfp-tracking}
rfp_tracking_prep <- readr::read_csv(
"data/rfp_tracking_prep.csv"
)
rfp_tracking_prep %>%
fpr::fpr_kable(caption_text = "Layers loaded to collaborative GIS project.",
footnote_text = "Metadata information for bcfishpass and bcfishobs layers can be provided here in the future but currently can usually be sourced from https://smnorris.github.io/bcfishpass/06_data_dictionary.html .")
```
## Historic Information Regarding Impacts and Restoration Initiatives
Information amalgamated from past restoration initiatives has been added to the `restoration_wedzin_kwa` project in spatial format through the `sites_restoration.gpkg` geopackage. Source document locations, geometry types, and descriptions of the documents amalgamated are recorded in the `sites_restoration_gpkg_tracking.csv` file located [here](https://github.com/NewGraphEnvironment/restoration_wedzin_kwa_2024/blob/main/data/sites_restoration_gpkg_tracking.csv), with results presented in Table \@ref(tab:tab-sites-restoration-gpkg-tracking).
```{r tab-sites-restoration-gpkg-tracking}
readr::read_csv(
"data/sites_restoration_gpkg_tracking.csv"
) %>%
fpr::fpr_kable(caption_text = "Summary of past restoration initiatives added to the `sites_restoration.gpkg` geopackage of the `restoration_wedzin_kwa` project. Work in progress.")
```
<br>
### @price2014UpperBulkleya - Upper Bulkley floodplain habitat: modifications, physical barriers, and sites of potential importance to salmonids
@price2014UpperBulkleya documented human-induced modifications to floodplain habitat and river channelization, investigated potential barriers to fish migration, and assessed areas of upwelling groundwater in the Upper Bulkley mainstem with potential importance to salmonids. The coordinates of points of interest were extracted from a pdf version of the report and are included in the `price_2014_waypoints.csv` file located [here](https://github.com/NewGraphEnvironment/restoration_wedzin_kwa_2024/blob/main/data/inputs_extracted/price_2014_waypoints.csv). These waypoints were also added to the `sites_restoration.gpkg` (layer name = `price_2014`), and individual waypoint information is presented in Table \@ref(tab:tab-price-2014-waypoints).
```{r extract-price2014, eval = F}
path <- "/Users/airvine/zotero/storage/7JCL42Y3/price_2014_upper_bulkley_floodplain_habitat_-_modifications,_physical_barriers,_and_sites.pdf"
# name our csv we are creating
path_file <- "data/inputs_extracted/price_2014_waypoints.csv"
# define page to extract table
page <- 32
# #if you wanted to define the area to extract would run with this the first time
# tabulapdf::locate_areas(path, pages = page)
# to get....
# tab_area = list(c(151.17224, 87.13111, 715.14139,604.27249 ))
# but....looks like it will guess correct with no help so let's do that
tabulapdf::extract_tables(path,
                          pages = page
                          # method = "lattice",
                          # output = c("tibble"),
                          # guess = FALSE,
                          # area = tab_area
                          ) |>
purrr::pluck(1) |>
# need to make the first row be the names due to the multi-line header
janitor::row_to_names(1) |>
# now let's burn it to a csv so we can correct the types
readr::write_csv(path_file)
# read it back in to get the types right
tab1 <- readr::read_csv(path_file)
# now let's define the rest of the pages to extract from
page <- 33:36
# build a function to pull out the remaining pages all at the same time
extract_tables_multi <- function(page){
  tabulapdf::extract_tables(path,
                            pages = page) |>
    purrr::pluck(1) |>
    # use the names from the header table
    purrr::set_names(nm = names(tab1))
}
# run our function
tabs_extra <- page |>
purrr::map_df(
extract_tables_multi
)
# join our original header-page table to the multi-page output but fix a type
tab <- dplyr::bind_rows(
tab1 |> dplyr::mutate(number = as.numeric(number)),
tabs_extra
) |>
# clean the names for the report
purrr::set_names(nm = stringr::str_to_title(names(tab1))) |>
# add a theme
dplyr::mutate(Theme = dplyr::case_when(
stringr::str_detect(Notes, stringr::regex("spawning|Spawners", ignore_case = TRUE)) ~ "Spawning",
stringr::str_detect(Notes, stringr::regex("culvert|bridge|crossing", ignore_case = TRUE)) ~ "Stream Crossing",
stringr::str_detect(Notes, stringr::regex("richfield", ignore_case = TRUE)) ~ "Other",
stringr::str_detect(Notes, stringr::regex("floodplain", ignore_case = TRUE)) ~ "Floodplain",
stringr::str_detect(Notes, stringr::regex("rail", ignore_case = TRUE)) ~ "Railway",
stringr::str_detect(Notes, stringr::regex("field|cattle", ignore_case = TRUE)) ~ "Agriculture",
stringr::str_detect(Notes, stringr::regex("beaver", ignore_case = TRUE)) ~ "Beaver",
stringr::str_detect(Notes, stringr::regex("forest|clearcut|cutblock", ignore_case = TRUE)) ~ "Forestry",
stringr::str_detect(Notes, stringr::regex("rip-rap", ignore_case = TRUE)) ~ "Rip-rap",
stringr::str_detect(Notes, stringr::regex("Log jam", ignore_case = TRUE)) ~ "Log jam",
TRUE ~ "Other" # Default to Other if no other conditions are met
)) |>
#there is an issue at row 182... so fix by appending UB-643 to 645 [UB- to the beginning of the photo series column
dplyr::mutate(`Photo Series` = dplyr::case_when(
Number == 182 ~ paste0("UB-643 to 645 [UB-", `Photo Series`),
TRUE ~ `Photo Series`
)) |>
# remove rows without a Number
dplyr::filter(!is.na(Number))
# burn over our csv so we can put as table in report
tab |>
readr::write_csv(path_file, na = '')
# create a spatial file and save to the shared project
tab |>
janitor::clean_names() |>
# convert the time to character to preserve as gpkg won't accept
dplyr::mutate(time = as.character(time)) |>
dplyr::filter(!is.na(easting)) |>
sf::st_as_sf(coords = c("easting", "northing"), crs = 26909) |>
sf::st_transform(3005) |>
# put time after number and put notes as last column
dplyr::select(number, time, theme, everything()) |>
sf::st_write(
dsn = "~/Projects/gis/restoration_wedzin_kwa/sites_restoration.gpkg",
layer = 'price_2014',
delete_layer = TRUE
)
```
```{r tab-price-2014-waypoints}
# read in the price 2014 data
price_2014 <- readr::read_csv("data/inputs_extracted/price_2014_waypoints.csv")
# present as table
price_2014 %>%
fpr::fpr_kable(caption_text = "Summary of Price (2014) waypoints for the Upper Bulkley floodplain including locations of river channelization, potential barriers to fish migration, and assessed areas of upwelling groundwater having potential importance to salmonids.")
```
### @ncfdc1998MidBulkleyDetailed - Mid-Bulkley Detailed Fish Habitat/Riparian/Channel Assessment for Watershed Restoration
@ncfdc1998MidBulkleyDetailed utilized detailed fish and fish habitat assessments to inform restoration planning in the Neexdzii Kwah, with numerous detailed prescriptions included within their reporting and digital deliverables. Summarized details of this prioritization work, along with prescription locations and details extracted from the pdf, are included in the `ncfdc_1998_prescriptions.csv` file located and viewable [here](https://github.com/NewGraphEnvironment/restoration_wedzin_kwa_2024/blob/main/data/ncfdc_1998_prescriptions.csv). A summary of prioritization for sub-basins in the region is included in Table \@ref(tab:tab-ncfdc-1998-priority), with detailed prioritization information in Table \@ref(tab:tab-ncfdc-1998-priority-detailed) and details of individual prescriptions presented in Table \@ref(tab:tab-ncfdc-1998-pres) and stored [here](https://github.com/NewGraphEnvironment/restoration_wedzin_kwa_2024/blob/main/data/inputs_extracted/ncfdc_1998_prescriptions.csv). The prescription dataset has also been converted to a spatial file and added to the `restoration_wedzin_kwa` shared GIS project through the `sites_restoration.gpkg` file (layer name - `ncfdc_1998_prescriptions`).
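As a rough illustration of the conversion step, a sketch along the lines of the following could be used to turn a prescriptions table into a geopackage layer, mirroring the pattern used for the Price (2014) waypoints above. The toy tibble, column names, and UTM zone 9 coordinates are assumptions for illustration only - they are not the actual contents of the prescriptions csv - and the relative output path stands in for the shared GIS project location.

```{r ncfdc-1998-gpkg-sketch, eval = FALSE}
# illustrative sketch only - toy data standing in for the prescriptions csv;
# the easting/northing columns and UTM zone 9 (EPSG:26909) crs are assumptions
prescriptions <- tibble::tibble(
  site = c("P1", "P2"),
  easting = c(640000, 641000),
  northing = c(6045000, 6046000)
)

prescriptions |>
  dplyr::filter(!is.na(easting)) |>
  sf::st_as_sf(coords = c("easting", "northing"), crs = 26909) |>
  # transform to BC Albers to match the shared project
  sf::st_transform(3005) |>
  sf::st_write(
    # in practice this would be the sites_restoration.gpkg in the shared project
    dsn = "sites_restoration.gpkg",
    layer = "ncfdc_1998_prescriptions",
    delete_layer = TRUE
  )
```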
<br>
```{r tab-ncfdc-1998-priority}
ncfdc_1998_71a %>%
fpr::fpr_kable(caption_text = "Table 71a from the NCFDC (1998) report - summary of the prioritization of sub-basins in the Neexdzii Kwah region.",
scroll = FALSE)
```
<br>
```{r tab-ncfdc-1998-priority-detailed-prep, eval = F}
# read in from the path using row 4 as the header
path <- "~/Library/CloudStorage/OneDrive-Personal/Projects/2024-069-ow-wedzin-kwa-restoration/data/ncfdc_1998/Mid Bulkley Detailed FHAP_CAP_RAP/Appendix/AppH.xls"
path_file <- "data/inputs_extracted/ncfdc_1998_73_app_h.csv"
# read in the data, clean and burn to csv
readxl::read_excel(path, skip = 3) |>
# fill down for System and Sub-basin columns
tidyr::fill(System, `Sub-Basin`) |>
# save as csv to data/inputs_extracted/ncfdc_1998_73_app_h.csv
readr::write_csv(path_file)
```
<br>
```{r tab-ncfdc-1998-priority-detailed}
# read the file back in and present as table
my_caption <- "Table 73 from digital Appendix H of the NCFDC (1998) report (not included in the pdf). Decision matrix to prioritize reaches for restoration based on watershed position, fisheries value, synergistic (downstream impact) value, and the nature of impacts."
path_file <- "data/inputs_extracted/ncfdc_1998_73_app_h.csv"
readr::read_csv(path_file) %>%
fpr::fpr_kable(caption_text = my_caption)
```
<br>
```{r tab-ncfdc-1998-pres}
ncfdc_1998 <- readr::read_csv(
"data/ncfdc_1998_prescriptions_hand_bomb.csv"
)
ncfdc_1998 %>%
dplyr::select(sub_basin:technical_references) %>%
fpr::fpr_kable(caption_text = "Summary of NCFDC (1998) prescriptions for the Neexdzii Kwah area.")
```
## Future Restoration Site Selection
### Fish Passage
High priority fish passage restoration opportunities in the Neexdzii Kwah watershed include multiple culverts on Highway 16 (such as those on Richfield Creek, Johnny David Creek, and Byman Creek) along with crossings on private roads, secondary roads, and the railway (such as those on Ailport Creek, Perow Creek, a tributary to Buck Creek (PSCIS 197640), and Cesford Creek). Details are presented in the following reports:
- [Bulkley Watershed Fish Passage Restoration Planning 2022](https://www.newgraphenvironment.com/fish_passage_bulkley_2022_reporting/)[@irvine2023BulkleyWatershed]
- [Bulkley River and Morice River Watershed Groups Fish Passage Restoration Planning 2021](https://www.newgraphenvironment.com/fish_passage_skeena_2021_reporting/)[@irvine2022BulkleyRiver]
- [Bulkley River and Morice River Watershed Groups Fish Passage Restoration Planning 2020](https://www.newgraphenvironment.com/fish_passage_bulkley_2020_reporting/)[@irvine2021BulkleyRiver]
At the time of writing, lateral connectivity analysis had been run for areas of the Neexdzii Kwah and tributary floodplains for the railway only. The results of this analysis are included in the `lateral_habitat.tiff` layer in the `background_layers.gpkg` and are viewable in the `restoration_wedzin_kwa` project. These results, along with future analyses incorporating major roadways as well (currently under development), will be used to inform future restoration activities.
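To give a sense of how a lateral connectivity output could be summarised, the sketch below computes the area flagged as disconnected from a binary raster. The tiny in-memory raster and the 1/0 coding (1 = connected floodplain, 0 = cut off) are assumptions for illustration only; in practice the real layer would be read from the shared project instead.

```{r lateral-habitat-sketch, eval = FALSE}
# hypothetical sketch - a small in-memory raster stands in for the real
# lateral habitat layer; the 1 = connected / 0 = cut off coding is assumed
r <- terra::rast(nrows = 10, ncols = 10, xmin = 0, xmax = 100, ymin = 0, ymax = 100)
terra::values(r) <- rep(c(1, 0), each = 50)

# area of each cell in map units squared (10 x 10 = 100 here)
cell_area <- prod(terra::res(r))

# total area flagged as disconnected (50 cells in this toy raster)
disconnected_area <- sum(terra::values(r) == 0) * cell_area
disconnected_area
```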
### Local Knowledge for Riparian Area and Erosion Protection Site Selection
Sites proposed for riparian area and erosion protection activities in the next two fiscal years are located in the `sites_restoration.gpkg` and are viewable in the `restoration_wedzin_kwa` project. These sites were selected based on local knowledge and landowner willingness to participate in restoration activities.
### Evaluation of Historic and Current Imagery
This work is ongoing and will be added to the `restoration_wedzin_kwa` project in the future.
### Delineation of Areas of High Fisheries Values
This work is ongoing and will be added to the `restoration_wedzin_kwa` project in the future.
### Parameter Ranking to Select Future Restoration Sites
A summary of an initial draft of GIS and user input parameters used to rank future restoration sites generated from the `restoration_site_priority_parameters.csv` file located [here](https://github.com/NewGraphEnvironment/restoration_wedzin_kwa_2024/blob/main/data/restoration_site_priority_parameters.csv) is included in Table \@ref(tab:tab-gis-params).
<br>
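As a sketch of how scored parameters could eventually be rolled up into a single rank per candidate site, something like the following could apply user-defined weights to parameter scores. The site names, parameter columns, and weights here are all hypothetical and are not the project's actual ranking scheme.

```{r site-score-sketch, eval = FALSE}
# hypothetical example only - sites, columns, and weights are illustrative
sites <- tibble::tibble(
  site_id = c("A", "B", "C"),
  fish_value = c(3, 1, 2),       # higher = higher fisheries value
  impact_severity = c(2, 3, 1),  # higher = more impacted
  access = c(1, 3, 2)            # higher = easier access
)

weights <- c(fish_value = 0.5, impact_severity = 0.3, access = 0.2)

# weighted composite score, highest priority first
ranked <- sites |>
  dplyr::mutate(
    score = fish_value * weights[["fish_value"]] +
      impact_severity * weights[["impact_severity"]] +
      access * weights[["access"]]
  ) |>
  dplyr::arrange(dplyr::desc(score))
ranked
```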
```{r gis-params-update_meta, eval = gis_update}
# load the raw parameter tracking data
gis_params_raw_all <- readr::read_csv(
  "data/inputs_raw/restoration_site_priority_parameters_raw.csv"
)

gis_params_raw_to_update <- gis_params_raw_all %>%
  dplyr::filter(!is.na(source_schema_table) &
                  !is.na(source_column_name) &
                  !stringr::str_detect(source_schema_table, "bcfishobs|bcfishpass")) %>%
  # we know that whse_forest_tenure.ften_range_poly_carto_vw is problematic so remove it
  dplyr::filter(!stringr::str_detect(source_schema_table, "whse_forest_tenure.ften_range_poly_carto_vw"))

# grab the details about each source_schema_table
gis_param_details_prep_schtab <- dplyr::left_join(
  gis_params_raw_to_update,
  rfp::rfp_meta_bcd_xref(),
  by = c("source_schema_table" = "object_name")
)

# grab the column details
params_cols_des <- purrr::map2_df(
  gis_param_details_prep_schtab$source_schema_table,
  gis_param_details_prep_schtab$source_column_name,
  rfp::rfp_meta_bcd_xref_col_comments
)

# left_join the column details to gis_param_details_prep_schtab
gis_param_details_prep <- dplyr::left_join(
  gis_param_details_prep_schtab,
  params_cols_des,
  by = c("source_schema_table" = "object_name", "source_column_name" = "col_name")
)

# add back all the layers filtered out above - this should not need to be
# repeated but will do for now
gis_params_raw_all_updated <- dplyr::bind_rows(
  gis_params_raw_all %>%
    dplyr::filter(is.na(source_schema_table) |
                    is.na(source_column_name) |
                    stringr::str_detect(source_schema_table, "bcfishobs|bcfishpass") |
                    stringr::str_detect(source_schema_table, "whse_forest_tenure.ften_range_poly_carto_vw")),
  gis_param_details_prep
) %>%
  dplyr::mutate(url_browser = dplyr::case_when(
    stringr::str_detect(source_schema_table, "bcfishobs|bcfishpass") ~ "https://smnorris.github.io/bcfishpass/06_data_dictionary.html",
    TRUE ~ url_browser)
  ) %>%
  dplyr::arrange(source_schema_table, source_column_name, column_name_raw)

# this is time consuming so save the result as a csv and make this chunk
# conditional on the gis_update flag
gis_params_raw_all_updated %>%
  readr::write_csv("data/restoration_site_priority_parameters.csv")
```
```{r tab-gis-params, eval = TRUE}
readr::read_csv("data/restoration_site_priority_parameters.csv") %>%
dplyr::select(
source_schema_table,
source_column_name,
column_name_raw,
user_input,
url_browser,
# description_table = description,
col_comments
) %>%
fpr::fpr_kable(caption_text = "GIS parameters used to rank future restoration sites.",
scroll = gitbook_on)
```