diff --git a/_freeze/posts/2024-06-30-land-cover/index/execute-results/html.json b/_freeze/posts/2024-06-30-land-cover/index/execute-results/html.json index baa1ce8..58431a6 100644 --- a/_freeze/posts/2024-06-30-land-cover/index/execute-results/html.json +++ b/_freeze/posts/2024-06-30-land-cover/index/execute-results/html.json @@ -1,8 +1,8 @@ { - "hash": "6e8cd85f1a590f25e688374ff57f6c05", + "hash": "e5b275c5f5c13f1e1771e7210a25e801", "result": { "engine": "knitr", - "markdown": "---\ntitle: \"Mapping Land Cover with R\"\nauthor: \"al\"\ndate: \"2024-07-01\"\ndate-modified: \"2024-07-01\"\ncategories: [land cover, R, planetary computer, remote sensing]\nimage: \"image.jpg\"\nparams:\n repo_owner: \"NewGraphEnvironment\"\n repo_name: \"new_graphiti\"\n post_dir_name: \"2024-06-30-land-cover\"\n update_pkgs: FALSE\n update_gis: FALSE\nexecute:\n warning: false\nformat: \n html:\n code-fold: true\n---\n\n\nVisualize and quantify remotely sense land cover data.... Here is a first start. We will use the European\nSpace Agency's WorldCover product which provides global land cover maps for the years 2020 and 2021 at 10 meter\nresolution based on the combination of Sentinel-1 radar data and Sentinel-2 imagery. We will use the 2021 dataset\nfor mapping an area of the Skeena watershed near Houston, British Columbia. \n\n
\n\n\nThis post was inspired - with much of the code copied - from a repository on GitHub from the wonderfully talented\n[Milos Popovic](https://github.com/milos-agathon/esa-land-cover). \n\n
\n\nFirst thing we will do is load our packages. If you do not have the packages installed yet you can change the `update_pkgs` param in\nthe `yml` of this file to `TRUE`. Using `pak` is great because it allows you to update your packages when you want to.\n\n\n::: {.cell}\n\n```{.r .cell-code}\npkgs_cran <- c(\n \"usethis\",\n \"rstac\",\n \"here\",\n \"fs\",\n \"terra\",\n \"tidyverse\",\n \"rayshader\",\n \"sf\",\n \"classInt\",\n \"rgl\",\n \"tidyterra\",\n \"tabulapdf\",\n \"bcdata\",\n \"ggplot\",\n \"ggdark\",\n \"knitr\",\n \"DT\",\n \"htmlwidgets\")\n\npkgs_gh <- c(\n \"poissonconsulting/fwapgr\",\n \"NewGraphEnvironment/rfp\"\n )\n\npkgs <- c(pkgs_cran, pkgs_gh)\n\nif(params$update_pkgs){\n # install the pkgs\n lapply(pkgs,\n pak::pkg_install,\n ask = FALSE)\n}\n\n# load the pkgs\npkgs_ld <- c(pkgs_cran,\n basename(pkgs_gh))\ninvisible(\n lapply(pkgs_ld,\n require,\n character.only = TRUE)\n)\n\nsource(here::here(\"scripts/functions.R\"))\n```\n:::\n\n\n# Define our Area of Interest\nHere we diverge a bit from Milos version as we are going to load a custom area of interest. We will be connecting to\nour remote database using Poisson Consulting's `fwapgr::fwa_watershed_at_measure` function which leverages the in database\n[`FWA_WatershedAtMeasure`](https://smnorris.github.io/fwapg/04_functions.html#fwa-watershedatmeasure) function from \n[Simon Norris'](https://github.com/smnorris) wonderful [`fwapg`](https://github.com/smnorris/fwapg)\npackage. \n\n
\n\nWe use a `blue line key` and a `downstream route measure` to define our area of interest which is the Neexdzii Kwa \n(a.k.a Upper Bulkley River) near Houston, British Columbia.\n\n
\n\nAs per the [Freshwater Atlas of BC](https://catalogue.data.gov.bc.ca/dataset/freshwater-atlas-stream-network/resource/5459c8c7-f95c-42fa-a439-24439c11929d) - the `blue line key`:\n\n> Uniquely identifies a single flow line such that a main channel and a secondary channel with the same watershed code would have different blue line keys (the Fraser River and all side channels have different blue line keys).\n\n
\n\nA `downstream route measure` is:\n\n>\tThe distance, in meters, along the route from the mouth of the route to the feature. This distance is measured from the mouth of the containing route to the downstream end of the feature.\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# lets build a custom watersehed just for upstream of the confluence of Neexdzii Kwa and Wetzin Kwa\n# blueline key\nblk <- 360873822\n# downstream route measure\ndrm <- 166030.4\n\naoi <- fwapgr::fwa_watershed_at_measure(blue_line_key = blk, \n downstream_route_measure = drm) |> \n sf::st_transform(4326)\n\n#get the bounding box of our aoi\naoi_bb <- sf::st_bbox(aoi)\n```\n:::\n\n\n# Retrieve the Land Cover Data\nFor this example we will retrieve our precipitation data from Microsofts Planetary Computer. We will use `usethis::use_git_ignore` to add the data to our `.gitignore` file so that we do not commit that insano enormous tiff files to our git repository.\n\n usethis::use_git_ignore(paste0('posts/', params$post_dir_name, \"/data/**/*.tif\"))\n \n\n::: {.cell}\n\n```{.r .cell-code}\n# let's create our data directory\ndir_data <- here::here('posts', params$post_dir_name, \"data\")\n\nfs::dir_create(dir_data)\n\n# usethis::use_git_ignore(paste0('posts/', params$post_dir_name, \"/data/**/*.tif\"))\n```\n:::\n\n::: {.cell}\n\n```{.r .cell-code}\nms_query <- rstac::stac(\"https://planetarycomputer.microsoft.com/api/stac/v1\")\n\nms_collections <- ms_query |>\n rstac::collections() |>\n rstac::get_request()\n```\n:::\n\n\nNow - we want to understand these datasets a bit better so let's make a little function to view the options for datasets \nwe can download.\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# Function to extract required fields\n# Function to extract required fields\nextract_fields <- function(x) {\n tibble(\n id = x$id,\n title = x$title,\n time_start = x[[\"cube:dimensions\"]][[\"time\"]][[\"extent\"]][1],\n time_end = x[[\"cube:dimensions\"]][[\"time\"]][[\"extent\"]][2],\n description = x$description\n )\n}\n\n# Apply the function to each element in ms_collections$collections and combine into a dataframe\ndf <- purrr::map_dfr(ms_collections$collections, extract_fields)\n\nmy_dt_table(df, cols_freeze_left = 0, page_length = 5)\n```\n\n::: {.cell-output-display}\n\n```{=html}\n
\n\n```\n\n:::\n:::\n\n\n\n
\n\nHere is the description of our dataset:\n\n\n The European Space Agency (ESA) [WorldCover](https://esa-worldcover.org/en) product provides global land cover maps\n for the years 2020 and 2021 at 10 meter resolution based on the combination of\n [Sentinel-1](https://sentinel.esa.int/web/sentinel/missions/sentinel-1) radar data and\n [Sentinel-2](https://sentinel.esa.int/web/sentinel/missions/sentinel-2) imagery. The discrete classification maps\n provide 11 classes defined using the Land Cover Classification System (LCCS) developed by the United Nations (UN)\n Food and Agriculture Organization (FAO). The map images are stored in [cloud-optimized\n GeoTIFF](https://www.cogeo.org/) format. The WorldCover product is developed by a consortium of European service\n providers and research organizations. [VITO](https://remotesensing.vito.be/) (Belgium) is the prime contractor of\n the WorldCover consortium together with [Brockmann Consult](https://www.brockmann-consult.de/) (Germany), \n [CSSI](https://www.c-s.fr/) (France), [Gamma Remote Sensing AG](https://www.gamma-rs.ch/) (Switzerland), [International\n Institute for Applied Systems Analysis](https://www.iiasa.ac.at/) (Austria), and [Wageningen\n University](https://www.wur.nl/nl/Wageningen-University.htm) (The Netherlands). Two versions of the WorldCover\n product are available: - WorldCover 2020 produced using v100 of the algorithm - [WorldCover 2020 v100 User\n Manual](https://esa-worldcover.s3.eu-central-1.amazonaws.com/v100/2020/docs/WorldCover_PUM_V1.0.pdf) - [WorldCover\n 2020 v100 Validation\n Report]() - WorldCover\n 2021 produced using v200 of the algorithm - [WorldCover 2021 v200 User\n Manual]() -\n [WorldCover 2021 v200 Validaton\n Report]() Since the\n WorldCover maps for 2020 and 2021 were generated with different algorithm versions (v100 and v200, respectively),\n changes between the maps include both changes in real land cover and changes due to the used algorithms.\n\n\nHere we build the query for what we want. We are specifying `collect_this <- \"esa-worldcover\"`.\n\n\n::: {.cell}\n\n```{.r .cell-code}\ncollect_this <- \"esa-worldcover\"\n\nms_esa_query <- rstac::stac_search(\n q = ms_query,\n collections = collect_this,\n datetime = \"2021-01-01T00:00:00Z/2021-12-31T23:59:59Z\",\n bbox = aoi_bb,\n limit = 100\n ) |>\n rstac::get_request()\n\nms_esa_query\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\n###Items\n- features (2 item(s)):\n - ESA_WorldCover_10m_2021_v200_N54W129\n - ESA_WorldCover_10m_2021_v200_N54W126\n- assets: input_quality, map, rendered_preview, tilejson\n- item's fields: \nassets, bbox, collection, geometry, id, links, properties, stac_extensions, stac_version, type\n```\n\n\n:::\n:::\n\nNext we need to sign in to the planetary computer with `rstac::sign_planetary_computer()`.\n\n\n::: {.cell}\n\n```{.r .cell-code}\nms_query_signin <- rstac::items_sign(\n ms_esa_query,\n rstac::sign_planetary_computer()\n )\n```\n:::\n\n\n
\n\nTo actually download the data we are going to put a chunk option that allows us to just execute the code once and \nupdate it with the `update_gis` param in our `yml`. \n\n\n::: {.cell}\n\n```{.r .cell-code}\nrstac::assets_download(\n items = ms_query_signin,\n asset_names = \"map\",\n output_dir = here::here('posts', params$post_dir_name, \"data\"),\n overwrite = TRUE\n)\n```\n:::\n\n\n
\n\nNice. So now let's read in these data, clip them to our area of interest with `terra::crop` then combine them into one tiff using\n`terra::mosaic`.\n\n\n::: {.cell}\n\n```{.r .cell-code}\ndir_out <- here::here('posts', params$post_dir_name, \"data/esa-worldcover/v200/2021/map\")\n\nrast_files <- list.files(\n dir_out,\n full.names = TRUE\n)\n\nland_cover_raster_raw <- rast_files |>\n purrr::map(terra::rast) \n\n# Clip the rasters to the AOI\nland_cover_raster_clipped <- purrr::map(\n land_cover_raster_raw,\n ~ terra::crop(.x, aoi, snap = \"in\", mask = TRUE)\n)\n\n# combine the rasters\nland_cover_raster <- do.call(terra::mosaic, land_cover_raster_clipped)\n\n# Optionally, plot the merged raster\n# terra::plot(land_cover_raster)\n```\n:::\n\n\n# Digital Elevation Model\n\nLet's grab the digital elevation model using `elevatr::get_elev_raster` so we can downsample the land cover raster to the same resolution as the DEM.\n\n\n::: {.cell}\n\n```{.r .cell-code}\ndem <- elevatr::get_elev_raster(\n locations = aoi,\n z = 11,\n clip = \"bbox\"\n) |>\n terra::rast()\n```\n:::\n\n\n# Resample\nHere we resample the land cover raster to the same resolution as the DEM.\n\n\n::: {.cell}\n\n```{.r .cell-code}\nland_cover_raster_resampled <- terra::resample(\n land_cover_raster,\n dem,\n method = \"near\"\n)\n\n# terra::plot(land_cover_raster_resampled)\n```\n:::\n\n\n# Plot the Land Cover and DEM\n\n## Get Additional Data\nWe could use some data for context such as major streams and the railway. We get the streams and railway from \ndata distribution bc api using the `bcdata` package. Our `rfp` package calls just allow some extra sanity checks on the \n`bcdata::bcdc_query_geodata` function. It's not really necessary but can be helpful when errors occur (ex. the name of \nthe column to filter on is input incorrectly). \n\n
\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# grab all the railways\nl_rail <- rfp::rfp_bcd_get_data(\n bcdata_record_id = \"whse_basemapping.gba_railway_tracks_sp\"\n) |> \n sf::st_transform(4326) |> \n janitor::clean_names() \n\n# streams in the bulkley and then filter to just keep the big ones\nl_streams <- rfp::rfp_bcd_get_data(\n bcdata_record_id = \"whse_basemapping.fwa_stream_networks_sp\",\n col_filter = \"watershed_group_code\",\n col_filter_value = \"BULK\",\n # grab a smaller object by including less columns\n col_extract = c(\"linear_feature_id\", \"stream_order\", \"gnis_name\", \"downstream_route_measure\", \"blue_line_key\", \"length_metre\")\n) |> \n sf::st_transform(4326) |> \n janitor::clean_names() |> \n dplyr::filter(stream_order > 4)\n```\n:::\n\n\nNow we trim up those layers. We have some functions to validate and repair geometries and then we clip them to our area of interest.\n\n\n::: {.cell}\n\n```{.r .cell-code}\nlayers_to_trim <- tibble::lst(l_rail, l_streams)\n\n# Function to validate and repair geometries\nvalidate_geometries <- function(layer) {\n layer <- sf::st_make_valid(layer)\n layer <- layer[sf::st_is_valid(layer), ]\n return(layer)\n}\n\n# Apply validation to the AOI and layers\naoi <- validate_geometries(aoi)\nlayers_to_trim <- purrr::map(layers_to_trim, validate_geometries)\n\n# clip them with purrr and sf\nlayers_trimmed <- purrr::map(\n layers_to_trim,\n ~ sf::st_intersection(.x, aoi)\n) \n```\n:::\n\n\n\n## Get Legend Values\n\nSo we need to map the values in the raster to the actual land cover classes. We can do this by extracting the cross\nreference table from the pdf provided in the metatdata of the data. We will use the `tabulapdf` package to extract the\ntable and do some work to collapse it into a cross-referenceing tool we can use for land cover classifications and \nsubsequent color schemes.\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# extract the cross reference table from the pdf\npdf_file <- \"https://esa-worldcover.s3.eu-central-1.amazonaws.com/v200/2021/docs/WorldCover_PUM_V2.0.pdf\"\npage <- 15\n\n# table_map <- tabulapdf::locate_areas(pdf_file, pages = page)\n# table_coords <- list(as.numeric(unlist(table_map[[1]])))\n\ntable_coords <- list(c(94.55745, 74.66493, 755.06007, 550.41094))\n\n\nxref_raw <- tabulapdf::extract_tables(\n pdf_file,\n pages = page,\n method = \"lattice\",\n area = table_coords,\n guess = FALSE\n)\n\n# ##this is how we make a clean dataframe\nxref_raw2 <- xref_raw |> \n purrr::pluck(1) |>\n tibble::as_tibble() |>\n janitor::row_to_names(1) |>\n janitor::clean_names()\n\nxref_raw3 <- xref_raw2 |> \n tidyr::fill(code, .direction = \"down\")\n\n# Custom function to concatenate rows within each group\ncollapse_rows <- function(df) {\n df |> \n dplyr::summarise(across(everything(), ~ paste(na.omit(.), collapse = \" \")))\n}\n\n# Group by code and apply the custom function\nxref <- xref_raw3 |>\n dplyr::group_by(code) |>\n dplyr::group_modify(~ collapse_rows(.x)) |>\n dplyr::ungroup() |> \n dplyr::mutate(code = as.numeric(code)) |> \n dplyr::arrange(code) |> \n purrr::set_names(c(\"code\", \"land_cover_class\", \"lccs_code\", \"definition\", \"color_code\")) |> \n # now we make a list of the color codes and convert to hex. 
Even though we don't actually need them here...\n dplyr::mutate(color_code = purrr::map(color_code, ~ as.numeric(strsplit(.x, \",\")[[1]])),\n color = purrr::map_chr(color_code, ~ rgb(.x[1], .x[2], .x[3], maxColorValue = 255))) |> \n dplyr::relocate(definition, .after = last_col())\n\n\nmy_dt_table(xref, cols_freeze_left = 0, page_length = 5)\n```\n\n::: {.cell-output-display}\n\n```{=html}\n
\n\n```\n\n:::\n:::\n\n\nWe seem to get issues when the colors we have in our tiff does not match our cross-reference table. For this reason we will\nremove any values in the `xref` object that are not in the rasters that we are plotting.\n\n\n
\n\nAlso - looks like when we combined our tiffs together with `terra::mosaic` we lost the color table associated with the SpatRaster object.\nWe can recover that table with `terra::coltab(land_cover_raster_raw[[1]])`\n\n## Plot \nOk. Let's plot it up. We will use `ggplot2` and `tidyterra` to plot the land cover raster and then add the streams and railway on top of that.\n\n\n::: {.cell}\n\n```{.r .cell-code}\ncolor_table <- terra::coltab(land_cover_raster_raw[[1]])[[1]]\n\ncoltab(land_cover_raster_resampled) <- color_table\n\nxref_cleaned <- xref |> \n filter(code %in% sort(unique(terra::values(land_cover_raster_resampled))))\n\n\nmap <- ggplot() +\n tidyterra::geom_spatraster(\n data = as.factor(land_cover_raster_resampled),\n use_coltab = TRUE,\n maxcell = Inf\n ) +\n tidyterra::scale_fill_coltab(\n data = as.factor(land_cover_raster_resampled),\n name = \"ESA Land Cover\",\n labels = xref_cleaned$land_cover_class\n ) +\n # geom_sf(\n # data = aoi,\n # fill = \"transparent\",\n # color = \"white\",\n # linewidth = .5\n # ) +\n geom_sf(\n data = layers_trimmed$l_streams,\n color = \"blue\",\n size = 1\n ) +\n geom_sf(\n data = layers_trimmed$l_rail,\n color = \"black\",\n size = 1\n ) +\n ggdark::dark_theme_void()\n\n\nmap\n```\n\n::: {.cell-output-display}\n![](index_files/figure-html/plot-region-1.png){width=672}\n:::\n:::\n\n\n# Refine the Area of Interest\n\n\n::: {.cell}\n\n```{.r .cell-code}\nbuffer_size <- 250\nlength_define <- 10000\n```\n:::\n\n\n\nNice. So let's say we want to zoom in on a particular area such as the mainstem of the Wetzin Kwa for the lowest\n10km of the stream for a buffered area at approximately 250m either side of the stream. We can do that by getting the name of the stream. The smallest downstream route measure and the first segments that add to 10000m in linear length. \n\n\n\n::: {.cell}\n\n```{.r .cell-code}\naoi_neexdzii <- layers_trimmed$l_streams |> \n dplyr::filter(gnis_name == \"Bulkley River\") |> \n dplyr::arrange(downstream_route_measure) |> \n # calculate when we get to 10000m by adding up the length_metre field and filtering out everything up to 10000m\n dplyr::filter(cumsum(length_metre) <= length_define) \n\naoi_neexdzii_buffered <- sf::st_buffer(aoi_neexdzii, buffer_size, endCapStyle = \"FLAT\")\n```\n:::\n\n\n## Plot Refined Area\n\nClip the resampled raster to the buffered area.\n\n\n::: {.cell}\n\n```{.r .cell-code}\nland_cover_sample <- terra::crop(land_cover_raster_resampled, aoi_neexdzii_buffered, snap = \"in\", mask = TRUE, extend = TRUE)\n```\n:::\n\n\n
\n\nSo it looks like we lose our color values with the crop. We see that with `has.colors(land_cover_sample)`.\n\n\n::: {.cell}\n\n```{.r .cell-code}\nhas.colors(land_cover_sample)\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\n[1] TRUE\n```\n\n\n:::\n:::\n\n\n
\n\nLet's add them back in with the `terra::coltab` function.\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\ncoltab(land_cover_sample) <- color_table\nhas.colors(land_cover_sample)\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\n[1] TRUE\n```\n\n\n:::\n:::\n\n\n
\n\nNow we should be able to plot what we have. Let's re-trim up our extra data layers and add those in as well.\n\n::: {.cell}\n\n```{.r .cell-code}\n# clip them with purrr and sf\nlayers_trimmed <- purrr::map(\n layers_to_trim,\n ~ sf::st_intersection(.x, aoi_neexdzii_buffered)\n) \n\nxref_cleaned <- xref |> \n filter(code %in% sort(unique(terra::values(land_cover_sample))))\n\nmap <- ggplot2::ggplot() +\n tidyterra::geom_spatraster(\n data = as.factor(land_cover_sample),\n use_coltab = TRUE,\n maxcell = Inf\n ) +\n tidyterra::scale_fill_coltab(\n data = as.factor(land_cover_sample),\n name = \"ESA Land Cover\",\n labels = xref_cleaned$land_cover_class\n ) +\n geom_sf(\n data = layers_trimmed$l_rail,\n color = \"black\",\n size = 1\n ) +\n geom_sf(\n data = layers_trimmed$l_streams,\n color = \"blue\",\n size = 1\n ) +\n ggdark::dark_theme_void()\n\n# save the plot\n# ggsave(here::here('posts', params$post_dir_name, \"image.jpg\"), width = 10, height = 10)\n \nmap\n```\n\n::: {.cell-output-display}\n![](index_files/figure-html/plot2-1.png){width=672}\n:::\n:::\n\n\n
\n\nPretty sweet. Next up is to summarize the land cover classes for different areas to build our understanding of\npotential impacts due to land cover changes. We will likely use the `terra::zonal` function to do this. Stay tuned.\n",
+    "markdown": "---\ntitle: \"Mapping Land Cover with R\"\nauthor: \"al\"\ndate: \"2024-07-01\"\ndate-modified: \"2024-07-02\"\ncategories: [land cover, R, planetary computer, remote sensing]\nimage: \"image.jpg\"\nparams:\n  repo_owner: \"NewGraphEnvironment\"\n  repo_name: \"new_graphiti\"\n  post_dir_name: \"2024-06-30-land-cover\"\n  update_pkgs: FALSE\n  update_gis: FALSE\nexecute:\n  warning: false\nformat: \n  html:\n    code-fold: true\n---\n\n\nVisualize and quantify remotely sensed land cover data... Here is a first start. We will use the European\nSpace Agency's WorldCover product which provides global land cover maps for the years 2020 and 2021 at 10 meter\nresolution based on the combination of Sentinel-1 radar data and Sentinel-2 imagery. We will use the 2021 dataset\nfor mapping an area of the Skeena watershed near Houston, British Columbia. \n\n&#13;
\n\n\nThis post was inspired by - with much of the code copied from - a GitHub repository from the wonderfully talented\n[Milos Popovic](https://github.com/milos-agathon/esa-land-cover). \n\n&#13;
\n\nFirst thing we will do is load our packages. If you do not have the packages installed yet you can change the `update_pkgs` param in\nthe `yml` of this file to `TRUE`. Using `pak` is great because it allows you to update your packages when you want to.\n\n\n::: {.cell}\n\n```{.r .cell-code}\npkgs_cran <- c(\n  \"usethis\",\n  \"rstac\",\n  \"here\",\n  \"fs\",\n  \"terra\",\n  \"tidyverse\",\n  \"rayshader\",\n  \"sf\",\n  \"classInt\",\n  \"rgl\",\n  \"tidyterra\",\n  \"tabulapdf\",\n  \"bcdata\",\n  \"ggplot2\",\n  \"ggdark\",\n  \"knitr\",\n  \"DT\",\n  \"htmlwidgets\")\n\npkgs_gh <- c(\n  \"poissonconsulting/fwapgr\",\n  \"NewGraphEnvironment/rfp\"\n  )\n\npkgs <- c(pkgs_cran, pkgs_gh)\n\nif(params$update_pkgs){\n  # install the pkgs\n  lapply(pkgs,\n         pak::pkg_install,\n         ask = FALSE)\n}\n\n# load the pkgs\npkgs_ld <- c(pkgs_cran,\n             basename(pkgs_gh))\ninvisible(\n  lapply(pkgs_ld,\n         require,\n         character.only = TRUE)\n)\n\nsource(here::here(\"scripts/functions.R\"))\n```\n:::\n\n\n# Define our Area of Interest\nHere we diverge a bit from Milos' version as we are going to load a custom area of interest. We will be connecting to\nour remote database using Poisson Consulting's `fwapgr::fwa_watershed_at_measure` function which leverages the in-database\n[`FWA_WatershedAtMeasure`](https://smnorris.github.io/fwapg/04_functions.html#fwa-watershedatmeasure) function from \n[Simon Norris'](https://github.com/smnorris) wonderful [`fwapg`](https://github.com/smnorris/fwapg)\npackage. \n\n&#13;
\n\nWe use a `blue line key` and a `downstream route measure` to define our area of interest, which is the Neexdzii Kwa \n(a.k.a. Upper Bulkley River) near Houston, British Columbia.\n\n&#13;
\n\nAs per the [Freshwater Atlas of BC](https://catalogue.data.gov.bc.ca/dataset/freshwater-atlas-stream-network/resource/5459c8c7-f95c-42fa-a439-24439c11929d) - the `blue line key`:\n\n> Uniquely identifies a single flow line such that a main channel and a secondary channel with the same watershed code would have different blue line keys (the Fraser River and all side channels have different blue line keys).\n\n
\n\nA `downstream route measure` is:\n\n>\tThe distance, in meters, along the route from the mouth of the route to the feature. This distance is measured from the mouth of the containing route to the downstream end of the feature.\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# let's build a custom watershed just for upstream of the confluence of Neexdzii Kwa and Wetzin Kwa\n# blueline key\nblk <- 360873822\n# downstream route measure\ndrm <- 166030.4\n\naoi <- fwapgr::fwa_watershed_at_measure(blue_line_key = blk, \n                                        downstream_route_measure = drm) |> \n  sf::st_transform(4326)\n\n#get the bounding box of our aoi\naoi_bb <- sf::st_bbox(aoi)\n```\n:::\n\n\n# Retrieve the Land Cover Data\nFor this example we will retrieve our land cover data from Microsoft's Planetary Computer. We will use `usethis::use_git_ignore` to add the data to our `.gitignore` file so that we do not commit those insano enormous tiff files to our git repository.\n\n    usethis::use_git_ignore(paste0('posts/', params$post_dir_name, \"/data/**/*.tif\"))\n    \n\n::: {.cell}\n\n```{.r .cell-code}\n# let's create our data directory\ndir_data <- here::here('posts', params$post_dir_name, \"data\")\n\nfs::dir_create(dir_data)\n\n# usethis::use_git_ignore(paste0('posts/', params$post_dir_name, \"/data/**/*.tif\"))\n```\n:::\n\n::: {.cell}\n\n```{.r .cell-code}\nms_query <- rstac::stac(\"https://planetarycomputer.microsoft.com/api/stac/v1\")\n\nms_collections <- ms_query |>\n  rstac::collections() |>\n  rstac::get_request()\n```\n:::\n\n\nNow - we want to understand these datasets a bit better so let's make a little function to view the options for datasets \nwe can download.\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# Function to extract required fields\nextract_fields <- function(x) {\n  tibble::tibble(\n    id = x$id,\n    title = x$title,\n    time_start = x[[\"cube:dimensions\"]][[\"time\"]][[\"extent\"]][1],\n    time_end = x[[\"cube:dimensions\"]][[\"time\"]][[\"extent\"]][2],\n    description = x$description\n  )\n}\n\n# Apply the function to each element in ms_collections$collections and combine into a dataframe\ndf <- purrr::map_dfr(ms_collections$collections, extract_fields)\n\nmy_dt_table(df, \n            cols_freeze_left = 0, \n            page_length = 5)\n```\n\n::: {.cell-output-display}\n\n```{=html}\n&#13;
\n\n```\n\n:::\n:::\n\n\n\n
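\n\nRather than scrolling the widget, we can pull the WorldCover row straight out of that dataframe - a quick sketch using the `df` we just built:\n\n    df |>\n      dplyr::filter(id == \"esa-worldcover\") |>\n      dplyr::pull(description)\n\n&#13;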
\n\nHere is the description of our dataset:\n\n\n The European Space Agency (ESA) [WorldCover](https://esa-worldcover.org/en) product provides global land cover maps\n for the years 2020 and 2021 at 10 meter resolution based on the combination of\n [Sentinel-1](https://sentinel.esa.int/web/sentinel/missions/sentinel-1) radar data and\n [Sentinel-2](https://sentinel.esa.int/web/sentinel/missions/sentinel-2) imagery. The discrete classification maps\n provide 11 classes defined using the Land Cover Classification System (LCCS) developed by the United Nations (UN)\n Food and Agriculture Organization (FAO). The map images are stored in [cloud-optimized\n GeoTIFF](https://www.cogeo.org/) format. The WorldCover product is developed by a consortium of European service\n providers and research organizations. [VITO](https://remotesensing.vito.be/) (Belgium) is the prime contractor of\n the WorldCover consortium together with [Brockmann Consult](https://www.brockmann-consult.de/) (Germany), \n [CSSI](https://www.c-s.fr/) (France), [Gamma Remote Sensing AG](https://www.gamma-rs.ch/) (Switzerland), [International\n Institute for Applied Systems Analysis](https://www.iiasa.ac.at/) (Austria), and [Wageningen\n University](https://www.wur.nl/nl/Wageningen-University.htm) (The Netherlands). Two versions of the WorldCover\n product are available: - WorldCover 2020 produced using v100 of the algorithm - [WorldCover 2020 v100 User\n Manual](https://esa-worldcover.s3.eu-central-1.amazonaws.com/v100/2020/docs/WorldCover_PUM_V1.0.pdf) - [WorldCover\n 2020 v100 Validation\n Report]() - WorldCover\n 2021 produced using v200 of the algorithm - [WorldCover 2021 v200 User\n Manual]() -\n [WorldCover 2021 v200 Validaton\n Report]() Since the\n WorldCover maps for 2020 and 2021 were generated with different algorithm versions (v100 and v200, respectively),\n changes between the maps include both changes in real land cover and changes due to the used algorithms.\n\n\nHere we build the query for what we want. We are specifying `collect_this <- \"esa-worldcover\"`.\n\n\n::: {.cell}\n\n```{.r .cell-code}\ncollect_this <- \"esa-worldcover\"\n\nms_esa_query <- rstac::stac_search(\n q = ms_query,\n collections = collect_this,\n datetime = \"2021-01-01T00:00:00Z/2021-12-31T23:59:59Z\",\n bbox = aoi_bb,\n limit = 100\n ) |>\n rstac::get_request()\n\nms_esa_query\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\n###Items\n- features (2 item(s)):\n - ESA_WorldCover_10m_2021_v200_N54W129\n - ESA_WorldCover_10m_2021_v200_N54W126\n- assets: input_quality, map, rendered_preview, tilejson\n- item's fields: \nassets, bbox, collection, geometry, id, links, properties, stac_extensions, stac_version, type\n```\n\n\n:::\n:::\n\nNext we need to sign in to the planetary computer with `rstac::sign_planetary_computer()`.\n\n\n::: {.cell}\n\n```{.r .cell-code}\nms_query_signin <- rstac::items_sign(\n ms_esa_query,\n rstac::sign_planetary_computer()\n )\n```\n:::\n\n\n
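\n\nSigning appends a short-lived access token to each asset url - that is what authorizes the actual download. If you are curious you can peek at one (a sketch, assuming the standard `rstac` item structure):\n\n    # the href now carries a query string with the token\n    ms_query_signin$features[[1]]$assets$map$href\n\n&#13;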
\n\nTo actually download the data we use a chunk option that lets us execute the code just once and \nre-run it on demand via the `update_gis` param in our `yml`. \n\n\n::: {.cell}\n\n```{.r .cell-code}\nrstac::assets_download(\n  items = ms_query_signin,\n  asset_names = \"map\",\n  output_dir = here::here('posts', params$post_dir_name, \"data\"),\n  overwrite = TRUE\n)\n```\n:::\n\n\n&#13;
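\n\nThe chunk option itself is stripped from this rendered output, but a minimal sketch of that gating - assuming Quarto's hashpipe syntax and the `update_gis` param from our `yml` - is a single line at the top of the download chunk:\n\n    #| eval: !expr params$update_gis\n\nWith `update_gis: FALSE` the chunk is skipped on render; flip it to `TRUE` to re-download the tiffs.\n\n&#13;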
\n\nNice. So now let's read in these data, clip them to our area of interest with `terra::crop`, then combine them into one tiff using\n`terra::mosaic`. In order to visualize NA values we can convert them to a mask and plot.\n\n\n::: {.cell}\n\n```{.r .cell-code}\ndir_out <- here::here('posts', params$post_dir_name, \"data/esa-worldcover/v200/2021/map\")\n\nrast_files <- list.files(\n  dir_out,\n  full.names = TRUE\n)\n\nland_cover_raster_raw <- rast_files |>\n  purrr::map(terra::rast) \n\n#testing for NA\n# v1 <- terra::values(land_cover_raster_raw[[1]])\n# v2 <- terra::values(land_cover_raster_raw[[2]])\n# \n# # contains NA? No!!\n# any(is.na(v1))\n# any(is.na(v2))\n\n# Clip the rasters to the AOI\nland_cover_raster_clipped <- purrr::map(\n  land_cover_raster_raw,\n  ~ terra::crop(.x, aoi, snap = \"in\", mask = TRUE)\n)\n\n# combine the rasters\nland_cover_raster <- do.call(terra::mosaic, land_cover_raster_clipped)\n\n# in order to visualize NA values we need to convert them to a mask and plot\nna_mask <- is.na(land_cover_raster)\nplot(na_mask, main = \"NA Values in the Raster\", col = c(\"white\", \"red\"))\n```\n\n::: {.cell-output-display}\n![](index_files/figure-html/rasters-crop-combine-1.png){width=672}\n:::\n:::\n\n\n# Resampling with Digital Elevation Model \n\nIf we want to, we can grab a digital elevation model using [`elevatr::get_elev_raster`](https://github.com/jhollist/elevatr) \nso we can resample the land cover raster to the same resolution as the DEM. We could increase or decrease the resolution\ndepending on the [zoom level](https://github.com/tilezen/joerd/blob/master/docs/data-sources.md#what-is-the-ground-resolution) we choose.\n\n&#13;
\n\nThe reasons we might choose to resample (upsample or downsample) are summarized in the table below. Because the first \nmap we wish to render (the entire Neexdzii Kwah) is quite large, we will downsample so that it does not take 20 minutes\nto render the map. We can see the resolution of the original raster with `terra::cellSize(land_cover_raster)`.\n\n\n::: {.cell}\n\n```{.r .cell-code}\nresampling_table <- tibble::tibble(\n  Feature = c(\"Definition\", \"Purpose\", \"Method\", \"Effect\", \"Example\"),\n  Upsampling = c(\n    \"Increasing spatial resolution\",\n    \"Match higher-resolution data, enhance detail\",\n    \"Interpolation (nearest, bilinear, cubic)\",\n    \"More cells, finer detail\",\n    \"10m to 5m resolution\"\n  ),\n  Downsampling = c(\n    \"Decreasing spatial resolution\",\n    \"Reduce data size, speed up processing\",\n    \"Aggregation (average, sum, nearest)\",\n    \"Fewer cells, less detail\",\n    \"10m to 20m resolution\"\n  )\n)\n\nknitr::kable(resampling_table, caption = \"Summary of Upsampling and Downsampling\")\n```\n\n::: {.cell-output-display}\n\n\nTable: Summary of Upsampling and Downsampling\n\n|Feature    |Upsampling                                   |Downsampling                          |\n|:----------|:--------------------------------------------|:-------------------------------------|\n|Definition |Increasing spatial resolution                |Decreasing spatial resolution         |\n|Purpose    |Match higher-resolution data, enhance detail |Reduce data size, speed up processing |\n|Method     |Interpolation (nearest, bilinear, cubic)     |Aggregation (average, sum, nearest)   |\n|Effect     |More cells, finer detail                     |Fewer cells, less detail              |\n|Example    |10m to 5m resolution                         |10m to 20m resolution                 |\n\n\n:::\n:::\n\n::: {.cell}\n\n```{.r .cell-code}\nterra::cellSize(land_cover_raster)\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\nclass       : SpatRaster \ndimensions  : 6870, 10028, 1  (nrow, ncol, nlyr)\nresolution  : 8.333333e-05, 8.333333e-05  (x, y)\nextent      : -126.7469, -125.9112, 54.10183, 54.67433  (xmin, xmax, ymin, ymax)\ncoord. ref. : lon/lat WGS 84 (EPSG:4326) \nsource(s)   : memory\nvarname     : ESA_WorldCover_10m_2021_v200_N54W126_Map \nname        : area \nmin value   :  49.86984 \nmax value   :  50.56391 \n```\n\n\n:::\n:::\n\n\n&#13;
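\n\nThat resolution is reported in degrees (lon/lat), so a quick back-of-envelope conversion to metres at our latitude helps - a sketch, assuming 1 degree of latitude is ~111,320m:\n\n    res_deg <- 8.333333e-05\n    lat <- 54.4  # roughly the centre of our aoi\n    # north-south: ~9.3m\n    res_deg * 111320\n    # east-west: ~5.4m\n    res_deg * 111320 * cos(lat * pi / 180)\n\n&#13;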
\n\nThis translates to cells ~9m high x ~6m wide (~50 m^2). For the DEM we chose a zoom level of 11 but we can go as \nhigh as 14. Resolution at a given zoom level is related to latitude. For the latitude of our study area (approx 55 degrees) the \nresolution at zoom level 11 is ~30m high x ~18m wide. At zoom level 14 it is ~4m high x ~2m wide.\n\n\n::: {.cell}\n\n```{.r .cell-code}\ndem <- elevatr::get_elev_raster(\n  locations = aoi,\n  z = 11,\n  clip = \"bbox\"\n) |>\n  terra::rast()\n```\n:::\n\n## Downsample the Land Cover Raster\nHere we downsample the land cover raster to the same resolution as the DEM for the purposes of rendering our map\nof the larger area in a reasonable amount of time.\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# Here we resample the land cover raster to the same resolution as the DEM.\nland_cover_raster_resampled <- terra::resample(\n  land_cover_raster,\n  dem,\n  method = \"near\",\n  threads = TRUE\n)\n\n# terra::plot(land_cover_raster_resampled)\n```\n:::\n\n\n# Plot Land Cover for Entire Neexdzii Kwah\n\n## Get Additional Data\nWe could use some data for context such as major streams and the railway. We get the streams and railway from \nthe B.C. Data Catalogue API using the `bcdata` package. Our `rfp` package calls just allow some extra sanity checks and \nconvenience moves on the \n`bcdata::bcdc_query_geodata` function. It's not really necessary but can be helpful (ex. we can use lower-case layer and \ncolumn names, and it will throw an informative error if a column name is input incorrectly). \n\n&#13;
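\n\nFor reference, the `rfp` railway call below boils down to something roughly like this raw `bcdata` equivalent (a sketch - the wrapper mostly adds the name checks on top):\n\n    l_rail_raw <- bcdata::bcdc_query_geodata(\n      \"whse_basemapping.gba_railway_tracks_sp\"\n    ) |>\n      bcdata::collect()\n\n&#13;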
\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# grab all the railways\nl_rail <- rfp::rfp_bcd_get_data(\n  bcdata_record_id = \"whse_basemapping.gba_railway_tracks_sp\"\n) |> \n  sf::st_transform(4326) |> \n  janitor::clean_names() \n\n# streams in the bulkley and then filter to just keep the big ones\nl_streams <- rfp::rfp_bcd_get_data(\n  bcdata_record_id = \"whse_basemapping.fwa_stream_networks_sp\",\n  col_filter = \"watershed_group_code\",\n  col_filter_value = \"BULK\",\n  # grab a smaller object by including fewer columns\n  col_extract = c(\"linear_feature_id\", \"stream_order\", \"gnis_name\", \"downstream_route_measure\", \"blue_line_key\", \"length_metre\")\n) |> \n  sf::st_transform(4326) |> \n  janitor::clean_names() |> \n  dplyr::filter(stream_order > 4)\n```\n:::\n\n\nNow we trim up those layers. We have some functions to validate and repair geometries and then we clip them to our area of interest.\n\n\n::: {.cell}\n\n```{.r .cell-code}\nlayers_to_trim <- tibble::lst(l_rail, l_streams)\n\n# Function to validate and repair geometries\nvalidate_geometries <- function(layer) {\n  layer <- sf::st_make_valid(layer)\n  layer <- layer[sf::st_is_valid(layer), ]\n  return(layer)\n}\n\n# Apply validation to the AOI and layers\naoi <- validate_geometries(aoi)\nlayers_to_trim <- purrr::map(layers_to_trim, validate_geometries)\n\n# clip them with purrr and sf\nlayers_trimmed <- purrr::map(\n  layers_to_trim,\n  ~ sf::st_intersection(.x, aoi)\n) \n```\n:::\n\n\n\n## Get Legend Values\n\nSo we need to map the values in the raster to the actual land cover classes. We can do this by extracting the cross\nreference table from the pdf provided in the metadata of the data. We will use the `tabulapdf` package to extract the\ntable and do some work to collapse it into a cross-referencing tool we can use for land cover classifications and \nsubsequent color schemes.\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# extract the cross reference table from the pdf\npdf_file <- \"https://esa-worldcover.s3.eu-central-1.amazonaws.com/v200/2021/docs/WorldCover_PUM_V2.0.pdf\"\npage <- 15\n\n# table_map <- tabulapdf::locate_areas(pdf_file, pages = page)\n# table_coords <- list(as.numeric(unlist(table_map[[1]])))\n\ntable_coords <- list(c(94.55745, 74.66493, 755.06007, 550.41094))\n\n\nxref_raw <- tabulapdf::extract_tables(\n  pdf_file,\n  pages = page,\n  method = \"lattice\",\n  area = table_coords,\n  guess = FALSE\n)\n\n# ##this is how we make a clean dataframe\nxref_raw2 <- xref_raw |> \n  purrr::pluck(1) |>\n  tibble::as_tibble() |>\n  janitor::row_to_names(1) |>\n  janitor::clean_names()\n\nxref_raw3 <- xref_raw2 |> \n  tidyr::fill(code, .direction = \"down\")\n\n# Custom function to concatenate rows within each group\ncollapse_rows <- function(df) {\n  df |> \n    dplyr::summarise(across(everything(), ~ paste(na.omit(.), collapse = \" \")))\n}\n\n# Group by code and apply the custom function\nxref <- xref_raw3 |>\n  dplyr::group_by(code) |>\n  dplyr::group_modify(~ collapse_rows(.x)) |>\n  dplyr::ungroup() |> \n  dplyr::mutate(code = as.numeric(code)) |> \n  dplyr::arrange(code) |> \n  purrr::set_names(c(\"code\", \"land_cover_class\", \"lccs_code\", \"definition\", \"color_code\")) |> \n  # now we make a list of the color codes and convert to hex. &#13;
Even though we don't actually need them here...\n dplyr::mutate(color_code = purrr::map(color_code, ~ as.numeric(strsplit(.x, \",\")[[1]])),\n color = purrr::map_chr(color_code, ~ rgb(.x[1], .x[2], .x[3], maxColorValue = 255))) |> \n dplyr::relocate(definition, .after = last_col())\n\n\nmy_dt_table(xref, cols_freeze_left = 0, page_length = 5)\n```\n\n::: {.cell-output-display}\n\n```{=html}\n
\n\n```\n\n:::\n:::\n\n\nWe seem to get issues when the colors in our tiff do not match our cross-reference table. For this reason we will\nremove any values in the `xref` object that are not in the rasters that we are plotting.\n\n\n&#13;
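\n\nOne way to see exactly which codes are missing is a quick base R check against the resampled raster:\n\n    codes_in_raster <- sort(unique(terra::values(land_cover_raster_resampled)))\n    # codes documented in the pdf but absent from our raster\n    setdiff(xref$code, codes_in_raster)\n\n&#13;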
\n\nAlso - looks like when we combined our tiffs with `terra::mosaic` we lost the color table associated with the SpatRaster object.\nWe can recover that table with `terra::coltab(land_cover_raster_raw[[1]])`.\n\n## Plot \nOk. Let's plot it up. We will use `ggplot2` and `tidyterra` to plot the land cover raster and then add the streams and railway on top of that.\n\n\n::: {.cell}\n\n```{.r .cell-code}\ncolor_table <- terra::coltab(land_cover_raster_raw[[1]])[[1]]\n\ncoltab(land_cover_raster_resampled) <- color_table\n\nxref_cleaned <- xref |> \n  dplyr::filter(code %in% sort(unique(terra::values(land_cover_raster_resampled))))\n\n\nmap <- ggplot() +\n  tidyterra::geom_spatraster(\n    data = as.factor(land_cover_raster_resampled),\n    use_coltab = TRUE,\n    maxcell = Inf\n  ) +\n  tidyterra::scale_fill_coltab(\n    data = as.factor(land_cover_raster_resampled),\n    name = \"ESA Land Cover\",\n    labels = xref_cleaned$land_cover_class\n  ) +\n  # geom_sf(\n  #   data = aoi,\n  #   fill = \"transparent\",\n  #   color = \"white\",\n  #   linewidth = .5\n  # ) +\n  geom_sf(\n    data = layers_trimmed$l_streams,\n    color = \"blue\",\n    size = 1\n  ) +\n  geom_sf(\n    data = layers_trimmed$l_rail,\n    color = \"black\",\n    size = 1\n  ) +\n  ggdark::dark_theme_void()\n\n\nmap\n```\n\n::: {.cell-output-display}\n![](index_files/figure-html/plot-region-1.png){width=672}\n:::\n:::\n\n\n# Refine the Area of Interest\n\n\n::: {.cell}\n\n```{.r .cell-code}\nbuffer <- 250\nlength_m <- 10000\n```\n:::\n\n\n\nNice. So let's say we want to zoom in on a particular area such as the lowest\n10000m of the mainstem of the Neexdzii Kwah, buffered at approximately 250m either side of the stream. \nWe can do that by filtering on the name of the stream, arranging by downstream route measure and keeping only the first \nsegments that sum to less than or equal to 10000m in linear length. \n\n&#13;
\n\nNot totally sure what to do with the individual stream segments at this point. For now we merge them together into one \nand then buffer them. This results in one shape vs. a bunch, so that buffered segments don't overlap each other. Interestingly\nwe end up with a slightly larger raster when we merge the stream segments together before buffering. Below are the results:\n\n    dimensions : 259, 951, 1 multiple stream segments\n    dimensions : 262, 958, 1 single stream segment  \n\n\n\n::: {.cell}\n\n```{.r .cell-code}\n#here we merge our features\naoi_refined_raw <- layers_trimmed$l_streams |> \n  dplyr::filter(gnis_name == \"Bulkley River\") |> \n  dplyr::arrange(downstream_route_measure) |> \n  # calculate when we get to length_m by adding up the length_metre field and filtering out everything up to length_m\n  dplyr::filter(cumsum(length_metre) <= length_m) |> \n  sf::st_union()\n\naoi_refined <- sf::st_sf(aoi_refined_raw)\n\naoi_refined_buffered <- sf::st_buffer(aoi_refined, buffer, endCapStyle = \"FLAT\") \n\n\nsum(sf::st_area(aoi_refined_buffered))\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\n4310883 [m^2]\n```\n\n\n:::\n\n```{.r .cell-code}\n#7323123 multiple stream segments\n#4310883 single stream segment\n```\n:::\n\n\n## Plot Refined Area\n\nFirst we need to clip the land cover raster to the buffered area. We are not going to use the resampled raster because\nwe want a more detailed view of the land cover classes for this much smaller area. The computational time to render\nthe plot will be fine at the original resolution.\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\nland_cover_sample <- terra::crop(land_cover_raster, aoi_refined_buffered, snap = \"near\", mask = TRUE, extend = TRUE)\n#dimensions : 259, 951, 1 multiple stream segments\n#dimensions : 262, 958, 1 single stream segment\n```\n:::\n\n\n&#13;
\n\nWe lose our color values with the crop. We see that with `has.colors(land_cover_sample)`.\n\n\n::: {.cell}\n\n```{.r .cell-code}\nhas.colors(land_cover_sample)\n```\n\n::: {.cell-output .cell-output-stdout}\n\n```\n[1] FALSE\n```\n\n\n:::\n:::\n\n\n
\n\nLet's add them back in with the `terra::coltab` function.\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\ncoltab(land_cover_sample) <- color_table\n```\n:::\n\n\n
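\n\nA quick re-check should now confirm the table took:\n\n    has.colors(land_cover_sample)\n    #> [1] TRUE\n\n&#13;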
\n\nNow we should be able to plot what we have. Let's re-trim up our extra data layers to the refined area of interest and add those in as well.\n\n::: {.cell}\n\n```{.r .cell-code}\n# clip them with purrr and sf\nlayers_trimmed <- purrr::map(\n  layers_to_trim,\n  ~ sf::st_intersection(.x, aoi_refined_buffered)\n) \n\nxref_cleaned <- xref |> \n  dplyr::filter(code %in% sort(unique(terra::values(land_cover_sample))))\n\nmap <- ggplot2::ggplot() +\n  tidyterra::geom_spatraster(\n    data = as.factor(land_cover_sample),\n    use_coltab = TRUE,\n    maxcell = Inf\n  ) +\n  tidyterra::scale_fill_coltab(\n    data = as.factor(land_cover_sample),\n    name = \"ESA Land Cover\",\n    labels = xref_cleaned$land_cover_class\n  ) +\n  ggplot2::geom_sf(\n    data = layers_trimmed$l_rail,\n    color = \"black\",\n    size = 1\n  ) +\n  ggplot2::geom_sf(\n    data = layers_trimmed$l_streams,\n    color = \"blue\",\n    size = 1\n  ) +\n  ggdark::dark_theme_void()\n\n# save the plot\n# ggsave(here::here('posts', params$post_dir_name, \"image.jpg\"), width = 10, height = 10)\n \nmap\n```\n\n::: {.cell-output-display}\n![](index_files/figure-html/plot2-1.png){width=672}\n:::\n:::\n\n\n# Summary Statistics\n\nNext up is to summarize the land cover classes for different areas to build our understanding of\npotential impacts due to land cover changes. Let's start with the entire Neexdzii Kwah.\n\n\n## Neexdzii Kwah\nNow let's make a table and a simple bar graph to present the results.\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# Calculate the area of each cell (assuming your raster is in lat/lon coordinates)\ncell_area <- terra::cellSize(land_cover_raster_resampled, unit = \"ha\")\n\n# Summarize the raster values\nland_cover_summary_raw <- terra::freq(land_cover_raster_resampled, digits = 0) |> \n  dplyr::mutate(area_ha = round(count * cell_area[1]$area)) |> \n  # make a column that is the percentage of the total area\n  dplyr::mutate(percent_area = round((area_ha / sum(area_ha) * 100), 1))\n\n# now we add the xref code and land_cover_class to the summary\nland_cover_summary <- land_cover_summary_raw |> \n  dplyr::left_join(xref, by = c(\"value\" = \"code\")) |> \n  dplyr::select(land_cover_class, area_ha, percent_area) |> \n  dplyr::arrange(desc(area_ha))\n```\n:::\n\n::: {.cell}\n\n```{.r .cell-code}\nmy_caption <- \"Land Cover Class for the Neexdzii Kwah\"\n\nland_cover_summary |> \n  knitr::kable(caption = my_caption)\n```\n\n::: {.cell-output-display}\n\n\nTable: Land Cover Class for the Neexdzii Kwah\n\n|land_cover_class         | area_ha| percent_area|\n|:------------------------|-------:|------------:|\n|Tree cover               |  194158|         84.3|\n|Grassland                |   28940|         12.6|\n|Permanent water bodies   |    3731|          1.6|\n|Moss and lichen          |    2850|          1.2|\n|Built-up                 |     329|          0.1|\n|Cropland                 |     264|          0.1|\n|Bare / sparse vegetation |     158|          0.1|\n|Herbaceous wetland       |       9|          0.0|\n|Shrubland                |       1|          0.0|\n\n\n:::\n:::\n\n::: {.cell}\n\n```{.r .cell-code}\nplot_title <- \"Land Cover Class for Area of Interest\"\n\n# \nland_cover_summary |> \n  # convert land_cover_class to factor and arrange based on the area_ha\n  dplyr::mutate(land_cover_class = forcats::fct_reorder(land_cover_class, area_ha)) |>\n  ggplot2::ggplot(ggplot2::aes(x = land_cover_class, y = area_ha)) +\n  ggplot2::geom_col() +\n  ggplot2::coord_flip() +\n  ggplot2::labs(title= plot_title,\n       x = \"Land Cover Class\",\n       y = \"Area (ha)\") +\n  ggplot2::theme_minimal() \n```\n\n::: {.cell-output-display}\n![](index_files/figure-html/plot-stats-1.png){width=672}\n:::\n\n```{.r .cell-code}\n    # &#13;
cowplot::theme_minimal_grid()\n```\n:::\n\n\n\n## Refined Area of Interest \nAgain we make a table and a simple bar graph to present the results.\n\n\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# Calculate the area of each cell (assuming your raster is in lat/lon coordinates)\ncell_area <- terra::cellSize(land_cover_sample, unit = \"ha\")\n\n# Summarize the raster values\nland_cover_summary_raw <- terra::freq(land_cover_sample, digits = 0) |> \n  dplyr::mutate(area_ha = round(count * cell_area[1]$area)) |> \n  # make a column that is the percentage of the total area\n  dplyr::mutate(percent_area = round((area_ha / sum(area_ha) * 100), 1))\n\n# now we add the xref code and land_cover_class to the summary\nland_cover_summary <- land_cover_summary_raw |> \n  dplyr::left_join(xref, by = c(\"value\" = \"code\")) |> \n  dplyr::select(land_cover_class, area_ha, percent_area) |> \n  dplyr::arrange(desc(area_ha))\n```\n:::\n\n::: {.cell}\n\n```{.r .cell-code}\nmy_caption <- \"Land Cover Class for Refined Area\"\n\nland_cover_summary |> \n  knitr::kable(caption = my_caption)\n```\n\n::: {.cell-output-display}\n\n\nTable: Land Cover Class for Refined Area\n\n|land_cover_class         | area_ha| percent_area|\n|:------------------------|-------:|------------:|\n|Tree cover               |     283|         66.6|\n|Grassland                |      89|         20.9|\n|Built-up                 |      19|          4.5|\n|Permanent water bodies   |      19|          4.5|\n|Cropland                 |      10|          2.4|\n|Bare / sparse vegetation |       4|          0.9|\n|Moss and lichen          |       1|          0.2|\n\n\n:::\n:::\n\n::: {.cell}\n\n```{.r .cell-code}\nplot_title <- \"Land Cover Class for Refined Area\"\n\n# \nland_cover_summary |> \n  # convert land_cover_class to factor and arrange based on the area_ha\n  dplyr::mutate(land_cover_class = forcats::fct_reorder(land_cover_class, area_ha)) |>\n  ggplot2::ggplot(ggplot2::aes(x = land_cover_class, y = area_ha)) +\n  ggplot2::geom_col() +\n  ggplot2::coord_flip() +\n  ggplot2::labs(title= plot_title,\n       x = \"Land Cover Class\",\n       y = \"Area (ha)\") +\n  ggplot2::theme_minimal() \n```\n\n::: {.cell-output-display}\n![](index_files/figure-html/plot-stats-refined-1.png){width=672}\n:::\n:::\n",
    "supporting": [],
    "filters": [
      "rmarkdown/pagebreak.lua"
diff --git a/_freeze/posts/2024-06-30-land-cover/index/figure-html/plot-stats-1.png b/_freeze/posts/2024-06-30-land-cover/index/figure-html/plot-stats-1.png
new file mode 100644
index 0000000..566fc95
Binary files /dev/null and b/_freeze/posts/2024-06-30-land-cover/index/figure-html/plot-stats-1.png differ
diff --git a/_freeze/posts/2024-06-30-land-cover/index/figure-html/plot-stats-refined-1.png b/_freeze/posts/2024-06-30-land-cover/index/figure-html/plot-stats-refined-1.png
new file mode 100644
index 0000000..7d48a39
Binary files /dev/null and b/_freeze/posts/2024-06-30-land-cover/index/figure-html/plot-stats-refined-1.png differ
diff --git a/_freeze/posts/2024-06-30-land-cover/index/figure-html/plot2-1.png b/_freeze/posts/2024-06-30-land-cover/index/figure-html/plot2-1.png
index fc3496b..500dfb1 100644
Binary files a/_freeze/posts/2024-06-30-land-cover/index/figure-html/plot2-1.png and b/_freeze/posts/2024-06-30-land-cover/index/figure-html/plot2-1.png differ
diff --git a/_freeze/posts/2024-06-30-land-cover/index/figure-html/rasters-crop-combine-1.png b/_freeze/posts/2024-06-30-land-cover/index/figure-html/rasters-crop-combine-1.png
new file mode 100644
index 0000000..2209db6
Binary files /dev/null and b/_freeze/posts/2024-06-30-land-cover/index/figure-html/rasters-crop-combine-1.png differ
diff --git &#13;
a/posts/2024-06-30-land-cover/index_cache/html/dl-layers_5c9c7b85fa6d4227c40a0e70e9e66d54.RData b/posts/2024-06-30-land-cover/index_cache/html/dl-layers_5c9c7b85fa6d4227c40a0e70e9e66d54.RData index 9d291aa..a87bd2e 100644 Binary files a/posts/2024-06-30-land-cover/index_cache/html/dl-layers_5c9c7b85fa6d4227c40a0e70e9e66d54.RData and b/posts/2024-06-30-land-cover/index_cache/html/dl-layers_5c9c7b85fa6d4227c40a0e70e9e66d54.RData differ diff --git a/posts/2024-06-30-land-cover/index_cache/html/dl-layers_5c9c7b85fa6d4227c40a0e70e9e66d54.rdb b/posts/2024-06-30-land-cover/index_cache/html/dl-layers_5c9c7b85fa6d4227c40a0e70e9e66d54.rdb index 7ce2d09..3c46740 100644 Binary files a/posts/2024-06-30-land-cover/index_cache/html/dl-layers_5c9c7b85fa6d4227c40a0e70e9e66d54.rdb and b/posts/2024-06-30-land-cover/index_cache/html/dl-layers_5c9c7b85fa6d4227c40a0e70e9e66d54.rdb differ diff --git a/posts/2024-06-30-land-cover/index_files/figure-html/plot-stats-1.png b/posts/2024-06-30-land-cover/index_files/figure-html/plot-stats-1.png new file mode 100644 index 0000000..566fc95 Binary files /dev/null and b/posts/2024-06-30-land-cover/index_files/figure-html/plot-stats-1.png differ diff --git a/posts/2024-06-30-land-cover/index_files/figure-html/plot-stats-refined-1.png b/posts/2024-06-30-land-cover/index_files/figure-html/plot-stats-refined-1.png new file mode 100644 index 0000000..7d48a39 Binary files /dev/null and b/posts/2024-06-30-land-cover/index_files/figure-html/plot-stats-refined-1.png differ diff --git a/posts/2024-06-30-land-cover/index_files/figure-html/plot2-1.png b/posts/2024-06-30-land-cover/index_files/figure-html/plot2-1.png index fc3496b..500dfb1 100644 Binary files a/posts/2024-06-30-land-cover/index_files/figure-html/plot2-1.png and b/posts/2024-06-30-land-cover/index_files/figure-html/plot2-1.png differ diff --git a/posts/2024-06-30-land-cover/index_files/figure-html/rasters-crop-combine-1.png b/posts/2024-06-30-land-cover/index_files/figure-html/rasters-crop-combine-1.png new file mode 100644 index 0000000..2209db6 Binary files /dev/null and b/posts/2024-06-30-land-cover/index_files/figure-html/rasters-crop-combine-1.png differ