10 Worked Examples

This chapter focuses on reviewable cases, not just runnable scripts.

10.1 Worked Example A — Fraser Sockeye gap-fill and layer review

10.1.1 Scenario

You received updated Sockeye annual inputs and need refreshed CU outputs. The main review question is not merely “did the script run?” but:

do the trend layer, abundance layer, and documented exceptions still make sense?

10.1.2 Run

cd FRSK-WSPDataPrep
Rscript CODE/Prep4_CleanedFlatFile_Fraser_Sk_KDEdits2024.R

10.1.3 Review before release

  1. Inspect DATA_TRACKING/FraserSockeye_MatchingCheck.csv.
  2. Review DATA_TRACKING/Sites not in CUs.csv.
  3. Check CU-specific prep tables in DATA_TRACKING/FraserSockeyePrep/.
  4. Update exception-register.csv for any suppressions, timing overrides, or manual fills used in the run.
  5. Confirm you can explain which stream set supports cu_trend and which supports cu_abundance.
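A minimal sketch of step 5, assuming the site-level prep table carries per-stream inclusion flags. The column names (Stream, UseForTrend, UseForAbd) are assumptions for illustration, not the real schema; the idea is to make the two stream sets explicit rather than inspecting them by eye.

```r
# Hypothetical: list the streams that support one layer but not the other.
# Column names here are assumptions, not confirmed schema.
streams_by_layer <- function(sites,
                             trend_flag = "UseForTrend",
                             abd_flag   = "UseForAbd",
                             name_col   = "Stream") {
  trend <- sites[[name_col]][sites[[trend_flag]]]
  abd   <- sites[[name_col]][sites[[abd_flag]]]
  list(trend_only = setdiff(trend, abd),
       abd_only   = setdiff(abd, trend))
}

# Toy illustration, not real Fraser Sockeye data:
sites <- data.frame(Stream      = c("A", "B", "C"),
                    UseForTrend = c(TRUE, TRUE, FALSE),
                    UseForAbd   = c(TRUE, FALSE, TRUE))
streams_by_layer(sites)
```

The two set differences are exactly what a reviewer should be able to explain: why each stream is in one layer and not the other.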

10.1.4 Minimum QC

cu <- read.csv("DATA_OUT/Cleaned_FlatFile_ByCU_FraserSockeye.csv")
# Exactly one row per CU and year
stopifnot(!any(duplicated(cu[c("CU_ID", "Year")])))
# Both layer totals must be present in the CU output
stopifnot(all(c("SpnForAbd_Total", "SpnForTrend_Total") %in% names(cu)))

10.1.5 What to write in the run notes

  • any hard-coded file-name handling,
  • any CU-specific method family that changed,
  • any reviewer concern raised by the tracking files.

10.2 Worked Example B — Interior Fraser Coho decomposition review

10.2.1 Scenario

You need updated Coho CU and pop outputs. The main review question is:

is the split between the CU layer and the broader pop layer still intentional, and do the natural/hatchery decomposition tables support the final outputs?

10.2.2 Run

cd FRCo-WSPDataPrep
Rscript CODE/Prep5_CleanedFlatFile_Fraser_Coho.R

10.2.3 Review before release

  1. Inspect DATA_PROCESSING/Calc.Nat Table for <CU>.csv for one or more key CUs.
  2. Review DATA_PROCESSING/All IFC CUs BY Table_EC.max=NA_infill=FALSE.csv.
  3. Confirm the pop output is intentionally broader than the CU output.
  4. Record how Final.Estimate.Type == "NO" was handled.
  5. Update the exception register if any naming or estimate-type patches were required.
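For step 4, a small sketch that tallies the rows in question so the handling can be recorded in the run notes. Only the column name Final.Estimate.Type comes from the script; the toy rows below are illustrative.

```r
# Count pop/year rows flagged as having no final estimate.
count_no_estimates <- function(d) {
  sum(d$Final.Estimate.Type == "NO", na.rm = TRUE)
}

# Toy illustration, not real Fraser Coho data:
d <- data.frame(Final.Estimate.Type = c("YES", "NO", "NO", NA))
count_no_estimates(d)   # 2
```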

10.2.4 Minimum QC

cu <- read.csv("DATA_OUT/Cleaned_FlatFile_ByCU_FraserCoho.csv")
# Exactly one row per CU and year
stopifnot(!any(duplicated(cu[c("CU_ID", "Year")])))
# The series should not extend before 1998
stopifnot(min(cu$Year, na.rm = TRUE) >= 1998)

pop <- read.csv("DATA_OUT/Cleaned_FlatFile_ByPop_FraserCoho.csv")
# The pop layer must carry its identifying columns
stopifnot(all(c("Pop_ID", "Pop_Name", "Year") %in% names(pop)))

10.2.5 What to write in the run notes

  • whether the active path used infill,
  • whether any tributary names needed crosswalk repair,
  • whether the broader pop layer was reviewed and accepted explicitly.

10.3 Worked Example C — Lower Fraser Chum composition review

10.3.1 Scenario

You need current Chum CU and pop outputs. The main review question is:

can you explain why the CU totals and pop sums are not the same thing?

10.3.2 Run

cd FRCm-WSPDataPrep
Rscript "CODE/Prep7_Create Fraser Chum2.R"

10.3.3 Review before release

  1. Drop any unnamed first columns before QC.
  2. Confirm the CU layer reflects the intended Harrison treatment.
  3. Confirm the pop layer retains the expected decomposition logic.
  4. Write a plain-language note explaining the expected CU/pop non-equality.
  5. Record any special handling in exception-register.csv if a manual patch was required.
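For step 4, a sketch that quantifies the expected CU/pop non-equality instead of asserting it away: compute both yearly totals and report the gap. The value column name is an assumption for illustration; a real run would point this at the cleaned flat files.

```r
# Report per-year CU totals, pop sums, and their difference.
cu_vs_pop_gap <- function(cu, pop, value_col) {
  cu_tot  <- aggregate(cu[[value_col]],  by = cu[c("Year")],  FUN = sum)
  pop_tot <- aggregate(pop[[value_col]], by = pop[c("Year")], FUN = sum)
  m <- merge(cu_tot, pop_tot, by = "Year", suffixes = c(".cu", ".pop"))
  m$gap <- m$x.cu - m$x.pop
  m
}

# Toy illustration, not real Fraser Chum data:
cu  <- data.frame(Year = c(2020, 2021), Total = c(100, 80))
pop <- data.frame(Year = c(2020, 2020, 2021), Total = c(60, 30, 75))
cu_vs_pop_gap(cu, pop, "Total")
```

A non-zero gap here is reported, not treated as an error; the plain-language note from step 4 should explain why it is expected.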

10.3.4 Minimum QC

# read.csv() renames a blank header to "X" by default, so read with
# check.names = FALSE to preserve the blank name that drop_unnamed() removes.
drop_unnamed <- function(d) d[, names(d) != "", drop = FALSE]

cu <- drop_unnamed(read.csv("DATA_OUT/Cleaned_FlatFile_ByCU_FraserChum.csv",
                            check.names = FALSE))
# Exactly one row per CU and year
stopifnot(!any(duplicated(cu[c("CU_ID", "Year")])))

pop <- drop_unnamed(read.csv("DATA_OUT/Cleaned_FlatFile_ByPop_FraserChum_all.csv",
                             check.names = FALSE))
# The pop layer must carry its identifying columns
stopifnot(all(c("Pop_ID", "Year") %in% names(pop)))

10.3.5 What to write in the run notes

  • how Harrison and Squawkum were handled,
  • why pop sums should not be expected to reproduce CU totals,
  • any year-range truncation relative to the previous release.

10.4 Worked Example D — Fraser Pink historical transition review

10.4.1 Scenario

You need updated Pink outputs. The main review question is:

does the release still preserve the official CU series while carrying the historical context layer correctly?

10.4.2 Run

cd FRPink-WSPDataPrep
Rscript "CODE/Prep6_Create Fraser Pink_2.R"

10.4.3 Review before release

  1. Confirm the CU file still reflects the official CU series.
  2. Inspect the pop/context file around the pre-1993 to post-1993 transition.
  3. Confirm the Fraser aggregate row is present where expected.
  4. Confirm the aggregate row's trend fields remain intentionally NA where that is the method.
  5. Document any omitted or suppressed historical years.
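A sketch covering steps 3 and 4: confirm the aggregate row exists and that its trend fields are intentionally NA. The row label ("Fraser Aggregate"), the name column, and the trend column name are assumptions about the schema, not confirmed names.

```r
# Check that a labelled aggregate row is present and its trend fields are NA.
check_aggregate_row <- function(d, label, trend_cols,
                                name_col = "CU_Name") {
  agg <- d[d[[name_col]] == label, , drop = FALSE]
  list(present  = nrow(agg) > 0,
       trend_na = all(is.na(agg[trend_cols])))
}

# Toy illustration, not real Fraser Pink data:
d <- data.frame(CU_Name = c("CU-1", "Fraser Aggregate"),
                SpnForTrend_Total = c(500, NA))
check_aggregate_row(d, "Fraser Aggregate", "SpnForTrend_Total")
```

If trend_na comes back FALSE, that is a review finding, not necessarily an error: either the method changed or the aggregate row was filled by mistake.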

10.4.4 Minimum QC

# read.csv() renames a blank header to "X" by default, so read with
# check.names = FALSE to preserve the blank name that drop_unnamed() removes.
drop_unnamed <- function(d) d[, names(d) != "", drop = FALSE]

cu <- drop_unnamed(read.csv("DATA_OUT/Cleaned_FlatFile_ByCU_FraserPink.csv",
                            check.names = FALSE))
# Exactly one row per CU and year
stopifnot(!any(duplicated(cu[c("CU_ID", "Year")])))

pop <- drop_unnamed(read.csv("DATA_OUT/Cleaned_FlatFile_ByPop_FraserPink_All_Nuseds.csv",
                             check.names = FALSE))
# The pop/context layer must carry its identifying columns
stopifnot(all(c("Pop_ID", "CU_ID", "Year") %in% names(pop)))

10.4.5 What to write in the run notes

  • how the historical layer was treated,
  • why the Fraser aggregate row has special semantics,
  • any historical-year omissions that reviewers should expect.

10.5 Worked Example E — Final CU/WSP bundle handoff

10.5.1 Scenario

One of the species runs above has passed species-level QC and now needs a stable handoff bundle for WSP metrics, packaging, or a downstream consumer.

10.5.2 Assemble the bundle

  • cu_timeseries.csv
  • wsp_metric_specs.csv
  • optional wsp_cyclic_benchmarks.csv
  • optional cu_metadata.csv
  • run-log.md
  • decision-log.md
  • qc-summary.md
  • metadata_notes.md
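
A minimal completeness sketch for the bundle above. The file names come from the list; required and optional files are checked separately so a missing optional file is reported but not fatal.

```r
# Report which bundle files are missing, split by required vs optional.
check_bundle <- function(required, optional = character(0)) {
  list(missing_required = required[!file.exists(required)],
       missing_optional = optional[!file.exists(optional)])
}

required <- c("cu_timeseries.csv", "wsp_metric_specs.csv",
              "run-log.md", "decision-log.md",
              "qc-summary.md", "metadata_notes.md")
optional <- c("wsp_cyclic_benchmarks.csv", "cu_metadata.csv")
check_bundle(required, optional)
```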

10.5.3 Review before release

  1. Confirm the CU IDs in the time series and metric specs match.
  2. Confirm intentional NA patterns are documented.
  3. Confirm the exception register has been reviewed, not just populated.
  4. Confirm key intermediate QC artifacts are linked from qc-summary.md.
  5. Record any downstream consumer export as a derived file.

10.5.4 Minimum QC

cu <- read.csv("cu_timeseries.csv")
spec <- read.csv("wsp_metric_specs.csv")

# Exactly one row per CU and year
stopifnot(!any(duplicated(cu[c("CU_ID", "Year")])))
# The time series and the metric specs must cover the same CU set
stopifnot(setequal(unique(cu$CU_ID), unique(spec$CU_ID)))
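
When the setequal() check fails, a sketch like this reports the mismatch in both directions instead of stopping at the bare assertion:

```r
# List CU IDs present on only one side of the handoff.
cu_id_mismatch <- function(cu_ids, spec_ids) {
  list(in_cu_only   = setdiff(cu_ids, spec_ids),
       in_spec_only = setdiff(spec_ids, cu_ids))
}

# Toy illustration:
cu_id_mismatch(c("CU-1", "CU-2"), c("CU-2", "CU-3"))
```

In practice the inputs would be unique(cu$CU_ID) and unique(spec$CU_ID) from the QC block.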

10.5.5 What to write in the run notes

  • which downstream tool or consumer this bundle is for,
  • exact repo/version used for any rapid-status run,
  • whether any consumer bridge files were generated after bundle QC.