# Release Review Guide

## Guide Maps
```mermaid
graph TD
    publish["publish/v1/"]
    manifest["manifest.json"]
    params["params.yaml"]
    metrics["metrics.json"]
    report["report.md"]
    decision["downstream trust decision"]
    publish --> manifest
    publish --> params
    publish --> metrics
    publish --> report
    manifest --> decision
    params --> decision
    metrics --> decision
    report --> decision
```
```mermaid
flowchart LR
    candidate["Candidate release bundle"] --> inventory["Check inventory and hashes"]
    inventory --> meaning["Check control surface and metrics meaning"]
    meaning --> review["Read the human review report"]
    review --> decide["Approve, reject, or ask for more evidence"]
```
Use this guide when the question is no longer "did the pipeline run?" and is now "what exactly can a downstream reviewer trust?"
## Release review order
1. manifest.json for inventory and integrity
2. data-profile.json for the promoted population story
3. params.yaml for the promoted control surface
4. model.json for the promoted scoring behavior
5. metrics.json for the promoted quantitative story
6. report.md for the human-readable summary
7. predictions.csv for deeper inspection
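The first step, checking inventory and integrity against manifest.json, can be sketched as below. The manifest schema here (a top-level "files" list of path/sha256 pairs) is an assumption for illustration; adapt the field names to your actual manifest.

```python
import hashlib
import json
import pathlib
import tempfile

def verify_bundle(bundle_dir):
    """Return paths whose on-disk sha256 disagrees with manifest.json."""
    manifest = json.loads((bundle_dir / "manifest.json").read_text())
    mismatched = []
    # Assumed manifest shape: {"files": [{"path": ..., "sha256": ...}, ...]}
    for entry in manifest["files"]:
        digest = hashlib.sha256((bundle_dir / entry["path"]).read_bytes()).hexdigest()
        if digest != entry["sha256"]:
            mismatched.append(entry["path"])
    return mismatched

# Synthetic bundle so the sketch runs anywhere.
bundle = pathlib.Path(tempfile.mkdtemp())
(bundle / "metrics.json").write_text('{"auc": 0.91}')
good_digest = hashlib.sha256((bundle / "metrics.json").read_bytes()).hexdigest()
(bundle / "manifest.json").write_text(
    json.dumps({"files": [{"path": "metrics.json", "sha256": good_digest}]})
)
mismatches = verify_bundle(bundle)  # empty when the bundle is intact
```

A non-empty result means the bundle was modified after publication and should not be trusted until re-published.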
## What this route proves
- the promoted boundary is explicit and auditable
- the downstream control surface is small and reviewable
- the release bundle can be defended without forcing a reviewer through the entire repository
Read PREDICTION_REVIEW_GUIDE.md when the release question depends on concrete false negatives, false positives, or borderline promoted rows.
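One way to confirm the control surface really is small and reviewable is to diff the promoted params.yaml against an expected key set. This is a minimal sketch: the flat key/value reader is a stand-in for a real YAML parser such as PyYAML, and the expected keys are hypothetical.

```python
def load_flat_params(text):
    """Minimal reader for a small, flat params.yaml (one 'key: value' per line).
    A real review would use a full YAML parser; this is illustration only."""
    params = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line or ":" not in line:
            continue
        key, _, value = line.partition(":")
        params[key.strip()] = value.strip()
    return params

# Hypothetical promoted control surface.
EXPECTED_KEYS = {"threshold", "model_seed", "train_rows"}
params = load_flat_params("threshold: 0.35\nmodel_seed: 7\ntrain_rows: 50000\n")
unexpected = set(params) - EXPECTED_KEYS  # anything here widens the surface
```

If `unexpected` is non-empty, the release has grown a knob the reviewer never agreed to audit, which is itself a finding.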
## What this route does not prove
- that every internal experiment has been reviewed
- that the local cache is durable
- that the publish bundle replaces dvc.lock for internal provenance questions
Read CONTROL_SURFACE_GUIDE.md when a release question is really a comparability question about params, thresholding, or metric movement.
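When the question is metric movement between two promoted releases, a small delta table keeps the comparison honest. The metric names below are hypothetical examples; the sketch only assumes each release ships a flat metrics.json of numeric values.

```python
import json

def metric_deltas(old, new):
    """Movement for every metric present in both releases."""
    shared = old.keys() & new.keys()
    return {k: new[k] - old[k] for k in shared}

# Two hypothetical promoted metrics.json payloads.
old_metrics = json.loads('{"auc": 0.90, "recall": 0.72}')
new_metrics = json.loads('{"auc": 0.91, "recall": 0.70}')
deltas = metric_deltas(old_metrics, new_metrics)
```

A delta table like this only answers "how much did it move"; whether the movement is comparable at all (same threshold, same population) is the control-surface question the guide above points to.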