Risk Assessment Tools: Scoring, Cutoffs, and Validation

Many jurisdictions use structured instruments to estimate risk of re-arrest, failure to appear, or supervision non-compliance. This page explains how those tools are built, how scores influence decisions, and how to read validation and fairness results reported in technical notes.

What We Track

  • Inputs: Static factors (age at first contact, prior petitions) and dynamic factors (attendance, compliance, family stability) gathered at intake or review.
  • Scoring: Items are weighted and summed into a total score; some tools group items into domains (legal history, school, peers) with domain caps.
  • Bands & cutoffs: Scores are mapped to categories such as Low / Moderate / High. Decision guides link bands to recommendations (e.g., release, supervision, detention screening); a scoring sketch follows this list.
  • Outcomes: Common targets include new petitions within 6–12 months, failure to appear, or supervision violations.
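
A minimal sketch of this scoring pattern, in Python, is below. The items, weights, domain caps, and cutoff values are invented for illustration and are not taken from any specific instrument.

```python
# Minimal sketch: weighted items, per-domain caps, and cutoff bands.
# Item names, weights, caps, and cutoffs are illustrative assumptions.

DOMAIN_CAPS = {"legal_history": 6, "school": 4, "peers": 4}

# item -> (domain, weight)
ITEMS = {
    "prior_petitions": ("legal_history", 2),
    "age_at_first_contact_under_13": ("legal_history", 1),
    "poor_attendance": ("school", 2),
    "suspension_past_year": ("school", 2),
    "delinquent_peer_group": ("peers", 2),
}

# Lower-bound cutoffs, checked from the highest band down.
CUTOFFS = [("High", 9), ("Moderate", 5), ("Low", 0)]


def score(responses):
    """Sum weighted items within each domain, apply domain caps,
    then map the capped total to a risk band."""
    domain_totals = {d: 0 for d in DOMAIN_CAPS}
    for item, present in responses.items():
        if present and item in ITEMS:
            domain, weight = ITEMS[item]
            domain_totals[domain] += weight
    # Cap each domain so no single domain dominates the total score.
    total = sum(min(v, DOMAIN_CAPS[d]) for d, v in domain_totals.items())
    band = next(name for name, cutoff in CUTOFFS if total >= cutoff)
    return total, band


print(score({"prior_petitions": True, "poor_attendance": True}))  # (4, 'Low')
```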

Typical Flow

  1. Administer tool at intake or pre-disposition; collect required items.
  2. Compute score (sum or weighted sum) and assign a risk band.
  3. Apply decision rule tied to the band (e.g., offer diversion if Low).
  4. Record outcomes over a fixed follow-up window (a minimal sketch follows this list).
  5. Validate predictive performance and recalibrate if needed.
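
The outcome-recording step (4) can be as simple as flagging whether an event occurred within the follow-up window. A minimal sketch, assuming a 12-month window and illustrative field names:

```python
# Minimal sketch of step 4: flag whether an outcome event (e.g., a new
# petition) occurred within a fixed follow-up window after assessment.
# The 12-month window and variable names are illustrative assumptions.

from datetime import date, timedelta

FOLLOW_UP_DAYS = 365  # use the window the tool was validated on


def outcome_within_window(assessment_date, event_dates):
    """Return True if any event falls inside the follow-up window
    that starts on the assessment date."""
    window_end = assessment_date + timedelta(days=FOLLOW_UP_DAYS)
    return any(assessment_date <= d <= window_end for d in event_dates)


print(outcome_within_window(date(2023, 1, 15), [date(2023, 9, 1)]))  # True
print(outcome_within_window(date(2023, 1, 15), [date(2024, 6, 1)]))  # False
```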

Validation & Performance

  • Discrimination (AUC/ROC): How well the score ranks higher-risk above lower-risk cases. Values near 0.5 indicate no better than chance; higher values are better, though what counts as acceptable depends on the population and the decision at hand.
  • Calibration: Whether predicted risk aligns with observed rates within score bands. A tool can rank well but still misestimate absolute risk if calibration drifts. Both checks are sketched after this list.
  • Stability over time: Periodic revalidation checks whether performance changes as policy, practice, or populations shift.
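
A minimal sketch of both checks, assuming per-case scores, assigned bands, and observed binary outcomes are available. The data below are fabricated, and scikit-learn is assumed for the AUC calculation:

```python
# Discrimination (AUC) and a simple calibration-by-band table.
# Scores, bands, and outcomes below are fabricated for illustration.

from collections import defaultdict
from sklearn.metrics import roc_auc_score  # assumes scikit-learn is installed

scores   = [2, 4, 5, 7, 9, 10, 3, 8]
bands    = ["Low", "Low", "Moderate", "Moderate", "High", "High", "Low", "Moderate"]
outcomes = [0, 0, 0, 1, 1, 1, 0, 0]  # 1 = event observed in follow-up window

# Discrimination: how well scores rank cases with the outcome above those without.
print("AUC:", round(roc_auc_score(outcomes, scores), 3))

# Calibration by band: observed event rate per band, to compare against the
# risk the tool's documentation implies for that band.
by_band = defaultdict(list)
for band, y in zip(bands, outcomes):
    by_band[band].append(y)
for band, ys in by_band.items():
    print(band, "observed rate:", round(sum(ys) / len(ys), 2), f"(n={len(ys)})")
```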

Fairness & Drift Checks

  • Group comparisons: Examine error rates (false positives/negatives) and calibration by race/ethnicity, gender, and locality where legally and ethically appropriate; a group error-rate sketch follows this list.
  • Benchmarking: Compare tool recommendations to actual decisions to detect selective overrides concentrated in certain groups.
  • Definition transparency: Clearly state the outcome used to train/validate (e.g., arrests vs. adjudications); different choices can embed systemic differences.
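
A minimal sketch of a group comparison, assuming a binary "flagged as high risk" prediction and an observed binary outcome per case; group labels and records are illustrative only:

```python
# Compare false positive and false negative rates across groups.
# Records are fabricated; real analyses need adequate sample sizes
# and appropriate legal and ethical review.

from collections import defaultdict

records = [
    # (group, flagged_high_risk, outcome_occurred)
    ("Group A", 1, 1), ("Group A", 1, 0), ("Group A", 0, 0), ("Group A", 0, 1),
    ("Group B", 1, 0), ("Group B", 1, 0), ("Group B", 0, 0), ("Group B", 1, 1),
]

counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
for group, flagged, outcome in records:
    c = counts[group]
    if outcome == 1:
        c["pos"] += 1
        c["fn"] += int(flagged == 0)  # missed a case that had the outcome
    else:
        c["neg"] += 1
        c["fp"] += int(flagged == 1)  # flagged a case with no outcome

for group, c in counts.items():
    fpr = c["fp"] / c["neg"] if c["neg"] else float("nan")
    fnr = c["fn"] / c["pos"] if c["pos"] else float("nan")
    print(f"{group}: false positive rate={fpr:.2f}, false negative rate={fnr:.2f}")
```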

Data & Methods

The research source notes that instruments vary widely in items, weights, and outcomes. Read technical appendices for item lists, scoring tables, validation samples, and follow-up windows. When publishing charts, label which version of the tool is in use, the date of last validation, and any recalibration or cutoff changes that create series breaks.
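
One way to keep those labels attached to a published chart is a small metadata block alongside the data. The field names below are an assumption, not a required schema:

```python
# Illustrative chart metadata; field names and values are hypothetical.
chart_metadata = {
    "tool_version": "v2.1",                 # which version of the tool is in use
    "last_validation": "2024-06-30",        # date of last validation
    "outcome_definition": "new petition within 12 months",
    "series_breaks": [
        {"date": "2023-01-01", "reason": "Moderate band cutoff changed"},
    ],
}
```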

Transparency note: Always publish the tool version, cutoff table, validation date, and the outcome used. Recalibration and policy overrides should be annotated to prevent misinterpretation.