
Tournament helpers — configurability, strictness telemetry, and dynamic thresholds

Max Kaido · Architect · 4 min read

Problem statement

We were asked to:

  • Review the helpers in dike/tournaments/ used to craft validation formulas and scoring, so that both can be defined per tournament via config.
  • Review the latest versions of Momentum Strength Buy and Volatility Breakout Buy.
  • Answer three questions:
    1. Can their custom validation and scoring be implemented via helpers/configs, or is the logic too bespoke to generalize?
    2. For tournaments with too few candidates, which validation parts are overly strict, and how should this be surfaced (dashboard/report) for quick tuning?
    3. Outline a path to a dynamic system that adjusts formulas based on prior scoring to reach a desired candidate count.

Answers at a glance

    1. Short answer: partially yes. Both tournaments can mostly be expressed via helpers/configs if we extend the validation/scoring helpers to support gating logic (anyOf/OR), sequential “state machine” gates, conditional thresholds, and bonuses/penalties. With the current helper primitives (simple required/optional lists), the helpers cannot fully replace the bespoke methods.
    2. Represent strictness via a “validation funnel” + per-gate telemetry: log which gate fails and by how much, aggregate into a daily funnel, show distributions near thresholds, and provide “what‑if” threshold curves. A simple JSON/DB + Grafana or a markdown/CSV report is enough to quickly tune formulas.
    3. Dynamic system: yes—use quantile-based threshold control (or a simple PID-like controller) to hit a target candidate count subject to score-quality guardrails. Start with offline suggestions, then enable adaptive adjustments with smoothing and bounds.

Evidence of custom logic that exceeds today’s helpers

  • Momentum Strength Buy has conditional “rescue” logic that relaxes volume if enough optional confirmations exist:
```ts
if (!isValidLocal && hasOptionalConfirmations) {
  const volBreakLoose = volumeAnalysis.volumeTrend > 0.5;
  isValidLocal = hasTrendOrMomentum && volBreakLoose && hasEmaAlignment;
}
```
  • Volatility Breakout Buy uses a two-step state machine (primed→confirmed) and risk/volume guards:
```ts
const isPrimed = isSqueeze && volumeRatio >= 1.2 && bbWidthPct < 20 && hasMinVolume;
const isConfirmed = isPrimed && priceDistanceFromUpperBb >= 0.005;

const currentPrice = analysis.context.currentPrice;
const slDistance = (currentPrice - slBand.minSl) / currentPrice;
const riskGuard = slDistance <= 0.05;
```

These patterns need expressions, sequential gating, and metric-aware guards—beyond a flat required/optional list.

1) Can helpers replace custom methods?

  • Momentum Strength Buy (MSB)
    • Feasible with extensions: the core gates (trend/momentum AND volume AND EMA) map to a required list. The rescue logic needs conditional thresholds (“if optional confirmations ≥ k, lower the volume threshold”). Scoring already returns a continuous totalScore consumed from .scoring; it can be modeled by a config-driven feature-plus-weight DSL.
  • Volatility Breakout Buy (VBB)
    • Feasible with extensions: requires a 2-step gate (primed then confirmed), plus DI-gap tolerance, volume-USD floor, and SL risk guard. All are expressible if the helper supports sequential gates with named expressions and threshold parameters.

Conclusion: Don’t abandon helpers—extend them. Without extensions, these v3 implementations are too custom. With a small DSL upgrade, both fit.

Recommended helper extensions (a config sketch follows the list):

  • Validation DSL
    • Gates pipeline with steps: allOf/anyOf/not, group weights, and “if A then relax B by Δ”.
    • Named metrics from analysis paths; each gate emits pass/fail, value, threshold.
    • Soft validation mode: allow pass-by-score with warnings below a score floor.
  • Scoring DSL
    • Features = path + normalizer (ramp, sigmoid, min-max) + weight.
    • Bonuses/penalties and caps.
    • Total score formula defined in config; still returned as .scoring.
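
To make the extension concrete, here is a minimal sketch of what such a config could look like, using Volatility Breakout Buy’s gates and thresholds from the snippet above. The type and field names (GateStep, ScoringFeature, relaxations) and the feature weights are illustrative assumptions, not the existing helper API.

```ts
// Hypothetical shapes for a config-driven validation/scoring DSL.
// Type and field names are illustrative, not the current helper API.

type Metric = string; // dot-path into the analysis object, e.g. "volumeAnalysis.volumeTrend"

interface GateStep {
  name: string;
  allOf?: string[];          // named boolean expressions that must all hold
  anyOf?: string[];          // at least one must hold
  requires?: string;         // name of a previous step (sequential "state machine" gating)
  relaxations?: Array<{      // "if A then relax B by Δ"
    when: string;            // e.g. "optionalConfirmations >= 2"
    override: Record<Metric, number>;
  }>;
}

interface ScoringFeature {
  metric: Metric;
  normalizer: "ramp" | "sigmoid" | "minmax";
  weight: number;
}

interface TournamentValidationConfig {
  expressions: Record<string, string>; // named expressions over metrics
  gates: GateStep[];
  scoring: { features: ScoringFeature[]; bonuses?: Record<string, number>; cap?: number };
}

// Volatility Breakout Buy expressed in this hypothetical DSL.
const vbbConfig: TournamentValidationConfig = {
  expressions: {
    squeeze: "isSqueeze",
    volumeOk: "volumeRatio >= 1.2",
    tightBands: "bbWidthPct < 20",
    minVolume: "hasMinVolume",
    breakout: "priceDistanceFromUpperBb >= 0.005",
    riskGuard: "(currentPrice - slBand.minSl) / currentPrice <= 0.05",
  },
  gates: [
    { name: "primed", allOf: ["squeeze", "volumeOk", "tightBands", "minVolume"] },
    { name: "confirmed", requires: "primed", allOf: ["breakout"] },
    { name: "risk", requires: "confirmed", allOf: ["riskGuard"] },
  ],
  scoring: {
    features: [
      { metric: "volumeRatio", normalizer: "ramp", weight: 0.4 },
      { metric: "priceDistanceFromUpperBb", normalizer: "sigmoid", weight: 0.6 },
    ],
  },
};
```

Momentum Strength Buy’s rescue logic would fit the same shape as a relaxations entry on the volume gate, rather than hand-written conditional code.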

2) How to see what’s too strict (fast tuning)

Implement “validation funnel + telemetry”:

  • Instrument each validator to log a trace per market:
    • market, tournament, gateName, pass/fail, value, threshold, distance-to-threshold, and the final totalScore.
  • Aggregate to a daily report with:
    • Funnel counts: total → after each named gate (e.g., squeeze → volumeUSD → trendFailSafe → riskGuard).
    • Per-gate pass rate and median distance-to-threshold; show P5/P50/P95 near thresholds.
    • Top-k failure reasons and their contribution to drop-off.
    • What‑if curves: simulate moving a threshold ±X% and estimate candidates gained/lost using recorded distributions.

Delivery options (a sketch of the trace record and a quick funnel report follows this list):
  • Quick: write JSON/CSV artifacts and generate a markdown report per run.
  • Better: persist to DB table tournament_validation_trace and make a Grafana/Metabase dashboard:
    • Panels: funnel, strictness heatmap by gate, threshold proximity histograms, candidate count over time, quality vs quantity scatter (score vs acceptance).
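
A rough sketch of the per-market, per-gate trace record and the quick markdown funnel report built from it. The record fields follow the telemetry list above; the funnelReport function and the exact report shape are assumptions.

```ts
// Hypothetical trace record; the fields mirror the telemetry list above.
interface GateTrace {
  market: string;
  tournament: string;
  gateName: string;
  passed: boolean;
  value: number;
  threshold: number;
  distanceToThreshold: number; // signed: value - threshold
  totalScore?: number;
}

// Quick delivery option: render the validation funnel as a markdown table,
// counting how many markets survive each gate in order.
function funnelReport(traces: GateTrace[], gateOrder: string[]): string {
  let surviving = new Set(traces.map((t) => t.market));
  const rows = ["| gate | entered | passed | pass rate |", "| --- | --- | --- | --- |"];
  for (const gate of gateOrder) {
    const entered = surviving.size;
    const failedHere = new Set(
      traces.filter((t) => t.gateName === gate && !t.passed).map((t) => t.market)
    );
    surviving = new Set([...surviving].filter((m) => !failedHere.has(m)));
    const rate = entered > 0 ? surviving.size / entered : 0;
    rows.push(`| ${gate} | ${entered} | ${surviving.size} | ${(rate * 100).toFixed(1)}% |`);
  }
  return rows.join("\n");
}
```

The same traces, once persisted to tournament_validation_trace, can feed the Grafana/Metabase panels listed above.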

This lets you instantly see which gate is over-filtering and by how much, so you can edit config instead of code.
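
The what‑if curves can be estimated the same way, from recorded values alone and without re-running validators. A minimal sketch, assuming “value ≥ threshold” pass semantics for the gate being tuned:

```ts
// Estimate candidates gained/lost if a gate's threshold moved, using recorded
// values only (assumes "value >= threshold" pass semantics for that gate).
function whatIfCandidates(
  traces: Array<{ gateName: string; passed: boolean; value: number }>,
  gate: string,
  newThreshold: number
): { current: number; withNewThreshold: number } {
  const forGate = traces.filter((t) => t.gateName === gate);
  return {
    current: forGate.filter((t) => t.passed).length,
    withNewThreshold: forGate.filter((t) => t.value >= newThreshold).length,
  };
}
```

Evaluating this over a grid of candidate thresholds (±X% around the current value) yields the curve of candidates gained or lost.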

3) Toward a dynamic, self-adjusting system

  • Target: desiredCandidates per run or per day.
  • Controller per tournament (a sketch follows this list):
    • For each tunable gate threshold T with monotonic effect, compute the metric’s empirical CDF from recent traces.
    • Set T to the quantile that yields the target acceptance for that stage (e.g., pass rate target after this gate).
    • Apply smoothing (EMA) and bounds; respect quality constraints (e.g., keep totalScore ≥ minScoreQuantile).
    • Re-evaluate periodically (e.g., daily) and write “suggested thresholds” to preview; once stable, enable auto-apply with guardrails.
  • Optional: add a global score floor that auto-adjusts to maintain final candidate count while preserving average quality.
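
A minimal sketch of the quantile-plus-smoothing step for a single “value ≥ threshold” gate. The function name, the EMA form, and the bounds handling are assumptions about how such a controller could be wired.

```ts
// Suggest a new threshold for a "value >= threshold" gate so that roughly
// `targetPassRate` of recent observations would pass, then smooth and clamp it.
function suggestThreshold(
  recentValues: number[],        // metric values recorded in traces for this gate
  targetPassRate: number,        // e.g. 0.3 => let ~30% through this stage
  currentThreshold: number,
  opts: { smoothing: number; min: number; max: number } // smoothing in [0, 1]
): number {
  const sorted = [...recentValues].sort((a, b) => a - b);
  if (sorted.length === 0) return currentThreshold;
  // Passing the top targetPassRate fraction means cutting at the (1 - target) quantile.
  const idx = Math.min(
    sorted.length - 1,
    Math.max(0, Math.floor((1 - targetPassRate) * sorted.length))
  );
  const rawSuggestion = sorted[idx];
  // EMA smoothing toward the suggestion, then clamp to configured bounds.
  const smoothed =
    opts.smoothing * rawSuggestion + (1 - opts.smoothing) * currentThreshold;
  return Math.min(opts.max, Math.max(opts.min, smoothed));
}
```

Run it in suggestion mode first, logging the proposed thresholds next to the live ones, and only enable auto-apply once the suggestions stop oscillating.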

Minimal initial schema additions (example config below):

  • In each tournament config:
    • validation.gates: array of steps with expressions, thresholds, relaxations, and enabled flags.
    • telemetry: enabled, samplingRate.
    • adaptive: enabled, desiredCandidates, smoothing, minScore, bounds per threshold.
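
For example, a per-tournament config block could look like the following. The gate names come from the Volatility Breakout Buy snippet; every concrete value, and any key beyond validation.gates, telemetry, and adaptive, is illustrative.

```ts
// Illustrative per-tournament config block; values are placeholders.
const volatilityBreakoutBuyConfig = {
  validation: {
    gates: [
      { name: "primed", allOf: ["squeeze", "volumeOk", "minVolume"], enabled: true },
      { name: "confirmed", requires: "primed", allOf: ["breakout"], enabled: true },
      { name: "risk", requires: "confirmed", allOf: ["riskGuard"], enabled: true },
    ],
  },
  telemetry: { enabled: true, samplingRate: 1.0 },
  adaptive: {
    enabled: false,                 // start in preview/suggestion mode
    desiredCandidates: 10,
    smoothing: 0.2,
    minScore: 0.6,
    bounds: { volumeRatio: { min: 1.0, max: 2.0 } },
  },
};
```

Keeping adaptive.enabled off by default leaves the controller in suggestion mode, matching the preview-then-auto-apply rollout described above.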

What to do next (low-effort, high-impact)

  • Standardize validation traces in both helpers to emit gate-level telemetry.
  • Add a daily “Validation Funnel Report” job that prints a markdown table and “what‑if” suggestions.
  • Extend helpers to support anyOf and sequential gates; add named thresholds in config.
  • Phase 2: switch MSB/VBB validators to the new DSL; keep equivalence with current logic.
  • Phase 3: add quantile-based adaptive controller producing suggested thresholds; gate behind a flag.

Summary

  • Current helpers can almost cover MSB/VBB if extended with expressions, sequential gates, and conditional thresholds; otherwise bespoke code is justified.
  • Add per-gate telemetry and a funnel report to quickly see which parts over-filter; a simple dashboard or markdown report suffices.
  • With traces in place, introduce a quantile-based controller to adapt thresholds toward a target candidate count, with quality guardrails and smoothing.