Katana vs Built‑in TPSL – Toward a Fair Objective
Why “better” needs a clear objective
When comparing TPSL methods (the built‑in heuristic vs Katana's combinatorial search), a plain allocation‑weighted R:R can reward distant TPs and punish robust SLs. We need an objective that reflects utility under uncertainty, not just price geometry.
Observed asymmetries
- Built‑in can nudge the SL continuously (e.g., to meet R:R ≥ 1.5), while Katana currently samples discrete SL candidates (swing/ATR/heuristic). The continuous nudge can yield a higher R:R for the built‑in method without necessarily overfitting.
- Allocation grid: 50/50 sometimes wins; Katana limits TP1 to 30–50%. If RR2 uplift is modest, shifting to 30/70 can hurt the weighted result.
- Precision/rounding: minor differences in price rounding shift the computed RR between the two methods.
Example (MYRIAUSDT): the built‑in method with SL 0.00093309 (50/50) beats Katana with SL 0.00091865 (30/70). The difference stems from SL granularity and the coarser allocation grid.
A fair, robust objective
Replace naive weighted R:R with risk‑aware utility variants:
- Harmonic mean: penalizes imbalance across TPs.
- Concave utility: utility = Σ alloc_i · sqrt(RR_i).
- Capped RR: utility = Σ alloc_i · min(RR_i, RR_cap) to avoid chasing distant TP2.
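The three variants above can be sketched as follows; this is a minimal illustration, and the function names and the `rr_cap` default are mine, not from Katana's codebase:

```python
import math

def weighted_rr(allocs, rrs):
    """Naive baseline: allocation-weighted R:R."""
    return sum(a * r for a, r in zip(allocs, rrs))

def harmonic_utility(allocs, rrs):
    """Weighted harmonic mean of RR_i; drops sharply when any TP is weak."""
    return 1.0 / sum(a / r for a, r in zip(allocs, rrs))

def concave_utility(allocs, rrs):
    """Sum of alloc_i * sqrt(RR_i); diminishing returns on distant TPs."""
    return sum(a * math.sqrt(r) for a, r in zip(allocs, rrs))

def capped_utility(allocs, rrs, rr_cap=3.0):
    """Sum of alloc_i * min(RR_i, rr_cap); RR beyond the cap adds nothing."""
    return sum(a * min(r, rr_cap) for a, r in zip(allocs, rrs))
```

Note how the capped variant makes a 50/50 split with RR (1.0, 5.0) score the same as one with RR (1.0, 3.0): chasing the distant TP2 past the cap buys nothing.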
Penalties (configurable):
- Risk cost: − α · riskPercent (discourage very wide SLs).
- Distance cost: − β · max(0, tp2DistancePct − k) (penalize far, low‑probability targets).
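Both penalties compose as simple subtractions from any base utility; a sketch, where the defaults for α, β, and k are placeholders to be tuned, not values from the source:

```python
def penalized_utility(base_utility, risk_percent, tp2_distance_pct,
                      alpha=0.1, beta=0.05, k=5.0):
    """Subtract configurable penalties from a base utility score.

    alpha scales the risk cost (discourages very wide SLs);
    beta penalizes TP2 distance beyond k percent (far, low-probability targets).
    The defaults here are illustrative, not tuned values.
    """
    risk_cost = alpha * risk_percent
    distance_cost = beta * max(0.0, tp2_distance_pct - k)
    return base_utility - risk_cost - distance_cost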
Candidate and allocation policy
- SL candidates: swing lows, ATR 1.5×, heuristic; optionally an “RR‑tuned within band” candidate (off by default) to keep parity with the built‑in's continuous nudge.
- TP candidates: ATR 1.5×, ATR 2.5×, H1 VWAP; add D1 swing if available.
- Allocations: TP1 ∈ [30%, 60%] step 5; 2‑ or 3‑level with min 10% on TP3.
- Constraints: a no‑trade zone of ±1% around entry; SL below entry (for long setups) and inside the ATR band; TP levels strictly ordered.
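The allocation grid and constraint checks can be sketched for the long case as below; the ATR band width (3× ATR) and the helper names are assumptions for illustration:

```python
def allocation_grid(step=5, tp1_min=30, tp1_max=60):
    """Enumerate 2-level (TP1/TP2) and 3-level splits in percent.
    TP1 sweeps [tp1_min, tp1_max] at the given step; TP3, when present,
    gets at least 10%, and TP2 keeps at least 10% as well."""
    grids = []
    for tp1 in range(tp1_min, tp1_max + 1, step):
        grids.append((tp1, 100 - tp1))                   # 2-level split
        for tp3 in range(10, 100 - tp1 - 10 + 1, step):  # 3-level, min 10% on TP3
            tp2 = 100 - tp1 - tp3
            if tp2 >= 10:
                grids.append((tp1, tp2, tp3))
    return grids

def valid_combo(entry, sl, tps, atr, no_trade_pct=1.0):
    """Constraint check for a long setup: SL below entry and inside an
    assumed 3x-ATR band; every TP outside the no-trade zone around entry;
    TP levels strictly ascending and above entry."""
    if not (entry - 3 * atr <= sl < entry):
        return False
    if any(abs(tp - entry) / entry * 100 < no_trade_pct for tp in tps):
        return False
    return all(a < b for a, b in zip(tps, tps[1:])) and tps[0] > entry
```

A candidate (SL, TPs, allocation) triple only reaches scoring if `valid_combo` passes, so the utility never sees structurally broken setups.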
Selection and stability
- Stability filter: prefer combos that stay top‑quartile across rolling windows.
- Tie‑breakers: lower riskPercent, then semantic SL (swing > ATR) for explainability.
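The tie‑breaking order can be expressed as a single sort key; the dict shape and the numeric ranking of SL kinds are my assumptions, not Katana's actual data model:

```python
def pick_best(combos):
    """Rank scored combos: higher utility first; ties broken by lower
    riskPercent, then by SL semantics (swing beats ATR beats heuristic,
    for explainability). Each combo is a dict with keys:
    utility, risk_percent, sl_kind."""
    sl_rank = {"swing": 0, "atr": 1, "heuristic": 2}  # assumed ordering
    return min(combos, key=lambda c: (-c["utility"],
                                      c["risk_percent"],
                                      sl_rank.get(c["sl_kind"], 3)))
```

Because the key is lexicographic, the semantic-SL preference only ever decides between combos that are already tied on both utility and risk.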
Implementation plan (incremental)
- Add utility variants to Katana (harmonic, concave, capped).
- Optional RR‑tuned SL candidate (disabled by default) and exact rounding parity.
- Expand allocation sweep to 30–60% for TP1.
- Report columns: builtin_utility, katana_utility_[variant], risk%, flags (rr_tuned_used).
- Run across full universe; compare win‑rates per variant and by regime.
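One possible shape for a report row, using the column names from the plan above; the function signature and field layout are illustrative, not an existing Katana API:

```python
def report_row(symbol, builtin_utility, katana_utilities, risk_pct, rr_tuned_used):
    """Build one comparison row for the report.

    `katana_utilities` maps a variant name ('harmonic', 'concave', 'capped')
    to its utility score, producing one katana_utility_<variant> column each.
    """
    row = {
        "symbol": symbol,
        "builtin_utility": round(builtin_utility, 4),
        "risk%": risk_pct,
        "rr_tuned_used": rr_tuned_used,
    }
    for variant, score in katana_utilities.items():
        row[f"katana_utility_{variant}"] = round(score, 4)
    return row
```

Keeping one column per variant lets win‑rates be compared per variant (and sliced by regime) without re-running the sweep.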
Expected outcome
We trade single‑run optimality for robustness: fewer fragile wins, and more consistent, explainable decisions that respect risk. If the built‑in method still “wins,” it will do so under the same objective; in that case we adopt its behavior into Katana transparently.
