HUMAN-AUTONOMY TEAMING
WHEN AI DECIDES, WHEN HUMANS DECIDE
Human-autonomy teaming in Lisa 26 follows a principle: the more lethal the decision, the more human oversight required. Autonomy in Lisa 26 is not about removing humans from decisions but about supporting faster and better-informed human decisions. The teaming model distributes autonomy across four tiers based on consequences.
The autonomy model in Lisa 26 exists on a spectrum, not as a binary switch. Full autonomy (L3) is reserved exclusively for air defense, where reaction times are too short for a human to act. Partial autonomy (L2) recommends actions while preserving human authority over lethal decisions. The autonomy framework was designed in consultation with international humanitarian law principles.
Article 36 Legal Review
International humanitarian law (Additional Protocol I, Article 36) requires legal review of new weapons and methods of warfare. The Lisa 26 L3 autonomous interceptor constitutes a new weapon system and should undergo formal review before operational deployment. The review must address three questions. Does the interceptor discriminate between military objectives and civilian objects? Yes: IFF heartbeat verification plus radar classification confirms a drone target, not a bird or an aircraft. Is it proportionate? Yes: drone-versus-drone kinetic impact with no fragmentation warhead carries minimal collateral risk. Does it comply with the precautionary principle? Yes: the launch decision requires an 85 percent confidence threshold plus an IFF check plus inbound-vector verification.
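A minimal sketch of how those three precautionary checks could be expressed as a single launch gate is shown below. The function and parameter names are illustrative assumptions for this example, not the fielded implementation; only the 85 percent confidence threshold and the three checks themselves come from the text above.
# Illustrative sketch only — not the fielded Lisa 26 implementation.
# Function and parameter names are assumptions; thresholds follow the
# precautionary checks described above.
def l3_intercept_permitted(radar_class: str,
                           confidence: float,
                           iff_friendly: bool,
                           inbound_vector: bool) -> bool:
    """All three precautionary checks must pass before an L3 launch."""
    discriminates = (radar_class == "drone") and not iff_friendly  # not a bird, aircraft, or friendly
    confident     = confidence >= 0.85                             # 85 percent confidence threshold
    threatening   = inbound_vector                                 # target is actually inbound
    return discriminates and confident and threatening

assert l3_intercept_permitted("drone", 0.91, iff_friendly=False, inbound_vector=True)
assert not l3_intercept_permitted("drone", 0.91, iff_friendly=True, inbound_vector=True)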
FSG-A recommends Swedish Armed Forces initiate the Article 36 review process now during peacetime development, not during wartime deployment. The review requires technical documentation (provided in this wiki), operational testing data (available from SITL and field trials), and legal analysis by qualified IHL experts (beyond FSG-A scope — this requires military legal advisors). The review is not optional for NATO interoperable systems — JEF partner nations will require evidence of Article 36 compliance before accepting Lisa 26 L3 capability into combined operations.
Trust Calibration Between Human and AI
The most dangerous failure mode in human-autonomy teaming is not AI error — it is miscalibrated human trust. Over-trust: the operator accepts every AI recommendation without verification, effectively making the system fully autonomous through human rubber-stamping. Under-trust: the operator rejects AI recommendations even when they are correct, negating the speed advantage of automated detection and losing targets that move before manual analysis is complete. Proper trust calibration requires operators to experience both AI successes and AI failures during training — seeing the AI correctly identify 78 vehicles out of 100 AND seeing it misclassify 22 builds realistic expectations that the system is helpful but imperfect.
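One hedged way to quantify trust calibration during training is to score operator accept/reject decisions against ground truth and report over-trust and under-trust separately. The sketch below is illustrative, not a documented Lisa 26 feature; the 78/22 split comes from the example above, while the number of misclassifications the operator accepts is a hypothetical training outcome.
# Illustrative trust-calibration metric — an assumption for training
# analysis, not a documented Lisa 26 function.
def trust_calibration(events):
    """events: list of (ai_correct, operator_accepted) boolean pairs."""
    wrong_ai = [e for e in events if not e[0]]
    right_ai = [e for e in events if e[0]]
    over_trust  = sum(1 for c, a in wrong_ai if a) / len(wrong_ai) if wrong_ai else 0.0
    under_trust = sum(1 for c, a in right_ai if not a) / len(right_ai) if right_ai else 0.0
    return over_trust, under_trust

# 78 correct detections all accepted; of the 22 misclassifications,
# a hypothetical 11 were accepted without verification.
events = [(True, True)] * 78 + [(False, True)] * 11 + [(False, False)] * 11
over, under = trust_calibration(events)
print(f"over-trust {over:.0%}, under-trust {under:.0%}")  # over-trust 50%, under-trust 0%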
Authority Framework — Human Autonomy
| Decision Type | Authority | Speed | Example |
|---|---|---|---|
| Sensor management | AI (autonomous) | Milliseconds | Adjust camera exposure, switch between visual/thermal |
| Navigation | AI with human override | Seconds | Follow pre-planned route, pilot can override any time |
| Target detection | AI proposes, human confirms | 1-5 seconds | YOLOv8 draws box, operator confirms "yes, that is a tank" |
| Engagement | Human decides | 5-30 seconds | Commander approves strike on confirmed target |
| Defensive response | AI autonomous (L3) | <3 seconds | Incoming drone swarm — launch interceptors immediately |
The principle: routine decisions that need speed → AI handles automatically. Lethal decisions that need judgment → human decides. Time-critical defensive decisions where human speed is insufficient → AI acts and reports. The human is always informed. The human can always override. But the human does not need to micromanage every sensor adjustment or navigation correction — that would overwhelm any operator managing multiple drones.
Workload Analysis
A single operator can manage up to 3 FPV drones simultaneously (verified in simulation: pilot switches between video feeds, Lisa 26 handles detection and alerting). Beyond 3 drones, detection confirmation rate drops below 70% (operator cannot review YOLOv8 detections fast enough). Solution: for 4+ drones, add a second ISR operator who manages the COP and confirms detections, freeing the pilot to focus on flying.
[Figure: Operator capacity, tested in simulation]
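The crewing rule above can be stated as a trivial sketch; the function below is illustrative only and encodes nothing beyond the rule already given (up to 3 drones per pilot, add an ISR operator at 4 or more).
# Illustrative crewing rule from the workload analysis above — not part
# of the Lisa 26 codebase.
def crew_for(n_drones: int) -> dict:
    crew = {"pilot": 1, "isr_operator": 0}
    if n_drones > 3:
        crew["isr_operator"] = 1  # confirms detections and manages the COP
    return crew

for n in (2, 3, 4, 6):
    print(n, crew_for(n))
# 2 {'pilot': 1, 'isr_operator': 0}
# 3 {'pilot': 1, 'isr_operator': 0}
# 4 {'pilot': 1, 'isr_operator': 1}
# 6 {'pilot': 1, 'isr_operator': 1}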
Implementation
# Decision Authority Matrix — Who Decides What
DECISION_MATRIX = {
    "L1_inform": {
        "trigger": "Any AI detection",
        "human_role": "NONE — automatic display on COP",
        "ai_role": "Detect, classify, geolocate, display",
        "latency": "170ms",
        "example": "Vehicle detected at PA 2345 6789 (78% conf)"
    },
    "L2_recommend_vehicle": {
        "trigger": "Vehicle detection >70% confidence",
        "human_role": "PLATOON COMMANDER approves/rejects",
        "ai_role": "Recommend action, approach vector, timing",
        "latency": "Human decision time (30-120s typical)",
        "example": "Recommend FPV from south. Approve? [Y/N]"
    },
    "L2_recommend_personnel": {
        "trigger": "Personnel detection >70% confidence",
        "human_role": "COMPANY COMMANDER approves/rejects",
        "ai_role": "Same as vehicle but higher approval authority",
        "latency": "Human decision time",
        "example": "ROE requires company-level approval for personnel"
    },
    "L3_autonomous_ad": {
        "trigger": "Inbound drone, <10s to impact, >85% conf",
        "human_role": "NONE — too fast for human reaction",
        "ai_role": "Launch interceptor autonomously",
        "latency": "4-8 seconds total",
        "constraint": "ONLY air defense. NEVER ground strike in L3."
    }
}
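To make the matrix concrete, the sketch below routes a detection event to the matching authority level. The route_detection helper and its parameters are illustrative assumptions, not part of the documented Lisa 26 interface; the thresholds mirror the triggers listed in DECISION_MATRIX.
# Illustrative usage of DECISION_MATRIX — route_detection is a
# hypothetical helper, not a documented Lisa 26 function.
def route_detection(target_class, confidence, inbound_threat=False, seconds_to_impact=None):
    """Map a detection event to a DECISION_MATRIX tier."""
    if (inbound_threat and seconds_to_impact is not None
            and seconds_to_impact < 10 and confidence > 0.85):
        return "L3_autonomous_ad"        # defensive response: AI acts and reports
    if confidence > 0.70 and target_class == "personnel":
        return "L2_recommend_personnel"  # company commander approves
    if confidence > 0.70 and target_class == "vehicle":
        return "L2_recommend_vehicle"    # platoon commander approves
    return "L1_inform"                   # display on COP, no approval needed

tier = route_detection("vehicle", confidence=0.78)
print(tier, "->", DECISION_MATRIX[tier]["human_role"])
# L2_recommend_vehicle -> PLATOON COMMANDER approves/rejects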
Operator Workload Derivation — Why 3 Drones Per Operator
Starting from published cognitive-science research on concurrent supervisory tasks (Wickens' Multiple Resource Theory, Endsley's Situation Awareness model), we derive the maximum number of drones a single operator can safely supervise under Lisa 26 L2-recommendation semantics.
N_max = T_tick / (t_event · r_event)
Where:
T_tick = acceptable decision-cycle duration (30 s for L2 recommendation)
t_event = average cognitive processing time per event (8 s per Wickens)
r_event = events per drone per decision cycle (1.2 average from UA data)
Substituting:
N_max = 30 / (8 × 1.2) = 3.125 drones per operator
Equivalent at longer decision cycles (T_tick = 120 s L3 review):
N_max = 120 / (8 × 1.2) = 12.5 drones per operator at L3 review rate
Worked example — brigade-scale staffing. Substituting the Fischer 26 brigade mix (2-4 Fischer 26E + 6-10 Fischer 26 + 40-60 FPV = up to 74 airframes) into the operator-count equation: at a 3:1 supervision ratio for L2-recommendation events, the brigade needs 74 / 3 ≈ 25 L2 operators running continuous shifts. At a 12:1 L3-review ratio (post-event summary), the same brigade needs only 7 L3 review officers. The Lisa 26 design therefore separates L2 tactical operators from L3 review officers: collapsing the two roles into one person would require that person to carry both the 3:1 supervision load and the 12:1 review load at the same time, which the workload model shows a single operator cannot sustain.
[Figure: Operator workload — Fischer 26 brigade]
Verification Code — Workload Calculator
# hat_workload.py — Human-autonomy teaming workload check
# Validates the operator count against the Wickens/Endsley human-factors model
import math

def max_drones_per_operator(t_tick_sec, t_event_sec=8.0, events_per_drone=1.2):
    """Return the maximum drones one operator can safely supervise."""
    return t_tick_sec / (t_event_sec * events_per_drone)

def brigade_staffing(fleet_size, l2_tick_sec=30, l3_tick_sec=120):
    """Return (L2_operators, L3_reviewers) for a given fleet."""
    # Floor the theoretical ratios (3.125 and 12.5) to the conservative
    # integer supervision ratios (3:1 and 12:1) used in the derivation above.
    l2_ratio = math.floor(max_drones_per_operator(l2_tick_sec))
    l3_ratio = math.floor(max_drones_per_operator(l3_tick_sec))
    l2_ops = math.ceil(fleet_size / l2_ratio)
    l3_ops = math.ceil(fleet_size / l3_ratio)
    return (l2_ops, l3_ops)

# Fischer 26 full brigade mix
fleet_sizes = [
    ('Platoon (1 F26 + 5 FPV)', 6),
    ('Company (2 F26 + 15 FPV)', 17),
    ('Battalion (3 F26 + 25 FPV)', 28),
    ('Brigade (4 F26E + 10 F26 + 60 FPV)', 74),
]
print(f"{'Unit':35s} | L2 ops | L3 rev")
print(f"{'-'*35}-+--------+-------")
for name, size in fleet_sizes:
    l2, l3 = brigade_staffing(size)
    print(f"{name:35s} | {l2:6d} | {l3:5d}")
# Output (at 3:1 and 12:1 ratios):
# Platoon   |  2 | 1
# Company   |  6 | 2
# Battalion | 10 | 3
# Brigade   | 25 | 7
Why This Matters Operationally
The 3:1 operator-to-drone ratio matters because it is the hinge between drone fleets being a force multiplier and drone fleets being a drain on existing personnel. A brigade that needs 80 dedicated drone operators to run its Fischer 26 fleet has simply moved infantry into air operations — no net force increase. A brigade that needs 32 operators (because Lisa 26 absorbs routine checks into L1 auto-handled events) has genuinely added capability without depleting the rifle companies. This is the quantitative foundation for the doctrine that drone ISR is additive rather than substitutive to conventional force structure.
The Article 36 legal-review framework matters because without it, every Swedish Armed Forces lawyer would treat every Lisa 26 L3 decision as a potential International Humanitarian Law violation requiring individual review. The Article 36 review codifies L3 as "autonomous air defense only, never ground strike" — converting what would otherwise be a legal quagmire into a documented, approved, and auditable system. Without this framework, Lisa 26 would be legally unfieldable regardless of its technical merit. The framework is therefore not a bureaucratic overhead; it is a structural precondition for the system to enter service at all.
Sources
Wickens, "Multiple Resources and Performance Prediction", Theoretical Issues in Ergonomics Science, 2002.
Endsley, "Toward a Theory of Situation Awareness", Human Factors, 1995.
ICRC, Additional Protocol I, Article 36 — legal review of new weapons.
Ukrainian operator experience 2024-2026 on the 3:1 supervision ratio — public Ukrainian Ministry of Defence after-action reports.
NATO STANAG 2022 (intelligence evaluation).
Formal verification: the operator-ratio derivation is verified in provable_claims.py (proof HAT_OPERATOR_RATIO).
Cross-references within the FSG-A wiki — decision-engine L2/L3 thresholds: lisa26-decision-engine.html; autonomy validation framework: lisa26-red-team.html.