Fjärrstridsgrupp Alfa
EDITION 2026-Q2
UNCLASSIFIED
FSG-A // CLUSTER 2 — AUTONOMOUS // 2.10

VERIFICATION AND TESTING
ENSURING SYSTEM RELIABILITY

KEY TAKEAWAY
Every Lisa 26 component is tested at three levels: unit test (does this one function work?), integration test (do components work together?), and field acceptance test (does the complete system work in real conditions?). Testing is not optional — an untested system deployed in combat gets soldiers killed.

Test Levels — Verification Testing

Level       | What                                                                    | How                                           | When
Unit        | Individual functions (YOLOv8 detection, MGRS conversion, fusion logic) | Automated Python tests (pytest)               | Every code change
Integration | Component interaction (drone → Lisa 26 → COP)                           | SITL simulation with 4 simulated drones       | Before each deployment
Field       | Complete system in real environment                                     | Real drone flight with checklist verification | Before operational use

Field Acceptance Test Checklist

01
LINK TEST
Fly drone to maximum planned range. Verify MANET mesh link quality (RSSI > -100 dBm). Verify Lisa 26 receives telemetry and detection packets. Test link loss behavior (power off transmitter — drone should loiter then RTL).
02
DETECTION TEST
Point camera at known target (vehicle parked in field). Verify YOLOv8 detects and classifies correctly. Verify detection appears on Lisa 26 COP within 500ms. Verify NATO symbol is correct.
03
GPS-DENIED TEST
Disable GPS (SIM_GPS_DISABLE=1 in SITL, or physically disconnect GPS antenna in field). Verify EKF3 transitions to AHRS. Verify barometric altitude hold works. Verify drone remains controllable by pilot. Verify Lisa 26 handles position uncertainty gracefully.
04
ENDURANCE TEST
Fly until battery reaches RTL threshold. Record actual flight time. Compare to expected endurance. If actual is <80% of expected, investigate (cold battery, excessive wind, motor inefficiency). A scripted pass/fail check for the numeric thresholds in this checklist follows below.
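
Where the checklist sets numeric thresholds (link RSSI, endurance ratio), the pass/fail arithmetic can be scripted so every crew records results the same way. The sketch below is a hypothetical helper: the thresholds come from checklist items 01 and 04 above, while the file name, FieldTestResult fields, and function names are illustrative and not part of the released Lisa 26 test suite.

# field_checks.py -- hypothetical helpers mirroring the field acceptance checklist.
# Thresholds are taken from the checklist above; everything else is illustrative.
from dataclasses import dataclass

RSSI_FLOOR_DBM = -100.0        # link test: mesh RSSI must stay above -100 dBm
ENDURANCE_MIN_RATIO = 0.80     # endurance test: actual flight time >= 80% of expected

@dataclass
class FieldTestResult:
    min_rssi_dbm: float        # worst RSSI observed at maximum planned range
    actual_flight_min: float   # measured flight time until RTL threshold
    expected_flight_min: float # planned endurance for this airframe/battery

def link_test_passes(result: FieldTestResult) -> bool:
    """Checklist item 01: mesh link quality at maximum planned range."""
    return result.min_rssi_dbm > RSSI_FLOOR_DBM

def endurance_test_passes(result: FieldTestResult) -> bool:
    """Checklist item 04: actual endurance vs. expected endurance."""
    return result.actual_flight_min >= ENDURANCE_MIN_RATIO * result.expected_flight_min

if __name__ == "__main__":
    r = FieldTestResult(min_rssi_dbm=-94.0, actual_flight_min=31.0, expected_flight_min=36.0)
    print("link test:", "PASS" if link_test_passes(r) else "FAIL")
    print("endurance test:", "PASS" if endurance_test_passes(r) else "FAIL")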

Automated Test Suite

# Run full Lisa 26 test suite
# Requirements: pip install pytest numpy

# Unit tests
pytest tests/test_fresnel.py       # 12 tests — Fresnel zone math
pytest tests/test_projection.py    # 8 tests — pixel-to-ground
pytest tests/test_solar.py         # 6 tests — solar position
pytest tests/test_fusion.py        # 15 tests — data fusion logic
pytest tests/test_mgrs.py          # 10 tests — coordinate conversion

# Integration test (requires ArduPilot SITL)
python3 tests/integration_test.py  # 30 min, 4 simulated drones

# Expected output:
# 51 passed, 0 failed (unit)
# Integration: all checkpoints PASS

Every mathematical function in Lisa 26 has a corresponding unit test with known input→output pairs. The integration test launches 4 SITL drones, runs a 10-minute scenario, and verifies that detections appear on the COP within 500ms, fusion correctly de-duplicates, and L1/L2 decisions are generated at correct thresholds. Run before every deployment.
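
As a concrete illustration of the known input->output style, the sketch below pairs a self-contained first Fresnel zone helper with a pytest case. The actual tests/test_fresnel.py is not reproduced here; the function name and expected value are illustrative, derived from the standard first Fresnel zone formula r = sqrt(lambda * d1 * d2 / (d1 + d2)).

# test_fresnel_example.py -- illustrative unit test in the known input->output style.
# The helper and expected value follow the standard first Fresnel zone formula;
# the shipped tests/test_fresnel.py may differ.
import math
import pytest

def first_fresnel_radius_m(freq_hz: float, d1_m: float, d2_m: float) -> float:
    """Radius of the first Fresnel zone at a point d1/d2 metres from each link end."""
    wavelength = 299_792_458.0 / freq_hz
    return math.sqrt(wavelength * d1_m * d2_m / (d1_m + d2_m))

def test_first_fresnel_radius_midpoint_2km_2g4():
    # 2.4 GHz link, midpoint of a 2 km path: radius should be about 7.9 m.
    assert first_fresnel_radius_m(2.4e9, 1000.0, 1000.0) == pytest.approx(7.90, abs=0.05)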


Acceptance Criteria

SYSTEM ACCEPTANCE — PASS/FAIL CRITERIA

Detection latency: <500 ms sensor-to-COP (SITL target: 352 ms typical)
False positive rate: <10% (reference: ~7.2% on published Nordic terrain benchmark)
Fusion accuracy: de-duplicates 2 drones seeing the same target within 50 m (SITL verified)
Link reliability: >95% uptime over 1-hour mission (SITL: 97.3%)
GPS-denied nav: position drift <200 m over 10 min flight (SITL: ~140 m)
Baro altitude: ±1 m accuracy relative to launch (SITL: ±0.4 m)
L1 alert delivery: <1 s from detection to all operators (SITL: 0.8 s)
RTL on link loss: returns within 200 m of launch (SITL: 180 m max deviation)

All target values derived from SITL simulation with 10+ runs per metric. No field measurements exist — FSG-A has no physical prototype. Field measurements will differ due to real-world RF conditions, temperature, and wind. The implementing agency must re-run acceptance tests at each new operating location before operational use and treat SITL values as planning targets rather than validated performance.
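
A run's measured metrics can be scored against these criteria mechanically. The sketch below is a minimal, hypothetical evaluator: the thresholds mirror the table above, but the metric names, dictionary structure, and file name are assumptions rather than the FSG-A implementation.

# acceptance_check.py -- hypothetical evaluation of the pass/fail criteria above.
# Thresholds come from the table; metric names and structure are illustrative.

# (threshold, direction): "max" means the measured value must stay below the
# threshold, "min" means it must stay above it.
CRITERIA = {
    "detection_latency_ms":  (500.0, "max"),
    "false_positive_rate":   (0.10,  "max"),
    "link_uptime":           (0.95,  "min"),
    "gps_denied_drift_m":    (200.0, "max"),
    "baro_altitude_error_m": (1.0,   "max"),
    "l1_alert_delivery_s":   (1.0,   "max"),
    "rtl_deviation_m":       (200.0, "max"),
}

def evaluate(measured: dict[str, float]) -> dict[str, bool]:
    """Return PASS/FAIL per metric; a missing measurement counts as a failure."""
    results = {}
    for name, (threshold, direction) in CRITERIA.items():
        value = measured.get(name)
        if value is None:
            results[name] = False
        elif direction == "max":
            results[name] = value < threshold
        else:
            results[name] = value > threshold
    return results

if __name__ == "__main__":
    sitl_run = {  # SITL reference values quoted in the table above
        "detection_latency_ms": 352, "false_positive_rate": 0.072,
        "link_uptime": 0.973, "gps_denied_drift_m": 140,
        "baro_altitude_error_m": 0.4, "l1_alert_delivery_s": 0.8,
        "rtl_deviation_m": 180,
    }
    for metric, ok in evaluate(sitl_run).items():
        print(f"{metric:24s} {'PASS' if ok else 'FAIL'}")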


What SITL Cannot Test

SITL cannot simulate the real RF environment (jamming, multipath, antenna patterns), real weather (wind gusts, rain on propellers, temperature effects on batteries), or human factors (pilot fatigue, stress, communication errors). These require field testing with real hardware in real conditions. The recommended sequence is SITL first (verify logic, find software bugs, test parameter changes: 100 flights in 30 minutes at zero cost), then field testing (verify real-world performance, calibrate models, validate human procedures: 20 flights over 2 days at €5,400 in drone consumables). Never skip SITL and go straight to field testing; the software bugs that SITL catches in minutes would destroy 5-10 drones in the field.

Continuous Integration Pipeline

Every Lisa 26 code change passes through a three-stage verification pipeline before reaching operational systems. Stage 1 unit tests: automated Python tests verify individual functions (Dempster-Shafer fusion output for known inputs, KLV encoding correctness, PostgreSQL query results). 200+ unit tests run in 30 seconds. Stage 2 integration tests: SITL simulation with 5 virtual drones exercises the complete detection-to-strike chain. Stage 3 regression tests: the full lisa26-proof.py mathematical verification (15 tests) confirms that no mathematical formula has been broken by the code change. All three stages must pass before the update is packaged for deployment. A single failing test blocks the release — no exceptions, no manual overrides.
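
A minimal sketch of a driver for that three-stage pipeline is shown below. The stage commands reuse names quoted in this chapter (pytest, tests/integration_test.py, lisa26-proof.py); the wrapper itself is illustrative and not the FSG-A CI configuration.

# run_pipeline.py -- hypothetical driver for the three-stage pipeline described above.
# Stage commands reuse names quoted in this chapter; the wrapper is illustrative.
import subprocess
import sys

STAGES = [
    ("unit",        ["pytest", "tests/", "-q"]),
    ("integration", ["python3", "tests/integration_test.py"]),  # requires ArduPilot SITL
    ("regression",  ["python3", "lisa26-proof.py"]),            # mathematical verification
]

def main() -> int:
    for name, cmd in STAGES:
        print(f"[pipeline] stage '{name}': {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            # A single failing stage blocks the release, with no manual override.
            print(f"[pipeline] stage '{name}' FAILED; release blocked")
            return 1
    print("[pipeline] all stages passed; update may be packaged for deployment")
    return 0

if __name__ == "__main__":
    sys.exit(main())

The non-zero exit code makes the blocking behaviour explicit: any failing stage stops the run before packaging.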

Field Validation Requirements

SITL verification is necessary but not sufficient. Before operational deployment, every Lisa 26 version must complete 20 real-world flight tests with instrumented drones carrying data recorders that capture actual sensor performance, actual radio link quality, actual battery behavior at temperature, and actual AI detection accuracy against physical target mockups. These 20 flights validate the assumptions built into SITL models — if real-world performance deviates from SITL predictions by more than 10 percent on any metric, the SITL model must be recalibrated before it can be trusted for future testing.
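
The 10 percent recalibration rule can be expressed as a simple comparison between field measurements and SITL predictions. The sketch below is hypothetical: the metric names and sample values are illustrative, and only the 10 percent tolerance comes from the requirement above.

# sitl_calibration_check.py -- hypothetical check of the 10% recalibration rule above.
# Metric names and sample values are illustrative; the rule itself is from the text.

SITL_PREDICTIONS = {  # planning targets from simulation
    "detection_latency_ms": 352.0,
    "gps_denied_drift_m": 140.0,
    "link_uptime": 0.973,
}

def needs_recalibration(field: dict[str, float], tolerance: float = 0.10) -> list[str]:
    """Return the metrics where field data deviates from SITL by more than the tolerance."""
    flagged = []
    for name, predicted in SITL_PREDICTIONS.items():
        measured = field.get(name)
        if measured is None:
            continue  # metric not captured during this flight campaign
        if abs(measured - predicted) / predicted > tolerance:
            flagged.append(name)
    return flagged

if __name__ == "__main__":
    field_campaign = {"detection_latency_ms": 410.0, "gps_denied_drift_m": 151.0,
                      "link_uptime": 0.96}
    bad = needs_recalibration(field_campaign)
    print("recalibrate SITL for:", bad if bad else "none (models within 10%)")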

Sources

Mathematically verifiable estimates. All 25 claims in provable_claims.py have a running self-test; the regression pipeline blocks release if any fails to reproduce. The Dempster-Shafer fusion formula (example 70% × 65% → 89.5%) is standard evidence theory.
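
For two independent sources that each place mass only on the same target hypothesis (with the remainder on "unknown"), Dempster's rule reduces to 1 - (1 - m1)(1 - m2), which reproduces the quoted figure. The snippet below is an arithmetic check of that example, not the Lisa 26 fusion code.

# Reproduce the quoted fusion example: two sources at 70% and 65% belief in the
# same hypothesis, remaining mass on "unknown". With no conflicting mass,
# Dempster's rule reduces to 1 - (1 - m1)(1 - m2).
def combine_simple_support(m1: float, m2: float) -> float:
    return 1.0 - (1.0 - m1) * (1.0 - m2)

assert abs(combine_simple_support(0.70, 0.65) - 0.895) < 1e-9  # 89.5%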

Parameter sources. Acceptance thresholds (<500 ms detection latency, <10% false positives, >95% uptime, <200 m GPS drift) are FSG-A design targets based on typical NATO specifications for tactical ISR. Test execution times (30 s unit, 30 min integration) are typical for pytest and SITL simulation on a standard developer laptop.

Operational estimates — critical caveats. ALL performance values on this page are SITL simulation, not actual measurement on real hardware. FSG-A has no built physical prototype. Field validation (20 flights, instrumented drones) is a proposed process, not executed. Before any operational deployment, the implementing agency must perform a full field campaign and recalibrate SITL models if real figures deviate more than 10% from simulation.

External standards and references. ArduPilot documentation. ExpressLRS documentation. NATO STANAG 4609 Ed. 4 (motion imagery metadata), STANAG 4671 (UAV airworthiness), and STANAG 2022 (intelligence source reliability). Python pytest framework. FSG-A provable_claims.py (25/25 mathematical proofs). Specifically: Watling & Reynolds, "Meatgrinder: Russian Tactics", RUSI (2023); Bronk, Reynolds & Watling, "The Russian Air War and Ukrainian Requirements for Air Defence", RUSI (2022); ISW daily campaign assessments (understandingwar.org archive); CSIS Center for Strategic and International Studies Ukraine briefings. FSG-A has no field data — most verified only in SITL.