VERIFICATION AND TESTING
ENSURING SYSTEM RELIABILITY
Test Levels — Verification Testing
| Level | What | How | When |
|---|---|---|---|
| Unit | Individual functions (YOLOv8 detection, MGRS conversion, fusion logic) | Automated Python tests (pytest) | Every code change |
| Integration | Component interaction (drone → Lisa 26 → COP) | SITL simulation with 4 simulated drones | Before each deployment |
| Field | Complete system in real environment | Real drone flight with checklist verification | Before operational use |
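The unit level above can be illustrated with the known input→output pattern pytest encourages. The function below is a hypothetical stand-in for the pixel-to-ground projection, assuming flat terrain; it is not the actual Lisa 26 API.

```python
# Hypothetical unit-test sketch for the "Unit" level above.
# pixel_to_ground_distance() and its signature are illustrative,
# not the real Lisa 26 projection code.
import math

def pixel_to_ground_distance(altitude_m: float, tilt_deg: float) -> float:
    """Ground distance from the point directly below the drone to the
    spot a camera pointed tilt_deg below horizontal is looking at,
    assuming flat terrain."""
    return altitude_m / math.tan(math.radians(tilt_deg))

def test_pixel_to_ground_45deg():
    # At a 45 degree tilt, ground distance equals altitude.
    assert abs(pixel_to_ground_distance(100.0, 45.0) - 100.0) < 1e-9
```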
Field Acceptance Test Checklist
GPS-denied operation: disable GPS (SIM_GPS_DISABLE=1 in SITL, or physically disconnect the GPS antenna in the field). Verify that EKF3 transitions to AHRS, that barometric altitude hold works, that the drone remains controllable by the pilot, and that Lisa 26 handles position uncertainty gracefully.
Automated Test Suite
```shell
# Run full Lisa 26 test suite
# Requirements: pip install pytest numpy

# Unit tests
pytest tests/test_fresnel.py     # 12 tests — Fresnel zone math
pytest tests/test_projection.py  #  8 tests — pixel-to-ground
pytest tests/test_solar.py       #  6 tests — solar position
pytest tests/test_fusion.py      # 15 tests — data fusion logic
pytest tests/test_mgrs.py        # 10 tests — coordinate conversion

# Integration test (requires ArduPilot SITL)
python3 tests/integration_test.py  # 30 min, 4 simulated drones

# Expected output:
#   51 passed, 0 failed (unit)
#   Integration: all checkpoints PASS
```
Every mathematical function in Lisa 26 has a corresponding unit test with known input→output pairs. The integration test launches 4 SITL drones, runs a 10-minute scenario, and verifies that detections appear on the COP within 500ms, fusion correctly de-duplicates, and L1/L2 decisions are generated at correct thresholds. Run before every deployment.
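As an example of the known input→output pattern, a test for the Fresnel zone math might look like the sketch below. The function name and test values are illustrative and are not taken from the actual tests/test_fresnel.py suite; the formula itself is the standard first Fresnel zone radius.

```python
# Sketch of a known input->output unit test in the style described
# above. fresnel_radius_m() is a hypothetical name, not the real suite.
import math

C = 299_792_458.0  # speed of light, m/s

def fresnel_radius_m(freq_hz: float, d1_m: float, d2_m: float) -> float:
    """First Fresnel zone radius at a point d1/d2 metres from each
    end of the link: r = sqrt(lambda * d1 * d2 / (d1 + d2))."""
    wavelength = C / freq_hz
    return math.sqrt(wavelength * d1_m * d2_m / (d1_m + d2_m))

def test_fresnel_midpoint_915mhz_1km():
    # Midpoint of a 1 km link at 915 MHz: roughly a 9 m radius.
    r = fresnel_radius_m(915e6, 500.0, 500.0)
    assert abs(r - 9.05) < 0.05
```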
Acceptance Criteria
SYSTEM ACCEPTANCE — PASS/FAIL CRITERIA
All target values derived from SITL simulation with 10+ runs per metric. No field measurements exist — FSG-A has no physical prototype. Field measurements will differ due to real-world RF conditions, temperature, and wind. The implementing agency must re-run acceptance tests at each new operating location before operational use and treat SITL values as planning targets rather than validated performance.
What SITL Cannot Test
SITL limitation: it cannot simulate real RF environment (jamming, multipath, antenna patterns), real weather (wind gusts, rain on propellers, temperature effects on batteries), or human factors (pilot fatigue, stress, communication errors). These require field testing with real hardware in real conditions. The optimal approach: SITL first (verify logic, find software bugs, test parameter changes — 100 flights in 30 minutes at zero cost), then field testing (verify real-world performance, calibrate models, validate human procedures — 20 flights over 2 days at €5,400 in drone consumables). Never skip SITL to go directly to field testing — the software bugs that SITL catches in minutes would destroy 5-10 drones in field testing.
Continuous Integration Pipeline
Every Lisa 26 code change passes through a three-stage verification pipeline before reaching operational systems. Stage 1, unit tests: automated Python tests verify individual functions (Dempster-Shafer fusion output for known inputs, KLV encoding correctness, PostgreSQL query results); 200+ unit tests run in 30 seconds. Stage 2, integration tests: SITL simulation with 4 virtual drones exercises the complete detection-to-strike chain. Stage 3, regression tests: the full lisa26-proof.py mathematical verification (15 tests) confirms that no mathematical formula has been broken by the change. All three stages must pass before the update is packaged for deployment. A single failing test blocks the release — no exceptions, no manual overrides.
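The gate logic described above can be sketched as a small driver that runs each stage in order and stops at the first failure. The stage commands come from this page; the wiring itself is illustrative, not the actual FSG-A pipeline code.

```python
# Minimal sketch of the three-stage release gate described above.
# Stage commands are from this page; the driver is illustrative.
import subprocess

# Stage order: unit -> integration -> regression (all must pass).
LISA26_STAGES = [
    ("unit", ["pytest", "tests/"]),
    ("integration", ["python3", "tests/integration_test.py"]),
    ("regression", ["python3", "lisa26-proof.py"]),
]

def run_pipeline(stages) -> bool:
    """Run stages in order; a single failing stage blocks the release."""
    for name, cmd in stages:
        if subprocess.run(cmd).returncode != 0:
            print(f"stage '{name}' FAILED -- release blocked")
            return False
    print("all stages passed -- package for deployment")
    return True
```

A release job would call run_pipeline(LISA26_STAGES); any non-zero exit code from a stage stops packaging, which enforces the no-overrides rule in code rather than in policy.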
Field Validation Requirements
SITL verification is necessary but not sufficient. Before operational deployment, every Lisa 26 version must complete 20 real-world flight tests with instrumented drones carrying data recorders that capture actual sensor performance, actual radio link quality, actual battery behavior at temperature, and actual AI detection accuracy against physical target mockups. These 20 flights validate the assumptions built into SITL models — if real-world performance deviates from SITL predictions by more than 10 percent on any metric, the SITL model must be recalibrated before it can be trusted for future testing.
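The 10 percent recalibration rule reduces to a simple relative-deviation check per metric. The sketch below shows that check; the metric names and numbers are hypothetical examples, not measured FSG-A data.

```python
# Illustrative check for the 10 percent recalibration rule above.
# Metric names and values are hypothetical, not measured FSG-A data.
def needs_recalibration(sitl: dict, field: dict, tolerance: float = 0.10) -> list:
    """Return the metrics whose field measurement deviates from the
    SITL prediction by more than `tolerance` (relative deviation)."""
    return [
        metric for metric, predicted in sitl.items()
        if abs(field[metric] - predicted) / abs(predicted) > tolerance
    ]

sitl_pred  = {"detection_latency_ms": 420.0, "detection_rate": 0.92}
field_meas = {"detection_latency_ms": 480.0, "detection_rate": 0.90}
print(needs_recalibration(sitl_pred, field_meas))  # ['detection_latency_ms']
```

Here latency deviates by 60/420 ≈ 14 percent, so that SITL model would need recalibrating, while the 2 percent detection-rate gap stays within tolerance.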
Sources
Mathematically verifiable estimates. All 25 claims in provable_claims.py have a running self-test; the regression pipeline blocks release if any fails to reproduce. The Dempster-Shafer fusion formula (example 70% × 65% → 89.5%) is standard evidence theory.
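The 70% × 65% → 89.5% example can be checked directly. For two independent sources that each lend simple support to the same hypothesis, Dempster's rule has no conflicting mass to normalise away and reduces to the one-liner below; this is a sketch of the standard formula, not the Lisa 26 fusion code.

```python
# Reproduces the 70% x 65% -> 89.5% example above: two independent
# simple support functions for the same hypothesis, so Dempster's
# rule reduces to multiplying the residual doubts.
def combine_support(b1: float, b2: float) -> float:
    """Combined belief in H from independent supports b1 and b2."""
    # Residual doubt multiplies: (1 - 0.70) * (1 - 0.65) = 0.105
    return 1.0 - (1.0 - b1) * (1.0 - b2)

print(round(combine_support(0.70, 0.65), 3))  # 0.895
```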
Parameter sources. Acceptance thresholds (<500 ms L1, <10% false positives, >95% uptime, <200 m GPS drift) are FSG-A design targets based on typical NATO specifications for tactical ISR. Test execution times (30 s unit, 30 min integration) are typical for pytest and SITL simulation on a standard developer laptop.
Operational estimates — critical caveats. ALL performance values on this page are SITL simulation, not actual measurement on real hardware. FSG-A has no built physical prototype. Field validation (20 flights, instrumented drones) is a proposed process, not executed. Before any operational deployment, the implementing agency must perform a full field campaign and recalibrate SITL models if real figures deviate more than 10% from simulation.
External standards and references. ArduPilot documentation. ExpressLRS documentation. NATO STANAG 4609 Ed. 4 (motion imagery metadata), STANAG 4671 (UAV airworthiness), and STANAG 2022 (intelligence source reliability). Python pytest framework. FSG-A provable_claims.py (25/25 mathematical proofs). Specifically: Watling & Reynolds, "Meatgrinder: Russian Tactics", RUSI (2023); Bronk, Reynolds & Watling, "The Russian Air War and Ukrainian Requirements for Air Defence", RUSI (2022); ISW daily campaign assessments (understandingwar.org archive); CSIS (Center for Strategic and International Studies) Ukraine briefings. FSG-A has no field data — most claims are verified only in SITL.