Fjärrstridsgrupp Alfa
EDITION 2026-Q2 ACTIVE
UNCLASSIFIED
FSG-A // DOCTRINE // AAR

AFTER ACTION REVIEW

Author: Tiny
KEY TAKEAWAY
Every drone mission gets an After Action Review within 2 hours of landing. Not optional. SD card data from Jetson replaces pilot memory — video frame-by-frame, AI detection log, MAVLink telemetry. Four questions: what was planned, what happened, why the difference, what changes next time. Output: exactly one concrete, measurable improvement. Teams that conduct structured debriefs consistently outperform those that do not by approximately 25% on average (Tannenbaum & Cerasoli 2013 meta-analysis, 46 studies, Human Factors 55(1): 231–245).

AAR DATA SOURCES

SD Card (Jetson)
AI detection log (JSONL), thumbnails, YOLOv8 confidence per frame
MAVLink Log
Flight path, altitude, battery, EKF3 status, failsafe events
Lisa 26 COP
Timeline: detection → recommendation → approval → strike → BDA
DVR Recording
FPV goggle recording — what the pilot actually saw
Pilot Debrief
Verbal — recorded AFTER reviewing data, not before
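
The SD card, MAVLink, and Lisa 26 sources can be fused mechanically before the pilot says a word. A minimal Python sketch that merges a JSONL detection log with MAVLink-derived events into one time-ordered debrief timeline — the field names (`ts`, `cls`, `conf`) are illustrative assumptions, not a documented Lisa 26 format:

```python
# Merge the per-frame AI detection log (JSONL from the Jetson SD card) with
# key MAVLink events into one time-ordered timeline for the debrief.
# Assumed JSONL fields: ts (unix seconds), cls (class name), conf (0-1).
import json

def load_detections(jsonl_text):
    """Parse lines like {"ts": 100.0, "cls": "vehicle", "conf": 0.87}."""
    events = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        d = json.loads(line)
        events.append((d["ts"], f"AI detect {d['cls']} conf={d['conf']:.2f}"))
    return events

def merge_timeline(detections, mavlink_events):
    """mavlink_events: list of (unix_seconds, description) tuples."""
    return sorted(detections + mavlink_events, key=lambda e: e[0])

jsonl = '{"ts": 100.0, "cls": "vehicle", "conf": 0.87}\n' \
        '{"ts": 103.5, "cls": "vehicle", "conf": 0.91}'
mav = [(98.0, "waypoint 4 reached"), (105.0, "battery failsafe warning")]
for ts, desc in merge_timeline(load_detections(jsonl), mav):
    print(f"t={ts:7.1f}s  {desc}")
```

The sorted merge is the whole trick: once every source shares a time base, the debrief reads straight down one list instead of jumping between screens.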

Why Data Before Memory

A pilot under combat stress remembers incorrectly. Adrenaline distorts time perception — "I saw the target for five seconds" was actually 1.2 seconds on the DVR recording. Distance estimates are wrong by factors of two or three. Sequence of events gets reordered. This is not a criticism of pilots — it is human neuroscience under stress. The solution is simple: review the data BEFORE asking the pilot what happened. The SD card does not have adrenaline. The MAVLink log does not forget.

Lisa 26 logs provide an objective timeline of every event: when the AI detected, at what confidence, when the L2 recommendation was generated, when the commander approved, when the FPV launched, when it struck, and what the BDA pass showed. This timeline reveals bottlenecks that subjective memory would miss. The most common discovery: the delay between detection and strike runs 8-12 minutes against a target of 2-5 minutes. The pilot thinks it was fast. The log says otherwise.
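
That bottleneck is checkable in a few lines once the timeline exists. A minimal sketch flagging lags above the 5-minute upper bound (timestamps in unix seconds; detection-to-strike pairing assumed already done):

```python
# Flag detection-to-strike lags that exceed the 2-5 minute target band.
TARGET_MAX_S = 5 * 60  # upper end of the target band, in seconds

def strike_lags(pairs):
    """pairs: (detection_ts, strike_ts) tuples -> (lag_seconds, within_target)."""
    return [(s - d, (s - d) <= TARGET_MAX_S) for d, s in pairs]

pairs = [(0, 150), (0, 280), (0, 660)]  # 2.5 min, 4.7 min, 11 min
for lag, ok in strike_lags(pairs):
    print(f"lag {lag / 60:4.1f} min  {'OK' if ok else 'BOTTLENECK'}")
```

Anything marked BOTTLENECK goes straight into Question 3 of the AAR: why did the decision take that long?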

The Four Questions

Question 1: What was planned? The mission briefing defines intent, target area, approach vector, ROE, and contingencies. Write it down BEFORE the mission so it cannot be revised retroactively. If the plan was "ISR sweep sector 4, engage vehicles of opportunity from south," that is the baseline against which everything is measured.

Question 2: What happened? This comes entirely from data. DVR shows what the pilot saw. MAVLink log shows where the drone flew. Lisa 26 log shows what AI detected and what recommendations were generated. SD card shows detection confidence frame by frame. The pilot adds context only after the data is reviewed: "I saw the target on thermal but chose not to engage because I suspected civilian vehicle." That context matters — but it comes second, not first.

Question 3: Why the difference? This is where learning happens. The plan said engage from south. The pilot engaged from east. Why? "Wind was 15 km/h from south — approach into headwind reduced battery to critical before reaching target. I diverted east for tailwind approach." That is a valid tactical adaptation. Or: "I forgot the briefing said south." That is a training gap. The data distinguishes between the two.

Question 4: What changes next time? One thing. Not ten. One concrete, measurable change. "Next mission: check wind direction during pre-flight and adjust approach vector if headwind exceeds 10 km/h." Specific. Testable. The next AAR checks whether this change was implemented and whether it helped.
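
The four questions map naturally onto a fixed record, with the one-lesson rule enforced at construction time. A design sketch, assuming nothing about the actual Lisa 26 data model:

```python
# Four-question AAR record; rejects lessons that are lists instead of ONE change.
from dataclasses import dataclass

@dataclass(frozen=True)
class AAR:
    planned: str          # Q1: briefed intent, written down before the mission
    happened: str         # Q2: what the data shows
    why_difference: str   # Q3: explanation of plan vs actual
    lesson: str           # Q4: ONE concrete, measurable change

    def __post_init__(self):
        # Crude single-lesson guard: non-empty and stated on one line.
        if not self.lesson.strip() or "\n" in self.lesson:
            raise ValueError("exactly one concrete lesson, stated in one line")

aar = AAR(
    planned="ISR sweep sector 4, engage vehicles of opportunity from south",
    happened="Engaged from east after headwind drained battery to critical",
    why_difference="Valid tactical adaptation to 15 km/h southerly wind",
    lesson="Check wind in pre-flight and adjust approach if headwind exceeds 10 km/h",
)
print(aar.lesson)
```

Freezing the record matters for the same reason the plan is written down before the mission: nothing gets revised retroactively.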

Common AAR Findings from Ukraine

Finding 1 — Decision delay: Time from Fischer 26 detection to FPV strike averages 8-12 minutes. Target is 2-5 minutes. Root cause: platoon commander requests confirmation from company level instead of deciding within his own authority. Fix: explicit ROE delegation BEFORE the mission — "vehicles in sector 3 are pre-approved for platoon-level engagement."

Finding 2 — Target handoff failure: FPV pilot cannot find the target that Fischer 26 detected. Root cause: 50-200m position error without GPS means the coordinates place the pilot in the right area but not on the target. Fix: Fischer 26 maintains orbit over target and streams live video to FPV pilot's second screen during approach. Pilot sees the target through Fischer 26's camera while flying toward it.

Finding 3 — BDA skipped: After strike, pilot reports "hit" and returns. No BDA pass by Fischer 26. Result: target recorded as destroyed but actually survived (30-40% of FPV hits do not disable the target). Fix: mandatory Fischer 26 BDA pass 30-60 seconds after every strike. No exceptions. If Fischer 26 is unavailable, expendable ISR flies BDA.

Finding 4 — Battery management: Pilot launches with a partially charged battery because "it was good enough." Drone returns at 5% — one wind gust away from a crash landing. Fix: minimum launch voltage enforced in the pre-flight checklist. Below 3.7V per cell = no launch, period.
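
Finding 4's fix reduces to arithmetic that a pre-flight checklist script can enforce. A minimal sketch of the gate, using the 3.7V-per-cell floor stated above (function name and API are illustrative):

```python
# Pre-flight battery gate: average per-cell voltage must meet the minimum.
MIN_CELL_V = 3.7  # doctrine floor from Finding 4

def launch_permitted(pack_voltage: float, cells: int) -> bool:
    """True only if the pack-average cell voltage meets the launch minimum."""
    per_cell = pack_voltage / cells
    print(f"{per_cell:.2f} V/cell vs {MIN_CELL_V} V minimum")
    return per_cell >= MIN_CELL_V

print(launch_permitted(24.9, 6))  # 4.15 V/cell on a 6S pack -> True
print(launch_permitted(21.6, 6))  # 3.60 V/cell -> False, no launch
```

A pack-average check can hide one weak cell; if the flight controller reports per-cell voltages, gate on the minimum cell instead of the average.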

AAR Format

AAR REPORT TEMPLATE

Mission ID
YYYYMMDD-HHMM-unit-type (e.g., 20260415-0830-A4126-FPV)
Duration
Launch time → recovery time (from MAVLink log)
Plan vs Actual
Briefed intent vs what data shows happened
Detections
Count, classes, confidence range, false positives
Strikes
Target type, weapon, result: K (kill) / M (mobility) / F (firepower) / MISS
Losses
Drones lost + cause: combat / crash / EW / technical
EW Environment
Jamming detected (Y/N), frequencies, MANET degradation
Lesson
ONE specific change for next mission
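
An AAR filed under a malformed Mission ID is unfindable in later pattern analysis, so the ID format from the template is worth validating at entry. A sketch — treating the unit and type tokens as plain alphanumerics is an assumption:

```python
# Validate Mission IDs of the template form YYYYMMDD-HHMM-unit-type.
import re

MISSION_ID = re.compile(r"\d{8}-\d{4}-[A-Za-z0-9]+-[A-Za-z0-9]+")

def valid_mission_id(mid: str) -> bool:
    return MISSION_ID.fullmatch(mid) is not None

print(valid_mission_id("20260415-0830-A4126-FPV"))  # True
print(valid_mission_id("2026-04-15-A4126"))         # False
```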

Pattern Analysis Over Time

Individual AARs identify tactical adjustments. Collected AARs over weeks and months reveal patterns. Lisa 26 on the brigade server stores all AAR data in PostgreSQL. Pattern queries: "Which approach vectors produce the highest strike success rate against dug-in vehicles?" Answer after 200 missions: south-southwest between 14:00-16:00 (sun behind attacker, target crew blinded by glare). This is knowledge that no single mission produces — it emerges from systematic data collection across hundreds of engagements.

Second pattern example: drone loss rate by time of day. Data from 6 months of operations shows losses peak at dawn and dusk — low sun angle creates thermal gradients that degrade optical flow navigation, causing more crashes during autonomous segments. Fix: increase minimum altitude during dawn/dusk transitions from 50m to 80m AGL. Loss rate drops 40% in the following month. This improvement was invisible to individual pilots — it only appeared in aggregate data.

Pattern Queries You Can Run Tonight

The SQL below is the actual workhorse query set. Copy-paste into psql against a Lisa 26 schema and adjust the column names to match your deployment.

-- 1) Strike success by approach vector bucket (45-degree wedges)
SELECT
    width_bucket(s.approach_vector_deg, 0, 360, 8) AS wedge,
    COUNT(*)                                         AS n,
    SUM(CASE WHEN s.result IN ('K','M','F') THEN 1 ELSE 0 END)::float
        / COUNT(*)                                   AS success_rate
FROM strikes s
JOIN missions m ON m.mission_id = s.mission_id
WHERE m.launch_time > NOW() - INTERVAL '90 days'
  AND s.target_class = 'armor_dug_in'
GROUP BY wedge
ORDER BY success_rate DESC;

-- 2) Loss rate by hour of day (surfaces dawn/dusk vulnerability)
SELECT
    EXTRACT(HOUR FROM m.launch_time)::int AS hour,
    COUNT(*)                               AS missions,
    SUM(CASE WHEN m.drone_lost THEN 1 ELSE 0 END) AS losses,
    ROUND(100.0 * SUM(CASE WHEN m.drone_lost THEN 1 ELSE 0 END)
          / COUNT(*), 1)                   AS loss_pct
FROM missions m
WHERE m.launch_time > NOW() - INTERVAL '180 days'
GROUP BY hour
ORDER BY hour;

-- 3) Detection-to-strike lag distribution (identifies decision bottlenecks)
SELECT
    percentile_cont(0.50) WITHIN GROUP (ORDER BY lag_s) AS p50_s,
    percentile_cont(0.90) WITHIN GROUP (ORDER BY lag_s) AS p90_s,
    percentile_cont(0.99) WITHIN GROUP (ORDER BY lag_s) AS p99_s
FROM (
    SELECT EXTRACT(EPOCH FROM (s.strike_time - d.detection_time)) AS lag_s
    FROM detections d
    JOIN strikes s ON s.detection_id = d.detection_id
    WHERE d.detection_time > NOW() - INTERVAL '30 days'
) q;
-- Target: p50 < 180 s, p90 < 300 s. p99 over 600s = ROE delegation problem.

Quantifying the Improvement — Cohen's d

The Tannenbaum & Cerasoli (2013) meta-analysis reports a weighted mean effect size Cohen's d ≈ 0.79 for structured debriefs. Cohen's d of 0.79 is a "large" effect in behavioural-science terms — the mean performance of the debriefing group is approximately 0.79 standard deviations above the non-debriefing control. Translated to percentage improvement on the metric being measured, that is roughly 25% (varies by metric distribution). Below is the formula and how to compute it on your own mission data.

# Cohen's d — quantify the size of an AAR effect
# d = (mean_after - mean_before) / pooled_standard_deviation
#
# Interpretation (Cohen 1988 conventions):
#   d ≈ 0.2  small effect
#   d ≈ 0.5  medium effect
#   d ≈ 0.8  large effect
import statistics as st

def cohens_d(before, after):
    m1, m2 = st.mean(before), st.mean(after)
    # Pooled SD
    n1, n2 = len(before), len(after)
    s1sq = st.variance(before) if n1 > 1 else 0
    s2sq = st.variance(after)  if n2 > 1 else 0
    pooled = ((n1 - 1) * s1sq + (n2 - 1) * s2sq) / max(1, n1 + n2 - 2)
    pooled_sd = pooled ** 0.5
    return (m2 - m1) / pooled_sd if pooled_sd > 0 else 0.0

# Example: strike success rate before vs after AAR discipline introduced
before = [0.62, 0.58, 0.65, 0.60, 0.63, 0.57, 0.61, 0.59]  # 8 weeks before
after  = [0.74, 0.71, 0.76, 0.73, 0.77, 0.72, 0.75, 0.78]  # 8 weeks after

d = cohens_d(before, after)
improvement_pct = 100 * (st.mean(after) - st.mean(before)) / st.mean(before)
print(f"Cohen's d = {d:.2f}")             # ≈ 4.5 — huge effect (toy data)
print(f"Mean improvement = {improvement_pct:.1f}%")

The AAR process transforms individual mission experience into institutional knowledge. Each review produces exactly one concrete improvement that the next mission tests. Over 20 missions, that is 20 tested improvements — a compounding tactical advantage that no amount of pre-deployment training can replicate.

Structured AAR methodology separates professional drone operations from amateur improvisation. The discipline ensures that every mission — successful or failed — generates documented learning that benefits the entire organization. Without it, units repeat the same mistakes indefinitely while believing they are improving through experience alone.

PLAIN LANGUAGE: AAR IN 15 MINUTES
After every mission: plug in SD card, review DVR, check Lisa 26 timeline. Four questions: planned, happened, why different, what to change. One lesson. Write it down. 15 minutes. Not optional. Teams that debrief consistently outperform those that don't by approximately 25 percent (Tannenbaum & Cerasoli 2013). Data, not memory.


Implementation

# AAR Data Export — Lisa 26 to CSV for Pattern Analysis
# pip install psycopg2-binary
import psycopg2
import csv
from datetime import datetime, timedelta

def export_aar_data(days_back=30, output_file="/tmp/aar_export.csv"):
    """Export AAR data from Lisa 26 PostgreSQL for pattern analysis."""
    conn = psycopg2.connect("dbname=lisa26 user=lisa26 host=localhost")
    cur = conn.cursor()
    
    cur.execute("""
    SELECT 
        m.mission_id,
        m.launch_time,
        m.recovery_time,
        m.drone_type,
        EXTRACT(HOUR FROM m.launch_time) as hour_of_day,
        d.class as target_class,
        d.confidence,
        s.result,  -- K/M/F/MISS
        s.approach_vector_deg,
        m.ew_jamming_detected,
        m.drone_lost,
        m.loss_cause
    FROM missions m
    LEFT JOIN detections d ON m.mission_id = d.mission_id
    LEFT JOIN strikes s ON m.mission_id = s.mission_id
    WHERE m.launch_time > NOW() - make_interval(days => %s)
    ORDER BY m.launch_time
    """, (days_back,))
    
    rows = cur.fetchall()
    headers = [desc[0] for desc in cur.description]
    
    with open(output_file, 'w', newline='') as f:
        w = csv.writer(f)
        w.writerow(headers)
        w.writerows(rows)
    
    print(f"Exported {len(rows)} AAR records to {output_file}")
    
    # Quick pattern summary
    cur.execute("""
    SELECT 
        EXTRACT(HOUR FROM launch_time)::int as hour,
        COUNT(*) as missions,
        SUM(CASE WHEN drone_lost THEN 1 ELSE 0 END) as losses,
        ROUND(AVG(EXTRACT(EPOCH FROM recovery_time - launch_time)/60)::numeric, 1) as avg_duration_min
    FROM missions
    WHERE launch_time > NOW() - make_interval(days => %s)
    GROUP BY hour ORDER BY hour
    """, (days_back,))
    
    print("\nHourly pattern (last 30 days):")
    print(f"{'Hour':>4s} {'Missions':>8s} {'Losses':>6s} {'Loss%':>5s} {'AvgMin':>6s}")
    for hour, missions, losses, avg_dur in cur.fetchall():
        loss_pct = (losses/missions*100) if missions > 0 else 0
        print(f"{hour:4d} {missions:8d} {losses:6d} {loss_pct:5.1f} {avg_dur:6.1f}")
    
    conn.close()

export_aar_data(days_back=30)

Sources

Normative sources. AAR methodology — US Army Center for Army Lessons Learned (CALL), TC 25-20 "A Leader's Guide to After-Action Reviews" (public). PostgreSQL storage — postgresql.org official documentation. MAVLink log format — mavlink.io message specification.

Parameter sources. The "30–40% of FPV hits do not disable the target" statistic is from Watling & Reynolds, "Meatgrinder: Russian Tactics", RUSI (2023). The "8–12 min detection-to-strike delay" pattern is documented in ISW daily campaign assessments (understandingwar.org archive). The Lisa 26 database schema (missions, detections, strikes tables) is an FSG-A internal architecture (conceptual).

Operational estimates — not validated by FSG-A in the field. "Teams that debrief outperform non-debriefing teams by approximately 25%" is the headline effect size from Tannenbaum & Cerasoli (2013), meta-analysis of 46 studies (Cohen's d ≈ 0.79 weighted mean). FSG-A has not conducted a controlled study of its own. 15–30 minute AAR duration is a design target from US Army TC 25-20. The percentages in "Common Findings" (~30%, ~20%, etc.) are illustrative examples from public sources, not statistics collected by FSG-A. FSG-A has no own AAR data from field missions.

External standards and references. Tannenbaum, S. I., & Cerasoli, C. P. (2013). "Do Team and Individual Debriefs Enhance Performance? A Meta-Analysis." Human Factors: The Journal of Human Factors and Ergonomics Society, 55(1), 231–245. DOI 10.1177/0018720812448394 (meta-analysis of 46 studies; 25% mean performance improvement; Cohen's d ≈ 0.79). Morrison, J. E., & Meliza, L. L. (1999). "Foundations of the After Action Review Process." U.S. Army Research Institute Special Report 42. DTIC ADA368651. US Army Training Circular TC 25-20 "A Leader's Guide to After-Action Reviews" (1993). US Army Center for Army Lessons Learned (call.army.mil). Watling & Reynolds, "Meatgrinder: Russian Tactics", RUSI (2023) — Ukrainian AAR practice. ISW daily campaign assessments (understandingwar.org archive) — operational patterns from Ukraine. PostgreSQL official documentation. MAVLink message specification (mavlink.io).