M3E3: Commercial Imagery, SAR, and GEOINT for the Open-Source Analyst


The Broken Monopoly

For most of the Cold War, geospatial intelligence (GEOINT) was a single unified discipline held together by a single unifying constraint: collection was hard. The National Reconnaissance Office built spacecraft that cost hundreds of millions of dollars each, required specialized ground infrastructure to downlink, and produced imagery that flowed through secure channels to cleared analysts who had spent careers learning to read it. The scarcity of collection was precisely what made interpretation valuable. You didn't need to think hard about what you were looking at, because the bottleneck upstream — getting the image at all — was so severe that anything reaching an analyst's desk was presumptively worth analysis.

That constraint has collapsed. The commercial satellite industry has entered a phase where launch costs and satellite manufacturing costs have fallen far enough that companies can build constellations of dozens or hundreds of spacecraft for what a single government satellite programme would have cost a decade ago. The consequence is not just that imagery is cheaper. GEOINT has split into two problems that no longer belong to the same workflow, requiring different skills, different tradecraft, and different organizational assumptions. Collection is now cheap, abundant, and increasingly automated. Interpretation remains expert-intensive, poorly taught outside of government intelligence programs, and the actual constraint on analytic output.

Every vendor in the commercial imagery market will tell you that AI is closing that gap. Some of it is true. Much of it is not. The analyst who cannot distinguish between what automated systems do and what vendors claim they do is not just buying oversold software — they are systematically misreading their own intelligence products. This episode builds the mechanical understanding needed to make that distinction: what each major provider offers, how the physics of synthetic aperture radar (SAR) differs from optical collection in ways that matter operationally, how non-imagery GEOINT tools fill in the gaps that imagery leaves, and where human expert judgment remains irreplaceable despite a decade of AI investment in this domain.


The Commercial Constellation: Four Providers, Four Different Value Propositions

Understanding commercial GEOINT requires resisting the temptation to treat "commercial satellite imagery" as a single category. The four dominant providers operate on fundamentally different architectures, and those architectural choices determine what questions each can answer — and which ones it cannot.

Planet Labs built its business around a radical idea: instead of building a few very capable satellites, build hundreds of small ones and accept lower individual resolution in exchange for global daily coverage. The company operates a constellation of more than 200 PlanetScope Dove satellites, which photograph the entire landmass of the Earth every 24 hours at 3.7-metre resolution. At 3.7 metres per pixel, you can see that a large building exists, that a parking lot is full or empty, that a field has been plowed. You cannot read a license plate, identify a specific vehicle type, or count personnel. The Dove constellation is a change-detection instrument, not a detail-analysis instrument. Its value is in the baseline — the ability to establish what the world looked like yesterday, and to flag where it looks different today.

Planet recognized this ceiling and built around it rather than pretending it wasn't there. The SkySat constellation, 15 satellites imaging at 0.5-metre resolution, already offers a sharper complement to the 3.7-metre Doves; the Pelican constellation goes further, with 32 satellites imaging at 0.3-metre resolution and a higher revisit rate than SkySat. Planet plans to begin launching its second-generation Pelicans later in 2026, with the full constellation designed for revisit rates of up to ten times per day globally and up to 30 times at mid-latitudes. The Pelican satellites carry NVIDIA Jetson GPUs (graphics processing units used for parallel computation) for on-orbit processing, which means they can run analytics before downlink, reducing the time between collection and delivery. Future Pelicans are designed for near-real-time tasking via intersatellite links, enabling imagery capture as little as five minutes after a request. Planet does not sell individual images; it sells subscriptions to a continuously updated data feed covering the entire planet. That business-model distinction matters: Planet is optimized for monitoring pipelines, not on-demand tasking.

Maxar Technologies occupies the opposite end of the spectrum. Where Planet maximizes revisit frequency at the cost of resolution, Maxar built its business on the highest commercially available resolution, with correspondingly lower revisit rates and significantly higher per-image cost. The WorldView Legion constellation comprises six satellites providing 30-centimetre imagery — sufficient to identify vehicle types, detect equipment configurations, and count aircraft on a ramp. When the Biden administration wanted to publicly release imagery of Russian force buildup along Ukraine's border in late 2021, it went to Maxar precisely because the resolution could withstand scrutiny. For twenty dollars, anyone could access the originals and verify the intelligence. That combination — verified resolution, commercial availability, no classification caveats — made Maxar imagery useful in a way that classified collection could not be: it could be shared, published, and contested in public.

Maxar's limitation is revisit rate. A constellation optimized for resolution requires physically larger optics, which means larger satellites, which means fewer satellites, which means any given location will see a Maxar pass infrequently unless specifically tasked. When GEOINT findings about Bucha gained traction, Russian authorities questioned Maxar's impartiality, alleging that its contractual relationships with U.S. governmental institutions compromised the company's neutrality. Kremlin spokesman Dmitry Peskov specifically dismissed the imagery's legitimacy in a Sky News interview, citing Maxar's contract with the National Geospatial-Intelligence Agency (NGA), and alleged that Maxar's images could not be reliably assigned to a precise date or time. That attack failed — the timestamps held, and independent cross-corroboration with Planet imagery confirmed the findings — but it illustrates a real vulnerability: commercially produced intelligence with known government client relationships will be challenged on source credibility, not just technical validity.

BlackSky occupies a distinct niche that neither Planet nor Maxar fully fills: high-frequency, high-resolution, rapid-delivery imagery optimized for near-real-time operational intelligence. BlackSky's Gen-3 satellite constellation delivers 35-centimetre imagery rated at NIIRS-5+, where NIIRS stands for the National Imagery Interpretability Rating Scale, the standardized metric used across the intelligence community to characterize what an analyst can extract from imagery at a given resolution. For tactical-level monitoring, BlackSky flies multiple passes per day over critical areas and delivers intelligence as little as 60 minutes after collection.

NIIRS-5 means you can identify military vehicles by type and configuration, see aircraft tail numbers, detect individual people and their shadows. Not marketing language — a technical threshold with defined analytic implications.

BlackSky successfully delivered its first very high-resolution imagery from its second Gen-3 satellite just 12 hours after launch in June 2025, demonstrating its ability to provide real-time space-based intelligence at what it calls "warfighting speed." As currently configured, the Gen-3 constellation achieves roughly hourly revisit over priority sites. BlackSky's Spectra software platform layers automated change detection and object classification onto the imagery pipeline, so what arrives at the analyst is a parsed product: new vehicles flagged, infrastructure changes highlighted, activity anomalies surfaced.

Capella Space is the commercial SAR provider that completes the collection toolkit, operating on entirely different physical principles than any of the optical constellations. Capella was the first American company to bring commercial SAR data to the broader market. Founded in 2016, the company operates SAR satellites that deliver high-resolution imagery with the fastest order-to-delivery time in the commercial SAR sector. Understanding what Capella provides requires understanding what SAR is — which is where most vendor presentations skip the physics and most analyst training gaps begin.


How SAR Works: Earning the Physics Before Trusting the Product

An optical satellite is essentially a very large, very precise camera. It collects reflected sunlight. Clouds block sunlight. Darkness eliminates it. The image you receive is a photograph, and its interpretation requires roughly the same cognitive process as interpreting any photograph: you recognize shapes, textures, shadows, and relationships between objects.

SAR works on entirely different principles, and the difference matters operationally. A SAR satellite does not collect light — it generates its own energy, emits radar pulses toward the Earth's surface, and measures the characteristics of the energy that returns. SAR uses radar signals to penetrate atmospheric conditions, providing near-real-time, all-weather visibility both day and night. A SAR pass over a target obscured by monsoon cloud cover, or on the dark side of the Earth, produces imagery where an optical pass produces nothing. For intelligence collection in tropical regions, in winter at high latitudes, or in active weather systems, this is the difference between collection and a gap.

What vendor materials consistently underemphasize: SAR imagery looks nothing like a photograph, and interpreting it correctly is a distinct skill set that takes significant time to develop. The return signal depends on the geometry of the surface relative to the radar beam, the material properties of the surface, and the angle of incidence. Metal objects return radar energy intensely and appear very bright. Water surfaces, when calm, reflect radar energy away from the satellite and appear very dark. Vegetation scatters energy in complex patterns. Buildings produce distinctive double-bounce returns when radar bounces off the ground and then off a vertical wall before returning to the satellite. The image is grainy with a characteristic noise pattern called speckle that has no equivalent in optical imagery. Roads that an experienced optical analyst would recognize instantly may appear unremarkable or invisible in SAR. Military vehicles parked under tree canopy that would be invisible to optical sensors may appear as subtle geometric anomalies in SAR — but only if the analyst knows what to look for.
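Speckle mitigation illustrates how different SAR processing is from optical workflows. The standard first-pass technique is multilooking: averaging blocks of adjacent pixels, trading spatial resolution for radiometric stability. A minimal sketch in Python (NumPy; the array here is synthetic stand-in data, not real SAR):

```python
import numpy as np

def multilook(intensity: np.ndarray, looks: int = 4) -> np.ndarray:
    """Reduce SAR speckle by averaging `looks` x `looks` pixel blocks.

    Multilooking trades spatial resolution for radiometric stability:
    averaging N independent samples cuts speckle variance by roughly 1/N.
    """
    h, w = intensity.shape
    h2, w2 = h - h % looks, w - w % looks  # crop to a multiple of the block size
    blocks = intensity[:h2, :w2].reshape(h2 // looks, looks, w2 // looks, looks)
    return blocks.mean(axis=(1, 3))

# Synthetic example: constant backscatter corrupted by multiplicative
# exponential speckle, the standard single-look intensity noise model
rng = np.random.default_rng(0)
scene = np.full((128, 128), 2.0)
speckled = scene * rng.exponential(1.0, scene.shape)
smoothed = multilook(speckled, looks=4)

# Speckle variance drops sharply after multilooking; mean brightness is preserved
print(speckled.std(), smoothed.std())
```

The trade is visible in the output: the smoothed image has a quarter of the original's linear dimensions and a fraction of its noise. Real SAR processors use more sophisticated filters (Lee, Frost, and learned variants), but the resolution-for-stability bargain is the same.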

Capella's SAR satellites capture high-precision imagery at sub-0.25-metre resolution, with revisit times under three hours over key regions. The spotlight ultra mode delivers 0.25-metre azimuth resolution with up to 52 seconds of dwell time, long enough to extract fine detail such as motion signatures and partially obscured man-made objects. Capella's Colorized Sub-aperture Imaging (CSI) product represents a meaningful processing innovation: CSI divides the synthetic aperture into multiple sub-apertures, each capturing the scene from a slightly different angle, and assigns each a distinct color, highlighting man-made structures and moving objects in vivid color. This exploits a real physical property of SAR to make moving objects and metal structures visually distinguishable from their surroundings in ways that standard grayscale SAR images do not provide.

In August 2022, Capella announced its next generation of SAR satellites, called Acadia, with increased radar bandwidth from 500 MHz to 700 MHz, providing better resolution, higher imaging quality, and shorter order-to-delivery times. The bandwidth increase has a direct mechanical consequence: wider bandwidth enables finer range resolution, because resolution in the range dimension of a SAR image depends directly on the bandwidth of the transmitted pulse.
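That relationship can be checked directly. Slant-range resolution is delta_R = c / (2B), where B is the transmitted bandwidth; wider pulses resolve finer detail in range. A quick sketch using the two bandwidth figures above:

```python
# Slant-range resolution of a SAR system: delta_R = c / (2 * B)
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz: float) -> float:
    """Finer resolution (smaller delta_R) requires wider pulse bandwidth."""
    return C / (2.0 * bandwidth_hz)

for label, bw in [("500 MHz", 500e6), ("Acadia, 700 MHz", 700e6)]:
    print(f"{label}: {range_resolution(bw):.3f} m")
# 500 MHz -> ~0.300 m; 700 MHz -> ~0.214 m, consistent with the
# sub-0.25-metre figures quoted for the newer generation
```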

The practical lesson for the non-specialist analyst is this: optical satellites, long the standard in Earth imaging, now share the stage with SAR systems that sense the world through an entirely different physical mechanism. SAR is no longer a supporting actor. But elevated access to SAR data does not equal elevated capacity to interpret it. An analyst trained exclusively on optical imagery who begins using SAR products without dedicated training will make systematic errors: misidentifying radar artifacts as targets, missing targets that appear as anomalies rather than recognizable shapes, and miscalibrating confidence in SAR-derived conclusions.


The Non-Imagery GEOINT Stack

Commercial satellite imagery is only part of the open-source GEOINT toolkit, and in some analytic contexts it is not the most useful part. A complete GEOINT workflow integrates at least three additional data streams: Automatic Identification System (AIS) maritime tracking, civil aviation transponder data via aggregators like Flightradar24, and NASA's Fire Information for Resource Management System (FIRMS) fire detection data.

AIS operates on a simple principle: ships above a certain tonnage are legally required to continuously broadcast their identity, position, course, speed, and destination via radio transponder. Each vessel has a unique Maritime Mobile Service Identity (MMSI) number — the maritime equivalent of a tail number. AIS receivers on shore, on other vessels, and increasingly on dedicated satellites aggregate these broadcasts into near-continuous tracking of global maritime traffic. Platforms like MarineTraffic make this data accessible to anyone with a browser. At its best, AIS lets an analyst trace the complete movement history of a specific vessel across an entire voyage, correlate port visits with cargo manifests, and identify patterns of behavior that suggest sanctioned trade, illicit cargo, or deceptive routing.

The limitation is so well-exploited by actors operating outside legal norms that it has spawned its own analytic tradecraft. AIS can be spoofed, disabled, or manipulated. Vessels engaged in sanctions evasion, illicit oil transfers, or military operations routinely go dark — disabling or falsifying their transponder — precisely when they are doing something they don't want tracked. The absence of AIS data is not the same as the absence of vessel activity, and interpreting a gap in the AIS record requires integrating other sources. This is exactly where SAR becomes valuable in maritime contexts. The synthetic aperture radar instrument on each Sentinel-1 satellite (the European Space Agency's freely available SAR constellation) can detect ships, which appear as bright spots in the ocean; combining that detection with AIS records highlights ships that are not broadcasting their identity and location. Such silent vessels could indicate illegal activity, prompting further investigation by maritime authorities.
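A toy version of that fusion logic, assuming SAR ship detections and recent AIS reports are already reduced to (lat, lon) pairs; the function names and the 2 km matching threshold are illustrative choices, not any platform's API:

```python
import math

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def dark_ships(sar_detections, ais_reports, match_km=2.0):
    """Return SAR detections with no AIS report within `match_km`.

    An unmatched detection is a vessel visible on radar but not
    broadcasting -- a candidate for further investigation, not proof
    of illicit activity on its own.
    """
    return [d for d in sar_detections
            if all(haversine_km(d, a) > match_km for a in ais_reports)]

# Illustrative data: three radar contacts, two matching AIS broadcasts
sar = [(36.10, 34.50), (36.40, 34.90), (36.75, 35.20)]
ais = [(36.101, 34.501), (36.401, 34.902)]
print(dark_ships(sar, ais))  # only the third contact is unexplained
```

Production systems must also handle AIS position latency, spoofed coordinates, and SAR false positives from sea clutter, which is why the unmatched list is a queue for analysts rather than a finished product.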

The SAR-AIS fusion approach is not theoretical. Circulating in the open-source intelligence community this week is a case of a ship allegedly carrying stolen Ukrainian grain being tracked from Israel — exactly the kind of problem where multi-source maritime GEOINT produces answers that no single source can provide alone. Multi-source here means AIS position history, SAR detection to confirm location against AIS claims, and port-monitoring imagery to verify loading. The investigative methodology is established; the bottleneck is knowing which sources to fuse and how to weight conflicting signals.

Flightradar24 aggregates civil aviation transponder data in the same way MarineTraffic aggregates AIS, covering aircraft equipped with ADS-B (Automatic Dependent Surveillance-Broadcast) transponders. The utility for intelligence purposes ranges from detecting unusual military aircraft in civil airspace to tracking the movements of executive aircraft associated with sanctioned individuals. In Ukraine, open-source analysts tracked Russian military transport aircraft routes in the early days of the invasion using Flightradar24 data, establishing logistics patterns that corroborated satellite imagery of ground movements. The limits mirror AIS: military aircraft routinely operate without civil transponders, and sophisticated actors know that ADS-B creates a public record of their movements.

NASA's FIRMS detects active fires and thermal anomalies globally using data from the MODIS (Moderate Resolution Imaging Spectroradiometer) and VIIRS (Visible Infrared Imaging Radiometer Suite) instruments aboard NASA and NOAA satellites, delivering near-real-time alert products. Bellingcat — the open-source investigative outlet — subscribes to Planet for daily satellite imagery, but also uses NASA FIRMS data to determine whether a fire took place at a certain time or place, helping verify video of a particular strike. FIRMS is powerful for this purpose: a claimed strike on an ammunition depot, an alleged bombing of a facility, or a reported fire at an industrial site can be corroborated or refuted against FIRMS data in minutes. The thermal signature is independent of what any party claims happened. The limit is resolution: VIIRS detections come at 375-metre pixel resolution, sufficient to confirm that a large fire occurred at a rough location but insufficient to characterize what burned or why.
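The corroboration step reduces to a spatio-temporal filter over FIRMS detections. A minimal sketch with hand-made records standing in for a real FIRMS export (the field layout and thresholds are assumptions for illustration):

```python
from datetime import datetime, timedelta

# Each record: (lat, lon, acquisition datetime), as parsed from an export
detections = [
    (48.52, 35.04, datetime(2024, 3, 12, 1, 40)),
    (48.53, 35.05, datetime(2024, 3, 12, 3, 15)),
    (50.90, 34.80, datetime(2024, 3, 12, 2, 0)),
]

def corroborate(lat, lon, when, window_h=12, radius_deg=0.05):
    """Return thermal detections near a claimed event in space and time.

    radius_deg of ~0.05 (roughly 5 km) is deliberately loose: the pixels
    are 375 m, but geolocation error and fire spread argue for a
    generous search box.
    """
    lo, hi = when - timedelta(hours=window_h), when + timedelta(hours=window_h)
    return [d for d in detections
            if abs(d[0] - lat) <= radius_deg and abs(d[1] - lon) <= radius_deg
            and lo <= d[2] <= hi]

# Claimed strike near 48.52N 35.04E around 02:00 on 12 March
hits = corroborate(48.52, 35.04, datetime(2024, 3, 12, 2, 0))
print(len(hits))  # two anomalies support the claim; the third is elsewhere
```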

Together, AIS, Flightradar24, and FIRMS constitute a non-imagery GEOINT layer that complements satellite collection rather than duplicating it. An analyst relying only on imagery misses the behavioral patterns embedded in movement data. An analyst relying only on movement data cannot see what happened at the locations being monitored. The skilled practitioner fuses these streams, using each source to fill the gaps that the others leave — and maintaining explicit awareness of what none of them can see.


Ukraine as the Proving Ground

The Russian invasion of Ukraine was, among other things, a large-scale operational test of whether open-source GEOINT could perform at a level previously reserved for classified collection. The verdict is complex. Simplifying it in either direction — claiming that open source fully replaced classified GEOINT, or dismissing its contributions as derivative — produces an inaccurate picture.

The collection side of the case is well-documented. On February 28, 2022, Maxar made headlines when it released images of a Russian military vehicle convoy stretched for 40 miles along the road to Kyiv, within hours of collection. Prior to the invasion, the Biden administration publicly shared commercial satellite imagery and online maps that laid bare the lines of tanks poised to pour across the border. Previously, such awareness of troop movements would have been classified and shared only through diplomatic channels. A U.S. administration choosing to declassify strategic warning by pointing to commercially available imagery that anyone could verify independently — that is a structural change in intelligence practice, not just a tactical adaptation.

The Bucha investigation illustrates the interpretation challenge that the collection revolution did not solve. When evidence of civilian atrocities emerged after Russian forces withdrew from the Kyiv suburbs in early April 2022, the core evidentiary question was temporal: were the civilian bodies present before Russian withdrawal, proving Russian responsibility, or placed afterward, as Russian information operations claimed? Maxar imagery answered that question. Bodies were visible on Yablunska Street as early as March 19, while the area was unambiguously under Russian occupation. The New York Times on April 4 released an investigation drawing on high-resolution Maxar imagery, verifying the presence of civilian bodies on that street during a period in which the city was undoubtedly under Russian control.

The interpretation was neither automatic nor performed by novices. The independent findings of Benjamin Strick and the London-based Centre for Information Resilience (CIR) introduced an additional layer of verification through cross-corroboration of Maxar imagery with GEOINT from Planet Labs. CIR's analysts extensively mapped footage from Bucha, geolocating events and assigning temporal and spatial coordinates that pinned them to specific locations in the satellite record. This required analysts who understood both platforms' temporal characteristics, could identify landmarks at sub-metre scale, and knew how to establish confidence intervals around timestamp claims.

Weather and dense cloud cover above Ukraine posed a persistent challenge for optical imaging satellites. SAR sensor satellites could operate through conditions that rendered optical collection unavailable — precisely the conditions that prevailed during much of the winter campaign. The complementarity of optical and SAR was operationally necessary, not theoretical.

Zelenskyy's intelligence sharing with NBC News about Russian satellite surveillance patterns illustrates the other side of the GEOINT equation. According to reporting from NBC News in late March 2026, Ukrainian intelligence showed that Russian satellites photographed Prince Sultan Air Base on March 20, 23, and 25 — and one day after the third image, Iran attacked the base. Zelenskyy's explanation of the pattern — "If they make images once, they are preparing. If they make images a second time, it's like a simulation. The third time it means that in one or two days, they will attack" — is an experienced analyst's description of collection signatures, derived from Ukraine's own experience being watched. That is pattern-of-life analysis applied to the collector, not the target. No automated system produced that tradecraft insight. It came from sustained exposure to what specific imaging patterns predict.


Practical Limits: What the Specifications Don't Tell You

The gap between a satellite provider's capability specifications and what those specifications mean for an actual analytic problem is where most practitioner disappointment lives.

Revisit rates are not coverage guarantees. A constellation offering "ten revisits per day" globally is telling you what is physically possible under optimal tasking conditions with full constellation capacity. What you receive depends on how many other customers are tasking that same constellation over the same region, whether your target is a priority the provider's automated tasking system accommodates, and whether weather — for optical systems — cooperated during any of those ten possible passes. This is not an edge case in operationally relevant regions. Southeast Asia, Central Africa, and much of South Asia have cloud cover rates that render optical imagery of specific targets unavailable for days or weeks at a time. An analyst monitoring a facility in a tropical monsoon zone using an optical constellation may receive only three or four usable collects per month during the wet season.
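The arithmetic behind that gap is worth making explicit. If each pass independently has probability p_clear of usable weather, the chance that all n passes fail is (1 - p_clear)^n; and the independence assumption flatters the constellation, since cloud systems persist across consecutive passes. A sketch with an illustrative 15% clear-sky figure:

```python
def p_no_usable_collect(p_clear: float, passes: int) -> float:
    """Probability that every pass in a period is weather-blocked,
    assuming independent passes. The assumption is optimistic: cloud
    systems persist for days, so real gaps run far longer than this
    arithmetic suggests."""
    return (1.0 - p_clear) ** passes

# "Ten revisits per day" over a monsoon-season target, 15% clear passes
daily_gap = p_no_usable_collect(0.15, 10)
print(f"Chance of a fully blind day: {daily_gap:.1%}")  # roughly 1 day in 5
```

Even this optimistic model leaves the target unobserved about one day in five; with realistically correlated weather, the blind stretches consolidate into the multi-week gaps described above.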

SAR addresses weather penetration but introduces a different coverage constraint: incidence angle. SAR images look to the side of the satellite's ground track, not straight down. The geometry of that look angle determines both what is visible and what is shadowed. A building on one side of a hill is illuminated; the same building on the far side may be in radar shadow. Tall buildings in urban areas create distinctive layover and shadow patterns that can obscure adjacent structures. Interpreting these geometric artifacts requires training that most open-source analysts currently lack.
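Radar shadow is quantifiable under a flat-terrain approximation: a vertical structure of height h illuminated at incidence angle theta (measured from vertical) casts a ground-range shadow of roughly h * tan(theta). A sketch:

```python
import math

def radar_shadow_m(height_m: float, incidence_deg: float) -> float:
    """Ground-range extent of the radar shadow behind a vertical object,
    flat-terrain approximation: shadow = h * tan(theta)."""
    return height_m * math.tan(math.radians(incidence_deg))

# A 30 m building at two common SAR incidence angles
for theta in (25, 45):
    print(f"{theta} deg incidence: {radar_shadow_m(30, theta):.1f} m shadowed")
# Shallow (large) incidence angles stretch shadows, hiding more of what
# sits behind tall structures; steep angles shorten shadows but worsen layover.
```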

Resolution is a floor, not a ceiling. The resolution figure tells you the smallest feature that is, in principle, detectable. At 30-centimetre resolution, a trained imagery analyst can identify a specific aircraft type. An untrained analyst looking at the same image will see something that might be an aircraft. The skill of the interpreter determines what the image yields, independent of what the sensor can theoretically resolve.

Change detection is not event explanation. Automated change detection — comparing imagery from two time periods and flagging pixels that have changed — is genuinely useful and genuinely achievable with current AI. Planet's platform, BlackSky's Spectra, Capella's analytics layer, and third-party platforms all offer change detection products that flag newly appeared vehicles, modified infrastructure, and altered terrain features. What these systems cannot do is explain what changed, why it changed, whether the change is significant, and what the change implies for the analytic question at hand. A parking lot that was empty yesterday and is full today is a change-detection product. Whether that parking lot belongs to a military facility, whether the vehicles are military equipment or civilian cars, whether this is a routine pattern or a departure from baseline — these are interpretation tasks that require context, domain knowledge, and analytic tradecraft.
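Mechanically, the flagging step is simple image arithmetic; everything downstream of it is interpretation. A toy pixel-difference detector (NumPy, synthetic arrays; real pipelines add co-registration, radiometric normalization, and learned classifiers):

```python
import numpy as np

def change_mask(before: np.ndarray, after: np.ndarray,
                threshold: float = 0.2) -> np.ndarray:
    """Flag pixels whose reflectance changed by more than `threshold`.

    This is the core of automated change detection. Note what is absent:
    nothing here knows whether a flagged pixel is a vehicle, a repainted
    roof, or seasonal vegetation. That judgment stays with the analyst.
    """
    return np.abs(after - before) > threshold

before = np.zeros((8, 8))
after = before.copy()
after[2:4, 5:7] = 0.9  # something new appears in a 2x2 patch

mask = change_mask(before, after)
print(int(mask.sum()))  # 4 changed pixels flagged -- but flagged as *what*?
```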


Where AI Enters — and Where It Doesn't

The AI claims attached to commercial GEOINT products are real in aggregate and misleading in specifics. Untangling the two requires precision about what specific functions automated systems perform and what they do not.

Automated object detection and classification — identifying tanks, aircraft, ships, and structures in imagery — has genuinely improved to the point of operational utility. BlackSky's Spectra platform runs object-level analytics on incoming imagery, classifying vehicle types and flagging activity anomalies without requiring analyst involvement for routine monitoring. With a growing library of AI models and very high-resolution imagery, it can automatically detect and classify objects to reduce analyst workload and accelerate mission-critical decisions. The automation frees human attention for the interpretive tasks that machines cannot perform. That trade is real and worth taking.

Planet's Dove constellation feeds automated monitoring pipelines that combine near-daily monitoring with AI-enabled vessel detection to remove maritime blind spots. The Pelican generation adds on-orbit processing via the NVIDIA Jetson platform, enabling real-time analytic applications directly from space. This matters because latency has been a persistent limitation of commercial Earth observation: imagery collected on one side of the Earth must be downlinked to a ground station before analysis can begin. On-orbit processing means the computation happens in space, and what arrives on the ground is a processed result rather than raw data waiting for analysis.

The outer limit of current AI performance is exactly where the tradecraft gets hard. Consider detecting deception and denial. Operators who know they are being watched by commercial satellites will use camouflage, dispersion, timing, and decoy operations to confuse automated detection. Moving equipment at night, operating under tree canopy or netting, or scattering vehicle positions are countermeasures that trained adversaries employ. Automated systems trained on characteristic signatures of equipment in open terrain will systematically miss equipment deliberately placed outside those signatures.

The most consequential limitation is what might be called the first-look problem. Automated change detection can tell you that something changed. It cannot tell you whether the change matters. If vegetation grows across a road in summer, a building gets repainted, or construction equipment appears for routine maintenance — all detectable changes — the system will flag them. A human analyst with contextual knowledge of the facility's function, the operational environment, the regional calendar, and the specific analytic question will dismiss most of these flags in seconds. Without that contextual anchor, automated change detection generates noise at scale rather than signal.

This is why the Army's own doctrinal thinking is moving toward what a recent article in Military Intelligence Professional Bulletin describes as dual transformation: institutionalizing open-source intelligence not as a collection supplement but as a discipline with its own analytic standards and human expertise requirements. The collection problem is largely solved. The interpretation problem requires sustained investment in human skill that no software update will replace.

The completion of all four Sentinel-1 satellites — Sentinel-1D entering full operational service just this week — means the freely available SAR data baseline for open-source analysts has doubled in revisit capacity compared to the degraded constellation that operated between Sentinel-1B's failure in 2022 and Sentinel-1C's launch in December 2024. The Sentinel-1C and Sentinel-1D satellites are also equipped with AIS signal antennas, allowing vessel tracing and identification. Free, open, and fused with AIS. That is a meaningful capability upgrade for analysts who know how to use it — and a technical increment that changes nothing for analysts who do not.


The Constraint That Vendors Won't Name

The commercial GEOINT revolution has produced a genuine paradox. Collection has become so abundant that the analytic system is overwhelmed by data it cannot fully interpret. The constraint on intelligence output is no longer whether an image exists — for most targets, on most days, in most conditions, something can be collected. The constraint is whether an analyst with the right interpretive skills is available to say what the image means.

This creates a specific failure mode that is absent from vendor materials and policy discussions alike. An organization that subscribes to commercial imagery and deploys automated analytics without investing in interpretation expertise will systematically over-trust automated outputs, misread imagery artifacts as intelligence, and miss what trained eyes would recognize as significant. The automation looks like capacity. It is not. It is the front end of a workflow that still requires expert judgment at the output end.

The Dutch navy story from earlier this month captures something adjacent from a different angle. A journalist tracked HNLMS Evertsen for 24 hours using a five-euro Bluetooth tracker concealed in a postcard, exploiting the gap between what the navy assumed about its information security and what a creative adversary could observe. The navy's defensive investment was in the wrong layer. The collection was trivially easy. The failure was in understanding what an adversary with creativity and minimal resources could see.

Commercial GEOINT presents the same structure in reverse. Collection has become trivially accessible. The question of what an analyst without specialist training can extract from that collection is systematically overestimated in the conversations where it gets asked. Platform providers have strong commercial incentives to claim that their AI layers close the interpretation gap. The intelligence services that built their GEOINT expertise over decades of classified practice know that the gap is real and closing it is long-duration work.

The analyst who understands this distinction — who knows what Planet provides, what Capella provides, what those products can resolve and what they cannot, and where the automated analytics fail — is equipped to use these tools correctly and to push back when vendors or colleagues overstate what the imagery proves. The gap between collection and interpretation is not closing on its own. The analyst who decides to close it for their own practice, through dedicated study of SAR interpretation, through building familiarity with the specific failure modes of automated detection, through sustained use of the full non-imagery GEOINT stack — that analyst is building a competitive advantage that will not be automated away any time soon.

The imagery is abundant. The question is who can read it.