Module 6, Episode 2: Synthetic Media, Deepfakes, and Graph-Based Verification
The Pollution Problem: What Synthetic Media Does to OSINT
Start with what the threat is, precisely, because the discourse around deepfakes has been so saturated with hyperbole that analysts have developed a kind of fatigue — a sense that the warnings are theatrical. They are not.
Generators built on advanced frameworks such as Generative Adversarial Networks and diffusion models are capable of producing highly realistic yet fabricated content. That is the technical fact. The operational consequence is that every image, video, or voice recording that enters an OSINT collection pipeline now carries a non-trivial probability of being synthetic — and that probability is rising faster than most practitioners' mental models have updated.
The capability threshold crossed in 2025 was not incremental. Video realism made a significant leap thanks to video-generation models designed specifically to maintain temporal consistency — producing videos with coherent motion, consistent identities of the people portrayed, and content that holds together from one frame to the next. These models disentangle identity representation from motion information, so that the same motion can be mapped to different identities. The result: stable, coherent faces without the flicker, warping, or structural distortions around the eyes and jawline that once served as reliable forensic evidence.
That last sentence is the one to sit with. The artifacts that trained analysts had relied on for years — the uncanny shimmer at the hairline, the lighting discontinuity at the jawline, the wrongly blinking eye — are being systematically eliminated by the next generation of generative models. Not because anyone optimized against forensic detection as a primary goal, but because they were optimizing for perceptual realism, and realism requires exactly the physical consistency that detection relied on as a failure mode. The forensic tells are disappearing as a collateral consequence of general improvement.
Voice is further along than video. High-fidelity voice cloning technology now requires less than three seconds of audio to create a precise, emotive replica. Three seconds of audio is the length of a conference room greeting. It is shorter than a single sentence in an earnings call transcript. The attack surface for voice cloning is, practically speaking, any executive who has ever appeared in public-facing media — which describes most of the people at the top of organizations that analysts study, monitor, or protect.
The documented fraud cases make the operational reality concrete. The most prominently documented case involved a finance employee who transferred 15 separate transactions totaling $25.6 million after a video conference in which every participant, including the apparent CFO, was an AI-generated deepfake. This was the Arup case, Hong Kong, early 2024. The employee had initially suspected phishing, but the live video call — with convincing AI-generated colleagues, synchronized facial movements, and realistic voice replication — overcame his skepticism entirely. The fraud was discovered only through manual verification with corporate headquarters some time later. This case shattered the assumption that video calls are inherently trustworthy and established the operational template for the deepfake video campaigns that followed throughout 2025 and into 2026.
The template has been replicated. In February 2025, the Milan Public Prosecutor opened an investigation into a voice cloning scam against Italian entrepreneurs using the cloned voice of Italian Defense Minister Guido Crosetto. Frauds documented in 2025 and 2026 show a convergence of three historically separate attack vectors: the email layer, carrying the initial fraudulent instruction; the audio layer, a "confirmation" call using cloned voice that reassures; and the visual layer, a meeting with the executive's face on screen. The result is a multi-channel attack that neutralizes legacy controls based on "second confirmation through an alternative channel." If the email is fake, the verification call is fake, and the approval video meeting is fake, second confirmation does not add safety. It multiplies by zero.
That mathematical observation is where the epistemological problem begins for intelligence analysis. The standard practice of triangulating across sources depends on sources being independent. When one forgery can simultaneously populate multiple channels, the assumption of independence breaks down. You have three confirmations of the same lie.
The OSINT-specific threat is not just that individual pieces of content are forged. Synthetic media is being injected into the ecosystem at scale, targeting the specific credibility mechanisms that analysts have built workflows around. The rise of generative AI has turbocharged the ability of state actors and propagandists to fabricate convincing satellite imagery during major conflicts. As the US-Israeli war against Iran raged in early 2026, Tehran Times, a state-aligned English daily, posted on X a "before vs. after" image it claimed showed "completely destroyed" US radar equipment at a base in Qatar. It was an AI-manipulated version of a Google Earth image from the previous year of a US base in Bahrain. The manipulated photo garnered millions of views as it spread across social media in multiple languages.
Satellite imagery was one of the last categories that OSINT analysts treated as relatively trustworthy — photographic overhead imagery carries an implicit physics argument that is harder to fake than ground-level video. Reports of fake satellite imagery created or edited using AI also followed the Russia-Ukraine conflict and the four-day war between India and Pakistan. The threat is not hypothetical and it is not new, but the quality has reached the point where detection requires active effort rather than a cursory glance.
The fabricated satellite images follow the emergence of imposter OSINT accounts on social media that appear to undermine the work of credible digital investigators. This is the most sophisticated version of the threat: not just fake content, but fake OSINT accounts distributing fake content using the visual grammar and citation conventions of legitimate investigation. The aim is to package disinformation to look like the work generated and published by a legitimate OSINT organization, piggybacking on the transparency industry's hard-earned reputation for probity. An early example dates to 2014, when Russia countered Bellingcat's claims about MH17, the Malaysia Airlines flight shot down over Ukraine, by doctoring satellite images and photographs to falsely implicate Ukraine — mimicking the presentational style of OSINT practitioners, including visual aids, text boxes, and false references.
When the disinformation is designed to look like OSINT, it attacks the verification process itself, not just the conclusion.
What Media Forensics Can Still Do — and Where It Stops
Writing off pixel-level forensics entirely would be a mistake. The field has real capability, and dismissing it wholesale in favor of network analysis would leave analysts without tools that still add genuine value in specific circumstances. The question is understanding what those circumstances are.
The foundational techniques retain utility when applied appropriately. Digital forensic investigators analyze how many times a video has been compressed, verify whether the metadata story adds up, and trace the chain of custody from creation to the courtroom. Compression analysis — examining the signature layering of multiple lossy compressions — can reveal whether a file has been processed in ways inconsistent with its claimed provenance. Original video recorded directly to device and uploaded once has a different compression signature than video that has been downloaded, edited, and re-uploaded. This signal degrades as content circulates, but in early stages of a disinformation event, before content has been widely reshared, it provides real forensic value.
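To make the compression signal concrete, here is a minimal sketch, assuming Pillow is installed, that reads a JPEG's quantization tables; differing tables between two copies of nominally the same file indicate at least one extra lossy encode somewhere in the chain. Filenames are hypothetical, and the check is a coarse screen rather than a forensic verdict.

```python
# Sketch: inspect JPEG quantization tables as a rough recompression signal.
# Assumes Pillow is installed; filenames are hypothetical placeholders.
from PIL import Image

def quantization_summary(path: str) -> dict:
    """Return the JPEG quantization tables, or {} for non-JPEG input."""
    img = Image.open(path)
    if img.format != "JPEG":
        return {}
    # Pillow exposes the file's tables as {table_id: [64 ints]}.
    return getattr(img, "quantization", {})

def looks_recompressed(path_a: str, path_b: str) -> bool:
    """Crude check: differing tables between two copies of the 'same'
    file indicate an extra lossy encode somewhere in the chain."""
    return quantization_summary(path_a) != quantization_summary(path_b)

if __name__ == "__main__":
    tables = quantization_summary("suspect.jpg")  # hypothetical filename
    for table_id, values in tables.items():
        print(f"table {table_id}: first row {values[:8]}")
```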
Metadata analysis retains diagnostic value while carrying significant caveats. EXIF data embedded in image files records technical details including camera model, GPS coordinates, and timestamp — when it is present. Even completely legitimate content can get mangled along the way. Upload an untouched photo to social media, and the platform automatically compresses, crops, or adjusts the colors, making it harder to verify later. X (formerly Twitter) strips EXIF metadata from uploaded images as a matter of platform policy. Telegram similarly processes images in ways that destroy metadata. By the time most social media content reaches an analyst, the metadata that would have been most useful has already been removed — not by an adversary, but by routine platform processing.
This is the fundamental problem with basing a verification workflow on metadata: absence of metadata cannot be interpreted as evidence of tampering, because legitimate content loses its metadata constantly.
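When EXIF does survive, extracting it is straightforward. A sketch using Pillow, with a hypothetical filename; an empty result is the expected default for platform-processed content, not a red flag.

```python
# Sketch: pull whatever EXIF survives platform processing. Absence of
# EXIF is routine after social media uploads, NOT evidence of tampering.
from PIL import Image, ExifTags

def readable_exif(path: str) -> dict:
    img = Image.open(path)
    raw = img.getexif()
    # Map numeric tag IDs to human-readable names where Pillow knows them.
    named = {ExifTags.TAGS.get(tag_id, tag_id): value
             for tag_id, value in raw.items()}
    # GPS data lives in its own IFD; 34853 is the standard GPSInfo tag.
    gps_ifd = raw.get_ifd(34853)
    if gps_ifd:
        named["GPSInfo"] = dict(gps_ifd)
    return named

exif = readable_exif("downloaded.jpg")  # hypothetical filename
for key in ("Model", "DateTime", "Software", "GPSInfo"):
    print(key, "->", exif.get(key, "<stripped or never present>"))
```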
AI-based forensic detection adds another layer, but carries its own severe limitations. The deepfake detection landscape in 2026 resembles a high-stakes arms race. AI-generated images, video, and audio have reached a quality level where human detection is essentially impossible, and even automated detectors face increasingly sophisticated adversarial techniques. Deepfake detection systems have achieved impressive accuracy on conventional forged images; however, they remain vulnerable to anti-forensic or adversarial samples deliberately crafted to evade detection. Anti-forensic techniques — often derived from adversarial attack paradigms — are designed to conceal or suppress the traces that detectors rely on. By introducing imperceptible perturbations to forged content, these methods can drastically reduce the confidence of a detection model, producing false negatives.
The arms race dynamic has a structural asymmetry that matters for operational planning. A detection model that achieves 94% accuracy on existing deepfakes becomes partially obsolete the moment a new generation model is released, while the generation side has no comparable constraint — it only needs to fool a human or a detector, not maintain any particular accuracy standard against a benchmark. Detection is on defense. Generation sets the terms.
Transformer-based architectures show significantly better cross-dataset generalization — an 11.33% performance decline — than CNN-based approaches, which suffer more than 15% decline when tested on deepfakes generated by methods different from those in the training data. A detector trained on today's deepfakes will perform worse on tomorrow's deepfakes, and tomorrow's will arrive on a timeline determined by adversaries, not analysts.
Studies in 2025 and 2026 show humans perform only slightly better than random guessing when identifying high-quality deepfakes. When an analyst uses their own perception as the primary forensic instrument — watching the video, listening to the audio, checking whether it "feels" right — they are performing random chance with a confidence display attached. The subjective sense of certainty when consuming a convincing deepfake is not correlated with accuracy. It is a feature of the forgery.
A 2025 Reuters survey found that 67% of respondents "often doubt whether video content is real," up from 33% in 2023. This "liar's dividend" — where the existence of deepfakes allows real content to be dismissed as fake — may ultimately be more damaging than deepfakes themselves. For intelligence analysis, the liar's dividend has an immediate practical form: genuine, important video evidence can be contested and neutralized by simply asserting it is AI-generated, forcing analysts to prove a negative in a threat environment where proof is technically difficult. When President Trump said Iran was responsible for a missile strike on a girls' elementary school in early 2026, he preempted an ongoing military investigation and contradicted forensic analyses by multiple news outlets. "In my opinion, based on what I've seen, that was done by Iran," Trump told reporters. The subsequent OSINT investigation — led by Bellingcat and corroborated by the New York Times, AP, and CNN — demonstrated the power of layered verification, but it also illustrated the friction: when official narrative and forensic evidence conflict, and when the credibility of visual media is systematically contested, the analyst's burden of proof rises sharply.
The honest assessment of media forensics in 2026: it works at the margins, it adds value as one layer in a deeper stack, and it fails precisely when you need it most — when the content is sophisticated, when it has been processed through platform pipelines that destroy metadata, and when the adversary has invested in anti-forensic techniques. The field is not obsolete. Building a verification strategy on it as the primary instrument, however, is building on terrain that is being systematically eroded.
The meaningful line of defense will shift away from human judgment toward infrastructure-level protections — secure provenance, such as media signed cryptographically using the Coalition for Content Provenance and Authenticity specifications — as well as multimodal forensic tools. Examining pixels harder will no longer be adequate.
The Pivot: Why Network Context Is Harder to Forge Than Content
If a specific piece of content can be convincingly fabricated, the question that follows is whether the environment around the content can be fabricated as well. The answer is: sometimes, but much less reliably, and at a cost that scales with the complexity of the network being counterfeited.
This is the core logic of graph-based verification. A deepfake video requires one or several actors to fabricate the content itself. Fabricating the relational context of that content — the independent accounts that shared it, the timeline of publication across multiple platforms, the corroborating eyewitness accounts from people with distinct and verifiable digital histories, the satellite imagery from commercial providers with separate authentication chains — requires coordinating a much larger and more complex operation. Each additional independent source that would need to be fabricated or compromised adds cost and creates additional points of potential failure for the disinformation campaign.
OSINT investigations expose a critical flaw in one of the field's long-standing assumptions: geolocating, chronolocating, and corroborating footage lends credibility to the depicted event, but not necessarily authenticity to the footage itself. Conflating these two layers erases the distinction between the reality of an incident and the truthfulness of the material depicting it. When analysts geolocate a video to a specific street corner, they confirm that the footage is consistent with that location — not that the footage is real. A geolocated video may suggest that an event plausibly occurred at a particular location, but it does not guarantee the footage is genuine.
Here is the asymmetry that graph-based verification exploits: a sophisticated adversary can generate AI footage that geolocates correctly. They can generate a "before" satellite image with the right coordinates. What is considerably harder to generate is a network of independent actors — real people with verifiable digital histories extending back years, distinct posting patterns, non-overlapping social graphs — all independently reporting the same event from different vantage points, with consistent but not identical details, at the time the event would have occurred.
Investigating Coordinated Inauthentic Behavior — the organized manipulation of online discourse by botnets and troll farms — is one of the most critical and complex tasks in modern analysis. State-sponsored disinformation networks use synthetic amplification to give false narratives the illusion of genuine popularity. They establish networks of bots and trolls that engage in coordinated activities: simultaneous posting, topic hijacking, and targeted harassment. This is the attack the adversary deploys — not simply forging content, but constructing the apparent network consensus around content. Graph analysis catches what pixel forensics misses.
Behavioral corroboration uses account creation dates of bot clusters. If hundreds of accounts were created on the same day or week, cross-referencing that date with known national or global political events can reveal a timeline of orchestration. The network tells the story that the content was designed to obscure. Accounts created in temporal clusters correlated with political events, accounts with follower networks that overlap suspiciously with known state media affiliates, posting cadences that suggest automation rather than human behavior, geographic metadata inconsistencies between claimed location and observed infrastructure — none of these signals appear in the video frame. They are only visible when you look at the topology of the network distributing the content rather than the content itself.
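A minimal sketch of the creation-date clustering check described above; the account records are hypothetical sample data, and the burst threshold is illustrative rather than calibrated.

```python
# Sketch: flag bursts of account creation dates in a suspected
# amplification cluster. Records and threshold are hypothetical.
from collections import Counter
from datetime import date

accounts = [  # (handle, creation date), hypothetical collected data
    ("user_a", date(2025, 11, 3)),
    ("user_b", date(2025, 11, 3)),
    ("user_c", date(2025, 11, 4)),
    ("long_timer", date(2016, 2, 9)),
]

# Bucket creation dates by ISO week; organic follower populations
# rarely share a single registration window.
by_week = Counter(d.isocalendar()[:2] for _, d in accounts)

BURST_THRESHOLD = 2  # illustrative; tune against known-organic baselines
for (year, week), n in by_week.most_common():
    if n >= BURST_THRESHOLD:
        print(f"{n} accounts created in ISO week {year}-W{week:02d}; "
              "cross-reference this window against known political events")
```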
Network mapping lays out the relationships between users by tracking how they interact through follows, mentions, replies, and reposts — identifying highly connected groups and recognizing collusion among suspected disinformation spreaders within the same network. The practical toolkit includes Maltego (an entity relationship mapping platform), Gephi (an open-source tool for large-scale graph visualization and community detection), and enterprise platforms like Palantir AIP and Linkurious that integrate graph analytics with structured intelligence data. The graph does not prove a specific piece of content is fake. It tells you whether the amplification structure around that content looks like organic human behavior or a coordinated operation — a distinction that is independent of the content's pixels.
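The same logic in miniature, using the open-source networkx library: build an interaction graph from repost records and surface densely connected clusters for manual review. The edge list is hypothetical sample data, and a dense cluster is a signature worth review, not proof.

```python
# Sketch: map repost relationships and detect tightly-knit communities.
# Uses networkx; the edge list is hypothetical sample data.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

reposts = [  # (account_that_reposted, account_reposted_from)
    ("amp_01", "seed"), ("amp_02", "seed"), ("amp_03", "seed"),
    ("amp_01", "amp_02"), ("amp_02", "amp_03"), ("amp_03", "amp_01"),
    ("journalist", "seed"), ("local_witness", "journalist"),
]

G = nx.Graph()
G.add_edges_from(reposts)

# Tight, mutually reposting clusters around one seed account are a
# coordination signature that merits manual review.
for community in greedy_modularity_communities(G):
    density = nx.density(G.subgraph(community))
    print(sorted(community), f"internal density={density:.2f}")
```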
"Scalable provenance" is the technical term for what amounts to shifting verification from content to chain of custody — from asking "does this pixel look right?" to asking "can we establish an unbroken, documented record of where this content came from and how it got here?"
The Tehran Times Case: What Graph Analysis Caught and Why
The AI-fabricated satellite imagery that circulated during the 2026 US-Iran conflict is a case study in how synthetic media is deployed specifically against OSINT credibility mechanisms — and how detection worked in ways that illuminate both the capabilities and limits of each verification approach.
In early March 2026, Tehran Times posted on X the "before vs. after" image introduced above, claiming it showed "completely destroyed" US radar equipment at a base in Qatar. It was an AI-manipulated version of a Google Earth image from the previous year of a US base in Bahrain. The subtle visual giveaway: a row of cars parked in identical positions in both the authentic satellite photo and the manipulated image.
The pixel-level error — the cloned row of cars — is the kind of mistake that current generation tools still produce when compositing elements across images. A physically realistic composite requires that every element in the scene be consistent with the claimed time, location, and lighting conditions. The parking lot detail was missed by millions of people who shared the image, and identified only by researchers who knew to look and had reference imagery to compare against. That is a high bar for routine verification at intelligence tempo.
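The cloned-cars class of error can be screened for mechanically when reference imagery is available. A sketch using the third-party imagehash package, assuming the two images are the same size and roughly co-registered; the tile size, filenames, and distance cutoff are all illustrative.

```python
# Sketch: look for identically repeated regions between a "before" and
# "after" image pair. Uses the third-party imagehash package; assumes
# the images are co-registered. Filenames are hypothetical.
from PIL import Image
import imagehash

TILE = 64  # pixels; illustrative

def tile_hashes(path: str):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    for x in range(0, w - TILE + 1, TILE):
        for y in range(0, h - TILE + 1, TILE):
            tile = img.crop((x, y, x + TILE, y + TILE))
            yield (x, y), imagehash.phash(tile)

before = dict(tile_hashes("before.png"))
after = dict(tile_hashes("after.png"))

# In a region claimed to show new damage, a near-zero hash distance
# means the pixels were carried over unchanged: a compositing tell.
for pos in before.keys() & after.keys():
    if before[pos] - after[pos] <= 2:  # Hamming distance on the phash
        print(f"tile at {pos} is effectively unchanged between images")
```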
The network-level signal was louder and more accessible. A graph analysis of the image's distribution would have shown: the source account is Tehran Times, a documented state-aligned outlet with a known disinformation history. The image spread along a network heavily populated by accounts with amplification patterns consistent with Iranian state media ecosystems. The "before" reference image was traceable to Google Earth imagery of a different base in a different country. The claimed location — Qatar — was inconsistent with the operational context of the conflict at the time of publication, which cross-referencing with commercial satellite data from providers like Maxar or Planet would have confirmed within minutes.
An analyst running a graph-based workflow would have flagged the network context as suspicious before examining a single pixel. The source attribution is state-aligned. The amplification network shows coordination signatures. The claimed imagery has a traceable reference origin in a different geography. Each of those signals is network-level. None of them require pixel forensics.
Journalists and researchers from Bellingcat, the New York Times, the Associated Press, and CNN analyzed satellite images, verified videos, and US statements about military positioning to reach their conclusions about responsibility for the school strike that became one of the most contested attribution questions of the conflict. Bellingcat was the first to analyze footage showing a Tomahawk cruise missile striking an area adjacent to the girls' school. Bellingcat's head of research, Carlos Gonzales, told PolitiFact his team had been following the situation in Iran since January, monitoring how weapons and armament were building up around the region. The team corroborated the footage with what was known about the attack, matching elements in the video to satellite imagery by manually comparing features such as trees, buildings, walls, power poles, and cables — confirming that the video was shot in the school's direction.
The Bellingcat workflow demonstrates the mature form of network corroboration in practice. The conclusion did not rest on any single piece of evidence. It rested on the intersection of multiple independent verification chains: weapon identification from video footage (a Tomahawk, a weapon only the US operates in this theater), geolocation of the footage to a specific physical location, cross-referencing of that location with satellite imagery from commercial providers on the relevant date, and consistency of the finding with the broader pattern of US military operations the team had been tracking longitudinally. None of these chains depend on the same underlying data. All of them have to point in the same direction for the conclusion to hold.
To defeat this analysis, an adversary would need to fabricate not just one video, but also the satellite imagery from a commercial provider, the weapon identification evidence, and the pattern of prior operational activity that serves as context — and make all of those fabricated elements mutually consistent across multiple independent institutional sources with their own verification processes. The cost and complexity of that fabrication scales with the richness of the independent corroboration network.
OSINT investigations should be led by contextually grounded practitioners. Investigators with deep knowledge of local social, political, and linguistic conditions are far better equipped to detect AI manipulation and its subtle failures. Their contextual expertise allows them to spot implausible cues, interpret narratives embedded in AI-generated content, evaluate source reliability, corroborate findings with other evidence, and navigate the evolving information ecosystem more effectively than analysts working remotely or solely through automated tools. This is an argument about what contextual knowledge provides within a graph-based verification framework: it enables analysts to identify which nodes in the network are genuinely independent and which are interconnected in ways that undermine their value as corroborating sources.
Building the Verification Stack When Content Cannot Be Trusted
The practical architecture of a verification workflow in 2026 has to be designed around the premise that any specific piece of content is potentially fabricated. The verification question is therefore not "is this content real?" but "is the network context around this content consistent with genuine organic documentation of a real event?"
The stack has four layers, applied in order because each layer conditions the interpretation of the next.
Source and network provenance comes first. Before examining content at all, the analyst establishes who published it, through what infrastructure, when, and with what amplification pattern. Is the publishing account of known provenance with a verifiable history? Does the amplification network show signatures of organic spread or coordinated inauthentic behavior — account creation timing, posting cadence, follower network topology? A quick check of a site's WHOIS record, DNS history, or analytics tags can reveal who runs it — or at least when it was created. Coordinated disinformation often relies on freshly registered domains with similar infrastructure patterns. This layer is where Maltego, ShadowDragon (a commercial link-analysis and social media intelligence platform), and similar tools do their primary work — not on the content itself, but on the infrastructure distributing it.
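The domain-age check mentioned above is easily scripted. A sketch using the third-party python-whois package; registrar responses vary widely, so missing data should be treated as unknown rather than as a signal, and the freshness cutoff is illustrative.

```python
# Sketch: flag freshly registered domains among claimed "independent"
# sources. Uses the third-party python-whois package; cutoff is
# illustrative and the domain list is hypothetical.
from datetime import datetime, timezone
import whois  # pip install python-whois

def domain_age_days(domain: str):
    record = whois.whois(domain)
    created = record.creation_date
    if isinstance(created, list):  # some registrars return a list
        created = min(created)
    if created is None:
        return None  # unknown, not suspicious by itself
    if created.tzinfo is None:
        created = created.replace(tzinfo=timezone.utc)
    return (datetime.now(timezone.utc) - created).days

for domain in ["example.com"]:  # hypothetical source list
    age = domain_age_days(domain)
    if age is not None and age < 90:  # illustrative freshness cutoff
        print(f"{domain}: registered {age} days ago; scrutinize further")
```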
Temporal and geographic cross-referencing is the second layer. For imagery and video, does the content's claimed time and location hold up against independent reference data? A clip claims to show a strike from "today." Reverse video search finds that the footage was archived a year earlier. The local weather then was rain; now it is clear. Chronolocation trumps instinctive plausibility and confirmation bias. Commercial satellite data from Maxar, Planet, or the EU's Sentinel program — all four Sentinel-1 synthetic aperture radar satellites became fully operational in early 2026, providing dense temporal coverage of most of the earth's surface — provides a reference layer for claimed events. If the content purports to show damage that cannot be corroborated in satellite imagery from the claimed period, that negative finding has evidentiary value.
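The chronolocation logic reduces to a pair of comparisons once the reference data is gathered. A sketch with hypothetical inputs standing in for reverse-search results and archived weather records; the comparisons, not the data, are the point.

```python
# Sketch: a minimal chronolocation consistency check. All inputs are
# hypothetical stand-ins for reverse-search and weather-archive lookups.
from datetime import date

claimed_date = date(2026, 3, 2)        # what the post asserts
earliest_sighting = date(2025, 4, 17)  # from reverse video/image search
claimed_weather = "clear"              # visible in the footage
archived_weather = "rain"              # record for the claimed date/place

if earliest_sighting < claimed_date:
    print("FAIL: footage circulated before the claimed event date")
if claimed_weather != archived_weather:
    print("FAIL: weather in footage contradicts the archived record")
```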
Multi-source corroboration is the third layer, and it is where the independence assumption must be aggressively tested. The question is not how many sources report the same thing, but whether those sources are genuinely independent — whether they could have all been seeded from the same origin, whether they share infrastructure, whether their reporting contains identical language suggesting a common template rather than independent observation. The "cross-reference in threes" heuristic is sound, but it requires that the three sources actually be independent. In a well-resourced disinformation operation, three apparently independent sources may have been seeded from the same infrastructure.
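Template reuse across supposedly independent sources is one independence test that automates well. A sketch using difflib from the standard library; the reports and similarity cutoff are hypothetical, and a high score is grounds for doubt, not proof of coordination.

```python
# Sketch: test "independent" sources for copy-paste language. Sample
# texts and the cutoff are hypothetical.
from difflib import SequenceMatcher
from itertools import combinations

reports = {
    "witness_1": "Massive explosion near the radar site, smoke for miles.",
    "witness_2": "Massive explosion near the radar site, smoke for miles!",
    "witness_3": "Heard a loud bang around 6am, saw smoke to the east.",
}

TEMPLATE_CUTOFF = 0.85  # illustrative; calibrate on known-organic corpora
for (a, ta), (b, tb) in combinations(reports.items(), 2):
    ratio = SequenceMatcher(None, ta.lower(), tb.lower()).ratio()
    if ratio >= TEMPLATE_CUTOFF:
        print(f"{a} vs {b}: similarity {ratio:.2f}; independence doubtful")
```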
Content forensics sits at the fourth layer, not the first. By the time you reach it, the source network assessment and temporal cross-reference have already established a prior on the content's credibility. Content forensics then adds precision — catching errors that survived the network analysis, particularly in the earlier phases of a disinformation campaign before amplification networks have propagated the content widely and before metadata has been stripped by platform processing. BBC Verify uses a layered mix of open-source intelligence, forensic techniques, metadata and provenance checks, geolocation and satellite imagery, audio analysis, and human cross-checking to test viral video identity claims. The layering is the point. No single method is sufficient; reliability comes from the intersection.
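One way to make "each layer conditions the interpretation of the next" concrete is to treat each layer's verdict as a likelihood ratio applied to running odds that the content is genuine. The numbers below are invented for illustration; the multiplicative structure, not the values, is the point.

```python
# Sketch: layered verification as sequential odds updates. All
# likelihood ratios are invented for illustration only.
prior_odds = 1.0  # even odds before any checks

layer_likelihood_ratios = {
    "source/network provenance": 0.2,    # state-aligned, coordinated spread
    "temporal/geo cross-reference": 0.5, # claimed location weakly supported
    "multi-source corroboration": 0.3,   # sources share template language
    "content forensics": 0.8,            # no pixel-level anomaly found
}

odds = prior_odds
for layer, lr in layer_likelihood_ratios.items():
    odds *= lr
    print(f"after {layer}: odds genuine = {odds:.3f}")

print(f"posterior P(genuine) = {odds / (1 + odds):.1%}")
```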
The emerging piece of the stack is cryptographic provenance through the C2PA standard. C2PA (the Coalition for Content Provenance and Authenticity, a cross-industry technical specification) provides cryptographic metadata attached at creation, recording device, software, and edit history, secured by PKI (public key infrastructure, the same certificate-based trust system underlying HTTPS). When content is created by a C2PA-enabled device or tool — certain Nikon and Sony cameras, content from Adobe products, some AI generation tools — it carries a manifest recording the chain of custody from creation through any subsequent edits. A multi-layered defense strategy combining C2PA provenance standards, invisible watermarking, and AI-powered forensic analysis is beginning to shift the balance — at least for content distributed through mainstream platforms.
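Checking for a C2PA manifest can be scripted around c2patool, the Content Authenticity Initiative's command-line tool, which prints a file's manifest store as JSON. A sketch assuming the tool is installed and on PATH; absence of a manifest is the expected case for most user-generated content.

```python
# Sketch: shell out to c2patool to read a C2PA manifest if present.
# Assumes c2patool is installed and on PATH; filename is hypothetical.
import json
import subprocess

def read_c2pa_manifest(path: str):
    result = subprocess.run(
        ["c2patool", path],  # prints the manifest store as JSON
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        return None  # no manifest, unreadable file, or stripped provenance
    return json.loads(result.stdout)

manifest = read_c2pa_manifest("photo.jpg")
if manifest is None:
    print("no C2PA provenance: expected for most UGC, not proof of forgery")
else:
    print("manifest found; inspect signer and edit history before trusting")
```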
The limitation is structural and significant. C2PA only works for content created with C2PA-enabled tools. Content created with non-compliant tools, content that predates the standard's adoption, content shared through platforms that strip metadata — none of this carries the provenance signal. In a conflict zone where most documentation is recorded on consumer smartphones using stock camera apps, C2PA coverage is sparse. The standard is most useful for professional media environments and least useful for the user-generated content that constitutes the majority of OSINT collection in high-tempo operational contexts.
The honest description of where the stack currently stands: it is stronger at catching coordinated campaigns than individual sophisticated forgeries. A well-resourced state actor creating a single high-quality deepfake for targeted use — delivered to one specific analyst or decision-maker through a trusted channel — presents a much harder verification problem than a mass disinformation campaign distributing fabricated satellite imagery through social media at scale. Mass campaigns require coordination infrastructure that leaves network signatures. Single-target deceptions can be designed with minimal infrastructure footprint. The verification stack's strengths and weaknesses are not evenly distributed across the threat environment.
From "Is This Real?" to "Does This Network Hold?"
The practice change this analysis demands is concrete. Abandon the intuitive model of verification — where an analyst examines a piece of content and makes a judgment about whether it looks authentic — and replace it with a network model where the question is whether the relational structure around the content is consistent with genuine documentation of a real event.
This shift has downstream effects on what skills matter, what tools to prioritize, and how to document analytical conclusions in ways that are defensible.
On skills: the analyst who is most valuable in this environment is not the one who can spot deepfake artifacts — though that skill retains value — but the one who can read network topology, who understands what organic spread looks like versus coordinated amplification, who knows which commercial satellite providers offer what temporal resolution for what geographies, and who has the contextual knowledge to assess whether a claimed event is consistent with the documented operational environment. Verification now requires dual literacy: mastery of traditional open-source methods and technical understanding of AI's behaviors, biases, and affordances.
On tools: the priority investment is in graph analysis infrastructure, not detection classifiers. A detection classifier will be partially obsolete within months of the next generation of generative models. A graph analysis capability that can map amplification networks, identify infrastructure overlaps between accounts, and cross-reference posting patterns against known disinformation actor profiles retains its value because the network signatures of coordinated operations are more stable than the pixel signatures of a specific generation architecture.
On documentation: analysts must now document not just their conclusions about content, but their verification chain — precisely which independent sources were consulted, how their independence was confirmed, what the network topology of amplification looked like, and what residual uncertainties remain. Metadata is the first place to look for inconsistencies, but you need to go beyond it because it can be falsified. Any assessment that rests on a single source, or on multiple sources whose independence has not been verified, is exposed — and in a legal or policy context, that exposure matters enormously.
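Documentation of this kind benefits from a fixed structure. A minimal sketch of a record format for a verification chain; the field names are our own invention, mirroring the requirements above.

```python
# Sketch: a record structure for documenting a verification chain so a
# reviewer can attack specific links rather than the bare conclusion.
# Field names are illustrative, not a standard.
from dataclasses import dataclass, field

@dataclass
class VerificationLink:
    claim: str                 # what this link establishes
    evidence: str              # the artifact or dataset consulted
    independence_basis: str    # why this source is independent of others
    residual_uncertainty: str  # what this link does NOT establish

@dataclass
class VerificationChain:
    conclusion: str
    links: list = field(default_factory=list)

    def render(self) -> str:
        lines = [f"Conclusion: {self.conclusion}"]
        for i, link in enumerate(self.links, 1):
            lines.append(f"  {i}. {link.claim}")
            lines.append(f"     evidence: {link.evidence}")
            lines.append(f"     independence: {link.independence_basis}")
            lines.append(f"     residual: {link.residual_uncertainty}")
        return "\n".join(lines)
```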
The Bellingcat investigation of the Tomahawk strike shows what rigorous documentation looks like. The conclusion was not "we are confident the US carried out this strike." It was: here is the weapon identified in independent video, here is its geographic location confirmed by satellite reference imagery, here are the weapon's operational characteristics that constrain which actors could have deployed it, here is the consistency of this finding with the broader operational pattern we have been tracking. Each link in that chain is stated explicitly, with its own evidence base, allowing a reader or critic to identify precisely where they would need to contest the analysis rather than simply asserting the conclusion is wrong.
The adversary's advantage in this environment is asymmetric access to forgery capability. The analyst's advantage — and it is real — is that building a convincing false network of independent corroboration at scale is harder and more expensive than generating a convincing individual piece of synthetic media. A sophisticated deepfake can be created by one person in an afternoon. Fabricating the independent documentation ecosystem that would allow it to survive rigorous graph-based verification requires an operation: infrastructure, coordination, the kind of footprint that leaves traces of its own.
The analyst's task is to force adversaries to pay that cost. Every additional independent verification chain that must be counterfeited to sustain a fabrication is a friction point. Every documented inconsistency in the network topology around suspect content is an exploitable weakness. Inserting doubt is enough to delegitimize something in this context — no complex fake required. That cuts both ways: the adversary inserts doubt about genuine content by asserting fabrication; the analyst inserts productive doubt about fabricated content by mapping the network that fails to sustain it.
Media forensics is not dead. Network corroboration is not infallible. The arms race continues on both sides, and no methodology available to working analysts today provides certainty against a well-resourced adversary operating at the frontier of generative capability. The graph-based verification stack provides something more modest and more durable: it shifts the burden of fabrication upward, forces coordinated deception to leave larger and more detectable footprints, and gives the analyst a set of verification questions whose answers are less susceptible to being overridden by the next release cycle of a generation model. When the content itself can no longer be trusted to carry its own verification, the network around the content is where the evidence lives.