Module 2, Episode 1: The Intelligence Cycle and Its Discontents


The Diagram That Survives Its Own Autopsy

You probably believe the following: the intelligence cycle is a useful teaching tool that simplifies a more chaotic reality, its limitations are well-documented, and with enough reform effort—better collection management, faster dissemination, improved analyst-policymaker relationships—it could approximate how intelligence works. That belief is wrong in a specific and consequential way. Wrong because you've diagnosed the failure incorrectly, which means you've also misidentified what AI disrupts when it enters the process.

In the modern era, almost all intelligence professionals study the Intelligence Cycle as a kind of gospel of how intelligence functions. Yet it is not a particularly good model, since the cyclical pattern does not describe what really happens. That assessment comes from Arthur Hulnick, a former CIA officer who spent his post-agency career at Boston University systematically documenting the gap between the cycle's depiction and actual practice. He spent decades in an organization built around the cycle, trained on it repeatedly, and emerged to write its most thorough critique. His 2006 paper in Intelligence and National Security is the standard reference for what's broken about the model. His critique is accurate on every empirical point. But the conclusion most readers draw from it—that the cycle should be replaced with something more realistic—misses what the cycle is doing.

The standard diagram shows five stages proceeding in order: planning and direction, collection, processing and exploitation, analysis and production, dissemination, and back again. Policymakers identify requirements. Collection managers task sources. Analysts synthesize product. The policymaker reads it. Feedback loops. The cycle restarts. Clean. Sequential. Legible. In practice, policy officials rarely give collection guidance. Collection and analysis, which are supposed to work in tandem, work more properly in parallel. And the idea that decision-makers wait for the delivery of intelligence before making policy decisions is equally wrong.

None of this is news to working analysts. The gap between diagram and practice is so well-documented, so universally acknowledged inside intelligence organizations, that it functions almost as a running institutional joke. What's not well-understood is why the diagram persists despite that universal acknowledgment. Why sixty-plus years of documented failure hasn't produced a replacement. Why every reform effort accepts the cycle as its starting point. The answer to those questions is the subject of this episode, and it will change how you think about inserting AI into the intelligence function.


Where the Bottlenecks Form

To understand what the cycle obscures, you have to trace where work accumulates and stalls. The bottlenecks don't distribute evenly across the five stages. They cluster in specific places, and those clusters reveal organizational incentives that no reform has successfully dislodged.

The first major accumulation point is collection management. The cycle diagram implies a clean handoff: requirements flow down, collection tasks flow out, intelligence flows back. In practice, collection management operates as a contested allocation problem under permanent resource scarcity, with multiple competing consumers and fundamentally incompatible timelines. The Defense Intelligence Agency (DIA) offers a Basic Collection Management Course, a two-week classroom experience covering collection management tasks, roles, and relationships within the joint intelligence process. The existence of that formal training infrastructure tells you something: the gap between analytic demand and collection capacity is persistent enough to require its own professional discipline, its own career track, its own certification programs. This isn't a coordination failure that a better diagram would fix. It's a structural condition.

Analysts writing requirements into collection management systems are performing an act of institutional optimism. They know, with reasonable probability, that the most time-sensitive requirements will be deprioritized by collection managers serving theater commanders whose needs outrank strategic analysis. They know that signals intelligence (SIGINT) collection timelines operate in a different bureaucratic universe than human intelligence (HUMINT) development cycles, which operate in a different universe from imagery scheduling. They write the requirements anyway, because writing the requirement is the only move available to them, and because having a documented requirement is how they later explain gaps in their assessments. The requirement becomes evidence not of successful direction but of institutional effort.

The second major bottleneck is the analytic backlog—more visible and more misunderstood than collection management friction. The image of intelligence analysis tends toward the finished product: the National Intelligence Estimate (NIE), the President's Daily Brief (PDB) item, the current intelligence brief. What that image doesn't capture is the mass of processed and unprocessed collection sitting between raw reporting and finished assessment. Signals intercepts requiring language processing. Imagery requiring exploitation. HUMINT reports requiring validation against existing source assessments and operational reporting. The volume of collected material has increased faster than analytic capacity at every point in the modern history of intelligence—and that asymmetry is not a temporary condition pending a hiring surge. It is the structural state of an intelligence apparatus in an era of abundant collection capability.

The backlog has a behavioral consequence the cycle diagram can't represent: analysts routinely make assessments about things they know they don't fully understand, on timelines that preclude anything resembling systematic review of available reporting. They do this not because they're lazy or reckless but because the alternative—waiting for full coverage before producing—would mean producing nothing at all. The cycle's "analysis and production" stage implies a transformation from complete, processed inputs to finished judgment. What actually happens is an analyst making the best call they can on whatever fraction of available reporting they've had time to absorb, under the implicit pressure of a customer who needed the answer yesterday.

The third bottleneck is what practitioners call the last-mile problem, and it's the one least amenable to organizational fixes. Finished intelligence products are produced in volume. Policymakers have limited bandwidth, specific preoccupations, and established mental models of the situations they're managing. The gap between what analysts write and what decision-makers absorb is enormous, persistent, and almost entirely invisible to the analytic workforce. The Senior Executive Intelligence Brief lands. The daily threat assessment circulates. Whether the key judgment on page three changed anyone's thinking is essentially unknowable to the people who wrote it, because the feedback mechanisms that would tell them—honest, specific, operational feedback from consumers—almost never operate as the cycle diagram suggests they should.

The CIA's own institutional research describes tension over policymaker criticism of intelligence performance on hot-button issues as normal. What that sanitized formulation conceals is that "normal tension" means analysts frequently receive either no feedback or feedback shaped by the policymaker's policy preferences rather than by genuine engagement with the analytic content. The last-mile problem isn't that finished intelligence doesn't reach decision-makers. It's that the feedback loop the cycle depends on—policymakers consuming intelligence, evaluating it, and returning calibrated requirements—operates intermittently at best, and often not at all.

These three bottlenecks—collection management friction, analytic backlogs, last-mile absorption failure—don't exist because of correctable management failures. They exist because the incentive structures inside intelligence organizations, and between those organizations and their consumers, reliably produce these outcomes under any organizational design. No reform since the Church Committee has touched those incentive structures. The 2004 Intelligence Reform and Terrorism Prevention Act reorganized authorities and created the Office of the Director of National Intelligence (ODNI). It did not change the fact that analysts are evaluated on production volume, not on downstream decision quality. It did not change the fact that collection managers serve multiple masters with competing priorities. It did not change the fact that policymakers who ignore intelligence face no institutional consequence for doing so.


The Customer's Preferred Conclusion

The cycle's "planning and direction" stage is where the model is most idealistic and most dangerous. The diagram implies that direction flows rationally from policymaker requirements to collection management to analytic tasking—a clean transmission of informational need. What happens in the most consequential cases is the reverse: the policymaker's preferred conclusion flows upstream, deforming both what gets collected and how it gets analyzed. This isn't a matter of individual bad actors or deficient tradecraft. It's a structural feature of the producer-consumer relationship under political pressure.

In the modern era, policy officials want intelligence to support policy rather than to inform it. Hulnick wrote that line in 2006, but the pattern he identified had been visible since at least Vietnam. The history of the intelligence cycle under conditions of high political salience is largely a history of the direction stage running backwards—customers transmitting preferred conclusions that shape what gets collected, what gets analyzed, and what gets expressed as uncertainty versus confidence.

The Vietnam case runs from the mid-1960s through the early 1970s and is among the best-documented examples of systematic policymaker push. The order of battle dispute between CIA analysts and the Military Assistance Command, Vietnam (MACV) is the clearest instance: military commanders consistently pushed for lower enemy troop strength estimates because higher estimates would have undermined the official narrative of progress. Sam Adams, the CIA analyst who documented the discrepancy most aggressively, found his analysis suppressed not through any single act of political interference but through sustained institutional pressure that shaped what could be said, how confidently, and in what venue. The problem wasn't that individual analysts lacked integrity. The organizational relationship between MACV, the White House, and the intelligence community had structured honesty out of the product. The cycle's direction stage, which should have transmitted genuine informational requirements, instead transmitted political requirements dressed as informational ones.

Iraq is the more recent and more thoroughly documented case. The Iraq WMD intelligence failure reveals three interrelated analytical problems. Analysts failed to place their WMD assessment in strategic and political context and to understand the Iraqi leadership's motivations and intentions. They assumed Iraq was hiding weapons and therefore pursued only one working hypothesis. And they failed to convey explicitly to policymakers the ambiguity of their evidence and the distance of their conclusions from analytical certainty. Each of those failures traces upstream to the direction stage. The single working hypothesis wasn't a lapse of tradecraft—it was what the customer wanted. Expressing ambiguity more explicitly would have required resisting institutional pressure to deliver the confident assessment the policy process demanded. The analytic conclusions were shaped before the analysis was fully conducted, because the direction coming from policymakers wasn't "tell us what you find" but closer to "find support for what we've already decided."

The mechanism is worth understanding precisely, because it operates without requiring anyone to explicitly order an analyst to lie or distort. It operates through a subtler channel: the feedback signal analysts receive about which products get read, which get praised, which get cited in policymaker speeches, which get ignored. Analysts learn quickly what the customer wants. They learn which formulations get traction and which get questioned. They learn that expressing high confidence on the favored hypothesis moves product forward while expressing doubt generates requests for revision. No direct instruction required. The institutional incentive structure does the work.

Analysts were trapped by a mindset—assuming Iraq possessed WMD and therefore pursuing only one working hypothesis. That trap wasn't set by any individual. It was set by a sustained pattern of customer feedback that had, over the preceding months of policy deliberation, made the alternative hypothesis—that Iraq didn't have the weapons—effectively unthinkable inside the production process. The cycle's direction stage was running exactly as the diagram shows. It just happened to be transmitting the wrong signal from the wrong direction.

The structural argument here is not that policymaker push only happens in cases of bad faith or political manipulation. It happens any time a policymaker holds a strong prior belief about the situation being analyzed, because strong prior beliefs generate feedback patterns that shape what analysts produce, which shapes what collection gets tasked, which shapes the evidentiary base the analysis rests on. The cycle diagram has no mechanism for representing this feedback. It depicts direction as flowing cleanly from requirement to collection. It cannot show you the policymaker's preferred conclusion infiltrating every stage from the top down.


Why the Cycle Survives: The Accountability Frame

Here is the thesis this episode is building toward, and it requires abandoning a comfortable assumption: the intelligence cycle survives not because organizations fail to recognize its descriptive inadequacy, but because it performs a function that has nothing to do with describing how intelligence works.

The cycle is an accountability frame. It makes legible who was supposed to do what, and when, and in what sequence. When something goes wrong—when an attack isn't warned against, when an assessment proves catastrophically wrong, when collection fails to answer a critical question—the cycle provides the vocabulary for attributing responsibility. The direction stage was deficient. The collection was inadequate. The analysis failed to challenge its assumptions. The dissemination didn't reach the right people in time. Each stage is a potential site of accountability assignment, and the assignment can be made in terms that are institutionally legible, legally defensible, and politically manageable.

This is not a cynical observation. It is an observation about how large bureaucratic organizations under democratic oversight manage the problem of distributed responsibility for complex failures. Intelligence failures involve hundreds of decisions, made by dozens of people across multiple organizations, under conditions of genuine uncertainty. The intelligence cycle provides a shared map of the process that allows post-hoc accountability to be assigned without the process collapsing into an incomprehensible tangle of individual choices. An investigation can find that "collection management failed to task imagery assets adequately." That finding is actionable. It names a stage, points to a function, implies a person or unit responsible. Without the cycle as shared referent, you don't have that.

Hulnick identifies the cycle's persistence as something close to institutional inertia—organizations continue teaching a model they know is wrong because it's established. That explanation is insufficient. Organizations abandon established models when they find better ones. The cycle persists specifically because no alternative serves the accountability function as well. A more realistic model of intelligence—depicting parallel processes, feedback loops operating in non-sequential ways, collection and analysis proceeding simultaneously, policymaker preferences shaping the analytic product throughout—would be descriptively accurate. It would also be an accountability nightmare. If everything happens in parallel and the process is non-linear, where do you assign responsibility when it goes wrong?

As Hulnick himself wrote: "When it came time to start writing about intelligence, a practice I began in my later years at the CIA, I realized that there were serious problems with the intelligence cycle. It is really not a very good description of the ways in which the intelligence process works." What's missing from that critique is the institutional question his former employer might have answered differently: not "does this describe the process accurately?" but "what happens to the institution's ability to manage oversight, congressional reporting, and post-failure investigation without it?"

The Hulnick critique is descriptively correct and normatively incomplete. Correct: the cycle doesn't describe how intelligence works. Incomplete: it treats that gap as a bug rather than asking what function the persistence of the diagram is performing. The critique, having identified that the emperor has no clothes, concludes the emperor is foolish. The less comfortable conclusion is that the emperor has reasons.

Every reform attempt since the 1980s has proposed modifications to the cycle—add stages, create feedback loops, represent parallel processes—while preserving the cycle as the underlying frame. That is not evidence of intellectual stubbornness. It is evidence that the accountability function is non-negotiable. You can redesign the workflow. You cannot, within a democratically accountable intelligence apparatus, abandon the shared vocabulary through which responsibility for failure is assigned. The cycle is that vocabulary.


What AI Disrupts

Now the question becomes precise. If the cycle's primary function is accountability legibility—not workflow description—then AI disrupts the cycle not by changing how intelligence work gets done, but by dissolving the conditions under which accountability can be assigned at all.

The operational implications are already playing out. The DIA has deployed "ChatDIA," its first large language model running on the top-secret Joint Worldwide Intelligence Communication System (JWICS) network, and the tool has already "saved hundreds of hours." The Digital Modernization Accelerator was created on March 1 as the permanent incarnation of the ad hoc Task Force Sabre, institutionalizing AI deployment across the agency and its combatant command partners. DIA's next goal is implementing agentic AI—software that performs sequenced tasks autonomously under human direction rather than just responding to queries: "Our intention is to take the applications we're building, tie them together and build agents… I would say that we're moving very rapidly towards the deployment of agents within the classified fabric."

Agentic AI in the intelligence context means an agent that can triage collection, draft assessment language, flag relevant reporting, and surface conclusions before an analyst has read the underlying material. The cycle's accountability vocabulary assumes human authorship at each stage. The "analysis and production" stage assumes an analyst produced the assessment—that a named, credentialed professional exercised judgment, applied tradecraft, and took institutional responsibility for the key judgments. When an AI system drafts the assessment and the analyst reviews and approves it, that assumption breaks down in ways the existing vocabulary cannot handle.

In the Maven Smart System, a Department of Defense AI program that integrates machine learning into military intelligence workflows, as deployed against Iran, Claude, an AI assistant developed by Anthropic, functions as the reasoning layer between raw intelligence and human decision-makers: embedded within Maven, it helps analysts sort through incoming data, summarize assessments, and surface recommendations. According to sources with knowledge of the integration, Claude does not directly issue targeting commands. That distinction—between sorting intelligence and providing targeting advice—is the accountability boundary the entire architecture is designed to preserve. It is also a boundary the system's speed makes increasingly formal rather than substantive.

For U.S. military officials, Maven's speed in selecting and reselecting targets represents a distinct combat advantage, allowing forces to disable Iranian combat capabilities swiftly and continuously, preventing their reconstitution. But speed at scale breaks the cycle's accountability function at precisely the point where it matters most. Maven was processing the entirety of the Iran target list, generating over 1,000 targets per day. At roughly one target every 86 seconds, the question becomes unavoidable: if an AI system misclassified a location as a military target, could anyone have caught the error before it entered the strike process?
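The arithmetic behind that 86-second figure is simple enough to work out directly, and doing so quantifies how little review time exists per target. The following is a back-of-the-envelope sketch, assuming only the 1,000-targets-per-day figure reported above; the reviewer counts and shift lengths are invented for illustration, not reported staffing.

```python
# Back-of-the-envelope review-time budget for an AI-driven target pipeline.
# The 1,000-targets/day figure comes from the reporting above; reviewer counts
# and shift lengths below are illustrative assumptions only.

SECONDS_PER_DAY = 24 * 60 * 60

def seconds_per_target(targets_per_day: int) -> float:
    """Average interval between newly generated targets."""
    return SECONDS_PER_DAY / targets_per_day

def review_minutes_per_target(targets_per_day: int,
                              reviewers_on_shift: int,
                              shift_hours: float) -> float:
    """Minutes of human attention available per target, assuming reviewers do
    nothing else and the workload spreads evenly across them."""
    return (reviewers_on_shift * shift_hours * 60) / targets_per_day

if __name__ == "__main__":
    print(f"Interval between targets: {seconds_per_target(1000):.1f} s")  # ~86.4 s
    for reviewers in (5, 20, 50):  # hypothetical staffing levels
        minutes = review_minutes_per_target(1000, reviewers, shift_hours=12)
        print(f"{reviewers:>3} reviewers, 12-hour shift -> {minutes:.1f} min per target")
```

Even under the most generous staffing assumption in that sketch, the budget is roughly half an hour of undivided attention per target; under leaner assumptions it is a few minutes. That is the quantitative sense in which the human-review boundary becomes formal rather than substantive.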

The accountability problem isn't hypothetical. When the Pentagon briefed Congress on AI use in targeting, the relevant officials described Claude as helping analysts "sort through intelligence" while insisting it "does not directly provide targeting advice." That distinction is exactly the kind of accountability boundary the cycle's language was designed to preserve. The analysis stage belongs to the analyst. The AI assists. The human authorizes. The record reflects human judgment at each node.

"Sorting through intelligence" is not a neutral pre-analytic function. What gets surfaced, what gets highlighted, what gets ranked as high-confidence versus low-confidence, what gets excluded from the synthesis—these are analytic decisions. When an AI system makes them at a rate that precludes human review, the analyst's authorization becomes a signature on a document they did not write, approving a judgment they did not form, based on reasoning they cannot fully reconstruct. The cycle says "analysis and production." The cycle says the analyst is accountable for the assessment. The epistemic situation is that attribution of judgment has become ambiguous in ways the cycle's accountability vocabulary cannot resolve.

DIA uses a "quality assurance framework" to ensure any AI it uses is "explainable" and meets the agency's tradecraft requirements. The word "explainable" is doing enormous institutional work in that sentence. Explainability is the AI accountability architecture's answer to the cycle's accountability frame: if the AI can generate a chain of reasoning for its outputs, the human who approved the output can be held accountable for not catching an error in that reasoning. The institutional logic is intact. The practical reality is that reviewing an AI's reasoning chain for a complex intelligence assessment, under the time pressure that makes AI assistance attractive in the first place, is a different cognitive task than producing the assessment from scratch—and one that current research on human-AI collaboration suggests humans perform considerably worse than they believe they do.

The accountability frame's deeper problem with AI isn't the specific tools; it's the category dissolution they represent. The cycle assigns accountability by function: the collection manager is responsible for collection management decisions; the analyst is responsible for the assessment; the dissemination officer is responsible for ensuring the product reached the right people. AI systems don't map cleanly onto those functional categories. ChatDIA, operating across classified networks, performs something that is simultaneously collection triage, analytic synthesis, and information management. It doesn't belong to the "collection" stage or the "analysis" stage—it operates across both, under human oversight that is real in principle and intermittent in practice.

The speed of AI target selection worries many observers, given the risk that sites will be chosen for attack without adequate human oversight. That worry is well founded, but it understates the problem. That concern is about oversight failure—humans not reviewing what AI selects. The deeper problem is that even when humans do review what AI selects, the institutional vocabulary for assigning accountability for errors has become ambiguous in ways the cycle was specifically designed to prevent. Was it an analytic failure? A collection management failure? A failure of the AI system—in which case, who is responsible for deploying an AI system with those failure modes? A review failure by the human analyst who approved the output? The cycle gave you clean answers to those questions.

AI gives you fog.


The Decision Analysts and Managers Must Now Make

The analyst who has followed this argument to its conclusion is equipped for a specific judgment that the analyst who treats the cycle as a flawed workflow description is not: where to insert AI and where to preserve unambiguous human authorship of the intelligence record.

This is a practical management decision that analysts and their supervisors are making right now, often without recognizing its full implications. The DIA is deploying ChatDIA and moving toward agentic AI. The agency has started sending out small "mission integration teams" of three or four AI experts to combatant commands, helping them not just to use new technology but to reorganize their staff processes and workflows. Those reorganizations are being carried out now, in operational settings, by people who understand AI deployment but may not have thought deeply about what the intelligence cycle's accountability frame requires.

The practical distinction that should guide those decisions is this: AI can safely absorb functions the cycle's accountability frame treats as pre-analytic or logistical—triage, search, summarization, pattern-flagging—because errors in those functions are catchable and correctable before they enter the record. AI should not, without exceptional and documented review processes, author the key judgments that the intelligence record attributes to human analysts. The accountability difference is between "AI helped me find the relevant reporting" and "AI wrote the assessment I approved." The second is the one that breaks the cycle's core function.
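One way to keep that distinction auditable is to record it explicitly in the product's metadata. The sketch below is a hypothetical schema, not any agency's actual recordkeeping; the field names and provenance categories are assumptions introduced only to show what the record would have to capture.

```python
# Hypothetical provenance metadata for a finished-intelligence key judgment.
# Field names and categories are illustrative assumptions, not a real schema.

from dataclasses import dataclass, field
from enum import Enum, auto

class Provenance(Enum):
    HUMAN_AUTHORED = auto()             # analyst wrote the judgment
    AI_ASSISTED_TRIAGE = auto()         # AI surfaced or summarized source reporting
    AI_DRAFTED_HUMAN_APPROVED = auto()  # AI drafted the judgment; a human signed it

@dataclass
class KeyJudgment:
    text: str
    provenance: Provenance
    approving_analyst: str
    ai_tools_used: list[str] = field(default_factory=list)
    reports_read_by_human: int = 0
    reports_surfaced_by_ai: int = 0

    def authorship_is_ambiguous(self) -> bool:
        """True when the record attributes the judgment to a human but the
        drafting was done by a machine: the gap the cycle's accountability
        vocabulary cannot currently express."""
        return self.provenance is Provenance.AI_DRAFTED_HUMAN_APPROVED
```

The value of such a record is not compliance for its own sake. It is that a post-failure investigation could distinguish a review failure from an authorship failure, which is exactly the distinction the current record collapses.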

That distinction is genuinely hard to maintain under operational tempo, and the institutional pressures run entirely in the wrong direction. Speed is rewarded. Volume is rewarded. The analyst who reviews AI-generated assessments in bulk can serve more consumers faster than the analyst who constructs assessments from primary material. The incentive to let AI draft and to review rather than author is powerful and mostly invisible until something goes wrong. When it does go wrong, and the post-mortem investigation invokes the cycle's accountability vocabulary to assign responsibility, what investigators will find is a record that says "analyst X produced assessment Y" and a reality in which neither the analyst nor anyone else can fully reconstruct the judgment that underlay it.

No concept is more deeply enshrined in the intelligence literature than the intelligence cycle. Hulnick studied it as an undergraduate in Sherman Kent's book on strategic intelligence and again when he attended the Air Force Intelligence School in 1957. Sixty-nine years of continuous teaching, across every institutional reform, every technological shift, every documented failure. It survived because the accountability function it serves is indispensable to democratic oversight of secret organizations. AI doesn't threaten the workflow the cycle describes. AI threatens the accountability structure the cycle enables—and that structure is considerably harder to replace than any workflow.

The analyst or manager who understands this will ask a different question before deploying an AI drafting tool: not "does this save hours?" but "when this produces a wrong assessment, who is accountable, and can the record show why?" They will insist that the intelligence record preserve a distinction between AI-assisted collection triage and AI-authored analytic judgment. They will recognize that the fastest path to an accountability crisis isn't deploying AI—it's deploying AI in the stages of the cycle where human authorship is the only mechanism that makes oversight legible.

The cycle was never a workflow model. It was always an accountability architecture. What you do with that recognition is the work.


Module 2, Episode 1 — Intelligence Analysis in the Age of AI: Tradecraft, OSINT, and Frontier Models