Methodology · Post-Mortem Analysis Engine

Why we publish only after the breach is confirmed.

The Post-Mortem Analysis Engine correlates Cipherwake's longitudinal scan history with publicly confirmed breach disclosures. Its purpose is retrospective — never predictive in public. This page explains the correlation logic, the manual review gate, the rejection criteria, and the strict editorial line that defines what we will and will not publish.

The hard rule: Cipherwake will not publicly state that a specific named third party "is being attacked," "may have been breached," or "is at elevated risk" before the affected party or a regulator has publicly confirmed an incident. Soft-framed warnings ("FYI we noticed unusual activity at X.com") are not permitted either. This rule is not a technical limitation; it is an editorial principle that holds even when our signal is high-confidence.

What this tool measures

For every domain Cipherwake has ever scanned, we maintain a longitudinal record: overall scan scores, certificate issuance events, key-reuse patterns, observed subdomains, and supported cipher suites, each timestamped per scan.

The Post-Mortem Engine does not introduce a new measurement. It answers a different question: given a confirmed breach disclosure, what does our existing scan history say about the affected domain in the days, weeks, and months prior?
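
To make the record concrete, here is a minimal sketch of what a per-domain history could look like, assuming per-scan snapshots of the signals listed above. The type and field names are ours for exposition, not Cipherwake's internal schema.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative shape of a per-domain longitudinal record. Field names
# are hypothetical; the signals mirror those listed in this document.
@dataclass
class ScanSnapshot:
    scanned_on: date
    score: float                # overall scan score
    certs_issued: int           # cert issuance observed since the last scan
    key_fingerprints: set[str]  # for key-reuse tracking
    subdomains: set[str]        # observed subdomains
    cipher_suites: set[str]     # supported ciphers

@dataclass
class DomainHistory:
    domain: str
    snapshots: list[ScanSnapshot] = field(default_factory=list)
```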

How we monitor for confirmed breaches

A scheduled cron job pulls from public, authoritative sources of breach disclosures: SEC 8-K filings, official statements and regulatory filings from affected companies, and the Have I Been Pwned (HIBP) breach index.

We treat a breach as "confirmed" when it appears in at least one of: an SEC 8-K, an official statement from the affected company, a regulatory filing, or HIBP. Press reporting alone is not sufficient. Threat-actor claims (paste sites, leak forums) are not sufficient.
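
As an illustration of the polling half, the sketch below watches HIBP's public breach index for newly added, verified entries. The /api/v3/breaches endpoint and the Domain, AddedDate, and IsVerified fields are part of the real HIBP v3 API; the watermark file, User-Agent string, and filtering policy are our assumptions, and the SEC and regulator feeds are omitted for brevity.

```python
import json
from pathlib import Path

import requests

STATE = Path("hibp_last_seen.json")  # hypothetical local watermark file

def fetch_new_breaches() -> list[dict]:
    """Return HIBP breaches added since the last poll, keeping only
    verified entries that name a domain we can correlate."""
    last_seen = (json.loads(STATE.read_text())["added"]
                 if STATE.exists() else "1970-01-01T00:00:00Z")
    resp = requests.get(
        "https://haveibeenpwned.com/api/v3/breaches",
        headers={"User-Agent": "cipherwake-postmortem/0.1"},  # HIBP rejects requests without a UA
        timeout=30,
    )
    resp.raise_for_status()
    new = [b for b in resp.json()
           if b["IsVerified"] and b["Domain"] and b["AddedDate"] > last_seen]
    if new:
        STATE.write_text(json.dumps({"added": max(b["AddedDate"] for b in new)}))
    return new
```

Because HIBP timestamps are fixed-format UTC ISO 8601 strings, the lexicographic comparison against the watermark is equivalent to a chronological one.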

How retrospective drafts are generated

For each newly confirmed breach, the engine looks up the affected domain(s) in our scan history and computes:

  1. Score trajectory in the 30 / 90 / 365 days before disclosure.
  2. Cert issuance velocity — was there a 3σ deviation above baseline in the lead-up?
  3. Key-reuse changes — did historical key reuse patterns change shortly before the disclosure?
  4. Subdomain emergence — were unusual new subdomains observed?
  5. Cipher policy regression — did supported ciphers degrade?
  6. Lead time — earliest unambiguous anomaly date vs. public disclosure date.

If any of these surface a clear, technically defensible correlation, the engine generates a draft retrospective. Drafts include the timeline, the specific signal, and a plain-language technical explanation.
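
As a sketch of how signals 2 and 6 might be computed, assume cert issuance from the scan history has been bucketed into weekly counts; the 52-week baseline window is an assumed default, not a documented parameter.

```python
import statistics
from datetime import date, timedelta

def earliest_cert_anomaly(weekly_counts: list[int],
                          series_start: date,
                          baseline_weeks: int = 52) -> date | None:
    """Find the first week whose cert-issuance count exceeds the mean
    of the preceding baseline window by more than 3 standard deviations
    (signal 2). Returns that week's start date, or None."""
    for i in range(baseline_weeks, len(weekly_counts)):
        baseline = weekly_counts[i - baseline_weeks:i]
        mu, sigma = statistics.mean(baseline), statistics.pstdev(baseline)
        if sigma > 0 and weekly_counts[i] > mu + 3 * sigma:
            return series_start + timedelta(weeks=i)
    return None

def lead_time_days(anomaly: date, disclosure: date) -> int:
    """Signal 6: distance from the earliest anomaly to public disclosure."""
    return (disclosure - anomaly).days
```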

The manual review gate

Drafts are never auto-published. Each goes through a manual review with three explicit kill criteria:

  1. Causation gap. Is there a plausible non-breach explanation for the signal — cloud migration, M&A activity, legitimate cert lifecycle ops, CT-issuer policy changes? If yes, kill.
  2. Reproducibility. Could a competent analyst independently reach the same conclusion from public data? If our claim depends on a private artifact or non-replicable inference, kill.
  3. Materiality. Is the signal lead time + magnitude actually meaningful, or are we publishing for the press cycle? "We saw 0.05σ above baseline 12 hours before disclosure" is not material; kill.

Empirically we expect most drafts to die at this gate. That is the point. The published rate is intentionally low. A single false attribution costs more credibility than fifty correct ones earn.
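
Mechanically, the gate reduces to a conjunction: any single kill criterion is sufficient to kill the draft. A minimal sketch, with field names of our own invention:

```python
from dataclasses import dataclass

@dataclass
class GateReview:
    # Reviewer verdicts on the three kill criteria. Names are
    # illustrative, not Cipherwake's internal schema.
    benign_explanation_plausible: bool   # 1. causation gap
    reproducible_from_public_data: bool  # 2. reproducibility
    material_signal: bool                # 3. materiality

def survives_gate(r: GateReview) -> bool:
    """A draft survives only if every kill criterion fails to fire."""
    return (not r.benign_explanation_plausible
            and r.reproducible_from_public_data
            and r.material_signal)
```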

How it scores

The engine does not publish a score. Each draft is annotated internally with a confidence band, and only the strongest band ("Strong") is eligible for a public retrospective (see "Permitted public surfaces" below).
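
A sketch of that annotation as an enum; "Strong" is the only band this methodology names publicly, so the lower band names here are placeholders:

```python
from enum import Enum

class ConfidenceBand(Enum):
    # "Strong" is the only band named publicly in this methodology;
    # the lower band names are placeholders, not Cipherwake's labels.
    STRONG = "strong"      # sole band eligible for a public retrospective
    MODERATE = "moderate"  # internal annotation only
    WEAK = "weak"          # internal annotation only

def publishable(band: ConfidenceBand) -> bool:
    return band is ConfidenceBand.STRONG
```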

What this tool does NOT claim

This is the section that earns the trust:

  1. We do not claim to predict breaches. The engine is retrospective; public output exists only after a confirmed disclosure.
  2. We do not claim causation. A correlated signal is correlation; benign explanations (cloud migration, M&A, legitimate cert lifecycle operations) are common.
  3. We do not claim that an observed anomaly, by itself, indicates compromise.
  4. We do not claim completeness. The absence of a retrospective says nothing about what our scan history contains.
  5. We do not claim our lead times would have been actionable warnings.
  6. We do not name, warn, or hint at specific third parties before public confirmation, in any framing, however soft.

Trust signal: The "What we don't claim" list above is longer than the "what we measure" list. That asymmetry is intentional. Anyone publishing post-mortem content faces strong pressure to overstate signal — for the press cycle, for sales decks, for follow-on coverage. We've codified the inverse pressure.

Permitted public surfaces

The Post-Mortem Engine produces three kinds of public output, in order of frequency:

  1. Long-form retrospectives at /post-mortems/<slug> — published only after a breach is publicly confirmed; only the "Strong" confidence band; only when the draft survives manual review.
  2. Aggregate Cert Anomaly Index — quarterly trend report; sector-level statistics with no individual domain naming.
  3. Pro-tier private alerts — when one of our paying customers' own domains or vendor-portfolio domains shows a signal, we alert them privately. This is not a public surface.

Limitations + edge cases

  1. Coverage. We can only correlate domains that appear in our scan history; a breach at a domain we never scanned yields no retrospective.
  2. Baseline depth. The 3σ deviation tests need enough scan history to establish a baseline; a thin record cannot support them.
  3. Disclosure timing. Public disclosure dates lag intrusion dates by unknown amounts, so "lead time" measures distance to disclosure, not to compromise.

Why this rule, written down

Real-time public claims about specific named third parties carry three categories of harm:

  1. Defamation / tortious interference. Even hedged statements that imply fault toward a named entity create legal exposure. "FYI we noticed activity" is not a safe harbor.
  2. Market manipulation. Signal about a publicly traded company before regulator-confirmed disclosure can move its stock price.
  3. False alarm at scale. A 60% precision rate sounds reasonable until you imagine the 40% — a major bank, a hospital network, a SaaS vendor — wrongly named on Twitter for a benign infrastructure event.

The post-confirmed-only rule eliminates all three. The cost is timeliness; the benefit is permanence — a published Cipherwake retrospective is something a serious analyst can cite without caveat.
